Visualizing natural language interaction for conversational in-vehicle information systems to minimize driver distraction
In this paper we investigate how natural language interfaces can be integrated into cars so that their influence on driving performance is minimized. In particular, we focus on how speech-based interaction can be supported through a visualization of the conversation. Our work is motivated by the fact that speech interfaces (such as Alexa, Siri, and Cortana) are increasingly finding their way into everyday life, and we expect such interfaces to become commonplace in vehicles in the future. Cars are a challenging environment, since speech interaction here is a secondary task that should not negatively affect the primary task, that is, driving. At the outset of our work, we identify the design space for such interfaces. We then compare different visualization concepts in a driving simulator study with 64 participants. Our results show that (1) text summaries support drivers in recalling information and enhance user experience but can also increase distraction, (2) the use of keywords minimizes cognitive load and the influence on driving performance, and (3) the use of icons increases the attractiveness of the interface.
An investigation of driver behavior on urban general road and in tunnel areas
The objective of this study is to examine experience-related differences in microscopic driving behavior as drivers performed six separate maneuvers, namely 1) driving on general urban roads, 2) approaching a tunnel portal, 3) driving through a tunnel's threshold zone, 4) driving in the tunnel's interior zone, 5) driving in the zone ahead of the tunnel exit, and 6) driving after the tunnel exit. An on-road experiment was conducted with 20 drivers in two groups: the first group was made up of newly licensed drivers, and the second group contained more experienced drivers. The study design included one between-subject factor (experience) and one within-subject factor (driving environment type). Driver behavior was measured through Mean Glance Duration, AOI Attention Ratio, Horizontal Eye Activity, Vertical Eye Activity, Percentage of Eyelid Closure, and Heart Rate Variability. With respect to the relevant physiological measures, the results show that, in general, more attention is focused on the far left-hand side of the road and the near front road when driving through tunnel areas than when driving on general roads. In addition, the physiological measurements indicate that the tunnel's dark, narrow environment causes driving anxiety, reflected in a lower heart rate variation coefficient (RRCV). Newly licensed drivers were more severely affected by the tunnel environment than the experienced drivers.
Eye Tracking Glasses
Simulator
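Among the measures listed above, Percentage of Eyelid Closure (PERCLOS) has a simple operational form: the fraction of time within a window that the eyelid is more than a given proportion closed (80% is a commonly used threshold). A minimal sketch, assuming uniformly sampled closure values in [0, 1] — the threshold and sampling scheme here are illustrative, not the paper's exact protocol:

```python
def perclos(closure_samples, threshold=0.8):
    """Fraction of samples in which eyelid closure exceeds `threshold`.

    closure_samples: uniformly sampled eyelid-closure values in [0, 1],
    where 1.0 means fully closed. Returns a value in [0, 1].
    """
    if not closure_samples:
        raise ValueError("need at least one sample")
    closed = sum(1 for c in closure_samples if c > threshold)
    return closed / len(closure_samples)

# Example: 2 of 8 samples exceed the 80% closure threshold -> PERCLOS 0.25
samples = [0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.0, 0.4]
```

Because the metric is a simple time fraction, it is robust to sampling rate as long as samples are evenly spaced over the window.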
Driver state monitoring using consumer electronic devices: Innovation report
An impaired mental or physical state, such as fatigue, a high level of workload, or distraction, can make a driver prone to errors and lead to sub-optimal driving performance. As long as a human remains in full or partial control of a vehicle, the driver's state is an important aspect of driving that cannot be neglected, given its significant impact on road safety. It is therefore beneficial for future automobiles to be fitted with a feature that detects physical and mental abnormalities in the driver's state in real time using physiological and emotional indicators. Such a feature is often referred to as a Driver State Monitoring (DSM) system. DSM is forecast to become a standard passenger car feature by 2025, and its integration is encouraged by standards authorities. Previous research has predominantly considered the use of medical-grade devices for the purpose of DSM. Instead, this research project has considered the potential use of Consumer Electronic Devices (CEDs) as part of DSM. However, the literature lacks evidence that this can be accomplished in a valid and reliable manner. Thus, the research presented in this doctorate aims to provide knowledge describing the feasibility and integration of CEDs into vehicles for the purpose of DSM, from both technological and human factors perspectives. Firstly, this research project produced a model of a hybrid DSM system. The model combines physiological and emotional sensing within CEDs, which can be used to enhance the validity and reliability of DSM in a flexible and cost-efficient manner. The model also acknowledges barriers to the introduction of hybrid DSM into the automotive market. Acceptance, one of the important adoption barriers for DSM technology, was studied, and behavioural intention to use the system was statistically appraised using the Unified Theory of Acceptance and Use of Technology (UTAUT) model.
It was found that social influence is a significant factor affecting drivers' behavioural intention to use hybrid DSM in the near future. On the other hand, it was demonstrated that there is no significant negative attitude towards the use of hybrid DSM technology due to apprehension, intimidation, or fear of making mistakes. These findings indicate the viability of DSM in the driving context. To further deepen understanding of CED-based DSM, three driving simulator user trials were conducted. Overall, supporting evidence for the adoption of CEDs in DSM was provided by utilising state-of-the-art DSM methodology while characterising the sensory capabilities of CEDs. The studies specifically aimed to (1) determine the reliability and validity of wearable CEDs for measuring human physiology while driving, (2) provide supporting evidence for employing CEDs in the physiological and emotional evaluation of common driving activities, and (3) explore the effect of cognitive and visual workload on drivers' state and driving performance during automated-to-manual control transition scenarios. All three studies demonstrated evidence that CEDs are well suited to reliably monitor drivers' state. For instance, it was demonstrated how the extent of workload can be reliably measured using heart rate variability, captured by means of CEDs in the driving context. This approach could enable cost-efficient access to drivers' state outside of driving activities. To facilitate this, a modular and cost-effective mobile DSM toolkit was designed and developed in-house. The toolkit enabled driver-state-related data collection, filtering, on-board analysis, storage, and synchronisation. It can be concluded that this EngD has successfully demonstrated that CEDs can be used for the purpose of DSM.
Eye Tracking Glasses
Simulator
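The abstract above notes that workload can be measured via heart rate variability captured by consumer devices. One standard time-domain HRV index is RMSSD, the root mean square of successive differences between RR intervals; a minimal sketch (RMSSD is a common choice here, not necessarily the specific index used in the thesis):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms).

    rr_intervals_ms: consecutive beat-to-beat intervals in milliseconds,
    e.g. as exported by a wrist-worn or chest-strap heart rate sensor.
    Lower RMSSD is commonly associated with higher workload/stress.
    """
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

For example, `rmssd([800, 810, 790, 805])` averages the squared successive differences (10, -20, 15 ms) before taking the square root.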
Effects of mental demands on situation awareness during platooning: A driving simulator study
Previous research shows that drivers of automated vehicles are likely to engage in visually demanding tasks, causing impaired situation awareness. How mental task demands affect situation awareness is less clear. In a driving simulator experiment, 33 participants completed three 40-min runs in an automated platoon, each run with a different level of mental task demands. Results showed that high task demands (i.e., performing a 2-back task, a working memory task in which participants had to recall a letter, presented two letters ago) induced high self-reported mental demands (71% on the NASA Task Load Index), while participants reported low levels of self-reported task engagement (measured with the Dundee Stress State Questionnaire) in all three task conditions in comparison to the pre-task measurement. Participants’ situation awareness, as measured using a think-out-loud protocol, was affected by mental task demands, with participants being more involved with the mental task itself (i.e., to remember letters) and less likely to comment on situational features (e.g., car, looking, overtaking) when task demands increased. Furthermore, our results shed light on temporal effects, with heart rate decreasing and self-constructed mental models of automation growing in complexity, with run number. It is concluded that mental task demands reduce situation awareness, and that not only type-of-task, but also time-on-task, should be considered in Human Factors research of automated driving.
Eye Tracking Glasses
Simulator
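The 2-back task used in the study above asks participants to respond whenever the current letter matches the one presented two positions earlier. Its scoring logic can be sketched in a few lines — the hit/miss/false-alarm breakdown below is a standard way to score n-back responses, not necessarily the paper's exact analysis:

```python
def two_back_targets(letters):
    """Indices at which the presented letter matches the one two earlier."""
    return [i for i in range(2, len(letters)) if letters[i] == letters[i - 2]]

def score_responses(letters, responses):
    """Compare response indices (where the participant pressed) against the
    true 2-back targets; returns (hits, misses, false_alarms)."""
    targets = set(two_back_targets(letters))
    pressed = set(responses)
    hits = len(targets & pressed)
    misses = len(targets - pressed)
    false_alarms = len(pressed - targets)
    return hits, misses, false_alarms
```

For the sequence A B A B C A C, positions 2, 3, and 6 are targets; a participant pressing at positions 2, 6, and 4 scores two hits, one miss, and one false alarm.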
Effects of searching for street parking on driver behaviour, physiology, and visual attention allocation: An on-road study
On-street parking is a major aspect of the urban street system, and entails important costs to drivers, in terms of time, inconvenience and energy to find a parking space. The efforts needed to park on-street can create frustrations and stress among drivers, which can contribute to unsafe driving behavior and ultimately affect road safety. Additionally, the search for available spaces creates disturbances to traffic, delays to other vehicles, and increased pollution due to extra fuel consumption. In this study, the effects of searching for on-street parking on driver behavior, physiology, and visual attention allocation were investigated using an on-road experimental approach. A total of 32 drivers participated in the study, during which their driving behavior, physiological responses, and visual attention were monitored while they searched for parking on urban streets in Toronto, Canada. Results indicated that searching for on-street parking led to significant changes in drivers' behavior, including reduced speeds and abrupt stops, as well as increases in physiological stress markers and visual attention diversion. Understanding these effects is crucial for urban traffic management and designing policies to mitigate the negative impacts associated with on-street parking search.
Eye Tracking Glasses
Software
Ensuring the take-over readiness of the driver based on the gaze behavior in conditionally automated driving scenarios
Conditional automation is the next step towards the fully automated vehicle. Under prespecified conditions, an automated driving function can take over the driving task and the responsibility for the vehicle, thus enabling the driver to perform secondary tasks. However, performing secondary tasks and the resulting reduced attention towards the road may lead to critical take-over situations. In such situations, the automated driving function reaches its limits, forcing the driver to take over responsibility and control of the vehicle again. The driver thus represents the fallback level for the conditionally automated system. At this point the question arises as to how it can be ensured that the driver can take over adequately and in time without restricting the automated driving system or the driver's new freedom. To answer this question, this work proposes a novel prototype of an advanced driver assistance system that automatically classifies the driver's take-over readiness, keeping the driver "in the loop". The results show the feasibility of such a classification of take-over readiness even in the highly dynamic vehicle environment using a machine learning approach. In a driving simulator study, it was verified that far more than half of the drivers performing a low-quality take-over would have been warned shortly before the actual take-over, whereas nearly 90% of the drivers performing a high-quality take-over would not have been interrupted by the driver assistance system. The classification of the driver's take-over readiness is performed by means of machine learning algorithms, with the underlying features mainly based on the driver's head and eye movement behavior. It is shown how the secondary tasks currently being performed, as well as glances on the road, can be derived from these measured signals.
Therefore, novel, online-capable approaches for driver-activity recognition and Eyes-on-Road detection are introduced, evaluated, and compared with each other, based on data from both a simulator study and a real-driving study. These approaches address several shortcomings of current state-of-the-art methods: (i) only a coarse separation of driver activities is possible, (ii) costly and time-consuming calibrations are necessary, and (iii) there is no adaptation to conditionally automated driving scenarios.
Eye Tracking Glasses
Software
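The entry above classifies take-over readiness from head and eye movement features. A toy sketch of the idea — a logistic score over two gaze features — is shown below; the feature choice, weights, and threshold are made-up placeholders for illustration, not the paper's fitted model:

```python
import math

def take_over_readiness(eyes_on_road_ratio, mean_off_road_glance_s):
    """Illustrative readiness score in [0, 1] from two gaze features:
    the fraction of time spent looking at the road, and the mean duration
    (seconds) of off-road glances. Weights are hypothetical, chosen so
    that more road attention raises the score and long off-road glances
    lower it."""
    z = 4.0 * eyes_on_road_ratio - 1.5 * mean_off_road_glance_s - 0.5
    return 1.0 / (1.0 + math.exp(-z))

def classify(eyes_on_road_ratio, mean_off_road_glance_s, threshold=0.5):
    """Warn the driver when the readiness score falls below the threshold."""
    score = take_over_readiness(eyes_on_road_ratio, mean_off_road_glance_s)
    return "ready" if score >= threshold else "warn"
```

A driver who watches the road 90% of the time with short off-road glances is classified as ready, while one absorbed in a long secondary-task glance triggers a warning; in the actual work this decision boundary is learned from labeled take-over data rather than hand-set.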
How to warn drivers in various safety-critical situations – Different strategies, different reactions
Technological advances allow supporting drivers on a multitude of occasions, ranging from comfort enhancement to collision avoidance, for example through driver warnings, which are especially crucial for traffic safety. This psychological driving simulator experiment investigated how to warn drivers visually in order to prevent accidents in various safety-critical situations. Collision frequencies, driving behavior, and subjective evaluations of situation criticality, warning understandability, and helpfulness were measured for sixty drivers in two trials of eight scenarios each (within-subjects factors). The warning type in the head-up display (HUD) varied (between-subjects) in its strategy (attention-/reaction-oriented) and specificity (generic/specific) over four warning groups and a control group without a warning. The results show that the scenarios differed in their situation criticality and drivers adapted their reactions accordingly, which underlines the importance of testing driver assistance systems in diverse scenarios. Besides some learning effects over the trials, all warned drivers showed faster and stronger brake reactions. Some warning concepts were understood better than others, but all were accepted. Generic warnings were effective, yet the warning strategy should adapt to situation requirements and/or driver behavior. A stop symbol as a reaction-oriented generic warning is recommended for diverse kinds of use cases, leading to fast and strong reactions. However, for rather moderate driver reactions, an attention-oriented generic approach with a caution symbol might be more suitable. Further research should investigate multi-stage warnings with adaptive strategies for application to various situations, including other modalities and false alarms.
Introduction matters: Manipulating trust in automation and reliance in automated driving
Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. It therefore constitutes an essential research area for the introduction of automated vehicles to road traffic. In this study, we investigated the influence of trust-promoting (Trust promoted group) and trust-lowering (Trust lowered group) introductory information on reported trust, reliance behavior, and take-over performance. Forty participants encountered three situations in a 17-min highway drive in a conditionally automated vehicle (SAE Level 3). Situation 1 and Situation 3 were non-critical situations where a take-over was optional. Situation 2 represented a critical situation where a take-over was necessary to avoid a collision. A non-driving-related task (NDRT) was presented between the situations to record the allocation of visual attention. Participants reporting a higher trust level spent less time looking at the road or instrument cluster and more time looking at the NDRT. The manipulation of introductory information resulted in medium differences in reported trust and influenced participants' reliance behavior. Participants of the Trust promoted group looked less at the road or instrument cluster and more at the NDRT. The odds that participants of the Trust promoted group would overrule the automated driving system in the non-critical situations were 3.65 times (Situation 1) to 5 times (Situation 3) higher. In Situation 2, the Trust promoted group's mean take-over time was 1154 ms longer and its mean minimum time-to-collision was 933 ms shorter. Six participants from the Trust promoted group, compared with none from the Trust lowered group, collided with the obstacle. The results demonstrate that the individual trust level influences how much drivers monitor the environment while performing an NDRT. Introductory information influences this trust level, reliance on an automated driving system, and whether a critical take-over situation can be successfully resolved.
Eye Tracking Glasses
Simulator
Predicting Drivers' Eyes-Off-Road Duration in Different Driving Scenarios
Drivers consecutively direct their gaze to various areas to select relevant information from the traffic environment, and crash risk increases with off-road glance duration differently in different driving scenarios. This paper proposes an approach to identify the current driving scenario and predict the driver's eyes-off-road duration using Hidden Markov Models (HMMs). A moving-base driving simulator study was conducted with 26 participants driving in three scenarios (urban, rural, and motorway). Three fixed occlusion durations (0 s, 1 s, and 2 s) were applied to quantify eyes-off-road durations. Participants could initiate each occlusion for the given duration by pressing a microswitch worn on a finger; they were instructed to occlude their vision as often as possible while still driving safely. Drivers' visual behavior and occlusion behavior were captured and analyzed based on manual frame-by-frame coding. Visual behavior, in terms of time series of glance duration and glance location, was used as input to train the HMMs. The results showed that the current driving scenario could be identified well using glance location sequences, with accuracy of up to 89.3%; the motorway scenario was distinguished most easily, with over 90% accuracy. Moreover, HMM-based algorithms fed with both glance duration and glance location sequences achieved the highest accuracy, 92.7%, in predicting drivers' eyes-off-road durations, with higher accuracy for longer durations. This indicates that time series of glance allocations can be used to predict driving behavior and identify the driving environment. The models developed in this study could contribute to the development of scenario-sensitive visual inattention pre-warning systems.
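The scenario-identification step above can be illustrated with a minimal sketch: score a glance-location sequence under one HMM per scenario (via the forward algorithm) and pick the most likely scenario. The toy single-state models and their emission probabilities below are invented for illustration; the paper's HMMs were trained on real glance data:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM.
    start[s], trans[s][t], and emit[s][o] are probabilities; observations
    unseen in training get a tiny floor probability instead of zero."""
    alpha = {s: start[s] * emit[s].get(obs[0], 1e-12) for s in start}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha)
                    * emit[t].get(o, 1e-12)
                 for t in start}
    return math.log(sum(alpha.values()))

def identify_scenario(obs, models):
    """Pick the scenario whose HMM assigns the glance sequence the
    highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Hypothetical single-state models over glance locations: on a motorway,
# glances concentrate on the road and mirrors; in urban driving they spread
# over pedestrians and signs as well.
models = {
    "motorway": ({"s": 1.0}, {"s": {"s": 1.0}},
                 {"s": {"road": 0.8, "mirror": 0.2}}),
    "urban":    ({"s": 1.0}, {"s": {"s": 1.0}},
                 {"s": {"road": 0.4, "pedestrian": 0.3, "sign": 0.3}}),
}
```

With more states per model, the same forward recursion also captures the temporal structure of glance transitions, which is what lets glance-location sequences separate scenarios in the study above.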