Understanding and Supporting Anticipatory Driving in Automated Vehicles
He, Dengbo
University of Toronto (Canada), 2020.
Abstract: As automated vehicles (AVs) are increasingly becoming a reality on our roads, understanding the interaction between human drivers and these vehicles is critical. Anticipatory driving refers to the human driver's ability to predict and react to road events before they occur, a skill that enhances safety and efficiency. This dissertation explores methods to support anticipatory driving behaviors in AVs through improved human-vehicle interaction. The research identifies key anticipatory behaviors, develops support systems for these behaviors, and evaluates their effectiveness. Findings suggest that enhancing AV interfaces and feedback mechanisms can significantly improve human-vehicle collaboration and overall driving performance.
Understanding the Cognitive and Psychological Impacts of Emerging Technologies on Driver Decision-Making Using Physiological Data
Emerging technologies, such as advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs), are transforming the driving experience. These technologies can influence driver cognition and decision-making processes in various ways. This study aims to understand the cognitive and psychological impacts of these emerging technologies on driver decision-making by utilizing physiological data. Through the analysis of data such as heart rate variability, skin conductance, and eye-tracking metrics, the research investigates how drivers' mental and physical states are affected during interaction with ADAS and AVs. The findings aim to provide insights into improving the design and safety of these technologies, ultimately enhancing driver comfort and performance.
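To make the kind of physiological feature extraction described here concrete, the sketch below computes RMSSD (root mean square of successive differences), a common time-domain heart rate variability metric, from a short series of RR intervals. The sample values and the function name are illustrative assumptions, not data or code from the study.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms),
    a widely used time-domain heart rate variability (HRV) metric."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) recorded while a driver interacts with an ADAS feature.
rr = [812, 798, 805, 790, 830, 845, 820, 799]
print(f"RMSSD: {rmssd(rr):.1f} ms")
```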
Eye Tracking Glasses
Software
User interface for in-vehicle systems with on-wheel finger spreading gestures and head-up displays
Interacting with an in-vehicle system through a central console is known to induce visual and biomechanical distractions, thereby delaying the danger recognition and response times of the driver and significantly increasing the risk of an accident. To address this problem, various hand gestures have been developed. Although such gestures can reduce visual demand, they are limited in number, lack passive feedback, and can be vague and imprecise, difficult to understand and remember, and culture-bound. To overcome these limitations, we developed a novel on-wheel finger spreading gestural interface combined with a head-up display (HUD) allowing the user to choose a menu displayed in the HUD with a gesture. This interface presents the audio and air conditioning functions of the central console on a HUD and enables their control using a specific number of fingers while keeping both hands on the steering wheel. We compared the effectiveness of the newly proposed hybrid interface against a traditional tactile interface for a central console using objective measurements and subjective evaluations regarding both the vehicle and driver behaviour. A total of 32 subjects were recruited to conduct experiments on a driving simulator equipped with the proposed interface under various scenarios. The results showed that the proposed interface was approximately 20% faster in emergency response than the traditional interface, whereas its performance in maintaining vehicle speed and lane was not significantly different from that of the traditional one.
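A minimal sketch of the interaction concept described above, assuming that a gesture recognizer reports the number of spread fingers: the count simply selects an item from a HUD menu while both hands remain on the wheel. The menu labels and function names are hypothetical and not taken from the paper.

```python
# Hypothetical mapping from a detected finger-spread count to HUD menu items.
HUD_MENU = {
    1: "Audio: next track",
    2: "Audio: volume",
    3: "Climate: temperature",
    4: "Climate: fan speed",
}

def select_hud_item(finger_count: int) -> str:
    """Return the HUD menu item selected by an on-wheel finger-spreading gesture."""
    return HUD_MENU.get(finger_count, "No action (unrecognized gesture)")

for fingers in (2, 5):
    print(fingers, "->", select_hud_item(fingers))
```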
Eye Tracking Glasses
Simulator
A user experience-based toolset for automotive human-machine interface technology development
The development of new automotive Human-Machine Interface (HMI) technologies must consider the competing and often conflicting demands of commercial value, User Experience (UX) and safety. Technology innovation offers manufacturers the opportunity to gain commercial advantage in a competitive and crowded marketplace, leading to an increase in the features and functionality available to the driver. User response to technology influences the perception of the brand as a whole, so it is important that in-vehicle systems provide a high-quality user experience. However, introducing new technologies into the car can also increase accident risk. The demands of usability and UX must therefore be balanced against the requirement for driver safety. Adopting a technology-focused business strategy carries a degree of risk, as most innovations fail before they reach the market. Obtaining clear and relevant information on the UX and safety of new technologies early in their development can help to inform and support robust product development (PD) decision making, improving product outcomes. In order to achieve this, manufacturers need processes and tools to evaluate new technologies, providing customer-focused data to drive development. This work details the development of an Evaluation Toolset for automotive HMI technologies encompassing safety-related functional metrics and UX measures. The Toolset consists of four elements: an evaluation protocol, based on methods identified from the Human Factors, UX and Sensory Science literature; a fixed-base driving simulator providing a context-rich, configurable evaluation environment, supporting both hardware and software-based technologies; a standardised simulation scenario providing a repeatable basis for technology evaluations, allowing comparisons across multiple technologies and studies; and a technology scorecard that collates and presents evaluation data to support PD decision making processes. The Evaluation Toolset was applied in three technology evaluation case studies, conducted in conjunction with the industrial partner, Jaguar Land Rover. All three were live technology development projects, representing hardware and software concepts with different technology readiness levels. Case study 1 evaluated a software-based voice messaging system with reference to industry guidelines, confirming its performance and identifying potential UX improvements. Case study 2 compared three touchscreen technologies, identifying user preference and highlighting specific usability issues that would not have been found through analytical means. Case study 3 evaluated autostereoscopic 3D displays, assessing the effectiveness of 3D information while highlighting design considerations for long-term use and defining a design space for 3D content. Findings from the case studies, along with learning from visits to research facilities in the UK and USA, were used to validate and improve the toolset, with recommendations made for implementation into the PD workflow. The driving simulator received significant upgrades as part of a new interdisciplinary research collaboration with the Department of Psychology to support future research activity.
Findings from the case studies also directly supported their respective technology development projects; the main outcome from the research therefore is an Evaluation Toolset that has been demonstrated, through application to live technology development projects, to provide valid, relevant and detailed information on holistic evaluations of early-phase HMI technologies, thus adding value to the PD process.
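As a rough illustration of what the technology scorecard element might collate, the sketch below combines a few normalized safety and UX measures into one weighted figure. The metric names, weights, and scores are purely illustrative assumptions and do not reproduce the Toolset's actual scorecard.

```python
# Illustrative scorecard: normalized metric scores (0-1, higher is better) and weights.
metrics = {
    "total_eyes_off_road_time": {"score": 0.72, "weight": 0.35},
    "task_completion_time": {"score": 0.64, "weight": 0.25},
    "system_usability_scale": {"score": 0.81, "weight": 0.25},
    "user_preference_ranking": {"score": 0.58, "weight": 0.15},
}

def weighted_score(metrics):
    """Collate per-metric scores into a single weighted figure for PD decision support."""
    return sum(m["score"] * m["weight"] for m in metrics.values())

print(f"Overall scorecard value: {weighted_score(metrics):.2f}")
```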
Derivation of a model of safety critical transitions between driver and vehicle in automated driving
In automated driving, there is a risk that users must take over vehicle guidance despite a potential lack of involvement in the driving task. This publication presents an initial model of control distribution between users and the automated system. In this model, the elements of control distribution in automated driving are addressed together with possible and safe transitions between different driving modes. Furthermore, the approach receives an initial empirical validation: in a driving study in which participants performed both the driving task and a non-driving related task, objective driving data as well as eye-tracking parameters are used to estimate the model’s accuracy. Such an explanatory model can serve as a first approach to describing potential concepts of cooperation between users and automated vehicles. In this way, prospective road traffic concepts could be improved by preventing safety-critical transitions between the driver and the vehicle.
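One way to make "possible and safe transitions between driving modes" concrete is a small transition table that flags anything outside an allowed set. The mode names and allowed transitions below are illustrative assumptions, not the model derived in the publication.

```python
# Assumed driving modes and allowed transitions (illustrative, not the paper's model).
ALLOWED_TRANSITIONS = {
    ("automated", "manual"): "planned takeover",
    ("manual", "automated"): "activation",
    ("automated", "fallback"): "system-initiated degradation",
    ("fallback", "manual"): "emergency takeover",
}

def classify_transition(current_mode: str, next_mode: str) -> str:
    """Label a mode transition; anything outside the table is treated as safety critical."""
    label = ALLOWED_TRANSITIONS.get((current_mode, next_mode))
    return label if label else "SAFETY CRITICAL: transition not modeled"

print(classify_transition("automated", "manual"))  # planned takeover
print(classify_transition("manual", "fallback"))   # flagged as safety critical
```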
Eye Tracking Glasses
Software
Detecting driver’s fatigue, distraction and activity using a non-intrusive AI-based monitoring system
The lack of attention during the driving task is considered a major risk factor for fatal road accidents around the world. Despite the ever-growing trend toward autonomous driving, which promises greater road-safety benefits, today’s vehicles still feature only partial and conditional automation, demanding frequent driver action. Moreover, the monotony of such a scenario may induce fatigue or distraction, reducing driver awareness and impairing the driver’s ability to regain control of the vehicle. To address this challenge, we introduce a non-intrusive system that monitors the driver in terms of fatigue, distraction, and activity. The proposed system uses state-of-the-art sensors, as well as machine learning algorithms for data extraction and modeling. In the domain of fatigue supervision, we propose a feature set that considers the vehicle’s automation level. In terms of distraction assessment, the contributions concern (i) a holistic system that covers the full range of driver distraction types and (ii) a monitoring unit that predicts the driver activity causing the faulty behavior. Experiments comparing the performance of Support Vector Machines against Decision Trees indicated that the system can predict the driver’s state with an accuracy ranging from 89% to 93%.
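As a hedged sketch of the model comparison described above (Support Vector Machines versus Decision Trees on driver-state features), the snippet below cross-validates both classifiers on a synthetic feature matrix using scikit-learn. The features, labels, and hyperparameters are placeholders; the paper's actual feature set and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for driver features (e.g., blink rate, PERCLOS, gaze dispersion, head pose).
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)  # 0 = alert, 1 = fatigued

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
tree = DecisionTreeClassifier(max_depth=5, random_state=0)

for name, model in [("SVM", svm), ("Decision tree", tree)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```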
Eye Tracking Glasses
Software
Detection Response Task Evaluation for Driver Distraction Measurement for Auditory-Vocal Tasks: Experiment 2
This research evaluated the Detection Response Task (DRT) as a measure of the attentional demands of auditory-vocal in-vehicle tasks. DRT is an ISO standardized method that requires participants to respond to simple targets that occur every 3-5 s during in-vehicle task performance. DRT variants use different targets: Remote DRT (RDRT) uses visual targets; Tactile DRT (TDRT) uses vibrating targets. A single experiment evaluated the sensitivity of the two DRT variants in two test venues (driving simulator and non-driving) using auditory-vocal tasks. Participant selection criteria from the Visual-Manual NHTSA Driver Distraction Guidelines were used to recruit 192 participants; 48 were assigned to each combination of DRT variant and test venue. Identical production vehicles were used in each venue. In the simulator, participants wore a head-mounted eye tracker and performed in-vehicle tasks while driving in a car-following scenario. In the non-driving venue, occlusion testing required participants to perform the four discrete tasks while wearing occlusion goggles, which restricted viewing intermittently to simulate driving task demands. In-vehicle tasks for both venues included three discrete auditory-vocal tasks (destination entry, phone dialing, radio tuning), one discrete visual-manual task (radio tuning), and two continuous auditory-vocal digit-recall tasks representing acceptable (1-back) and unacceptable (2-back) levels of attentional load. Testing in each venue had a second part: the last procedural step for all participants was brake response time (BRT) testing in the simulator, which required participants to brake in response to both expected and unexpected lead-vehicle (LV) braking events while performing selected in-vehicle tasks. Differences observed between test venues suggest that some in-vehicle tasks are more demanding when performed intermittently in the driving simulator than when performed continuously in the non-driving venue, thus pointing to the driving simulator as the better test venue. BRT results provided some support for a connection between DRT response time (RT) and BRT; however, the experiment did not provide sufficient control of speed and headway to allow a stronger comparison. DRT results support the conclusion that the 2-back condition represents too much attentional demand and that acceptable tasks should have a lower level of attentional demand. Differences between Total Shutter Open Time (TSOT) and Total Eyes-Off-Road Time (TEORT) indicated that occlusion is not suitable for assessing auditory-vocal tasks; however, TEORT and other glance-based metrics appear suitable for use with auditory-vocal tasks. BRT testing revealed a small effect of attentional load for unexpected LV braking events but not for expected LV braking events. Mean heart rate was sensitive to differences in attentional load.
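The core DRT metrics reported above reduce to hit rate and mean response time per condition. The sketch below computes both from a hypothetical event log; the 100-2500 ms valid-response window is my reading of the ISO 17488 convention and should be verified against the standard.

```python
# Hypothetical DRT event log: stimulus onset times and response times (seconds).
stimulus_onsets = [10.0, 14.2, 18.1, 22.6, 26.9]
responses = [10.45, 14.95, 23.10, 27.35]  # no response to the stimulus at 18.1 s (a miss)

WINDOW = (0.100, 2.500)  # valid response window in seconds (assumed per ISO 17488)

hits, rts = 0, []
for onset in stimulus_onsets:
    # Find the first response falling inside the valid window after this stimulus.
    rt = next((r - onset for r in responses if WINDOW[0] <= r - onset <= WINDOW[1]), None)
    if rt is not None:
        hits += 1
        rts.append(rt)

print(f"Hit rate: {hits / len(stimulus_onsets):.2f}")
print(f"Mean RT: {1000 * sum(rts) / len(rts):.0f} ms")
```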
Eye Tracking Glasses
Simulator
Effect of Technical and Quiet Eye Training on the gaze behavior and long-term learning of volleyball serve reception in 10 to 12-year-old females
Background: The quiet eye is the final fixation or tracking gaze before movement onset; it requires concentration and attention and is an effective way of teaching interceptive tasks. Methods: In this semi-experimental study, 20 volunteer female students from a volleyball center in Shiraz District 1 (mean age = 12.10, SD = 0.718) were selected as participants between February 2017 and February 2018. After taking the pre-test, they were randomly divided into two groups of 10 (technical training and quiet eye training). The task was to receive a volleyball serve with the forearm from three receiving areas of a mini-volleyball court. To measure the accuracy of volleyball serve reception, a forearm Serve Reception Test was used on the mini-volleyball court, and Ergoneers eye tracking (EET) was used to record the visual data. After the pre-test, the participants took part in 9 training sessions (three sessions a week); the first retention test was performed 48 hours after the last training session and the second retention test one month later. Data on quiet eye duration and performance were analyzed with a 2 × 3 mixed analysis of variance (ANOVA) in SPSS at a significance level of P ≤ 0.05. Results: The mean performance of the quiet eye training group increased from 4.30 ± 1.76 in the pre-test to 11 ± 1.76 in the first retention test and 12 ± 2 in long-term retention, compared to the technical training group (P = 0.007). However, there was no significant difference between the mean quiet eye durations of the quiet eye and technical training groups (P = 0.512). Conclusions: Quiet eye training appears to have a significant effect on the long-term learning of beginners compared to technical training, but it does not significantly change the duration of beginners’ quiet eye.
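As an illustration of how quiet eye duration can be derived from eye-tracking output, the sketch below takes the last fixation on the target area that begins before movement onset, following the definition in the Background. The fixation records, area-of-interest labels, and onset time are hypothetical.

```python
# Hypothetical fixation records: (start_s, end_s, area_of_interest)
fixations = [
    (0.20, 0.55, "server"),
    (0.60, 1.10, "ball"),
    (1.15, 1.60, "ball"),
    (1.70, 1.95, "court"),
]
movement_onset_s = 1.50  # assumed time the receiver starts the forearm pass

def quiet_eye_duration(fixations, onset, target_aoi="ball"):
    """Duration of the last fixation on the target AOI that begins before movement onset."""
    candidates = [(s, e) for s, e, aoi in fixations if aoi == target_aoi and s <= onset]
    if not candidates:
        return 0.0
    start, end = candidates[-1]
    return end - start

print(f"Quiet eye duration: {quiet_eye_duration(fixations, movement_onset_s):.2f} s")
```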
Eye Tracking Glasses
Software
Ergonomics Studies on Non-Traditional In-Vehicle Displays for Reducing Information Access Costs
Beck, Donghyun
Seoul National University (Korea), Department of Industrial Engineering.
Drivers should keep their eyes forward most of the time during driving to be in full control of the vehicle and to be aware of the dynamic road scene. Thus, it is important to locate in-vehicle displays showing information required for a series of driving tasks close to the driver’s forward line-of-sight, and therefore, to reduce the eyes-off-the-road time. Automotive head-up display (HUD) systems and camera monitor systems (CMS) are promising non-traditional in-vehicle display systems that can reduce information access costs. A HUD presents various information items directly in the driver’s forward field of view and allows the driver to acquire necessary information while looking at the road ahead. A CMS consists of cameras capturing the vehicle’s side and rear views and in-vehicle electronic displays presenting the real-time visual information, allowing the driver to obtain it from inside the vehicle. Despite the potential benefits and promising applications of HUD systems and CMS, however, there are important research questions to be addressed for their ergonomics design. As for HUD systems, presenting many information items indiscriminately can cause undesirable consequences, such as information overload, visual clutter and cognitive capture. Thus, only the necessary and important information must be selected and adequately presented according to the driving situation at hand. As for CMS, the electronic displays can be placed at any position inside a vehicle, and this flexibility in display layout design may be leveraged to develop systems that facilitate the driver’s information processing and also alleviate the physical demands associated with checking side and rear views. Therefore, the following ergonomics research questions were considered: 1) 'Among various information items displayed by the existing HUD systems, which ones are important?', 2) 'How should the important HUD information items be presented according to the driving situation?', 3) 'What are the design characteristics of CMS display layouts that can facilitate driver information processing?', and 4) 'What are the design characteristics of CMS display layouts that can reduce physical demands of driving?' In an effort to address some key knowledge gaps regarding these research questions and contribute to the ergonomics design of these non-traditional in-vehicle display systems, two major studies were conducted: one on HUD information items, and the other on CMS display layouts. In the study on HUD information items, a user survey was conducted to 1) determine the perceived importance of twenty-two information items displayed by the existing commercial automotive HUD systems, and to 2) examine the contexts of use and the user-perceived design improvement points for high-priority HUD information items. A total of fifty-one drivers with significant prior HUD use experience participated. For each information item, the participants subjectively evaluated its importance, and described its contexts of use and design improvement points. The information items varied greatly in perceived importance, and current speed, speed limit, turn-by-turn navigation instructions, maintenance warning, cruise control status, and low fuel warning were of highest importance. For eleven high-priority information items, design implications and future research directions for the ergonomics design of HUD systems were derived.
In the study on CMS display layouts, a driving simulator experiment was conducted to comparatively evaluate three CMS display layouts against the traditional side-view mirror arrangement in terms of 1) driver information processing and 2) physical demands of driving. The three layouts placed two side-view displays inside the car near the conventional side-view mirrors, on the dashboard at each side of the steering wheel, and on the center fascia with the displays joined side-by-side, respectively. Twenty-two participants performed a safety-critical lane changing task with each layout design. Compared to the traditional mirror system, all three CMS display layouts facilitated information processing and reduced physical demands. Design characteristics leading to such beneficial effects were placing CMS displays close to the normal line-of-sight to reduce eye gaze travel distance and locating each CMS display on each side of the driver to maintain compatibility.
Keywords: head-up display (HUD), experienced users, importance of information items, contexts of information use, design improvement points, camera monitor system (CMS), in-vehicle side-view displays, display layout, information processing, physical demands
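The CMS comparison rests on the idea that displays closer to the normal line of sight require less eye gaze travel. As a rough sketch under assumed eye-point coordinates (not measurements from the study), the snippet below computes the visual angle between the forward line of sight and a few candidate display positions.

```python
import math

def visual_angle_deg(forward_vec, display_vec):
    """Angle (degrees) between the forward line of sight and the direction to a display."""
    dot = sum(f * d for f, d in zip(forward_vec, display_vec))
    norm = math.sqrt(sum(f * f for f in forward_vec)) * math.sqrt(sum(d * d for d in display_vec))
    return math.degrees(math.acos(dot / norm))

forward = (1.0, 0.0, 0.0)  # straight ahead from the driver's eye point

# Hypothetical display positions (x forward, y left, z up, metres from the eye point).
layouts = {
    "near side mirror (left)": (0.8, 0.9, -0.1),
    "dashboard, left of wheel": (0.9, 0.3, -0.2),
    "center fascia": (0.8, -0.4, -0.3),
}

for name, pos in layouts.items():
    print(f"{name}: {visual_angle_deg(forward, pos):.1f} deg of gaze travel")
```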
Eye Tracking Glasses
Simulator