Detecting driver’s fatigue, distraction and activity using a non-intrusive AI-based monitoring system
The lack of attention during the driving task is considered a major risk factor for fatal road accidents around the world. Despite the ever-growing trend toward autonomous driving, which promises substantial road-safety benefits, today’s vehicles still feature only partial and conditional automation, demanding frequent driver action. Moreover, the monotony of such a scenario may induce fatigue or distraction, reducing driver awareness and impairing the driver’s ability to regain control of the vehicle. To address this challenge, we introduce a non-intrusive system that monitors the driver in terms of fatigue, distraction, and activity. The proposed system leverages state-of-the-art sensors, as well as machine learning algorithms for data extraction and modeling. In the domain of fatigue supervision, we propose a feature set that considers the vehicle’s automation level. In terms of distraction assessment, the contributions concern (i) a holistic system that covers the full range of driver distraction types and (ii) a monitoring unit that predicts the driver activity causing the faulty behavior. Comparing the performance of Support Vector Machines against Decision Trees, our experiments indicate that the system can predict the driver’s state with an accuracy ranging from 89% to 93%.
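The SVM-versus-decision-tree comparison the abstract mentions can be sketched as follows. This is an illustrative toy, not the authors’ pipeline: the feature names (blink rate, gaze dispersion, steering variance), class means, and all numbers are invented for demonstration.

```python
# Hypothetical sketch: comparing an SVM and a decision tree on synthetic
# "driver state" features. Feature semantics and values are assumptions,
# not data from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
# Columns stand in for e.g. blink rate, gaze dispersion, steering variance.
attentive = rng.normal([0.3, 0.2, 0.1], 0.1, size=(n // 2, 3))  # label 0
impaired  = rng.normal([0.6, 0.5, 0.4], 0.1, size=(n // 2, 3))  # label 1
X = np.vstack([attentive, impaired])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("Decision Tree", DecisionTreeClassifier(max_depth=4))]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.2f}")
```

On such cleanly separated synthetic classes both models score near-perfectly; on real sensor data the margin between the two classifiers is what motivates the paper’s comparison.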
Eye Tracking Glasses
Software
Detection Response Task Evaluation for Driver Distraction Measurement for Auditory-Vocal Tasks: Experiment 2
This research evaluated the Detection Response Task (DRT) as a measure of the attentional demands of auditory-vocal in-vehicle tasks. DRT is an ISO standardized method that requires participants to respond to simple targets that occur every 3-5 s during in-vehicle task performance. DRT variants use different targets: Remote DRT (RDRT) uses visual targets; Tactile DRT (TDRT) uses vibrating targets. A single experiment evaluated the sensitivity of the two DRT variants in two test venues (driving simulator and non-driving) using auditory-vocal tasks. Participant selection criteria from the Visual-Manual NHTSA Driver Distraction Guidelines were used to recruit 192 participants; 48 were assigned to each combination of DRT variant and test venue. Identical production vehicles were used in each venue. In the simulator, participants wore a head-mounted eye tracker and performed in-vehicle tasks while driving in a car-following scenario. In the non-driving venue, occlusion testing required participants to perform the four discrete tasks while wearing occlusion goggles, which restricted viewing intermittently to simulate driving task demands. In-vehicle tasks for both venues included three discrete auditory-vocal tasks (destination entry, phone dialing, radio tuning), one discrete visual-manual task (radio tuning), and two continuous auditory-vocal digit-recall tasks representing acceptable (1-back) and unacceptable (2-back) levels of attentional load. Testing in each venue had a second part: as the final procedural step, all participants underwent brake response time (BRT) testing in the simulator, which required them to brake in response to both expected and unexpected lead-vehicle (LV) braking events while performing selected in-vehicle tasks.
Differences observed between test venues suggest that some in-vehicle tasks are more demanding when performed intermittently in the driving simulator than when performed continuously in the non-driving venue, thus pointing to the driving simulator as the better test venue. BRT results provided some support for a connection between DRT RT and BRT; however, the experiment did not provide sufficient control of speed and headway to allow a stronger comparison. DRT results support the conclusion that the 2-back condition represents too much attentional demand and that acceptable tasks should have a lower level of attentional demand. Differences between total shutter open time (TSOT) and total eyes-off-road time (TEORT) indicated that occlusion is not suitable for assessing auditory-vocal tasks; however, TEORT and other glance-based metrics appear suitable for use with auditory-vocal tasks. BRT testing revealed a small effect of attentional load for unexpected LV braking events but not for expected LV braking events. Mean heart rate was sensitive to differences in attentional load.
Eye Tracking Glasses
Simulator
Do Different Tests of Spatial Navigation Measure the Same Ability?
Our knowledge about the principles of human spatial navigation, their deficits in aging and disease, and the efficiency of countermeasures is still in its early phase. One factor that has hindered more rapid progress in this field of research is the bewildering variety of tests by which navigation has been assessed in the past. For example, available tests assessed participants’ landmark knowledge (Rosenbaum et al. 2004), mental-imagery abilities (Ino et al. 2002), knowledge of ego- or allocentric directions or distances (Wolbers et al. 2004), path integration abilities (Allen et al. 2004; Goeke et al. 2013), or route generation abilities (Moffat et al. 2006), but it remained unclear to what extent those tests quantify similar versus distinct underlying abilities. The aim of the present study was to establish the feasibility of a factor analytical approach to determine the underlying latent variables.
Eye Tracking Glasses
Simulator
Software
Effect of sport-vision training and mindfulness on vision perception and decision-making accuracy of basketball referees
The purpose of this study was to determine the effect of sports-vision training and mindfulness on the visual perception and decision-making accuracy of basketball referees. The participants consisted of 52 basketball referees (20 females and 32 males) who were selected using a convenience sampling method and matched based on gender and degree of judgment into four groups: sports-vision training, mindfulness training, combined (sports-vision and mindfulness training), and control. The sports-vision training and combined groups participated in an eight-week sports-vision training program, three sessions per week. The mindfulness and combined groups received mindfulness training for eight weeks, one session per week. During this period, the control group performed their daily activities. Before the intervention, as well as one day and one month after it, the accuracy of the referees’ decision-making was evaluated using a video test, and visual perception was assessed with an eye-tracking device. Data were analyzed using repeated-measures ANOVA. Findings showed that visual perception and decision-making accuracy in the sports-vision training, mindfulness training, and combined groups were significantly better than in the control group at post-test and follow-up, but no significant difference was observed between the training groups. Therefore, it is suggested that sports-vision training and mindfulness training be used to increase the visual perception and decision-making accuracy of referees.
Eye Tracking Glasses
Software
Effect of Technical and Quiet Eye Training on the gaze behavior and long-term learning of volleyball serve reception in 10- to 12-year-old females
Background: The quiet eye is the final fixation or tracking gaze before the initiation of movement; it requires concentration and attention and is an effective way of teaching interceptive tasks. Methods: In the current semi-experimental study, 20 volunteer female students from a volleyball center of Shiraz District 1 (mean age = 12.10, SD = 0.718) were selected as participants from February 2017 to February 2018. After taking the pre-test, they were randomly divided into two groups of 10 (technical training and quiet eye training). The intended task was to receive a volleyball serve with the forearm from three receiving areas of a mini-volleyball court. To measure the accuracy of volleyball serve reception, a volleyball Serve Reception Test by forearm was used on the mini-volleyball court. Ergoneers eye tracking (EET) was used to record the visual data. After the pre-test, the participants took part in 9 separate training sessions, three sessions a week; the first retention test was performed 48 hours after the last training session and the second retention test one month later. Data were analyzed by a 2 × 3 mixed analysis of variance (ANOVA) of quiet eye duration and performance, using SPSS software at a significance level of P ≤ 0.05. Results: The results showed that the mean performance of the quiet eye training group increased from 4.30 ± 1.76 in the pre-test to 11 ± 1.76 in the first retention test and 12 ± 2 in long-term retention, in comparison to the technical training group (P = 0.007). However, there was no significant difference between the mean quiet eye durations of the quiet eye and technical training groups (P = 0.512). Conclusions: It seems that quiet eye training has a significant effect on the long-term learning of beginners compared to technical training, but it does not significantly change the duration of beginners’ quiet eye compared to technical training.
Eye Tracking Glasses
Software
ElectroOculoGraphy (EOG) Eye-Tracking for Virtual Reality
Virtual reality (VR) is an immersive computer simulation of a three-dimensional environment where the user experiences visual, auditory, and sensory feedback and can interact with the surroundings through various kinds of controllers. Virtual reality is achieved through VR headsets that project images with a pair of displays and lenses placed a few centimetres from the user’s eyes in order to create the illusion of a three-dimensional world. The most widespread VR headsets are the HTC Vive, Oculus Rift, and Oculus Go. The key factor for the success of VR is immersion, the condition in which the user loses awareness of being in an environment that is not real. Immersion is achieved by virtually replicating the stimuli of the human senses. One of the biggest factors in achieving a high level of immersion is the ability to accurately simulate human vision. For this purpose, new technologies are being developed to achieve higher screen resolutions and wider fields of view. In this context, eye tracking is finding its place in virtual reality. Eye tracking is the ability to sense the position of the user’s eye-gaze point in the surroundings. In virtual reality, this means tracking, in real time, the coordinates of the eye-gaze point on the screen(s) placed in front of the user’s face. Such eye tracking can bring many advantages to virtual reality. One of them is foveated rendering, which consists of rendering the parts of the VR 3D scene that are not focused on by the user’s eyes at a lower resolution, resulting in large savings of computational power. Moreover, it becomes possible to control VR interfaces with the eyes, or to conduct more accurate research on consumers’ focus and behaviour. Other applications include training people for complex jobs (for example, astronauts or surgeons) or enhancing virtual avatars with human-like eye movements. Eye tracking is achieved through three methods: tracking contact lenses, camera systems, and ElectroOculoGraphy (EOG).
The first two provide accurate results but are relatively expensive, bulky, and power- and computation-hungry, while EOG also appears suitable for compact products (such as virtual reality headsets) at reduced cost. EOG consists of capturing and analysing the electrical potentials of the skin around the eyes through small electrodes placed at strategic points on the user’s face. Since virtual reality headsets integrate a mask that makes broad contact with the user’s face, it is possible to integrate the necessary electrodes to enable EOG.
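The EOG principle described above is often approximated with a linear relation between electrode voltage and gaze angle, fitted during a short calibration in which the user fixates targets at known angles. The sketch below illustrates only that idea; the linear model, the microvolt readings, and the calibration angles are all assumed values, not measurements.

```python
# Minimal EOG calibration sketch (assumption: gaze angle is roughly linear
# in the electrode-pair voltage over the central range). All values are
# hypothetical, for illustration only.
import numpy as np

def calibrate_eog(voltages, angles):
    """Least-squares fit of angle = gain * voltage + offset for one channel."""
    A = np.column_stack([voltages, np.ones_like(voltages)])
    coef, *_ = np.linalg.lstsq(A, angles, rcond=None)
    return coef  # (gain, offset)

def voltage_to_angle(v, coef):
    gain, offset = coef
    return gain * v + offset

# Made-up calibration: user fixates five targets at known horizontal angles
# while the horizontal electrode pair reports these voltages.
cal_voltages = np.array([-80.0, -40.0, 0.0, 40.0, 80.0])  # microvolts
cal_angles   = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])  # degrees

coef = calibrate_eog(cal_voltages, cal_angles)
print(voltage_to_angle(20.0, coef))  # ≈ 5.0 degrees
```

A real headset would run one such fit per channel (horizontal and vertical electrode pairs) and would also need drift compensation, since the corneo-retinal potential baseline wanders over time.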
Eye Tracking Glasses
Simulator
Ergonomics Studies on Non-Traditional In-Vehicle Displays for Reducing Information Access Costs
Donghyun Beck, Department of Industrial Engineering, The Graduate School, Seoul National University
Drivers should keep their eyes forward most of the time while driving in order to remain in full control of the vehicle and aware of the dynamic road scene. It is therefore important to locate in-vehicle displays showing the information required for driving tasks close to the driver’s forward line of sight, thereby reducing eyes-off-the-road time. The automotive head-up display (HUD) system and the camera monitor system (CMS) are promising non-traditional in-vehicle display systems that can reduce information access costs. A HUD presents various information items directly in the driver’s forward field of view, allowing drivers to acquire necessary information while looking at the road ahead. A CMS consists of cameras capturing the vehicle’s side and rear views and of in-vehicle electronic displays presenting the real-time visual information, allowing the driver to obtain it inside the vehicle. Despite the potential benefits and promising applications of the HUD system and CMS, however, some important research questions must be addressed for their ergonomics design. As for the HUD system, presenting many information items indiscriminately can cause undesirable consequences such as information overload, visual clutter, and cognitive capture. Thus, only the necessary and important information should be selected and adequately presented according to the driving situation at hand. As for the CMS, the electronic displays can be placed at any position inside the vehicle, and this flexibility in display layout design may be leveraged to develop systems that facilitate the driver’s information processing and alleviate the physical demands associated with checking side and rear views.
Therefore, the following ergonomics research questions were considered: 1) ‘Among the various information items displayed by existing HUD systems, which ones are important?’, 2) ‘How should the important HUD information items be presented according to the driving situation?’, 3) ‘What are the design characteristics of CMS display layouts that can facilitate driver information processing?’, and 4) ‘What are the design characteristics of CMS display layouts that can reduce the physical demands of driving?’ As an effort to address some key knowledge gaps regarding these research questions and to contribute to the ergonomics design of these non-traditional in-vehicle display systems, two major studies were conducted: one on HUD information items, and the other on CMS display layouts. In the study on HUD information items, a user survey was conducted to 1) determine the perceived importance of twenty-two information items displayed by existing commercial automotive HUD systems, and 2) examine the contexts of use and the user-perceived design improvement points for high-priority HUD information items. A total of fifty-one drivers with significant prior HUD use experience participated. For each information item, the participants subjectively evaluated its importance and described its contexts of use and design improvement points. The information items varied greatly in perceived importance; current speed, speed limit, turn-by-turn navigation instructions, maintenance warning, cruise control status, and low fuel warning were of highest importance. For eleven high-priority information items, design implications and future research directions for the ergonomics design of HUD systems were derived. In the study on CMS display layouts, a driving simulator experiment was conducted to comparatively evaluate three CMS display layouts against the traditional side-view mirror arrangement in terms of 1) driver information processing and 2) the physical demands of driving.
The three layouts placed the two side-view displays inside the car near the conventional side-view mirrors, on the dashboard at each side of the steering wheel, and on the center fascia with the displays joined side by side, respectively. Twenty-two participants performed a safety-critical lane-changing task with each layout design. Compared to the traditional mirror system, all three CMS display layouts facilitated information processing and reduced physical demands. The design characteristics leading to these beneficial effects were placing the CMS displays close to the normal line of sight, to reduce eye-gaze travel distance, and locating each CMS display on the corresponding side of the driver, to maintain compatibility.
Keywords: head-up display (HUD), experienced users, importance of information items, contexts of information use, design improvement points, camera monitor system (CMS), in-vehicle side-view displays, display layout, information processing, physical demands
Student Number: 2013-21072
Eye Tracking Glasses
Simulator