Eye-tracking technology in vehicles: application and design
This work analyses eye-tracking technology and, as an outcome, presents an idea for implementing it, along with other kinds of technology, in vehicles. The main advantage of such an implementation would be increased safety while driving. The setup and the methodology used for detecting human activity and interaction by means of eye-tracking technology are investigated. Research in this area is growing rapidly and its results are used in a variety of cases. The main reasons for that growth are the steadily falling prices of the necessary equipment, the portability available in some cases, and the ease of use that makes the technology far more user-friendly than it was a few years ago. The whole idea of eye tracking is to track the movements of the eyes in order to determine the direction of the gaze, using sophisticated software and purpose-built hardware. This manuscript gives a brief introduction to the history of eye monitoring, presenting the early scientific approaches used to better understand human eye movements while tracking an object or performing an activity. An overview of the theory and methodology used to track a specific object follows, together with a short presentation of the image-processing and machine-learning procedures used to accomplish such tasks. Thereafter, we further analyse the specific eye-tracking technologies and techniques in use today and the characteristics that affect the choice of eye-tracking equipment; an appropriate choice must take into account the research area in which the equipment will be used. In addition, the main categories of eye-tracking applications are presented and we shortlist the latest state-of-the-art commercial eye-tracking systems. We then present our first approach, describing an eye-tracking device that could be used in vehicles to offer much better safety standards by controlling various parameters, continuously checking the readiness of the driver, and alerting the driver to potential imminent collisions. Finally, we describe the existing way in which a device, in our case an eye tracker, can be connected to an automobile's systems.
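As a rough illustration of the image-processing step mentioned in the abstract, the following minimal sketch (not part of the manuscript) locates a dark-pupil candidate in a single infrared eye-camera frame using OpenCV; the file name, threshold value and size limits are assumptions chosen only for illustration.

```python
# Minimal dark-pupil detection sketch (illustrative only, not the manuscript's pipeline).
# Assumes OpenCV (cv2) and a near-infrared eye image "eye_frame.png" in which the
# pupil appears as the darkest roughly elliptical blob.
import cv2

frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
if frame is None:
    raise FileNotFoundError("eye_frame.png not found")

blurred = cv2.GaussianBlur(frame, (7, 7), 0)                       # suppress sensor noise
_, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)   # 40 is an assumed threshold

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best = None
for c in contours:
    area = cv2.contourArea(c)
    if 200 < area < 20000 and len(c) >= 5:        # assumed plausible pupil sizes in pixels
        ellipse = cv2.fitEllipse(c)               # ((cx, cy), (width, height), angle)
        if best is None or area > best[1]:
            best = (ellipse, area)

if best is not None:
    (cx, cy), _, _ = best[0]
    # A gaze estimator would map this pupil centre to a gaze direction.
    print(f"Pupil centre estimate: ({cx:.1f}, {cy:.1f})")
else:
    print("No pupil candidate found")
```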
Eye Tracking Glasses
Software
GazeNav: Gaze-based pedestrian navigation
Pedestrian navigation systems help us make a series of decisions that lead us to a destination. Most current pedestrian navigation systems communicate using map-based turn-by-turn instructions. This interaction mode suffers from ambiguity, depends on the user's ability to match the instruction with the environment, and requires redirecting visual attention from the environment to the screen. In this paper we present GazeNav, a novel gaze-based approach to pedestrian navigation. GazeNav communicates the route to take based on the user's gaze at a decision point. We evaluate GazeNav against map-based turn-by-turn instructions. Based on an experiment conducted in a virtual environment with 32 participants, we found a significantly improved user experience with GazeNav compared to map-based instructions, showed the effectiveness of GazeNav, and found evidence for better local spatial learning. We also provide a complete comparison of navigation efficiency and effectiveness between the two approaches.
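The GazeNav implementation itself is not given in the abstract; the sketch below only illustrates the underlying idea of gaze-based confirmation at a decision point, i.e. comparing the bearing of the street the pedestrian is gazing at with the bearing of the street on the planned route. All identifiers, the tolerance value and the feedback strings are assumptions made for illustration.

```python
# Illustrative sketch of gaze-based route confirmation at a decision point.
# Not the GazeNav implementation; bearings are in degrees, 0 = north, clockwise.
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    street_bearings: list[float]   # bearings of all streets leaving the intersection
    route_bearing: float           # bearing of the street that belongs to the route

def angular_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings (0-180 degrees)."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def gazed_street(gaze_bearing: float, dp: DecisionPoint) -> float:
    """Street whose bearing is closest to the user's current gaze bearing."""
    return min(dp.street_bearings, key=lambda s: angular_diff(s, gaze_bearing))

def feedback(gaze_bearing: float, dp: DecisionPoint, tolerance: float = 15.0) -> str:
    """Confirm when the user looks down the correct street (tolerance is an assumed value)."""
    street = gazed_street(gaze_bearing, dp)
    if angular_diff(street, dp.route_bearing) <= tolerance:
        return "confirm"       # e.g. a tone or vibration telling the user this is the way
    return "stay silent"

# Example: a T-junction where the route continues east (90 degrees).
dp = DecisionPoint(street_bearings=[0.0, 90.0, 270.0], route_bearing=90.0)
print(feedback(80.0, dp))      # user looks roughly east -> "confirm"
print(feedback(265.0, dp))     # user looks west         -> "stay silent"
```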
Reviewers: Ian Giblet, CAS-UK; Jan-Patrick Osterloh, OFF
This deliverable reports the progress of the HoliDes consortium in developing methods, techniques, and tools (MTTs) for the Human Factors Reference Technology Platform (HF-RTP), version 1.0. For work package 5 (WP5), it concludes project cycle I. During this cycle, as a first step we received the requirements from the application work packages WP6-9. After an analysis of these requirements (cf. D5.1), we selected those metrics and methods to be developed in WP5 which could best meet the AdCoS owners' needs. Having documented those MTTs as HF-RTP 0.5 in D5.2, we made the first instantiations of these techniques and tools. The result of this work is described in this document. For each method, technique or tool, a detailed description is provided concerning the data the MTTs receive, the data they provide, their current functionality, as well as five definitive and further potential use cases. These use cases (see Table 1) originate from the four HoliDes domains: Health, Aerospace, Control Room and Automotive. In our definition, a method is a general way to solve a problem; this could be the use of task analysis to answer a general design question. A technique is a concrete instantiation of such a method, such as the application of a specific form of task analysis to the development and evaluation of an adaptive system. Finally, a tool is a technique that has been realised as either hardware or software; such a tool could be a program that aids the collection and organisation of observations during the task analysis. The MTTs created in this work package follow the objective to “Develop techniques and tools for empirical analysis of Adaptive Cooperative Human-Machine Systems (AdCoS) against human factors and safety regulations.” To achieve this objective and provide the application work packages 6–9 (WP6-9) with methods that best suit their needs, the starting point of our work has been the AdCoS requirements from WP6-9. Some of these requirements describe genuine AdCoS functionality, while others relate to MTTs necessary to develop AdCoS functionality or to aspects of the design process itself. Consequently, the purpose of WP5’s MTTs is to enable, aid, and assist the empirical analysis of adaptive, cooperative systems, or to act as part of these systems’ functionality. The actual outcome of the work presented here will be software tools and empirical results as the basis for system functionality and design decisions, but also procedures and algorithms and their implementation. All of these efforts help realise the human-centred design process as described, for example, in ISO 9241-210, both during design and evaluation. For a quick overview of WP5’s MTT landscape, Table 1 shows the names and short descriptions of all methods, techniques and tools as well as the use cases to which they are being applied.
SET: a pupil detection method using sinusoidal approximation
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the images these devices produce. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”) and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered as an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”) that can be imported into many programming languages, including C# and Visual Basic, on Windows.
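The released SET toolkit is MATLAB-based and is not reproduced here; the sketch below only illustrates the general idea of approximating a candidate pupil border with sinusoidal components (the boundary of an ellipse traced at a uniform parameter is a sum of cosine and sine terms), with all details assumed for illustration rather than taken from the published method.

```python
# Simplified illustration of approximating a candidate pupil border with sinusoids.
# Sketch of the general idea only; it is not the published SET toolkit.
import numpy as np

def sinusoidal_fit(border_xy: np.ndarray):
    """Least-squares fit x(t), y(t) = offset + A*cos(t) + B*sin(t) to ordered border points.

    border_xy: (N, 2) array of boundary coordinates, assumed ordered around the segment.
    Returns the fitted centre (cx, cy) and the mean residual (lower = more ellipse-like).
    """
    n = len(border_xy)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    basis = np.column_stack([np.ones(n), np.cos(t), np.sin(t)])   # design matrix

    coeff_x, *_ = np.linalg.lstsq(basis, border_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(basis, border_xy[:, 1], rcond=None)

    residual = np.mean(np.abs(basis @ coeff_x - border_xy[:, 0])) + \
               np.mean(np.abs(basis @ coeff_y - border_xy[:, 1]))
    return (coeff_x[0], coeff_y[0]), residual

# Synthetic test: an ellipse centred at (120, 80) should fit almost perfectly.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.column_stack([120 + 30 * np.cos(t), 80 + 18 * np.sin(t)])
centre, err = sinusoidal_fit(ellipse)
print(f"centre = {centre}, residual = {err:.3f}")
```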
Eye Tracking Glasses
Software
Smartwatches vs. smartphones: A preliminary report of driver behavior and perceived risk while responding to notifications
This study examines driver engagement with smartwatches and smartphones while driving. Twelve participants (7 novice and 5 experienced smartwatch users) drove in a high-fidelity simulator while receiving notifications from either a smartwatch (Pebble) or a smartphone (LG Nexus 5). It was found that participants had more glances, on average, per notification while using the smartwatch compared to the smartphone. Further, their brake response times were longer when they received notifications prior to a lead vehicle braking event on the smartwatch compared to when they did not receive any notifications and when they received notifications on the smartphone. Contrary to these glance and driving performance findings, participants perceived similar levels of risk for the two devices, and they largely reported that smartwatch use while driving should receive penalties equal to or less than smartphone use with respect to distracted driving legislation. Thus, there appears to be a disconnect between drivers' actual performance while using smartwatches and their perceptions.
Towards virtually transparent vehicles: first results of a simulator study and a field trial
Current versions of heavy trucks, tanks or excavators suffer from limited visibility due to small windshields. One option for overcoming such limitations is to create a virtually transparent vehicle by using a camera-monitor or head-mounted display (HMD) system to provide seamless vision to the driver. The aim of the study is to compare two vision systems for 'virtually transparent' vehicles, an HMD and a camera-monitor system, in a simulation environment with regard to ergonomic aspects and future prospects. The simulator includes a generic mock-up of the vehicle to emulate the visual masking effects of a real armoured vehicle; the driver thus experiences the obstruction of the visual space caused by the A-pillars. In addition, the degree of immersion of the driver is increased by windows on the left and right sides. The monitor-based vision system is built in a semicircular arrangement in front of the driver with five 13-inch monitors. In this arrangement, the interior angle between adjacent displays is 40°, so a total view of 160° can be displayed. The display panels have a maximum resolution of 1280 x 960 and an aspect ratio of 16:10. The alternative vision system uses an Oculus Rift DK2 HMD. In order to create a three-dimensional view around the driver, the images are projected onto a curved surface, which gives the driver the freedom to look around in all directions. The Oculus Rift provides a nominal field of view (FoV) of approximately 100°. A simulated route of about 16 km was driven repeatedly for 2 hours under different test conditions such as federal highways, short off-road sections and crossings with simulated intersection traffic, while observing the rules of the road. To minimise sequence effects, the order in which these test conditions were presented was varied. After each condition, acceptance, performance, subjective stress (NASA TLX), workload, usability and driving performance were determined. As a secondary task, the driver had to identify and announce possible threats out loud.
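One plausible reading of the stated monitor geometry, assumed here rather than taken from the study, is that adjacent displays are rotated 40° relative to each other, so the five displays span (5 - 1) * 40° = 160° between the outermost display orientations; the tiny check below only reproduces that arithmetic.

```python
# One plausible reading of the monitor geometry described above (an assumption, not
# data from the study): adjacent displays are rotated 40 degrees relative to each
# other, so five displays span (5 - 1) * 40 = 160 degrees of view.
def displayed_view(num_displays: int, angle_between_adjacent_deg: float) -> float:
    """Angular span covered between the outermost display orientations."""
    return (num_displays - 1) * angle_between_adjacent_deg

print(displayed_view(5, 40.0))  # -> 160.0, matching the 160-degree figure in the text
```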
Eye Tracking Glasses
Simulator
Using sound to reduce visual distraction from in-vehicle human–machine interfaces
Objective: Driver distraction and inattention are the main causes of accidents. The fact that devices such as navigation displays and media players are part of the distraction problem has led to the formulation of guidelines advocating various means of minimizing the visual distraction from such interfaces. However, even when design guidelines and recommendations are followed, certain interface interactions, such as menu browsing, still require off-road visual attention that increases crash risk. In this article, we investigate whether adding sound to an in-vehicle user interface can provide the support necessary to create a significant reduction in glances toward a visual display when browsing menus. Methods: Two sound concepts were developed and studied: spearcons (time-compressed speech sounds) and earcons (musical sounds). A simulator study was conducted in which 14 participants between the ages of 36 and 59 took part. Participants performed 6 different interface tasks while driving along a highway route. A 3 × 6 within-group factorial design was employed with sound (no sound/earcons/spearcons) and task (6 different task types) as factors. Eye glances and corresponding measures were recorded using a head-mounted eye tracker. Participants' self-assessed driving performance was also collected after each task on a 10-point scale ranging from 1 = very bad to 10 = very good. Separate analyses of variance (ANOVAs) were conducted for the different eye glance measures and self-rated driving performance. Results: The added spearcon sounds significantly reduced total glance time as well as the number of glances while retaining task time, compared to the baseline (no sound) condition (total glance time M = 4.15 for spearcons vs. M = 7.56 for baseline, p = .03). The earcon sounds did not produce such distraction-reducing effects. Furthermore, participants' ratings of their driving performance were significantly higher in the spearcon conditions than in the baseline and earcon conditions (M = 7.08 vs. M = 6.05 and M = 5.99, respectively, p = .035 and p = .002). Conclusions: The spearcon sounds appear to reduce visual distraction efficiently, whereas the earcon sounds did not reduce distraction measures or increase subjective driving performance. An aspect that must be further investigated is how well spearcons and other types of auditory displays are accepted by drivers in general and how they work in real traffic.
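Spearcons are typically produced by time-compressing a recorded or synthesised speech prompt without changing its pitch. The sketch below shows one common way to do this with librosa's time stretching; the input file name and the 2.5x compression rate are assumptions, and this is not the sound-generation procedure used in the study.

```python
# Illustrative spearcon generation: time-compress a speech prompt without changing
# its pitch. Not the procedure used in the study; "play_artist.wav" and the
# 2.5x compression rate are assumptions chosen for illustration.
import librosa
import soundfile as sf

speech, sr = librosa.load("play_artist.wav", sr=None)        # hypothetical recorded prompt
spearcon = librosa.effects.time_stretch(speech, rate=2.5)    # 2.5x faster, same pitch
sf.write("play_artist_spearcon.wav", spearcon, sr)

print(f"original: {len(speech) / sr:.2f} s, spearcon: {len(spearcon) / sr:.2f} s")
```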
Eye Tracking Glasses
Simulator
3D displays in cars: Exploring the user performance for a stereoscopic instrument cluster
In this paper, we investigate user performance for stereoscopic automotive user interfaces (UI). Our work is motivated by the fact that stereoscopic displays are about to find their way into cars. Such a safety-critical application area creates an inherent need to understand how the use of stereoscopic 3D visualizations impacts user performance. We conducted a comprehensive study with 56 participants to investigate the impact of a 3D instrument cluster (IC) on primary and secondary task performance. We investigated different visualizations (2D and 3D) and complexities (low vs. high amount of detail) of the IC as well as two 3D display technologies (shutter vs. autostereoscopy). As secondary tasks, the participants judged spatial relations between UI elements (expected events) and reacted to pop-up instructions (unexpected events) in the IC. The results show that stereoscopy increases accuracy for expected events, decreases task completion times for unexpected events, and increases the attractiveness of the interface. Furthermore, we found a significant influence of the display technology used, indicating that secondary task performance improves for shutter displays.