Publication Hub Archive

Transportation & Mobility

You have reached the Ergoneers Publication Hub for:

Field of Application > Transportation & Mobility


Total results: 302

Driver Demand: Eye Glance Measures

Year: 2016

Authors: S Seaman, L Hsieh, R Young

This study investigated driver glances while engaging in infotainment tasks in a stationary vehicle during surrogate driving: watching a driving video recorded from a driver's viewpoint and projected on a large screen, performing a lane-tracking task, and performing the Tactile Detection Response Task (TDRT) to measure attentional effects of secondary tasks on event detection and response. Twenty-four participants were seated in a 2014 Toyota Corolla production vehicle with the navigation system option. They performed the lane-tracking task using the vehicle's steering wheel, fitted with a laser pointer to indicate wheel movement on the driving video. Participants simultaneously performed the TDRT and a variety of infotainment tasks, including Manual and Mixed-Mode versions of Destination Entry and Cancel, Contact Dialing, Radio Tuning, Radio Preset selection, and other Manual tasks. Participants also completed the 0- and 1-Back pure auditory-vocal tasks. Glances were recorded using an eye-tracker and validated by manual inspection. Glances were classified as on-road (i.e., looking through the windshield) or off-road (i.e., to locations other than through the windshield). Three off-road glance metrics were tabulated and scored using the NHTSA Guidelines methods: Mean Single Glance Duration (MSGD), Total Eyes-Off-Road Time (TEORT), and Long Glance Proportion (LGP). These metrics were compared between the task conditions and a 30-s Baseline condition with no task. Mixed-Mode tasks did not have statistically significantly longer MSGD or TEORT, or higher LGP, than Baseline (except for Mixed-Mode Destination Entry), whereas all the Manual tasks did. Mixed-Mode tasks improved compliance with the NHTSA Guidelines.
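The three glance metrics named in the abstract are simple aggregates over the durations of individual off-road glances. A minimal sketch, assuming the 2-second long-glance threshold from the NHTSA visual-manual distraction guidelines (the function name and structure are illustrative, not the authors' scoring code):

```python
def glance_metrics(off_road_glances):
    """Compute MSGD, TEORT, and LGP from a list of off-road glance
    durations in seconds. The 2.0 s long-glance threshold is taken from
    the NHTSA guidelines; the implementation itself is a sketch."""
    if not off_road_glances:
        return {"MSGD": 0.0, "TEORT": 0.0, "LGP": 0.0}
    n = len(off_road_glances)
    return {
        # Mean Single Glance Duration: average duration of one off-road glance
        "MSGD": sum(off_road_glances) / n,
        # Total Eyes-Off-Road Time: summed duration of all off-road glances
        "TEORT": sum(off_road_glances),
        # Long Glance Proportion: share of glances longer than 2 seconds
        "LGP": sum(d > 2.0 for d in off_road_glances) / n,
    }
```

For example, two off-road glances of 1.0 s and 3.0 s give MSGD = 2.0 s, TEORT = 4.0 s, and LGP = 0.5, since one of the two glances exceeds 2 seconds.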

2 versions available (also listed as SAE Technical Paper 2016-01-1421)

Experimental evaluation of the controllability of interacting advanced driver assistance systems

Year: 2016

Authors: O Schädler, S Müller, M Gründl

A method for the experimental evaluation of the controllability of interacting advanced driver assistance systems (ADAS) is presented at the beginning of this paper. Driving situations in which particular ADAS act within, at, or beyond their system limits, or during and after system failures, were implemented in a static driving simulator. Following the recommendation of the Code of Practice (CoP), each situation was assessed in order to select three critical driving situations. The second part of the paper describes two driving simulator studies evaluating the controllability of four interacting ADAS (Automatic Emergency Brake Assist (AEB), Adaptive Cruise Control (ACC), Lane Keeping Assist (LKA), and Lane Change Decision Aid System (LCDAS)) in critical driving situations. Each study is based on a within-subjects design: each participant drove each of the three scenarios ('Stationary Obstacle Avoidance', 'Braking Object Vehicle', and 'Three-Lane Motorway') both without and with ADAS. The recorded physical and physiological data and the participants' subjective perceptions were analysed. One finding was that some drivers became confused when ACC braked while LKA overlaid a steering torque during a system failure (3 Nm steering torque ramp) in the 'Three-Lane Motorway' scenario. Accidents also occurred during evasive manoeuvres in which ACC accelerated while LKA overlaid a steering torque. Based on the results of the study, functional improvements that might enhance the interaction of ADAS were derived and are presented in this paper. These improvements were tested and evaluated in a second, replicated driving simulator study, whose results show improved controllability when the vehicle in front is detected earlier. They also confirm that unexpected braking and warning by the Adaptive Cruise Control, combined with an overlaid steering torque, lead to uncontrollable behaviour.

3 versions available

Gaze augmentation in egocentric video improves awareness of intention

Year: 2016

Authors: D Akkil, P Isokoski

Video communication using head-mounted cameras could be useful to mediate shared activities and support collaboration. Growing popularity of wearable gaze trackers presents an opportunity to add gaze information on the egocentric video. We hypothesized three potential benefits of gaze-augmented egocentric video to support collaborative scenarios: support deictic referencing, enable grounding in communication, and enable better awareness of the collaborator's intentions. Previous research on using egocentric videos for real-world collaborative tasks has failed to show clear benefits of gaze point visualization. We designed a study, deconstructing a collaborative car navigation scenario, to specifically target the value of gaze-augmented video for intention prediction. Our results show that viewers of gaze-augmented video could predict the direction taken by a driver at a four-way intersection more accurately and more confidently than a viewer of the same video without the superimposed gaze point. Our study demonstrates that gaze augmentation can be useful and encourages further study in real-world collaborative scenarios.

3 versions available

Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving

Year: 2016

Authors: S Hergeth, L Lorenz, R Vilimek, JF Krems

Objective: The feasibility of measuring drivers’ automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers’ automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers’ automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers’ self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers’ automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers’ automation trust and reliance during highly automated driving.

8 versions available

Measuring safety for urban tunnel entrance and exit based on nanoscopic driving behaviors

Year: 2016

Authors: S Fei, X Qian, X Xiaoling, M Chao

The entrance and exit zones of an urban tunnel are considered the most dangerous parts of the tunnel because of the sharp changes in the driving environment there. The objective of this paper is to extract a comprehensive measure from numerous original measures of drivers' nanoscopic behaviors in order to assess the safety of urban tunnel entrances and exits. A field test was conducted at the Xi'anmen tunnel in Nanjing. The drivers' heart rate (HR), eye movement, and pupillary diameter at the tunnel entrance and exit were collected synchronously with the D-Lab system. Operating speed was recorded by video camera, and individual vehicle acceleration was then calculated. Following a factor analysis procedure, three factors, which explain 90.18% and 89.15% of the variance in the original data for the entrance and exit respectively, were retained from the initial four nanoscopic driving behavior measures. Using the weight score of each factor, a comprehensive measure (FE) reflecting nanoscopic driving behavior was extracted as a linear combination of the three retained factors. To measure the safety level of the urban tunnel gateway, FE is classified into three levels: |FE| ≤ 0.05, safe; 0.05 < |FE| < 0.10, moderately dangerous; |FE| ≥ 0.10, dangerous. Validation against tunnel environmental conditions shows that the measure proposed in this paper is acceptable and more accurate in evaluating the safety of tunnel gateway zones.
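The three-level safety classification of FE is a plain threshold rule. A minimal sketch, assuming inclusive boundaries at 0.05 and 0.10 where the abstract leaves the boundary cases ambiguous (the function name is illustrative, not from the paper):

```python
def classify_fe(fe):
    """Map the comprehensive nanoscopic-behaviour measure FE to the three
    safety levels from the abstract. Boundary handling (<= 0.05 safe,
    >= 0.10 dangerous) is an assumption where the text is ambiguous."""
    magnitude = abs(fe)  # the criterion is stated on |FE|
    if magnitude <= 0.05:
        return "safe"
    elif magnitude < 0.10:
        return "moderately dangerous"
    else:
        return "dangerous"
```

For example, FE = -0.07 falls in the moderately dangerous band, since only the magnitude |FE| enters the criterion.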

3 versions available

NaviLight: investigating ambient light displays for turn-by-turn navigation in cars

Year: 2016

Authors: A Matviienko, A Löcken, A El Ali, W Heuten

Car navigation systems typically combine multiple output modalities; for example, GPS-based navigation aids show a real-time map, or feature spoken prompts indicating upcoming maneuvers. However, the drawback of graphical navigation displays is that drivers have to explicitly glance at them, which can distract from a situation on the road. To decrease driver distraction while driving with a navigation system, we explore the use of ambient light as a navigation aid in the car, in order to shift navigation aids to the periphery of human attention. We investigated this by conducting studies in a driving simulator, where we found that drivers spent significantly less time glancing at the ambient light navigation aid than on a GUI navigation display. Moreover, ambient light-based navigation was perceived to be easy to use and understand, and preferred over traditional GUI navigation displays. We discuss the implications of these outcomes on automotive personal navigation devices.

9 versions available

On the visual distraction effects of audio-visual route guidance

Year: 2016

Authors: T Kujala, H Grahn, J Mäkelä, A Lasch

This is the first controlled quantitative analysis of the visual distraction effects of audio-visual route guidance in simulated but ecologically realistic driving scenarios with dynamic maneuvers and self-controlled speed (N = 24). The audio-visual route guidance system under testing passed the set verification criteria, which were based on drivers' preferred occlusion distances on the test routes. There were no significant effects of the location of an upcoming maneuver instruction (up, down) on the in-car display on any metric or on the experienced workload. The drivers' median occlusion distances correlated significantly with median in-car glance distances. There was no correlation between drivers' median occlusion distance and intolerance of uncertainty, but significant inverse correlations between occlusion distances and age as well as driving experience were found. The findings suggest that the visual distraction effects of audio-visual route guidance are low and provide general support for the proposed testing method.

3 versions available

Operator information acquisition in excavators–Insights from a field study using eye-tracking

Year: 2016

Authors: M Koppenborg, M Huelke, P Nickel, A Lungfiel

Poor operator direct sight can lead to collisions between excavators and humans, especially during reversing movements. Viewing aids, such as mirrors and camera monitor systems (CMS), are intended to compensate for this. As empirical evidence on operators’ visual information acquisition is scarce, this study investigated the utilization of mirrors and CMS during regular work on construction sites using eye-tracking and task observation. Results show that, during reversing movements, the left mirror and the CMS monitor in particular were used. Implications of utilization and neglect are discussed with regard to safety and machinery design, such as the configuration of viewing aids.

1 version available

Speech feedback reduces driver distraction caused by in-vehicle visual interfaces

Year: 2016

Authors: P Larsson

Driver distraction and inattention are the main causes of accidents today, and one way for vehicle manufacturers to address this problem may be to replace or complement visual information in in-vehicle interfaces with auditory displays. In this paper, we address the specific problem of giving text input to an interface while driving. We test whether the handwriting input method, which has previously been shown to be promising in terms of reducing distraction, can be further improved by adding speech feedback. A driving simulator study was carried out in which 11 participants (3 female) drove in two different scenarios (curvy road and straight motorway) while performing three different handwriting text input tasks. Glance behavior was measured using a head-mounted eye-tracker, and subjective responses were also acquired. An ANOVA revealed that speech feedback resulted in less distraction, as measured by total glance time, compared to the baseline condition (no speech). There were, however, also interaction effects indicating that the positive effect of speech feedback was less prominent in the curvy road scenario. Post-experiment interviews nonetheless showed that the participants felt the speech feedback made the text input task safer, and that they preferred speech feedback over no speech.

1 version available