Publication Hub Archive

Transportation & Mobility

You have reached the Ergoneers Publication Hub for Transportation & Mobility.


Total results: 302

Quiet eye predicts goaltender success in deflected ice hockey shots

Year: 2017

Authors: D Panchuk, JN Vickers, WG Hopkins, University of Victoria

In interceptive timing tasks, long quiet eye (QE) durations at the release point, along with early tracking on the object, allow performers to couple their actions to the kinematics of their opponent and regulate their movements based on emergent information from the object’s trajectory. We used a mobile eye tracker to record the QE of eight university-level ice hockey goaltenders of an equivalent skill level as they responded to shots that deflected off a board placed to their left or right, resulting in a trajectory with low predictability. QE behaviour was assessed using logistic regression and magnitude-based inference. We found that when QE onset occurred later in the shot (950 ± 580 ms, mean ± SD) there was an increase in the proportion of goals allowed (41% vs 22%) compared to when QE onset occurred earlier. A shorter QE duration (1260 ± 630 ms) predicted a large increase in the proportion of goals scored (38% vs 14%). More saves occurred when QE duration (2074 ± 47 ms) was longer. An earlier QE offset (2004 ± 66 ms) also resulted in a large increase in the number of goals allowed (37% vs 11%) compared to a later offset (2132 ± 41 ms). Since an early, sustained QE duration contributed to a higher percentage of saves, it is important that coaches develop practice activities that challenge the goaltender’s ability to fixate the puck early, as well as sustain a long QE fixation on the puck until after it is released from the stick.
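
The abstract describes a logistic regression of goal outcome on QE metrics. The sketch below is purely illustrative: the data are synthetic, invented to mimic the reported pattern (late onsets and short durations predicting goals), and a plain gradient-descent fit stands in for the paper's logistic regression with magnitude-based inference.

```python
import math
import random
import statistics

random.seed(1)

# Synthetic trials (invented, not the study's data): late QE onset and
# short QE duration raise the probability that a shot becomes a goal.
def make_trial():
    onset = random.gauss(950, 580)      # QE onset, ms
    duration = random.gauss(1700, 600)  # QE duration, ms
    logit = 0.0015 * (onset - 950) - 0.002 * (duration - 1700)
    goal = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return onset, duration, goal

trials = [make_trial() for _ in range(500)]

def zscores(xs):
    # Standardize so gradient descent behaves with ms-scale features.
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

onset_z = zscores([t[0] for t in trials])
dur_z = zscores([t[1] for t in trials])
y = [t[2] for t in trials]

# Two-feature logistic regression fitted by full-batch gradient descent.
w_onset, w_dur, b = 0.0, 0.0, 0.0
lr, n = 0.5, len(y)
for _ in range(1500):
    g_on = g_du = g_b = 0.0
    for xo, xd, yi in zip(onset_z, dur_z, y):
        p = 1 / (1 + math.exp(-(w_onset * xo + w_dur * xd + b)))
        err = (p - yi) / n
        g_on += err * xo
        g_du += err * xd
        g_b += err
    w_onset -= lr * g_on
    w_dur -= lr * g_du
    b -= lr * g_b

# Later onset should push towards goals (+), longer duration away (-).
print(f"onset weight {w_onset:+.2f}, duration weight {w_dur:+.2f}")
```

With real tracker data the same fit would use the recorded per-shot onset and duration values; the reported effect sizes (e.g. 41% vs 22% goals for late vs early onset) come from the magnitude-based inference step, which is omitted here.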

2 versions available

Ready for take-over? A new driver assistance system for an automated classification of driver take-over readiness

Year: 2017

Authors: C Braunagel, W Rosenstiel

Recent studies analyzing driver behavior report that various factors may influence a driver's take-over readiness when resuming control after an automated driving section. However, little effort has been made to transfer and integrate these findings into an automated system that classifies the driver's take-over readiness and derives the expected take-over quality. This study introduces a new advanced driver assistance system to classify the driver's take-over readiness in conditionally automated driving scenarios. The proposed system works preemptively, i.e., the driver is warned in advance if a low take-over readiness is to be expected. The classification of the take-over readiness is based on three information sources: (i) the complexity of the traffic situation, (ii) the current secondary task of the driver, and (iii) the driver's gazes at the road. An evaluation based on a driving simulator study with 81 subjects showed that the proposed system can detect the take-over readiness with an accuracy of 79%. Moreover, the impact of the character of the take-over intervention on the classification result is investigated. Finally, a proof of concept of the novel driver assistance system is provided, showing that more than half of the drivers with a low take-over readiness would be warned preemptively, with only a 13% false-alarm rate.
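
The abstract names three information sources but not the classifier itself, so the following is only a structural sketch: a hypothetical weighted readiness score over made-up `traffic_complexity`, `secondary_task`, and `road_gaze_ratio` inputs, with invented weights and threshold, that warns preemptively when predicted readiness is low.

```python
from dataclasses import dataclass

# Hypothetical task-risk values -- the paper does not publish its model,
# so these numbers only illustrate the three-source structure it describes.
TASK_RISK = {"none": 0.0, "reading": 0.5, "typing": 0.8, "watching_video": 0.7}

@dataclass
class DriverState:
    traffic_complexity: float  # 0 (empty road) .. 1 (dense, merging traffic)
    secondary_task: str        # current non-driving task
    road_gaze_ratio: float     # share of recent time with gaze on the road, 0..1

def takeover_readiness(s: DriverState) -> float:
    """Higher score = more ready to take over (hypothetical scoring)."""
    distraction = TASK_RISK.get(s.secondary_task, 0.5)
    return (0.4 * s.road_gaze_ratio
            + 0.3 * (1 - distraction)
            + 0.3 * (1 - s.traffic_complexity))

def should_warn(s: DriverState, threshold: float = 0.5) -> bool:
    # Preemptive warning when the predicted readiness is low.
    return takeover_readiness(s) < threshold

attentive = DriverState(traffic_complexity=0.2, secondary_task="none",
                        road_gaze_ratio=0.9)
distracted = DriverState(traffic_complexity=0.8, secondary_task="typing",
                         road_gaze_ratio=0.1)
```

A real system would learn such weights from observed take-over quality rather than hard-coding them; the point here is only the preemptive structure: estimate readiness from the three sources and warn before a take-over request if the estimate falls below the threshold.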

5 versions available

The effects of continuous driving-related feedback on drivers’ response to automation failures

Year: 2017

Authors: G Cohen

During prolonged periods of autonomous driving, drivers tend to shift their attention away from the driving task. As a result, they require more time to regain awareness of the driving situation and to react to it. This study examined the use of informative automation that, during Level-3 autonomous driving, provided drivers with continuous feedback regarding the vehicle’s actions and surroundings. It was hypothesized that the informative automation would trigger drivers to allocate more attention to the driving task and would improve their reaction times when resuming control of the vehicle. Sixteen participants drove manual and autonomous driving segments in a driving simulator equipped with Level-3 automation. For half of the participants, the informative automation issued alerts and messages, while for the other half no messages were issued (control). The number of on-road glances served as a proxy for drivers’ attention. Drivers’ performance in handling an unexpected automation failure event was measured using their time-to-brake and time-to-steer. Results showed that drivers using the informative automation made more frequent on-road glances than drivers in the control group. Yet there were no significant differences in reaction times to the automation failure event between the groups. Explanations and implications of these results are discussed.

6 versions available

The effects of situational demands on gaze, speech and gesture input in the vehicle

Year: 2017

Authors: F Roider, S Rümelin, B Pfleging, T Gross

Various on-the-road situations can place additional demands on the driver that go beyond the basic demands of driving. They thereby influence the appropriateness of in-vehicle input modalities for operating secondary tasks in the car. In this work, we assess the specific impacts of situational demands on gaze, gesture and speech input with regard to driving performance, interaction efficiency and subjective ratings. An experiment with 29 participants in a driving simulator revealed significant interactions between situational demands and the input modality on secondary task completion times, perceived suitability and cognitive workload. Impairments were greatest when the situational demand addressed the same sensory channel as the input modality in use. This was reflected differently in objective and subjective data depending on the input modality. With this work, we explore the performance of natural input modalities across different situations and thereby support interaction designers who plan to integrate these modalities into automotive interaction concepts.

4 versions available

Towards practical driver cognitive load detection based on visual attention information

Year: 2017

Authors: CC Liu

Driving is a complex activity that requires drivers to maintain a high level of cognitive functioning. High cognitive load can impair driving performance and increase the risk of accidents. To detect cognitive load in drivers, researchers have proposed various methods, including physiological measures, vehicle-based metrics, and visual attention information. Among these, visual attention information is a promising indicator due to its non-intrusive nature and potential to provide real-time monitoring. This research aims to develop a practical approach for detecting driver cognitive load based on visual attention information. The study explores different visual attention features, such as gaze patterns, fixation duration, and saccade movements, and evaluates their effectiveness in predicting cognitive load. The findings of this research could contribute to the development of advanced driver assistance systems (ADAS) that enhance road safety by monitoring and responding to drivers' cognitive states.
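
The abstract does not specify how its gaze features are computed. A common way to derive fixation durations and saccade counts from raw gaze samples is velocity-threshold (I-VT) event detection; the sketch below uses an illustrative 100 deg/s threshold and a toy gaze trace, not the study's actual pipeline.

```python
import math

# Velocity threshold above which a sample is treated as saccadic.
# 100 deg/s is a commonly used illustrative value, not the study's setting.
VEL_THRESHOLD = 100.0

def detect_events(samples):
    """samples: list of (t_seconds, x_deg, y_deg), time-ordered.
    Returns (fixation durations in seconds, number of saccades)."""
    labels = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        vel = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        labels.append("sac" if vel > VEL_THRESHOLD else "fix")
    # Group consecutive same-labelled inter-sample intervals into events.
    fix_durs, n_sac = [], 0
    i = 0
    while i < len(labels):
        j = i
        while j < len(labels) and labels[j] == labels[i]:
            j += 1
        if labels[i] == "fix":
            fix_durs.append(samples[j][0] - samples[i][0])
        else:
            n_sac += 1
        i = j
    return fix_durs, n_sac

# Toy 100 Hz trace: a fixation, a fast gaze shift, a second fixation.
xs = [0.0] * 20 + [3.0, 6.0, 9.0] + [9.5] * 20
trace = [(k / 100, x, 0.0) for k, x in enumerate(xs)]
fix_durs, n_sac = detect_events(trace)
print(f"{len(fix_durs)} fixations, {n_sac} saccade(s)")
```

Features such as mean fixation duration, saccade rate, or gaze dispersion can then be fed into whatever cognitive-load classifier is used downstream.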

3 versions available

Virtual eye height and display height influence visual distraction measures in simulated driving conditions

Year: 2017

Authors: P Larsson, J Engström, C Wege

Glance behaviour towards in-vehicle visual displays is likely not only a result of the design of the display itself, but is also influenced by other factors such as the position of the display and characteristics of the surrounding road scene. In the current study, it was hypothesized that both display position and simulator view affect a driver’s glance behaviour. A simulator study was conducted in which 25 participants drove in a highway scenario while performing three different tasks on a smartphone positioned at two different heights. Two different simulator views were used: one corresponded to the view from the driver’s seat of a truck, the other to the view from the driver’s seat of a car. A within-group design was used with simulator view, smartphone position, and task as factors. Results showed that the type of view and the display position to some extent influenced glance behaviour as well as subjective ratings of driving performance. These results may have implications for eye glance measurement procedures as well as for guidelines relating to driver distraction, e.g. that simulated road scenes must correspond to the vehicle class for which the device under test is intended.

2 versions available

Visual distraction effects of in-car text entry methods: Comparing keyboard, handwriting and voice recognition

Year: 2017

Authors: T Kujala, H Grahn

Three text entry methods were compared in a driving simulator study with 17 participants. Occlusion distance (OD) data from ninety-seven drivers, mapped onto the test routes, served as a baseline to evaluate the methods’ visual distraction potential. Only the voice recognition-based text entry tasks passed the set verification criteria. Handwriting tasks were experienced as the most demanding and the voice recognition tasks as the least demanding. An individual in-car glance length preference was found but, against expectations, drivers’ ODs did not correlate with in-car glance lengths or visual short-term memory capacity. The handwriting method was further studied with 24 participants, with instructions and practice on writing eyes-on-road. The practice did not affect the test results. The findings suggest that handwriting could be visually less demanding than touch-screen typing, but the reliability of character recognition should be improved, or the driver should be well experienced with the method, to minimize its distraction potential.

2 versions available

Angle of attack visualization: a proposal for a tangible interactive in-flight loss of control recovery system

Year: 2016

Authors: N Kasdaglis

Loss of control in flight (LOC-I) is the leading cause of fatalities in aviation accidents. A tangible interactive system that provides real-time angle of attack (AOA) visualization could enhance pilot situational awareness and improve recovery techniques during LOC-I events.

2 versions available

Assisting drivers with ambient take-over requests in highly automated driving

Year: 2016

Authors: SS Borojeni, L Chuang, W Heuten, S Boll

Take-over situations in highly automated driving occur when drivers have to take over vehicle control due to automation shortcomings. Due to the high visual processing demand of the driving task and the time limitation of a take-over maneuver, appropriate user interface designs for take-over requests (TOR) are needed. In this paper, we propose applying ambient TORs, which address the peripheral vision of a driver. Conducting an experiment in a driving simulator, we tested a) ambient displays as TORs, b) whether contextual information could be conveyed through ambient TORs, and c) whether the presentation pattern (static, moving) of the contextual TORs has an effect on take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing the workload. The presentation pattern, however, did not have an effect on take-over performance.

6 versions available

Distracted driving: scientific basis for risk assessments of driver’s workplaces

Year: 2016

Authors: B Gross, S Birska, M Bretschneider

At professional drivers’ workplaces, mobile devices are used as telematics applications for information exchange between dispatchers and drivers. Alongside their wide-ranging benefits, however, such applications also create potential new risks, such as driver distraction. The present study is based on conditions encountered in an existing company in the passenger transport sector and is part of a consultation by the Institute for Occupational Safety and Health, Germany, to support the implementation of a risk assessment of the applied telematics software. In order to analyze the impact of the telematics application in use on driving performance and visual processing, the study employed two driving simulation sessions (LCT, rFactor 1) and one eye-tracking session. Results indicated that the examined application may be considered tolerable in terms of the AAM criteria for In-Vehicle Information and Communication Systems.

1 version available