Publication Hub Archive

Transportation & Mobility

You have reached the Ergoneers Publication Hub for the field of application Transportation & Mobility.


Total results: 302

Prediction of take-over time in highly automated driving by two psychometric tests

Year: 2015

Authors: M Körber, T Weißgerber, L Kalb, C Blaschke, M Farid

In this study, we investigated whether a driver's ability to take over vehicle control while engaged in a secondary task (Surrogate Reference Task) can be predicted by the driver's multitasking ability and reaction time. 23 participants performed a multitasking test and a simple response task and then drove in highly automated mode on a highway for about 38 min, encountering five take-over situations. Data analysis revealed significant correlations between multitasking performance and take-over time as well as gaze distributions for Situations 1 and 2, even when reaction time was controlled for. This correlation diminished beginning with Situation 3, but a stable difference between the worst and the best multitaskers persisted. Reaction time was not a significant predictor in any situation. The results can be seen as evidence for stable individual differences in dual-task situations in automated driving, but they also highlight effects associated with the experience of a take-over situation.
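The phrase "even when reaction time was controlled for" points to a partial correlation. A minimal sketch of that computation, with simulated placeholder data rather than the study's (all variable names and values are illustrative):

```python
# Partial correlation between multitasking score and take-over time,
# controlling for simple reaction time (illustrative data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 23                                    # sample size matching the study
reaction_time = rng.normal(250, 30, n)    # ms, hypothetical
multitask = 0.5 * reaction_time + rng.normal(0, 20, n)
takeover_time = 0.02 * multitask + rng.normal(2.5, 0.4, n)  # s, hypothetical

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residuals of y ~ z
    return stats.pearsonr(rx, ry)

r, p = partial_corr(multitask, takeover_time, reaction_time)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```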

16 versions available

Reviewers: Ian Giblet (CAS-UK), Jan-Patrick Osterloh (OFF)

Year: 2015

Authors: STWT Borchers, MUTO Botta, SSNV Collina

This deliverable reports the progress of the HoliDes consortium in developing methods, techniques, and tools (MTTs) for the Human Factors Reference Technology Platform (HF-RTP), version 1.0. For work package 5 (WP5), it concludes project cycle I. During this cycle, as a first step we received the requirements from the application work packages WP6-9. After an analysis of these requirements (cf. D5.1), we selected those metrics and methods to be developed in WP5 which could best meet the AdCoS owners' needs. Having documented those MTTs as HF-RTP 0.5 in D5.2, the first instantiations of these techniques and tools were made. The result of this work is described in this document. For each method, technique, or tool, a detailed description is provided concerning the data the MTTs receive, the data they provide, their current functionality, and their definitive and further potential use cases. These use cases (see Table 1) originate from the four HoliDes domains Health, Aerospace, Control Room, and Automotive.

In our definition, a method is a general way to solve a problem. This could be the use of task analysis to answer a general design question. A technique is a concrete instantiation of such a method, as would be the application of a specific form of task analysis to the development and evaluation of an adaptive system. Finally, a tool is a technique which has been realized as either hard- or software. Such a tool could be a program that aids the collection and organisation of observations during the task analysis.

The MTTs created in this work package follow the objective to "Develop techniques and tools for empirical analysis of Adaptive Cooperative Human-Machine Systems (AdCoS) against human factors and safety regulations." To achieve this objective and provide the application work packages 6–9 (WP6-9) with methods that best suit their needs, the starting point of our work has been the AdCoS requirements from WP6-9. Some of these requirements describe genuine AdCoS functionality, while others relate to MTTs necessary to develop AdCoS functionality or aspects of the design process itself. Consequently, the purpose of WP5's MTTs is to enable, aid, and assist the empirical analysis of adaptive, cooperative systems, or to act as part of these systems' functionality. The actual outcome of the work presented here will be software tools and empirical results as the basis for system functionality and design decisions, but also procedures and algorithms and their implementations. All of these efforts help realize the human-centred design process as described, e.g., in ISO 9241-210, both during design and evaluation. For a quick overview of WP5's MTT landscape, Table 1 shows the names and short descriptions of all methods, techniques, and tools as well as the use cases they are being applied to.

1 version available

SET: a pupil detection method using sinusoidal approximation

Year: 2015

Authors: AH Javadi, Z Hakimi, M Barati, V Walsh

Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the images these devices record. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic, and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural") and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered as an open-source MATLAB toolkit as well as a dynamic-link library ("DLL") that can be imported into many programming languages, including C# and Visual Basic, on Windows.
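The "sinusoidal approximation" idea can be illustrated schematically: threshold the image, trace each dark segment's one-pixel boundary, and keep the segment whose boundary radius, expressed in polar coordinates around the segment centroid, is best fit by a first-order sinusoid (a near-circular pupil). This is a hedged reconstruction of the idea, not the authors' MATLAB toolkit:

```python
# Schematic of SET-style pupil detection: threshold, then score candidate
# segments by how well a first-order sinusoid fits their boundary in polar
# coordinates (a circle or mildly offset ellipse has a near-sinusoidal
# radius profile). Reconstruction for illustration only.
import numpy as np
from scipy import ndimage

def sinusoid_fit_error(boundary_xy):
    """Normalized least-squares error of r(theta) = a*sin + b*cos + c."""
    cx, cy = boundary_xy.mean(axis=0)
    dx, dy = boundary_xy[:, 0] - cx, boundary_xy[:, 1] - cy
    theta = np.arctan2(dy, dx)
    r = np.hypot(dx, dy)
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return np.mean((A @ coef - r) ** 2) / (r.mean() ** 2)

def detect_pupil(gray, threshold=40):
    """Return the (x, y) center of the most pupil-like dark segment."""
    mask = gray < threshold                        # pupil is dark
    labels, n = ndimage.label(mask)
    best = None
    for i in range(1, n + 1):
        seg = labels == i
        if seg.sum() < 50:                         # ignore tiny specks
            continue
        border = seg & ~ndimage.binary_erosion(seg)  # one-pixel boundary
        ys, xs = np.nonzero(border)
        err = sinusoid_fit_error(np.column_stack([xs, ys]).astype(float))
        if best is None or err < best[0]:
            best = (err, (xs.mean(), ys.mean()))
    return best[1] if best else None
```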

13 versions available

Smartwatches vs. smartphones: A preliminary report of driver behavior and perceived risk while responding to notifications

Year: 2015

Authors: WCW Giang, I Shanti, HYW Chen, A Zhou

This study examines driver engagement with smartwatches and smartphones while driving. Twelve participants (7 novice and 5 experienced smartwatch users) drove in a high-fidelity simulator while receiving notifications from either a smartwatch (Pebble) or a smartphone (LG Nexus 5). Participants had more glances, on average, per notification while using the smartwatch than while using the smartphone. Further, their brake response times were longer when they received smartwatch notifications prior to a lead-vehicle braking event than when they received no notifications or received notifications on the smartphone. Contrary to these glance and driving performance findings, participants perceived similar levels of risk for the two devices, and they largely reported that smartwatch use while driving should receive penalties equal to or less than smartphone use with respect to distracted driving legislation. Thus, there appears to be a disconnect between drivers' actual performance while using smartwatches and their perception of the risk involved.
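The brake-response finding rests on a within-subject contrast; a minimal sketch of such a paired comparison, using simulated placeholder values rather than the study's data:

```python
# Paired comparison of brake response times with vs. without notifications,
# illustrating the within-subject contrast described above (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 12                                         # participants, as in the study
rt_baseline = rng.normal(1.10, 0.15, n)        # s, no notification, hypothetical
rt_smartwatch = rt_baseline + rng.normal(0.25, 0.10, n)  # hypothetical slowdown

t, p = stats.ttest_rel(rt_smartwatch, rt_baseline)
print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}")
```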

2 versions available

Towards virtually transparent vehicles: first results of a simulator study and a field trial

Year: 2015

Authors: MCA Baltzer, A Krasni, P Boehmsdorff

Current versions of heavy trucks, tanks, or excavators suffer from limited visibility due to small windshields. One option to overcome such limitations is to create a virtually transparent vehicle by using a camera-monitor or head-mounted display (HMD) system to provide the driver with a seamless view. The aim of the study is to compare two vision systems for 'virtually transparent' vehicles, an HMD and a camera-monitor system, in a simulation environment with regard to ergonomic aspects and future prospects. The simulator includes a generic mock-up of the vehicle to emulate the visual masking effects of a real armoured vehicle. Thus, the driver experiences the obstruction of the visual space caused by the A-pillars. In addition, the driver's degree of immersion is increased by windows on the left and right sides. The monitor-based vision system is built in a semicircular arrangement in front of the driver with five 13-inch monitors. In this arrangement, the interior angle between adjacent displays is 40°, hence a total view of 160° can be displayed. The display panels have a maximum resolution of 1280 x 960 and an aspect ratio of 16:10. The alternative vision system uses an Oculus Rift DK2 HMD. In order to create a three-dimensional view around the driver, the images are projected onto a curved surface, which gives the driver the freedom to look around in all directions. The Oculus Rift provides a nominal field of view (FoV) of approximately 100°. A simulated distance of about 16 km was repeatedly driven for 2 hours in different test conditions, such as federal highways, short off-road sections, and crossings with simulated intersection traffic, observing the rules of the road. In order to minimise sequence effects, the order in which these test conditions were presented was varied. After each condition, acceptance, subjective stress (NASA TLX), workload, usability, and driving performance were determined. As a secondary task, the driver had to identify and announce possible threats out loud.
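A quick check of the monitor bank's stated geometry (a simple arithmetic illustration, not code from the study):

```python
# Horizontal coverage of the semicircular monitor bank described above:
# five panels whose centers are separated by a 40-degree interior angle,
# giving four 40-degree intervals across the bank.
n_panels = 5
pitch_deg = 40                      # angle between adjacent panel centers
coverage = (n_panels - 1) * pitch_deg
print(coverage)                     # 160 degrees, matching the paper
```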

1 version available:

Using sound to reduce visual distraction from in-vehicle human–machine interfaces

Year: 2015

Authors: P Larsson, M Niemand

Objective: Driver distraction and inattention are the main causes of accidents. The fact that devices such as navigation displays and media players are part of the distraction problem has led to the formulation of guidelines advocating various means of minimizing the visual distraction from such interfaces. However, even when design guidelines and recommendations are followed, certain interface interactions, such as menu browsing, still require off-road visual attention that increases crash risk. In this article, we investigate whether adding sound to an in-vehicle user interface can provide the support necessary to create a significant reduction in glances toward a visual display when browsing menus. Methods: Two sound concepts were developed and studied: spearcons (time-compressed speech sounds) and earcons (musical sounds). A simulator study was conducted in which 14 participants between the ages of 36 and 59 took part. Participants performed 6 different interface tasks while driving along a highway route. A 3 × 6 within-group factorial design was employed with sound (no sound/earcons/spearcons) and task (6 different task types) as factors. Eye glances and corresponding measures were recorded using a head-mounted eye tracker. Participants' self-assessed driving performance was also collected after each task on a 10-point scale ranging from 1 = very bad to 10 = very good. Separate analyses of variance (ANOVAs) were conducted for the different eye-glance measures and self-rated driving performance. Results: The added spearcon sounds significantly reduced total glance time as well as the number of glances while retaining task time, as compared to the baseline (no sound) condition (total glance time M = 4.15 for spearcons vs. M = 7.56 for baseline, p = .03). The earcon sounds did not produce such distraction-reducing effects. Furthermore, participants' ratings of their driving performance were significantly higher in the spearcon conditions than in the baseline and earcon conditions (M = 7.08 vs. M = 6.05 and M = 5.99, respectively; p = .035 and p = .002). Conclusions: The spearcon sounds seem to reduce visual distraction efficiently, whereas the earcon sounds neither reduced the distraction measures nor increased subjective driving performance. An aspect that must be further investigated is how well spearcons and other types of auditory displays are accepted by drivers in general and how they work in real traffic.
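The 3 × 6 within-group design maps onto a standard repeated-measures ANOVA. A minimal sketch with statsmodels, using simulated placeholder data (column names and effect sizes are illustrative only):

```python
# Repeated-measures ANOVA for a 3 (sound) x 6 (task) within-subject design,
# as in the study; data here are simulated placeholders, not the results.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = range(14)                         # 14 participants, as reported
sounds = ["none", "earcon", "spearcon"]
tasks = [f"task{i}" for i in range(1, 7)]

# One observation per subject per cell; spearcon trials get a hypothetical
# reduction in total glance time.
rows = [(s, snd, t, rng.normal(7.5 - 3.4 * (snd == "spearcon"), 1.0))
        for s in subjects for snd in sounds for t in tasks]
df = pd.DataFrame(rows, columns=["subject", "sound", "task", "glance_time"])

res = AnovaRM(df, depvar="glance_time", subject="subject",
              within=["sound", "task"]).fit()
print(res)
```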

11 versions available

3D displays in cars: Exploring user performance for a stereoscopic instrument cluster

Year: 2014

Authors: N Broy, F Alt, S Schneegass, B Pfleging

In this paper, we investigate user performance for stereoscopic automotive user interfaces (UI). Our work is motivated by the fact that stereoscopic displays are about to find their way into cars. Such a safety-critical application area creates an inherent need to understand how the use of stereoscopic 3D visualizations impacts user performance. We conducted a comprehensive study with 56 participants to investigate the impact of a 3D instrument cluster (IC) on primary and secondary task performance. We investigated different visualizations (2D and 3D) and complexities (low vs. high amount of detail) of the IC as well as two 3D display technologies (shutter vs. autostereoscopy). As secondary tasks, the participants judged spatial relations between UI elements (expected events) and reacted to pop-up instructions (unexpected events) in the IC. The results show that stereoscopy increases accuracy for expected events, decreases task completion times for unexpected tasks, and increases the attractiveness of the interface. Furthermore, we found a significant influence of the display technology used, indicating that secondary-task performance improves with shutter displays.

11 versions available

Adjunct Proceedings

Year: 2014

Authors: LN Boyle,AL Kun,S Osswald, B Pearce,D Szostak

Table of contents (excerpt):

- The MIT AgeLab n-back: a multi-modal Android application implementation
- Cognitive Workload and Driver Glance Behavior
- Using an OpenDS Driving Simulator for Car Following: A First Attempt
- Cognitive load in autonomous vehicles
- WS3: Pointing Towards Future Automotive HMIs: The Potential for Gesture Interaction (organizers: Linda Angell, Yu Zhang), including "Applying Popular Usability Heuristics to Gesture Interaction in the Vehicle" and "The steering wheel as a touch interface: Using thumb-based gestural interfaces as control inputs while driving"
- WS4: EVIS 2014, 3rd Workshop on Electric Vehicle Information Systems (organizers: Sebastian Osswald and Markus Lienkamp, Technische Universität München; Sebastian Loehmann and Andreas Butz, University of Munich (LMU); Anders Lundström, Royal Institute of Technology; Ronald Schroeter, Queensland University of Technology)
- Workshop 5: Human Factors Design Principles for the Driver-Vehicle Interface (DVI) (organizers: John L. Campbell, Christian M. Richard, and L. Paige Bacon, Battelle; Zachary R. Doerzaph, Virginia Tech Transportation Institute)
- Workshop 6: Designing for People: Keeping the User in Mind (organizers: JohnRobert Wilson and Jenny Le, User Experience (UX) Group, Fujitsu Ten Corp. of America)
- Workshop 7: 2nd Workshop on User Experience of Autonomous Driving (organizers: Alexander Meschtscherjakov and Manfred Tscheligi, University of Salzburg; Dalila Szostak, Google; Rabindra Ratan, Michigan State University; Ioannis Politis, University of Glasgow; Roderick McCall, University of Luxembourg; Sven Krome, RMIT University)
- Workshop 8: Wearable Technologies for Automotive User Interfaces: Danger or Opportunity? (organizers: Maurizio Caon, Leonardo Angelini, and Elena Mugellini, University of Applied Sciences and Arts Western Switzerland; Michele Tagliabue, Paris Descartes University; Paolo Perego and Giuseppe Andreoni, Politecnico di Milano)
- Work in Progress
- Interactive Demo

1 version available

D5.2 – Plan for Integration of Empirical Analysis Techniques and Tools into the HF-RTP and Methodology

Year: 2014

Authors: MUTO Botta, STWT Borchers, CTWT Curio

This deliverable consists of two parts. The "Integration Plan Common Part" is shared by deliverables D2.2 to D5.2. It explains how to integrate methods, techniques, and tools (MTTs) into the Human Factors Reference Technology Platform (HF-RTP). The present document details the MTTs which will be contributed by WP5 as components of the HF-RTP. Details concerning the HoliDes RTP, its methodology, and the integration of components can also be found in D1.1 and the forthcoming D1.3. Here, we describe the MTTs which the partners are developing or advancing in WP5 of HoliDes. These MTTs will eventually form part of the HF-RTP. They serve WP5's vision to extend and develop empirical methods that aid the design and development of adaptive, cooperative human-machine systems. These methods help developers conform to existing norms and standards. The MTTs of WP5 consist largely of empirical methods. Empirical methods are an integral part of any human-centered systems engineering process. Their precise position and use in a workflow depend on the AdCoS under development, the organization that uses them, as well as individual considerations. These questions will determine the tailoring of the RTP for a specific use case. Empirical MTTs are an essential part of both early and late stages of any design process for a human-machine system, for example during requirements analysis or the verification of human-factors-related non-functional requirements. However, empirical MTTs can also be an integral part of the development phase, especially when using principles of agile requirements engineering. While in the CESAR RTP only software tools manipulate data, in HoliDes various kinds of MTTs are used. Each MTT that is part of the development and evaluation of an AdCoS manipulates data and is an integral part of the engineering environment.

1 version available

Designing driver assistance systems with crossmodal signals: Multisensory integration rules for saccadic reaction times apply

Year: 2014

Authors: R Steenken, L Weber, H Colonius, A Diederich

Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (a vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" (TWIN) model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model's prediction that the probability of integration, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
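The two-stage TWIN model lends itself to a short Monte Carlo sketch: first-stage peripheral latencies race, integration occurs only when the nontarget's process finishes before the target's and within a fixed window, and integration shortens second-stage processing. All parameter values below are arbitrary illustrations, not the paper's fitted estimates:

```python
# Monte Carlo sketch of the TWIN model: crossmodal interaction occurs only
# when the auditory first-stage latency ends before the visual one and within
# a time window w; integration then reduces second-stage time by delta.
# Parameters are illustrative, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(2)

def mean_saccadic_rt(soa, n=100_000, rate_v=1/60, rate_a=1/40,
                     window=200.0, second_stage=120.0, delta=80.0):
    """Mean RT (ms) to a visual target with an auditory nontarget at the
    given stimulus-onset asynchrony (soa > 0: nontarget leads)."""
    t_v = rng.exponential(1 / rate_v, n)        # visual peripheral stage
    t_a = rng.exponential(1 / rate_a, n) - soa  # auditory, shifted by SOA
    integrated = (t_a < t_v) & (t_v - t_a <= window)
    rt = t_v + second_stage - delta * integrated
    return rt.mean(), integrated.mean()

for soa in (-50, 0, 50, 100):
    m, p = mean_saccadic_rt(soa)
    print(f"SOA {soa:+4d} ms: mean RT {m:6.1f} ms, P(integration) = {p:.2f}")
```

Consistent with the model's prediction quoted above, varying the SOA in this sketch changes the probability of integration while the per-trial amount of facilitation (delta) stays fixed.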

12 versions available