Publication Hub Archive

CAN Bus

You have reached the Ergoneers Publication Hub for:

Used Tool > CAN Bus

Total results: 87

Towards virtually transparent vehicles: first results of a simulator study and a field trial

Year: 2015

Authors: MCA Baltzer, A Krasni, P Boehmsdorff

Current versions of heavy trucks, tanks or excavators suffer from limited visibility due to small windshields. One option to overcome such limitations is to create a virtually transparent vehicle by using a camera-monitor or Head-Mounted Display (HMD) system to provide seamless vision to the driver. The aim of the study is to compare two vision systems for 'virtually transparent' vehicles, an HMD and a camera-monitor system, in a simulation environment with regard to ergonomic aspects and future prospects. The simulator includes a generic mock-up of the vehicle to emulate the visual masking effects of a real armoured vehicle, so the driver experiences the obstruction of the visual space caused by the A-pillars. In addition, windows on the left and right sides increase the driver's degree of immersion. The camera-monitor vision system is built in a semicircular arrangement in front of the driver with five 13-inch monitors. In this arrangement, the interior angle between adjacent displays is 40°, so a total view of 160° can be displayed. The display panels have a maximum resolution of 1280 x 960 and an aspect ratio of 16:10. The alternative vision system uses an Oculus Rift DK2 HMD. To create a three-dimensional view around the driver, the images are projected on a curved surface, which gives the driver the freedom to look around in all directions. The Oculus Rift provides a nominal field of view (FoV) of approximately 100°. A simulated distance of about 16 km was driven repeatedly for 2 hours under different test conditions, such as federal highways, short off-road sections and crossings with simulated intersection traffic, while observing the rules of the road. To minimise sequence effects, the order in which these test conditions were presented was varied.
After driving each condition, acceptance, performance, subjective stress (NASA TLX), workload, usability and driving performance were determined. As a secondary task, the driver had to identify and announce possible threats out loud.
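
As a quick sanity check of the monitor-arc geometry quoted above (five panels, 40° interior angle between adjacent panels, 160° total), the total span follows from (panels - 1) × angle. The function below is purely illustrative arithmetic, not code from the study:

```python
# Sanity check of the semicircular monitor arc described in the abstract:
# five panels with a 40 degree interior angle between adjacent panels
# span (panels - 1) * 40 = 160 degrees.

def arc_coverage(num_panels, angle_between_deg):
    """Total angle spanned by a multi-monitor arc (illustrative)."""
    return (num_panels - 1) * angle_between_deg

print(arc_coverage(5, 40))  # 160
```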

Eye Tracking Glasses
Simulator

1 version available

Adjunct Proceedings

Year: 2014

Authors: LN Boyle, AL Kun, S Osswald, B Pearce, D Szostak

Table of contents (excerpt) of the Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '14), Sept. 17–19, 2014, Seattle, WA, USA:

The MIT AgeLab n-back: a multi-modal android application implementation (p. 56)
Cognitive Workload and Driver Glance Behavior (p. 62)
Using an OpenDS Driving Simulator for Car Following: A First Attempt (p. 64)
Cognitive load in autonomous vehicles (p. 70)
WS3: Pointing towards future automotive HMIs: The potential for gesture (p. 74); organizers: Linda Angell, Yu Zhang
Pointing Towards Future Automotive HMIs: The Potential for Gesture Interaction (p. 75)
Applying Popular Usability Heuristics to Gesture Interaction in the Vehicle (p. 81)
The steering wheel as a touch interface: Using thumb-based gestural interfaces as control inputs while driving (p. 88)
WS4: EVIS 2014 3rd Workshop on Electric Vehicle Information Systems (p. 92); organizers: Sebastian Osswald, Technische Universität München, Germany; Sebastian Loehmann, University of Munich (LMU), Germany; Anders Lundström, Royal Institute of Technology, Sweden; Ronald Schroeter, Queensland University of Technology, Australia; Andreas Butz, University of Munich (LMU), Germany; Markus Lienkamp, Technische Universität München, Germany
Workshop 5: Human Factors Design Principles for the Driver-Vehicle Interface (DVI) (p. 121); organizers: John L. Campbell, Battelle, USA; Christian M. Richard, Battelle, USA; L. Paige Bacon, Battelle, USA; Zachary R. Doerzaph, Virginia Tech Transportation Institute, USA
Workshop 6: Designing for People: Keeping the User in Mind (p. 128); organizers: John Robert Wilson, User Experience (UX) Group, Fujitsu Ten Corp. of America; Jenny Le, User Experience (UX) Group, Fujitsu Ten Corp. of America
Workshop 7: 2nd Workshop on User Experience of Autonomous Driving at AutomotiveUI 2014 (p. 133); organizers: Alexander Meschtscherjakov, University of Salzburg, Austria; Manfred Tscheligi, University of Salzburg, Austria; Dalila Szostak, Google, USA; Rabindra Ratan, Michigan State University, USA; Ioannis Politis, University of Glasgow, UK; Roderick McCall, University of Luxembourg, Luxembourg; Sven Krome, RMIT University, Australia
Workshop 8: Wearable Technologies for Automotive User Interfaces: Danger or Opportunity? (p. 152); organizers: Maurizio Caon, University of Applied Sciences and Arts Western Switzerland, Switzerland; Leonardo Angelini, University of Applied Sciences and Arts Western Switzerland, Switzerland; Elena Mugellini, University of Applied Sciences and Arts Western Switzerland, Switzerland; Michele Tagliabue, Paris Descartes University, France; Paolo Perego, Politecnico di Milano, Italy; Giuseppe Andreoni, Politecnico di Milano, Italy
Work in Progress (p. 158)
Interactive Demo (p. 255)

Simulator
Software

1 version available

Eye tracking in the car: Challenges in a dual-task scenario on a test track

Year: 2014

Authors: S Trösterer, A Meschtscherjakov, D Wilfinger

In our research, we aim to develop and enhance an approach that allows us to capture the visual, cognitive, and manual distraction of a driver operating an In-Vehicle Infotainment System (IVIS) under conditions as close to real driving as possible. Based on our experiences in three consecutive studies conducted on a test track, we point out and discuss issues and challenges we had to face when applying eye tracking in this context. These challenges include how to choose the right system, integrate it into the vehicle, set it up for each participant, and gather data on in-car tasks with an acceptable workload for the researcher. The contribution of this paper is to raise awareness of eye tracking issues in the automotive UI community and to provide lessons learned for AUI researchers applying eye tracking methods in comparable setups.

Eye Tracking Glasses
Software

2 versions available

Masking Action Relevant Stimuli in dynamic environments – The MARS method

Year: 2014

Authors: L Rittger, A Kiesel, G Schmidt, C Maag

We present the novel MARS (Masking Action Relevant Stimuli) method for measuring drivers' information demand for an action-relevant stimulus in the driving scene. In a driving simulator setting, the traffic light, as a dynamic action-relevant stimulus, was masked. Drivers pressed a button to unmask the traffic light for a fixed period of time, as often as they wanted. We compared the number of button presses with the number of fixations on the traffic light in a separate block using eye tracking. For the driving task, we varied the road environment by presenting different traffic light states, by adding a lead vehicle or no lead vehicle, and by manipulating the visibility of the driving environment with fog or no fog. Results showed that these experimental variations affected the number of button presses as the dependent measure of the MARS method. Although the number of fixations was affected in a qualitatively similar way, changes were more pronounced in the number of fixations than in the number of button presses. We argue that the number of button presses is an indicator of the action relevance of the stimulus, complementing or even substituting the recording and analysis of gaze behaviour for specific research questions. In addition, using the MARS method did not change dynamic driving behaviour, and driving with the MARS method was neither disturbing nor difficult to learn. Future research is required to show the generalisability of the method to other stimuli in the driving scene.
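
The bookkeeping behind the MARS measure described above can be sketched as follows: each button press unmasks the stimulus for a fixed window, and the press count per trial is the dependent measure. The Trial class, the unmask duration and the timings are hypothetical illustrations, not code or values from the paper:

```python
# Illustrative sketch of MARS bookkeeping: the traffic light is masked,
# each button press unmasks it for a fixed window, and the number of
# presses per trial is the information-demand measure. All names and
# numbers here are assumptions for illustration.

from dataclasses import dataclass, field

UNMASK_DURATION = 1.0  # seconds visible per press (assumed, not from paper)

@dataclass
class Trial:
    condition: str                 # e.g. "fog" or "clear" (hypothetical)
    press_times: list = field(default_factory=list)

    def press(self, t):
        """Register an unmask request at time t (seconds into the trial)."""
        self.press_times.append(t)

    def demand(self):
        """Number of unmask requests: the MARS dependent measure."""
        return len(self.press_times)

    def visible_time(self):
        """Total stimulus-visible time, ignoring overlapping windows."""
        return self.demand() * UNMASK_DURATION

trial = Trial("fog")
for t in (2.0, 5.5, 7.1):
    trial.press(t)
print(trial.demand(), trial.visible_time())  # 3 1.0-second windows
```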

Simulator
Software

10 versions available

SteerPad Development and Evaluation of a Touchpad in the Steering Wheel from a User Experience Perspective

Year: 2014

Authors: V Swantesson, D Gunnarsson

Driver safety has been paramount since the birth of the automobile. In a time when technologies are changing the way people interact with the outside world, the vehicle industry needs to keep up with these changes in terms of both safety and user experience. In response, some of these technologies have been integrated into cars, leading to more distractions while driving. This thesis describes this dilemma as the gap between automobile safety and in-vehicle infotainment. Using a touchpad installed on the right-hand side of the steering wheel, the thesis develops and evaluates a prototype interface, shown on the vehicle's dashboard display, with the goal of lowering driver distraction. The touchpad supports three main sources of interaction: swipes, tactile interaction and character recognition. By merging and combining these sources, the thesis successfully developed a test prototype to be used for evaluation. The prototype was tested against an existing in-vehicle information system, using a number of use cases and scenarios to compare the systems in terms of usability and user experience. Guidelines on safety regulations set by NHTSA were studied and applied to the project's development and user studies. Test results indicate that this technology has the potential to lower driver distraction while still maintaining a high level of usability and user experience. Finally, the thesis presents a number of suggestions and ideas for further development and studies.
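
Combining the three interaction sources named in the abstract (swipes, tactile taps, character recognition) amounts to routing touchpad events to infotainment actions. The event shapes, action names and handler below are hypothetical; the thesis's actual SteerPad architecture is not shown here:

```python
# Minimal sketch of dispatching the three touchpad input sources the
# prototype combines. Event fields and action names are hypothetical.

def handle_event(event):
    """Route a touchpad event dict to an infotainment action (illustrative)."""
    kind = event.get("kind")
    if kind == "swipe":
        return f"scroll_{event['direction']}"    # e.g. browse a menu list
    if kind == "tap":
        return "select_item"                     # confirm the current item
    if kind == "character":
        return f"search_append:{event['char']}"  # spell out a destination
    return "ignored"                             # unknown input source

print(handle_event({"kind": "swipe", "direction": "left"}))  # scroll_left
```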

Simulator
Software

3 versions available

D9.3 – Requirements & Specification & first Modelling for the Automotive AdCoS and HF-RTP Requirements Definition Update (Feedback)

Year: 2013

Authors: FT CRF, EL REL, T Bellet, JC Bornard, D Gruyer

The main objective of WP9 is the development and qualification of AdCoS in the Automotive (AUT) domain using the tailored HF-RTP and methodology from WP1, to demonstrate the added value for industrial engineering processes in terms of reduced cost, fewer development cycles and better functional performance. This report describes the requirements, specifications and the first modelling for the AdCoS applications in the Automotive (AUT) domain, with reference to the target scenarios (TSs) and use cases (UCs) described in deliverable D9.1 “Requirements Definition for the HF-RTP, Methodology and Techniques and Tools from an Automotive Perspective”. In particular, we mainly refer to the two AdCoS applications implemented on the real test vehicles (TVs):
• Adapted Assistance, a Lane-Change Assistant (LCA) system, led by the CRF partner.
• Adapted Automation, an automatic Intuitive Driving (ID) system, led by the IAS partner.
In addition, this report includes the results of a first attempt to model the AdCoS using the HF-RTP and methodology, utilising either pre-existing tools or new tools to be developed within the HoliDes project. Section 2 contains a list of tools definitely applied from WP1-5. Section 3 describes each AdCoS use case, including AdCoS operational definitions, HMI for the AdCoS, tools applied from the HF-RTP, requirements and specifications, and the system architecture. Section 4 reports on feedback from WP1-5. Section 5 presents some conclusions and the next steps.

Simulator
Software

1 version available

Dynamic simulation and prediction of drivers’ attention distribution

Year: 2013

Authors: B Wortelen, M Baumann, A Lüdtke

The distribution of a driver's attention is a crucial aspect of safe driving. The SEEV model by Wickens is a state-of-the-art model that provides an easy but abstract way to estimate the distribution of attention for specific situations. The present paper presents an extension of the SEEV model, the Adaptive Information Expectancy (AIE) model. The AIE model is a sophisticated model of attention control, able to provide estimates based on a far more detailed simulation of human attention allocation within a cognitive architecture. The AIE model relates attention directly to a task model, which is executed within the architecture. It is able to automatically measure task-dependent event frequencies and adapt its distribution of attention to these frequencies. The AIE model was used to create a dynamic cognitive driver model. A driving simulator study with 21 participants was conducted to evaluate the predictions of the driver model. Event rates for the primary driving task and an artificial secondary task were varied, as was the prioritization of tasks. Both the SEEV and the AIE model provided estimates for percentage dwell times with similar quality, while the AIE model was also able to provide estimates for further measures such as gaze frequencies and link values.
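
The SEEV model mentioned above scores each area of interest (AOI) by a weighted combination of Salience, Effort (as a cost), Expectancy and Value, and normalized scores approximate percentage dwell times. The coefficients and AOI scores below are illustrative assumptions, not values from this paper:

```python
# Hedged sketch of the SEEV model (Wickens): attention to an area of
# interest grows with Salience, Expectancy and Value and shrinks with
# the Effort of reaching it. All weights and scores here are assumed.

def seev_score(salience, effort, expectancy, value,
               s=1.0, ef=1.0, ex=1.0, v=1.0):
    """Higher score -> more attention; effort enters as a cost."""
    return s * salience - ef * effort + ex * expectancy + v * value

def dwell_distribution(aois):
    """Normalize SEEV scores into predicted percentage dwell times."""
    scores = {name: max(seev_score(**p), 0.0) for name, p in aois.items()}
    total = sum(scores.values())
    return {name: sc / total for name, sc in scores.items()}

# Two hypothetical AOIs: the road ahead and a secondary in-vehicle display.
aois = {
    "road":    dict(salience=0.8, effort=0.1, expectancy=0.9, value=1.0),
    "display": dict(salience=0.5, effort=0.4, expectancy=0.4, value=0.3),
}
print(dwell_distribution(aois))  # most predicted dwell time on the road
```

The AIE extension replaces the fixed expectancy terms with event frequencies measured at runtime from the executing task model; that dynamic part is not reproduced in this static sketch.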

Simulator
Software

8 versions available

Increasing complexity of driving situations and its impact on an ADAS for anticipatory assistance for the reduction of fuel consumption

Year: 2013

Authors: C Rommerskirchen, M Helmbrecht

This paper presents a study of the impact of situations of different complexity on an advanced driver assistance system (ADAS) for anticipatory assistance for the reduction of fuel consumption. Different studies have shown that it is possible to reduce a driver's individual fuel consumption by extending the driver's anticipation horizon through an ADAS. But the influence of the driving situation on the success of such a system has not yet been researched. The driving simulator study presented in this paper therefore deals with the impact of different traffic situations and their complexity on an anticipatory ADAS for fuel reduction in deceleration scenarios. Different rural and urban deceleration scenarios were chosen, and situations of different complexity were implemented by changing traffic and environmental conditions. Since the main focus of the ADAS is the reduction of fuel consumption, this was one of the main measured variables. Additionally, the glance time on the HMI was analyzed as an indicator of how the system was used. The results showed that the degree of complexity of the chosen road traffic situations generally has no impact on fuel consumption when driving without an assistance system. Glance times on the ADAS HMI shorten when a situation is more complex, but this does not lead to differences in the fuel-consumption reduction achieved by the ADAS across situations of different complexity. The overall fuel consumption was reduced by about 10%. These results lead to the assumption that a well-designed anticipatory ADAS reduces driver-related fuel consumption independently of the degree of complexity of a situation.

Simulator
Software

3 versions available

Integrated simulation of attention distribution and driving behavior

Year: 2013

Authors: B Wortelen, A Lüdtke, M Baumann

A suitable distribution of attention across task demands is an essential component of efficient handling of multitasking situations. In most cases humans are not consciously aware of how they allocate attention to tasks. Yet they automatically weight their distribution according to properties of the task, such as task value or the frequency of information events for a specific task. The Adaptive Information Expectancy (AIE) model was developed as a dynamic model of attention allocation and integrated into a cognitive architecture. It automatically derives the rate of information events for a task based on the interaction of a formal task model with the environment. The attention of the model is distributed according to these event rates and task priorities. Previous studies demonstrated that a dynamic driver model using the AIE model could reproduce many key characteristics of visual attention. In this paper it is shown how changes in attention distribution are reflected in the task performance of the driver model for three tasks: (1) keeping the car in the center of the lane, (2) keeping the speed close to 100 km/h and (3) solving a continuous in-vehicle secondary task. Driver model performance is compared to experimental data from a study on human drivers. Shortcomings of the driver model are discussed based on this comparison.

Simulator
Software

1 version available

Bayesian online clustering of eye movement data

Year: 2012

Authors: E Tafaj, G Kasneci, W Rosenstiel

The task of automatically tracking visual attention in dynamic visual scenes is highly challenging. To approach it, we propose a Bayesian online learning algorithm. As the visual scene changes and new objects appear, the algorithm, based on a mixture model, can identify visual fixation clusters (regions of interest) and distinguish them from saccades (the transitions between them). The approach is evaluated on real-world data collected from eye-tracking experiments in driving sessions.
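
The paper's Bayesian mixture approach is not reproduced here; as a much simpler stand-in for the same fixation/saccade separation, the sketch below classifies gaze samples online with a velocity threshold (I-VT style). The threshold value and the synthetic trace are assumptions for illustration:

```python
# Simplified stand-in for fixation/saccade separation: a velocity-threshold
# (I-VT style) online classifier. Slow sample-to-sample movement is labeled
# a fixation, fast movement a saccade. Threshold and trace are assumed.

import math

VELOCITY_THRESHOLD = 50.0  # px per sample; assumed, tune per setup

def classify_stream(samples):
    """Label each gaze sample (x, y) as 'fixation' or 'saccade' online."""
    labels = []
    prev = None
    for x, y in samples:
        if prev is None:
            labels.append("fixation")  # no velocity yet; default label
        else:
            v = math.hypot(x - prev[0], y - prev[1])
            labels.append("saccade" if v > VELOCITY_THRESHOLD else "fixation")
        prev = (x, y)
    return labels

# A short synthetic trace: a fixation, a large jump, another fixation.
trace = [(100, 100), (102, 101), (103, 99), (300, 250), (301, 251)]
print(classify_stream(trace))  # jump to (300, 250) labeled 'saccade'
```

Unlike the paper's mixture model, this threshold approach has no notion of cluster shape or uncertainty; it only illustrates the kind of decision the algorithm makes per incoming sample.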

Eye Tracking Glasses
Software

2 versions available