Towards virtually transparent vehicles: first results of a simulator study and a field trial
Current heavy trucks, tanks and excavators suffer from limited visibility due to small windshields. One option to overcome such limitations is to create a virtually transparent vehicle, using a camera-monitor or Head-Mounted Display (HMD) system to provide seamless vision to the driver. The aim of this study is to compare two vision systems for 'virtually transparent' vehicles, an HMD and a camera-monitor system, in a simulation environment with regard to ergonomic aspects and future prospects. The simulator includes a generic mock-up of the vehicle to emulate the visual masking effects of a real armoured vehicle; the driver thus experiences the obstruction of the visual space caused by the A-pillars. In addition, windows on the left and right sides increase the driver's degree of immersion. The camera-monitor vision system consists of five 13-inch monitors arranged in a semicircle in front of the driver. In this arrangement, the interior angle between adjacent displays is 40°, so a total view of 160° can be displayed. The display panels have a maximum resolution of 1280 x 960 pixels at an aspect ratio of 16:10. The alternative vision system uses an Oculus Rift DK2 HMD. To create a three-dimensional view around the driver, the images are projected onto a curved surface, which gives the driver the freedom to look around in all directions. The Oculus Rift provides a nominal field of view (FoV) of approximately 100°. A simulated route of about 16 km was driven repeatedly for 2 hours under different test conditions, such as federal highways, short off-road sections and crossings with simulated intersection traffic, observing the rules of the road. To minimise sequence effects, the order in which these test conditions were presented was varied.
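The display-arc figures above imply a simple angular budget. As an illustrative sketch (assuming each panel covers an equal share of the stated 160° arc, which the abstract does not spell out explicitly):

```python
# Back-of-envelope check of the five-monitor arc described above.
# Assumption (not stated in the abstract): each panel contributes an
# equal slice of the 160 deg total view, at 1280 horizontal pixels.

N_MONITORS = 5
TOTAL_FOV_DEG = 160          # total horizontal view stated in the abstract
H_PIXELS_PER_PANEL = 1280    # horizontal panel resolution

per_panel_deg = TOTAL_FOV_DEG / N_MONITORS            # 32.0 deg per panel
pixels_per_degree = H_PIXELS_PER_PANEL / per_panel_deg  # 40.0 px per deg

HMD_FOV_DEG = 100            # nominal Oculus Rift DK2 field of view
print(f"Per-panel coverage:  {per_panel_deg:.1f} deg")
print(f"Angular resolution:  {pixels_per_degree:.1f} px/deg")
print(f"Monitor arc exceeds the static HMD FoV by "
      f"{TOTAL_FOV_DEG - HMD_FOV_DEG} deg (before head rotation)")
```

Under these assumptions the monitor setup offers a wider instantaneous view than the HMD, while the HMD compensates through head rotation.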
After each condition, acceptance, subjective stress and workload (NASA-TLX), usability and driving performance were assessed. As a secondary task, the driver had to identify possible threats and announce them out loud.
Keywords: Eye Tracking Glasses, Simulator
Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '14), Sept. 17–19, 2014, Seattle, WA, USA

The MIT AgeLab n-back: a multi-modal Android application implementation 56
Cognitive Workload and Driver Glance Behavior 62
Using an OpenDS Driving Simulator for Car Following: A First Attempt 64
Cognitive load in autonomous vehicles 70
WS3: Pointing towards future automotive HMIs: The potential for gesture 74
Organizers: Linda Angell, Yu Zhang
Pointing Towards Future Automotive HMIs: The Potential for Gesture Interaction 75
Applying Popular Usability Heuristics to Gesture Interaction in the Vehicle 81
The steering wheel as a touch interface: Using thumb-based gestural interfaces as control inputs while driving 88
WS4: EVIS 2014 3rd Workshop on Electric Vehicle Information Systems 92
Organizers: Sebastian Osswald, Technische Universität München, Germany; Sebastian Loehmann, University of Munich (LMU), Germany; Anders Lundström, Royal Institute of Technology, Sweden; Ronald Schroeter, Queensland University of Technology, Australia; Andreas Butz, University of Munich (LMU), Germany; Markus Lienkamp, Technische Universität München, Germany
Workshop 5: Human Factors Design Principles for the Driver-Vehicle Interface (DVI) 121
Organizers: John L. Campbell, Battelle, USA; Christian M. Richard, Battelle, USA; L. Paige Bacon, Battelle, USA; Zachary R. Doerzaph, Virginia Tech Transportation Institute, USA
Workshop 6: Designing for People: Keeping the User in Mind 128
Organizers: JohnRobert Wilson, User Experience (UX) Group, Fujitsu Ten Corp. of America; Jenny Le, User Experience (UX) Group, Fujitsu Ten Corp. of America
Workshop 7: 2nd Workshop on User Experience of Autonomous Driving at AutomotiveUI 2014 133
Organizers: Alexander Meschtscherjakov, University of Salzburg, Austria; Manfred Tscheligi, University of Salzburg, Austria; Dalila Szostak, Google, USA; Rabindra Ratan, Michigan State University, USA; Ioannis Politis, University of Glasgow, UK; Roderick McCall, University of Luxembourg, Luxembourg; Sven Krome, RMIT University, Australia
Workshop 8: Wearable Technologies for Automotive User Interfaces: Danger or Opportunity? 152
Organizers: Maurizio Caon, University of Applied Sciences and Arts Western Switzerland, Switzerland; Leonardo Angelini, University of Applied Sciences and Arts Western Switzerland, Switzerland; Elena Mugellini, University of Applied Sciences and Arts Western Switzerland, Switzerland; Michele Tagliabue, Paris Descartes University, France; Paolo Perego, Politecnico di Milano, Italy; Giuseppe Andreoni, Politecnico di Milano, Italy
Work in Progress 158
Interactive Demo 255
D9.3: Requirements & Specification & first Modelling for the Automotive AdCoS and HF-RTP Requirements Definition Update (Feedback)
The main objective of WP9 is the development and qualification of AdCoS in the Automotive (AUT) domain using the tailored HF-RTP and methodology from WP1, to demonstrate the added value for industrial engineering processes in terms of reduced cost, fewer development cycles and better functional performance. This report describes the requirements, specifications and first modelling for the AdCoS applications in the Automotive (AUT) domain, with reference to the target scenarios (TSs) and use cases (UCs) described in deliverable D9.1, “Requirements Definition for the HF-RTP, Methodology and Techniques and Tools from an Automotive Perspective”. In particular, we mainly refer to the two AdCoS applications implemented on the real test vehicles (TVs):
• Adapted Assistance, i.e. a Lane-Change Assistant (LCA) system, led by the CRF partner.
• Adapted Automation, i.e. an automatic Intuitive Driving (ID) system, led by the IAS partner.
In addition, this report includes the results of a first attempt to model the AdCoS using the HF-RTP and methodology, employing either pre-existing tools or new tools to be developed within the HoliDes project.
Section 2 lists the tools from WP1–5 that will definitely be applied. Section 3 describes each AdCoS use case, including the AdCoS operational definitions, the HMI for the AdCoS, the tools applied from the HF-RTP, the requirements and specifications, and the system architecture. Section 4 reports on feedback from WP1–5. Section 5 presents conclusions and next steps.