Malfunction of a traffic light assistant application on a smartphone
A traffic light assistant on a smartphone was assessed in real traffic using an eye-tracking system. In one experimental condition, the system intentionally showed false information to the drivers to simulate a malfunction. Glances in this condition showed gaze parameters similar to those of a working system, but the subjective ratings of the test subjects dropped significantly after the malfunction. The gathered gaze data are compared to three earlier studies (two in a driving simulator and one in real road driving). Findings indicate that a driving simulator is a safe and reliable alternative for obtaining some of the glance data (e.g., glance durations to the smartphone) without driving in real traffic.
Postural sway and gaze can track the complex motion of a visual target
Variability is an inherent and important feature of human movement. This variability has structure, exhibiting chaotic dynamics. Visual feedback training using regular, predictable visual target motions does not take this essential characteristic of human movement into account and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the anterior-posterior (AP) and medio-lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a stochastic (brown noise) and a periodic (sine) moving target, while receiving online visual feedback about their performance. Postural sway, gaze, and target motion were synchronously recorded, and the degree of force-target and gaze-target coupling was quantified using spectral coherence and cross-approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the stimulus motion. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown noise and sinusoidal stimulus motion. These results held regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking a complex signal may provide a better stimulus for improving visuo-motor adaptation and learning in postural control.
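To illustrate the spectral-coherence measure mentioned above, the following is a minimal NumPy sketch (not the authors' implementation) of Welch-averaged magnitude-squared coherence between two synchronously sampled signals, such as a postural force trace and a target trajectory:

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence of x and y via Welch-averaged periodograms.

    Returns (frequencies, Cxy) with Cxy in [0, 1]; values near 1 indicate
    strong linear coupling at that frequency.
    """
    step = nperseg // 2                      # 50% segment overlap
    win = np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        Pxx = Pxx + (X * np.conj(X)).real    # auto-spectrum of x
        Pyy = Pyy + (Y * np.conj(Y)).real    # auto-spectrum of y
        Pxy = Pxy + X * np.conj(Y)           # cross-spectrum
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    Cxy = np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-20)  # epsilon avoids 0/0 bins
    return f, Cxy
```

Averaging over several overlapping segments is essential: with a single segment the estimate is identically 1 at every frequency.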
Pupil segmentation approach on low resolution images
The characteristics of the iris and the pupil are useful in a wide range of applications. Several studies address pupil detection; however, most of these works are evaluated on infrared, ideal, or high-quality images. In this paper we propose a method based on a combination of pre-processing (histogram equalization, thresholding, and morphological operations), edge detection, and the Hough transform. The evaluation was performed on 1214 low-resolution images from the UBIRIS database. The experimental results show that the proposed method is feasible and has acceptable accuracy. Its major advantage is that it can be used with images captured by a cheap webcam.
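A minimal sketch of the first stages of such a pipeline (histogram equalization followed by dark-region thresholding; the morphological clean-up and Hough-based refinement described in the paper are omitted) might look as follows. This is illustrative only, not the authors' code:

```python
import numpy as np

def equalize_histogram(img):
    """Classic histogram equalization of a 2-D uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def detect_pupil(img, thresh=50):
    """Rough pupil estimate: equalize, threshold the dark region, then take
    the centroid and the radius of an equal-area circle.

    Returns (cx, cy, r) in pixel coordinates, or None if no dark pixels.
    """
    eq = equalize_histogram(img)
    mask = eq < thresh                 # pupil is the darkest structure
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()      # centroid of the dark blob
    r = np.sqrt(mask.sum() / np.pi)    # radius of a circle of equal area
    return cx, cy, r
```

A Hough circle transform (e.g., on an edge map of `mask`) would then refine this coarse centre/radius estimate against specular reflections and eyelid occlusion.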
Reviewers: Ian Giblet (CAS-UK), Jan-Patrick Osterloh (OFF)
This deliverable reports the progress of the HoliDes consortium in developing methods, techniques, and tools (MTTs) for the Human Factors Reference Technology Platform (HF-RTP), version 1.0. For work package 5 (WP5), it concludes project cycle I. During this cycle, we first received the requirements from the application work packages WP6-9. After analysing these requirements (cf. D5.1), we selected for development in WP5 those metrics and methods that could best meet the AdCoS owners' needs. Having documented those MTTs as HF-RTP 0.5 in D5.2, we made the first instantiations of these techniques and tools; the result of this work is described in this document. For each method, technique, or tool, a detailed description is provided of the data the MTTs receive, the data they provide, their current functionality, as well as definitive and further potential use cases. These use cases (see Table 1) originate from the four HoliDes domains: Health, Aerospace, Control Room, and Automotive.

In our definition, a method is a general way to solve a problem, such as the use of task analysis to answer a general design question. A technique is a concrete instantiation of such a method, for example the application of a specific form of task analysis to the development and evaluation of an adaptive system. Finally, a tool is a technique that has been realized as either hard- or software; such a tool could be a program that aids the collection and organisation of observations during task analysis.

The MTTs created in this work package follow the objective to “Develop techniques and tools for empirical analysis of Adaptive Cooperative Human-Machine Systems (AdCoS) against human factors and safety regulations.” To achieve this objective and provide the application work packages 6–9 (WP6-9) with methods that best suit their needs, the starting point of our work has been the AdCoS requirements from WP6-9.
Some of these requirements describe genuine AdCoS functionality, while others relate to MTTs necessary to develop AdCoS functionality, or to aspects of the design process itself. Consequently, the purpose of WP5's MTTs is to enable, aid, and assist the empirical analysis of adaptive, cooperative systems, or to act as part of these systems' functionality. The actual outcome of the work presented here will be software tools and empirical results as the basis for system functionality and design decisions, but also procedures and algorithms and their implementation. All of these efforts help realize the human-centred design process as described, e.g., in ISO 9241-210, during both design and evaluation. For a quick overview of WP5's MTT landscape, Table 1 shows the names and short descriptions of all methods, techniques, and tools, as well as the use cases to which they are being applied.
SET: a pupil detection method using sinusoidal approximation
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process their image data. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic, and extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”) and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered as an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”) that can be imported into many programming languages, including C# and Visual Basic, on Windows.
Smartwatches vs. smartphones: A preliminary report of driver behavior and perceived risk while responding to notifications
This study examines driver engagement with smartwatches and smartphones while driving. Twelve participants (7 novice and 5 experienced smartwatch users) drove in a high-fidelity simulator while receiving notifications from either a smartwatch (Pebble) or a smartphone (LG Nexus 5). Participants made more glances, on average, per notification when using the smartwatch than the smartphone. Further, their brake response times were longer when they received notifications on the smartwatch prior to a lead-vehicle braking event than when they received no notifications or received notifications on the smartphone. Contrary to these glance and driving performance findings, participants perceived similar levels of risk for the two devices, and they largely reported that smartwatch use while driving should receive penalties equal to or less than smartphone use with respect to distracted-driving legislation. Thus, there appears to be a disconnect between drivers' actual performance while using smartwatches and their perceptions of risk.
Speaking and listening with the eyes: Gaze signaling during dyadic interactions
Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
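The lagged gaze-speech relationship described above can be probed with a simple normalized cross-correlation of the two time series (e.g., a binarized direct-gaze signal and a speech on/off signal). A minimal NumPy sketch, not the authors' exact pipeline:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation r(lag) = corr(x[t], y[t + lag]) for
    lags in [-max_lag, max_lag].

    A peak at a positive lag means events in y tend to follow events in x
    by that many samples (e.g., speech onset trailing a gaze shift).
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag > 0:
            r.append(np.mean(x[:-lag] * y[lag:]))
        elif lag < 0:
            r.append(np.mean(x[-lag:] * y[:lag]))
        else:
            r.append(np.mean(x * y))
    return lags, np.array(r)
```

In practice the per-lag estimates would also need significance testing (e.g., against shuffled surrogates) before interpreting a peak as a turn-taking signal.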
Towards virtually transparent vehicles: first results of a simulator study and a field trial
Current heavy trucks, tanks, and excavators suffer from limited visibility due to small windshields. One option to overcome such limitations is to create a virtually transparent vehicle, using a camera-monitor or head-mounted display (HMD) system to provide seamless vision to the driver. The aim of the study is to compare two vision systems for 'virtually transparent' vehicles, an HMD and a camera-monitor system, in a simulation environment with regard to ergonomic aspects and future prospects. The simulator includes a generic mock-up of the vehicle to emulate the visual masking effects of a real armoured vehicle, so the driver experiences the obstruction of the visual space caused by the A-pillars. In addition, windows on the left and right sides increase the driver's degree of immersion. The camera-monitor vision system consists of five 13-inch monitors arranged in a semicircle in front of the driver. In this arrangement, the interior angle between adjacent displays is 40°, so a total view of 160° can be displayed. The display panels have a maximum resolution of 1280 x 960 and an aspect ratio of 16:10. The alternative vision system uses an Oculus Rift DK2 HMD. To create a three-dimensional view around the driver, the images are projected onto a curved surface, which gives the driver the freedom to look around in all directions. The Oculus Rift provides a nominal field of view (FoV) of approximately 100°. A simulated route of about 16 km was repeatedly driven for 2 hours under different test conditions, including federal highways, short off-road sections, and crossings with simulated intersection traffic, under consideration of the rules of the road. To minimise sequence effects, the order in which these test conditions were presented was varied.
After each driving condition, acceptance, performance, subjective stress (NASA-TLX), workload, usability, and driving performance were assessed. As a secondary task, the driver had to identify and announce possible threats out loud.