D4.2 – Plan for Integration of Model-Based Analysis Techniques and Tools
This document details the integration of MTTs into the HF-RTP for WP4. Please refer to the common Integration Plan document for further details on the HF-RTP and possible integration techniques. The objective of WP4 is to develop techniques and tools for model-based formal simulation and formal verification of Adaptive Cooperative Human-Machine Systems (AdCoS) against human factors and safety regulations.

As described in D4.1, verification and validation are two systems engineering technical processes (ISO IEC 2008). Verification tries to answer the question “Are we building the system right?”, while validation deals with end-user and operational requirements, trying to answer the question “Are we building the right system?”. Model-based analysis is an approach to support both processes: an intermediate representation of the future system – the model – is constructed, and evidence is sought directly on this representation. Such evidence can be a mathematical demonstration or a global observation performed on all possible states of the system, e.g. with formal verification techniques. More details on model-based analysis can be found in D4.1.

The modelling languages and their editors are defined in WP2 and instantiated in WP4 for model-based analysis; WP3 will add adaptation techniques to the models. In the first cycle, WP4 will mainly focus on the automotive AdCoS for demonstration purposes. In the next cycles, the MTTs of WP4 will be extended to other domains as well. The following sections describe the initial set of MTTs used within WP4, which are provided to the other work packages. Many of them come from WP2 (Modelling Techniques and Tools work package), as they are developed within WP2 and applied in WP4.
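The core idea above – searching for evidence on all possible states of a model – can be sketched as an exhaustive reachability check over a transition system. The following is a minimal illustrative sketch, not a WP4 tool; the toy model, its transition function, and the safety property are assumptions made for the example:

```python
from collections import deque

def check_safety(initial_states, successors, is_safe):
    """Exhaustively explore every reachable state of a model and verify
    that a safety property holds in each one. Returns (True, None) if
    the property holds globally, else (False, counterexample_path)."""
    frontier = deque((s, (s,)) for s in initial_states)
    visited = set(initial_states)
    while frontier:
        state, path = frontier.popleft()
        if not is_safe(state):
            return False, path  # a trace violating the property
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return True, None

# Toy model: a counter that an (assumed) adaptive controller must keep below 4.
ok, trace = check_safety(
    initial_states=[0],
    successors=lambda s: [(s + 1) % 5, max(s - 1, 0)],
    is_safe=lambda s: s < 4,
)
# The exhaustive search finds the violating trace 0 -> 1 -> 2 -> 3 -> 4.
```

Because every reachable state is visited, the verdict is a global observation over the whole state space, which is what distinguishes formal verification from testing individual runs.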
Eye Tracking Glasses
Software
D5.2 – Plan for Integration of Empirical Analysis Techniques and Tools into the HF-RTP and Methodology
This deliverable consists of two parts. The “Integration Plan Common Part” is shared by deliverables D2.2 to D5.2 and explains how to integrate methods, tools and techniques (MTTs) into the Human Factors Reference Technology Platform (HF-RTP). The present document details the MTTs which WP5 will contribute as components to the HF-RTP. Details concerning the HoliDes RTP, its methodology and the integration of components can also be found in D1.1 and the forthcoming D1.3.

Here, we describe the MTTs which the partners are developing or advancing in WP5 of HoliDes. These MTTs will eventually form the HF-RTP. They serve WP5’s vision to extend and develop empirical methods that aid the design and development of adaptive, cooperative human-machine systems and support developers in conforming to existing norms and standards.

The MTTs of WP5 consist largely of empirical methods. Empirical methods are an integral part of any human-centered systems engineering process. Their precise position and use in a workflow depend on the AdCoS under development, the organization that uses them, and individual considerations; these questions determine the tailoring of the RTP for a specific use case. Empirical MTTs are an essential part of both early and late stages of any design process of a human-machine system, for example during requirements analysis or verification of Human Factors related non-functional requirements. However, empirical MTTs can also be an integral part of the development phase, especially when principles of agile requirements engineering are applied. While in the CESAR RTP only software tools manipulate data, in HoliDes various kinds of MTTs are used: each MTT that is part of the development and evaluation of an AdCoS manipulates data and is an integral part of the engineering environment.
Eye Tracking Glasses
Software
Designing driver assistance systems with crossmodal signals: Multisensory integration rules for saccadic reaction times apply
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic “time window of integration” model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target–nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
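The two-stage “time window of integration” (TWIN) logic described above can be illustrated with a minimal Monte Carlo sketch. The exponential first-stage durations follow the TWIN literature, but every parameter value here (processing means, window width, facilitation amount, SOA) is an illustrative assumption, not one of the paper's fitted estimates:

```python
import random

def twin_rt(n_trials=100_000, soa=-50.0, window=200.0,
            mean_v=150.0, mean_a=80.0, base_second_stage=200.0,
            delta=80.0, seed=42):
    """Monte Carlo sketch of the TWIN model. First stage: a race between
    the visual target's peripheral process V and the auditory nontarget's
    process A (onset shifted by soa; negative = nontarget first).
    Crossmodal interaction occurs only if the nontarget wins the race and
    both processes terminate within `window` ms of each other; if so, the
    second-stage processing time is shortened by `delta` ms."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        v = rng.expovariate(1.0 / mean_v)        # visual peripheral duration
        a = soa + rng.expovariate(1.0 / mean_a)  # auditory, onset-shifted
        integrated = a < v < a + window          # nontarget first, within window
        total += v + base_second_stage - (delta if integrated else 0.0)
    return total / n_trials

rt_bimodal = twin_rt(soa=-50.0)   # target plus auditory nontarget
rt_unimodal = twin_rt(delta=0.0)  # no facilitation -> unimodal baseline
speedup = rt_unimodal - rt_bimodal
```

Varying `soa` in this sketch changes only the probability that the window condition is met, not `delta` itself, which mirrors the model prediction quoted above that onset asynchrony affects the probability, not the amount, of interaction.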
Eye Tracking Glasses
Simulator
Development and evaluation of an assistant system to aid monitoring behavior during multi-UAV supervisory control: experiences from the D3CoS project
Monitoring is the core function for which a human operator in charge of a supervisory control task is responsible. However, research has shown that this function is often not executed correctly, and the consequences can be disastrous for human life and the environment. Within the framework of the European project D3CoS, we developed an assistant system to aid the monitoring behavior of a human operator in charge of supervisory control of highly automated unmanned aerial vehicles. The idea behind the assistant system was to continuously invoke visual cues on the display used to supervise the mission in order to guide the operator's visual attention towards information demanding attention. Two studies were performed to evaluate the system along different target measures, such as situation awareness, workload, user acceptance and market potential. Overall, the results show that the system has positive effects on many, but not all, of these target measures. Further research is needed to improve the system functions.
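The assistant's core rule – cue display elements whose information demands attention but has gone unwatched – can be sketched as a simple neglect-time check against the operator's fixation history. Element names and thresholds below are hypothetical, not the D3CoS implementation:

```python
def elements_to_cue(now, last_fixation, max_neglect):
    """Sketch of an attention-guidance rule: for each display element,
    compare the time since the operator's last fixation on it with the
    longest neglect that element tolerates; elements neglected too long
    receive a visual cue to pull gaze back to them."""
    return sorted(
        element
        for element, t_fix in last_fixation.items()
        if now - t_fix > max_neglect[element]
    )

# Hypothetical mission display, times in seconds since mission start.
cues = elements_to_cue(
    now=30.0,
    last_fixation={"uav_1": 28.5, "uav_2": 12.0, "fuel_panel": 21.0},
    max_neglect={"uav_1": 5.0, "uav_2": 10.0, "fuel_panel": 15.0},
)
# Only "uav_2" has been neglected for 18 s, beyond its 10 s tolerance.
```

A real system would derive `last_fixation` from gaze tracking and modulate `max_neglect` by mission context, but the decision logic stays this comparison.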
Driving with binocular visual field loss? A study on a supervised on-road parcours with simultaneous eye and head tracking
Post-chiasmal visual pathway lesions and glaucomatous optic neuropathy cause binocular visual field defects (VFDs) that may critically interfere with quality of life and driving licensure. The aims of this study were (i) to assess the on-road driving performance of patients suffering from binocular visual field loss using a dual-brake vehicle, and (ii) to investigate the related compensatory mechanisms. A driving instructor, blinded to the participants' diagnosis, rated the driving performance (passed/failed) of ten patients with homonymous visual field defects (HP), including four patients with right (HR) and six patients with left homonymous visual field defects (HL), ten glaucoma patients (GP), and twenty age- and gender-matched, ophthalmologically healthy control subjects (C) during a 40-minute driving task on a pre-specified public on-road parcours. In order to investigate the subjects' visual exploration ability, eye movements were recorded by means of a mobile eye tracker, and two additional cameras were used to monitor the driving scene and record head and shoulder movements. The study is thus novel in combining a quantitative assessment of eye movements with an evaluation of head and shoulder movements. Six out of ten HP and four out of ten GP were rated as fit to drive by the driving instructor, despite their binocular visual field loss, while three out of twenty control subjects failed the on-road assessment. The extent of the visual field defect was of minor importance with regard to driving performance, but the site of the homonymous visual field defect (HVFD) critically interfered with driving ability: all failed HP subjects suffered from left homonymous visual field loss (HL) due to right hemispheric lesions. Patients who failed the driving assessment had difficulties mainly with lane keeping and gap judgment. Patients who passed the test displayed different exploration patterns than those who failed.
Patients who passed focused longer on the central area of the visual field than patients who failed the test. In addition, patients who passed the test performed more glances towards the area of their visual field defect. In conclusion, our findings support the hypothesis that the extent of visual field per se cannot predict driving fitness, because some patients with HVFDs and advanced glaucoma can compensate for their deficit by effective visual scanning. Head movements appeared to be superior to eye and shoulder movements in predicting the outcome of the driving test under the present study scenario.
Eye Tracking Glasses
Simulator
Eye tracking in the car: Challenges in a dual-task scenario on a test track
In our research, we aim at developing and enhancing an approach that allows us to capture visual, cognitive, and manual distraction of the driver while operating an In-Vehicle Infotainment System (IVIS) under most preferable real conditions. Based on our experiences in three consecutive studies conducted on a test track, we want to point out and discuss issues and challenges we had to face when applying eye tracking in this context. These challenges include how to choose the right system, integrate it into the vehicle, set it up for each participant, and gather data on in-car tasks with an acceptable workload for the researcher. The contribution of this paper is to raise awareness for eye tracking issues in the automotive UI community and to provide lessons learned for AUI researchers when applying eye tracking methods in comparable setups.
Eye Tracking Glasses
Software
Gaze guidance for the visually impaired
Visual perception is perhaps the most important sensory input: during driving, about 90% of the relevant information is related to the visual input [Taylor 1982]. However, the quality of visual perception decreases with age, mainly due to a reduction in visual acuity or as a consequence of diseases affecting the visual system. Amongst the most severe types of visual impairments are visual field defects (areas of reduced perception in the visual field), which occur as a consequence of diseases affecting the brain, e.g., stroke, brain injury, trauma, or diseases affecting the optic nerve, e.g., glaucoma. Due to demographic aging, the number of people with such visual impairments is expected to rise [Kasneci 2013]. Since persons suffering from visual impairments may overlook hazardous objects, they are prohibited from driving. This, however, leads to a decrease in quality of life, mobility, and participation in social life. Several studies have shown that some patients display safe driving behavior despite their visual impairment by performing effective visual exploration, i.e., adequate eye and head movements (e.g., towards their visual field defect [Kasneci et al. 2014b]). Thus, a better understanding of visual perception mechanisms, i.e., of why and how we attend to certain parts of our environment while 'ignoring' others, is key to helping visually impaired persons in complex, real-life tasks, such as driving a car.
Eye Tracking Glasses
Simulator
Homonymous visual field loss and its impact on visual exploration: A supermarket study
Purpose: Homonymous visual field defects (HVFDs) may critically interfere with quality of life. The aim of this study was to assess the impact of HVFDs on a supermarket search task and to investigate the influence of visual search on task performance. Methods: Ten patients with HVFDs (four with a right-sided [HR] and six with a left-sided defect [HL]) and ten healthy-sighted, sex- and age-matched control subjects were asked to collect 20 products placed on two supermarket shelves as quickly as possible. Task performance was rated as “passed” or “failed” with regard to the time per correctly collected item (T_C-failed = 4.84 seconds, based on the performance of healthy subjects). Eye movements were analyzed regarding horizontal gaze activity, glance frequency, and glance proportion for different VF areas. Results: Seven of ten HVFD patients (three HR, four HL) passed the supermarket search task. Patients who passed needed significantly less time per correctly collected item and looked more frequently toward the VFD area than patients who failed. HL patients who passed the test showed a higher percentage of glances beyond the 60° VF (P < 0.05). Conclusion: A considerable number of HVFD patients performed successfully and could compensate for the HVFD by shifting the gaze toward the peripheral VF and the VFD area. Translational Relevance: These findings provide new insights on gaze adaptations in patients with HVFDs during activities of daily living and will enhance the design and development of realistic examination tools for use in the clinical setting to improve daily functioning.
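The glance measures named above – glance frequency and glance proportion per visual field area – can be sketched from raw gaze samples as follows. The area labels, angular thresholds, and sample values are illustrative assumptions, not the study's protocol:

```python
def glance_metrics(samples, classify):
    """Consecutive gaze samples falling in the same visual field (VF)
    area form one glance. Glance frequency is the number of glances per
    area; glance proportion is each area's share of samples (viewing
    time). `classify` maps one gaze sample to an area label."""
    areas = [classify(s) for s in samples]
    freq, counts = {}, {}
    prev = None
    for area in areas:
        counts[area] = counts.get(area, 0) + 1
        if area != prev:                  # a new glance starts on area change
            freq[area] = freq.get(area, 0) + 1
        prev = area
    n = len(areas)
    return freq, {area: c / n for area, c in counts.items()}

# Horizontal gaze angles in degrees (negative = left); assumed left-sided
# defect, so areas are "vfd" (< -10°), "central" (within ±10°), "intact" (> 10°).
angles = [-20, -25, 0, 5, 15, -30, 0, 0]
freq, prop = glance_metrics(
    angles,
    classify=lambda x: "vfd" if x < -10 else ("intact" if x > 10 else "central"),
)
# Two glances toward the defect side, half of viewing time in the central area.
```

With real recordings the samples would first be filtered to fixations, but the frequency/proportion bookkeeping is the same.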
Eye Tracking Glasses
Software
Legibility difference between e-books and paper books by using an eye tracker
The aim of the study was to evaluate the difference in legibility between e-books and paper books by using an eye tracker. Eight male and eight female subjects free of eye disease participated in the experiment, which used a 2 × 3 within-subject design. Book type (e-book, paper book) and font size (8 pt, 10 pt, 12 pt) were the independent variables; fixation duration, saccade length, blink rate and subjective discomfort were the dependent variables. All dependent variables showed that reading paper books provided a better experience than reading e-books did. These results indicate that the legibility of e-books needs further improvement, considering fixation duration, saccade movement, eye fatigue, device and so on. Practitioner Summary: This study evaluated the legibility difference between e-books and paper books from the viewpoint of readability, eye fatigue and subjective discomfort by using an eye tracker. The results showed that paper books provided a better experience than e-books, indicating that the readability of e-books needs further improvement relative to paper books.