Binocular Glaucomatous Visual Field Loss and Its Impact on Visual Exploration – A Supermarket Study
Advanced glaucomatous visual field loss may critically interfere with quality of life. The purpose of this study was (i) to assess the impact of binocular glaucomatous visual field loss on a supermarket search task as an example of everyday living activities, (ii) to identify factors influencing the performance, and (iii) to investigate the related compensatory mechanisms. Ten patients with binocular glaucoma (GP) and ten healthy-sighted control subjects (GC) were asked to collect twenty different products chosen randomly from two supermarket racks as quickly as possible. Task performance was rated as “passed” or “failed” with regard to the time per correctly collected item. Based on the performance of the control subjects, the threshold value for failing the task was defined as the control group's mean plus three standard deviations (m + 3s, in seconds per correctly collected item). Eye movements were recorded by means of a mobile eye tracker. Eight out of ten patients with glaucoma and all control subjects passed the task. Patients who failed the task needed significantly more time (111.47 s ± 12.12 s) to complete the task than patients who passed (64.45 s ± 13.36 s, t-test, p < 0.001). Furthermore, patients who passed the task showed a significantly higher number of glances towards the visual field defect (VFD) area than patients who failed (t-test, p < 0.05). According to these results, glaucoma patients with defects in the binocular visual field display, on average, longer search times in a naturalistic supermarket task. However, a considerable number of patients, who compensated by frequent glancing towards the VFD, showed successful task performance. Therefore, systematic exploration of the VFD area seems to be a “time-effective” compensatory mechanism during the present supermarket task.
Eye Tracking Glasses
Software
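The pass/fail criterion described in the abstract (threshold = control-group mean plus three standard deviations of the time per correctly collected item) can be sketched as follows. The control times below are illustrative values, not the study's data:

```python
import statistics

def pass_fail_threshold(control_times):
    """Threshold = mean + 3 * SD of the control subjects'
    time per correctly collected item (seconds)."""
    m = statistics.mean(control_times)
    s = statistics.stdev(control_times)  # sample standard deviation
    return m + 3 * s

def rate_performance(time_per_item, threshold):
    """Rate a participant relative to the control-based threshold."""
    return "failed" if time_per_item > threshold else "passed"

# Hypothetical control-group times (s per item); threshold ~ 3.88 s here.
controls = [2.8, 3.1, 3.4, 2.9, 3.3, 3.0, 3.2, 2.7, 3.5, 3.1]
t = pass_fail_threshold(controls)
print(rate_performance(3.0, t))  # passed
print(rate_performance(5.5, t))  # failed
```

The sample standard deviation (`statistics.stdev`) is assumed here; the abstract's "m+3s" does not specify the estimator.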
Boundary conditions for information visualization with respect to the user’s gaze
Gaze tracking in Augmented Reality is mainly used to trigger buttons and access information. Such selectable objects are usually placed in the world or in screen coordinates of a head- or hand-mounted display. Yet, no work has investigated options to place information with respect to the line of sight. This work presents our first steps towards gaze-mounted information visualization and interaction, determining boundary conditions for such an approach. We propose a general concept for information presentation at an angular offset to the line of sight. A user can look around freely while keeping information attached near the line of sight. Whenever the user wants to look at the information and does so, the information is placed directly on the axis of sight for a short time. Based on this concept we investigate how users understand frames of reference, specifically, whether users relate directions and alignments in head or world coordinates. We further investigate whether information may have a preferred motion behavior. Prototypical implementations of three variants were presented to users in guided interviews. The three variants resemble a rigid offset and two different floating motion behaviors of the information. The floating algorithms implement an inertia-based model and either allow the user's gaze to surpass the information or to push the information with the gaze. Testing our prototypes showed that users strongly prefer information that maintains a relation to the world, and that less extra motion is preferred.
Eye Tracking Glasses
Software
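The core behavior described above (information riding at a fixed angular offset until the user fixates it, then snapping onto the axis of sight) can be sketched in a few lines. The snap radius, offset value, and function names are illustrative assumptions, not the paper's implementation:

```python
# One-dimensional (azimuth-only) sketch of the rigid-offset variant:
# a label floats at a fixed angular offset from the line of sight and
# is placed on the axis of sight once the user fixates it.

SNAP_RADIUS_DEG = 2.0  # assumed fixation tolerance
OFFSET_DEG = 8.0       # assumed angular offset of the label

def update_label(gaze_deg, label_deg, snapped):
    """Return (new label azimuth, new snap state) for one update step."""
    if abs(gaze_deg - label_deg) <= SNAP_RADIUS_DEG:
        snapped = True                    # the user looked at the label
    if snapped:
        return gaze_deg, True             # label sits on the axis of sight
    return gaze_deg + OFFSET_DEG, False   # label keeps its rigid offset

pos, hit = update_label(gaze_deg=0.0, label_deg=8.0, snapped=False)  # still offset
pos, hit = update_label(gaze_deg=7.5, label_deg=8.0, snapped=False)  # within radius: snaps
```

The two floating variants in the paper would replace the rigid-offset branch with an inertia model; this sketch only captures the shared offset-and-snap concept.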
D4.2 – Plan for Integration of Model-Based Analysis Techniques and Tools
This document gives details on the integration of MTTs into the HF-RTP for WP4. Please refer to the common Integration Plan document for further details on the HF-RTP and possible integration techniques. The objective of WP4 is to develop techniques and tools for model-based formal simulation and formal verification of Adaptive Cooperative Human-Machine Systems (AdCoS) against human factors and safety regulations. As described in D4.1, verification and validation are two systems engineering technical processes (ISO IEC 2008). Verification tries to answer the question “Are we building the system right?”, whereas validation deals with final-user and operational requirements, trying to answer the question “Are we building the right system?”. Model-based analysis is an approach to support verification and validation processes. The idea is to construct an intermediate representation of the future system – the model – and to search for evidence directly on this representation. With this approach, evidence can be a mathematical demonstration or a global observation performed on all possible states of the system, e.g. with formal verification techniques. More details on model-based analysis can be found in D4.1. The modelling languages and their editors are defined in WP2 and instantiated in WP4 for model-based analysis. WP3 will add adaptation techniques to the models. In the first cycle, WP4 will mainly focus on the automotive AdCoSs for demonstration purposes. In the next cycles, the MTTs of WP4 will be extended to other domains as well. The following sections describe the initial set of MTTs used within WP4, which are provided to the other work packages. Many of them come from WP2 (Modelling Techniques and Tools work package), as they are developed within WP2 and applied in WP4.
Eye Tracking Glasses
Software
D5.2 – Plan for Integration of Empirical Analysis Techniques and Tools into the HF-RTP and Methodology
This deliverable consists of two parts. The “Integration Plan Common Part” is shared by deliverables D2.2 to D5.2. It explains how to integrate methods, tools and techniques (MTTs) into the Human Factors Reference Technology Platform (HF-RTP). The present document details the MTTs which WP5 will contribute as components to the HF-RTP. Details concerning the HoliDes RTP, its methodology and the integration of components can also be found in D1.1 and the forthcoming D1.3. Here, we describe the MTTs which the partners are developing or advancing in WP5 of HoliDes. These MTTs will eventually form the HF-RTP. They serve WP5's vision of extending and developing empirical methods that aid the design and development of adaptive, cooperative human-machine systems. These methods help developers conform to existing norms and standards. The MTTs of WP5 consist largely of empirical methods. Empirical methods are an integral part of any human-centered systems engineering process. Their precise position and use in a workflow depend on the AdCoS under development, the organization that uses them, and individual considerations. These questions determine the tailoring of the RTP for a specific use case. Empirical MTTs are an essential part of both early and late stages of any design process for a human-machine system, for example during requirements analysis or the verification of human-factors-related non-functional requirements. However, empirical MTTs can also be an integral part of the development phase, especially when using principles of agile requirements engineering. While in the CESAR RTP only software tools manipulate data, in HoliDes various kinds of MTTs are used. Each MTT that is part of the development and evaluation of an AdCoS manipulates data and is an integral part of the engineering environment.
Eye Tracking Glasses
Software
Designing driver assistance systems with crossmodal signals: Multisensory integration rules for saccadic reaction times apply
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well-described by the stochastic “time window of integration” model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target–nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
Eye Tracking Glasses
Simulator
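The two-stage “time window of integration” idea summarized above can be illustrated with a small Monte-Carlo simulation: crossmodal interaction occurs only when the auditory nontarget's peripheral process finishes first and the visual process terminates within a fixed window, in which case second-stage processing is shortened. All distributions and parameter values below are illustrative assumptions, not the authors' fitted estimates:

```python
import random

def mean_saccadic_rt(soa_ms, n=20000, seed=1,
                     window_ms=200.0, delta_ms=80.0,
                     mu_visual_ms=60.0, mu_auditory_ms=50.0,
                     second_stage_ms=250.0):
    """Mean simulated RT (ms) to a visual target accompanied by an
    auditory nontarget presented soa_ms before the target
    (positive soa_ms = nontarget leads the target)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.expovariate(1.0 / mu_visual_ms)             # visual peripheral stage
        a = rng.expovariate(1.0 / mu_auditory_ms) - soa_ms  # auditory, re: target onset
        # First-stage race: interaction only if the auditory process wins
        # and the visual process terminates within the time window.
        integrated = a < v <= a + window_ms
        total += v + second_stage_ms - (delta_ms if integrated else 0.0)
    return total / n

rt_with_lead = mean_saccadic_rt(soa_ms=50)    # nontarget 50 ms before target
rt_baseline = mean_saccadic_rt(soa_ms=-1000)  # nontarget far too late to integrate
```

With a leading nontarget the integration probability is high and the mean RT drops by several tens of milliseconds, mirroring the model's prediction that onset asynchrony modulates the probability, not the amount, of interaction.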
Development and evaluation of an assistant system to aid monitoring behavior during multi-UAV supervisory control: experiences from the D3CoS project
Monitoring is the core function for which a human operator in charge of a supervisory control task is responsible. However, research has shown that this function is often carried out incorrectly. The consequences can be disastrous for human life and the environment. Within the framework of the European project D3CoS, we developed an assistant system to aid the monitoring behavior of a human operator in charge of supervisory control of highly automated unmanned aerial vehicles. The idea behind the assistant system was to continuously invoke visual cues on the display used to supervise the mission in order to guide the operator's visual attention towards information demanding attention. Two studies were performed to evaluate the system along different target measures, such as situation awareness, workload, user acceptance and market potential. Overall, the results show that the system has positive effects on many of these target measures, but not on all of them. Further research is needed to improve the system functions.
Driving with binocular visual field loss? A study on a supervised on-road parcours with simultaneous eye and head tracking
Post-chiasmal visual pathway lesions and glaucomatous optic neuropathy cause binocular visual field defects (VFDs) that may critically interfere with quality of life and driving licensure. The aims of this study were (i) to assess the on-road driving performance of patients suffering from binocular visual field loss using a dual-brake vehicle, and (ii) to investigate the related compensatory mechanisms. A driving instructor, blinded to the participants' diagnosis, rated the driving performance (passed/failed) of ten patients with homonymous visual field defects (HP), including four patients with right (HR) and six patients with left homonymous visual field defects (HL), ten glaucoma patients (GP), and twenty age- and gender-matched ophthalmologically healthy control subjects (C) during a 40-minute driving task on a pre-specified public on-road parcours. In order to investigate the subjects' visual exploration ability, eye movements were recorded by means of a mobile eye tracker. Two additional cameras were used to monitor the driving scene and record head and shoulder movements. The study is thus novel in combining a quantitative assessment of eye movements with an additional evaluation of head and shoulder movements. Six out of ten HP and four out of ten GP were rated as fit to drive by the driving instructor, despite their binocular visual field loss. Three out of 20 control subjects failed the on-road assessment. The extent of the visual field defect was of minor importance with regard to the driving performance. The site of the homonymous visual field defect (HVFD) critically interfered with the driving ability: all failed HP subjects suffered from left homonymous visual field loss (HL) due to right hemispheric lesions. Patients who failed the driving assessment had mainly difficulties with lane keeping and gap judgment. Patients who passed the test displayed different exploration patterns than those who failed.
Patients who passed focused longer on the central area of the visual field than patients who failed the test. In addition, patients who passed the test performed more glances towards the area of their visual field defect. In conclusion, our findings support the hypothesis that the extent of visual field loss per se cannot predict driving fitness, because some patients with HVFDs and advanced glaucoma can compensate for their deficit by effective visual scanning. Head movements appeared to be superior to eye and shoulder movements in predicting the outcome of the driving test under the present study scenario.
Eye Tracking Glasses
Simulator
Eye tracking in the car: Challenges in a dual-task scenario on a test track
In our research, we aim at developing and enhancing an approach that allows us to capture visual, cognitive, and manual distraction of the driver while operating an In-Vehicle Infotainment System (IVIS) under conditions as realistic as possible. Based on our experiences in three consecutive studies conducted on a test track, we want to point out and discuss issues and challenges we had to face when applying eye tracking in this context. These challenges include how to choose the right system, integrate it into the vehicle, set it up for each participant, and gather data on in-car tasks with an acceptable workload for the researcher. The contribution of this paper is to raise awareness of eye tracking issues in the automotive UI community and to provide lessons learned for AUI researchers when applying eye tracking methods in comparable setups.
Eye Tracking Glasses
Software
Gaze guidance for the visually impaired
Visual perception is perhaps the most important sensory input. During driving, about 90% of the relevant information is related to the visual input [Taylor 1982]. However, the quality of visual perception decreases with age, mainly due to a reduction in visual acuity or as a consequence of diseases affecting the visual system. Amongst the most severe types of visual impairment are visual field defects (areas of reduced perception in the visual field), which occur as a consequence of diseases affecting the brain, e.g., stroke, brain injury, trauma, or diseases affecting the optic nerve, e.g., glaucoma. Due to demographic aging, the number of people with such visual impairments is expected to rise [Kasneci 2013]. Since persons suffering from visual impairments may overlook hazardous objects, they are prohibited from driving. This, however, leads to a decrease in quality of life, mobility, and participation in social life. Several studies have shown that some patients exhibit safe driving behavior despite their visual impairment by performing effective visual exploration, i.e., adequate eye and head movements (e.g., towards their visual field defect [Kasneci et al. 2014b]). Thus, a better understanding of visual perception mechanisms, i.e., of why and how we attend to certain parts of our environment while 'ignoring' others, is key to helping visually impaired persons in complex, real-life tasks, such as driving a car.
Eye Tracking Glasses
Simulator