The Role of Visuomotor Behaviours in Understanding the Functionality of Upper Limb Prostheses
Advanced upper limb prostheses aim to restore coordinated hand and arm function. However, this objective can be difficult to quantify as coordinated movements require an intact visuomotor system. Eye tracking has recently been applied to study the visuomotor behaviours of upper limb prosthesis users by enabling the calculation of eye movement metrics. This scoping review aims to characterize the visuomotor behaviours of upper limb prosthesis users as described by eye tracking metrics, to summarize the eye tracking metrics used to describe prosthetic behaviour, and to identify gaps in the literature and potential areas for future research. A review of the literature was performed to identify articles that reported eye tracking metrics to evaluate the visual behaviours of individuals using an upper limb prosthesis. Data on the level of amputation, type of prosthetic device, type of eye tracker, primary eye metrics, secondary outcome metrics, experimental task, aims, and key findings were extracted. Seventeen studies were included in this scoping review. A consistently reported finding is that prosthesis users have a characteristic visuomotor behaviour that differs from that of individuals with intact arm function. Visual attention has been reported to be directed more towards the hand and less towards the target during object manipulation tasks. A gaze-switching strategy and a delay in disengaging gaze from the current target have also been reported. Differences in the type of prosthetic device and experimental task have revealed some distinct gaze behaviours. Control factors have been shown to be related to gaze behaviour, while sensory feedback and training interventions have been demonstrated to reduce the visual attention associated with prosthesis use. Eye tracking metrics have also been used to assess the cognitive load and sense of agency of prosthesis users.
Overall, there is evidence that eye tracking is an effective tool to quantitatively assess the visuomotor behaviour of prosthesis users and the recorded eye metrics are sensitive to change in response to various factors. Additional studies are needed to validate the eye metrics used to assess cognitive load and sense of agency in upper limb prosthesis users.
Eye Tracking Glasses
Software
Training benefits driver behaviour while using automation with an attention monitoring system
Attention, or more generally, driver monitoring systems have been identified as a necessity to address overreliance on driving automation. However, research suggests that monitoring systems may not be sufficient to support safe use of advanced driver assistance systems (ADAS), as evidenced by a recent major recall of Tesla’s monitoring software. The objective of the current study was to investigate whether different training approaches improve driver behaviour while using ADAS with an attention monitoring system. A driving simulator study was conducted with three between-subject groups: no training, limitation-focused training (highlighting situations where ADAS would not work), and responsibility-focused training (highlighting the driver’s role and responsibility while using ADAS). All participants (N = 47) experienced eight events which required the ego-vehicle to slow down to avoid a collision. Anticipatory cues in the environment indicated the potential for the upcoming events. Event type (covered in training vs. not covered) and event criticality (action-necessary vs. action-not-necessary) were within-subject factors. The responsibility-focused group made fewer long glances (≥ 3 s) to a secondary task than the no training and limitation-focused groups when there were no anticipatory cues. Responsibility-focused training and no training were associated with faster takeover time at the events than limitation-focused training. There were additional benefits of responsibility-focused training for events that were covered in training (e.g., a higher percentage of time looking at the anticipatory cues). Overall, our results suggest that even if attention monitoring systems are implemented, there may be benefits to driver ADAS training. Responsibility-focused training may be preferable to limitation-focused training, especially for situations where minimizing training length is advantageous.
Usability Assessments in User Studies on Human-Machine Interfaces for Conditionally Automated Driving: Effects of the Context of Use
The introduction of conditionally automated driving (CAD) entails a paradigm change in automotive mobility. For the first time, the driver is temporarily released from the responsibility of the driving task. This paradigm change challenges the development of human-machine interfaces (HMIs) facilitating the intended and safe interaction. User studies on the usability of such HMIs are commonly conducted in driving simulators and within one single culture. Identifying the potential effects of this context of use is crucial for the validity of research conducted in HMI development. Following a review of the relevant literature, five research questions are derived that are addressed in this thesis. A systematic literature review offers insights into common research practices of studies on the usability of HMIs for CAD. From these insights, best-practice advice is developed, which forms the basis of the experimental design for two of the three validation studies conducted in this thesis (Exp_Testing-Environment & Exp_Culture). The first validation study, Exp_Testing-Environment, investigates the effect of the testing environment on usability assessments. An experiment conducted in a static driving simulator is compared to an otherwise identical experiment conducted in an instrumented vehicle on a test track. The findings suggest relative validity but no absolute validity. The study concludes that problems with HMI concepts identified in the driving simulator will likely be more pronounced in test track experiments. Based on the findings, driving simulators are deemed a valid tool. The second validation study, Exp_Culture, investigates the effect of the users’ cultural background on the usability assessment by comparing the usability ratings of U.S.-American participants to those of German participants. Regarding absolute validity, the available data are not yet conclusive. The findings, however, confirm relative validity.
The study concludes that the results of usability assessments may be transferred across cultures of the Western industrialized world. Limitations are expected only regarding the usability facet satisfaction. The third validation study, Survey_Culture, addresses the effect of the users’ cultural background on the subjective importance ratings of usability factors. The comparison of U.S.-American and German ratings shows neither considerable nor systematic cultural effects. In line with Exp_Culture, this study concludes that usability assessments may be conducted within one culture of the Western industrialized world. The findings of the three validation studies are consolidated in a set of preliminary recommendations. The set is discussed and refined in an expert workshop. The final 12 recommendations suggest methods for conducting user studies on the usability of HMIs in the context of CAD. This thesis provides novel empirical findings on experimental methods in user studies on usability assessments, focusing on the validity of usability assessments in varying contexts of use. Based on prevalent literature and an expert workshop, the results are consolidated and refined. In conclusion, the thesis contributes to the advancement of valid research methods for conducting usability assessments of HMIs for CAD.
Using eye tracking to support professional learning in vision-intensive professions: a case of aviation pilots
In an authentic flight simulator, the instructor is traditionally located behind the learner and is thus unable to observe the pilot’s visual attention (i.e. gaze behavior). The focus of this article is visual attention in relation to pilots’ professional learning in an Airbus A320 Full Flight Simulator. For this purpose, we measured and analyzed pilots’ visual scanning behavior during flight simulation-based training. Eye-tracking data were collected from the participants (N = 15 pilots in training) to objectively and non-intrusively study their visual attention behavior. First, we derived and compared the visual scanning patterns. The descriptive statistics revealed the pilots’ visual scanning paths and whether they followed the expected flight protocol. Second, we developed a procedure to automate the analysis. Specifically, a Hidden Markov model (HMM) was used to automatically capture the actual phases of pilots’ visual scanning. The advantage of this technique is that it is not bound to manual assessment based on graphs or descriptive data. In addition, different scanning patterns can be revealed in authentic learning situations where gaze behavior is not known in advance. Our results illustrate that HMM can provide a complementary approach to descriptive statistics. Implications for future research are discussed, including how artificial intelligence in education could benefit from the HMM approach.
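The HMM approach described above can be illustrated with a minimal Viterbi decode over a fixation sequence. The states, areas of interest (AOIs), and probabilities below are purely illustrative assumptions, not the study's fitted model:

```python
import numpy as np

# Hypothetical latent scanning phases and observed fixation targets (AOIs).
states = ["instrument_scan", "outside_view"]
aois   = {"PFD": 0, "ND": 1, "windshield": 2}

# Illustrative model parameters, in log space for numerical stability.
start = np.log([0.6, 0.4])
trans = np.log([[0.8, 0.2],       # P(next phase | current phase)
                [0.3, 0.7]])
emit  = np.log([[0.5, 0.4, 0.1],  # P(fixated AOI | phase)
                [0.1, 0.1, 0.8]])

def viterbi(obs):
    """Most likely phase sequence for an observed AOI index sequence."""
    n, m = len(obs), len(states)
    dp = np.full((n, m), -np.inf)       # best log-probability per (time, state)
    back = np.zeros((n, m), dtype=int)  # backpointers for path recovery
    dp[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for j in range(m):
            scores = dp[t - 1] + trans[:, j]
            back[t, j] = np.argmax(scores)
            dp[t, j] = scores[back[t, j]] + emit[j, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[i] for i in reversed(path)]

fixations = [aois[a] for a in ["PFD", "ND", "PFD", "windshield", "windshield"]]
print(viterbi(fixations))
```

With these toy parameters, the decode segments the fixation stream into an instrument-scanning phase followed by an outside-view phase, which is the kind of automatic phase recovery the abstract attributes to the HMM.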
Visual recognition analysis of optically long tunnels: interaction of dynamic vision and visual perception
To understand the relation between the geometric design of optically long tunnels and the visibility of the exit area, this study collected oculomotor (eye movement) data from several drivers in Yunnan Province, China, and measured drivers' fixation rate and saccade amplitude in the visible zone of the tunnel as key indicators. Drivers' visual recognition is analyzed, and key elements in the optimal design of the exit points of optically long tunnels are discussed. The results show that visual recognition is closely associated with the radius of the road curvature: as the curve radius decreases, the visual focus is gradually drawn to the inner side of the curve, the proportion of small-angle saccades increases, and the dispersion of the saccade amplitude decreases.
Eye Tracking Glasses
Simulator
VR-Based Technologies: Improving Safety Training Effectiveness for a Heterogeneous Workforce from a Physiological Perspective
The enhancement of construction safety performance heavily relies on effective safety training. While virtual reality (VR) technologies have been utilized to improve construction safety training programs, the extent and mechanisms of the improvement brought by VR remain unexplored. This study explains how the effectiveness of VR-based safety training for a heterogeneous workforce was achieved by investigating two mechanisms, namely embodied cognition and emotion arousal, from a physiological perspective. Randomized controlled experiments were conducted with three forms of safety training, namely paper-based training, VR-based learning, and VR-based experiencing, for both novice learners (NPs) and learners with prior knowledge (PPs). Digital eye-tracking and physiological devices and measurements were used to collect objective data. The results revealed better hazard recognition performance in both the VR-based learning and VR-based experiencing groups than in the paper-based training groups. The results also revealed that VR-based learning was more effective for NPs than for PPs in acquiring safety knowledge, but VR-based experiencing was more effective for PPs than for NPs at stimulating emotions. This means that NPs benefit more from the embodied cognition provided by the immersive environment of VR-based learning, while PPs would be trained better through the emotional arousal from the thrill of VR-based experiencing. The discovered mechanisms of embodied cognition and emotion arousal shed light on the underlying processes that contribute to the positive outcomes and promotion of VR-based training.
Eye Tracking Glasses
Simulator
Word frequency and cognitive effort in turns-at-talk: turn structure affects processing load in natural conversation
Frequency distributions are known to widely affect psycholinguistic processes. The effects of word frequency in turns-at-talk, the nucleus of social action in conversation, have, by contrast, been largely neglected. This study probes into this gap by applying corpus-linguistic methods to the conversational component of the British National Corpus (BNC) and the Freiburg Multimodal Interaction Corpus (FreMIC). The latter includes continuous pupil size measures of participants of the recorded conversations, allowing for a systematic investigation of patterns in the contained speech and language on the one hand and their relation to the concurrent processing costs they may incur in speakers and recipients on the other. We test a first hypothesis in this vein, analyzing whether word frequency distributions within turns-at-talk are correlated with interlocutors' processing effort during the production and reception of these turns. Turns are found to generally show a regular distribution pattern of word frequency, with highly frequent words in turn-initial positions, mid-range frequency words in turn-medial positions, and low-frequency words in turn-final positions. Speakers' pupil size is found to tend to increase during the course of a turn at talk, reaching a climax toward the turn end. Notably, the observed decrease in word frequency within turns is inversely correlated with the observed increase in pupil size in speakers, but not in recipients, with steeper decreases in word frequency going along with steeper increases in pupil size in speakers. We discuss the implications of these findings for theories of speech processing, turn structure, and information packaging. Crucially, we propose that the intensification of processing effort in speakers during a turn at talk is owed to an informational climax, which entails a progression from high-frequency, low-information words through intermediate levels to low-frequency, high-information words.
At least in English conversation, interlocutors seem to make use of this pattern as one way to achieve efficiency in conversational interaction, creating a regularly recurring distribution of processing load across speaking turns, which aids smooth turn transitions, content prediction, and effective information transfer.
Eye Tracking Glasses
Software
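The frequency-pupil relation reported in the abstract above can be illustrated with a toy correlation check. The per-word frequencies and pupil sizes below are invented for demonstration and do not come from the BNC or FreMIC:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# One hypothetical turn: corpus frequency per word (tokens per million),
# falling across the turn, and the speaker's mean pupil size (mm) per word.
word_freq  = [5200, 1800, 950, 210, 40]   # high -> low toward the turn end
pupil_size = [3.1, 3.2, 3.3, 3.5, 3.6]    # rises toward the turn end

# Log-transform frequencies, as is conventional for frequency effects.
r = pearson_r(np.log10(word_freq), pupil_size)
print(round(r, 2))  # strongly negative: rarer words, larger pupils
```

A strongly negative r for the speaker's own turn, but not for the recipient's, is the pattern the study reports at corpus scale.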
A deep learning palpebral fissure segmentation model in the context of computer user monitoring
The intense use of computers and visual terminals is a daily practice for many people. As a consequence, there are frequent complaints of visual and non-visual symptoms, such as headaches and neck pain. These symptoms make up Computer Vision Syndrome, and among the factors related to this syndrome are the distance between the user and the screen, the number of hours of use of the equipment, the reduction in the blink rate, and the number of incomplete blinks while using the device. Although some of these items can be controlled by ergonomic measures, controlling blinks and their efficiency is more complex. A considerable number of studies have looked at measuring blinks, but few have dealt with the presence of incomplete blinks. Conventional measurement techniques have limitations when it comes to detecting and analyzing the completeness of blinks, especially due to the different eye and blink characteristics of individuals, as well as the position and movement of the user. Segmenting the palpebral fissure can be a first step towards solving this problem, by characterizing individuals well regardless of these factors. This work investigates the development of deep learning models to perform palpebral fissure segmentation in situations where the eyes cover only a small region of the image, such as images from a computer webcam. Training, validation and test sets were generated based on the CelebAMask-HQ and Closed Eyes in the Wild datasets. Various machine learning techniques are used, resulting in a final trained model with a Dice coefficient close to 0.90 on the test data, a result similar to that obtained by models trained with images in which the eye region occupies most of the image. Keywords: Palpebral fissure. UNet. LinkNet. Computer Vision Syndrome. Incomplete blink.
Eye Tracking Glasses
Software
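The Dice coefficient used above to evaluate segmentation quality can be sketched on toy binary masks (the arrays below are illustrative, not dataset images):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|); eps guards against empty masks.
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])   # predicted palpebral fissure pixels
target = np.array([[0, 1, 1, 1],
                   [0, 1, 1, 0]])   # ground-truth mask

print(round(float(dice(pred, target)), 3))  # 2*4/(4+5) -> 0.889
```

A score near 0.90, as the abstract reports on the test set, means the predicted and ground-truth masks overlap almost completely.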
A novel end-to-end dual-camera system for eye gaze synchrony assessment in face-to-face interaction
Quantification of face-to-face interaction can provide highly relevant information in cognitive and psychological science research. Current commercial glint-dependent solutions suffer from several disadvantages and limitations when applied in face-to-face interaction, including data loss, parallax errors, the inconvenience and distracting effect of wearables, and/or the need for several cameras to capture each person. Here we present a novel eye-tracking solution, consisting of a dual-camera system used in conjunction with an individually optimized deep learning approach that aims to overcome some of these limitations. Our data show that this system can accurately classify gaze location within different areas of the face of two interlocutors, and capture subtle differences in interpersonal gaze synchrony between two individuals during a (semi-)naturalistic face-to-face interaction.
Eye Tracking Glasses
Software