Publication Hub Archive

Eye Tracker

You have reached the Ergoneers Publication Hub for:

Used Tool > Eye Tracker

Find all Publications here:

Publication Hub

Total results: 582

Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and Internet of Things technologies

Year: 2021

Authors: AF Klaib, NO Alsrehin, WY Melhem

Eye tracking is the process of measuring where one is looking (point of gaze) or the motion of an eye relative to the head. Researchers have developed different algorithms and techniques to automatically track the gaze position and direction, which are helpful in different applications. Research on eye tracking is increasing owing to its ability to facilitate many different tasks, particularly for the elderly or users with special needs. This study aims to explore and review eye tracking concepts, methods, and techniques by further elaborating on efficient and effective modern approaches such as machine learning (ML), Internet of Things (IoT), and cloud computing. These approaches have been in use for more than two decades and are heavily used in the development of recent eye tracking applications. The results of this study indicate that ML and IoT are important aspects in evolving eye tracking applications owing to their ability to learn from existing data, make better decisions, be flexible, and eliminate the need to manually re-calibrate the tracker during the eye tracking process. In addition, they show that eye tracking techniques have more accurate detection results compared with traditional event-detection methods. Moreover, various motives and factors in the use of a specific eye tracking technique or application are explored and recommended. Finally, some future directions related to the use of eye tracking in several developed applications are described.

Eye Tracking Glasses
Software

1 version available

Eye–head–trunk coordination while walking and turning in a simulated grocery shopping task

Year: 2021

Authors: K Kim, M Fricke, O Bock

Previous studies argued that body turns are executed in an ordered sequence: the eyes turn first, followed by the head and then by the trunk. The purpose of this study was to find out whether this sequence holds even if body turns are not explicitly instructed, but nevertheless are necessary to reach an instructed distal goal. We asked participants to shop for grocery products in a simulated supermarket. To retrieve each product, they had to walk down an aisle, and then turn left or right into a corridor that led towards the target shelf. The need to make a turn was never mentioned by the experimenter, but it nevertheless was required in order to approach the target shelf. The main variables of interest were the delay between eye and head turns towards the target shelf, as well as the delay between head and trunk turns towards the target shelf. We found that both delays were consistently positive, and that their magnitude was near the top of the range reported in the literature. We conclude that the ordered sequence of eye, then head, then trunk turns can be observed not only with a proximal but also with a distal goal.

Eye Tracking Glasses
Simulator

8 versions available

Functional resonance analysis in an overtaking situation in road traffic: comparing the performance variability mechanisms between human and automation

Year: 2021

Authors: N Grabbe, A Gales, M Höcher, K Bengler

Automated driving promises great advances in traffic safety, frequently assuming that human error is the main cause of accidents and promising a significant decrease in road accidents through automation. However, this assumption is too simplistic and does not consider potential side effects and adaptations in the socio-technical system that traffic represents. Thus, a differentiated analysis, including an understanding of road system mechanisms of accident development and accident avoidance, is required to avoid adverse automation surprises; such an analysis is currently lacking. This paper therefore argues in favour of Resilience Engineering, using the functional resonance analysis method (FRAM) to reveal these mechanisms in an overtaking scenario on a rural road, to compare the contributions of the human driver and potential automation, and to derive system design recommendations. This also serves to demonstrate how FRAM can be used for a systemic function allocation of the driving task between humans and automation. To this end, an in-depth FRAM model was developed for both agents based on document knowledge elicitation as well as observations and interviews in a driving simulator, and was validated by a focus group with peers. Performance variabilities were then identified through structured interviews with human drivers and automation experts, and through observations in the driving simulator. The aggregation and propagation of variability were analysed with a focus on interaction and complexity in the system, using a semi-quantitative approach combined with a Space-Time/Agency framework. Finally, design recommendations for managing performance variability were proposed in order to enhance system safety. The outcomes show that the current automation strategy should focus on adaptive automation based on human-automation collaboration, rather than full automation. In conclusion, the FRAM analysis supports decision-makers in enhancing safety, enriched by the identification of non-linear and complex risks.

Simulator
Software

11 versions available

Hey, watch where you’re going! An on-road study of driver scanning failures towards pedestrians and cyclists

Year: 2021

Authors: N Kaya, J Girgis, B Hansma, B Donmez

The safety of Vulnerable Road Users (VRUs), such as pedestrians and cyclists, is a serious public health concern, especially at urban intersections. A major reason for vehicle-VRU collisions is driver attentional errors. Prior studies suggest that cross-modal transportation experiences (e.g., being a driver who also cycles) improve visual attention allocation toward VRUs. However, these studies were conducted in simulators or in a laboratory, limiting their generalizability to real world driving. We utilized an instrumented vehicle equipped with eye tracking technology to examine (a) the prevalence of drivers’ visual scanning failures toward VRUs at real intersections and (b) whether there is an effect of cycling experience on this prevalence. Twenty-six experienced drivers (13 cyclists and 13 non-cyclists), between the ages of 35 and 54, completed 18 different turns at urban Toronto intersections, for which gaze and video data were utilized to determine drivers’ visual scanning failures towards areas where conflicting VRUs could approach. Among the 443 unique turn events, 25% were identified as having a visual scanning failure. Results from a mixed effects logit model showed that the odds of committing visual scanning failures towards VRUs during a turning maneuver at an intersection were 2.01 times greater for drivers without cycling experience compared to drivers with cycling experience. Given that our participants represented a low crash-risk age group, this study suggests that the rate at which VRUs go unattended may be much higher in the general driving population.
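The odds ratio of 2.01 reported above corresponds to exponentiating a logit-model coefficient. A minimal sketch of how such a figure translates into probabilities (the coefficient and the cyclist failure probability below are made-up illustrations back-derived from the reported odds ratio, not the authors' fitted model):

```python
import math

# Hypothetical logit coefficient for "no cycling experience";
# exp(beta) recovers an odds ratio of ~2.01 as reported in the abstract.
beta = 0.698
odds_ratio = math.exp(beta)

# Illustrative conversion: if cyclists missed a scan with probability 0.18
# (made-up figure), the implied non-cyclist probability follows from
# multiplying the odds, not the probability, by the odds ratio.
p_cyclist = 0.18
odds_cyclist = p_cyclist / (1 - p_cyclist)
odds_non_cyclist = odds_ratio * odds_cyclist
p_non_cyclist = odds_non_cyclist / (1 + odds_non_cyclist)

print(round(odds_ratio, 2), round(p_non_cyclist, 3))
```

Note that doubling the odds does not double the probability; the gap shrinks as the baseline probability grows.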

Eye Tracking Glasses
Software

6 versions available

How will drivers take back control in automated vehicles? A driving simulator test of an interleaving framework

Year: 2021

Authors: D Nagaraju, A Ansah, NAN Ch, C Mills

We explore the transfer of control from an automated vehicle to the driver. Based on data from N=19 participants in a driving simulator experiment, we find evidence that the transfer of control often does not take place in one step. In other words, when the automated system requests the transfer of control back to the driver, the driver often does not simply stop the non-driving task. Rather, the transfer unfolds as a process of interleaving the non-driving and driving tasks. We also find that the process is moderated by the length of time available for the transfer of control: interleaving is more likely when more time is available. Interface designs for automated vehicles must take these results into account so as to allow drivers to safely take back control from automation.

Eye Tracking Glasses
Simulator

4 versions available

In-vehicle displays to support driver anticipation of traffic conflicts in automated vehicles

Year: 2021

Authors: D He, D Kanaan, B Donmez

Objective: This paper investigates the effectiveness of in-vehicle displays in supporting drivers’ anticipation of traffic conflicts in automated vehicles (AVs). Background: Providing takeover requests (TORs) along with information on automation capability (AC) has been found effective in supporting AV drivers’ reactions to traffic conflicts. However, it is unclear what type of information can support drivers in anticipating traffic conflicts, so they can intervene (pre-event action) or prepare to intervene (pre-event preparation) proactively to avert them. Method: In a driving simulator study with 24 experienced and 24 novice drivers, we evaluated the effectiveness of two in-vehicle displays in supporting anticipatory driving in AVs with adaptive cruise control and lane keeping assistance: TORAC (TOR + AC information) and STTORAC displays (surrounding traffic (ST) information + TOR + AC information). Both displays were evaluated against a baseline display that only showed whether the automation was engaged. Results: Compared to the baseline display, STTORAC led to more anticipatory driving behaviors (pre-event action or pre-event preparation) while TORAC led to less, along with decreased attention to environmental cues that indicated an upcoming event. STTORAC led to the highest level of driving safety, as indicated by minimum gap time for scenarios that required driver intervention, followed by TORAC, and then the baseline display. Conclusions: Providing surrounding traffic information to drivers of AVs, in addition to TORs and automation capability information, can support their anticipation of potential traffic conflicts. Without the surrounding traffic information, drivers can over-rely on displays that provide TORs and automation capability information.

Simulator
Software

7 versions available

Integrating technology in psychological skills training for performance optimization in elite athletes: A systematic review

Year: 2021

Authors: M Siekańska, RZ Bondár, S di Fronso

Objectives: The aim of the current study was to systematically review the literature on the integration of technology in psychological skills training (PST) to optimize elite athletes’ performance. Design: Systematic review. Method: Published English, Italian, and Russian language articles were identified using electronic databases. Eighteen articles (out of 3753 records) fulfilled the inclusion criteria, and their quality was assessed using the Mixed Method Appraisal Tool (MMAT). Six papers were judged to be excellent and four to be high quality. There were significant methodological inconsistencies across eight studies. Overall quality assessment scores ranged from 20% to 100%. Results: The included studies implemented various technologies, in combination with PST, to identify, monitor, and/or intervene to optimize elite athletes' performance. The results suggested that integration took on different meanings: functional integration, integration between technologies and measures, and integration between technology, theoretical framework, and psychological skills training. There was no distinct consistency between the studies with regard to the theory or model used. Conclusions: Technology and mental training should not be viewed as interchangeable facets of performance enhancement, but rather as complementary ones: technology integrated into psychological skills training can help identify and monitor optimal performance and support more effective interventions.

Eye Tracking Glasses
Software

4 versions available

Multi-sensor eye-tracking systems and tools for capturing student attention and understanding engagement in learning: A review

Year: 2021

Authors: Y Wang, S Lu, D Harter

Dramatic advances in the design and deployment of sensors, edge communication and computing in recent years have enabled a significant shift and evolution in education. Pervasive learning across various learning environments, including face-to-face classrooms, online learning, virtual learning, and hybrid learning, is becoming the dominant learning paradigm for students who expect highly flexible and efficient options for studying at their own time, place, and pace. In particular, in our post-pandemic present, online learning will continue to gain importance as a widely available and often required learning component. This article provides a comprehensive review of state-of-the-art systems and studies for assisting student learning and capturing student attention and engagement in many different ways. It focuses on sensors and hardware, ranging from commercial multi-sensor eye-tracking devices to open-source, low-cost systems. We also explore and present system infrastructure, data features, data processing techniques, tools and software, and key technologies that are useful for enhancing many student learning environments. Additionally, the advantages, use cases, features, and limitations of different systems and techniques are explored and contrasted, and the results of our comparative analysis are summarized in tables for readers. This review article can assist both teachers and researchers in better understanding current sensors, multi-sensor systems, and related technologies for capturing student attention and assessing their performance. It also provides practical information for novel system design, promotes ongoing research, and fosters sensing and technology innovations for smart education.

Eye Tracking Glasses
Software

3 versions available

Multitasking in driving as optimal adaptation under uncertainty

Year: 2021

Authors: JPP Jokinen, T Kujala, A Oulasvirta

Objective: The objective was to better understand how people adapt multitasking behavior when circumstances in driving change and how safe versus unsafe behaviors emerge. Background: Multitasking strategies in driving adapt to changes in the task environment, but the cognitive mechanisms of this adaptation are not well known. Missing is a unifying account to explain the joint contribution of task constraints, goals, cognitive capabilities, and beliefs about the driving environment. Method: We model the driver’s decision to deploy visual attention as a stochastic sequential decision-making problem and propose hierarchical reinforcement learning as a computationally tractable solution to it. The supervisory level deploys attention based on per-task value estimates, which incorporate beliefs about risk. Model simulations are compared against human data collected in a driving simulator. Results: Human data show adaptation to the attentional demands of ongoing tasks, as measured in lane deviation and in-car gaze deployment. The predictions of our model fit the human data on these metrics. Conclusion: Multitasking strategies can be understood as optimal adaptation under uncertainty, wherein the driver adapts to cognitive constraints and the task environment’s uncertainties, aiming to maximize the expected long-term utility. Safe and unsafe behaviors emerge as the driver has to arbitrate between conflicting goals and manage uncertainty about them. Application: Simulations can inform studies of conditions that are likely to give rise to unsafe driving behavior.
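The supervisory mechanism described above, deploying gaze based on per-task value estimates that incorporate beliefs about risk, can be caricatured in a few lines: uncertainty about lane position grows while gaze is in-car, which raises the value of a road glance until it wins the arbitration. A toy sketch only (the task names, payoff values, and uncertainty-growth rule are illustrative assumptions, not the authors' hierarchical reinforcement learning model):

```python
def choose_task(lane_uncertainty, in_car_value):
    # Per-task value estimates: a road glance pays off more as belief
    # in lane-position uncertainty (risk) grows; the in-car task has a
    # fixed payoff in this toy model.
    road_value = lane_uncertainty * 2.0
    return "road" if road_value >= in_car_value else "in_car"

uncertainty = 0.0
gaze_log = []
for step in range(10):
    task = choose_task(uncertainty, in_car_value=1.0)
    gaze_log.append(task)
    if task == "road":
        uncertainty = 0.0   # a road glance resets lane uncertainty
    else:
        uncertainty += 0.3  # uncertainty accumulates while gazing in-car

print(gaze_log)
```

Even this caricature reproduces the qualitative pattern of the account: periodic road glances emerge from value arbitration rather than from a fixed schedule, and lowering the in-car payoff or raising the uncertainty growth rate shifts gaze toward the road.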

Simulator
Software

13 versions available

Novel time-delay side-collision warning model at non-signalized intersections based on vehicle-to-infrastructure communication

Year: 2021

Authors: N Lyu, J Wen, C Wu

In complex traffic environments, collision warning systems that rely only on in-vehicle sensors are limited in accuracy and range. Vehicle-to-infrastructure (V2I) communication systems, however, offer more robust information exchange and thus more reliable warnings. In this study, V2I was used to analyze side-collision warning models at non-signalized intersections: a novel time-delay side-collision warning model was developed according to the motion compensation principle. This novel time-delay model was compared with and verified against a traditional side-collision warning model. Using a V2I-oriented simulated driving platform, three vehicle-vehicle collision scenarios were designed at non-signalized intersections. Twenty participants were recruited to conduct simulated driving experiments to test and verify the performance of each collision warning model. The results showed that compared with no warning system, both side-collision warning models reduced the proportion of vehicle collisions. In terms of efficacy, the traditional model generated an effective warning in 84.2% of cases, while the novel time-delay model did so in 90.2%. In terms of response time and conflict time difference, the traditional model produced a longer response time of 0.91 s (versus 0.78 s for the time-delay model), but the time-delay model reduced driving risk with a larger conflict time difference. Based on an analysis of driver gaze changes post-warning, the statistical results showed that the proportion of effective gaze changes reached 84.3%. Based on subjective evaluations, drivers reported a higher degree of acceptance of the time-delay model. Therefore, the time-delay side-collision warning model for non-signalized intersections proposed herein can improve the applicability and efficacy of warning systems in such complex traffic environments and provide a reference for safety applications in V2I systems.
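The motion compensation principle mentioned above can be illustrated as extrapolating a stale V2I-broadcast position over the communication latency before computing the conflict check. A minimal sketch (constant-velocity extrapolation, the warning threshold, and all numbers are illustrative assumptions, not the paper's actual formulation):

```python
def compensate(pos, vel, delay):
    """Extrapolate a V2I-broadcast position over the communication
    delay, assuming constant velocity during the latency window."""
    x, y = pos
    vx, vy = vel
    return (x + vx * delay, y + vy * delay)

def time_to_conflict(coord, speed, conflict_coord):
    """Time for a vehicle to reach the conflict point along its axis."""
    return abs(conflict_coord - coord) / speed

# Illustrative non-signalized intersection: ego travels along x,
# crossing vehicle along y, conflict point at the origin.
delay = 0.2                      # s of V2I latency (made-up figure)
ego_x, ego_speed = -30.0, 15.0   # m, m/s
other_raw = (0.0, -40.0)         # stale broadcast position
other_vel = (0.0, 20.0)          # m/s

_, other_y = compensate(other_raw, other_vel, delay)
t_ego = time_to_conflict(ego_x, ego_speed, 0.0)
t_other = time_to_conflict(other_y, other_vel[1], 0.0)

# Warn when the conflict-time difference falls below a threshold.
conflict_gap = abs(t_ego - t_other)
print(round(conflict_gap, 2), conflict_gap < 0.5)
```

Without the compensation step, the stale position would inflate the crossing vehicle's time to the conflict point and the warning could fire too late; the extrapolation is what makes the model "time-delay" aware.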

Simulator
Software

12 versions available