Publication Hub Archive

Eye Tracker

You have reached the Ergoneers Publication Hub for:

Used Tool > Eye Tracker


Total results: 582

Group cycling in urban environments: Analyzing visual attention and riding performance for enhanced road safety

Year: 2025 | Published by: School of Safety Science, Tsinghua University, Beijing 100084, China; Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Anhui Province Key Laboratory of Human Safety, Hefei Anhui 230601, China; Beijing Key Laboratory of Comprehensive Emergency Response Science, Beijing, China; College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China

Authors: Meng Li, Yan Zhang, Tao Chen, Hao Du, Kaifeng Deng

China is a major cycling nation with nearly 400 million bicycles, which significantly alleviate urban traffic congestion. However, safety concerns are prominent: approximately 35% of cyclists ride in groups with family, friends, or colleagues, which exerts a significant impact on the traffic system. This study focuses on group cycling, employing urban cycling experiments, GPS trajectory tracking, and eye tracking to analyze the visual search and cycling control of both groups and individuals. Findings reveal that group cyclists tend to focus more on their companions, leading to a more dispersed gaze pattern than individual riders, who focus more on the direct path and surroundings. Group riders also exhibit shorter fixation times on traffic signs, potentially indicating decreased attention to traffic regulations. Despite similar lateral position deviation, group cyclists exhibit higher steering entropy, indicating greater variability in their steering choices. Additionally, group riders demonstrate varied passing times, suggesting a collective advantage in navigating complex traffic conditions. This study enhances our understanding of bicycles within traffic dynamics, offering valuable insights for traffic management systems.

Eye Tracking Glasses
Software

1 version available: ScienceDirect
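The group-cycling study above reports higher steering entropy for group riders. For readers unfamiliar with the metric, here is a minimal Python sketch of one common formulation (after Nakayama et al.); the Taylor-expansion predictor, the nine-bin layout, and estimating alpha from the trace itself are textbook conventions and assumptions, not details taken from this paper.

```python
import numpy as np

def steering_entropy(theta, alpha=None):
    """Steering entropy of a steering/handlebar angle trace (one common
    formulation, after Nakayama et al. 1999). Higher values mean more
    erratic, less predictable steering corrections."""
    theta = np.asarray(theta, dtype=float)
    # Second-order Taylor prediction of each sample from the three before it
    pred = 2.5 * theta[2:-1] - 2.0 * theta[1:-2] + 0.5 * theta[:-3]
    err = theta[3:] - pred
    if alpha is None:
        # 90th percentile of |error|; ideally taken from a baseline ride
        alpha = np.percentile(np.abs(err), 90)
    edges = np.array([-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5]) * alpha
    counts = np.bincount(np.digitize(err, edges), minlength=9)
    p = counts / counts.sum()
    p = p[p > 0]                                        # 0 * log 0 treated as 0
    return float(-np.sum(p * np.log(p) / np.log(9)))    # entropy in base 9

# e.g. compute steering_entropy(handlebar_angle_samples) per rider, then
# compare the group vs. individual distributions
```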

Real-time detection method of angry driving behavior based on bracelet data

Year: 2025 | Published by: 1. Key Laboratory of Automotive Transportation Safety Assurance Technology for Transportation Industry, Chang'an University, Xi'an 710064, China; 2. School of Automobile, Chang'an University, Xi'an 710064, China

Authors: Shi-feng NIU (1,2), Shi-jie YU (2), Yan-jun LIU (2), Chong MA (2)

A method for detecting drivers' angry driving behavior was designed using a widely available smart bracelet, providing a new approach to effective monitoring of angry driving. Fifty drivers were recruited for a simulated driving experiment in which an anger-inducing driving scenario was designed. The heart rate index (HR) and eight heart rate variability (HRV) indexes obtained from the bracelet data, namely RR mean, SDNN, RMSSD, PNN50, SDSD, HF, LF, and LF/HF, were used to study the correlation between the collected indexes and angry driving behavior and to screen out the significantly influential indexes. Finally, three methods, support vector machine (SVM), K-nearest neighbor (KNN), and linear discriminant analysis (LDA), were used to establish and verify the detection model of angry driving behavior. The results show that the model based on the KNN algorithm performs best at anger recognition: the accuracy of anger intensity recognition reaches 75%, and the accuracy of anger state recognition reaches 86%. The results show that a wearable device (smart bracelet) can reasonably detect the driver's anger state and anger intensity. Keywords: vehicle application engineering, angry driving behavior, machine learning, smart bracelet, heart rate variability

1 version available: Journal of Jilin University (Engineering and Technology Edition)
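The abstract above lists standard time-domain HRV indexes (RR mean, SDNN, RMSSD, PNN50, SDSD) and a KNN classifier. As a hedged illustration only, not the authors' pipeline, a minimal scikit-learn sketch could look like the following; the synthetic data, window length, and k = 5 are assumptions, and the frequency-domain indexes (HF, LF, LF/HF) are omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def time_domain_hrv(rr_ms):
    """Time-domain HRV indexes for one window of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    return [rr.mean(),                         # RR mean
            rr.std(ddof=1),                    # SDNN
            np.sqrt(np.mean(d ** 2)),          # RMSSD
            d.std(ddof=1),                     # SDSD
            100.0 * np.mean(np.abs(d) > 50)]   # PNN50

# Hypothetical data: one RR-interval window per driving episode plus an
# anger/no-anger label (the study's own windowing and labels are not shown).
rng = np.random.default_rng(0)
windows = [rng.normal(800, 40, size=120) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

X = np.array([time_domain_hrv(w) for w in windows])
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(knn, X, labels, cv=5).mean())
```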

Study on the Influence of Rural Highway Landscape Green Vision Rate on Driving Load Based on Factor Analysis

Year: 2025 | Published by: School of Civil Engineering, Architecture and the Environment, Hubei University of Technology, Wuhan 430068, China; Key Laboratory of Intelligent Health Perception and Ecological Restoration of Rivers and Lakes, Ministry of Education, Hubei University of Technology, Wuhan 430068, China

Authors: Hao Li, Jiabao Yang, Heng Jiang

The green vision rate of rural highway greening landscapes is a key factor affecting the driver's visual load. This paper therefore uses eye tracking to study the visual characteristics of drivers in different green vision environments on rural highways in Xianning County. Based on the HSV color space model, four rural highway sections were obtained with green vision rates of 10~20%, 20~30%, 30~40%, and 40~50%. Through real-vehicle tests, visual indicators such as the driver's pupil area, fixation time, saccade time, saccade angle, and saccade speed were obtained in each green vision rate section. A visual load quantization model combined with factor analysis was used to explore how strongly the green vision rate of each section influences the driver's visual load. The results show that the drivers' visual load in the four segments with different green vision rates ranks as Z10~20% > Z20~30% > Z30~40% > Z40~50%. When the green vision rate is 10~20%, the driver's fixation time becomes longer, the pupil area becomes larger, the visual load is the highest, and driving is unstable. When the green vision rate is 40~50%, the driver's fixation time and pupil area reach their minimum, the visual load is the lowest, and driving stability is the highest. The research results can provide theoretical support for the design of rural highway landscape green vision rate and help advance theoretical research on traffic safety.

Eye Tracking Glasses
Software

2 versions available
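Green vision rate is essentially the share of 'green' pixels in the driver's forward view, and the study above derives its sections from the HSV color space. As a rough illustration under assumed thresholds (the hue/saturation/value bounds below are common vegetation-green defaults, not values from the paper), it could be estimated from a single roadside frame like this:

```python
import cv2
import numpy as np

def green_vision_rate(frame_bgr, h_range=(35, 85), s_min=40, v_min=40):
    """Fraction of pixels whose HSV value falls inside a 'vegetation green'
    band. The hue/saturation/value bounds are assumed defaults, not the
    thresholds used in the study."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([h_range[0], s_min, v_min], dtype=np.uint8)
    upper = np.array([h_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)      # 255 where the pixel counts as green
    return float(np.count_nonzero(mask)) / mask.size

frame = cv2.imread("roadside_view.jpg")        # hypothetical forward-view frame
if frame is not None:
    print(f"green vision rate: {green_vision_rate(frame):.1%}")
```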

Training and dashboard design: Impact on operator performance and mental workload for flight safety

Year: 2025 | Published by: a) Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan; b) National Chung-Shan Institute of Science and Technology, Taichung, Taiwan; c) Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan

Authors: Kuang-Jou Chen (a), Jia-Jing Shin (a), Hsuan-Lin Chu (b), Yan-Lin Chen (b), Chih-Hsing Chu (a), Ying-Yin Huang (c), Yun-Ju Lee (a)

Pilot training, the design of flight instrument panels, and mental workload are essential elements for ensuring aviation safety. Prior studies on icon learning have shown that chunking techniques can improve understanding of icon-related information. The research explores the effects of different learning methods and instrument panel designs on learning. The study compares two types of panel layouts: a chunking layout and a long-scanning path layout. Thirty participants were enlisted and divided into two groups: one using the chunking method and a control group. The chunking group was trained to recognize instruments through functional grouping, whereas the control group received training in a random sequence. Both objective and subjective evaluations were used to assess the participants’ workload. Findings indicated that the chunking group was more efficient in visual search during training. However, the two groups had no notable differences in learning rates or NASA-TLX scores. The results support using chunking as a training strategy and an optimized panel layout to improve performance significantly. By integrating the proven benefits of chunking-based training and optimized panel layouts, the aviation industry could significantly enhance pilot efficiency and reduce mental workload, improving flight safety and operational effectiveness.

Eye Tracking Glasses
Simulator
Software

1 version available: ScienceDirect
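The workload comparison above relies on NASA-TLX scores. For reference, the standard weighted NASA-TLX score is the sum of the six subscale ratings multiplied by their pairwise-comparison weights, divided by 15; the sketch below uses made-up ratings and weights, not the study's data.

```python
# Standard weighted NASA-TLX: six subscales rated 0-100, weights obtained
# from 15 pairwise comparisons (all numbers below are made-up examples).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Overall workload = sum(rating * weight) / 15."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(f"overall workload: {nasa_tlx(ratings, weights):.1f}")
```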

Analysis and regulation of driving behavior in the entrance zone of freeway tunnels: Implementation of visual guidance systems in China

Year: 2024

Authors: R Bei, Z Du, T Huang, J Mei, S He, X Zhang

In China, visual guidance systems are commonly used in tunnels to optimize the visual reference system. However, studies focusing specifically on visual guidance systems in the tunnel entrance zone are limited. Hence, a driving simulation test is performed in this study to quantitatively evaluate the effectiveness of (i) visual guidance devices at different vertical positions (pavement and roadside) and (ii) a multilayer visual guidance system for regulating driving behavior in the tunnel entrance zone. Furthermore, the characteristics of driving behavior and their effects on traffic safety in the tunnel entrance zone are examined. Data such as the vehicle position, area of interest (AOI), throttle position, steering wheel angle, and lane center offset are obtained using a driving simulation platform and an eye-tracking device. As indicators, the first fixation position (FP), starting deceleration position (DP), average throttle position (TPav), number of deceleration stages (NDS), gradual change degree of the vehicle trajectory (GVT), and average steering wheel angle (SWAav) are derived. The regulatory effect of visual guidance devices on driving performance is investigated. First, high-position roadside visual guidance devices effectively reduce decision urgency and significantly enhance deceleration and lane-keeping performance. Specifically, the advanced deceleration performance (AD), smooth deceleration performance (SD), trajectory gradualness (TG), and trajectory stability (TS) in the tunnel entrance zone improve by 63%, 225%, 269%, and 244%, respectively. Additionally, the roadside low-position visual guidance devices primarily target trajectory gradualness, resulting in improvements of 80% and 448% in TG and TS, respectively. Meanwhile, the pavement visual guidance devices focus solely on enhancing TS and demonstrate a relatively lower improvement rate of 99%. Finally, the synergistic effect of these visual guidance devices enables the multilayer visual guidance system to enhance deceleration and lane-keeping performance. This aids drivers in early detection and deceleration at the tunnel entrance zone, reduces the urgency of deceleration decisions, promotes smoother deceleration, and improves the gradualness and stability of trajectories.

Eye Tracking Glasses
Simulator

6 versions available

Association between length of upstream tunnels and visual load in connection zones of highway tunnel groups

Year: 2024

Authors: H Zheng, S Rasouli, Z Du, S Wang

To investigate drivers' visual load and comfort in the distance between adjacent tunnels (tunnel group connection zones), the maximum transient vibration value (MTVV) of the pupil area is used in this study as the index to analyze the visual load characteristics of the driver throughout the connection zones in highway tunnel groups. Data were collected in field driving experiments, during which the pupil area change rate was measured as an additional indicator to evaluate the sufficiency of the connection zone length from the perspective of drivers' visual adaptation. The findings show that the length of the upstream tunnel affects the visual strain of drivers when they enter the connection zone. The visual load, ordered by the length of the upstream tunnel, descends as follows: short > extra-long > long > medium tunnel. The visual discomfort level after a short upstream tunnel was found to be "uncomfortable," while comfort rises slightly to "fairly uncomfortable" in the connection zone when the upstream tunnel is extra-long or long. Departing from a medium upstream tunnel resulted in the highest comfort level, "a little uncomfortable," in the connection zone. When the upstream tunnel is short or medium in length, the required time for light adaptation is 5 s. The connection zone length threshold, i.e., the minimum connection zone length at which two consecutive tunnels do not affect each other in terms of drivers' visual load, is calculated to be 713.89 m. The driver's pupil area change during light adaptation when the upstream tunnel is short or medium is in the range of 30–40%. When the upstream tunnel is long or extra-long, the light adaptation time is 8 s and 9 s, respectively, and the respective connection zone thresholds are 797.22 m and 825 m. The drivers' pupil area change after long and extra-long tunnels during light adaptation is in the range of 38–50% and 43–50%, respectively. Findings in this study can be used for the design of connection zones between tunnels in a highway tunnel group.

Eye Tracking Glasses
Simulator

2 versions available
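The study above uses the maximum transient vibration value (MTVV) of the pupil area as its visual-load index. MTVV is conventionally the peak of a running RMS taken over short sliding windows; a minimal sketch under that assumption (1 s windows, a synthetic pupil-change signal) follows.

```python
import numpy as np

def mtvv(signal, fs, tau=1.0):
    """Maximum transient vibration value: the peak of a running RMS computed
    over sliding windows of tau seconds (1 s is the ISO 2631 convention; the
    window length here is an assumption, not a value taken from the paper)."""
    x = np.asarray(signal, dtype=float)
    win = max(1, int(round(tau * fs)))
    running_rms = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="valid"))
    return float(running_rms.max())

# hypothetical pupil-area change-rate trace sampled at 60 Hz
fs = 60
t = np.arange(0, 20, 1 / fs)
pupil_change = 0.1 * np.random.randn(t.size) + 0.5 * np.exp(-((t - 10.0) ** 2))
print(f"MTVV = {mtvv(pupil_change, fs):.3f}")
```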

Biosignals Monitoring for Driver Drowsiness Detection using Deep Neural Networks

Year: 2024

Authors: J Alguindigue, A Singh, A Narayan, S Samuel

Drowsy driving poses a significant risk to road safety, necessitating the development of reliable drowsiness detection systems. In particular, the advancement of Artificial Intelligence based neuroadaptive systems is imperative to effectively mitigate this risk. Towards reaching this goal, the present research focuses on investigating the efficacy of physiological indicators, including heart rate variability (HRV), percentage of eyelid closure over the pupil over time (PERCLOS), blink rate, blink percentage, and electrodermal activity (EDA) signals, in predicting driver drowsiness. The study was conducted with a cohort of 30 participants in controlled simulated driving scenarios, with half driving in a non-monotonous environment and the other half in a monotonous environment. Three deep learning algorithms were employed: sequential neural network (SNN) for HRV, 1D-convolutional neural network (1D-CNN) for EDA, and convolutional recurrent neural network (CRNN) for eye tracking. The HRV-Based Model and EDA-Based Model exhibited strong performance in drowsiness classification, with the HRV model achieving precision, recall, and F1-score of 98.28%, 98%, and 98%, respectively, and the EDA model achieving 96.32%, 96%, and 96% for the same metrics. The confusion matrix further illustrates the model's performance and highlights high accuracy in both HRV and EDA models, affirming their efficiency in detecting driver drowsiness. However, the Eye-Based Model faced difficulties in identifying drowsiness instances, potentially attributable to dataset imbalances and underrepresentation of specific fatigue states. Despite the challenges, this work significantly contributes to ongoing efforts to improve road safety by laying the foundation for effective real-time neuro-adaptive systems for drowsiness detection and mitigation.

Eye Tracking Glasses
Simulator

2 versions available
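One of the indicators above, PERCLOS, is the proportion of time the eyelid covers most of the pupil within a sliding window. A minimal sketch using the common 80% closure threshold and a 60 s window (assumptions, not parameters reported by the authors) could look like this:

```python
import numpy as np

def perclos(eyelid_closure, fs, window_s=60.0, threshold=0.8):
    """PERCLOS: fraction of samples per sliding window in which the eyelid
    covers at least `threshold` of the pupil. The 80% threshold and 60 s
    window are common conventions, not values from the paper."""
    closed = (np.asarray(eyelid_closure, dtype=float) >= threshold).astype(float)
    win = int(window_s * fs)
    return np.convolve(closed, np.ones(win) / win, mode="valid")  # one value per window position

# hypothetical 5-minute eyelid-closure trace (0 = fully open, 1 = fully closed) at 30 Hz
fs = 30
closure = np.clip(np.random.beta(2, 5, size=5 * 60 * fs), 0.0, 1.0)
print(f"max PERCLOS: {perclos(closure, fs).max():.2%}")
```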

Breaking the silence: understanding teachers’ use of silence in classrooms

Year: 2024

Authors: SC Tan, AL Tan, AVY Lee

Silence in classrooms is an undervalued and understudied phenomenon. There is limited research on how teachers behave and think during their silence in lessons. There are also methodological constraints due to the lack of teachers' talk during silence. This study used eye-tracking technology to visualize the noticing patterns of two science teachers during silence lasting more than three seconds. Using video data recorded from cameras and eye trackers, we examined each silent event and interpreted teachers' perceptions and interpretations with consideration of eye fixations, actions of students and teachers during the silence, and teachers' actions immediately after they broke the silence. We further examined expert-novice differences in teachers' use of silence. Four categories of teachers' silence were identified: silence for (1) preparing the classroom for learning; (2) teaching, questioning, and facilitating learning; (3) reflecting and thinking; and (4) behavioural management. Expert-novice differences were identified, especially in the teachers' use of silence for approaches to teaching, reflection, and behavioural management. The novel contribution of this paper lies in the characterization of silences as observed in actual classroom settings as well as the methodological innovation in using eye trackers and video to overcome the constraints of lack of talk data during silence.

Eye Tracking Glasses
Software

1 version available

Comparing eye–hand coordination between controller-mediated virtual reality, and a real-world object interaction task

Year: 2024

Authors: E Lavoie, JS Hebert, CS Chapman

Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye–hand coordination. This study compares eye–hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants’ gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye–hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye–hand coordination.

Eye Tracking Glasses
Software

7 versions available
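The eye-hand coordination result above hinges on how long gaze has been on the object when the hand arrives (about 500 ms in both conditions). A simplified sketch of that lead-time computation, using hypothetical boolean gaze-on-object and hand-at-object event signals rather than the authors' segmentation, is shown below.

```python
import numpy as np

def eye_lead_time(gaze_on_object, hand_at_object, t):
    """How long gaze has been on the object when the hand first arrives.
    Both inputs are boolean arrays over the shared time vector t; the event
    definitions here are simplified assumptions, not the paper's segmentation."""
    gaze_on_object = np.asarray(gaze_on_object, dtype=bool)
    hand_at_object = np.asarray(hand_at_object, dtype=bool)
    if not hand_at_object.any():
        return np.nan
    hand_idx = int(np.argmax(hand_at_object))          # first sample the hand is at the object
    onsets = np.flatnonzero(np.diff(np.r_[0, gaze_on_object.astype(int)]) == 1)
    onsets = onsets[onsets <= hand_idx]                 # gaze onsets at or before hand arrival
    return np.nan if onsets.size == 0 else t[hand_idx] - t[onsets[-1]]

# 100 Hz example: gaze lands on the box 0.6 s before the hand does
t = np.arange(0, 5, 0.01)
gaze = t >= 1.4
hand = t >= 2.0
print(f"eye lead time: {eye_lead_time(gaze, hand, t):.2f} s")   # ~0.60 s
```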

Designing an Experimental Platform to Assess Ergonomic Factors and Distraction Index in Law Enforcement Vehicles during Mission-Based Routes

Year: 2024

Authors: MH Cheng, J Guan, HK Dave, RS White, RL Whisler

Mission-based routes for various occupations play a crucial role in occupational driver safety, with accident causes varying according to specific mission requirements. This study focuses on the development of a system to address driver distraction among law enforcement officers by optimizing the Driver–Vehicle Interface (DVI). Poorly designed DVIs in law enforcement vehicles, often fitted with aftermarket police equipment, can lead to perceptual-motor problems such as obstructed vision, difficulty reaching controls, and operational errors, resulting in driver distraction. To mitigate these issues, we developed a driving simulation platform specifically for law enforcement vehicles. The development process involved the selection and placement of sensors to monitor driver behavior and interaction with equipment. Key criteria for sensor selection included accuracy, reliability, and the ability to integrate seamlessly with existing vehicle systems. Sensor positions were strategically located based on previous ergonomic studies and digital human modeling to ensure comprehensive monitoring without obstructing the driver’s field of view or access to controls. Our system incorporates sensors positioned on the dashboard, steering wheel, and critical control interfaces, providing real-time data on driver interactions with the vehicle equipment. A supervised machine learning-based prediction model was devised to evaluate the driver’s level of distraction. The configured placement and integration of sensors should be further studied to ensure the updated DVI reduces driver distraction and supports safer mission-based driving operations.

Simulator
Software

2 versions available