Publication Hub Archive

D-Lab

You have reached the Ergoneers Publication Hub for D-Lab.

Total results: 417

Real-time detection method of angry driving behavior based on bracelet data

Year: 2025 | Published by: 1. Key Laboratory of Automotive Transportation Safety Assurance Technology for Transportation Industry, Chang'an University, Xi'an 710064, China; 2. School of Automobile, Chang'an University, Xi'an 710064, China

Authors: Shi-feng NIU (1,2), Shi-jie YU (2), Yan-jun LIU (2), Chong MA (2)

A method for detecting drivers' angry driving behavior was designed using a widely available smart bracelet, providing a new approach to the effective monitoring of angry driving. 50 drivers were recruited for a simulated driving experiment, and a simulated driving scene designed to provoke anger was created. The heart rate (HR) index and eight heart rate variability (HRV) indexes obtained from the bracelet data (RR.mean, SDNN, RMSSD, PNN50, SDSD, HF, LF, and LF/HF) were used to study the correlation between the collected indexes and angry driving behavior and to screen out the indexes with significant influence. Finally, three methods, namely support vector machine (SVM), K-nearest neighbor (KNN), and linear discriminant analysis (LDA), were used to establish and verify the detection model of angry driving behavior. The results show that the model based on the KNN algorithm performs best at anger recognition: the accuracy of anger intensity recognition reaches 75%, and the accuracy of anger state recognition reaches 86%. The results show that a wearable device (smart bracelet) can reasonably detect the driver's anger state and anger intensity. Key words: vehicle application engineering, angry driving behavior, machine learning, smart bracelet, heart rate variability
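The KNN classifier the abstract describes can be sketched in a few lines: nearest neighbours in HRV feature space, majority vote over the labels. This is a minimal illustration, not the paper's implementation; the feature vectors, labels, and query below are hypothetical placeholders.

```python
# Minimal KNN sketch: classify anger state from HRV-style features.
# All numbers below are hypothetical, not data from the study.
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Majority vote over the k nearest training samples (Euclidean distance)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (SDNN in ms, RMSSD in ms, LF/HF ratio).
train = [(55.0, 42.0, 1.1), (58.0, 45.0, 1.0), (30.0, 20.0, 2.6), (28.0, 18.0, 2.9)]
labels = ["calm", "calm", "angry", "angry"]

print(knn_predict(train, labels, (31.0, 21.0, 2.5)))  # nearest neighbours vote "angry"
```

In the study the same idea is applied to the bracelet-derived indexes that survived the significance screening, with anger intensity as a multi-class label.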

1 version available: Journal of Jilin University (Engineering and Technology Edition)

Study on the Influence of Rural Highway Landscape Green Vision Rate on Driving Load Based on Factor Analysis

Year: 2025 | Published by: School of Civil Engineering, Architecture and the Environment, Hubei University of Technology, Wuhan 430068, China; Key Laboratory of Intelligent Health Perception and Ecological Restoration of Rivers and Lakes, Ministry of Education, Hubei University of Technology, Wuhan 430068, China

Authors: Hao Li, Jiabao Yang, Heng Jiang

The green vision rate of rural highway greening landscape is a key factor affecting the driver's visual load. This paper therefore uses eye tracking to study the visual characteristics of drivers in different green vision environments on rural highways in Xianning County. Based on the HSV color space model, four rural highway sections with green vision rates of 10–20%, 20–30%, 30–40%, and 40–50% were obtained. Through a real-car test, visual indicators such as the driver's pupil area, fixation time, saccade time, saccade angle, and saccade speed were obtained for each section. A visual load quantization model combined with factor analysis was used to explore the degree to which each section's green vision rate influences the driver's visual load. The results show that the drivers' visual load in the four segments ranks as follows: Z10–20% > Z20–30% > Z30–40% > Z40–50%. When the green vision rate is 10–20%, the driver's fixation time becomes longer, the pupil area becomes larger, the visual load is the highest, and driving is unstable. When the green vision rate is 40–50%, the driver's fixation time and pupil area reach their minimum, the visual load is the lowest, and driving stability is the highest. These results can provide theoretical support for the design of rural highway landscape green vision rates and help advance theoretical research on traffic safety.
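A green vision rate in the spirit of the paper's HSV-based approach can be sketched as the fraction of image pixels whose hue falls in a green band. The hue window, saturation/value cutoffs, and the tiny four-pixel "image" below are illustrative assumptions, not the authors' thresholds.

```python
# Sketch: green vision rate as the fraction of "green" pixels under HSV.
# Thresholds and pixel data are illustrative, not from the study.
import colorsys

def green_vision_rate(pixels):
    """Fraction of pixels whose HSV hue falls in a nominal green band.

    `pixels` is an iterable of (r, g, b) tuples with components in 0..255.
    """
    green = total = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # Roughly 70-170 degrees of hue, with enough saturation/value to be visible.
        if 0.19 <= h <= 0.47 and s > 0.15 and v > 0.15:
            green += 1
        total += 1
    return green / total if total else 0.0

# Two vegetation-green pixels out of four -> rate of 0.5 (i.e., 50%).
pixels = [(34, 139, 34), (60, 179, 113), (135, 100, 80), (70, 130, 180)]
print(green_vision_rate(pixels))  # 0.5
```

Applied to roadside video frames, this per-frame ratio is what places a highway section into one of the 10–20% … 40–50% bands.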

Eye Tracking Glasses
Software

2 versions available

Association between length of upstream tunnels and visual load in connection zones of highway tunnel groups

Year: 2024

Authors: H Zheng, S Rasouli, Z Du, S Wang

To investigate drivers' visual load and comfort in the segments between adjacent tunnels (tunnel group connection zones), this study uses the maximum transient vibration value (MTVV) of the pupil area as the index to analyze drivers' visual load characteristics throughout the connection zones of highway tunnel groups. Data were collected in field driving experiments, during which the pupil area change rate was measured as an additional indicator to evaluate whether the connection zones are long enough from the perspective of drivers' visual adaptation. The findings show that the length of the upstream tunnel affects the visual strain of drivers entering the connection zone. Visual load ranked by upstream tunnel length was, in descending order: short > extra-long > long > medium. The visual discomfort level in the connection zone after a short upstream tunnel was shown to be "uncomfortable," rising slightly to "fairly uncomfortable" when the upstream tunnel is extra-long or long. Departing from a medium upstream tunnel produced the highest comfort level, "a little uncomfortable." When the upstream tunnel is short or medium in length, the required light adaptation time is 5 s, and the connection zone length threshold, i.e., the minimum connection zone length at which two consecutive tunnels do not affect each other in terms of drivers' visual load, is calculated to be 713.89 m; the drivers' pupil area change during light adaptation is in the range of 30–40%. When the upstream tunnel is long or extra-long, the light adaptation times are 8 s and 9 s, respectively, the connection zone thresholds are 797.22 m and 825 m, respectively, and the drivers' pupil area change during light adaptation is in the range of 38–50% and 43–50%. These findings can be used for the design of connection zones between tunnels in a highway tunnel group.
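The MTVV index the abstract relies on is commonly defined (e.g., in vibration analysis) as the maximum of a running RMS over a short window; a minimal sketch of that computation, applied here to a hypothetical pupil-area signal, looks like this. The window length and signal values are illustrative assumptions, not the study's parameters.

```python
# Sketch: MTVV as the peak of a running RMS over a sliding window.
# Window size and the signal below are illustrative, not from the study.
import math

def running_rms(signal, window):
    """RMS over each sliding window of `window` samples."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
        for i in range(len(signal) - window + 1)
    ]

def mtvv(signal, window):
    """Maximum transient value: the peak of the running RMS."""
    return max(running_rms(signal, window))

# Hypothetical pupil-area change signal (arbitrary units), e.g. around a tunnel portal.
signal = [0.1, 0.2, 0.1, 1.5, 1.8, 1.2, 0.3, 0.2]
print(round(mtvv(signal, window=3), 3))
```

The peak picks out the transient burst of pupil change at the portal, which is why MTVV is a sharper load index than a whole-run RMS.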

3 versions available

Biosignals Monitoring for Driver Drowsiness Detection using Deep Neural Networks

Year: 2024

Authors: J Alguindigue, A Singh, A Narayan, S Samuel

Drowsy driving poses a significant risk to road safety, necessitating the development of reliable drowsiness detection systems. In particular, the advancement of Artificial Intelligence based neuroadaptive systems is imperative to effectively mitigate this risk. Toward this goal, the present research investigates the efficacy of physiological indicators, including heart rate variability (HRV), percentage of eyelid closure over the pupil over time (PERCLOS), blink rate, blink percentage, and electrodermal activity (EDA) signals, in predicting driver drowsiness. The study was conducted with a cohort of 30 participants in controlled simulated driving scenarios, with half driving in a non-monotonous environment and the other half in a monotonous environment. Three deep learning algorithms were employed: a sequential neural network (SNN) for HRV, a 1D convolutional neural network (1D-CNN) for EDA, and a convolutional recurrent neural network (CRNN) for eye tracking. The HRV-based and EDA-based models exhibited strong performance in drowsiness classification, with the HRV model achieving precision, recall, and F1-score of 98.28%, 98%, and 98%, respectively, and the EDA model achieving 96.32%, 96%, and 96% for the same metrics. The confusion matrices further illustrate the models' performance, highlighting high accuracy for both the HRV and EDA models and affirming their efficiency in detecting driver drowsiness. However, the eye-based model faced difficulties in identifying drowsiness instances, potentially attributable to dataset imbalances and underrepresentation of specific fatigue states. Despite the challenges, this work significantly contributes to ongoing efforts to improve road safety by laying the foundation for effective real-time neuro-adaptive systems for drowsiness detection and mitigation.
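PERCLOS, one of the eye-based indicators in the study, is conventionally the proportion of time the eyelid covers at least 80% of the pupil. A minimal sketch under that convention (the frame data and sampling rate below are hypothetical; a real pipeline would derive closure values from the eye tracker):

```python
# Sketch: PERCLOS as the fraction of frames with eyelid closure >= 80%.
# The frame values below are hypothetical, not data from the study.
def perclos(closure_per_frame, threshold=0.8):
    """Fraction of frames in which eyelid closure meets or exceeds `threshold`.

    `closure_per_frame` holds per-frame closure values in 0..1.
    """
    closed = sum(1 for c in closure_per_frame if c >= threshold)
    return closed / len(closure_per_frame)

# One second of hypothetical 10 Hz data: 3 of 10 frames count as closed.
frames = [0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.85, 0.2, 0.1, 0.4]
print(perclos(frames))  # 0.3
```

In practice PERCLOS is computed over a sliding window of a minute or more, so a single high value reflects sustained eyelid droop rather than a blink.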

2 versions available

Breaking the silence: understanding teachers’ use of silence in classrooms

Year: 2024

Authors: SC Tan, AL Tan, AVY Lee

Silence in classrooms is an undervalued and understudied phenomenon. There is limited research on how teachers behave and think during their silences in lessons, and methodological constraints arise from the lack of teachers' talk during silence. This study used eye-tracking technology to visualize the noticing patterns of two science teachers during silences lasting more than three seconds. Using video data recorded from cameras and eye trackers, we examined each silent event and interpreted teachers' perceptions and interpretations with consideration of eye fixations, actions of students and teachers during the silence, and teachers' actions immediately after they broke the silence. We further examined expert-novice differences in teachers' use of silence. Four categories of teachers' silence were identified: silence for (1) preparing the classroom for learning; (2) teaching, questioning, and facilitating learning; (3) reflecting and thinking; and (4) behavioural management. Expert-novice differences were identified, especially in the teachers' use of silence for approaches to teaching, reflection, and behavioural management. The novel contribution of this paper lies in the characterization of silences as observed in actual classroom settings, as well as the methodological innovation of using eye trackers and video to overcome the constraint of lacking talk data during silence.

1 version available

Designing an Experimental Platform to Assess Ergonomic Factors and Distraction Index in Law Enforcement Vehicles during Mission-Based Routes

Year: 2024

Authors: MH Cheng, J Guan, HK Dave, RS White, RL Whisler

Mission-based routes for various occupations play a crucial role in occupational driver safety, with accident causes varying according to specific mission requirements. This study focuses on the development of a system to address driver distraction among law enforcement officers by optimizing the Driver–Vehicle Interface (DVI). Poorly designed DVIs in law enforcement vehicles, often fitted with aftermarket police equipment, can lead to perceptual-motor problems such as obstructed vision, difficulty reaching controls, and operational errors, resulting in driver distraction. To mitigate these issues, we developed a driving simulation platform specifically for law enforcement vehicles. The development process involved the selection and placement of sensors to monitor driver behavior and interaction with equipment. Key criteria for sensor selection included accuracy, reliability, and the ability to integrate seamlessly with existing vehicle systems. Sensor positions were strategically located based on previous ergonomic studies and digital human modeling to ensure comprehensive monitoring without obstructing the driver’s field of view or access to controls. Our system incorporates sensors positioned on the dashboard, steering wheel, and critical control interfaces, providing real-time data on driver interactions with the vehicle equipment. A supervised machine learning-based prediction model was devised to evaluate the driver’s level of distraction. The configured placement and integration of sensors should be further studied to ensure the updated DVI reduces driver distraction and supports safer mission-based driving operations.

2 versions available

Dynamic Alert Design Based on Driver’s Cognitive State for Take-over Request in Automated Vehicles

Year: 2024

Authors: W Umpaipant

This thesis investigates the effectiveness of dynamic alert systems tailored to drivers’ cognitive states in automated driving environments, focusing on enhancing takeover readiness during critical transitions. Utilizing a large-scale immersive driving simulation, the study evaluated drivers’ response times and physiological measures when reacting to various alert intensities and the presence of a secondary typing task. The experiment revealed that dynamic alerts significantly improved response times and takeover performance, especially in high-distraction scenarios. Drivers responded more effectively when alerts were adjusted to their cognitive load, with strong alerts resulting in the fastest reaction times under distracted conditions. On average, dynamic alerts reduced response times by approximately 1.75 seconds compared to static alerts. Additionally, higher lateral accelerations were observed under strong alerts, indicating more decisive maneuvering. Self-rated attention-capturing scores were notably higher with dynamic alerts, particularly under strong alert conditions and in the presence of secondary tasks. The ANOVA results showed significant improvements in attention capturing and overall alert effectiveness when dynamic alerts were employed, demonstrating the robust design’s ability to capture attention and enhance driver responsiveness. The study confirmed that adaptive alert designs, which adjust based on the driver’s cognitive state, can markedly enhance overall driving experience and safety. Participants reported higher levels of confidence with dynamic alerts, especially in scenarios involving secondary tasks. Despite the strong alerts, annoyance levels remained low, indicating that dynamic alerts are effective without causing undue stress. 
These results underscore the potential of using adaptive systems to improve safety and efficiency in automated driving, advocating for a more nuanced approach to system alerts that considers the variable cognitive states of drivers. Future research should validate these findings with on-road studies, explore a broader range of alert modalities, and refine physiological monitoring techniques to further enhance adaptive alert systems.

2 versions available

Gaze alternation predicts inclusive next-speaker selection: evidence from eyetracking

Year: 2024

Authors: C Rühlemann

Next-speaker selection refers to the practices conversationalists rely on to designate who should speak next. Speakers have various methods available to them to select a next speaker. Certain actions, however, systematically co-select more than one particular participant to respond. These actions include asking "open-floor" questions, which are addressed to more than one recipient and which more than one recipient is eligible to answer. Here, next-speaker selection is inclusive. How are these questions multimodally designed? How does their multimodal design differ from that of "closed-floor" questions, in which just one participant is selected as next speaker and next-speaker selection is exclusive? Based on eyetracking data collected in naturalistic conversation, this study demonstrates that, unlike closed-floor questions, open-floor questions can be predicted from the speaker's gaze alternation during the question. The discussion highlights cases of gaze alternation in open-floor questions and exhaustively explores deviant cases in closed-floor questions. It also addresses the functional relation of gaze alternation and gaze selection, arguing that the two selection techniques may collide, creating disorderly turn-taking due to a fundamental change in participation framework from focally dyadic to inclusive. Data are in British and American English.

1 version available

GazeAway: Designing for Gaze Aversion Experiences

Year: 2024

Authors: N Overdevest, R Patibanda, A Saini

Gaze aversion is embedded in our behaviour: we look at a blank area to support remembering and creative thinking, and as a social cue that we are thinking. We hypothesise that a person's gaze aversion experience can be mediated through technology, in turn supporting embodied cognition. In this design exploration we present six ideas for interactive technologies that mediate the gaze aversion experience. One of these ideas we developed into "GazeAway": a prototype that swings a screen into the wearer's field of vision when they perform gaze aversion. Six participants experienced the prototype, and based on their interviews, we found that GazeAway changed their gaze aversion experience in three ways: increased awareness of gaze aversion behaviour, novel cross-modal perception of gaze aversion behaviour, and changes to gaze aversion behaviour to suit social interaction. We hope that, ultimately, our design exploration offers a starting point for the design of gaze aversion experiences.

3 versions available

Guiding gaze gestures on smartwatches: Introducing fireworks

Year: 2024

Authors: W Delamare, D Harada, L Yang, X Ren

Smartwatches enable interaction anytime and anywhere, with both digital and augmented physical objects. However, situations with busy hands can prevent user input. To address this limitation, we propose Fireworks, a hands-free alternative that empowers smartwatch users to trigger commands through intuitive gaze gestures with post-activation guidance. Fireworks activates commands by guiding users to follow targets moving from the screen center to the edge, mimicking real-life fireworks. We present the experimental design and evaluation of two Fireworks instances. The first design employs temporal parallelization, displaying a few dynamic targets during microinteractions (e.g., snoozing a notification while cooking). The second design displays targets sequentially to support more commands (e.g., 20 commands), which suits scenarios beyond microinteractions (e.g., turning on lights in a smart home). Results show that Fireworks' single straight gestures enable faster and more accurate command selection than state-of-the-art baselines, namely Orbits and Stroke. Additionally, participants expressed a clear preference for Fireworks' visual guidance.
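Resolving a straight centre-to-edge gaze gesture to a command can be sketched as matching the direction of the gaze vector to the nearest target direction. This is a hedged illustration of the general idea, not the paper's algorithm; the command names, angles, and the y-up angle convention are assumptions.

```python
# Sketch: map a straight gaze gesture to the command whose outward
# direction it best matches. Names and angles are illustrative.
import math

def select_command(dx, dy, targets):
    """Pick the target whose outward direction is closest to the gaze vector.

    `targets` maps command names to directions in degrees (0 = right, CCW, y-up).
    """
    gaze_angle = math.degrees(math.atan2(dy, dx)) % 360

    def angular_gap(angle):
        # Smallest absolute difference on the circle.
        return min((gaze_angle - angle) % 360, (angle - gaze_angle) % 360)

    return min(targets, key=lambda name: angular_gap(targets[name]))

# Four hypothetical commands radiating from the watch-face centre.
targets = {"snooze": 0, "reply": 90, "dismiss": 180, "open": 270}
print(select_command(dx=1.0, dy=2.0, targets=targets))  # closest to 90 deg -> "reply"
```

A single nearest-direction test like this is what makes one straight gesture sufficient for selection, in contrast to the continuous pursuit matching used by baselines such as Orbits.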

4 versions available