Publication Hub Archive

Driving Simulation

You have reached the Ergoneers Publication Hub for:

Field of Application > Driving Simulation


Total results: 274

Driving Simulator Evaluation of Long Persistent Self-Luminous Pavement Markings’ Visual Guidance

Year: 2025

Authors: X Yang, C Xian, X Feng, Y Cao, C Peng, International Journal of Pavement Research and Technology

To address the problem of poor nighttime visibility of conventional pavement markings, this study developed a long-persistence self-luminous pavement marking (SLPM) using rare-earth strontium aluminate as the luminescent material. A driving simulator-based virtual environment was employed to evaluate the visual guidance performance of standard and self-luminous markings on both straight and curved highway segments. The analysis focused on gaze behavior indicators, including visual scanning time, scanning angle, and fixation stability. Results show that the long-persistence SLPM markedly enhances drivers’ visual adaptability under low-visibility conditions. Drivers’ visual scanning times were primarily concentrated within the 0–30 ms range, with scanning angles between 2° and 4°, while the frequency of scanning was notably lower than that for standard markings, indicating more stable and efficient information acquisition. These findings demonstrate that self-luminous markings can effectively improve nighttime visual guidance, providing theoretical and practical support for the broader implementation of long-persistence luminescent technologies in roadway design.
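For readers interested in how gaze indicators like those above are computed, a per-sample scanning angle can be derived from an angular gaze trace. The sketch below is a minimal illustration only: the 10 ms sampling interval, the sample-to-sample angle definition, and the toy trace are assumptions, not the study's instrumentation or segmentation method (real analyses typically segment fixations and saccades with velocity thresholds).

```python
import math

def scan_metrics(gaze, dt_ms=10):
    """Illustrative scanning metrics from a gaze trace.

    gaze: list of (yaw_deg, pitch_deg) samples at a fixed rate (dt_ms apart).
    Returns the angular change between consecutive samples (degrees)
    and the total trace duration in milliseconds.
    """
    angles = []
    for (y0, p0), (y1, p1) in zip(gaze, gaze[1:]):
        # Euclidean angular distance between consecutive gaze samples
        angles.append(math.hypot(y1 - y0, p1 - p0))
    duration_ms = (len(gaze) - 1) * dt_ms
    return angles, duration_ms

# Hypothetical trace: four 10 ms samples drifting 1 degree in yaw per step
trace = [(i * 1.0, 0.0) for i in range(4)]
angles, duration = scan_metrics(trace)  # angles of 1 degree each, 30 ms total
```

With such per-sample angles, a scan duration falling in a 0–30 ms bin and angles of a few degrees can be tallied directly from the trace.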


Dynamic glare evaluation modeling of human eye visual properties and smart materials

Year: 2025

Authors: H Zhang, H Di, X Wang, Y Li, W Qiao, Mechanics of Advanced Materials and Structures, Volume 32

In response to the limitations of traditional threshold increment methods in dynamic glare evaluation, this study integrates human visual characteristics with the adaptive properties of intelligent materials to investigate the impact of motion speed on dynamic vision. A fuzzy circle model is employed to simulate the human eye’s refractive effect, analyzing the response of intelligent materials to variations in equivalent luminous screen brightness under different dynamic vision conditions. Dynamic vision detection and road lighting measurement experiments were conducted to validate the proposed approach. Based on these findings, a dynamic glare evaluation model incorporating the sensing and actuation mechanisms of smart materials was developed, enabling adaptive glare perception optimization in response to environmental changes. Experimental results indicate that the relative error between simulated and actual fuzzy circles is below 1%, while the deviation in the dynamic-to-static brightness ratio is only 0.4% in a stationary state, confirming the model’s accuracy and reliability. Additionally, the model exhibits consistency with traditional static glare evaluation methods. This study provides a new theoretical foundation and practical framework for applying intelligent materials in dynamic visual perception assessment.


Enhancing Gaze Prediction in Multi-Party Conversations via Speaker-Aware Multimodal Adaptation

Year: 2025

Authors: MC Lee, Z Deng, ICMI '25: Proceedings of the 27th International Conference on Multimodal Interaction

Modeling gaze patterns in multiparty conversations is crucial to building socially aware dialogue agents and humanoid robots. However, existing approaches typically rely on visual data or focus on dyadic settings. We propose a novel framework for social attention modeling: predicting gaze directions from linguistic and speaker cues alone, without direct visual input. We introduce SAT5, a speaker-aware adaptation of the T5 language model, pre-trained using multi-task objectives that capture both span corruption and speaker state modeling. Using a new dataset of three-party face-to-face conversations with synchronized speech, gaze, and motion capture data, we demonstrate that SAT5 significantly outperforms both pretrained and RNN-based baselines in predicting gaze targets. Our findings highlight the importance of conversational structure and speaker dynamics in modeling social attention, and offer a strong foundation for gaze-aware multimodal systems.


Evaluating In-Car Tasks’ Distraction Effects with Drive-In Lab

Year: 2025

Authors: T Kujala, A Sarkar, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems

Existing measurements of driver distraction in laboratory settings lack construct and ecological validity, and therefore, cannot provide reliable estimates of in-car tasks’ distraction effects. In this paper, we operationalize driver distraction in a novel way with the help of Drive-In Lab, where any passenger car can be connected to a driving simulation. The operationalization is based on drivers’ headway maintenance during in-car tasks as compared to baseline driving, while accommodating situational and driver-specific variables, such as brake response times. Realistic visual looming cues enable evaluation of distraction effects on cognitive processes crucial for safe driving. Validation studies with two 2024 car models indicate that the method can reliably differentiate distraction effects between cars, in-car tasks, and drivers as large, medium, small, or no effect on crash potential. The method supports design of in-car interactions by providing valid means to reveal the worst and best practices in in-car user interface design.


Exploring how physio-psychological states affect drivers’ takeover performance in conditional automated vehicles

Year: 2025

Authors: A Wang, J Wang, C Huang, D He, H Yang, Accident Analysis & Prevention

Although driving automation promises to improve driving safety, drivers are still required to take over control of the vehicle in case of emergency. Estimating drivers’ takeover performance serves as the basis for adaptive driving automation and takeover requests (TORs) to ensure driving safety. However, although algorithms have been proposed to estimate drivers’ takeover performance from physiological and eye-tracking measures, the complex interrelationships between these metrics and driver behavior, as well as the interactions among the metrics themselves, are not fully understood. To answer this question, a driving simulation experiment involving 42 participants was conducted. Drivers experienced three types of takeover scenarios, each prompted by a TOR, while driving a conditionally automated vehicle. Drivers’ physiological and eye-tracking metrics, along with psychological states induced by several non-driving-related tasks, were collected. A structural equation model was used to explore the interactions among physiological metrics (i.e., cardiac activity, respiratory activity, electrodermal activity), eye-tracking metrics, psychological states (i.e., trust in driving automation and perceived workload), and variations in takeover time and takeover quality. The results showed that trust was positively associated with takeover quality, while workload was positively associated with takeover time. Additionally, physiological and eye-tracking metrics were indirectly associated with takeover quality via psychological states. This study reveals the hierarchical relationship among takeover-performance-related variables and provides insights for designing driver monitoring systems aimed at estimating takeover performance in vehicles with driving automation, and for adaptive driving automation to improve driving safety.


Formation and Development of Mental Models in Partial and Conditional Driving Automation

Year: 2025

Authors: S Feinauer, Technische Universität Dresden

With the introduction of assisted and automated driving functions, driver-vehicle interaction fundamentally changes. In that context, drivers’ mental models of these functions play a central role. However, due to the novelty of these systems, it can be assumed that a lack of knowledge and misconceptions of automated functions are common among drivers. For this reason, this thesis sought to shed light on the question of how the formation of adequate mental models can be supported, and to derive recommendations on the design of driver instruction for assisted and automated driving functions. In a first study (N = 45), the effect of a lack of information prior to the first assisted/automated drive on drivers’ mental model formation, attitudes, and interaction with the automated vehicle was assessed. The results of this study emphasize the relevance of driver instruction and its benefits for mental model formation and attitudes towards the vehicle. Based on these findings, the focus of the following three studies was to develop recommendations for approaches to driver instruction. In that context, intrinsic motivation to learn can be expected to be central to enhancing learning outcomes. In an online study (N = 220), elements aimed at enhancing learning motivation were added to an instruction on assisted and automated functions, and their effects on learner motivation and mental model formation were assessed. Results indicate that elements providing feedback to learners on their progress through the instruction help to increase learning motivation. Based on this finding, a gamified instruction was developed and subsequently evaluated in a driving simulator study (N = 65). Gamification is expected to increase intrinsic motivation and thus learning outcomes. Indeed, this study showed that gamification benefits learning motivation, mental model formation, and reliance behaviour.
In order to make user education easily accessible to drivers, the fourth study within this thesis comprised the development and evaluation of a tutorial concept that supports drivers during their first drive with an automated vehicle. Results (N = 32) indicate that learning during the drive can be as efficient as before the drive, and benefits acceptance of the automated driving function as well as driver interaction with it. Overall, this thesis provides recommendations for the design of driver education for drivers of current and future automated vehicles. It emphasizes the need to consider learner motivation as a central element of instructional design and provides evidence of the positive effects that low-threshold driver education for automated functions can have.


Image-analysis-based method for exploring factors influencing the visual saliency of signage in metro stations

Year: 2025

Authors: M Yin, X Zhou, Q Ji, H Peng, S Yang, C Li, Cognitive Systems Research, Wuhan University of Technology

Many studies have been conducted on the effects of colour, light, and signage location on the visual saliency of underground signage. However, few studies have investigated the influence of indoor visual environments on the saliency of pedestrian signage. To explore the factors that influence the visual saliency of signage in metro stations, we developed a novel analysis method using a combination of saliency and focus maps. Then, questionnaires were utilised to unify the various formats of results from the saliency and focus maps. The factors that influence the visual saliency of signage were explored using the proposed method at selected sites and validated through virtual reality experiments. Additionally, this study proposes an image-analysis-based method that reveals the multilevel factors affecting pedestrian attention to signage in underground metro stations, including spatial interfaces, crowd flow, and ambient light. The results indicate that crowd flow has the greatest impact on pedestrian attention to signage. The findings of this study are expected to improve the wayfinding efficiency of pedestrians and assist designers in producing high-quality metro experiences.
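The combination of saliency and focus maps described above can be pictured with a minimal sketch. Everything here is an assumption for illustration: the weighted-sum combination, the weight `w`, the toy maps, and the bounding-box scoring are not the paper's published formula.

```python
def combined_score(saliency, focus, box, w=0.5):
    """Illustrative combination of a bottom-up saliency map and a
    fixation-based focus map, averaged inside a signage bounding box.

    saliency, focus: 2D lists of values in [0, 1] (same shape).
    box: (r0, r1, c0, c1), half-open row/column ranges.
    w: assumed blending weight between the two maps.
    """
    r0, r1, c0, c1 = box
    vals = [w * saliency[r][c] + (1 - w) * focus[r][c]
            for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

# Toy 2x2 maps; score the right-hand column (where a sign might sit)
sal = [[0.2, 0.8], [0.4, 0.6]]
foc = [[0.0, 1.0], [0.5, 0.5]]
score = combined_score(sal, foc, (0, 2, 1, 2))
```

A higher score for one candidate signage location than another would then indicate greater combined visual saliency under these assumptions.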


Impact of multi-dimensional cognitive demands on takeover performance, physiological and eye-tracking measures in conditionally automated vehicles

Year: 2025

Authors: A Wang, W Shi, D He, H Yang, Transportation Research Part F: Traffic Psychology and Behaviour, Volume 114

Before fully autonomous vehicles become a reality, drivers are still required to be responsible for driving safety and to take over control of the vehicle when prompted by takeover requests in conditionally automated vehicles. Thus, drivers’ takeover performance can affect the safety of conditionally automated driving. However, although cognitive distraction can impair takeover performance in general, the influence of the multi-dimensionality of cognitive resources has been under-investigated. At the same time, it is unknown how physiological and eye-tracking metrics are associated with different modalities of cognitive tasks in conditionally automated vehicles. Thus, through a driving simulation study with 42 participants, we investigated the effects of multidimensional cognitive demands, as imposed by multiple types of non-driving-related tasks, on drivers’ takeover performance, physiological responses, and eye-tracking metrics in conditionally automated vehicles. Results showed that certain takeover performance metrics (i.e., vehicle speed and lateral acceleration) and physiological and eye-tracking metrics (i.e., differences between consecutive R-peaks, skin conductance level, variation in respiratory intervals, number of fixations, number of saccades, and saccade angle) are still responsive to cognitive load in the context of driving automation. Further, the modality of the cognitive tasks can moderate takeover performance (i.e., takeover time) and certain physiological metrics (i.e., the ratio of spectral power in the low- and high-frequency range and respiration depth). These findings suggest that, in future conditionally automated vehicles, in-vehicle task designs should consider the modality of cognitive demands, both for driving safety and for the performance of driver monitoring systems.
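Takeover time, a central metric in studies like the two above, is commonly operationalized as the delay from the takeover request (TOR) to the first substantive driver input. The sketch below illustrates that definition only; the steering and braking thresholds, the sampling rate, and the toy log are hypothetical values, not those used in the study.

```python
def takeover_time(inputs, tor_index, steer_thresh=2.0,
                  brake_thresh=0.05, dt=0.05):
    """Illustrative takeover time: seconds from the TOR sample to the
    first steering or braking input exceeding an assumed threshold.

    inputs: list of (steer_deg, brake_fraction) samples, dt seconds apart.
    Returns None if no qualifying input occurs within the trace.
    """
    for i in range(tor_index, len(inputs)):
        steer, brake = inputs[i]
        if abs(steer) >= steer_thresh or brake >= brake_thresh:
            return (i - tor_index) * dt
    return None

# Hypothetical log: TOR at sample 2, first clear input (braking) at sample 5
log = [(0, 0), (0, 0), (0, 0), (0.5, 0), (1.0, 0), (0.3, 0.2)]
tt = takeover_time(log, tor_index=2)  # three samples after the TOR
```

Takeover quality measures (e.g., peak lateral acceleration after the TOR) would be computed from the same logged trace, windowed from the TOR onward.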


In-Vehicle Displays for Supporting Operation of Driving Automation Systems: Design and Evaluation using Driver Gaze Measures

Year: 2025

Authors: Dina Kanaan, Mattea Powell, Michael Lu, Birsen Donmez, Department of Mechanical and Industrial Engineering, University of Toronto

In-vehicle displays can support the safe operation of driving automation systems, but a challenge lies in balancing the information conveyed against situational demands. Driver gaze measures are a useful tool for evaluating such displays as they can provide a proxy for driver attention, particularly when the driver is not physically controlling the vehicle. The objective of this dissertation is to systematically identify the design space for displays aimed at supporting safe operation of automation and provide a comprehensive review of their evaluation using gaze measures. First, a scoping literature review revealed extensive research focus on takeover requests, with relatively less focus on informational displays that communicate the automation’s intent or explicitly identify hazards. Surprisingly, there was little focus on displays that monitor and manage driver attention, which are becoming increasingly available and mandated in some jurisdictions. The gaze measures adopted for evaluation mostly relied on static areas of interest that were not dependent on traffic context.


Inflating system expectations prior to SAE level 3 automated vehicle use: effects on monitoring behavior, resumption of control, and attitudes toward driving …

Year: 2025

Authors: DJ Souders, S Agrawal, I Benedyk, Y Guo, Y Li, Transportation Research, 2025

Increasing levels of vehicle automation bring new risks to drivers, particularly those who are using a new automated driving system (ADS). Overreliance on partially automated advanced driver assistance systems (SAE L2 ADAS) has led to crashes, a concern that might escalate with conditional ADS (SAE L3), which require timely driver intervention. This study examines how the communication of L3 ADS capabilities and limitations to users affects their subjective attitudes toward the ADS and their driving behavior and performance, particularly concerning safety.

Method: In a driving simulator study, participants received introductory videos about the role of drivers at different automation levels, the capabilities and limitations of L3 ADS, and its human–machine interface (HMI). Videos concluded with either “Highlighted Benefits” of higher automation levels, reflecting current marketing trends of ADAS and ADS, or an “Explicit Reminder” of driver responsibilities in L3 ADS usage. Participants then completed three driving simulator runs (repeated measures) during or after which visual monitoring behavior, take-over performance, and subjective attitudes (trust and acceptance) toward the ADS were gathered.

Results: Participants resumed control when receiving an uncertainty alert from the ADS across both introductory information conditions, with minor differences in take-over performance and monitoring behavior. No significant differences were observed in road monitoring behavior, take-over performance, or subjective attitudes between conditions, except for subjective familiarity ratings, which decreased over runs for the Explicit Reminder group compared to the Highlighted Benefits group. Both conditions showed take-over performance improvements, particularly after practice.

Conclusions: Successful crash avoidance in both groups indicates that graded warnings and practice can effectively improve take-over performance. This similarity in outcomes suggests that introductory information about ADS may not significantly affect the monitoring behavior and performance of ADS users. These results highlight the potential for such systems to mitigate ADS complacency and promote safer resumption of vehicle control during automation failures.
