Publication Hub

Welcome to the Ergoneers Publication Hub

Find studies in your field of interest, connect with authors, and pursue common goals.

If your own relevant publication is missing, or if you would like to contribute to our international research community, please contact us.

Groundbreaking publications are recognized by our jury each year.

Submit a Paper

Total results: 603

Exploring the occupational fatigue risk of short-haul truck drivers: effects of sleep pattern, driving task, and time-on-task on driving behavior and eye-motion metrics

Year: 2024

Authors: C Zhang, Y Ma, S Chen, J Zhang, G Xing

Driver fatigue is the leading cause of truck-related accidents. The most significant occupational fatigue factors among short-haul truck drivers are sleep patterns, the round-trip driving task, and the time-on-task. However, the underlying mechanisms of these influential factors remain unclear. This study aims to explore the interactive effects of sleep patterns, driving task, and time-on-task on driving behavior and eye-motion metrics among short-haul truck drivers. We obtained test data from eleven professional short-haul truck drivers, with each driver participating in a three-day test under the conditions of two driving tasks and three different sleep patterns. We applied three-way repeated-measures ANOVA and non-parametric tests to analyze the data. The results reveal that: (1) violation of sleep-related legal requirements, insufficient sleep, and unreasonable time-on-task can have negative effects on short-haul truck drivers' vigilance and driving performance; (2) both driving task and sleep pattern contribute to driver fatigue, and the interaction of time-on-task and sleep pattern exacerbates driver fatigue more than the effects of any single factor alone; and (3) short-haul truck drivers who are sleep deprived exhibit short periods of controlled compensatory behavior during the outbound task, and sleepiness is more prevalent during the inbound task compared to the outbound task due to the monotony and low workload of the driving process. These findings provide theoretical and practical guidance for transportation industry managers to strengthen company-wide fatigue-related regulations, ensure adequate sleep for drivers via regulations, and optimize work schedules to improve safety outcomes of short-haul truck drivers.
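
For readers who want to reproduce this kind of analysis, the sketch below shows a three-way repeated-measures ANOVA in Python with statsmodels; the file name and all column names are hypothetical stand-ins, not the authors' data.

```python
# Minimal sketch of a three-way repeated-measures ANOVA of the kind the
# authors describe. The CSV file and every column name are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per driver x sleep_pattern x driving_task x time_on_task cell.
data = pd.read_csv("fatigue_metrics.csv")

aov = AnovaRM(
    data,
    depvar="sdlp",            # e.g., standard deviation of lane position
    subject="driver_id",
    within=["sleep_pattern", "driving_task", "time_on_task"],
).fit()
print(aov)
```

The printed ANOVA table lists the interaction terms (e.g., sleep_pattern × time_on_task) alongside the main effects, which is where the paper's second finding would show up.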

Eye Tracking Glasses
Simulator

5 versions available

Gaze alternation predicts inclusive next-speaker selection: evidence from eyetracking

Year: 2024

Authors: C Rühlemann

Next-speaker selection refers to the practices conversationalists rely on to designate who should speak next. Speakers have various methods available to them to select a next speaker. Certain actions, however, systematically co-select more than one particular participant to respond. These actions include asking “open-floor” questions, which are addressed to more than one recipient and which more than one recipient is eligible to answer. Here, next-speaker selection is inclusive. How are these questions multimodally designed? How does their multimodal design differ from the design of “closed-floor” questions, in which just one participant is selected as next speaker and where next-speaker selection is exclusive? Based on eyetracking data collected in naturalistic conversation, this study demonstrates that unlike closed-floor questions, open-floor questions can be predicted based on the speaker’s gaze alternation during the question. The discussion highlights cases of gaze alternation in open-floor questions and exhaustively explores deviant cases in closed-floor questions. It also addresses the functional relation of gaze alternation and gaze selection, arguing that the two selection techniques may collide, creating disorderly turn-taking due to a fundamental change in participation framework from focally dyadic to inclusive. Data are in British and American English.

Eye Tracking Glasses
Software

1 version available

GazeAway: Designing for Gaze Aversion Experiences

Year: 2024

Authors: N Overdevest, R Patibanda, A Saini

Gaze aversion is embedded in our behaviour: we look at a blank area to support remembering and creative thinking, and as a social cue that we are thinking. We hypothesise that a person's gaze aversion experience can be mediated through technology, in turn supporting embodied cognition. In this design exploration we present six ideas for interactive technologies that mediate the gaze aversion experience. One of these ideas we developed into “GazeAway”: a prototype that swings a screen into the wearer's field of vision when they perform gaze aversion. Six participants experienced the prototype, and based on their interviews, we found that GazeAway changed their gaze aversion experience in three ways: it increased awareness of gaze aversion behaviour, created a novel cross-modal perception of gaze aversion behaviour, and changed gaze aversion behaviour to suit social interaction. We hope that ultimately, our design exploration offers a starting point for the design of gaze aversion experiences.

Eye Tracking Glasses
Software

3 versions available

GazeTrak: Exploring Acoustic-based Eye Tracking on a Glass Frame

Year: 2024

Authors: K Li, R Zhang, B Chen, S Chen, S Yin

In this paper, we present GazeTrak, the first acoustic-based eye tracking system on glasses. Our system only needs one speaker and four microphones attached to each side of the glasses. These acoustic sensors capture the formations of the eyeballs and the surrounding areas by emitting encoded inaudible sound towards the eyeballs and receiving the reflected signals. These reflected signals are further processed to calculate the echo profiles, which are fed to a customized deep learning pipeline to continuously infer the gaze position. In a user study with 20 participants, GazeTrak achieves an accuracy of 3.6° within the same remounting session and 4.9° across different sessions, with a refresh rate of 83.3 Hz and a power signature of 287.9 mW. Furthermore, we report the performance of our gaze tracking system fully implemented on an MCU with a low-power CNN accelerator (MAX78002). In this configuration, the system runs at up to 83.3 Hz and has a total power signature of 95.4 mW at a 30 Hz refresh rate.
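
The abstract names echo profiles as the key signal-processing step. As a rough illustration, one common way to compute an echo profile is to cross-correlate each frame of the received microphone signal with the known transmitted code, so that correlation peaks mark reflection delays; the framing and variable names below are assumptions, not GazeTrak's actual pipeline.

```python
# Generic sketch of an "echo profile": cross-correlate each frame of the
# received microphone signal with the transmitted code. Peaks in each row
# indicate reflection delays. Not the authors' implementation.
import numpy as np
from scipy.signal import correlate

def echo_profile(received: np.ndarray, tx_code: np.ndarray, frame_len: int) -> np.ndarray:
    """Return one correlation row per frame of the received signal."""
    n_frames = len(received) // frame_len
    frames = received[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.stack([correlate(f, tx_code, mode="same") for f in frames])
```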

Eye Tracking Glasses
Software

3 versions available

Group Cycling in Urban Environments: Analyzing Visual Attention, Hazard Perception, and Riding Performance for Enhanced Road Safety

Year: 2024

Authors: M Li, Y Zhang, T Chen, H Du, K Deng

China is a major cycling nation with nearly 400 million bicycles. The widespread use of bicycles effectively alleviates urban traffic congestion. However, safety concerns are prominent, with approximately 35% of cyclists forming groups with family, friends, or colleagues, exerting a significant impact on the traffic system. This study focuses on group cycling, employing urban cycling experiments, GPS trajectory tracking, and eye-tracking to analyze the visual search, hazard perception, and cycling control of both groups and individuals. Findings reveal interdependence in visual attention among group cyclists in busy and complex road conditions, leading to reduced attention to traffic safety targets and potential decreases in risk perception. In terms of lateral control, group cycling exhibits lower lateral deviation and higher steering entropy, particularly at complex intersections. While group cycling results in decreased speed, it forms a clustering advantage at complex intersections, competitively advancing to shorten intersection passage times. Overall, group cyclists differ from individuals in visual attention, hazard perception, and control, potentially elevating cycling risks. Consequently, there is a need for corresponding traffic safety education and intervention, along with consideration of group cycling characteristics in urban traffic planning to enhance safety and efficiency.
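
For readers unfamiliar with the steering entropy metric mentioned above, it quantifies how unpredictable moment-to-moment steering corrections are: a second-order prediction of the next steering angle is made, and the entropy of the prediction errors is computed. A minimal sketch follows, after Nakayama et al.; the bin count and the simple histogram binning are assumptions.

```python
# Sketch of a steering-entropy-style measure: entropy of second-order
# steering-angle prediction errors. Binning is a simplified assumption.
import numpy as np

def steering_entropy(angle: np.ndarray, n_bins: int = 9) -> float:
    # Second-order (Taylor) prediction of the next steering angle.
    pred = angle[2:-1] + (angle[2:-1] - angle[1:-2]) \
         + 0.5 * ((angle[2:-1] - angle[1:-2]) - (angle[1:-2] - angle[:-3]))
    err = angle[3:] - pred
    # Shannon entropy (base n_bins) of the binned prediction errors.
    hist, _ = np.histogram(err, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p) / np.log(n_bins)).sum())
```

Higher values indicate more erratic, less predictable steering, which is why the metric is used above as a proxy for degraded lateral control.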

Eye Tracking Glasses
Simulator

1 version available

Guiding gaze gestures on smartwatches: Introducing fireworks

Year: 2024

Authors: W Delamare, D Harada, L Yang, X Ren

Smartwatches enable interaction anytime and anywhere, with both digital and augmented physical objects. However, situations with busy hands can prevent user inputs. To address this limitation, we propose Fireworks, an innovative hands-free alternative that empowers smartwatch users to trigger commands effortlessly through intuitive gaze gestures by providing post-activation guidance. Fireworks allows command activation by guiding users to follow targets moving from the screen center to the edge, mimicking real-life fireworks. We present the experimental design and evaluation of two Fireworks instances. The first design employs temporal parallelization, displaying a few dynamic targets during microinteractions (e.g., snoozing a notification while cooking). The second design sequentially displays targets to support more commands (e.g., 20 commands), which is ideal for scenarios beyond microinteractions (e.g., turning on the lights in a smart home). Results show that Fireworks’ single straight gestures enable faster and more accurate command selection compared to state-of-the-art baselines, namely Orbits and Stroke. Additionally, participants expressed a clear preference for Fireworks’ original visual guidance.

Eye Tracking Glasses
Software

4 versions available

Head-mounted eye tracker videos and raw data collected during breathing recognition attempts in simulated cardiac arrest

Year: 2024

Authors: M Pedrotti, M Stanek, L Gelin, P Terrier

This paper presents data collected by Pedrotti et al. (2022, 2024) [1][2], which includes videos captured using a Dikablis head-mounted eye tracker (Ergoneers GmbH, Germany), along with the corresponding raw data. The data collection aimed to assess participants' ability to recognize breathing in a simulated cardiac arrest scenario. Equipped with the eye tracker, participants entered a room where a manikin was positioned on the floor. Their task was to determine if the manikin was breathing and respond accordingly, such as initiating cardiopulmonary resuscitation if the victim was not breathing. Our analysis focused on examining looking time on the manikin's thorax by inspecting the videos. Potential applications of the dataset [3] include identifying fixations and saccades using custom algorithms, analyzing pupil diameter data, and conducting secondary analyses involving participant characteristics like age and gender as independent variables.
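
As a pointer to the "custom algorithms" use case mentioned above, a simple velocity-threshold (I-VT) pass over the raw gaze samples might look like the following; the 30°/s threshold is a common convention, and the assumption that gaze coordinates are already in visual degrees is hypothetical here.

```python
# Hypothetical I-VT (velocity-threshold) classifier over raw gaze samples:
# label each sample as saccade or fixation by its angular speed. Assumes
# gaze coordinates in visual degrees and timestamps in seconds.
import numpy as np

def classify_ivt(gx: np.ndarray, gy: np.ndarray, t: np.ndarray,
                 vel_thresh_deg_s: float = 30.0) -> np.ndarray:
    vx = np.gradient(gx, t)
    vy = np.gradient(gy, t)
    speed = np.hypot(vx, vy)          # deg/s
    return speed > vel_thresh_deg_s   # True = saccade sample, False = fixation
```

Runs of consecutive fixation samples can then be merged into fixation events and filtered by a minimum duration (e.g., 60 ms).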

Eye Tracking Glasses
Simulator

2 versions available

Image-Analysis-Based Method for Exploring Factors Influencing the Visual Saliency of Signage in Metro Stations

Year: 2024

Authors: M Yin, X Zhou, S Yang, H Peng, C Li

Many studies have been conducted on the effects of colour, light, and signage location on the visual saliency of underground signage. However, few studies have investigated the influence of indoor visual environments on the saliency of pedestrian signage. To explore the factors that influence the visual saliency of signage in metro stations, we developed a novel analysis method using a combination of saliency and focus maps. Then, questionnaires were utilised to unify the various formats of results from the saliency and focus maps. The factors that influence the visual saliency of signage were explored using the proposed method at selected sites and validated through virtual reality experiments. Additionally, this study proposes an image-analysis-based method that reveals the multilevel factors affecting pedestrian attention to signage in underground metro stations, including spatial interfaces, crowd flow, and ambient light. The results indicate that crowd flow has the greatest impact on pedestrian attention to signage. The findings of this study are expected to improve the wayfinding efficiency of pedestrians and assist designers in producing high-quality metro experiences.
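
For intuition on how a model saliency map can be compared against an eye-tracking focus map, one standard score is the pixel-wise Pearson correlation (CC). The sketch below is a generic illustration of that metric, not the authors' implementation.

```python
# Minimal sketch of the correlation coefficient (CC) metric between a model
# saliency map and an empirical focus (fixation heat) map of equal shape.
import numpy as np

def map_correlation(saliency: np.ndarray, focus: np.ndarray) -> float:
    s = (saliency - saliency.mean()) / saliency.std()
    f = (focus - focus.mean()) / focus.std()
    return float((s * f).mean())  # 1.0 = perfect agreement, 0.0 = none
```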

Eye Tracking Glasses
Software

1 version available

Inducing visual attention through audiovisual stimuli: Can synchronous sound be a salient event?

Year: 2024

Authors: I Salselas, F Pereira, E Sousa

We present experimental research aiming to explore how spatial attention may be biased through auditory stimuli. In particular, we investigate how synchronous sound and image may affect attention and increase the saliency of the audiovisual event. We designed and implemented an experimental study in which subjects, wearing an eye-tracking system, were examined regarding their gaze toward the audiovisual stimuli being displayed. The audiovisual stimuli were specifically tailored for this experiment, consisting of videos contrasting in terms of Synch Points (i.e., moments where a visual event is associated with a visible trigger movement, synchronous with its corresponding sound). While consistency across audiovisual sensory modalities proved to be an attention-drawing feature, when combined with synchrony it clearly emphasized the biasing, triggering orienting, that is, focal attention towards the particular scene containing the Synch Point. Consequently, the results revealed synchrony to be a saliency factor, contributing to the strengthening of focal attention. In today's increasingly complex multimedia landscape, the interaction between auditory and visual stimuli plays a pivotal role in shaping our perception and directing our attention. Within the context of research on multisensory attention, this study endeavors to explore the intricate dynamics of attentional allocation concerning audiovisual stimuli, specifically focusing on the impact of synchronized auditory and visual cues on capturing and directing attention.

Eye Tracking Glasses
Software

7 versions available

Knowing me, knowing you—A study on top-down requirements for compensatory scanning in drivers with homonymous visual field loss

Year: 2024

Authors: B Biebl, M Kuhn, F Stolle, J Xu, K Bengler, AR Bowers

Objective: It is currently unknown why some drivers with visual field loss can compensate well for their visual impairment while others adopt ineffective strategies. This paper contributes to the methodological investigation of the associated top-down mechanisms and aims at validating a theoretical model of the requirements for successful compensation among drivers with homonymous visual field loss.

Methods: A driving simulator study was conducted with eight participants with homonymous visual field loss and eight participants with normal vision. Participants drove through an urban environment and experienced a baseline scenario and scenarios with visual precursors indicating increased likelihoods of crossing hazards. Novel measures for the assessment of the mental model of their visual abilities, the mental model of the driving scene, and the perceived attention demand were developed and used to investigate the top-down mechanisms behind attention allocation and hazard avoidance.

Results: Participants who overestimated their visual field size tended to prioritize their seeing side over their blind side in both subjective and objective measures. The mental model of the driving scene was closely related to subjective and actual attention allocation. While participants with homonymous visual field loss were less anticipatory in their usage of the visual precursors and performed more poorly than participants with normal vision, the results indicate a stronger reliance on top-down mechanisms among drivers with visual impairments. A subjective focus on the seeing side or on near peripheries more frequently led to poor performance in terms of collisions with crossing cyclists.

Conclusion: The study yielded promising indicators of the potential of the novel measures to elucidate top-down mechanisms in drivers with homonymous visual field loss. Furthermore, the results largely support the model of requirements for successful compensatory scanning. The findings highlight the importance of individualized interventions and driver assistance systems tailored to address these mechanisms.

Simulator
Software

8 versions available

Explore Cutting-Edge Research in Human Factors and Ergonomics

Welcome to our comprehensive publication library, where we bring together the best research on human factors, ergonomics, psychology, usability, and consumer behavior. Our extensive collection includes white papers, PhD theses, and scholarly articles that delve into applications across various fields such as aerospace, defence, automotive, transportation, sport science, and education.

For researchers and engineers, our library serves as a vital resource, offering the latest insights to inspire innovation and drive projects forward. With a focus on sensor-based studies—utilizing technologies like EEG, ECG, eye tracking, and motion tracking—we provide a platform to explore how these tools enhance understanding of human performance and interaction.

Our unique offerings include advanced simulators for flight and driving, enabling users to study complex human behaviors in controlled environments. By fusing and synchronizing diverse data sources, our platform delivers in-depth analyses across correlated factors, streamlining research processes and saving valuable time.

Ergoneers has been at the forefront of innovation in physiological and environmental data-based research tools for over two decades. Our publication library invites the community to engage in exchange and growth, fostering collaboration around humanitarian goals.

Whether you’re a researcher, an engineer, or an educator, our library is designed to support your work, providing you with the resources necessary to advance your understanding and application of human factors in real-world scenarios. Discover how you can leverage the latest findings to enhance user experience and performance in your field. Join us in shaping the future of human-centered design and research—explore our publication library today!