Publication Hub Archive

Work Safety

You have reached the Ergoneers Publication Hub for:

Field of Application > Work Safety

Total results: 548

Gaze alternation predicts inclusive next-speaker selection: evidence from eyetracking

Year: 2024

Authors: C Rühlemann

Next-speaker selection refers to the practices conversationalists rely on to designate who should speak next. Speakers have various methods available to them to select a next speaker. Certain actions, however, systematically co-select more than one particular participant to respond. These actions include asking “open-floor” questions, which are addressed to more than one recipient and which more than one recipient is eligible to answer. Here, next-speaker selection is inclusive. How are these questions multimodally designed? How does their multimodal design differ from the design of “closed-floor” questions, in which just one participant is selected as next speaker and where next-speaker selection is exclusive? Based on eyetracking data collected in naturalistic conversation, this study demonstrates that unlike closed-floor questions, open-floor questions can be predicted based on the speaker’s gaze alternation during the question. The discussion highlights cases of gaze alternation in open-floor questions and exhaustively explores deviant cases in closed-floor questions. It also addresses the functional relation of gaze alternation and gaze selection, arguing that the two selection techniques may collide, creating disorderly turn-taking due to a fundamental change in participation framework from focally dyadic to inclusive. Data are in British and American English.
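
As a purely illustrative aside: the kind of prediction reported above could, in its simplest form, be framed as a logistic regression from per-question gaze-alternation counts to question type. The sketch below uses made-up data and is not the study's actual statistical model.

```python
# Hypothetical sketch: predict open-floor (1) vs closed-floor (0) questions
# from the number of speaker gaze alternations during the question.
# Data and model are illustrative only, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

gaze_alternations = np.array([[3], [2], [4], [0], [1], [0], [5], [1]])
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])

model = LogisticRegression().fit(gaze_alternations, labels)

# Probability that a question with 2 gaze alternations is open-floor.
print(model.predict_proba([[2]])[0, 1])
```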

Eye Tracking Glasses
Software

1 version available

GazeAway: Designing for Gaze Aversion Experiences

Year: 2024

Authors: N Overdevest, R Patibanda, A Saini

Gaze aversion is embedded in our behaviour: we look at a blank area to support remembering and creative thinking, and as a social cue that we are thinking. We hypothesise that a person's gaze aversion experience can be mediated through technology, in turn supporting embodied cognition. In this design exploration we present six ideas for interactive technologies that mediate the gaze aversion experience. One of these ideas we developed into “GazeAway”: a prototype that swings a screen into the wearer's field of vision when they perform gaze aversion. Six participants experienced the prototype, and based on their interviews we found that GazeAway changed their gaze aversion experience threefold: increased awareness of gaze aversion behaviour, novel cross-modal perception of gaze aversion behaviour, and a change in gaze aversion behaviour to suit social interaction. We hope that, ultimately, our design exploration offers a starting point for the design of gaze aversion experiences.
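
As an illustration of the kind of trigger logic such a prototype needs (the paper's actual implementation is not described here), a minimal gaze-aversion detector might flag gaze leaving a central region for a sustained interval; the radius and dwell threshold below are assumptions.

```python
# Minimal sketch of a gaze-aversion trigger: gaze leaves a central region
# (e.g., around an interlocutor) and stays away for a minimum duration.
# Thresholds and the normalized-coordinate input are assumptions.
from dataclasses import dataclass

@dataclass
class AversionDetector:
    center_radius: float = 0.15   # central region radius, normalized units (assumed)
    min_away_s: float = 0.5       # dwell time before triggering (assumed)
    _away_since: float | None = None

    def update(self, t: float, x: float, y: float) -> bool:
        """Feed one gaze sample (time in s, normalized coords in [0, 1]);
        returns True while an aversion event should drive the actuator."""
        away = (x - 0.5) ** 2 + (y - 0.5) ** 2 > self.center_radius ** 2
        if not away:
            self._away_since = None   # gaze returned to center: reset
            return False
        if self._away_since is None:
            self._away_since = t      # aversion just started
        return t - self._away_since >= self.min_away_s
```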

Eye Tracking Glasses
Software

3 versions available

GazeTrak: Exploring Acoustic-based Eye Tracking on a Glass Frame

Year: 2024

Authors: K Li, R Zhang, B Chen, S Chen, S Yin

In this paper, we present GazeTrak, the first acoustic-based eye tracking system on glasses. Our system only needs one speaker and four microphones attached to each side of the glasses. These acoustic sensors capture the formations of the eyeballs and the surrounding areas by emitting encoded inaudible sound towards the eyeballs and receiving the reflected signals. These reflected signals are further processed to calculate the echo profiles, which are fed to a customized deep learning pipeline to continuously infer the gaze position. In a user study with 20 participants, GazeTrak achieves an accuracy of 3.6° within the same remounting session and 4.9° across different sessions, with a refresh rate of 83.3 Hz and a power signature of 287.9 mW. Furthermore, we report the performance of our gaze tracking system fully implemented on an MCU with a low-power CNN accelerator (MAX78002). In this configuration, the system runs at up to 83.3 Hz and has a total power signature of 95.4 mW at a frame rate of 30 Hz.
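
The echo-profile step described above is, in general terms, a correlation of each microphone's received frame against the known transmitted code, so that reflections appear as peaks at lags proportional to round-trip delay. The sketch below illustrates that principle with an arbitrary stand-in code; it is not GazeTrak's actual encoding or pipeline.

```python
# Illustrative echo-profile computation: cross-correlate a received audio
# frame with the known transmitted code; reflections show up as peaks at
# lags proportional to the echo's round-trip delay. The code sequence and
# frame here are synthetic stand-ins, not GazeTrak's actual signals.
import numpy as np

code = np.random.default_rng(0).choice([-1.0, 1.0], size=256)  # stand-in code

def echo_profile(received_frame: np.ndarray) -> np.ndarray:
    """Correlation magnitude over lag; approximates reflection strength
    at increasing round-trip delay."""
    return np.abs(np.correlate(received_frame, code, mode="valid"))

# Synthetic frame: the code returning attenuated after 40 samples, plus noise.
frame = np.zeros(1024)
frame[40:40 + len(code)] += 0.5 * code
frame += 0.01 * np.random.default_rng(1).standard_normal(1024)
print(int(np.argmax(echo_profile(frame))))  # ~40, the echo delay in samples
```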

Eye Tracking Glasses
Software

3 versions available

Group Cycling in Urban Environments: Analyzing Visual Attention, Hazard Perception, and Riding Performance for Enhanced Road Safety

Year: 2024

Authors: M Li, Y Zhang, T Chen, H Du, K Deng

China is a major cycling nation with nearly 400 million bicycles. The widespread use of bicycles effectively alleviates urban traffic congestion. However, safety concerns are prominent: approximately 35% of cyclists form groups with family, friends, or colleagues, exerting a significant impact on the traffic system. This study focuses on group cycling, employing urban cycling experiments, GPS trajectory tracking, and eye-tracking to analyze the visual search, hazard perception, and cycling control of both groups and individuals. Findings reveal interdependence in visual attention among group cyclists in busy and complex road conditions, leading to reduced attention to traffic safety targets and potential decreases in risk perception. In terms of lateral control, group cycling exhibits lower lateral deviation and higher steering entropy, particularly at complex intersections. While group cycling reduces speed, it creates a clustering advantage at complex intersections, with groups advancing competitively to shorten intersection passage times. Overall, group cyclists differ from individuals in visual attention, hazard perception, and riding control, potentially elevating cycling risks. Consequently, there is a need for corresponding traffic safety education and intervention, along with consideration of group cycling characteristics in urban traffic planning to enhance safety and efficiency.
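
“Steering entropy” usually refers to the prediction-error entropy measure of Nakayama et al. (1999); whether this study used exactly that formulation is not stated in the abstract. A hedged sketch of the standard computation:

```python
# Hedged sketch of steering entropy in the spirit of Nakayama et al. (1999):
# extrapolate each steering sample from the three before it, bin the
# prediction errors into nine alpha-scaled bins, and take Shannon entropy
# (base 9). Whether the study used this exact formulation is an assumption.
import numpy as np

def steering_entropy(theta: np.ndarray, alpha: float) -> float:
    """theta: steering-angle series; alpha: 90th-percentile prediction
    error from baseline riding. Returns entropy in [0, 1]; higher = more
    erratic control."""
    # Second-order extrapolation of theta[n] from theta[n-1..n-3].
    d1 = theta[2:-1] - theta[1:-2]
    d2 = theta[1:-2] - theta[:-3]
    pred = theta[2:-1] + d1 + 0.5 * (d1 - d2)
    err = theta[3:] - pred
    # Nine bins with edges at +/-0.5, 1, 2.5, 5 times alpha.
    edges = alpha * np.array([-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5])
    counts = np.bincount(np.searchsorted(edges, err), minlength=9)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p) / np.log(9)).sum())
```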

Eye Tracking Glasses
Simulator

1 version available

Guiding gaze gestures on smartwatches: Introducing fireworks

Year: 2024

Authors: W Delamare, D Harada, L Yang, X Ren

Smartwatches enable interaction anytime and anywhere, with both digital and augmented physical objects. However, situations with busy hands can prevent user inputs. To address this limitation, we propose Fireworks, an innovative hands-free alternative that empowers smartwatch users to trigger commands effortlessly through intuitive gaze gestures by providing post-activation guidance. Fireworks allows command activation by guiding users to follow targets moving from the screen center to the edge, mimicking real-life fireworks. We present the experimental design and evaluation of two Fireworks instances. The first design employs temporal parallelization, displaying a few dynamic targets during microinteractions (e.g., snoozing a notification while cooking). The second design displays targets sequentially to support more commands (e.g., 20 commands), ideal for scenarios beyond microinteractions (e.g., turning on lights in a smart home). Results show that Fireworks’ single straight gestures enable faster and more accurate command selection compared to state-of-the-art baselines, namely Orbits and Stroke. Additionally, participants expressed a clear preference for Fireworks’ original visual guidance.
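
For context on how following a moving target can select a command (the principle shared with the Orbits baseline), selection is typically resolved by correlating the gaze trajectory with each target's trajectory over a short window. A minimal sketch, with the threshold as an assumption:

```python
# Minimal sketch of smooth-pursuit selection: pick the moving target whose
# trajectory best correlates with the gaze trajectory over the same window.
# The 0.8 threshold and windowing are assumptions, not the paper's values.
import numpy as np

def select_target(gaze_xy: np.ndarray, targets_xy: list[np.ndarray],
                  threshold: float = 0.8) -> int | None:
    """gaze_xy and each targets_xy[i] are (n_samples, 2) position traces.
    Returns the index of the best-matching target, or None if no match."""
    scores = []
    for target in targets_xy:
        # Mean Pearson correlation of the x and y component traces.
        rx = np.corrcoef(gaze_xy[:, 0], target[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], target[:, 1])[0, 1]
        scores.append((rx + ry) / 2)
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```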

Eye Tracking Glasses
Software

4 versions available

Head-mounted eye tracker videos and raw data collected during breathing recognition attempts in simulated cardiac arrest

Year: 2024

Authors: M Pedrotti, M Stanek, L Gelin, P Terrier

This paper presents data collected by Pedrotti et al. (2022, 2024) [1][2], which includes videos captured using a Dikablis head-mounted eye tracker (Ergoneers GmbH, Germany), along with the corresponding raw data. The data collection aimed to assess participants' ability to recognize breathing in a simulated cardiac arrest scenario. Equipped with the eye tracker, participants entered a room where a manikin was positioned on the floor. Their task was to determine if the manikin was breathing and respond accordingly, such as initiating cardiopulmonary resuscitation if the victim was not breathing. Our analysis focused on examining looking time on the manikin's thorax by inspecting the videos. Potential applications of the dataset [3] include identifying fixations and saccades using custom algorithms, analyzing pupil diameter data, and conducting secondary analyses involving participant characteristics like age and gender as independent variables.
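
One of the suggested uses, identifying fixations and saccades with a custom algorithm, could start from a simple velocity-threshold (I-VT) classifier like the sketch below; the threshold and input format are assumptions, not properties of the dataset.

```python
# Minimal I-VT (velocity-threshold) classifier, a common starting point for
# the fixation/saccade identification suggested above. The 100 deg/s
# threshold and degree-unit inputs are assumptions, not dataset properties.
import numpy as np

def classify_ivt(t: np.ndarray, x_deg: np.ndarray, y_deg: np.ndarray,
                 threshold_deg_s: float = 100.0) -> np.ndarray:
    """Label each inter-sample interval: True = saccade, False = fixation.
    t in seconds; x_deg, y_deg are gaze angles in degrees."""
    velocity = np.hypot(np.diff(x_deg), np.diff(y_deg)) / np.diff(t)
    return velocity > threshold_deg_s
```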

Eye Tracking Glasses
Simulator

2 versions available

Image-Analysis-Based Method for Exploring Factors Influencing the Visual Saliency of Signage in Metro Stations

Year: 2024

Authors: M Yin, X Zhou, S Yang, H Peng, C Li

Many studies have been conducted on the effects of colour, light, and signage location on the visual saliency of underground signage. However, few studies have investigated the influence of indoor visual environments on the saliency of pedestrian signage. To explore the factors that influence the visual saliency of signage in metro stations, we developed a novel analysis method using a combination of saliency and focus maps. Then, questionnaires were utilised to unify the various formats of results from the saliency and focus maps. The factors that influence the visual saliency of signage were explored using the proposed method at selected sites and validated through virtual reality experiments. Additionally, this study proposes an image-analysis-based method that reveals the multilevel factors affecting pedestrian attention to signage in underground metro stations, including spatial interfaces, crowd flow, and ambient light. The results indicate that crowd flow has the greatest impact on pedestrian attention to signage. The findings of this study are expected to improve the wayfinding efficiency of pedestrians and assist designers in producing high-quality metro experiences.
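
The fusion rule behind a combined saliency-and-focus analysis is not spelled out in the abstract; as a hedged sketch of the general idea, a gaze-based focus map can be built by smoothing fixation positions and combined with a saliency map, here via a simple normalized product (an assumption, not the authors' method).

```python
# Hedged sketch: build a focus map by Gaussian-smoothing fixation positions,
# then fuse it with a bottom-up saliency map. The normalized product is an
# illustrative fusion rule, not necessarily the authors' method.
import numpy as np
from scipy.ndimage import gaussian_filter

def focus_map(fixations_xy, shape, sigma_px=40.0):
    """Gaussian-smoothed fixation density; fixations_xy holds (x, y) pixel
    coordinates within an image of the given (height, width) shape."""
    fmap = np.zeros(shape)
    for x, y in fixations_xy:
        fmap[int(y), int(x)] += 1.0
    return gaussian_filter(fmap, sigma_px)

def combined_map(saliency: np.ndarray, fmap: np.ndarray) -> np.ndarray:
    """Elementwise product of the min-max-normalized maps."""
    norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-9)
    return norm(saliency) * norm(fmap)
```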

Eye Tracking Glasses
Software

1 version available

Inducing visual attention through audiovisual stimuli: Can synchronous sound be a salient event?

Year: 2024

Authors: I Salselas, F Pereira, E Sousa

We present experimental research aiming to explore how spatial attention may be biased through auditory stimuli. In particular, we investigate how synchronous sound and image may affect attention and increase the saliency of the audiovisual event. We designed and implemented an experimental study in which subjects, wearing an eye-tracking system, were examined regarding their gaze toward the audiovisual stimuli being displayed. The audiovisual stimuli were specifically tailored for this experiment, consisting of videos contrasting in terms of Synch Points (i.e., moments where a visual event is associated with a visible trigger movement, synchronous with its corresponding sound). While consistency across audiovisual sensory modalities proved to be an attention-drawing feature in itself, when combined with synchrony it clearly amplified this bias, triggering orienting, that is, focal attention toward the particular scene containing the Synch Point. Consequently, results revealed synchrony to be a saliency factor, contributing to the strengthening of focal attention. In today's increasingly complex multimedia landscape, the interaction between auditory and visual stimuli plays a pivotal role in shaping our perception and directing our attention. Within the context of research on multisensory attention, this study endeavors to explore the intricate dynamics of attentional allocation concerning audiovisual stimuli, specifically focusing on the impact of synchronized auditory and visual cues on capturing and directing attention.
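
One straightforward way to quantify such an effect (the authors' exact analysis is not given in the abstract) is to compare the share of gaze samples inside an area of interest around the synchronous event just after versus just before each Synch Point. A sketch, with window size and AOI test as assumptions:

```python
# Illustrative analysis sketch: AOI hit rate after vs. before a Synch Point.
# Window length and the rectangular AOI test are assumptions, not the
# study's actual analysis.
import numpy as np

def aoi_hit_rate(t, x, y, aoi, t0, t1):
    """Fraction of gaze samples inside AOI (x0, y0, x1, y1) for t0 <= t < t1."""
    x0, y0, x1, y1 = aoi
    sel = (t >= t0) & (t < t1)
    hits = (x[sel] >= x0) & (x[sel] <= x1) & (y[sel] >= y0) & (y[sel] <= y1)
    return hits.mean() if hits.size else float("nan")

def synch_point_effect(t, x, y, aoi, t_synch, window_s=1.0):
    """Post-event minus pre-event AOI hit rate around one Synch Point."""
    pre = aoi_hit_rate(t, x, y, aoi, t_synch - window_s, t_synch)
    post = aoi_hit_rate(t, x, y, aoi, t_synch, t_synch + window_s)
    return post - pre
```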

Eye Tracking Glasses
Software

7 versions available

Knowing me, knowing you—A study on top-down requirements for compensatory scanning in drivers with homonymous visual field loss

Year: 2024

Authors: B Biebl, M Kuhn, F Stolle, J Xu, K Bengler, AR Bowers

Objective: It is currently still unknown why some drivers with visual field loss can compensate well for their visual impairment while others adopt ineffective strategies. This paper contributes to the methodological investigation of the associated top-down mechanisms and aims at validating a theoretical model of the requirements for successful compensation among drivers with homonymous visual field loss. Methods: A driving simulator study was conducted with eight participants with homonymous visual field loss and eight participants with normal vision. Participants drove through an urban environment and experienced a baseline scenario and scenarios with visual precursors indicating increased likelihoods of crossing hazards. Novel measures for the assessment of the mental model of their visual abilities, the mental model of the driving scene, and the perceived attention demand were developed and used to investigate the top-down mechanisms behind attention allocation and hazard avoidance. Results: Participants who overestimated their visual field size tended to prioritize their seeing side over their blind side in both subjective and objective measures. The mental model of the driving scene was closely related to subjective and actual attention allocation. While participants with homonymous visual field loss were less anticipatory in their usage of the visual precursors and performed more poorly than participants with normal vision, the results indicate a stronger reliance on top-down mechanisms for drivers with visual impairments. A subjective focus on the seeing side or on near peripheries more frequently led to poor performance in terms of collisions with crossing cyclists. Conclusion: The study yielded promising indicators for the potential of novel measures to elucidate top-down mechanisms in drivers with homonymous visual field loss. Furthermore, the results largely support the model of requirements for successful compensatory scanning. The findings highlight the importance of individualized interventions and driver assistance systems tailored to address these mechanisms.

Simulator
Software

8 versions available

Mediational Affordances at a Science Centre Gallery: An Exploratory and Small Study Using Eye Tracking and Interviews

Year: 2024

Authors: TW Teo, ZHJ Loh, LE Kee, G Soh

Science centres are informal learning spaces embedded with artefacts embodying mediational affordances. This exploratory, small-scale mixed methods study juxtaposes eye-tracking technologies and qualitative interviews to examine how visitors to a gallery navigated this space and interacted with different artefacts. A total of 15 visitors to the science centre gallery, Energy Story, participated in the study. The findings revealed inconclusive results about the directionality of their navigation. The mediational affordances of the artefacts, as interpreted from the artefacts' interactive elements, the visitors' interactions, and the interviews, suggested that it was better to distribute the mediational affordances across a few artefacts in an exhibit rather than have one artefact embody several affordances. The concept of “mediational threshold” was suggested as a topic for future study. The findings contributed to the academic literature on eye-tracking studies at science centres. They also provided ideas for science centre curators and teachers who bring students with diverse learning needs to this mediational space.

Eye Tracking Glasses
Software

3 versions available