Publication Hub Archive

Work Safety

You have reached the Ergoneers Publication Hub for:

Field of Application > Work Safety

Total results: 548

GazeNav: Gaze-based pedestrian navigation

Year: 2015

Authors: I Giannopoulos, P Kiefer, M Raubal

Pedestrian navigation systems help us make a series of decisions that lead us to a destination. Most current pedestrian navigation systems communicate using map-based turn-by-turn instructions. This interaction mode suffers from ambiguity, depends on the user's ability to match the instruction with the environment, and requires a redirection of visual attention from the environment to the screen. In this paper we present GazeNav, a novel gaze-based approach for pedestrian navigation. GazeNav communicates the route to take based on the user's gaze at a decision point. We evaluate GazeNav against map-based turn-by-turn instructions. Based on an experiment conducted in a virtual environment with 32 participants, we found a significantly improved user experience of GazeNav compared to map-based instructions, and showed the effectiveness of GazeNav as well as evidence for better local spatial learning. We provide a complete comparison of navigation efficiency and effectiveness between the two approaches.

Simulator
Software

7 versions available
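
For readers who want to experiment with the idea, here is a minimal Python sketch of the decision-point logic implied by the abstract above. The function names, bearing representation, and tolerance are assumptions for illustration; the paper does not publish an implementation.

```python
# Hypothetical GazeNav-style check: confirm when the user's gaze at a
# decision point falls on the street that belongs to the route.

def angular_diff(a, b):
    """Smallest absolute difference between two bearings in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def street_under_gaze(gaze_bearing, street_bearings, tolerance=15.0):
    """Index of the outgoing street closest to the gaze bearing, or None."""
    best = min(range(len(street_bearings)),
               key=lambda i: angular_diff(gaze_bearing, street_bearings[i]))
    return best if angular_diff(gaze_bearing, street_bearings[best]) <= tolerance else None

def gazenav_feedback(gaze_bearing, street_bearings, route_street):
    """Return the feedback the system would give for the current gaze."""
    looked_at = street_under_gaze(gaze_bearing, street_bearings)
    if looked_at is None:
        return "no street in view"
    return "confirm" if looked_at == route_street else "reject"

# Three outgoing streets; the route continues on street 1 (bearing 90 deg).
print(gazenav_feedback(95.0, [0.0, 90.0, 200.0], route_street=1))  # confirm
```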

Glance awareness and gaze interaction in smartwatches

Year: 2015

Authors: D Akkil, J Kangas, J Rantala, P Isokoski

Smartwatches are widely available and increasingly adopted by consumers. The most common way of interacting with smartwatches is either touching a screen or pressing buttons on the sides. However, such techniques require using both hands. We propose glance awareness and active gaze interaction as alternative techniques to interact with smartwatches. We describe an experiment conducted to understand user preferences for visual and haptic feedback on a "glance" at the wristwatch. Following the glance, the users interacted with the watch using gaze gestures. Our results showed that user preferences differed depending on the complexity of the interaction. No clear preference emerged for complex interaction. For simple interaction, haptics was the preferred glance feedback modality.

Eye Tracking Glasses
Software

3 versions available
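
A hedged sketch of how the "glance" step could be detected from gaze samples before gaze gestures take over. The dwell threshold and the boolean on-watch signal are assumptions, not the authors' implementation.

```python
# Dwell-based glance detection: a glance is confirmed once gaze has
# rested on the watch AOI for a minimum dwell time.

GLANCE_DWELL_S = 0.2  # assumed minimum dwell time to count as a glance

def detect_glance(samples):
    """samples: iterable of (timestamp_s, on_watch) pairs.
    Returns the timestamp at which a glance is confirmed, or None.
    Feedback (visual or haptic) would fire at that moment, after which
    gaze gestures take over."""
    enter_t = None
    for t, on_watch in samples:
        if on_watch:
            if enter_t is None:
                enter_t = t
            elif t - enter_t >= GLANCE_DWELL_S:
                return t
        else:
            enter_t = None  # gaze left the watch; reset the dwell clock
    return None

# 30 Hz samples: gaze lands on the watch at t = 0.1 s and stays there.
samples = [(i / 30.0, i >= 3) for i in range(30)]
print(detect_glance(samples))  # ~0.3 s
```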

Graphical processing unit assisted image processing for accelerated eye tracking

Year: 2015

Authors: JPL Du Plessis

Eye tracking is a well-established tool utilised in research areas such as neuroscience, psychology and marketing. There are currently many different types of eye trackers available, the most common being video-based remote eye trackers. Many of the currently available remote eye trackers are either expensive, or provide a relatively low sampling frequency. The goal of this dissertation is to present researchers with the option of an affordable high-speed eye tracker. The eye tracker implementation presented in this dissertation was developed to address the lack of low-cost high-speed eye trackers currently available. Traditionally, low-cost systems make use of commercial off-the-shelf components. However, the high frequency at which the developed system runs prohibits the use of such hardware. Instead, affordability of the eye tracker has been evaluated relative to existing commercial systems. To facilitate these high frequencies, the eye tracker developed in this dissertation utilised the Graphical Processing Unit, Microsoft DirectX and HLSL in an attempt to accelerate eye tracking tasks – specifically the processing of the eye video. The final system was evaluated through experimentation to determine its performance in terms of accuracy, precision, trackability and sampling frequency. Through an experiment involving 31 participants, it was demonstrated that the developed solution is capable of sampling at frequencies of 200 Hz and higher, while allowing for head movements within an area of 10×6×10 cm. Furthermore, the system reports a pooled variance precision of approximately 0.3° and an accuracy of around 1° of visual angle for human participants. The entire system can be built for less than 700 euros, and will run on a mid-range computer system. Through the study an alternative is presented for more accessible research in numerous application fields.

Eye Tracking Glasses
Software

2 versions available
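
The accuracy and pooled-variance precision figures quoted above can be derived from raw gaze data roughly as follows. This is one common formulation in degrees of visual angle; the dissertation's exact computation may differ.

```python
# Sketch of eye-tracker evaluation measures in degrees of visual angle.
import numpy as np

def accuracy_deg(gaze, targets):
    """Mean angular offset between gaze samples and target positions.
    gaze, targets: (n, 2) arrays in degrees of visual angle."""
    return float(np.mean(np.linalg.norm(gaze - targets, axis=1)))

def pooled_precision_deg(per_target_gaze):
    """Pooled standard deviation of gaze dispersion across targets.
    per_target_gaze: list of (n_i, 2) arrays, one per fixation target."""
    num = den = 0.0
    for g in per_target_gaze:
        r = np.linalg.norm(g - g.mean(axis=0), axis=1)  # radial offsets
        num += (len(g) - 1) * np.var(r, ddof=1)
        den += len(g) - 1
    return float(np.sqrt(num / den))
```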

Inferring mindful cognitive‐processing of peer‐feedback via eye‐tracking: Role of feedback‐characteristics, fixation‐durations and transitions

Year: 2015

Authors: M Bolzer, JW Strijbos, F Fischer

Feedback literature identifies mindful cognitive processing of (peer) feedback and (peer) feedback characteristics – as well as the presence of justifications for feedback – as important for its efficiency. However, mindful cognitive processing has yet to be operationalized and investigated. In this study, an operationalization of mindful cognitive processing is introduced, alongside an investigation to identify valid measures for it. In a between-subjects design, peer feedback (PF) content [elaborated specific feedback with justifications (ESF + J) vs. elaborated specific feedback without justifications (ESF)] was varied. Students received a scenario containing an essay by a fictional student and fictional PF, followed by a text revision, distraction and PF recall task. Eye tracking was applied to measure (a) how written PF was (re-)read (fixation durations) and (b) the number of transitions occurring between PF and essay text. Mindful cognitive processing was inferred from the relation of fixation durations on PF and the number of transitions between essay text and PF with (a) text revision performance and (b) PF recall performance. When no justifications were provided, recipients invested more time in reading the PF and essay and increased the effort to relate the PF to essay text. Fixation durations and number of transitions proved to be valid measures to infer mindful cognitive processing.

Eye Tracking Glasses
Software

7 versions available
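
The two eye-tracking measures used in this study, fixation durations on the PF and transitions between PF and essay text, reduce to simple counts over a fixation sequence. A minimal sketch, assuming fixations labelled with an AOI (the data layout is hypothetical):

```python
# Total fixation duration on the feedback AOI and the number of
# transitions between the feedback and essay AOIs.

def fixation_measures(fixations):
    """fixations: list of (aoi, duration_ms), aoi in {"feedback", "essay", None}."""
    total_feedback_ms = sum(d for aoi, d in fixations if aoi == "feedback")
    transitions = 0
    last_aoi = None
    for aoi, _ in fixations:
        if aoi in ("feedback", "essay"):
            if last_aoi is not None and aoi != last_aoi:
                transitions += 1
            last_aoi = aoi  # fixations outside both AOIs are ignored
    return total_feedback_ms, transitions

# Example: feedback -> essay -> feedback yields 2 transitions.
print(fixation_measures([("feedback", 250), ("essay", 180), (None, 90), ("feedback", 300)]))
```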

Investigating the mechanisms underlying fixation durations during the first year of life: a computational account

Year: 2015

Authors: IR Saez de Urabain

Infants’ eye-movements provide a window onto the development of cognitive functions over the first years of life. Despite considerable advances in the past decade, studying the mechanisms underlying infant fixation duration and saccadic control remains a challenge due to practical and technical constraints in infant testing. This thesis addresses these issues and investigates infant oculomotor control by presenting novel software and methods for dealing with low-quality infant data (GraFIX), a series of behavioural studies involving novel gaze-contingent and scene-viewing paradigms, and computational modelling of fixation timing throughout development. In a cross-sectional study and two longitudinal studies, participants were eye-tracked while viewing dynamic and static complex scenes, and performed gap-overlap and double-step paradigms. Fixation data from these studies were modelled in a number of simulation studies with the CRISP model of fixation durations in adults in scene viewing. Empirical results showed how fixation durations decreased with age for all viewing conditions but at different rates. Individual differences between long- and short-lookers were found across visits and viewing conditions, with static images being the most stable viewing condition. Modelling results confirmed the CRISP theoretical framework’s applicability to infant data and highlighted the influence of both cognitive processing and the developmental state of the visuo-motor system on fixation durations during the first few months of life. More specifically, while the present work suggests that infant fixation durations reflect on-line perceptual and cognitive activity similarly to adults, the individual developmental state of the visuo-motor system still affects this relationship until 10 months of age. Furthermore, results suggested that infants are already able to program saccades in two stages at 3.5 months: (1) an initial labile stage subject to cancellation and (2) a subsequent non-labile stage that cannot be cancelled. The length of the non-labile stage decreased relative to the labile stage especially from 3.5 to 5 months, indicating a greater ability to cancel saccade programs as infants grew older. In summary, the present work provides unprecedented insights into the development of fixation durations and saccadic control during the first year of life and demonstrates the benefits of mixing behavioural and computational approaches to investigate methodologically challenging research topics such as oculomotor control in infancy.

Eye Tracking Glasses
Software

4 versions available
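
The CRISP framework mentioned above models fixation durations with a random-walk timer and two-stage (labile/non-labile) saccade programming. Below is a toy simulation in that spirit, with illustrative parameter values rather than the thesis's fitted ones.

```python
# Toy CRISP-style simulation: timer completions start saccade programs;
# a completion during another program's labile stage cancels it; once
# the non-labile stage begins, the saccade is committed.
import random

def gamma_interval(n_steps=10, step_ms=25.0):
    """Random-walk timer interval: sum of n_steps exponential steps."""
    return sum(random.expovariate(1.0 / step_ms) for _ in range(n_steps))

def simulate_fixation(labile_ms=180.0, nonlabile_ms=40.0):
    """Return one simulated fixation duration in milliseconds."""
    t = gamma_interval()  # first saccade program starts
    while True:
        labile = max(0.0, random.gauss(labile_ms, 0.2 * labile_ms))
        next_completion = gamma_interval()
        if next_completion < labile:
            t += next_completion  # program cancelled and restarted
        else:
            return t + labile + nonlabile_ms  # saccade committed, then executed

durations = [simulate_fixation() for _ in range(1000)]
print(sum(durations) / len(durations))  # mean simulated fixation duration (ms)
```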

Lexical processing in children and adults during word copying

Year: 2015

Authors: AE Laishley, SP Liversedge

Copying text may seem trivial, but the task itself is psychologically complex. It involves a series of sequential visual and cognitive processes, which must be co-ordinated; these include visual encoding, mental representation and written production. To investigate the time course of word processing during copying, we recorded eye movements of adults and children as they hand-copied isolated words presented on a classroom board. Longer and lower frequency words extended adults' encoding durations, suggesting whole word encoding. Only children's short word encoding was extended by lower frequency. Though children spent more time encoding long words compared to short words, gaze durations for long words were extended similarly for high- and low-frequency words. This suggested that for long words children used partial word representations and encoded multiple sublexical units rather than single whole words. Piecemeal word representation underpinned copying longer words in children, but reliance on partial word representations was not shown in adult readers.

Eye Tracking Glasses
Software

4 versions available

Malfunction of a traffic light assistant application on a smartphone

Year: 2015

Authors: M Krause, S Weichelt, K Bengler

A traffic light assistant on a smartphone was assessed in real traffic with an eye tracking system. In one experimental condition, the system showed (intentionally) false information to the drivers to simulate a malfunction. Glances during this condition showed gaze parameters similar to those of a working system. The subjective ratings of the test subjects after this malfunction dropped significantly. The gathered gaze data are compared to three former studies (two in a driving simulator and one in real road driving). Findings indicate that a driving simulator is a safe and reliable alternative for obtaining some of the glance data (e.g., glance durations to the smartphone) without driving in real traffic.

Simulator
Software

4 versions available
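
Glance durations to the smartphone, the example measure named in the abstract, can be derived by segmenting contiguous on-AOI gaze samples into glances. A minimal sketch, assuming a fixed-rate boolean sample stream (the data format is an assumption):

```python
# Segment gaze samples into glances at the smartphone AOI and return
# their durations in seconds.

def glance_durations(on_phone, sample_rate_hz=60.0):
    """on_phone: sequence of booleans, one per gaze sample."""
    durations, run = [], 0
    for hit in on_phone:
        if hit:
            run += 1
        elif run:
            durations.append(run / sample_rate_hz)
            run = 0
    if run:  # close a glance that runs to the end of the recording
        durations.append(run / sample_rate_hz)
    return durations

samples = [False, True, True, True, False, True, True, False]
print(glance_durations(samples))  # [0.05, 0.0333...]
```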

Mechanisms underlying selecting objects for action

Year: 2015

Authors: M Wulff, R Laverick, GW Humphreys

Activities of daily living such as making a cup of tea are computationally complex and potentially rely on the integration of different cognitive processes. We assessed the factors which affect the selection of objects for action, focusing on the role of action knowledge and its modulation by distracters. Fourteen neuropsychological patients and 10 healthy age-matched controls selected pairs of objects commonly used together among distracters in two contexts: with real objects and with pictures of the same objects presented sequentially on a computer screen. Across both tasks, semantically related distracters led to slower responses and more errors than unrelated distracters, and the object actively used for action was selected prior to the object that would be passively held during the action. We identified a sub-group of patients (N = 6) whose accuracy was 2 SDs below the controls' performance in the real object task. Interestingly, these impaired patients were more affected by the presence of unrelated distracters during both tasks than intact patients and healthy controls. Note that the impaired patients had lesions to left parietal, right anterior temporal and bilateral pre-motor regions. We conclude that: (1) motor procedures guide object selection for action, (2) semantic knowledge affects action-based selection, (3) impaired action decision making is associated with the inability to ignore distracting information and (4) lesions to either the dorsal or ventral visual stream can lead to deficits in making action decisions. Overall, the data indicate that impairments in everyday tasks can be evaluated using a simulated computer task. The implications for rehabilitation are discussed.

Simulator
Software

17 versions available

Mobile cognition: balancing user support and learning

Year: 2015

Authors: M Raubal

People engage in mobile decision-making on a daily basis. Spatially aware mobile devices have the potential to support users in spatio-temporal decision situations by augmenting their cognitive abilities or compensating for their deficiencies. In many cases though, this technology has a negative impact on people's spatial learning of the environment, such as during wayfinding. In this position paper we argue that mobile cognition must strive for solutions that find the right balance between immediate goals and longer-term objectives such as spatial learning.

Eye Tracking Glasses
Software

2 versions available

Postural sway and gaze can track the complex motion of a visual target

Year: 2015

Authors: V Hatzitaki, N Stergiou, G Sofianidis, A Kyvelidou

Variability is an inherent and important feature of human movement. This variability has form, exhibiting a chaotic structure. Visual feedback training using regular predictable visual target motions does not take into account this essential characteristic of the human movement, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the Anterior-Posterior (AP) and Medio-Lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown) and a periodic (sine) moving target while receiving online visual feedback about their performance. Postural sway, gaze, and target motion were synchronously recorded and the degree of force-target and gaze-target coupling was quantified using spectral coherence and Cross-Approximate entropy. Analysis revealed that both force-target and gaze-target coupling was sensitive to the complexity of the visual stimuli motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking of a complex signal may provide a better stimulus to improve visuo-motor adaptation and learning in postural control.

Eye Tracking Glasses
Simulator

19 versions available
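
The force-target coupling analysis described above pairs a Lorenz-attractor target with spectral coherence. A short sketch using the standard Lorenz constants and a synthetic sway signal in place of real data:

```python
# Generate a Lorenz-attractor target motion and compute its spectral
# coherence with a toy "tracking" signal (target plus noise).
import numpy as np
from scipy.signal import coherence

def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with Euler steps; return the x component."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out[i] = x
    return out

fs = 100.0                        # assumed sampling rate (Hz)
target = lorenz_x(6000)           # 60 s of target motion
sway = target + np.random.normal(scale=2.0, size=target.size)  # synthetic sway
f, cxy = coherence(sway, target, fs=fs, nperseg=1024)
print(f"mean coherence below 1 Hz: {cxy[f < 1.0].mean():.2f}")
```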