Publication Hub Archive

UX Analysis

You have reached the Ergoneers Publication Hub for:

Field of Application > UX Analysis

Total results: 588

Eye tracking recycle labels on packaging: Are attitudes and behaviors a predictor of viewing them?

Year: 2015

Authors: VA Smith

This study explores whether consumer attitudes and behaviors can predict their likelihood of viewing recycling labels on product packaging. Using eye-tracking technology, the research examines visual attention to recycling labels and assesses which factors might influence this behavior. The findings aim to provide insights into improving the design and placement of recycling labels to enhance consumer engagement and promote sustainable practices.

3 versions available

Eye-tracking technology in vehicles: application and design

Year: 2015

Authors: V Selimis

This work analyses eye-tracking technology and, as an outcome, presents an idea for implementing it, together with other technologies, in vehicles. The main advantage of such an implementation would be improved safety while driving. The setup and methodology used to detect human activity and interaction by means of eye tracking are investigated. Research in this area is growing rapidly and its results are applied in a variety of cases. The main reasons for this growth are the steadily falling prices of the necessary equipment, the portability available in some cases, and the ease of use that makes the technology far more user-friendly than it was a few years ago. The core idea of eye tracking is to follow the movements of the eyes in order to determine the direction of gaze, using sophisticated software and purpose-built hardware. The manuscript gives a brief introduction to the history of eye monitoring, presenting the earliest scientific approaches used to better understand human eye movements while tracking an object or performing an activity. An overview of the theory and methodology used to track a specific object follows, together with a short presentation of the image-processing and machine-learning procedures used to accomplish such tasks. The manuscript then analyses the eye-tracking technologies and techniques in use today and the characteristics that determine the choice of eye-tracking equipment, a choice that depends on the research area in which the equipment will be used. In addition, the main categories of eye-tracking applications are presented, and the latest state-of-the-art commercial eye-tracking systems are shortlisted. The authors then present a first approach to an eye-tracking device that could be used in vehicles to raise safety standards by monitoring various parameters, continuously checking the readiness of the driver, and alerting them to potential imminent collisions. Finally, the manuscript describes how such a device, in this case an eye tracker, can be connected to an automobile's systems.
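
To make the driver-readiness idea concrete, here is a minimal Python sketch of the kind of check such a system might run: warn once the driver's gaze has stayed off the road scene longer than a threshold. The 2-second threshold, the 60 Hz sample rate, and the boolean on-road/off-road labelling are illustrative assumptions, not values or design choices taken from the manuscript.

    # Toy driver-readiness monitor: alert when gaze has been off the road
    # scene for too long. Thresholds are assumptions for illustration only.
    def off_road_alert(gaze_samples, sample_rate_hz=60, max_off_road_s=2.0):
        """gaze_samples: chronological list of True (on road) / False (off road)."""
        max_run = int(max_off_road_s * sample_rate_hz)
        off_road_run = 0
        for on_road in gaze_samples:
            off_road_run = 0 if on_road else off_road_run + 1
            if off_road_run >= max_run:
                return True   # driver has looked away too long -> warn
        return False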

3 versions available

Eyetracking Data Analysis Tool

Year: 2015

Authors: K Sippel, T Kübler, W Fuhl, G Schievelbein

Over the last few years, eye tracking has become more and more popular, and a variety of new eye-tracker models and algorithms for eye-tracking data processing have emerged. On the one hand, this multitude of hardware and software brings many advantages; on the other hand, the diversity of devices and measures impedes the comparability and repeatability of eye-tracking studies. While the supply of eye-tracking software is large, how the algorithms work, e.g. how fixations and saccades are identified, is often opaque and inflexible. The Eyetrace software bundle approaches these problems by providing a variety of different evaluation methods compatible with many eye-tracker models. Eyetrace2014 combines state-of-the-art algorithms with established approaches and provides a continuous visualization of the analysis process. All calculations provide user-adaptable parameters and are well documented and referenced in order to make the whole analysis transparent. Our software is available free of charge. It is well suited for exploratory data analysis and education (http://www.ti.uni-tuebingen.de/Eyetrace.1751.0.html).
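
As an illustration of what such a fixation-identification step involves, the following is a short Python sketch of a dispersion-threshold (I-DT) filter, one common textbook approach. It is not the algorithm implemented in Eyetrace2014, and the dispersion and duration thresholds are assumed values of the kind that would normally be exposed as user-adaptable parameters.

    # Dispersion-threshold (I-DT) fixation detection sketch.
    # samples: list of (t, x, y) tuples, t in seconds, x/y in degrees.
    def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
        fixations = []
        start = 0
        while start < len(samples):
            end = start
            window = samples[start:end + 1]
            # Grow the window while its spatial dispersion stays below threshold.
            while end + 1 < len(samples):
                candidate = samples[start:end + 2]
                xs = [p[1] for p in candidate]
                ys = [p[2] for p in candidate]
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
                window = candidate
            duration = window[-1][0] - window[0][0]
            if duration >= min_duration:
                cx = sum(p[1] for p in window) / len(window)
                cy = sum(p[2] for p in window) / len(window)
                fixations.append({"start": window[0][0], "duration": duration,
                                  "x": cx, "y": cy})
                start = end + 1
            else:
                start += 1
        return fixations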

4 versions available

Gaze estimation on glasses-based stereoscopic displays

Year: 2015

Authors: L Świrski

Glasses-based 3D displays, such as those used in stereoscopic cinema or 3D monitors, are currently the most common form of 3D display. However, they are often reported to cause headaches and discomfort. One of the reasons for this is the vergence–accommodation conflict, where the binocular stimulus of one’s eyes rotating to look at a point is decoupled from the monocular stimulus of the eye lenses focusing on a point. This discomfort could be decreased by estimating the depth of a person’s gaze, and simulating a depth-of-field effect contingent on where they are looking in 3D space. In this dissertation, I investigate gaze estimation on such glasses-based 3D displays. Furthermore, I explore the feasibility of this gaze estimation with realistic constraints, such as low cost, low complexity hardware, free head motion, and real-time gaze estimates. I propose several algorithms for eye tracking and gaze estimation which are designed to work robustly and accurately despite these constraints. Firstly, I present a pupil detection approach which can accurately detect the pupil contour in difficult, off-axis images such as those captured by my eye cameras, which are attached underneath the frame of a pair of glasses. My algorithm is robust to occlusions such as eyelashes and eyelids, and operates in real-time. I evaluate it using a manually labelled dataset, and show that it has a higher detection rate than existing approaches, and sub-pixel accuracy. Secondly, I investigate the issue of evaluating gaze estimation, especially the question of how to collect ground truth data. As a result of this investigation, I present a new evaluation framework, which renders photorealistic synthetic eye images that can be used for evaluating the computer vision aspects of eye tracking. Thirdly, I present a novel eye model fitting algorithm, which initialises and refines an eye model based solely on pupil data, with no need for calibration or controlled lighting. I describe the geometry of initialising this eye model, and two methods of refining it using two different optimisation metrics. I evaluate it using synthetic images, and show that my refinements give a significant improvement in detection rate and gaze accuracy. Lastly, I present a binocular gaze estimation system which combines the above methods. My system performs geometric gaze estimation by combining the monocular eye models fitted to the left and right eye images. I describe two methods for combining these into a single binocular gaze point estimate, and methods for calibrating and refining this estimate. I then evaluate this system by performing a user study, showing that my system works for gaze estimation on glasses-based displays and is sufficiently accurate for simulating depth-of-field.
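
For readers unfamiliar with pupil-contour detection, the Python/OpenCV sketch below shows the simplest version of the idea: threshold the darkest blob in the eye image and fit an ellipse to its contour. It is only a baseline illustration; Świrski's algorithm is considerably more robust to off-axis views and to eyelid and eyelash occlusion, and the threshold value here is an assumption.

    import cv2

    # Baseline pupil localisation: dark-blob threshold + ellipse fit.
    def find_pupil_ellipse(eye_image_gray, threshold=40):
        blurred = cv2.GaussianBlur(eye_image_gray, (7, 7), 0)
        # The pupil is usually the darkest region; the threshold is an assumption.
        _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        if len(largest) < 5:            # cv2.fitEllipse needs at least 5 points
            return None
        return cv2.fitEllipse(largest)  # ((cx, cy), (major, minor), angle)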

1 version available

GazeNav: Gaze-based pedestrian navigation

Year: 2015

Authors: I Giannopoulos, P Kiefer, M Raubal

Pedestrian navigation systems help us make a series of decisions that lead us to a destination. Most current pedestrian navigation systems communicate using map-based turn-by-turn instructions. This interaction mode suffers from ambiguity, depends on the user's ability to match the instruction with the environment, and requires redirecting visual attention from the environment to the screen. In this paper we present GazeNav, a novel gaze-based approach for pedestrian navigation. GazeNav communicates the route to take based on the user's gaze at a decision point. We evaluate GazeNav against map-based turn-by-turn instructions. Based on an experiment conducted in a virtual environment with 32 participants, we found a significantly improved user experience of GazeNav compared to map-based instructions, showed the effectiveness of GazeNav, and found evidence for better local spatial learning. We provide a complete comparison of navigation efficiency and effectiveness between the two approaches.

7 versions available

Glance awareness and gaze interaction in smartwatches

Year: 2015

Authors: D Akkil, J Kangas, J Rantala, P Isokoski

Smartwatches are widely available and increasingly adopted by consumers. The most common way of interacting with smartwatches is either touching a screen or pressing buttons on the sides. However, such techniques require using both hands. We propose glance awareness and active gaze interaction as alternative techniques for interacting with smartwatches. We describe an experiment conducted to understand user preferences for visual and haptic feedback on a "glance" at the wristwatch. Following the glance, the users interacted with the watch using gaze gestures. Our results showed that user preferences differed depending on the complexity of the interaction. No clear preference emerged for complex interaction; for simple interaction, haptics was the preferred glance feedback modality.

3 versions available

Graphical processing unit assisted image processing for accelerated eye tracking

Year: 2015

Authors: JPL Du Plessis

Eye tracking is a well-established tool utilised in research areas such as neuroscience, psychology and marketing. There are currently many different types of eye trackers available, the most common being video-based remote eye trackers. Many of the currently available remote eye trackers are either expensive, or provide a relatively low sampling frequency. The goal of this dissertation is to present researchers with the option of an affordable high-speed eye tracker. The eye tracker implementation presented in this dissertation was developed to address the lack of low-cost high-speed eye trackers currently available. Traditionally, low-cost systems make use of commercial off-the-shelf components. However, the high frequency at which the developed system runs prohibits the use of such hardware. Instead, affordability of the eye tracker has been evaluated relative to existing commercial systems. To facilitate these high frequencies, the eye tracker developed in this dissertation utilised the Graphical Processing Unit, Microsoft DirectX and HLSL in an attempt to accelerate eye tracking tasks – specifically the processing of the eye video. The final system was evaluated through experimentation to determine its performance in terms of accuracy, precision, trackability and sampling frequency. Through an experiment involving 31 participants, it was demonstrated that the developed solution is capable of sampling at frequencies of 200 Hz and higher, while allowing for head movements within a volume of 10×6×10 cm. Furthermore, the system reports a pooled variance precision of approximately 0.3° and an accuracy of around 1° of visual angle for human participants. The entire system can be built for less than 700 euros, and will run on a mid-range computer system. The study thus presents an alternative that makes research in numerous application fields more accessible.
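
Much of the image processing on the eye video consists of independent per-pixel operations, which is what makes GPU shaders attractive for this task. The Python/NumPy sketch below shows one such per-pixel step (thresholding dark pupil pixels and taking their centroid); it illustrates the kind of work that parallelises well on a GPU, not the dissertation's actual DirectX/HLSL pipeline, and the threshold value is an assumption.

    import numpy as np

    def pupil_centroid(frame_gray, threshold=40):
        """frame_gray: 2-D uint8 array; returns the (x, y) centroid of dark pixels."""
        # Every pixel is tested independently -- exactly the kind of operation
        # a pixel shader can evaluate in parallel on the GPU.
        mask = frame_gray < threshold
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())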

2 versions available

Inferring mindful cognitive‐processing of peer‐feedback via eye‐tracking: Role of feedback‐characteristics, fixation‐durations and transitions

Year: 2015

Authors: M Bolzer, JW Strijbos, F Fischer

Feedback literature identifies mindful cognitive processing of (peer) feedback and (peer) feedback characteristics – as well as the presence of justifications for feedback – as important for its efficiency. However, mindful cognitive processing has yet to be operationalized and investigated. In this study, an operationalization of mindful cognitive processing is introduced, alongside an investigation to identify valid measures for it. In a between-subjects design, peer feedback (PF) content [elaborated specific feedback with justifications (ESF + J) vs. elaborated specific feedback without justifications (ESF)] was varied. Students received a scenario containing an essay by a fictional student and fictional PF, followed by a text revision, distraction and PF recall task. Eye tracking was applied to measure (a) how written PF was (re-) read (fixation durations) and (b) the number of transitions occurring between PF and essay text. Mindful cognitive processing was inferred from the relation between fixation durations on PF and number of transitions between essay text and PF with (a) text revision performance and (b) PF recall performance. When no justifications were provided, recipients invested more time in reading the PF and essay and increased the effort to relate the PF to essay text. Fixation durations and number of transitions proved to be valid measures to infer mindful cognitive processing.
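
As a concrete illustration of the two eye-tracking measures, the Python sketch below computes total fixation duration on the peer feedback and the number of transitions between the feedback and essay areas of interest (AOIs) from an ordered fixation list. The AOI labels and the fixation format are assumptions for illustration, not the authors' analysis code.

    def aoi_measures(fixations):
        """fixations: ordered list of (aoi, duration_ms), aoi in {'feedback', 'essay'}."""
        total_on_feedback = sum(d for aoi, d in fixations if aoi == "feedback")
        transitions = 0
        previous = None
        for aoi, _ in fixations:
            # Count every switch between the feedback text and the essay text.
            if previous is not None and aoi != previous:
                transitions += 1
            previous = aoi
        return total_on_feedback, transitions

    # Example: feedback -> essay -> feedback gives 2 transitions.
    print(aoi_measures([("feedback", 350), ("essay", 220), ("feedback", 410)]))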

7 versions available

Investigating the mechanisms underlying fixation durations during the first year of life: a computational account

Year: 2015

Authors: IR Saez de Urabain

Infants’ eye-movements provide a window onto the development of cognitive functions over the first years of life. Despite considerable advances in the past decade, studying the mechanisms underlying infant fixation duration and saccadic control remains a challenge due to practical and technical constraints in infant testing. This thesis addresses these issues and investigates infant oculomotor control by presenting novel software and methods for dealing with low-quality infant data (GraFIX), a series of behavioural studies involving novel gaze-contingent and scene-viewing paradigms, and computational modelling of fixation timing throughout development. In a cross-sectional study and two longitudinal studies, participants were eye-tracked while viewing dynamic and static complex scenes, and performed gap-overlap and double-step paradigms. Fixation data from these studies were modelled in a number of simulation studies with the CRISP model of fixation durations in adults in scene viewing. Empirical results showed how fixation durations decreased with age for all viewing conditions but at different rates. Individual differences between long- and short-lookers were found across visits and viewing conditions, with static images being the most stable viewing condition. Modelling results confirmed the CRISP theoretical framework’s applicability to infant data and highlighted the influence of both cognitive processing and the developmental state of the visuo-motor system on fixation durations during the first few months of life. More specifically, while the present work suggests that infant fixation durations reflect on-line perceptual and cognitive activity similarly to adults, the individual developmental state of the visuo-motor system still affects this relationship until 10 months of age. Furthermore, results suggested that infants are already able to program saccades in two stages at 3.5 months: (1) an initial labile stage subject to cancellation and (2) a subsequent non-labile stage that cannot be cancelled. The length of the non-labile stage decreased relative to the labile stage especially from 3.5 to 5 months, indicating a greater ability to cancel saccade programs as infants grew older. In summary, the present work provides unprecedented insights into the development of fixation durations and saccadic control during the first year of life and demonstrates the benefits of mixing behavioural and computational approaches to investigate methodologically challenging research topics such as oculomotor control in infancy.
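
The labile/non-labile distinction can be illustrated with a toy Python sketch: a new saccade program can cancel an ongoing one only while the latter is still in its labile stage; once the non-labile stage has begun, the saccade will be executed. The stage durations below are made-up numbers for illustration, not CRISP parameter estimates from the thesis.

    # Toy two-stage saccade programming: cancellation only during the labile stage.
    def simulate_program(labile_ms, nonlabile_ms, cancel_at_ms=None):
        if cancel_at_ms is not None and cancel_at_ms < labile_ms:
            return "cancelled during labile stage"
        return "saccade executed after %d ms" % (labile_ms + nonlabile_ms)

    print(simulate_program(labile_ms=150, nonlabile_ms=50, cancel_at_ms=100))  # cancelled
    print(simulate_program(labile_ms=150, nonlabile_ms=50, cancel_at_ms=180))  # executed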

4 versions available

Lexical processing in children and adults during word copying

Year: 2015

Authors: AE Laishley, SP Liversedge

Copying text may seem trivial, but the task itself is psychologically complex. It involves a series of sequential visual and cognitive processes, which must be co-ordinated; these include visual encoding, mental representation and written production. To investigate the time course of word processing during copying, we recorded eye movements of adults and children as they hand-copied isolated words presented on a classroom board. Longer and lower frequency words extended adults' encoding durations, suggesting whole word encoding. Only children's short word encoding was extended by lower frequency. Though children spent more time encoding long words compared to short words, gaze durations for long words were extended similarly for high- and low-frequency words. This suggested that for long words children used partial word representations and encoded multiple sublexical units rather than single whole words. Piecemeal word representation underpinned copying longer words in children, but reliance on partial word representations was not shown in adult readers.

4 versions available