Publication Hub Archive

Dikablis Glasses

Total results: 509

Eye-tracking technology in vehicles: application and design

Year: 2015

Authors: V Selimis

This work analyses eye-tracking technology and, as an outcome, presents an idea for implementing it, along with other technologies, in vehicles. The main advantage of such an implementation would be to augment safety while driving. The setup and the methodology used for detecting human activity and interaction by means of eye-tracking technology are investigated. Research in this area is growing rapidly and its results are used in a variety of cases. The main reasons for this growth are the steadily falling prices of the necessary equipment, the portability available in some cases, and the ease of use that makes the technology more user-friendly than it was a few years ago. The whole idea of eye tracking is to track the movements of the eyes in an effort to determine the direction of the gaze, using sophisticated software and purpose-built hardware. This manuscript gives a brief introduction to the history of eye monitoring, presenting the very early scientific approaches used in an effort to better understand human eye movements while tracking an object or during an activity. There follows an overview of the theory and the methodology used to track a specific object, including a short presentation of the image-processing and machine-learning procedures used to accomplish such tasks. Thereafter, we further analyse the specific eye-tracking technologies and techniques in use today and the characteristics that affect the choice of eye-tracking equipment; the appropriate choice depends on the research area in which the equipment will be used. In addition, the main categories of eye-tracking applications are presented and we shortlist the latest state-of-the-art commercial eye-tracking systems. We then present our first approach, describing an eye-tracking device that could be used in vehicles to offer much better safety standards by controlling various parameters, continuously checking the readiness of the driver, and alerting the driver to potentially imminent collisions. Finally, we describe how a device, in our case an eye tracker, can be connected to an automobile’s system.
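
As a loose illustration of the driver-readiness monitoring described above, the sketch below flags when gaze has left a road-ahead area of interest for too long. It is not from this work; the AOI bounds, sampling rate, and alert threshold are invented for the example.

```python
# Illustrative sketch (not from this work): alert when the driver's gaze
# has been off the road-ahead area of interest for too long.
ROAD_AOI = (0.25, 0.15, 0.75, 0.65)  # hypothetical normalised (x0, y0, x1, y1)
SAMPLE_RATE_HZ = 60                  # hypothetical tracker sampling rate
MAX_OFF_ROAD_S = 2.0                 # hypothetical alert threshold in seconds

def gaze_on_road(x, y):
    x0, y0, x1, y1 = ROAD_AOI
    return x0 <= x <= x1 and y0 <= y <= y1

def off_road_alerts(gaze_samples):
    """Yield sample indices at which an off-road alert should fire."""
    off_road = 0
    for i, (x, y) in enumerate(gaze_samples):
        off_road = 0 if gaze_on_road(x, y) else off_road + 1
        if off_road / SAMPLE_RATE_HZ > MAX_OFF_ROAD_S:
            yield i
            off_road = 0  # reset so the alert does not fire every sample

# Example: the driver looks away from the road after sample 100.
samples = [(0.5, 0.4)] * 100 + [(0.9, 0.9)] * 200
print(list(off_road_alerts(samples)))  # [220]
```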

Eye Tracking Glasses
Software

3 versions available

Gaze estimation on glasses-based stereoscopic displays

Year: 2015

Authors: L Świrski

Glasses-based 3D displays, such as those used in stereoscopic cinema or 3D monitors, are currently the most common form of 3D display. However, they are often reported to cause headaches and discomfort. One of the reasons for this is the vergence–accommodation conflict, where the binocular stimulus of one’s eyes rotating to look at a point is decoupled from the monocular stimulus of the eye lenses focusing on a point. This discomfort could be decreased by estimating the depth of a person’s gaze, and simulating a depth-of-field effect contingent on where they are looking in 3D space. In this dissertation, I investigate gaze estimation on such glasses-based 3D displays. Furthermore, I explore the feasibility of this gaze estimation with realistic constraints, such as low-cost, low-complexity hardware, free head motion, and real-time gaze estimates. I propose several algorithms for eye tracking and gaze estimation which are designed to work robustly and accurately despite these constraints. Firstly, I present a pupil detection approach which can accurately detect the pupil contour in difficult, off-axis images such as those captured by my eye cameras, which are attached underneath the frame of a pair of glasses. My algorithm is robust to occlusions such as eyelashes and eyelids, and operates in real time. I evaluate it using a manually labelled dataset, and show that it has a higher detection rate than existing approaches, and sub-pixel accuracy. Secondly, I investigate the issue of evaluating gaze estimation, especially the question of how to collect ground truth data. As a result of this investigation, I present a new evaluation framework, which renders photorealistic synthetic eye images that can be used for evaluating the computer vision aspects of eye tracking. Thirdly, I present a novel eye model fitting algorithm, which initialises and refines an eye model based solely on pupil data, with no need for calibration or controlled lighting. I describe the geometry of initialising this eye model, and two methods of refining it using two different optimisation metrics. I evaluate it using synthetic images, and show that my refinements give a significant improvement in detection rate and gaze accuracy. Lastly, I present a binocular gaze estimation system which combines the above methods. My system performs geometric gaze estimation by combining the monocular eye models fitted to the left and right eye images. I describe two methods for combining these into a single binocular gaze point estimate, and methods for calibrating and refining this estimate. I then evaluate this system by performing a user study, showing that my system works for gaze estimation on glasses-based displays and is sufficiently accurate for simulating depth-of-field.
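
The final binocular system combines two monocular gaze rays into a single gaze point. One standard way to do this, shown below as a generic geometric sketch rather than Świrski's actual method, is to triangulate the gaze point as the midpoint of the shortest segment between the two rays; the eye positions and fixation target are made up.

```python
import numpy as np

def binocular_gaze_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two gaze rays.

    o1, o2: eye centres; d1, d2: gaze directions. A generic
    triangulation, not the dissertation's implementation.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimising |(o1 + t1*d1) - (o2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b  # approaches 0 for (near-)parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

# Hypothetical example: eyes 6 cm apart, both fixating a point half a
# metre ahead and slightly up and to the right (units in metres).
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.1, 0.05, 0.5])
print(binocular_gaze_point(left, target - left, right, target - right))
# -> [0.1  0.05 0.5 ]
```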

Simulator
Software

1 version available

GazeNav: Gaze-based pedestrian navigation

Year: 2015

Authors: I Giannopoulos, P Kiefer, M Raubal

Pedestrian navigation systems help us make a series of decisions that lead us to a destination. Most current pedestrian navigation systems communicate using map-based turn-by-turn instructions. This interaction mode suffers from ambiguity, depends on the user's ability to match the instruction with the environment, and requires redirecting visual attention from the environment to the screen. In this paper we present GazeNav, a novel gaze-based approach for pedestrian navigation. GazeNav communicates the route to take based on the user's gaze at a decision point. We evaluate GazeNav against map-based turn-by-turn instructions. Based on an experiment conducted in a virtual environment with 32 participants, we found a significantly improved user experience for GazeNav compared to map-based instructions, showed the effectiveness of GazeNav, and found evidence for better local spatial learning. We provide a complete comparison of navigation efficiency and effectiveness between the two approaches.
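
As an illustration of how a gaze-based navigation aid might decide which street the user is looking down at a decision point, the sketch below matches a gaze bearing against candidate street bearings. The tolerance and bearings are hypothetical; this is not the paper's implementation.

```python
def angular_diff(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def gazed_street(gaze_bearing, street_bearings, tolerance_deg=15.0):
    """Index of the street the user appears to look down, or None.

    tolerance_deg is a hypothetical matching threshold.
    """
    best = min(range(len(street_bearings)),
               key=lambda i: angular_diff(gaze_bearing, street_bearings[i]))
    if angular_diff(gaze_bearing, street_bearings[best]) <= tolerance_deg:
        return best
    return None

# Hypothetical decision point with three outgoing streets; the route
# continues along the street at bearing 95 degrees (index 1).
streets = [10.0, 95.0, 200.0]
route_street = 1
looked_at = gazed_street(gaze_bearing=101.0, street_bearings=streets)
print("confirm route" if looked_at == route_street else "keep searching")
```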

Simulator
Software

7 versions available

Glance awareness and gaze interaction in smartwatches

Year: 2015

Authors: D Akkil, J Kangas, J Rantala, P Isokoski

Smartwatches are widely available and increasingly adopted by consumers. The most common way of interacting with smartwatches is either touching a screen or pressing buttons on the sides. However, such techniques require using both hands. We propose glance awareness and active gaze interaction as alternative techniques for interacting with smartwatches. We describe an experiment conducted to understand user preferences for visual and haptic feedback on a "glance" at the wristwatch. Following the glance, the users interacted with the watch using gaze gestures. Our results showed that user preferences differed depending on the complexity of the interaction. No clear preference emerged for complex interaction. For simple interaction, haptics was the preferred glance feedback modality.
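
The abstract does not specify the gesture set, but a minimal gaze-gesture detector in the spirit it describes, recognising a horizontal left-then-right gaze stroke from normalised gaze coordinates, might look like this; the threshold and trace are made up.

```python
def left_then_right_gesture(xs, threshold=0.3):
    """Detect a crude horizontal gaze gesture in a trace of x positions.

    xs: normalised horizontal gaze positions over time (0..1).
    threshold: hypothetical minimum stroke amplitude.
    """
    highest = xs[0]   # rightmost point seen before the leftward stroke
    lowest = None     # leftmost point of the stroke, once it has started
    for x in xs[1:]:
        if lowest is None:
            highest = max(highest, x)
            if highest - x >= threshold:  # leftward stroke completed
                lowest = x
        else:
            lowest = min(lowest, x)
            if x - lowest >= threshold:   # return stroke completed
                return True
    return False

# Hypothetical gaze trace: sweep left, then back to the right.
trace = [0.8, 0.7, 0.5, 0.4, 0.45, 0.6, 0.8]
print(left_then_right_gesture(trace))  # True
```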

Eye Tracking Glasses
Software

3 versions available

Inferring mindful cognitive-processing of peer-feedback via eye-tracking: Role of feedback-characteristics, fixation-durations and transitions

Year: 2015

Authors: M Bolzer, JW Strijbos, F Fischer

Feedback literature identifies mindful cognitive processing of (peer) feedback and (peer) feedback characteristics – as well as the presence of justifications for feedback – as important for its efficiency. However, mindful cognitive processing has yet to be operationalized and investigated. In this study, an operationalization of mindful cognitive processing is introduced, alongside an investigation to identify valid measures for it. In a between-subjects design, peer feedback (PF) content [elaborated specific feedback with justifications (ESF + J) vs. elaborated specific feedback without justifications (ESF)] was varied. Students received a scenario containing an essay by a fictional student and fictional PF, followed by a text revision, distraction and PF recall task. Eye tracking was applied to measure (a) how written PF was (re-) read (fixation durations) and (b) the number of transitions occurring between PF and essay text. Mindful cognitive processing was inferred from the relation between fixation durations on PF and number of transitions between essay text and PF with (a) text revision performance and (b) PF recall performance. When no justifications were provided, recipients invested more time in reading the PF and essay and increased the effort to relate the PF to essay text. Fixation durations and number of transitions proved to be valid measures to infer mindful cognitive processing.
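
The two measures inferred from the eye-tracking data, fixation durations per area of interest and transitions between the peer feedback and the essay, can be computed from a labelled fixation sequence roughly as follows; the AOI labels and durations below are invented for illustration.

```python
from collections import Counter

# Hypothetical labelled fixations: (area of interest, duration in ms).
fixations = [("feedback", 250), ("feedback", 310), ("essay", 180),
             ("feedback", 220), ("essay", 400), ("essay", 150)]

# Total fixation duration per area of interest.
durations = Counter()
for aoi, ms in fixations:
    durations[aoi] += ms

# Transitions: changes of AOI between consecutive fixations.
transitions = sum(a != b for (a, _), (b, _) in zip(fixations, fixations[1:]))

print(dict(durations))  # {'feedback': 780, 'essay': 730}
print(transitions)      # 3
```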

Eye Tracking Glasses
Software

7 versions available

Lexical processing in children and adults during word copying

Year: 2015

Authors: AE Laishley, SP Liversedge

Copying text may seem trivial, but the task itself is psychologically complex. It involves a series of sequential visual and cognitive processes, which must be co-ordinated; these include visual encoding, mental representation and written production. To investigate the time course of word processing during copying, we recorded eye movements of adults and children as they hand-copied isolated words presented on a classroom board. Longer and lower-frequency words extended adults' encoding durations, suggesting whole-word encoding. Only children's encoding of short words was extended by lower frequency. Though children spent more time encoding long words compared to short words, gaze durations for long words were extended similarly for high- and low-frequency words. This suggested that for long words children used partial word representations and encoded multiple sublexical units rather than single whole words. Piecemeal word representation underpinned the copying of longer words in children, but reliance on partial word representations was not shown in adult readers.
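
Gaze duration, the measure reported here, is conventionally the sum of fixation durations on a word before the eyes first leave it. A minimal computation of that conventional measure, not the authors' analysis code, with made-up fixation data:

```python
def first_pass_gaze_duration(fixations, word):
    """Sum of fixation durations on `word` before gaze first leaves it.

    fixations: (word_index, duration_ms) pairs in temporal order.
    An illustration of the conventional measure, not the authors' code.
    """
    total, entered = 0, False
    for w, ms in fixations:
        if w == word:
            total += ms
            entered = True
        elif entered:
            break  # leaving the word ends the first pass
    return total

# Hypothetical sequence over three words; the late refixation on
# word 1 falls outside the first pass and does not count.
seq = [(0, 200), (1, 250), (1, 180), (2, 300), (1, 220)]
print(first_pass_gaze_duration(seq, 1))  # 430
```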

Eye Tracking Glasses
Software

4 versions available

Malfunction of a traffic light assistant application on a smartphone

Year: 2015

Authors: M Krause, S Weichelt, K Bengler

A traffic light assistant on a smartphone is assessed in real traffic with an eye-tracking system. In one experimental condition, the system showed (intentionally) false information to the drivers to simulate a malfunction. Glances in this condition showed gaze parameters similar to those with a working system. The subjective ratings of the test subjects after this malfunction dropped significantly. The gathered gaze data are compared to three former studies (two in a driving simulator and another in real road driving). Findings indicate that a driving simulator is a safe and reliable alternative for obtaining some of the glance data (e.g., glance durations to the smartphone) without driving in real traffic.
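
Glance durations to the smartphone, one of the compared gaze parameters, can be derived from AOI-labelled gaze samples by merging contiguous on-AOI runs. The sketch below assumes a 60 Hz tracker; it is an illustration, not the study's pipeline.

```python
def glance_durations(on_aoi, sample_rate_hz=60):
    """Durations in seconds of contiguous glances to an AOI.

    on_aoi: per-sample booleans, True while gaze is on the smartphone.
    sample_rate_hz is an assumed tracker rate, not from the paper.
    """
    durations, run = [], 0
    for hit in on_aoi:
        if hit:
            run += 1
        elif run:
            durations.append(run / sample_rate_hz)
            run = 0
    if run:  # close a glance that runs to the end of the recording
        durations.append(run / sample_rate_hz)
    return durations

# Hypothetical mask containing two glances, 30 and 90 samples long.
mask = [False] * 20 + [True] * 30 + [False] * 40 + [True] * 90
print(glance_durations(mask))  # [0.5, 1.5]
```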

Simulator
Software

4 versions available

Mobile cognition: balancing user support and learning

Year: 2015

Authors: M Raubal

People engage in mobile decision-making on a daily basis. Spatially aware mobile devices have the potential to support users in spatio-temporal decision situations by augmenting their cognitive abilities or compensating for their deficiencies. In many cases though, this technology has a negative impact on people's spatial learning of the environment, such as during wayfinding. In this position paper we argue that mobile cognition must strive for solutions that find the right balance between immediate goals and longer-term objectives such as spatial learning.

Eye Tracking Glasses
Software

2 versions available

Postural sway and gaze can track the complex motion of a visual target

Year: 2015

Authors: V Hatzitaki, N Stergiou, G Sofianidis, A Kyvelidou

Variability is an inherent and important feature of human movement. This variability has form, exhibiting a chaotic structure. Visual feedback training using regular, predictable visual target motions does not take this essential characteristic of human movement into account, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the anterior-posterior (AP) and medio-lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown) and a periodic (sine) moving target while receiving online visual feedback about their performance. Postural sway, gaze, and target motion were synchronously recorded, and the degree of force-target and gaze-target coupling was quantified using spectral coherence and cross-approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking of a complex signal may provide a better stimulus to improve visuo-motor adaptation and learning in postural control.
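
Spectral coherence, one of the two coupling measures used, can be computed with standard signal-processing tools. The sketch below quantifies coupling between a synthetic sway signal and a sinusoidal target; the sampling rate, frequencies, and noise level are invented for the example.

```python
import numpy as np
from scipy import signal

fs = 100.0                    # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # 60 s of synthetic data

# Hypothetical signals: a 0.25 Hz sinusoidal target and a noisy
# "postural sway" response that partially tracks it.
rng = np.random.default_rng(0)
target = np.sin(2 * np.pi * 0.25 * t)
sway = (0.8 * np.sin(2 * np.pi * 0.25 * t + 0.4)
        + 0.5 * rng.standard_normal(t.size))

# Magnitude-squared coherence between sway and target.
f, cxy = signal.coherence(sway, target, fs=fs, nperseg=1024)
print(f"coherence near 0.25 Hz: {cxy[np.argmin(np.abs(f - 0.25))]:.2f}")
```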

Eye Tracking Glasses
Simulator

19 versions available

Prediction of take-over time in highly automated driving by two psychometric tests

Year: 2015

Authors: M Körber, T Weißgerber, L Kalb, C Blaschke, M Farid

In this study, we investigated whether the driver's ability to take over vehicle control while engaged in a secondary task (Surrogate Reference Task) can be predicted by a subject's multitasking ability and reaction time. 23 participants performed a multitasking test and a simple response task, then drove highly automated on a highway for about 38 min and encountered five take-over situations. Data analysis revealed significant correlations between multitasking performance and take-over time as well as gaze distributions for Situations 1 and 2, even when reaction time was controlled for. This correlation diminished beginning with Situation 3, but a stable difference between the worst multitaskers and the best multitaskers persisted. Reaction time was not a significant predictor in any situation. The results can be seen as evidence for stable individual differences in dual-task situations regarding automated driving, but they also highlight effects associated with the experience of a take-over situation.
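
A correlation "even when reaction time was controlled for" is a partial correlation. One standard way to compute it, correlating the residuals after regressing both variables on the covariate, is sketched below with made-up data; the coefficients are not the study's.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y controlling for z, via regression residuals."""
    def residuals(v):
        A = np.column_stack([np.ones_like(z), z])  # intercept + covariate
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Made-up data for 23 "participants": multitasking score, reaction
# time (ms) and a take-over time that depends on both.
rng = np.random.default_rng(1)
reaction = rng.normal(300, 30, 23)
multitask = rng.normal(50, 10, 23)
takeover = 2000 - 8 * multitask + 2 * reaction + rng.normal(0, 50, 23)
print(f"{partial_corr(multitask, takeover, reaction):+.2f}")  # strongly negative
```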

Eye Tracking Glasses
Simulator

16 versions available