Publication Hub Archive

UX Analysis

You have reached the Ergoneers Publication Hub for:

Field of Application > UX Analysis

Find all Publications here:

Publication Hub

Total results: 588

Real-Time Driver’s Focus of Attention Extraction and Prediction using Deep Learning

Year: 2021

Authors: H Pei

Driving is one of the most common activities in modern life: every day, millions of people drive to and from school or work. Although the activity seems simple and most people know how to drive, safe driving requires the driver's complete attention to the road and surrounding cars. However, most research has focused either on improving the configurations of active safety systems with high-cost components such as Lidar, night-vision cameras, and radar sensor arrays, or on finding the optimal way of fusing and interpreting sensor information, without considering the impact of drivers' continuous attention and focus. We observe that the effectiveness of safety technologies and systems is greatly affected by drivers' attention and focus. In this paper, we design, implement, and evaluate DFaep, a deep learning network that automatically examines, estimates, and predicts a driver's focus of attention in real time using dual low-cost dash cameras for driver-centric and car-centric views. From the raw streams captured during driving, we first detect the driver's face and eyes and generate augmented face images to extract facial features and enable real-time head-movement tracking. We then parse the driver's attention behavior and gaze focus together with the road-scene data captured by a front-facing dash camera. Facial features, augmented face images, and gaze-focus data are then fed into our deep learning network to model drivers' driving and attention behavior. Experiments were conducted on the large DR(eye)VE dataset and on our own dataset under realistic driving conditions. The findings indicate that the distribution of driver attention and focus is highly skewed. Results show that DFaep can quickly detect and predict the driver's attention and focus, with an average prediction accuracy of 99.38%. This provides a basis and a feasible solution, with a computationally learned model, for capturing and understanding driver attention and focus to help avoid fatal collisions and reduce the probability of unsafe driving behavior in the future.
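The two-stream idea described in the abstract — driver-facing facial features fused with road-scene gaze-focus features, then classified into attention regions — can be illustrated with a minimal sketch. All names, dimensions, and the toy classifier below are hypothetical stand-ins, not the paper's actual DFaep network:

```python
import numpy as np

def fuse_features(facial_feats, gaze_feats):
    """Concatenate driver-facing facial features with road-scene gaze-focus
    features into a single input vector (dimensions hypothetical)."""
    return np.concatenate([facial_feats, gaze_feats])

def predict_focus(fused, weights, bias):
    """Toy stand-in for the deep network: a softmax over candidate
    attention regions (e.g. road ahead, mirrors, off-road)."""
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Hypothetical dimensions: 8 facial features, 4 gaze features, 3 regions.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 12))
b = np.zeros(3)
probs = predict_focus(fuse_features(rng.normal(size=8), rng.normal(size=4)), W, b)
```

The output is a probability distribution over attention regions; in the actual system a deep network replaces the single linear layer used here.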

2 versions available

Refining distraction potential testing guidelines by considering differences in glancing behavior

Year: 2021

Authors: H Grahn, T Taipalus

Driver distraction is a recognized cause of traffic accidents. Although the well-known guidelines for measuring the distraction caused by secondary in-car tasks were published by the United States National Highway Traffic Safety Administration (NHTSA) in 2013, studies have raised concerns about the accuracy of the method defined in the guidelines, criticizing it for basing the diversity of the driver sample on driver age alone and for inconsistent between-group results. In fact, it was recently discovered that the NHTSA driving simulator test is susceptible to rather fortuitous results when the participant sample is randomized. This suggests that the results of said test are highly dependent on the selected participants, rather than on the phenomenon being studied, for example, the effects of touch screen size on driver distraction. In an attempt to refine the current guidelines, we set out to study whether a previously proposed new testing method is as susceptible to the effects of participant randomization as the NHTSA method. This new testing method differs from the NHTSA method in two major respects. First, the new method considers occlusion distance (i.e., how far a driver can drive with their vision covered) rather than age, and second, the new method considers driving in a more complex and, arguably, more realistic environment than proposed in the NHTSA guidelines. Our results imply that the new method is less susceptible to sample randomization, and that occlusion distance appears to be a more robust criterion for driver sampling than driver age alone. Our results are applicable to further developing driver distraction guidelines and provide empirical evidence on the effect of individual differences in drivers' glancing behavior.

6 versions available

Relations between postural sway and cognitive workload during various gaze tasks in healthy young and old people

Year: 2021

Authors: M Roh, E Shin, S Lee

The purpose of this study was to determine whether postural control differs under various gaze tasks while standing in a wide or narrow stance between healthy young and old people, and to investigate whether postural sway and cognitive workload are affected by dual-task balance. Ten young and 10 healthy old people participated in this study. Each participant stood upright under four gaze conditions (fixation, saccade, pursuit, vestibulo-ocular reflex) and two stance conditions (wide and narrow stance) in a total of 16 trials. Postural sway was measured as the mean sway amplitude of the center of pressure in the medial-lateral and anterior-posterior directions. Cognitive workload was measured through pupil response as an index of cognitive activity (ICA), using an eye-tracking system and EyeWorks. The results showed that postural sway was significantly reduced when performing saccadic eye movements in both groups, whereas greater postural sway was evoked in the vestibulo-ocular reflex condition. In addition, although old people showed a significant increase in ICA compared to the young, there were no significant differences among the gaze conditions in old people. These results confirm that saccadic eye movements are the most beneficial for reducing postural sway regardless of aging, and also provide some insight that pupil response can serve as an indicator of cognitive workload in a dual-task balance context. These findings suggest that eye movement exercises may be an effective intervention to improve postural control, and that fall prevention programs incorporating eye movement exercises should be extended to individuals at risk of falling.
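The sway measure used above — mean sway amplitude of the center of pressure (CoP) per axis — can be sketched as follows. This uses one common operationalisation (mean absolute deviation from the mean CoP position); the paper may define the amplitude differently, and the synthetic trace is purely illustrative:

```python
import numpy as np

def mean_sway_amplitude(cop):
    """Mean sway amplitude of centre-of-pressure (CoP) samples:
    the average absolute deviation from the mean position, computed
    separately per axis. `cop` is an (n, 2) array of
    (medial-lateral, anterior-posterior) positions in mm."""
    cop = np.asarray(cop, dtype=float)
    deviations = np.abs(cop - cop.mean(axis=0))
    ml, ap = deviations.mean(axis=0)
    return ml, ap

# Synthetic CoP trace: small sinusoidal oscillation around a mean position,
# with larger sway in the anterior-posterior direction.
t = np.linspace(0, 10, 1000)
trace = np.column_stack([2.0 * np.sin(2 * np.pi * 0.5 * t),
                         3.0 * np.sin(2 * np.pi * 0.3 * t)])
ml_amp, ap_amp = mean_sway_amplitude(trace)
```

For a sinusoid of amplitude A, the mean absolute deviation is about 0.64 A, so the anterior-posterior amplitude comes out larger here, as constructed.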

11 versions available

Screen mirroring is not as easy as it seems: A closer look at older adults’ cross-device experience through touch gestures

Year: 2021

Authors: X Ouyang, J Zhou, H Xiang

Screen mirroring might be a way to improve older adults’ user experience of smart televisions (STVs) through smartphones. To examine this possibility, two experiments were conducted. Experiment I examined older adults’ difficulties with screen mirroring (mirroring smartphone screens to STVs) through five common touch gestures (“Drag,” “Slide,” “Zoom,” “Draw,” and “Handwrite”), in comparison to younger adults. The results indicated that a major problem for older adults is the frequent attention switching between the STV and smartphone screens. Therefore, Experiment II explored how to reduce the need for attention switching through the touch gestures (“Tap,” “Slide + Tap,” and “Slide + Release”) and the button sizes (8, 14, and 20 mm) for different input postures. Thirty older adults participated in this experiment and their eye movements were tracked. Four major findings were derived. First, the “Zoom,” “Draw,” and “Handwrite” gestures in screen mirroring were difficult for older adults, with task completion rates lower than 68%. Second, the problem of frequent attention switching between the STV and smartphone was predominant in tapping tasks. Third, the “Slide + Tap” and “Slide + Release” touch gestures reduced attention switching in tapping tasks more than the “Tap” gesture for older adults, while “Slide + Release” received the worst subjective feedback. Fourth, increasing the button size from 8 mm to 14 mm on smartphones can improve the task completion rate and task efficiency in screen mirroring when older adults use a one-handed posture to tap.

1 version available

The Comparison of Environmental Constraints Changes on Quiet Eye Factors during Performance Skill of Throw Targeting

Year: 2021

Authors: A Amini, S Tahmasebi Boroujeni

Introduction: The purpose of this study was to compare the effects of changes in environmental constraints on quiet eye factors during a targeted throwing skill among athletes at three different skill levels. Materials and Methods: Thirty athletes (22-28 years) were selected at three levels: elite, expert, and novice. The interaction of regulatory conditions (stationary/in motion) and intertrial variability (present/absent) created four target conditions, and gaze-behavior characteristics were recorded continuously in each scenario. Gaze behavior was recorded using an eye-tracking device and analyzed with an information processing system. Results: Elite athletes had significantly earlier quiet eye onsets and longer quiet eye periods than expert and novice athletes. Conclusion: The results of the present study emphasize the importance of quiet eye characteristics for the successful performance of athletes, with earlier onset and longer duration of the quiet eye, particularly when task-related information is presented in different shapes and regions across a wide field of vision.
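The quiet eye is commonly defined as the final fixation on the target that begins before movement initiation; its onset time and duration are the two factors compared above. A minimal sketch of that extraction, with hypothetical data structures (the study's actual processing pipeline is not described in this abstract):

```python
def quiet_eye(fixations, movement_onset, target="target"):
    """Return (onset, duration) of the quiet eye: the last fixation on the
    target that begins before movement initiation. `fixations` is a list of
    (start_ms, end_ms, location) tuples in chronological order. Returns
    None if no qualifying fixation exists."""
    qe = None
    for start, end, location in fixations:
        if location == target and start < movement_onset:
            qe = (start, end)
    if qe is None:
        return None
    onset, end = qe
    return onset, end - onset

# Hypothetical gaze record for one throw, times in milliseconds.
fixes = [(0, 120, "background"),
         (120, 300, "target"),
         (320, 700, "target"),
         (750, 900, "background")]
result = quiet_eye(fixes, movement_onset=600)
```

Here the last target fixation starting before the 600 ms movement onset begins at 320 ms, so an earlier onset or a longer duration of this fixation would indicate a "better" quiet eye in the sense the study measures.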

9 versions available

The design and integration of a comprehensive measurement system to assess trust in automated driving

Year: 2021

Authors: A Madison, A Arestides, S Harold

With the increased availability of commercially available automated vehicles, trust in automation may play a critical role in overall system safety, rate of adoption, and user satisfaction. We developed and integrated a novel measurement system to better calibrate human-vehicle trust in driving. The system was designed to collect a comprehensive set of measures based on a validated model of trust focusing on three types: dispositional, learned, and situational. Our system was integrated into a Tesla Model X to assess different automated functions and their effects on trust and performance in real-world driving (e.g., lane changes, parking, and turns). The measurement system collects behavioral, physiological (eye and head movements), and self-report measures of trust using validated instruments. A vehicle telemetry system (Ergoneers Vehicle Testing Kit) uses a suite of sensors to capture real driving performance data. This off-the-shelf solution is coupled with a custom mobile application for recording driver behaviors, such as engaging/disengaging automation, during on-road driving. Our initial usability evaluations of components of the system revealed that it is easy to use and that events can be logged quickly and accurately. Our system is thus viable for data collection and can be used to model user trust behaviors in realistic on-road conditions.

2 versions available

The driving experience lab: simulating the automotive future in a trailer

Year: 2021

Authors: C Schartmüller, A Riener

Driving simulators are typically used to evaluate next-generation automotive user interfaces in user studies, as they offer a replicable driving setting that also allows studying safety-critical and/or future systems. However, this AutomotiveUI experience research is often limited to university or company campuses and their students and staff. To address this, we introduced a mobile driving simulator lab in a car trailer. We present the features as well as the limitations of this lab, report on experiences after the first days of operation, and discuss further use cases beyond research. During 7 days of user studies with the trailer at a national garden festival, we conducted trials with more than 70 participants from diverse backgrounds. However, executing studies at public events also has its limitations, e.g., constraints on acceptable trial duration and the potential for biased responses.

3 versions available

Triangulated Investigation of Trust in Automated Driving: Challenges and Solution Approaches for Data Integration

Year: 2021

Authors: TE Kalayci, EG Kalayci, G Lechner, N Neuhuber

In automated driving, an appropriate level of driver trust is essential to improve safety and ensure zero fatalities. Drivers must have a sufficient level of trust to intervene correctly in safety-critical situations: very low trust may lead either to continuous and excessive monitoring of the functions, which reduces the attention paid to the environment, or to switching these functions off, whereas extreme trust in automated driving functions can result in dangerous driving situations because the environment is insufficiently monitored or not monitored at all. A deeper understanding of trust in automated driving is challenging and requires a triangulated study in which driver-type, vehicle-usage, and environmental data are combined. However, many previous studies were based on a rather limited set of data sources, often relying on qualitative means such as pre- and post-interviews or trust questionnaires to evaluate trust in autonomous driving functions. Although data gathered through empirical research, such as quantitative surveys or qualitative interviews, are simple to store and analyze, the collection and integration of vehicle and sensor data from different sources usually pose significant technical challenges in practice. Hence, a suitable data collection and integration strategy is required to address these challenges. In this context, we propose a general framework for collecting and integrating data from different sources with diverse capabilities and requirements to determine a driver’s trust in automated driving. Our proposed framework facilitates the integration of empirical and measurement data, allowing a triangulated investigation to provide a road map for the automotive industry.

1 version available

Urgent Cues While Driving: S3D Take-over Requests

Year: 2021

Authors: F Weidner

In SAE level 3 automated vehicles, the human driver still needs to be ready to take over control in case the vehicle encounters a situation outside its operational design domain. This chapter outlines a case study where smart stereoscopic 3D icons act as a take-over notification.

1 version available

Use of Pupil Area and Fixation Maps to Evaluate Visual Behavior of Drivers inside Tunnels at Different Luminance Levels—A Pilot Study

Year: 2021

Authors: L Qin, QL Cao, AS Leon, YN Weng, XH Shi

This study reports the results of a pilot study on the spatiotemporal characteristics of drivers’ visual behavior while driving at three different luminance levels in a tunnel. The study was carried out in a relatively long tunnel during the daytime. Six experienced drivers were recruited to participate in the driving experiment. Experimental data on pupil area and fixation point position (in the tunnel’s interior zone: 1566 m long) were collected by non-intrusive eye-tracking equipment at three luminance levels (2 cd/m², 2.5 cd/m², and 3 cd/m²). Fixation maps (color-coded maps presenting distributed data) were created from the fixation point position data to quantify changes in visual behavior. The results demonstrated that luminance levels had a significant effect on pupil areas and fixation zones. Fixation area and average pupil area had a significant negative correlation with luminance level during the daytime. In addition, drivers concentrated more on the road pavement ahead, the top wall surface, and the cars’ steering wheels. The results revealed that pupil area had a linear relationship with luminance level. The limitations of this research are noted, and future research directions are outlined.
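The reported linear relationship between pupil area and luminance (with a negative correlation) amounts to an ordinary least-squares fit across the tested luminance levels. A sketch with hypothetical pupil-area values (the study's actual measurements are not given in this abstract):

```python
import numpy as np

# Tested luminance levels (cd/m^2) and hypothetical mean pupil areas (mm^2);
# the decreasing values mirror the reported negative correlation.
luminance = np.array([2.0, 2.5, 3.0])
pupil_area = np.array([30.1, 27.4, 24.9])

# Ordinary least-squares fit: pupil_area ~ slope * luminance + intercept.
slope, intercept = np.polyfit(luminance, pupil_area, 1)
```

A negative slope here corresponds to pupils constricting as tunnel luminance increases, which is the direction of the relationship the study reports.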

4 versions available