Publication Hub Archive

Ergoneers VTK (Vehicle Testing Kit)

You have reached the Ergoneers Publication Hub for the Ergoneers VTK (Vehicle Testing Kit).


Total results: 145

The Effect of Quiet Eye Training with Self-Control and Variable-Constant Organization on Learning and Performance of Badminton Backhand Low Service in Student …

Year: 2018

Authors: N Parvizi, M Shahbazi, S Tahmasebi

Introduction: Quiet eye has been introduced as a period of extended gaze fixation in many targeting tasks. The aim of this study was to investigate the effect of quiet eye training with self-control and constant-variable body organization on the performance and learning of the badminton low backhand serve in novice female students. Materials and Methods: In this quasi-experimental study, 19 female physical education students at the University of Tehran, Iran, were selected using a convenience sampling method and randomly divided into two groups: self-control (n = 9) and constant-variable (n = 10). The task was the badminton backhand low serve. The accuracy of the backhand low serve was measured with the standard French short serve test, and visual data were recorded with an Ergoneers eye tracking system. The day after the pretest, participants took part in 3 acquisition sessions (each with 8 blocks of 15 trials, a total of 360 trials), and 48 hours after the acquisition phase, the retention and transfer tests were administered. The data were analyzed using a 2 × 4 mixed ANOVA for performance and a 2 × 3 mixed ANOVA for quiet eye duration, at a significance level of P ≤ 0.050. Results: Quiet eye duration showed a significant increase in both groups from pretest to retention (P ≤ 0.001), so both methods of practice had a positive effect on quiet eye duration. However, in accuracy performance there was no significant difference between the groups (P = 0.374) or within the groups (P = 0.890). Conclusion: It seems that constant-variable and self-control practice organizations have similar effects on accuracy and quiet eye duration. (See the analysis sketch after this entry.)

Eye Tracking Glasses
Software

2 versions available
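
The analysis described in the abstract above pairs a 2 × 4 mixed ANOVA for serve accuracy with a 2 × 3 mixed ANOVA for quiet eye duration. As a minimal sketch, not the authors' analysis script, the following shows how such a mixed ANOVA could be run in Python with pingouin; the column names and the synthetic data are hypothetical stand-ins for the study's long-format accuracy table.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data standing in for the study's accuracy scores.
# Column names (subject, group, phase, accuracy) are hypothetical.
rng = np.random.default_rng(1)
subjects = [f"s{i:02d}" for i in range(19)]
groups = ["self-control"] * 9 + ["constant-variable"] * 10
phases = ["pretest", "acquisition", "retention", "transfer"]

rows = [
    {"subject": s, "group": g, "phase": p, "accuracy": rng.normal(10, 2)}
    for s, g in zip(subjects, groups)
    for p in phases
]
df = pd.DataFrame(rows)

# 2 (group, between) x 4 (phase, within) mixed ANOVA on serve accuracy
aov = pg.mixed_anova(data=df, dv="accuracy", within="phase",
                     between="group", subject="subject")
print(aov.round(3))
```

The returned table lists F, p, and effect-size columns for the between factor, the within factor, and their interaction; the 2 × 3 analysis of quiet eye duration would follow the same pattern with three phases.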

The effects of distraction on anticipatory driving

Year: 2018

Authors: D He, B Donmez

The anticipation of future events in traffic can allow potential gains in recognition and response times. Anticipatory actions (i.e., actions in preparation for a potential upcoming conflict) have been found to be more prevalent among experienced drivers in a driving simulator study where driving was the sole task. The influence of secondary tasks on anticipatory driving has not yet been investigated, despite the prevalence and negative effects of distraction widely documented in the literature. To address this gap, a driving simulator experiment was conducted with 16 experienced and 16 novice drivers, with half of the participants provided with a self-paced visual-manual secondary task. More anticipatory actions were observed among experienced drivers in general compared to novices; experienced drivers also exhibited more efficient visual scanning behaviors. Secondary task engagement reduced anticipatory actions for both experienced and novice drivers.

Eye Tracking Glasses
Simulator

6 versions available

Using visual cues to leverage the use of speech input in the vehicle

Year: 2018

Authors: F Roider, S Rümelin, T Gross

Touch and speech input often exist side by side in multimodal systems. Speech input has a number of advantages over touch, which are especially relevant in safety-critical environments such as driving. However, information on large screens tempts drivers to use touch input for interaction. They lack an effective trigger that reminds them that speech input might be the better choice. This work investigates the efficacy of visual cues to leverage the use of speech input while driving. We conducted a driving simulator experiment with 45 participants that examined the influence of visual cues, task type, driving scenario, and audio signals on the driver's choice of modality, glance behavior, and subjective ratings. The results indicate that visual cues can effectively promote speech input without increasing visual distraction or restricting the driver's freedom to choose. We propose that our results can also be applied to other domains such as smartphones or smart home applications.

Simulator
Software

4 versions available

Varieties of interaction: from User Experience to Neuroergonomics: On the occasion of the Human Factors and Ergonomics Society Europe Chapter Annual …

Year: 2018

Authors: D de Waard, F Di Nocera, D Coelho, J Edworthy

In everyday road traffic, communication between road users plays an important role, especially in traffic situations where cooperation is necessary. In order to ensure successful future communication between human road users and autonomous vehicles, the communication between human road users must be better understood and modeled for automated traffic. A relevant parameter in the analysis of cooperative scenarios is gaze behaviour. In contrast to, for example, mental workload, no specific parameters have been identified so far for analyzing cooperative scenarios. Two experiments were conducted at a traffic training centre to investigate cooperative behaviour: a narrow-passage scenario (N = 21) and a specific T-junction scenario with three road users (N = 20). In both experiments, the subjects were confronted with offensive or defensive approaching behaviours and their decision-making behaviour was investigated. The aim of the analysis was to identify relevant gaze parameters for cooperative scenarios. The results show that different parameters become relevant for different scenarios: in a complex scenario, saccadic parameters are more important than fixation parameters, whereas fixation metrics show higher importance in simple scenarios.

Eye Tracking Glasses
Simulator

3 versions available

Visual attention failures during turns at intersections: An on-road study

Year: 2018

Authors: NE Kaya, S Ayas, CT Ponnambalam, B Donmez

Crash data indicate that misallocation of attention is a major source of vehicle crashes with vulnerable road users (pedestrians and cyclists) at intersections. Video recordings from outside and inside the vehicle indicate that drivers allocate their attention based on their expectations, but the extent to which drivers fail to scan for vulnerable road users at intersections is not known. In this paper, we examine failures to check for vulnerable road users during right turns at intersections. Eye-tracking data were analyzed from 19 drivers between the ages of 35 and 54 who participated in an on-road instrumented vehicle study conducted in downtown Toronto. Each participant made two right turns from a major arterial road. In addition to attention allocation failures, we assessed whether the objective data were correlated with experience driving in the area as well as with drivers' subjective responses about their intersection-related errors collected through the Driver Behaviour Questionnaire (DBQ). Eleven of the 19 participants had a failure in at least one of the intersections; all failures related to checking for cyclists. At a marginally significant level, attentional failures were more likely for those who drove more frequently in downtown Toronto and for those who had larger error scores on the intersection-related questions of the DBQ. The prevalence of attentional failures observed is alarming, especially given that our participants represented the lowest crash-risk age group. It appeared that drivers less familiar with an area were more cautious when negotiating an intersection. Additionally, drivers appeared to be aware of their intersection-related errors, as indicated by their DBQ responses. Further research with an increased sample size and a variety of intersections is needed to generalize these findings.

Eye Tracking Glasses
Simulator

2 versions available

A comparative evaluation of in-vehicle side view displays layouts in critical lane changing situation

Year: 2017

Authors: D Beck, M Lee, W Park

This study conducted a driving simulator experiment to comparatively evaluate three in-vehicle side view display layouts for camera monitor systems (CMS) against the traditional side view mirror arrangement. The three layouts placed the two electronic side view displays near the traditional mirror positions, on the dashboard at each side of the steering wheel, and on the centre fascia with the two displays joined side by side, respectively. Twenty-two participants performed a time- and safety-critical driving task that required rapidly gaining situation awareness through the side view displays/mirrors and making a lane change to avoid a collision. The dependent variables were eyes-off-the-road time, response time, and ratings of perceived workload, preference, and perceived safety. Overall, the layout placing the side view displays on the dashboard at each side of the steering wheel was found to be the best. The results indicated that reducing eye gaze travel distance and maintaining compatibility were both important for the design of CMS display layouts.

Eye Tracking Glasses
Simulator

3 versions available

Assessing high cognitive load in drivers through electroencephalography

Year: 2017

Authors: D He, CC Liu, B Donmez

This paper explores the influence of high cognitive load on drivers' electroencephalography (EEG) signals collected from four positions (TP9, Fp1, Fp2, TP10), along with other physiological signals, eye tracking, driving performance, and subjective measures. Although EEG has been used in driving research to assess mental workload, only a few studies have focused on high cognitive load, and those utilized research-grade EEG systems. Recent advancements allow less intrusive and more affordable systems to be incorporated into vehicles. We tested the feasibility of one such system to differentiate three incremental levels of cognitive task load in a preliminary simulator study, which so far has been completed by 15 participants. Each participant completed a baseline drive with no secondary task and two drives with a modified version of the n-back task (1-back, 2-back). The modification removed the verbal response during auditory stimulus presentation to increase EEG signal quality, with the 2-back level still imposing higher cognitive demand than the 1-back. The system tested was sensitive to task-load levels: the alpha band was sensitive across all difficulty levels, the beta and gamma bands distinguished the 2-back level from the baseline and 1-back, and the delta band distinguished the baseline from the n-back levels. In line with previous studies, galvanic skin response and the standard deviation of gaze position also showed significant stepwise trends from the baseline to 1-back and then to 2-back. Further research is needed to investigate the ability of consumer-grade EEG headbands to differentiate different driver states. (See the band-power sketch after this entry.)

Eye Tracking Glasses
Simulator

3 versions available
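
The EEG results above are reported per frequency band (delta, alpha, beta, gamma). As a minimal sketch under stated assumptions, not the study's processing pipeline, the following shows one common way to compute absolute band power for a single EEG channel with Welch's method in Python; the sampling rate, band edges, and synthetic input are illustrative only.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Hz, assumed sampling rate (not taken from the study)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Return absolute power in each frequency band for one EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 2-second windows
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    return powers

# Synthetic data standing in for a TP9/Fp1/Fp2/TP10 channel (60 s of noise)
rng = np.random.default_rng(0)
print(band_powers(rng.standard_normal(FS * 60)))
```

Relative band power (each band divided by total power) is a common normalization when comparing task-load levels across participants.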

Data Collection Report

Year: 2017

Authors: CC Liu

Driver distraction from secondary in-vehicle activities is recognized as a significant source of injuries and fatalities on the road. Cognitive workload, one main source of driver distraction, is vital to understanding driver state in partially automated cars. The eDREAM project, conducted from May 2015 to November 2016, was initiated to develop an advanced driver monitoring system that utilizes advanced sensory and vision technologies to improve driving experience and safety. Vehicle-based, physiological, and video-based measures were collected in order to discover the various impacts of cognitive load. These measures were collected from a total of 36 gender-balanced participants in a driving simulator under three incremental cognitive task-load conditions. The NASA-TLX questionnaire, which rates various demands and efforts, was used to collect participants' perceived cognitive workload after each drive with a different task load. This document focuses on the process of experiment design and implementation; future sections on the resulting dataset and analysis results will be added.

Eye Tracking Glasses
Simulator

1 version available

Division of area of fixation interest for real vehicle driving tests

Year: 2017

Authors: Q Xu, T Guo, F Shao, X Jiang

The area of interest (AOI) reflects the degree of attention of a driver while driving. The division of AOIs is a visual characteristic analysis required in both real vehicle tests and simulated driving scenarios. Some key eye tracking parameters and their transformations can only be obtained after the AOIs have been divided. In this study, 9 experienced and 7 novice drivers participated in real vehicle driving tests. They were asked to drive along a freeway section and a highway section while wearing the Dikablis eye tracking device. On average, 8132 fixation points were extracted for each driver. After coordinate conversion, the MSAP (Mean Shift Affinity Propagation) method is proposed to classify the distribution of fixation points into a circle type and a rectangle type. Experienced drivers' fixation behavior falls into the circle type, in which fixation points are concentrated; novice drivers' fixation points, which are decentralized, fall into the rectangle type. In the clustering algorithm, the damping coefficient λ determines the algorithm's convergence, and the deviation parameter p mainly affects the number of clusters, where larger p values generate more clusters. This study provides not only the cluster type and cluster counts, but also the borderlines for each cluster. The findings make a significant contribution to eye tracking research. (See the clustering sketch after this entry.)

Eye Tracking Glasses
Simulator

9 versions available
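
The MSAP method described above combines mean shift with affinity propagation, whose damping coefficient λ and deviation (preference) parameter p govern convergence and the number of clusters. As a minimal sketch, not the authors' MSAP implementation, the following uses scikit-learn's standard affinity propagation on hypothetical fixation coordinates to illustrate the role of those two parameters.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical (x, y) fixation coordinates after coordinate conversion:
# two synthetic blobs stand in for real Dikablis fixation data.
rng = np.random.default_rng(0)
fixations = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=5.0, size=(200, 2)),    # e.g. road-ahead area
    rng.normal(loc=(40.0, -10.0), scale=5.0, size=(80, 2)),  # e.g. mirror area
])

ap = AffinityPropagation(
    damping=0.7,       # plays the role of the damping coefficient lambda
    preference=-500.0, # plays the role of the deviation parameter p
    random_state=0,
)
labels = ap.fit_predict(fixations)

print(f"{len(ap.cluster_centers_)} clusters found")
for k, center in enumerate(ap.cluster_centers_):
    print(f"cluster {k}: center={np.round(center, 1)}, size={np.sum(labels == k)}")
```

Raising the preference toward zero produces more and smaller clusters, which matches the abstract's observation that larger p values generate more clusters.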

Driven to distraction? A review of speech technologies in the automobile

Year: 2017

Authors: R Young, J Zhang

Speech technologies hold the promise of improving driver performance on many visual-manual secondary tasks by enabling eyes-free and hands-free interactions. Unfortunately, speech interfaces have enjoyed only incremental growth in the automotive industry since the early 2000s. Instead, mixed-mode interfaces (speech combined with visual) have become increasingly common, and visual-manual interfaces are still dominant. This paper provides a historical overview of speech driver interface studies, including formal testing on a 2014 Toyota Corolla production vehicle and a new analytical evaluation of the Apple CarPlay interface in the 2016 Cadillac ATS. Results indicate that eyes-free and hands-free speech (i.e., "pure" speech) improves driver performance vs. mixed-mode interfaces. Also, mixed-mode improves driver performance vs. "pure" visual-manual, for the tasks tested. The visual component of the mixed-mode and visual-manual interfaces increases off-road glances, a safety decrement. We recommend that future in-vehicle speech interface products sensibly limit visual displays from showing information that is redundant with the speech interface. In general, we recommend pure speech driver-vehicle interfaces for secondary tasks wherever possible.

Simulator
Software

2 versions available