Hazard Perception Test via Driving Simulator
The Hazard Perception Test (HPT) is a longstanding approach used in many countries to assess individuals’ competency before they obtain a driving license. In Malaysia, however, the HPT is not yet part of the national licensing system. Previous hazard perception studies using Malaysian samples reported mixed findings on the effectiveness of reaction time-based HPT (e.g. Lim, Sheppard & Crundall, 2013; Ab Rashid & Ibrahim, 2017). Unlike these studies, which adopted computer-based HPT, the current study employed a full-size cabin driving simulator to study hazard perception in two groups of drivers: novice and experienced. Results from 28 drivers (15 novices, 13 experienced) indicated that novice drivers detected hazards faster than their experienced counterparts, even though both groups performed equally in hazard recognition. Correlational analysis revealed that driving frequency might be a factor contributing to the difference in response time between the two groups. Further analysis also indicated that different road environments produce different hazard perception performance. It is recommended that a hazard perception test be made part of the national licensing system and that the potential of the driving simulator as an HPT tool be explored further.
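As a purely illustrative sketch of the kind of analysis this abstract describes (a between-groups comparison of response times plus a correlation with driving frequency), the snippet below uses synthetic data; apart from the group sizes, every value and variable name is hypothetical:

```python
# Minimal sketch of the group comparison and correlation described above.
# All data are synthetic; only the group sizes (15 novices, 13 experienced)
# follow the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
novice_rt = rng.normal(2.1, 0.4, 15)        # hazard response times (s), novices
experienced_rt = rng.normal(2.5, 0.4, 13)   # hazard response times (s), experienced

# Do the two groups differ in mean response time?
t_stat, p_value = stats.ttest_ind(novice_rt, experienced_rt, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

# Does driving frequency relate to response time across all 28 drivers?
driving_freq = rng.integers(1, 7, 28)       # e.g. days driven per week
all_rt = np.concatenate([novice_rt, experienced_rt])
r, p = stats.pearsonr(driving_freq, all_rt)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```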
New motion cueing algorithm for improved evaluation of vehicle dynamics on a driving simulator
In recent years, driving simulators have become a valuable tool in the automotive design and testing process. Yet, in the field of vehicle dynamics, most decisions are still based on test drives in real cars. One reason for this situation is that many driving simulators do not allow the driver to evaluate the handling qualities of a simulated vehicle. In a driving simulator, the motion cueing algorithm tries to represent the vehicle motion within the constrained motion envelope of the motion platform. By nature, this process leads to so-called false cues, where the platform motion is out of phase with, or moves in a different direction from, the vehicle motion. In a driving simulator with classical filter-based motion cueing, false cues make it considerably more difficult for the driver to rate vehicle dynamics. A team with members from the University of Stuttgart, Cruden B.V., and AUDI AG developed a new motion cueing methodology for use in a driving simulator dedicated to vehicle dynamics. The new algorithm is a track-based approach that makes use of the vehicle’s position on the track and does not use high-pass filters. It therefore allows false cues to be minimized, giving the driver the best possible information on the handling qualities of the car. In this paper, the basic principles of the algorithm are described, as well as its implementation in a driving simulator. Comparison with data from a handling track shows the advantages of the new methodology over the classical motion cueing approach.
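For context, the sketch below illustrates the classical filter-based cueing that the paper argues against, not the authors’ track-based algorithm: scaled vehicle acceleration is high-pass filtered so the platform "washes out" to neutral, which is exactly where sustained cues are lost and false cues can arise. Cutoff, scaling, and the input signal are illustrative only:

```python
# Classical washout-style motion cueing: scale, then high-pass filter.
import numpy as np

def washout_highpass(accel, dt, cutoff_hz=0.3, scale=0.5):
    """First-order high-pass filter applied to scaled vehicle acceleration."""
    alpha = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz * dt)
    out = np.zeros_like(accel)
    for k in range(1, len(accel)):
        # y[k] = alpha * (y[k-1] + u[k] - u[k-1]) with u = scale * accel
        out[k] = alpha * (out[k - 1] + scale * (accel[k] - accel[k - 1]))
    return out

dt = 0.01                                   # 100 Hz simulation step
t = np.arange(0, 10, dt)
accel = np.where(t >= 2.0, 3.0, 0.0)        # step input: sustained 3 m/s^2
platform_accel = washout_highpass(accel, dt)
# The onset is cued, but the sustained part decays toward zero:
print(platform_accel.max(), platform_accel[-1])
```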
Online recognition of driver-activity based on visual scanpath classification
The next step towards the fully automated vehicle is the level of conditional automation, where the automated driving system can take over control and responsibility for a limited time interval. Nevertheless, take-over situations may occur, forcing the driver to resume the driving task. Since the driver is permitted to perform secondary tasks during conditionally automated driving, a low take-over quality must be expected in such situations. Methods for Driver-Activity Recognition (DAR) usually extract features for classification within a moving time window. In this paper, the first DAR architecture based on the driver's scanpath, which is extracted by means of dynamic clustering and symbolic aggregate approximation patterns, is introduced. To demonstrate the potential of this approach, it is compared to a state-of-the-art method using data from a driving simulator study with 82 subjects. The classification performance of both DAR approaches was examined for decreasing window sizes with regard to the recognition of different secondary tasks and the separability of drivers using a handheld or hands-free device. Compared to the state-of-the-art approach, the proposed method shows a classification accuracy increase of nearly 20%, a significant improvement of the overall classification performance, and is able to classify the driver's secondary tasks even for short windows of 5 s duration, i.e. with little information.
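The abstract names symbolic aggregate approximation (SAX) as one building block; the sketch below shows standard SAX encoding of a gaze signal into a symbol word (z-normalization, piecewise aggregate approximation, Gaussian breakpoints). The paper's exact feature pipeline is not given in the abstract, and window length, alphabet size, and the input signal here are illustrative:

```python
# Standard SAX encoding of a 1-D signal into a symbol word.
import numpy as np

def sax(signal, n_segments=8, breakpoints=(-0.67, 0.0, 0.67)):
    """SAX word for one window (alphabet size = len(breakpoints) + 1)."""
    x = (signal - signal.mean()) / (signal.std() + 1e-12)  # z-normalize
    segments = np.array_split(x, n_segments)               # PAA segments
    paa = np.array([seg.mean() for seg in segments])
    symbols = np.searchsorted(breakpoints, paa)            # symbol indices
    return "".join(chr(ord("a") + int(s)) for s in symbols)

# Example: horizontal gaze coordinate over a 5 s window at 50 Hz (synthetic).
rng = np.random.default_rng(1)
gaze_x = np.cumsum(rng.normal(0, 1, 250))
print(sax(gaze_x))  # a short word that a downstream classifier can consume
```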
Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2016 Annual Conference: Human Factors and User Needs in Transport, Control …
This experiment studies the impact of the sonification of a hand gesture controlled system on driver behavior. The principle is to give the driver auditory feedback, in addition to a visual screen, to assist the manipulation of in-car device interfaces. A touchless interface was tested with a panel of 24 subjects on a driving simulator. Different tasks (pick up the phone, select an item in a list) involving the screen and the interface had to be performed by the user while driving. To study the contribution of the sound feedback to the drivers’ behavior, two audio conditions were tested: with and without auditory feedback. The tasks were performed under low- and high-demand traffic conditions. Driving and gaze behavior as well as eye-tracking information were analyzed. Moreover, a questionnaire was used to obtain subjective measurements, such as ease of use and feeling of safety. The results show that the sonification helped drivers feel safer and more focused on the road. This result was confirmed by gaze analysis, which shows that drivers look at the visual interface significantly less when a sound is present, leading to a safer use of the interface.
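As a rough, hypothetical sketch of the gaze analysis this abstract reports, the snippet below computes per-subject dwell time on the interface's area of interest (AOI) and compares the two audio conditions within subjects; the AOI bounds, sampling rate, and all data are invented:

```python
# Dwell time on a rectangular AOI, compared across audio conditions.
import numpy as np
from scipy import stats

def dwell_time(gaze_xy, aoi, dt):
    """Sum of sample durations whose gaze point falls inside a rectangular AOI."""
    x0, y0, x1, y1 = aoi
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    return inside.sum() * dt

rng = np.random.default_rng(2)
dt = 1 / 60                                  # 60 Hz eye tracker (assumed)
aoi = (0.6, 0.7, 0.9, 1.0)                   # screen region of the interface
# Per-subject dwell times in both conditions (synthetic, 24 subjects):
no_sound = np.array([dwell_time(rng.random((600, 2)), aoi, dt) for _ in range(24)])
with_sound = no_sound - rng.normal(0.3, 0.1, 24)  # hypothesized reduction
t, p = stats.ttest_rel(no_sound, with_sound)      # within-subject comparison
print(f"paired t = {t:.2f}, p = {p:.3f}")
```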
Quantitative Usability Testing in User-Centered Product Development with Mobile Eye Tracking
The usability assessment of tangible products holds manifold challenges in identifying the sources of usability problems and quantifying the evaluation of user-product interactions. In this field, modern mobile eye tracking systems show great potential, as they are capable of non-invasively capturing the user’s field of vision, including the gaze point, in almost any real-world setting. With the eye movements captured, the eye tracking data offers insight into the user’s intentions and struggles. However, the prospects of mobile eye tracking for usability assessments of tangible products are not yet well studied, and there is a lack of methods supporting the analysis of the resulting eye tracking data. Therefore, the goal of this thesis is to evaluate mobile eye tracking in usability assessments of tangible products against the conventional third-person view, and to develop methodological support for the data analysis. A comparison study shows that the mobile eye tracking perspective leads to a more detailed description of the scene and a better explanation of the causes of usability problems than the third-person perspective. To facilitate the analysis of eye tracking data, three analysis methods have been developed. The Target-Based Analysis is a coding scheme for manual analysis, whereas the Scrutinizing algorithm and the Hand-Gaze Distance approach are semi-automated supports that detect interruptions of the usage flow, considering fixation durations, saccade amplitudes, and hand movements. The evaluation of the three methods shows that the Target-Based Analysis is applicable to a broad variety of applications but is time-consuming. Both the Scrutinizing algorithm and the Hand-Gaze Distance approach reduce the manual effort and identify usability problems, but they are less accurate. The evidence-based descriptions of the detected usability problems, derived from the mobile eye tracking data, are quickly understood and accepted, and foster a solution-oriented discussion when presented to others. Overall, the application of mobile eye tracking in usability testing is valuable, as it allows for a fine-granular and quantifiable evaluation of user-product interactions. The three developed methods benefit the analyst and enable the interpretation of eye tracking data in a more structured and partly automated way. In the future, with analysis methods developed further, mobile eye tracking is well suited to become an important element of usability assessments of tangible products.
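The abstract does not define the Hand-Gaze Distance approach in detail; as a hypothetical sketch of the underlying idea only, the snippet below computes the frame-wise distance between a tracked hand and the gaze point and flags spans where it exceeds a threshold, which could indicate that the gaze has left the manipulated part of the product:

```python
# Hypothetical hand-gaze distance flagging; not the thesis's actual method.
import numpy as np

def hand_gaze_flags(gaze_xy, hand_xy, threshold=0.15):
    """Mask of frames where gaze and hand are far apart (normalized coords)."""
    dist = np.linalg.norm(gaze_xy - hand_xy, axis=1)
    return dist > threshold

rng = np.random.default_rng(3)
gaze = rng.random((300, 2))   # synthetic scene-camera coordinates
hand = rng.random((300, 2))
flags = hand_gaze_flags(gaze, hand)
print(f"{flags.mean():.0%} of frames flagged for manual review")
```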
Ready for take-over? A new driver assistance system for an automated classification of driver take-over readiness
Recent studies analyzing driver behavior report that various factors may influence a driver's take-over readiness when resuming control after an automated driving section. However, little effort has been made to transfer and integrate these findings into an automated system that classifies the driver's take-over readiness and derives the expected take-over quality. This study introduces a new advanced driver assistance system to classify the driver's take-over readiness in conditionally automated driving scenarios. The proposed system works preemptively, i.e., the driver is warned in advance if a low take-over readiness is to be expected. The classification of the take-over readiness is based on three information sources: (i) the complexity of the traffic situation, (ii) the current secondary task of the driver, and (iii) the gazes at the road. An evaluation based on a driving simulator study with 81 subjects showed that the proposed system can detect the take-over readiness with an accuracy of 79%. Moreover, the impact of the character of the take-over intervention on the classification result is investigated. Finally, a proof of concept of the novel driver assistance system is provided, showing that more than half of the drivers with a low take-over readiness would be warned preemptively, with only a 13% false alarm rate.
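To make the three-source structure concrete, here is a deliberately simplified, hypothetical sketch of such a preemptive decision; the feature encoding, weights, and threshold are invented for illustration and are not the authors' classifier:

```python
# Hypothetical readiness score combining the three information sources.
from dataclasses import dataclass

@dataclass
class DriverState:
    traffic_complexity: float   # 0 (simple) .. 1 (complex)
    task_demand: float          # 0 (no secondary task) .. 1 (highly engaging)
    road_gaze_ratio: float      # fraction of recent time gazing at the road

def ready_for_takeover(s: DriverState, threshold: float = 0.5) -> bool:
    # Road gazes raise readiness; complexity and task demand lower it.
    score = (0.5 * s.road_gaze_ratio
             - 0.3 * s.task_demand
             - 0.2 * s.traffic_complexity
             + 0.5)
    return score >= threshold

state = DriverState(traffic_complexity=0.7, task_demand=0.9, road_gaze_ratio=0.1)
if not ready_for_takeover(state):
    print("warn driver preemptively")  # low take-over readiness expected
```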
Spatial perception of landmarks assessed by objective tracking of people and Space Syntax techniques
This paper focuses on space perception and how visual cues, such as landmarks, may influence the way people move in a given space. Our main goal with this research is to compare people’s movement in the real world with their movement in a replicated virtual world, and to study how landmarks influence their choices when deciding among different paths. The studied area was a university campus, and three spatial analysis techniques were used: space syntax; an analysis of a Real Environment (RE) experiment; and an analysis of a Virtual Reality (VR) environment replicating the real experiment. The outcome data were compared and analysed to find the similarities and differences between the observed motion flows in RE and VR, and between these flows and those predicted by space syntax analysis. We found a statistically significant positive correlation between the real and virtual experiments, considering the number of passages in each segment line and considering fixations and saccades at the identified landmarks (those with higher visual integration). A statistically significant positive correlation was also found between both RE and VR and the syntactic measures. The obtained data enabled us to conclude that: i) the level of visual importance of landmarks, given by visual integration, can be captured by eye tracking data; and ii) our virtual environment setup is able to simulate the real world when performing experiments on spatial perception.
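As a minimal illustration of the correlation analysis this abstract reports, the snippet below correlates passage counts per segment line in the RE and VR experiments with each other and with a space syntax measure such as visual integration; all counts and values are hypothetical:

```python
# Rank correlations between RE/VR passage counts and a syntactic measure.
import numpy as np
from scipy import stats

re_passages = np.array([42, 17, 8, 31, 25, 12, 5, 19])   # per segment line
vr_passages = np.array([39, 20, 6, 28, 27, 10, 7, 22])
visual_integration = np.array([1.8, 1.1, 0.6, 1.5, 1.4, 0.9, 0.5, 1.2])

for name, a, b in [("RE vs VR", re_passages, vr_passages),
                   ("RE vs integration", re_passages, visual_integration),
                   ("VR vs integration", vr_passages, visual_integration)]:
    rho, p = stats.spearmanr(a, b)
    print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")
```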