Objective measures of pilot workload
This project was an engineering feasibility study to determine the value of using physiologically based workload assessment technology in the USAF test and evaluation process at the Air Force Test Center, Edwards AFB. The feasibility study was requested by the 412th Test Wing, Air Force Test Center, Edwards AFB, California. The responsible test organization was the 418th Flight Test Squadron at Edwards AFB. Testing was conducted by the 418th Flight Test Squadron with the Operator Performance Laboratory (OPL) from the University of Iowa. The OPL provided, operated, and collected data with test equipment called the Cognitive Assessment Tool Set (CATS) kit. Testing was conducted from 28 August 2018 to 12 February 2019 and consisted of four sorties totaling 18.5 flight test hours. Testing was conducted by the Combined Test Force IAW test plan 412TW-TP-18-47, Objective Measures of Workload Feasibility Study [1]. A total of seven evaluator pilots performed takeoff, aerial refueling (AR), normal flight operations, and landing tasks to determine the utility of the CATS system in a flight test environment. The CATS was set up inside an instrumented C-17 aircraft acting as a receiving aircraft against a legacy KC-135 tanker. Alongside CATS, OPL used the Dikablis Professional eye-tracking glasses in conjunction with D-Lab software from Ergoneers. Each pilot was fitted with electrocardiogram (ECG or EKG) electrodes prior to flight. The eye-tracker was used exclusively in the right seat, and only on the August flights. During the course of these flights, ECG data were successfully obtained with little to no complications. Flight One produced useful eye-tracking data, but Flight Two did not result in successful eye tracking from the Dikablis glasses due to improper fit. For the final two flights in February, due to technical infeasibility of the current eye-tracking setup, OPL decided to forego additional eye-tracking data and instead focus on the CATS workload data collection. In addition to live flights, workload data were gathered from simulator tasks in Simulator B at the Test Pilot School. Pilots flew through four scenarios of varying difficulty in which they were tasked with following an aircraft while keeping it within a set of horizontal bars in the HUD, all while performing a secondary auditory response task. OPL gathered data from seven pilots total, but only five of the seven participated in the live flights; the two additional pilots were instrumented only with OPL's ECG amplifier. Results indicated the pilots subjectively rated the ECG as comfortable and non-invasive. The ECG was reliable for the most part; there were a few instances of leads disconnecting, resulting in momentary data drop-outs, but these were quickly detected and corrected by the pilot. The Dikablis glasses eye-tracker was generally uncomfortable and compatible with headsets for only short durations. The eye-tracker could be better implemented if integrated into a helmet. Using the ECG and workload data, the difficulty level of each task (and even levels for distinct aspects of each task) could be determined. ECG data were sensitive to both low- and high-workload flight conditions and consistent across users. When combined with eye-tracker and aircraft data, the measure showed good diagnosticity. ECG data were unique to each pilot, but of the various tasks performed, station keeping on-boom during a turn generally produced the highest workload, followed by boom limits operations.
Comparatively, takeoff and landing tasks seemed fairly straightforward and low workload. Post-task pilot questionnaires (TLX and Bedford) backed up the ECG workload scores. Eye-tracking data showed that during aerial refueling tasks, the singular point that attracted the pilot's attention was the KC-135 pilot director lights (PDLs), and could be delineated down to the "captain's bars," which were used as a visual reference point to maintain formation. Regardless of the presence of the HUD, the captain's bars held each pilot's attention almost exclusively during refueling, with only occasional glances to their instruments or the tanker wing. Unsurprisingly, pilots maintained good situational awareness with a robust scan pattern, shifting their gaze much more during landing and takeoff tasks and alternating between the HUD, instruments, and forward and right-side windows. This study has shown that OPL's methods and instrumentation can reliably provide physiological data, including workload and eye tracking, that can help better evaluate the effort and attention required by the flight crew to successfully complete aerial refueling operations. The ECG-based workload assessment system was deployed in minutes and required no training or special skills from the pilot. The system provided a relative workload number in real time from the second the ECG amplifier was turned on, and no further calibration, modification, or refinement was needed to generate the figures shown in this report. Other than commercial power for the laptops, there was zero integration with the aircraft, and the system was acceptably nonintrusive, as the pilots were not tethered in any way. Unique flight events were conveniently tagged by the test team to eliminate the need for aircraft systems integration. The ECG system could easily be deployed simultaneously on the boom operator and on a pilot in a single-seat cockpit to establish the total team workload picture with relative ease.
Eye Tracking Glasses
Software
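The abstract above does not publish the CATS workload algorithm itself, so the following Python sketch is a hedged illustration only: it derives a relative workload trace from ECG using one common proxy, heart rate variability (RMSSD) computed over a sliding window of R-R intervals, with falling variability read as rising workload. The function names, window length, and scaling are assumptions for illustration, not CATS or OPL interfaces.

import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between R-R intervals."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

def workload_proxy(rr_intervals_ms: np.ndarray, window: int = 30) -> np.ndarray:
    """Relative workload trace: inverse of windowed RMSSD, min-max scaled.

    Falling RMSSD (reduced vagally mediated variability) maps to a rising
    workload value; the output is relative, not an absolute workload score.
    """
    scores = np.array([
        1.0 / max(rmssd(rr_intervals_ms[i:i + window]), 1e-6)
        for i in range(len(rr_intervals_ms) - window)
    ])
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

# Example: simulated R-R series -- a relaxed segment (~900 ms, high variability)
# followed by a high-workload segment (~700 ms, low variability), loosely
# analogous to cruise versus on-boom station keeping.
rng = np.random.default_rng(0)
rr = np.concatenate([900 + rng.normal(0, 50, 120), 700 + rng.normal(0, 15, 120)])
print(workload_proxy(rr)[[0, -1]])  # the trace rises toward the second segment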
Quantitative eye gaze and movement differences in visuomotor adaptations to varying task demands among upper-extremity prosthesis users
Importance: New treatments for upper-limb amputation aim to improve movement quality and reduce visual attention to the prosthesis. However, evaluation is limited by a lack of understanding of the essential features of human-prosthesis behavior and by an absence of consistent task protocols. Objective: To evaluate whether task selection is a factor in visuomotor adaptations by prosthesis users to accomplish 2 tasks easily performed by individuals with normal arm function. Design, Setting, and Participants: This cross-sectional study was conducted in a single research center at the University of Alberta, Edmonton, Alberta, Canada. Upper-extremity prosthesis users were recruited from January 1, 2016, through December 31, 2016, and individuals with normal arm function were recruited from October 1, 2015, through November 30, 2015. Eight prosthesis users and 16 participants with normal arm function were asked to perform 2 goal-directed tasks with synchronized motion capture and eye tracking. Data analysis was performed from December 3, 2018, to April 15, 2019. Main Outcomes and Measures: Movement time, eye fixation, and range of motion of the upper body during 2 object transfer tasks (cup and box) were the main outcomes. Results: A convenience sample comprised 8 male prosthesis users with acquired amputation (mean [range] age, 45 [30-64] years), along with 16 participants with normal arm function (8 [50%] of whom were men; mean [range] age, 26 [18-43] years; mean [range] height, 172.3 [158.0-186.0] cm; all right handed). Prosthesis users spent a disproportionately prolonged mean (SD) time in grasp and release phases when handling the cups (grasp: 2.0 [2.3] seconds vs 0.9 [0.8] seconds; P < .001; release: 1.1 [0.6] seconds vs 0.7 [0.4] seconds; P < .001). Prosthesis users also had increased mean (SD) visual fixations on the hand for the cup compared with the box task during reach (10.2% [12.1%] vs 2.2% [2.8%]) and transport (37.1% [9.7%] vs 22.3% [7.6%]). Fixations on the hand for both tasks were significantly greater for prosthesis users compared with normative values. Prosthesis users had significantly more trunk flexion and extension for the box task compared with the cup task (mean [SD] trunk range of motion, 32.1 [10.7] degrees vs 21.2 [3.7] degrees; P = .01), with all trunk motions greater than normative values. The box task required greater shoulder movements compared with the cup task for prosthesis users (mean [SD] flexion and extension: 51.3 [12.6] degrees vs 41.0 [9.4] degrees, P = .01; abduction and adduction: 40.5 [7.2] degrees vs 32.3 [5.1] degrees, P = .02; rotation: 50.6 [15.7] degrees vs 35.5 [10.0] degrees, P = .02). However, other than shoulder abduction and adduction for the box task, these values were less than those seen for participants with normal arm function. Conclusions and Relevance: This study suggests that prosthesis users have an inherently different way of adapting to varying task demands, therefore suggesting that task selection is crucial in evaluating visuomotor performance. The cup task required greater compensatory visual fixations and prolonged grasp and release movements, and the box task required specific kinematic compensatory strategies as well as increased visual fixation. This is the first study to date to examine visuomotor differences in prosthesis users across varying task demands, and the findings appear to highlight the advantages of quantitative assessment in understanding human-prosthesis interaction.
Eye Tracking Glasses
Simulator
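As a hedged illustration of the fixation outcome reported above (percentage of gaze samples on the hand within each movement phase), the following Python sketch computes that percentage from phase-labeled gaze samples. The record format is an assumption for illustration; the study's actual synchronized motion-capture and eye-tracking pipeline is not specified in the abstract.

from collections import Counter

def hand_fixation_pct(samples):
    """Percentage of gaze samples on the hand per movement phase.

    samples: iterable of (phase, aoi) tuples, one per gaze sample, where
    phase is e.g. "reach"/"grasp"/"transport"/"release" and aoi is the
    area-of-interest label assigned to that sample.
    """
    totals, on_hand = Counter(), Counter()
    for phase, aoi in samples:
        totals[phase] += 1
        if aoi == "hand":
            on_hand[phase] += 1
    return {phase: 100.0 * on_hand[phase] / totals[phase] for phase in totals}

# Toy gaze record: reach 50% on hand, transport ~67%, release 0%.
gaze = [("reach", "target"), ("reach", "hand"), ("transport", "hand"),
        ("transport", "hand"), ("transport", "target"), ("release", "target")]
print(hand_fixation_pct(gaze))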
Reducing the cognitive load of decision-makers in emergency management through augmented reality
Decision processes in emergency management are particularly complex. Operations managers have to make decisions under time pressure while the situation at hand changes continuously. As wrong decisions in emergencies often have drastic effects, operations managers try to receive information from various sources such as the emergency control centre, their operation forces, databases, electronic location maps and drones. However, previous research has shown that humans have only limited information processing capabilities, and once these are exceeded, task performance decreases. Augmented Reality (AR) offers entirely new possibilities to visualise information. Previous research on the relationship between the use of AR for information visualisation and the experienced cognitive load yielded contradictory results. By using the design science approach, we therefore aim to develop an AR decision support system. In a comparative eye-tracking study, we plan to examine how different types of AR information visualisation affect the experienced cognitive load of operations managers and thus decision-making. In this research-in-progress paper, we present the results of expert interviews with six operations managers who described three AR use cases in emergency management and five requirements for an AR decision support system.
Eye Tracking Glasses
Software
Robotic process automation applied to education: A new kind of robot teacher?
Robots, despite being commercially available for multiple functions and in different modalities, remain an innovative topic and represent great opportunities for research and development in the education and culture of society. In this article, the use of RPA (Robotic Process Automation) robots is developed and proposed as a resource to support teaching processes. The review begins by describing the gradual adoption of robots in modern educational institutions, using examples to indicate cases, perceptions, challenges, and opportunities as the text advances. The role of AI (artificial intelligence) is pointed out, along with its current state, the different types of robot implementations, and their application in education, progressively narrowing to the RPA option, showing application cases from private enterprise and highlighting the improvements achieved. The research method is explained, which consists of compiling and reviewing the related literature to validate the authenticity and innovation of the proposed RPA-based hypothesis, followed by the design of a use-case diagram for the robot (operability), the development (software programming), and the testing of the specified functionalities, considering perspectives both for and against the project. The work continues with a demonstration of the results achieved, including a video of the robot (named Aileen) in operation, exchanging with an 11-year-old student who interacts without interruption for about 10 minutes during a teaching-learning process, in Spanish (with English subtitles). Before concluding, to highlight the breadth of opportunity and scope, some of the author's experiences with complementary but pertinent technology are shared, taking into account emotional factors through facial expression recognition, eye tracking, and the reading of the brain's electrical activity with EEG (electroencephalography), among others, along with the development potential of current RPA technology (e.g., an updated version of Aileen). The importance of AI and autonomous systems in society is emphasized, and the achievements and lessons learned with the RPA robot are reviewed, including the impact and opportunities its eventual incorporation offers governments and educational institutions. Keywords: Robot, RPA, education, artificial intelligence, teaching, learning.
Eye Tracking Glasses
Software
Smart S3D TOR: intelligent warnings on large stereoscopic 3D dashboards during take-overs
When operating a conditionally automated vehicle, humans occasionally have to take over control. If the driver is out of the loop, a certain amount of time is necessary to gain situation awareness. This work evaluates the potential of smart stereoscopic 3D (S3D) dashboard visualizations to support situation assessment and, by that, take-over performance. In a driving simulator study with a 4×2 between-within design, we presented smart take-over requests (TORs) showing the current traffic situation at various locations in 2D and S3D to 52 participants performing the n-back task. Driving and glance behavior indicate that participants considered the smart TORs and could thereby perform safer take-overs. Warnings in S3D and warnings appearing at the participant's focus of attention, as well as in the instrument cluster, performed best.
Eye Tracking Glasses
Simulator
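As a hedged illustration of the n-back secondary task mentioned above, used to keep drivers out of the loop, the following Python sketch identifies the positions where a response is expected (current stimulus matches the one n steps back) and scores hits and false alarms. The stimulus material and scoring rules of the actual study are not given in the abstract, so this is a generic formulation of the task.

def nback_targets(stimuli, n=2):
    """Return indices at which a response is expected (current == n-back)."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_score(stimuli, responses, n=2):
    """Hit rate and false-alarm count, given the set of responded indices."""
    targets = set(nback_targets(stimuli, n))
    hits = len(targets & responses)
    false_alarms = len(responses - targets)
    return hits / max(len(targets), 1), false_alarms

seq = list("ABACBCBC")
print(nback_targets(seq))        # [2, 5, 6, 7]: positions where seq[i] == seq[i-2]
print(nback_score(seq, {2, 6}))  # (0.5, 0): half the targets hit, no false alarms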
Some on-road glances are more equal than others: Measuring engagement in the driving task
The current work examines a methodology developed for assessing driver attention management using high-precision eye glances towards safety-relevant driving information. The Task Analysis Eye Movement Overlay (TAEMO) method uses task analyses, video recordings of a driving scenario, and eye glance data toward visual keys that drivers sample during the driving scenario to directly measure driver engagement. This methodology has applications for evaluating infrastructure design, driver impairment assessment, driver training, driver distraction research, and vehicle human-machine interface (HMI) system design.
Eye Tracking Glasses
Software
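As a hedged illustration of the TAEMO idea described above, the following Python sketch scores engagement as the fraction of safety-relevant visual keys (identified by the task analysis) that a driver actually sampled during a scenario segment. The data structures are assumptions for illustration, not the published TAEMO format.

def engagement_score(expected_keys, glanced_aois):
    """Fraction of task-analysis visual keys the driver actually sampled.

    expected_keys: set of AOIs the task analysis says should be glanced at
    in this segment; glanced_aois: set of AOIs the driver fixated.
    """
    if not expected_keys:
        return 1.0
    return len(expected_keys & glanced_aois) / len(expected_keys)

# Toy segment: the driver checked speed and the merge gap but not the mirror.
segment_keys = {"left_mirror", "speedometer", "merge_gap"}
driver_glances = {"speedometer", "merge_gap", "radio"}
print(engagement_score(segment_keys, driver_glances))  # 0.67: mirror was missed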
The influence of a gaze direction based attention request to maintain mode awareness
Future vehicles will combine different levels of driving automation characterized by varying responsibilities for users. This development will intensify system complexity, which poses the risk of confusing the driver. We hypothesize that the users' mode awareness suffers especially when changing from Level 3 "Conditional Automation" to Level 2 "Partial Automation". Therefore, automated systems need to be designed in a way that minimizes confusion with regard to the automation mode. The article describes the influence of a gaze-direction-based Attention Request (ATR) to avoid mode confusion, with the aim of contributing to the reliable operation of different levels of automation in one vehicle. Two similar studies were conducted. One took place in a dynamic driving simulator with 40 participants. Every participant drove for 10 minutes with a partially automated driving (PAD, SAE Level 2) system and a conditionally automated driving (CAD, SAE Level 3) system in the order PAD/CAD/PAD. The second study was conducted on a German highway in a Wizard-of-Oz car. All 40 test persons drove 8 minutes in each PAD and CAD phase, in the order PAD/CAD/PAD/CAD/PAD/CAD. In both studies, the CAD system was a high-performing hands-off Level 2 system that required no input from the driver. To promote the same mental model for all participants, a requirement for measuring differences in mode awareness, all persons received a detailed description of the Level 2 and Level 3 systems, presented by video and text. Both studies used a between-subject design to measure the influence of an ATR. The ATR was based on the gaze direction of the driver and initiated by the investigator when the driver's gaze was not in the street AOI for longer than 4 seconds. Mode awareness was operationalized through visual attention towards driving-relevant areas, a qualitative analysis of a questionnaire, and a follow-up interview. The ATR proved to be an effective means of maintaining mode awareness when using Level 2 and Level 3 systems within one car. Specifically, visual attention during PAD did not decrease after an intermittent CAD drive. Moreover, visual attention to the road scene during PAD increased for the group with an ATR. This was indicated by a significant interaction effect in the development of visual attention to the road scene between the groups with and without the ATR. Thus, the gaze-direction-based ATR proved an effective measure for maintaining mode awareness when different levels of automation are combined in one vehicle. This result helps to take the next step toward realizing such combined multilevel systems with tailored HMIs for advanced driver assistance systems. It must be considered, however, that the studies emphasized the drivers' first contact with partially and conditionally automated systems. Further studies should investigate the long-term effect of an ATR.
Eye Tracking Glasses
Simulator
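The trigger rule in the abstract above is concrete: an ATR is issued when the driver's gaze is outside the street AOI for longer than 4 seconds. In the studies the investigator initiated the ATR manually; the following Python sketch automates the same rule, with the sampling interval and the reset-after-warning behavior as assumptions for illustration.

ATR_THRESHOLD_S = 4.0  # off-road gaze duration that triggers an Attention Request

def atr_events(gaze_samples, dt=0.1):
    """Return sample indices at which an ATR would fire.

    gaze_samples: sequence of AOI labels at dt-second intervals; any label
    other than "street" counts as gaze away from the road scene.
    """
    events, off_road_time = [], 0.0
    for i, aoi in enumerate(gaze_samples):
        off_road_time = 0.0 if aoi == "street" else off_road_time + dt
        if off_road_time > ATR_THRESHOLD_S:
            events.append(i)
            off_road_time = 0.0  # reset after the warning; wait for a new off-road glance
    return events

# Toy trace: 2 s on the road, a 4.5 s glance at the instrument cluster, 1 s back.
trace = ["street"] * 20 + ["instrument_cluster"] * 45 + ["street"] * 10
print(atr_events(trace))  # [60]: fires once the off-road glance exceeds 4 s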