Eye Tracking - why markers make the difference
Understanding Prophea.X & Dynamic Global AOIs®
Behavioral research depends on linking human perception to environmental cues, and eye tracking offers direct insight into visual attention. Yet most eye-tracking systems still flatten gaze into two dimensions, missing the depth of real-world contexts like cockpits or simulators. By integrating spatial markers, eye trackers can anchor gaze in true 3D space, turning objects into stable areas of interest across angles, movements, and sessions. This shift not only refines behavioral analysis but also paves the way toward machine seeing and the intelligent systems incorporated in Prophea.X.
Eye tracker & Areas of Interest - an Introduction
Discover the essentials and latest advancements in eye-tracking technology for your research
Behavioral research has long depended on the ability to link human perception and attention to observable environmental cues. With the latest eye tracker technology, we gain direct access to one of the most crucial channels of human behavior: visual attention. However, while modern eye trackers offer high-precision gaze data, most conventional systems still interpret gaze in two dimensions—limiting analyses to screen-based stimuli or flat video perspectives. In real-world environments such as vehicle cockpits, aircraft control panels, or immersive simulation spaces, this limitation creates a fundamental disconnect: the human operates in three dimensions, but the data is flattened into two.
To bridge this gap, researchers have begun integrating infrared (IR) markers into their eye tracker methodologies for experimental setups. These markers—small, unobtrusive objects either passively reflective or actively emitting infrared light—can be strategically placed within an environment to define reference points in three-dimensional space. When recognized by the eye tracking system, they allow the transformation of gaze vectors into spatially accurate coordinates. This approach is not merely a technical upgrade—it is a conceptual shift from observing where someone is looking on a screen to understanding what object in the environment is being attended to and from what angle or position.
A key application of this method lies in the definition and tracking of Areas of Interest (AOIs)—regions or objects that are critical for interpreting human attention and interaction. In traditional research, AOIs are defined as rectangles or polygons in a 2D video stream. But what happens when the participant moves, or when the object of interest is angled, mobile, or partially obscured? IR markers allow AOIs to be defined relative to the real-world coordinates of objects—mirrors, gear levers, switches, or entire dashboard modules—making them persistent, reliable, and transferable across sessions and subjects.
Beyond aiding human researchers, this approach nudges eye tracking toward a new conceptual frontier: machine seeing. By anchoring gaze to three-dimensional, structured space, IR marker systems help generate spatial awareness not just for humans interpreting the data but also for algorithms and intelligent systems capable of learning from it. In this way, IR markers serve as foundational tools for a growing class of applications in autonomous systems, real-time feedback loops, and cognitive modeling in eye-tracking studies.
This paper explores the role of IR markers in enhancing spatial orientation in eye tracking studies, with a focus on defining AOIs in 3D space, maintaining consistency across changing perspectives, and laying the groundwork for machine-readable environmental perception. Through examples drawn from cockpit environments and similar high-density control settings, we demonstrate how this eye tracker method improves data reliability, interpretability, and scalability—ultimately enabling more rigorous and more meaningful insights into human behavior in real-world contexts.
The Role of Infrared Markers in Eye Tracking Systems
What Are Infrared Markers?
Infrared markers are small elements that either passively reflect or actively emit infrared light, making them visible to dedicated cameras and sensors—but invisible to the human eye. They are commonly attached to objects or surfaces within the experimental eye tracker environment. When used in conjunction with wearable eye trackers, which are equipped with scene cameras and infrared-sensitive lenses, these markers allow for real-time detection of specific locations or objects in space.
There are two primary types:
- Passive IR markers: Usually made of retroreflective materials. These rely on an IR light source—often part of the eye tracking hardware—which illuminates the scene. The markers reflect this light back to the camera, allowing them to be recognized.
- Active IR markers: These contain built-in IR LEDs and emit light at specific wavelengths. They are more resistant to environmental IR noise and can be uniquely identified, enabling more complex tracking scenarios.
How They Integrate with Eye Tracking Systems
In modern mobile eye tracking systems, such as Ergoneers Prophea.X, which is also compatible with Tobii hardware, the scene camera captures the field of view from the subject’s perspective. IR markers, once placed in the environment, appear as detectable patterns or points that the software can use to reconstruct a spatial map. By correlating gaze vectors with the known coordinates of these markers, the system can infer what physical object or area is being fixated upon—within a shared 3D coordinate space.
In combination with spatial mapping software (or integrated within solutions like Prophea.X), this setup enables:
- Gaze projection into 3D environments
- Stable AOI definition across sessions and individuals
- Dynamic scene understanding even during participant movement
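To make the idea concrete, here is a minimal sketch of gaze projection into a 3D environment, assuming a head pose (position and rotation) that has already been recovered from the markers. The function names and the dashboard plane are illustrative, not part of any product API:

```python
def mat_vec(R, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def project_gaze_to_plane(head_pos, head_rot, gaze_dir_cam, plane_point, plane_normal):
    """Rotate a camera-space gaze direction into the world frame using the
    marker-derived head pose, then intersect the ray with a plane (e.g. a
    dashboard surface). Returns the 3D hit point, or None if there is none."""
    d = mat_vec(head_rot, gaze_dir_cam)  # gaze direction in world coordinates
    denom = sum(d[i] * plane_normal[i] for i in range(3))
    if abs(denom) < 1e-9:
        return None  # gaze ray is parallel to the plane
    t = sum((plane_point[i] - head_pos[i]) * plane_normal[i] for i in range(3)) / denom
    if t < 0:
        return None  # plane lies behind the viewer
    return [head_pos[i] + t * d[i] for i in range(3)]

# Head at the origin with identity orientation, looking straight at a
# dashboard plane one metre ahead (at z = -1).
hit = project_gaze_to_plane(
    head_pos=[0.0, 0.0, 0.0],
    head_rot=[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    gaze_dir_cam=[0.0, 0.0, -1.0],
    plane_point=[0.0, 0.0, -1.0],
    plane_normal=[0.0, 0.0, 1.0],
)
print(hit)  # [0.0, 0.0, -1.0]
```

The same ray-versus-surface test can then be repeated for every marker-anchored object in the scene to decide which one is being fixated.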
Eye-tracking technology capable of spatial AOI detection is a substantial improvement over traditional methods, where gaze is interpreted only in relation to the frame of the camera feed, which shifts constantly as the subject moves.
Precision Through Anchoring
One of the main benefits of using IR markers is stability. In a moving world—where subjects lean forward, turn their heads, or interact with mobile components—AOIs that are anchored via markers remain stable. For example, a cockpit’s rearview mirror marked with IR targets can be defined once as an AOI and then reliably tracked even as lighting changes or the participant’s viewpoint shifts.
In effect, IR markers transform the environment into a structured, trackable space. Each marker acts as an anchor point, turning an otherwise shifting camera feed into a navigable 3D scene with fixed referents. This lays the groundwork for more accurate behavioral analysis, particularly in multimodal research environments where attention must be correlated with events, stimuli, or actions occurring in real-world space.
Flexible and Modular Design
The beauty of eye-tracking software incorporating an IR marker system lies in its modularity. Researchers can freely design their experimental environments, placing markers on an infotainment system, a steering wheel, or even on physical tools used in human–machine interaction studies. Markers can also be combined with motion tracking systems or full-body pose estimation to analyze gaze in relation to body position or gesture.
Furthermore, the marker infrastructure scales well from single-subject lab studies to complex simulator setups, where multiple observers or participants interact in a shared space.
From Static Pixels to Anchored Objects in Space
Areas of Interest (AOIs) and Eye Tracking
In behavioral and usability research, Areas of Interest (AOIs) are crucial for interpreting where, how long, and how often a person directs their gaze. Traditionally, AOIs are defined in 2D space—manually drawn onto video recordings or screen images as fixed pixel regions. This approach has served well in screen-based research, but breaks down in spatially complex, real-world environments where neither the observer nor the target object remains stationary.
To address this, infrared (IR) markers enable a paradigm shift: AOIs can be anchored to physical objects, recognized dynamically, and tracked across perspective changes and movement. This chapter contrasts traditional 2D AOIs with marker-based approaches and highlights the advantages for real-world, spatially embedded studies.
The Challenge of Pixel-Based AOIs
Defining AOIs as rectangles or polygons on a 2D video stream introduces several challenges:
- Perspective distortion: AOIs become inaccurate as the participant changes position or angle.
- Object ambiguity: Visually similar objects may overlap in the camera view but differ spatially.
- No persistence: If the object moves (e.g., a swinging mirror or an opened glove box), the AOI no longer applies.
- Manual labor: Drawing AOIs on video for each session or participant is time-consuming and error-prone.
- Poor reproducibility: Variability in video angle, resolution, and lens distortion reduces the reliability of comparisons across subjects or studies.
In environments like a cockpit—where dozens of spatially dense, visually similar components compete for attention—this method can produce misleading results for eye tracker studies.
IR Markers Enable Object-Based AOIs
By attaching IR markers to physical objects, those objects become identifiable entities within a shared 3D space. Instead of assigning AOIs to fluctuating video pixels, the system tracks the object itself as a spatial reference.
This results in:
- Stable AOIs regardless of subject movement, camera angle, or lighting
- Automatic object recognition during live or post-processed analysis
- Reusability across sessions: once defined, AOIs persist for every participant
- Reduced manual effort in defining and adjusting AOIs
- Enhanced accuracy in identifying object-level attention
For example, marking a rearview mirror or dashboard dial with three IR points allows the system to triangulate its exact position and orientation, defining an AOI that stays valid regardless of where the participant is seated or looking from.
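One way such triangulation from three marker points can work is the classic triad construction: build an orthonormal frame from the points in the object's reference layout and in the observed scene, then combine the two frames into a rotation and translation. This pure-Python sketch is illustrative; production eye-tracking software may use more robust estimators:

```python
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def norm(a):
    n = sum(x * x for x in a) ** 0.5
    return [x / n for x in a]

def frame(p1, p2, p3):
    """Orthonormal basis (triad) spanned by three non-collinear marker points."""
    e1 = norm(sub(p2, p1))
    e3 = norm(cross(e1, sub(p3, p1)))
    e2 = cross(e3, e1)
    return [e1, e2, e3]

def pose_from_markers(ref, obs):
    """Recover rotation R and translation t with obs ~= R @ ref + t, from three
    markers given in the object's reference layout (ref) and as observed in the
    world frame (obs)."""
    Fr, Fo = frame(*ref), frame(*obs)
    # Both bases are orthonormal, so R = Fo * Fr^T.
    R = [[sum(Fo[k][i] * Fr[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    t = [obs[0][i] - sum(R[i][j] * ref[0][j] for j in range(3)) for i in range(3)]
    return R, t

def apply_pose(R, t, p):
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Three markers on a mirror: reference layout vs. observed world positions
# (here the mirror has been rotated 90 degrees and shifted by [1, 2, 0]).
ref = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
obs = [[1.0, 2.0, 0.0], [1.0, 3.0, 0.0], [0.0, 2.0, 0.0]]
R, t = pose_from_markers(ref, obs)
print(apply_pose(R, t, [0.5, 0.5, 0.0]))  # mirror centre in world space
```

Once the pose is recovered, any AOI defined in the object's reference frame can be mapped into world coordinates for the current frame.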
Tracking Moving or Dynamic AOIs
Beyond static fixtures, IR markers in eye tracking also allow for dynamic AOIs—objects that move or change position during the task. This is especially relevant in real-world studies involving human-machine interaction, mobile tools, or variable scene configurations.
Examples include:
- Tracking a technician’s gaze on a handheld instrument
- Monitoring attention toward a moving vehicle in a simulation
- Evaluating how users follow dynamic cues on robotic arms or AR displays
By linking AOIs to moving marker configurations, the system retains spatial fidelity without having to redraw boundaries or reinterpret the scene each time.
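As a minimal sketch of this idea, with hypothetical marker coordinates: the AOI is re-anchored every frame to the centroid of its markers, so the hit test follows the object wherever it moves:

```python
def centroid(points):
    return [sum(p[i] for p in points) / len(points) for i in range(3)]

def gaze_on_aoi(gaze_point, marker_positions, radius):
    """A dynamic AOI re-anchored each frame to the centroid of its markers;
    the gaze point counts as a hit if it falls within `radius` of the centre."""
    c = centroid(marker_positions)
    d2 = sum((gaze_point[i] - c[i]) ** 2 for i in range(3))
    return d2 <= radius ** 2

# Frame 1: handheld instrument near the origin; frame 2: carried 2 m away.
frame1 = [[-0.1, 0, 0], [0.1, 0, 0], [0, 0.1, 0]]
frame2 = [[1.9, 0, 0], [2.1, 0, 0], [2.0, 0.1, 0]]
print(gaze_on_aoi([0.0, 0.05, 0.0], frame1, radius=0.2))  # True
print(gaze_on_aoi([0.0, 0.05, 0.0], frame2, radius=0.2))  # False: AOI moved with the object
print(gaze_on_aoi([2.0, 0.05, 0.0], frame2, radius=0.2))  # True
```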
Object Identity and Semantic Meaning
An often-overlooked benefit in eye tracker studies using marker-based AOIs is the capacity to assign semantic meaning to objects. Once markers are recognized, each AOI can carry metadata: e.g., “left-side climate control” or “hazard light switch.” This not only enables richer analysis but also facilitates integration with other data sources such as:
- Event logs (e.g., button presses, alert triggers)
- Biometric signals (e.g., skin conductance, heart rate)
- Motion data (e.g., head or hand movement)
AOIs become not just regions of interest, but semantically grounded units of interaction in the experimental design.
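As a sketch of what such semantic grounding enables, the hypothetical data model below attaches metadata to an AOI and joins fixations with an event log by object identity and time window:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticAOI:
    """A marker-anchored AOI carrying semantic metadata, so fixations can be
    joined with other data streams by object identity and timestamp."""
    aoi_id: str
    label: str      # e.g. "hazard light switch"
    category: str   # e.g. "safety-critical control"
    metadata: dict = field(default_factory=dict)

def glances_before_events(fixations, events, window_s=2.0):
    """For each logged event, count fixations on the same object within the
    preceding time window (both streams are (timestamp_s, aoi_id) tuples)."""
    return {
        (t_e, aoi): sum(1 for t_f, f_aoi in fixations
                        if f_aoi == aoi and t_e - window_s <= t_f < t_e)
        for t_e, aoi in events
    }

aoi = SemanticAOI("hazard_switch", "hazard light switch", "safety-critical control")
fixations = [(0.5, "hazard_switch"), (1.2, "hazard_switch"), (3.0, "mirror_left")]
events = [(1.5, "hazard_switch")]   # button press logged by the simulator
print(glances_before_events(fixations, events))  # {(1.5, 'hazard_switch'): 2}
```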
A Comparative Summary
| Feature | 2D Pixel-Based AOIs | IR Marker–Based AOIs |
| --- | --- | --- |
| Perspective stability | Low | High |
| Object movement handling | None | Supported |
| Cross-session reproducibility | Poor | Excellent |
| Setup effort | Manual per video | One-time spatial calibration |
| Spatial context | Absent | Fully integrated |
| Automation potential | Low | High |
Conclusion: AOIs that Understand the World
With IR markers, an eye tracker becomes context-aware, physically grounded, and resilient to variability—enabling researchers to move from interpreting gaze on a screen to understanding gaze in a living, operational environment. This shift is essential for research domains like automotive UX, aviation safety, industrial ergonomics, and mobile human behavior studies.
It also sets the stage for advanced capabilities such as intelligent scene recognition, automated AOI generation, and machine-driven behavioral annotation—topics we begin to explore in the next chapter.
Stabilizing Attention Mapping in Real-World Environments
Areas of Interest in 3D Space
As experimental settings for eye-tracking studies move beyond flat screens and into real-world environments such as cars, airplanes, operating rooms, and immersive simulators, Areas of Interest (AOIs) must evolve as well. Anchoring attention to a screen pixel is no longer sufficient; researchers need to know whether a participant looked at the rearview mirror, the infotainment system, or the gear shifter, and from what angle, at what moment, and under what conditions. This requires not just tracking gaze but understanding gaze in context.
Infrared (IR) markers enable the construction and tracking of 3D AOIs—spatially defined regions or objects within a physical environment that can be automatically recognized and persistently referenced across participants, movements, and sessions. This chapter examines how these AOIs function, their key advantages, and how they enable more powerful and ecologically valid behavioral analysis.
What Is a 3D AOI?
A 3D AOI is an area or volume in space that has defined coordinates within a shared spatial reference system. Unlike 2D AOIs, which are fixed to a camera perspective, 3D AOIs are attached to physical space—either as static objects (e.g., dashboard buttons) or dynamic objects (e.g., movable panels, tools, or screens).
They are created by:
– Attaching IR markers to the object or area of interest
– Calibrating the scene to map marker positions to a coordinate space
– Defining the AOI boundaries relative to those marker positions
– Linking gaze vectors to intersecting 3D surfaces or volumes
Once established, these AOIs remain valid regardless of subject movement, head rotation, or camera angle.
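Assuming an AOI defined as an axis-aligned box in world coordinates, the gaze-to-AOI test in the last step can be implemented with a standard ray-box (slab) intersection. This is an illustrative sketch, not a vendor implementation:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does a forward gaze ray intersect an axis-aligned AOI volume?"""
    t_near, t_far = 0.0, float("inf")  # starting at 0.0 restricts to the forward ray
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            if not (box_min[i] <= origin[i] <= box_max[i]):
                return False  # ray parallel to this slab and outside it
        else:
            t1 = (box_min[i] - origin[i]) / direction[i]
            t2 = (box_max[i] - origin[i]) / direction[i]
            lo, hi = min(t1, t2), max(t1, t2)
            t_near, t_far = max(t_near, lo), min(t_far, hi)
            if t_near > t_far:
                return False  # slab intervals no longer overlap
    return True

# Gaze from the driver's head toward a speedometer volume straight ahead.
speedo = ([-0.1, -0.05, -1.1], [0.1, 0.05, -0.9])     # box_min, box_max
print(ray_hits_box([0, 0, 0], [0, 0, -1], *speedo))   # True
print(ray_hits_box([0, 0, 0], [0, 1, 0], *speedo))    # False
```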
Persistent Attention Zones Across Participants
One of the major advantages of 3D AOIs is their cross-participant consistency. In traditional 2D eye-tracking setups, each subject's camera perspective differs slightly, meaning that AOIs must be redrawn or warped to match. With 3D AOIs:
– Every participant interacts with the same spatial AOIs, no matter their position
– AOIs can be defined once and reused across all subjects
– Group data can be aggregated and compared with spatial precision
This makes heatmaps, fixation analyses, and attention metrics more reliable, reproducible, and statistically valid.
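Because every participant shares the same spatial AOIs, aggregation reduces to pooling per-AOI dwell times across sessions. A simple sketch with hypothetical session data:

```python
from collections import defaultdict

def aggregate_dwell(sessions):
    """Pool per-AOI dwell times across participants.
    `sessions` maps participant id -> list of (aoi_id, dwell_seconds)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for fixations in sessions.values():
        for aoi, dwell in fixations:
            totals[aoi] += dwell
            counts[aoi] += 1
    return {aoi: {"total_s": round(totals[aoi], 3),
                  "mean_s": round(totals[aoi] / counts[aoi], 3)}
            for aoi in totals}

sessions = {
    "P01": [("mirror_left", 0.4), ("speedometer", 0.8)],
    "P02": [("mirror_left", 0.6), ("speedometer", 0.2), ("mirror_left", 0.5)],
}
print(aggregate_dwell(sessions))
```

No per-subject warping or redrawing is needed before comparing groups: the AOI ids already refer to the same physical objects.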
AOIs in Dynamic and Interactive Scenes
Real-world tasks often involve motion, not only by the participant but also by the objects they interact with. Marker-based 3D AOIs can handle these dynamics:
– A driver adjusts the rearview mirror: the AOI moves with it.
– A pilot pulls a control lever: the AOI follows the handle’s arc.
– A user taps an interactive touchscreen that repositions: the AOI remains valid.
By tying AOIs to marker-based object tracking, gaze behavior remains meaningful even in scenes with fluid geometry.
Scene-Dependent AOI Logic
3D AOIs also support conditional logic based on the task structure of an eye-tracking study. For example:
– Gaze to an AOI might be interpreted differently depending on the phase of a task.
– AOIs can change size, shape, or position dynamically (e.g., highlighting only when relevant).
– Complex hierarchical AOIs can be built—for example, defining a parent AOI for “dashboard” and sub-AOIs for “speedometer,” “fuel gauge,” etc.
This introduces the potential for semantic AOIs that combine spatial and cognitive layers—capturing not just where participants look, but why.
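Hierarchical AOIs can be represented as a simple parent-to-children mapping, with sub-AOI dwell times rolled up into the parent. A minimal sketch with hypothetical AOI names:

```python
def rollup(dwell, hierarchy):
    """Roll sub-AOI dwell times up into their parent AOIs.
    `dwell` maps aoi_id -> seconds; `hierarchy` maps parent -> child ids."""
    out = dict(dwell)
    for parent, children in hierarchy.items():
        out[parent] = out.get(parent, 0.0) + sum(dwell.get(c, 0.0) for c in children)
    return out

dwell = {"speedometer": 1.2, "fuel_gauge": 0.3, "mirror_left": 0.9}
hierarchy = {"dashboard": ["speedometer", "fuel_gauge"]}
print(rollup(dwell, hierarchy)["dashboard"])  # total dashboard dwell
```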
Benefits of 3D AOIs in Cockpit and Simulator Eye-Tracking Studies
The use of 3D AOIs is especially impactful in cockpit settings, where spatial layout, object hierarchy, and ergonomic design all matter. With IR marker-defined AOIs, researchers can:
– Determine how often drivers check mirrors or displays
– Analyze transition times between critical controls
– Compare scanning patterns between novice and expert users
– Identify overlooked safety-critical regions
– Measure how changes in layout impact attention distribution
In simulators, this data enables rapid iteration on interface design, training programs, and human–machine interaction models—backed by quantitative, spatially grounded attention data.
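Metrics such as glance counts and transitions between critical controls can be derived from a time-ordered stream of AOI-labelled gaze samples. The sketch below counts AOI-to-AOI transitions; the labels are hypothetical:

```python
from collections import Counter

def transitions(samples):
    """Count AOI-to-AOI gaze transitions from a time-ordered stream of
    (timestamp_s, aoi_label) samples; None means no AOI was hit."""
    counts, prev = Counter(), None
    for _, aoi in samples:
        if aoi is not None and prev is not None and aoi != prev:
            counts[(prev, aoi)] += 1
        if aoi is not None:
            prev = aoi  # short gaps off any AOI do not reset the sequence
    return counts

stream = [(0.00, "road"), (0.04, "road"), (0.08, "mirror_left"),
          (0.12, "road"), (0.16, None), (0.20, "speedometer")]
print(transitions(stream))
```

Dwell times, glance frequencies, and novice-versus-expert scan-path comparisons can be built on the same labelled stream.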
From Eye Tracker Gaze Mapping to Spatial Insight
By anchoring gaze behavior to a stable 3D space, IR marker-based AOIs shift the research paradigm:
– From observing gaze to understanding interaction
– From annotating videos to tracking behavior in space
– From flattened camera feeds to rich, multidimensional insight
The result is not just more precise data—but a better understanding of how people perceive, navigate, and engage with complex environments.
© Dikablis.X by Ergoneers
Balancing Precision, Practicality, and Scalability
Benefits and Limitations of IR Marker-Based Eye Tracking Systems
Infrared (IR) markers bring critical spatial intelligence to eye tracking systems. They offer a structured, reliable way to anchor gaze data in real-world environments, enabling accurate, reproducible interpretation of visual attention in three-dimensional space. However, like any research tool, this approach comes with trade-offs.
In this chapter, we provide a clear-eyed assessment of the strengths and constraints of using IR markers in behavioral studies—helping researchers decide when and how to apply this method effectively.
Benefits
High Spatial Precision
IR marker setups enable millimeter-level gaze-object mapping, even in dynamic or cluttered environments. By anchoring AOIs to physical coordinates, researchers avoid the distortions, shifts, and inconsistencies common in camera-relative systems.
Stable and Reusable AOIs
AOIs defined via IR markers are persistent across sessions, camera angles, and participants. Once calibrated, the same object-based AOIs can be reused for different subjects, streamlining analysis and enhancing cross-subject comparability.
Robustness to Subject Movement
Unlike 2D pixel-based methods, IR marker frameworks remain accurate even as participants change posture, move within the environment, or rotate their heads. This is essential in cockpit studies, surgical settings, or mobile fieldwork.
Compatibility with Real-World Tasks
Marker-based setups support naturalistic, hands-on interaction with physical objects—critical for applied research in automotive, aviation, manufacturing, and simulation-based training environments.
Enables Machine Seeing and Automation
By providing a reliable 3D frame of reference, IR markers serve as the backbone for real-time scene interpretation and automated gaze analysis—empowering intelligent interfaces and adaptive systems.
Limitations
Setup Complexity
Marker-based systems require initial calibration of the scene, careful marker placement, and sometimes the integration of external tracking software. While this process can be standardized, it introduces overhead compared to simpler 2D setups.
Marker Occlusion Risks
Markers must remain visible to the eye tracking camera(s) to function reliably. In environments with high interaction or movement, occlusion can occur—temporarily disabling AOI tracking or corrupting spatial mapping.
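A common mitigation is to hold the last valid marker-derived pose for a few frames and flag longer gaps as data loss. A minimal sketch of such a policy (the frame values stand in for full pose estimates):

```python
def smooth_poses(raw_poses, max_hold=3):
    """When markers are briefly occluded (None), reuse the last valid pose for
    up to `max_hold` frames; longer gaps are reported as lost (None)."""
    out, last, held = [], None, 0
    for pose in raw_poses:
        if pose is not None:
            last, held = pose, 0
            out.append(pose)
        elif last is not None and held < max_hold:
            held += 1
            out.append(last)     # bridge a short occlusion
        else:
            out.append(None)     # gap too long: flag the data as lost
    return out

poses = ["A", None, None, "B", None, None, None, None, "C"]
print(smooth_poses(poses, max_hold=2))
# ['A', 'A', 'A', 'B', 'B', 'B', None, None, 'C']
```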
Hardware Dependence
Not all eye tracking systems support IR marker tracking natively. Integrating marker tracking may require additional cameras, IR lighting, or scene mapping software (e.g., Tobii Pro Glasses with Prophea.X or similar systems).
Environmental Constraints
IR markers work best in controlled lighting conditions. In bright outdoor environments or settings with interfering infrared sources (e.g., heat lamps, sunlight), marker detection accuracy may decrease.
Scalability Considerations
For large-scale or multi-subject studies, maintaining consistent marker placement, calibration accuracy, and scene interpretation can become logistically challenging—especially in complex, multi-room environments.
Strategic Use: When to Choose IR Marker Integration
IR marker systems are ideal when:
- The research environment is spatially complex (e.g., a cockpit, simulator, or lab)
- Objects of interest are 3D, movable, or interactable
- Precision and reproducibility are critical
- The study requires real-time gaze-based system feedback
- Researchers seek to define AOIs anchored to the environment, not to a video frame
They may be excessive when:
- The study is screen-based or purely 2D
- The AOIs are static and fixed within a single camera feed
- Minimal setup or portability is a higher priority than spatial accuracy
- Environmental conditions interfere with IR signal quality
Hybrid Possibilities and Future Integration
Increasingly, marker-based systems are being combined with:
- Fiducial tags (e.g., ArUco, AprilTags) for low-cost object recognition
- SLAM (Simultaneous Localization and Mapping) algorithms for marker-free spatial mapping
- Computer vision and AI, as in Prophea AutoRecog, for object detection and scene understanding without hardware markers
While IR markers in connection with eye trackers remain a gold standard for spatial stability and precision, these complementary technologies open new doors for hybrid approaches—balancing flexibility, scalability, and automation.
In the next and final chapter, we look at real-world applications across automotive, aviation, simulation, and ergonomics—illustrating how IR marker–based gaze tracking is being used to generate insights, improve systems, and shape the next generation of human–environment research.
Translating Gaze and Spatial Awareness into Actionable Insights
Real-World Applications of IR Marker-Based Eye Tracking
Infrared marker–based eye tracking is no longer limited to academic laboratories. It is increasingly being deployed in real-world environments where spatial accuracy, interpretability, and system integration are non-negotiable. This chapter highlights key application domains where IR markers in eye tracker studies enhance behavioral analysis, training, interface design, and automation.
Automotive and Cockpit Research
Use Case: Driver Attention and Safety Analysis
In vehicle cockpit environments, IR markers enable precise tracking of driver gaze relative to mirrors, dashboards, navigation systems, and hazard indicators. This allows researchers and manufacturers to:
- Quantify glance behavior across safety-critical zones
- Detect missed glances before lane changes or turns
- Identify cognitive overload or fatigue through gaze patterns
- Optimize interface layout and information prioritization
Why IR markers matter: Vehicle interiors are dense and multi-angled; fixed video-based AOIs break down under head movement. IR markers maintain consistent spatial tracking across motion, subjects, and sessions.
Aviation and Control Panels
Use Case: Pilot Situational Awareness in Simulated Flight
Pilot gaze behavior is a central indicator of flight readiness and emergency response. By placing IR markers on key instruments (altimeter, artificial horizon, etc.), researchers can assess:
- How pilots scan instruments during normal and emergency scenarios
- Whether attention is properly distributed or biased
- Real-time training feedback for missed AOIs or overfocus
Integration of eye trackers with simulator systems allows gaze-driven triggers, such as scoring attention fidelity or adapting difficulty based on visual scanning behavior.
Simulation-Based Training
Use Case: Attention-Adaptive Learning Environments
In military, medical, or industrial simulation, IR marker–defined AOIs enable real-time feedback and analytics:
- Tracking whether trainees attend to critical zones during procedures
- Comparing expert vs. novice gaze distribution patterns
- Triggering scenario changes or feedback based on attention lapses
Markers help instructors move beyond performance outcomes to analyze how and where attention was allocated.
Human–Machine Interaction (HMI) Design
Use Case: Gaze-Informed Adaptive Interfaces
In research and prototyping of complex interfaces (e.g., heavy machinery, AR displays), IR markers allow the system to:
- Adjust displays based on real-time gaze
- Monitor focus distribution across digital and physical controls
- Identify confusing or neglected interface elements
By grounding gaze spatially, designers can improve ergonomics, safety, and efficiency—especially in high-stakes or high-load settings.
Ergonomics and Workstation Optimization
Use Case: Visual Workflow and Productivity Studies
In assembly lines, surgical workstations, or command centers, IR marker–based gaze data helps identify:
- Workflow bottlenecks caused by excessive gaze shifts
- Poorly placed displays or instruments
- Visual distractions that impair task performance
Spatial eye-tracking analysis enables objective, data-driven redesign of human-centered environments.
Foundations for Autonomous and Assistive Systems
IR marker infrastructure can serve as a spatial “language” between humans and intelligent systems. In scenarios where:
- Robots need to understand what the human sees or intends
- Driver-assist systems must respond to attention blind spots
- Adaptive environments tailor feedback to user focus
…the combination of 3D gaze tracking and machine-perceivable markers lays the groundwork for shared perception between human and machine.
Closing Thoughts
The use of IR markers with eye-tracking glasses is not merely a technical enhancement; it is a strategic shift toward spatially grounded behavioral insight. By mapping gaze to real-world objects in 3D, researchers and systems designers gain access to a richer, more accurate understanding of human behavior in context. As environments grow more complex and systems more autonomous, this approach enables the kind of robust, real-time, and actionable data that future research and technology will increasingly demand.
Prophea.X, when integrated with high-precision eye-tracking hardware such as Tobii systems, offers an advanced platform for leveraging IR marker–based spatial gaze tracking—turning raw attention into structured, analyzable, and system-integrated perception. Prophea.X is not just a platform—it’s a catalyst for the next generation of reliable, scalable, and intelligent eye tracking. Whether through precision infrared markers or smart AutoRecog vision, it empowers researchers and practitioners to decode human attention with confidence, unlocking deeper insights and driving innovation across fields.
The future of eye tracking is here, and Prophea.X is leading the way.
Boost your lab for the future
Want to find out how smart AOIs could help your research?
Request a consultation today!