Situational Awareness Modeling in AR-Guided CPR

This is the official code repository for the paper to be presented at IEEE ISMAR 2025, titled "Will You Be Aware? Eye Tracking-Based Modeling of Situational Awareness in Augmented Reality", authored by Zhehan Qu, Tianyi Hu, Christian Fronk, and Maria Gorlatova. arXiv link; video link

Overview

This work models situational awareness (SA) through eye tracking data captured on a Magic Leap 2 device, in an AR app designed for cardiopulmonary resuscitation (CPR) guidance. To evaluate SA, we designed two realistic, unexpected incidents, i.e., the patient bleeding or vomiting during the CPR procedure, and observed the participants' responses. Based on these responses, we assigned each participant a good or poor SA label and trained a graph neural network that predicts these labels with 83% accuracy.
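For intuition, the graph-based modeling can be pictured as turning a fixation sequence into a graph that a GNN can classify: one node per fixation, with edges linking temporally consecutive fixations. The sketch below is purely illustrative and uses our own assumed node features (duration and gaze position); it is not the paper's actual graph construction.

```python
# Illustrative sketch (NOT the paper's actual construction): build a
# simple gaze graph from a fixation sequence. Each fixation becomes a
# node with assumed features (duration, x, y); consecutive fixations
# are connected by an edge.

def build_fixation_graph(fixations):
    """fixations: list of (t_start, t_end, x, y) tuples.

    Returns (node_features, edges), where node_features[i] is
    (duration, x, y) for fixation i, and edges link each fixation
    to the one that follows it in time.
    """
    nodes = [(t_end - t_start, x, y) for t_start, t_end, x, y in fixations]
    edges = [(i, i + 1) for i in range(len(fixations) - 1)]
    return nodes, edges
```

A real pipeline would feed such node features and edges into a graph neural network library; this fragment only shows the structural idea.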

Video Demonstration

Check the video below (simply click on the image!) to see how we set up our experiment and how the incidents look in the AR view. A brief introduction to our modeling method is also included in the video.

Watch the video

Code Base Introduction

The code base consists of three main modules: the gaze extraction and classic ML module, the (baseline) PatchTSMixer module (thanks to George Zerveas et al. for open-sourcing their code, from which we built part of the model training/testing workflow), and the FixGraphPool module. The recommended workflow (and the one we followed) is to first compile the raw collected gaze data with the gaze extraction module, and then run the preprocessing code on the gaze extraction output before running the different models.
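To illustrate what a gaze extraction step typically does, the sketch below implements a standard dispersion-threshold (I-DT) fixation detector that groups raw gaze samples into fixations. This is a hedged sketch of the general technique, not this repository's code; the sample format and the threshold values are assumptions for illustration.

```python
# Illustrative I-DT fixation detection sketch (not the repository's
# actual gaze extraction code). Samples are assumed to be (t, x, y)
# tuples; thresholds are placeholder values.

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Group raw gaze samples (t, x, y) into fixations.

    A fixation is a maximal run of samples whose dispersion
    (max - min on each axis, summed) stays below `max_dispersion`
    and that lasts at least `min_duration` seconds.
    Returns a list of (t_start, t_end, centroid_x, centroid_y).
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start + 1
        # Grow the window while dispersion stays under the threshold.
        while end < len(samples):
            window = samples[start:end + 1]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        window = samples[start:end]
        duration = window[-1][0] - window[0][0]
        if duration >= min_duration:
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            start = end  # continue after the detected fixation
        else:
            start += 1  # too short: slide the window forward
    return fixations
```

The per-module READMEs describe the actual inputs and outputs of the repository's extraction code; this fragment only conveys the flavor of the raw-samples-to-fixations step.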

Please note that due to IRB requirements we are unable to release the study data. Instead, we provide example data files, corresponding to the visual behavior of one of the authors, in my_data.

For more details on code usage, please refer to the separate README files in each module.

Citation

If you find this repo useful or the paper interesting, please consider citing it:

@misc{qu2025will,
title={Will You Be Aware? {Eye} Tracking-Based Modeling of Situational Awareness in Augmented Reality},
author={Zhehan Qu and Tianyi Hu and Christian Fronk and Maria Gorlatova},
year={2025},
eprint={2508.05025},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.05025}
}

Acknowledgments

We thank Prof. David Carlson and Dr. Amy McDonnell for helpful discussions regarding the work and all participants for contributing to the study. This work was supported in part by NSF grants CSR-2312760, CNS-2112562, and IIS-2231975, NSF CAREER Award IIS-2046072, NSF NAIAD Award 2332744, a Cisco Research Award, a Meta Research Award, Defense Advanced Research Projects Agency Young Faculty Award HR0011-24-1-0001, and the Army Research Laboratory under Cooperative Agreement Number W911NF-23-2-0224.
