
TopoOR: A Unified Topological Scene Representation for the Operating Room

arXiv cs.CV / 3/11/2026


Key Points

  • TopoOR introduces a unified topological scene representation that models the complex multimodal environment of surgical operating rooms by preserving higher-order relationships beyond traditional pairwise interactions.
  • This new approach lifts interactions between entities into higher-order topological cells, allowing it to maintain the precise multimodal structure needed for safety-critical reasoning in the OR.
  • A novel higher-order attention mechanism is proposed that preserves manifold structures and modality-specific features during hierarchical relational attention, avoiding the loss of detail common in existing graph or joint latent representations.
  • Experiments show that TopoOR outperforms state-of-the-art graph and large language model (LLM)-based methods in tasks such as sterility breach detection, robot phase prediction, and next-action anticipation in surgical scenarios.
  • By subsuming traditional scene graphs, TopoOR offers greater expressivity and better captures the complex dynamics and multimodality of operating room environments compared to existing paradigms.
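The paper's exact construction is not given here, but the core idea of "lifting" pairwise relations into higher-order cells can be illustrated with a toy combinatorial complex. The sketch below is purely illustrative (the entity names and rank capping are assumptions, not the authors' implementation): a triadic group interaction is stored as a single rank-2 cell, whereas a dyadic scene graph can only record its pairwise projection and loses the fact that the three entities participated in one joint event.

```python
from itertools import combinations

class ORComplex:
    """Toy combinatorial complex: rank-0 cells are entities, rank-1 cells
    are pairwise relations, rank-2 cells stand in for group interactions.
    Illustrative only -- not the TopoOR construction from the paper."""
    def __init__(self):
        self.cells = {0: set(), 1: set(), 2: set()}

    def add_entity(self, name):
        self.cells[0].add(frozenset([name]))

    def add_relation(self, *entities):
        cell = frozenset(entities)
        rank = min(len(cell) - 1, 2)  # cap at rank 2 for this illustration
        self.cells[rank].add(cell)

    def pairwise_projection(self):
        """Flatten every higher-order cell into pairwise edges,
        the way a strictly dyadic scene graph would represent it."""
        edges = set(self.cells[1])
        for cell in self.cells[2]:
            edges.update(frozenset(p) for p in combinations(sorted(cell), 2))
        return edges

scene = ORComplex()
for e in ["surgeon", "scalpel", "patient", "nurse"]:
    scene.add_entity(e)
scene.add_relation("surgeon", "scalpel")             # dyadic relation
scene.add_relation("surgeon", "scalpel", "patient")  # triadic group event

# The triadic cell survives as a single unit in the complex...
assert frozenset({"surgeon", "scalpel", "patient"}) in scene.cells[2]
# ...but the pairwise projection cannot distinguish one joint event
# from three independent pairwise relations.
print(len(scene.pairwise_projection()))  # → 3
```

The projection step is exactly the information loss the abstract attributes to pairwise message passing: after flattening, nothing records whether the three edges arose from one group interaction or three unrelated ones.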


arXiv:2603.09466 (cs)
[Submitted on 10 Mar 2026]

Authors: Tony Danjun Wang and 4 other authors
Abstract: Surgical Scene Graphs abstract the complexity of the surgical operating room (OR) into a structure of entities and their relations, but existing paradigms suffer from strictly dyadic structural limitations. Frameworks that rely predominantly on pairwise message passing or tokenized sequences flatten the manifold geometry inherent to relational structures and lose structure in the process. We introduce TopoOR, a new paradigm that models multimodal operating rooms as a higher-order structure, innately preserving both pairwise and group relationships. By lifting interactions between entities into higher-order topological cells, TopoOR natively models the complex dynamics and multimodality present in the OR. This topological representation subsumes traditional scene graphs, thereby offering strictly greater expressivity. We also propose a higher-order attention mechanism that explicitly preserves manifold structure and modality-specific features throughout hierarchical relational attention. In this way, unlike existing methods, we avoid collapsing 3D geometry, audio, and robot kinematics into a single joint latent representation, preserving the precise multimodal structure required for safety-critical reasoning. Extensive experiments demonstrate that our approach outperforms traditional graph and LLM-based baselines across sterility breach detection, robot phase prediction, and next-action anticipation.
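The abstract does not specify the attention mechanism's internals, but its stated design goal — keeping 3D geometry, audio, and robot kinematics out of a single joint latent — can be sketched minimally. In the hypothetical sketch below (all names, dimensions, and the modality set are assumptions for illustration), self-attention runs separately per modality stream and the outputs stay in per-modality feature spaces rather than being fused:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def per_modality_attention(feats):
    """Run scaled dot-product self-attention separately per modality,
    returning a dict of per-modality outputs instead of one fused latent.
    Illustrative sketch only -- not the paper's mechanism."""
    out = {}
    for name, X in feats.items():              # X: (tokens, dim)
        d = X.shape[-1]
        scores = softmax(X @ X.T / np.sqrt(d))  # attention within one stream
        out[name] = scores @ X
    return out

rng = np.random.default_rng(0)
feats = {
    "geometry_3d": rng.normal(size=(5, 8)),       # assumed token counts
    "audio": rng.normal(size=(3, 8)),
    "robot_kinematics": rng.normal(size=(4, 8)),
}
out = per_modality_attention(feats)

# Each modality keeps its own token count and feature space;
# nothing is concatenated or projected into a shared latent.
for name, X in feats.items():
    assert out[name].shape == X.shape
```

The design point this sketch makes concrete: because no fusion step exists, a downstream safety check (e.g., a sterility rule over 3D geometry) can still read the geometry stream directly, without decoding it back out of a mixed representation.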
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09466 [cs.CV]
  (or arXiv:2603.09466v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09466

Submission history

From: Tony Danjun Wang [view email]
[v1] Tue, 10 Mar 2026 10:19:42 UTC (4,192 KB)