TopoOR: A Unified Topological Scene Representation for the Operating Room

arXiv cs.CV / March 11, 2026


Key Points

  • TopoOR introduces a unified topological scene representation that models the complex multimodal operating-room environment while preserving higher-order relationships beyond traditional pairwise interactions.
  • By lifting interactions between entities into higher-order topological cells, this new approach retains the precise multimodal structure required for safety-critical reasoning in the operating room.
  • A new higher-order attention mechanism is proposed that preserves manifold structure and modality-specific features throughout hierarchical relational attention, avoiding the loss of detail common to existing graph and joint latent representations.
  • Experiments show TopoOR outperforming state-of-the-art graph- and large language model (LLM)-based methods on surgical scenarios including sterility breach detection, robot phase prediction, and next-action anticipation.
  • By subsuming traditional scene graphs, TopoOR is strictly more expressive and captures the complex dynamics and multimodality of operating-room environments better than existing paradigms.
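The lifting idea in the bullets above can be illustrated with a minimal sketch. This is not the paper's released code; the entity names, the rank-based cell dictionary, and the `lift_to_cells` helper are assumptions chosen only to show how group interactions become higher-order cells while every pairwise relation of the original scene graph is retained.

```python
from itertools import combinations

# Hypothetical sketch of lifting a dyadic surgical scene graph into a
# higher-order structure: entities are rank-0 cells, pairwise relations
# rank-1 cells, and group interactions (e.g. surgeon-instrument-patient)
# are lifted to rank-2 cells. All names are illustrative, not TopoOR's.

def lift_to_cells(entities, pairwise, groups):
    """Return cells of each rank as sets of frozensets."""
    cells = {
        0: {frozenset([e]) for e in entities},
        1: {frozenset(p) for p in pairwise},
        2: {frozenset(g) for g in groups},
    }
    # Closure: every pair inside a group interaction is also a 1-cell,
    # so the lifted structure strictly subsumes the dyadic scene graph.
    for g in groups:
        for pair in combinations(sorted(g), 2):
            cells[1].add(frozenset(pair))
    return cells

entities = ["surgeon", "scalpel", "patient", "nurse"]
pairwise = [("surgeon", "scalpel")]
groups = [("surgeon", "scalpel", "patient")]

cells = lift_to_cells(entities, pairwise, groups)
```

Because the closure step adds all pairs inside each group, any query answerable on the original scene graph remains answerable on the lifted structure, which is the sense in which the representation is strictly more expressive.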


arXiv:2603.09466 (cs)
[Submitted on 10 Mar 2026]

Title:TopoOR: A Unified Topological Scene Representation for the Operating Room

Abstract: Surgical Scene Graphs abstract the complexity of surgical operating rooms (OR) into a structure of entities and their relations, but existing paradigms suffer from strictly dyadic structural limitations. Frameworks that predominantly rely on pairwise message passing or tokenized sequences flatten the manifold geometry inherent to relational structures and lose structure in the process. We introduce TopoOR, a new paradigm that models multimodal operating rooms as a higher-order structure, innately preserving pairwise and group relationships. By lifting interactions between entities into higher-order topological cells, TopoOR natively models complex dynamics and multimodality present in the OR. This topological representation subsumes traditional scene graphs, thereby offering strictly greater expressivity. We also propose a higher-order attention mechanism that explicitly preserves manifold structure and modality-specific features throughout hierarchical relational attention. In this way, we circumvent combining 3D geometry, audio, and robot kinematics into a single joint latent representation, preserving the precise multimodal structure required for safety-critical reasoning, unlike existing methods. Extensive experiments demonstrate that our approach outperforms traditional graph and LLM-based baselines across sterility breach detection, robot phase prediction, and next-action anticipation.
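The abstract's contrast between a single joint latent and modality-preserving attention can be sketched as follows. This is an illustrative toy, not the paper's mechanism: the feature dimensions, modality names, and per-modality self-attention are assumptions meant only to show attention applied within each modality's own channel rather than over one fused embedding.

```python
import numpy as np

# Illustrative sketch (not TopoOR's released code): attention over cell
# features where 3D geometry, audio, and kinematics each keep their own
# channel, instead of being fused into a single joint latent.

rng = np.random.default_rng(0)
n_cells, d = 5, 8  # five higher-order cells, 8-dim features per modality
modalities = {m: rng.standard_normal((n_cells, d))
              for m in ("geometry", "audio", "kinematics")}

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def per_modality_attention(feats):
    """Self-attention run separately inside each modality, so
    modality-specific structure is never collapsed into one latent."""
    out = {}
    for name, x in feats.items():
        scores = x @ x.T / np.sqrt(x.shape[1])  # (n_cells, n_cells)
        out[name] = softmax(scores, axis=-1) @ x
    return out

attended = per_modality_attention(modalities)
# Each modality retains its own feature block after attention.
```

The design point is that downstream safety-critical checks (e.g. a sterility rule over geometry) can still read the geometric channel directly, which would be lost if all modalities were averaged into one vector before attention.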
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09466 [cs.CV]
  (or arXiv:2603.09466v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09466

Submission history

From: Tony Danjun Wang [view email]
[v1] Tue, 10 Mar 2026 10:19:42 UTC (4,192 KB)