AI Navigate

More than the Sum: Panorama-Language Models for Adverse Omni-Scenes

arXiv cs.CV / 3/11/2026


Key Points

  • The paper introduces Panorama-Language Modeling (PLM), a unified approach for 360-degree vision-language reasoning that surpasses the capabilities of traditional pinhole image-based models.
  • PLM leverages a novel panoramic sparse attention module that lets existing vision-language models process equirectangular panoramas without retraining from scratch (a minimal illustration of the wrap-around attention idea follows this list).
  • The authors release PanoVQA, a large-scale panoramic visual question answering dataset focusing on adverse omni-scenes, such as object occlusions and driving accidents, to enable robust contextual understanding.
  • Experiments show that PLM achieves superior holistic spatial and contextual reasoning in complex omni-scene environments, outperforming conventional narrow field-of-view approaches.
  • The project and dataset are publicly available, promoting further research and development in panoramic vision-language understanding.
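The abstract does not spell out the panoramic sparse attention design, so the snippet below is only a minimal, hypothetical sketch of one ingredient such a module would plausibly need: a sparse attention mask over an equirectangular token grid whose horizontal neighbourhoods wrap around the 360-degree seam. The grid size, window radius, and wrap rule here are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: a wrap-aware sparse attention mask for an equirectangular
# token grid. Not the paper's module; grid size, radius, and wrap rule are assumed.
import torch

def wraparound_attention_mask(h: int, w: int, radius: int = 2) -> torch.Tensor:
    """Boolean mask of shape (h*w, h*w): token i may attend to token j if j lies
    within `radius` rows/columns, with columns wrapping across the 360-degree seam."""
    rows = torch.arange(h).repeat_interleave(w)        # row index of each token
    cols = torch.arange(w).repeat(h)                   # column index of each token
    drow = (rows[:, None] - rows[None, :]).abs()       # vertical offsets
    dcol = (cols[:, None] - cols[None, :]).abs()
    dcol = torch.minimum(dcol, w - dcol)               # horizontal offsets with wrap
    return (drow <= radius) & (dcol <= radius)

# Example: on an 8x16 token grid, tokens at the left edge can attend across the seam.
mask = wraparound_attention_mask(8, 16, radius=2)
print(mask.shape, mask.float().mean().item())          # mask size and its density

A boolean mask like this can be handed to a standard attention call (for example as the `attn_mask` argument of `torch.nn.functional.scaled_dot_product_attention`), which is one plausible route to letting a pretrained pinhole VLM consume panoramas without retraining its weights.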

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09573 (cs)
[Submitted on 10 Mar 2026]

Title: More than the Sum: Panorama-Language Models for Adverse Omni-Scenes

Authors: Weijia Fan and 9 other authors
Abstract: Existing vision-language models (VLMs) are tailored for pinhole imagery, stitching multiple narrow field-of-view inputs to piece together a complete omni-scene understanding. Yet such multi-view perception overlooks the holistic spatial and contextual relationships that a single panorama inherently preserves. In this work, we introduce the Panorama-Language Modeling (PLM) paradigm, a unified $360^\circ$ vision-language reasoning framework that is more than the sum of its pinhole counterparts. In addition, we present PanoVQA, a large-scale panoramic VQA dataset covering adverse omni-scenes, enabling comprehensive reasoning under object occlusions and driving accidents. To establish a foundation for PLM, we develop a plug-and-play panoramic sparse attention module that allows existing pinhole-based VLMs to process equirectangular panoramas without retraining. Extensive experiments demonstrate that our PLM achieves superior robustness and holistic reasoning under challenging omni-scenes, yielding understanding greater than the sum of its narrow parts. Project page: this https URL.
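For context on the multi-view baseline the abstract critiques, the sketch below shows the standard equirectangular-to-perspective projection used to carve a narrow field-of-view "pinhole" crop out of a single panorama. This is generic computer-vision code, not the paper's pipeline; the 90-degree field of view and 224x224 output size are arbitrary illustrative choices.

import numpy as np

def equirect_to_pinhole(pano, fov_deg, yaw_deg, pitch_deg, out_h, out_w):
    """Sample one narrow field-of-view (pinhole) view from an equirectangular
    panorama using nearest-neighbour lookup."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    # Build a camera-space ray for every output pixel (x right, y down, z forward).
    xs = np.arange(out_w) - (out_w - 1) / 2
    ys = np.arange(out_h) - (out_h - 1) / 2
    x, y = np.meshgrid(xs, ys)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays by pitch (about x) and then yaw (about y).
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
    dirs = dirs @ (Ry @ Rx).T
    # Convert each ray to longitude/latitude, then to panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])         # in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))    # in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]

# Example: a synthetic 512x1024 panorama and one 90-degree front-facing crop.
pano = np.random.rand(512, 1024, 3)
crop = equirect_to_pinhole(pano, fov_deg=90, yaw_deg=0, pitch_deg=0, out_h=224, out_w=224)
print(crop.shape)   # (224, 224, 3)

The point of the PLM paradigm, as the abstract frames it, is to skip this carving step and reason over the full equirectangular image directly, so relationships that would otherwise be split across separate crops (for instance, an occluded object straddling two views) are preserved.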
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09573 [cs.CV]
  (or arXiv:2603.09573v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09573

Submission history

From: Weijia Fan
[v1] Tue, 10 Mar 2026 12:19:50 UTC (2,738 KB)