AI Navigate

GIIM: Graph-based Learning of Inter- and Intra-view Dependencies for Multi-view Medical Image Diagnosis

arXiv cs.CV / 3/11/2026

Ideas & Deep Analysis | Models & Research

Key Points

  • The paper introduces GIIM, a novel graph-based learning approach designed to model both inter-view (across different imaging views) and intra-view (within a single view) dependencies in multi-view medical image diagnosis; a rough illustrative sketch of this idea appears after this list.
  • GIIM addresses limitations of current CADx systems, which often overlook complex relationships among abnormalities and struggle with incomplete clinical imaging data; it improves both robustness and predictive accuracy.
  • The framework is validated on multiple imaging modalities such as CT, MRI, and mammography, demonstrating superior diagnostic performance compared to existing techniques.
  • GIIM’s approach reframes diagnostic challenges as relationship modeling problems, enabling a more nuanced and clinically relevant understanding of lesion dynamics.
  • By effectively handling missing data and capturing dynamic changes across views, GIIM sets a foundation for more reliable and comprehensive computer-aided diagnosis systems in medical imaging.
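
The paper's code and architectural details are not reproduced here, so the toy PyTorch sketch below only illustrates the general idea behind graph-based intra- and inter-view dependency modeling: per-region features become graph nodes, one adjacency pattern connects abnormalities within a view, another connects regions across views, and masked attention stands in for message passing. Every class name, shape, and the block-wise adjacency construction is an assumption made for illustration, not the authors' actual design.

```python
# Hypothetical sketch of GIIM-style dependency modeling (NOT the paper's code):
# region features are graph nodes, intra-view edges connect abnormalities
# within one view, inter-view edges connect regions across views, and a
# masked self-attention layer performs the message passing.
import torch
import torch.nn as nn


class DependencyGraphLayer(nn.Module):
    """One round of masked self-attention over region nodes.

    The boolean adjacency decides which dependencies are allowed:
    intra-view edges, inter-view edges, or both.
    """

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim); adj: (num_nodes, num_nodes) bool.
        # Positions where adj is False are masked out of attention.
        attn_mask = ~adj  # True = "do not attend"
        out, _ = self.attn(nodes, nodes, nodes, attn_mask=attn_mask)
        return self.norm(nodes + out)


def build_adjacency(regions_per_view):
    """Build boolean intra-view and inter-view adjacency matrices.

    Intra-view: fully connect regions within the same view.
    Inter-view: connect every region to all regions of the other views
    (a deliberately crude stand-in for learned cross-view correspondence).
    """
    n = sum(regions_per_view)
    intra = torch.zeros(n, n, dtype=torch.bool)
    start = 0
    for k in regions_per_view:
        intra[start:start + k, start:start + k] = True
        start += k
    inter = ~intra
    # Add self-loops to the inter-view graph so no node has every edge masked.
    inter |= torch.eye(n, dtype=torch.bool)
    return intra, inter


if __name__ == "__main__":
    dim = 64
    regions_per_view = [3, 4]           # e.g. 3 ROIs in view A, 4 in view B
    nodes = torch.randn(1, sum(regions_per_view), dim)
    intra, inter = build_adjacency(regions_per_view)

    intra_layer = DependencyGraphLayer(dim)
    inter_layer = DependencyGraphLayer(dim)
    nodes = intra_layer(nodes, intra)   # relationships within each view
    nodes = inter_layer(nodes, inter)   # dynamics across views

    classifier = nn.Linear(dim, 2)      # pooled diagnosis head
    logits = classifier(nodes.mean(dim=1))
    print(logits.shape)                 # torch.Size([1, 2])
```

In this sketch the two layers are applied sequentially for simplicity; how GIIM actually combines intra- and inter-view reasoning is described in the paper itself.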

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09446 (cs)
[Submitted on 10 Mar 2026]

Title: GIIM: Graph-based Learning of Inter- and Intra-view Dependencies for Multi-view Medical Image Diagnosis

Authors: Tran Bao Sam and 5 other authors
Abstract: Computer-aided diagnosis (CADx) has become vital in medical imaging, but automated systems often struggle to replicate the nuanced process of clinical interpretation. Expert diagnosis requires a comprehensive analysis of how abnormalities relate to each other across various views and time points, but current multi-view CADx methods frequently overlook these complex dependencies. Specifically, they fail to model the crucial relationships within a single view and the dynamic changes lesions exhibit across different views. This limitation, combined with the common challenge of incomplete data, greatly reduces their predictive reliability. To address these gaps, we reframe the diagnostic task as one of relationship modeling and propose GIIM, a novel graph-based approach. Our framework is uniquely designed to simultaneously capture both critical intra-view dependencies between abnormalities and inter-view dynamics. Furthermore, it ensures diagnostic robustness by incorporating specific techniques to effectively handle missing data, a common clinical issue. We demonstrate the generality of this approach through extensive evaluations on diverse imaging modalities, including CT, MRI, and mammography. The results confirm that our GIIM model significantly enhances diagnostic accuracy and robustness over existing methods, establishing a more effective framework for future CADx systems.
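
The abstract also highlights robustness to incomplete data. One simple way such robustness is often achieved, shown here purely as an assumed illustration rather than the paper's actual mechanism, is to mask out the nodes of any missing view before pooling the graph into a patient-level representation:

```python
# Hypothetical illustration of missing-view handling (not the paper's method):
# if a patient lacks one imaging view, its region nodes are ignored when
# pooling node features into a single patient-level vector.
import torch


def masked_mean_pool(nodes: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
    """Average node features, ignoring nodes from missing views.

    nodes:   (batch, num_nodes, dim) region features
    present: (batch, num_nodes) bool, False where the view is unavailable
    """
    weights = present.float().unsqueeze(-1)        # (B, N, 1)
    total = (nodes * weights).sum(dim=1)           # (B, D)
    count = weights.sum(dim=1).clamp(min=1.0)      # avoid division by zero
    return total / count


nodes = torch.randn(2, 7, 64)
present = torch.ones(2, 7, dtype=torch.bool)
present[1, 3:] = False      # second patient is missing the second view's regions
pooled = masked_mean_pool(nodes, present)
print(pooled.shape)         # torch.Size([2, 64])
```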
Subjects: Computer Vision and Pattern Recognition (cs.CV)
MSC classes: 68T07
ACM classes: I.2.10
Cite as: arXiv:2603.09446 [cs.CV]
  (or arXiv:2603.09446v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09446

Submission history

From: Sam Tran Bao
[v1] Tue, 10 Mar 2026 09:57:57 UTC (635 KB)