
Transformer-Based Multi-Region Segmentation and Radiomic Analysis of HR-pQCT Imaging

arXiv cs.CV / 3/11/2026


Key Points

  • The study introduces a fully automated framework using a transformer-based SegFormer model to segment multiple regions in HR-pQCT images, including cortical and trabecular bone along with surrounding soft tissues.
  • This approach is novel as it leverages transformer architecture for multi-region bone and soft tissue segmentation, achieving a high mean F1 score of 95.36%.
  • Radiomic features extracted from segmented soft tissues, especially myotendinous regions, provided better osteoporosis classification accuracy and AUROC compared to traditional bone-based methods.
  • The framework improved patient-level osteoporosis detection: replacing standard biological, DXA, and HR-pQCT parameters with soft-tissue radiomics raised AUROC from 0.792 to 0.875.
  • These findings highlight the clinical value of analyzing both bone and adjacent soft tissues in HR-pQCT imaging for more comprehensive osteoporosis diagnosis.
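
The mean F1 score cited above is an average of per-region F1 (equivalently Dice) scores over the segmented classes. A minimal sketch of that evaluation on integer label maps, assuming a simple 0..N class encoding (the four class names below are illustrative stand-ins for the regions named in the paper, not its actual label scheme):

```python
import numpy as np

def per_class_f1(pred, ref, n_classes):
    """Per-class F1 (Dice) between two integer label maps of equal shape."""
    scores = []
    for c in range(n_classes):
        p = pred == c
        r = ref == c
        tp = np.logical_and(p, r).sum()
        denom = p.sum() + r.sum()
        scores.append(2 * tp / denom if denom else 1.0)
    return scores

# Toy 2D label maps with 4 illustrative classes:
# 0=background, 1=cortical bone, 2=trabecular bone, 3=soft tissue
ref = np.array([[0, 1, 1, 2],
                [0, 1, 2, 2],
                [3, 3, 2, 2],
                [3, 3, 3, 0]])
pred = ref.copy()
pred[0, 3] = 1          # one voxel mislabeled: trabecular -> cortical

f1 = per_class_f1(pred, ref, 4)
mean_f1 = float(np.mean(f1))
```

In practice the same computation would run slice-by-slice (or volume-by-volume) over the HR-pQCT stacks and be averaged across the test set.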

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09137 (cs)
[Submitted on 10 Mar 2026]

Title: Transformer-Based Multi-Region Segmentation and Radiomic Analysis of HR-pQCT Imaging

By Mohseu Rashid Subah and 5 other authors
Abstract: Osteoporosis is a skeletal disease typically diagnosed using dual-energy X-ray absorptiometry (DXA), which quantifies areal bone mineral density but overlooks bone microarchitecture and surrounding soft tissues. High-resolution peripheral quantitative computed tomography (HR-pQCT) enables three-dimensional microstructural imaging with minimal radiation. However, current analysis pipelines largely focus on mineralized bone compartments, leaving much of the acquired image data underutilized. We introduce a fully automated framework for binary osteoporosis classification using radiomics features extracted from anatomically segmented HR-pQCT images. To our knowledge, this work is the first to leverage a transformer-based segmentation architecture, i.e., the SegFormer, for fully automated multi-region HR-pQCT analysis. The SegFormer model simultaneously delineated the cortical and trabecular bone of the tibia and fibula along with surrounding soft tissues and achieved a mean F1 score of 95.36%. Soft tissues were further subdivided into skin, myotendinous, and adipose regions through post-processing. From each region, 939 radiomic features were extracted and dimensionally reduced to train six machine learning classifiers on an independent dataset comprising 20,496 images from 122 HR-pQCT scans. The best image-level performance was achieved using myotendinous tissue features, yielding an accuracy of 80.08% and an area under the receiver operating characteristic curve (AUROC) of 0.85, outperforming bone-based models. At the patient level, replacing standard biological, DXA, and HR-pQCT parameters with soft tissue radiomics improved AUROC from 0.792 to 0.875. These findings demonstrate that automated, multi-region HR-pQCT segmentation enables the extraction of clinically informative signals beyond bone alone, highlighting the importance of integrated tissue assessment for osteoporosis detection.
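
The classification stage the abstract describes (939 high-dimensional radiomic features per region, dimensionality reduction, then a supervised classifier evaluated by AUROC) can be sketched end-to-end. This is a hedged illustration, not the paper's implementation: the reducer (PCA), the classifier (logistic regression), and the data (synthetic features with an injected class signal) are all placeholder assumptions; only the 939-feature dimensionality is taken from the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for radiomic features: 400 images x 939 features,
# with a class-dependent shift injected into the first 50 features.
n, d = 400, 939
y = rng.integers(0, 2, size=n)          # 0 = control, 1 = osteoporotic
X = rng.normal(size=(n, d))
X[:, :50] += y[:, None] * 1.5

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize -> reduce 939 features to 20 components -> linear classifier.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

At the patient level the paper pools image-level evidence across a scan; a common (though here assumed) pooling rule is averaging the per-image probabilities within each patient before computing patient-level AUROC.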
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09137 [cs.CV]
  (or arXiv:2603.09137v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09137

Submission history

From: Mohseu Rashid Subah
[v1] Tue, 10 Mar 2026 03:22:13 UTC (13,069 KB)