Ordinal Semantic Segmentation Applied to Medical and Odontological Images

arXiv cs.CV · March 31, 2026


Key Points

  • The paper studies semantic segmentation losses that explicitly model ordinal relationships among class labels to improve semantic consistency compared with standard deep learning approaches.
  • It proposes and evaluates a taxonomy of ordinal-aware losses, including unimodal, quasi-unimodal (relaxed ordinal constraints), and spatial losses that enforce consistency between neighboring pixels.
  • The work adapts loss functions from ordinal classification to ordinal semantic segmentation, specifically testing EXP_MSE, QUL, and the spatial Contact Surface loss based on a Signed Distance Function (CSSDF).
  • Experiments on medical and odontological images indicate improved robustness, better generalization, and stronger anatomical consistency, suggesting the ordinal structure of classes carries useful domain knowledge.
  • The study is positioned as an arXiv preprint, advancing research rather than reporting a specific deployed product or system release.
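To make the "unimodal" idea above concrete, here is a minimal sketch of one common way to encode ordinality in the target distribution: placing the mode of a Binomial distribution at the true class index, then regressing the predicted probabilities toward it with a mean-squared-error term. This is an illustrative stand-in, not necessarily the paper's exact EXP_MSE formulation; the function names are hypothetical.

```python
import math

def binomial_unimodal_target(k_true, num_classes):
    """Build a unimodal probability vector over ordered classes.

    Uses a Binomial(K-1, p) distribution whose mode sits at the true
    class index -- one common way (not necessarily the paper's EXP_MSE)
    to encode ordinal structure in the target distribution.
    """
    p = k_true / max(num_classes - 1, 1)
    # Clamp p away from 0/1 so every class keeps nonzero mass.
    p = min(max(p, 1e-6), 1 - 1e-6)
    n = num_classes - 1
    probs = [math.comb(n, k) * p**k * (1 - p)**(n - k)
             for k in range(num_classes)]
    s = sum(probs)
    return [q / s for q in probs]

def mse_to_unimodal_target(pred_probs, k_true):
    """Mean squared error between a predicted distribution and the
    unimodal target -- a stand-in for an ordinal-aware regression loss."""
    target = binomial_unimodal_target(k_true, len(pred_probs))
    return sum((p - t) ** 2 for p, t in zip(pred_probs, target)) / len(pred_probs)
```

For a 4-class problem with true class 2, the target peaks at index 2 and decays monotonically toward indices 0 and 3, so a prediction that puts mass on a distant class is penalized more than one that errs by a single ordinal step.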

Abstract

Semantic segmentation consists of assigning a semantic label to each pixel according to predefined classes. This process facilitates the understanding of object appearance and spatial relationships, playing an important role in the global interpretation of image content. Although modern deep learning approaches achieve high accuracy, they often ignore ordinal relationships among classes, which may encode important domain knowledge for scene interpretation. In this work, loss functions that incorporate ordinal relationships into deep neural networks are investigated to promote greater semantic consistency in semantic segmentation tasks. These loss functions are categorized as unimodal, quasi-unimodal, and spatial. Unimodal losses constrain the predicted probability distribution according to the class ordering, while quasi-unimodal losses relax this constraint by allowing small variations while preserving ordinal coherence. Spatial losses penalize semantic inconsistencies between neighboring pixels, encouraging smoother transitions in the image space. In particular, this study adapts loss functions originally proposed for ordinal classification to ordinal semantic segmentation. Among them, the Expanded Mean Squared Error (EXP_MSE), the Quasi-Unimodal Loss (QUL), and the spatial Contact Surface loss using a Signed Distance Function (CSSDF) are investigated. These approaches have shown promising results in medical imaging, improving robustness, generalization, and anatomical consistency.
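The spatial losses described above penalize semantic inconsistencies between neighboring pixels. A toy way to see what that means: in an ordinal label map, adjacent pixels should usually differ by at most one ordinal step (e.g. background → tissue → lesion), so transitions that skip a level can be counted as violations. The sketch below is an illustration of that neighbor-consistency idea, not the paper's CSSDF formulation, and the function name is hypothetical.

```python
def ordinal_jump_penalty(labels):
    """Count 4-neighborhood transitions that skip an ordinal level.

    `labels` is a 2D grid (list of lists) of integer class indices.
    A toy stand-in for spatial consistency losses: adjacent pixels may
    differ by at most one ordinal step without incurring a penalty.
    """
    h, w = len(labels), len(labels[0])
    penalty = 0
    for i in range(h):
        for j in range(w):
            # Vertical neighbor (pixel below).
            if i + 1 < h and abs(labels[i][j] - labels[i + 1][j]) > 1:
                penalty += 1
            # Horizontal neighbor (pixel to the right).
            if j + 1 < w and abs(labels[i][j] - labels[i][j + 1]) > 1:
                penalty += 1
    return penalty
```

A grid whose labels step smoothly (0 → 1 → 2) incurs no penalty, while a map where class 2 touches class 0 directly is flagged at every such contact edge; a differentiable variant of this count is what a spatial loss term would feed back to the network during training.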