
BiCLIP: Domain Canonicalization via Structured Geometric Transformation

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis | Models & Research

Key Points

  • BiCLIP is a new framework designed to improve domain adaptation in vision-language models by leveraging structured geometric transformations between domains.
  • The approach hypothesizes that image features across different domains are related through a canonical geometric transformation recoverable with a small set of anchors, such as few-shot labeled samples.
  • BiCLIP applies a targeted transformation to multimodal features, resulting in enhanced cross-modal alignment with a simple, low-parameter method.
  • Extensive evaluations across 11 benchmarks show that BiCLIP achieves state-of-the-art results in few-shot classification and domain adaptation tasks.
  • The research also empirically verifies theoretical insights on the orthogonality and angular distribution of learned transformations, supporting the importance of structured alignment in domain adaptation.
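The summary states that the cross-domain transformation can be recovered from a small set of anchors, but does not spell out the estimator. A classical way to recover an orthogonal map from paired anchors is the orthogonal Procrustes solution; the sketch below is illustrative only (function and variable names are assumptions, not BiCLIP's actual API), assuming paired few-shot features from a source and a target domain.

```python
import numpy as np

def estimate_canonical_transform(src_anchors, tgt_anchors):
    """Estimate an orthogonal transform mapping source-domain features
    onto target-domain features (orthogonal Procrustes problem).

    src_anchors, tgt_anchors: (k, d) arrays of paired anchor features,
    e.g. features of the few-shot labeled samples in each domain.
    """
    # Cross-covariance between the paired anchor sets.
    m = src_anchors.T @ tgt_anchors
    # The SVD yields the orthogonal R minimizing ||src @ R - tgt||_F.
    u, _, vt = np.linalg.svd(m)
    return u @ vt

# Toy demo: recover a known orthogonal "domain shift" from 16 anchors.
rng = np.random.default_rng(0)
d = 8
q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # ground-truth shift
src = rng.normal(size=(16, d))
tgt = src @ q
r = estimate_canonical_transform(src, tgt)
print(np.allclose(src @ r, tgt, atol=1e-6))
```

With more anchors than feature dimensions and an exactly orthogonal shift, the recovery is exact up to numerical precision; in the few-shot setting described above, the anchors play exactly this pairing role.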

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.08942 (cs)
[Submitted on 9 Mar 2026]

Title: BiCLIP: Domain Canonicalization via Structured Geometric Transformation
Authors: Pranav Mantini, Shishir K. Shah
Abstract: Recent advances in vision-language models (VLMs) have demonstrated remarkable zero-shot capabilities, yet adapting these models to specialized domains remains a significant challenge. Building on recent theoretical insights suggesting that independently trained VLMs are related by a canonical transformation, we extend this understanding to the concept of domains. We hypothesize that image features across disparate domains are related by a canonicalized geometric transformation that can be recovered using a small set of anchors. Few-shot classification provides a natural setting for this alignment, as the limited labeled samples serve as the anchors required to estimate this transformation. Motivated by this hypothesis, we introduce BiCLIP, a framework that applies a targeted transformation to multimodal features to enhance cross-modal alignment. Our approach is characterized by its extreme simplicity and low parameter footprint. Extensive evaluations across 11 standard benchmarks, including EuroSAT, DTD, and FGVCAircraft, demonstrate that BiCLIP consistently achieves state-of-the-art results. Furthermore, we provide empirical verification of existing geometric findings by analyzing the orthogonality and angular distribution of the learned transformations, confirming that structured alignment is the key to robust domain adaptation. Code is available at this https URL
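The abstract mentions verifying orthogonality and the angular distribution of the learned transformations without detailing the diagnostics. The paper's exact checks are not given here; a minimal sketch of two standard diagnostics (names are illustrative assumptions) measures deviation from orthogonality as ||W^T W - I||_F and reads rotation angles off the eigenvalue phases of the transform.

```python
import numpy as np

def orthogonality_error(w):
    """Frobenius-norm deviation of a learned square transform W from
    orthogonality: ||W^T W - I||_F. Zero iff W is exactly orthogonal."""
    d = w.shape[1]
    return np.linalg.norm(w.T @ w - np.eye(d))

def rotation_angles(w):
    """Rotation angles (radians) of a (near-)orthogonal transform,
    taken from the phases of its complex eigenvalues."""
    eigvals = np.linalg.eigvals(w)
    return np.abs(np.angle(eigvals))

# Example: a pure 2-D rotation by 30 degrees.
theta = np.deg2rad(30.0)
w = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(orthogonality_error(w))          # ~0
print(np.rad2deg(rotation_angles(w)))  # both angles ~30
```

Applied to a learned transform, a small orthogonality error and a concentrated angle distribution would be consistent with the structured-alignment claim above.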
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2603.08942 [cs.CV]
  (or arXiv:2603.08942v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.08942

Submission history

From: Pranav Mantini [view email]
[v1] Mon, 9 Mar 2026 21:26:15 UTC (1,671 KB)