AI Navigate

BiCLIP: Domain Canonicalization via Structured Geometric Transformation

arXiv cs.AI / March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • BiCLIP is a new framework that aims to improve domain adaptation in vision-language models by exploiting structured geometric transformations between domains.
  • It hypothesizes that image features across disparate domains are related by a canonical geometric transformation that can be recovered from a small set of anchors (e.g., the few-shot labeled samples).
  • BiCLIP applies a targeted transformation to multimodal features, strengthening cross-modal alignment with a simple, low-parameter method.
  • In extensive evaluations across 11 benchmarks, including EuroSAT, DTD, and FGVCAircraft, BiCLIP achieves state-of-the-art results on few-shot classification and domain adaptation tasks.
  • The paper empirically validates theoretical findings on the orthogonality and angular distribution of the learned transformations, supporting the claim that structured alignment is key to domain adaptation.
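The paper's exact parameterization is not described in this summary, but the core hypothesis in the second bullet can be illustrated with a standard technique: recovering an orthogonal map between two feature spaces from a handful of paired anchors via orthogonal Procrustes. The function name and the toy setup below are hypothetical, not taken from BiCLIP itself; this is a minimal sketch of the general idea, assuming the cross-domain transformation is (near-)orthogonal.

```python
import numpy as np

def recover_canonical_transform(anchors_src, anchors_tgt):
    """Estimate an orthogonal map R that best aligns source anchor
    features to target anchor features (orthogonal Procrustes)."""
    # Cross-covariance of the paired anchor features
    M = anchors_tgt.T @ anchors_src
    U, _, Vt = np.linalg.svd(M)
    # Closest orthogonal matrix: argmin_R ||anchors_src @ R.T - anchors_tgt||_F
    return U @ Vt

# Toy demo: features in two "domains" related by a hidden rotation
rng = np.random.default_rng(0)
d = 8
# Random orthogonal ground-truth transform via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
src = rng.normal(size=(16, d))  # 16 few-shot anchor features
tgt = src @ Q.T                 # the same samples seen in the target domain

R = recover_canonical_transform(src, tgt)
print(np.allclose(R, Q, atol=1e-8))                 # hidden map recovered
print(np.allclose(R.T @ R, np.eye(d), atol=1e-8))   # R is orthogonal
```

With at least `d` linearly independent anchors, the Procrustes solution recovers the hidden transform exactly in this noiseless toy setting; with real few-shot features it would only approximate it.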

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.08942 (cs)
[Submitted on 9 Mar 2026]

Title:BiCLIP: Domain Canonicalization via Structured Geometric Transformation

Authors: Pranav Mantini, Shishir K. Shah
Abstract:Recent advances in vision-language models (VLMs) have demonstrated remarkable zero-shot capabilities, yet adapting these models to specialized domains remains a significant challenge. Building on recent theoretical insights suggesting that independently trained VLMs are related by a canonical transformation, we extend this understanding to the concept of domains. We hypothesize that image features across disparate domains are related by a canonicalized geometric transformation that can be recovered using a small set of anchors. Few-shot classification provides a natural setting for this alignment, as the limited labeled samples serve as the anchors required to estimate this transformation. Motivated by this hypothesis, we introduce BiCLIP, a framework that applies a targeted transformation to multimodal features to enhance cross-modal alignment. Our approach is characterized by its extreme simplicity and low parameter footprint. Extensive evaluations across 11 standard benchmarks, including EuroSAT, DTD, and FGVCAircraft, demonstrate that BiCLIP consistently achieves state-of-the-art results. Furthermore, we provide empirical verification of existing geometric findings by analyzing the orthogonality and angular distribution of the learned transformations, confirming that structured alignment is the key to robust domain adaptation. Code is available at this https URL
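The abstract mentions analyzing the orthogonality and angular distribution of the learned transformations. The paper's own analysis procedure is not reproduced here; the sketch below shows one common way such checks are done for a learned linear map `W`: measuring how far `W.T @ W` is from the identity, and reading rotation angles off the phases of `W`'s eigenvalues. All names are illustrative assumptions.

```python
import numpy as np

def orthogonality_error(W):
    """Deviation of a linear map from orthogonality:
    zero iff W.T @ W equals the identity."""
    d = W.shape[1]
    return np.linalg.norm(W.T @ W - np.eye(d))

def rotation_angles(W):
    """Principal rotation angles of a (near-)orthogonal map W,
    taken from the phases of its eigenvalues."""
    eigvals = np.linalg.eigvals(W)
    return np.sort(np.abs(np.angle(eigvals)))

# Example: a pure 2-D rotation by 30 degrees is exactly orthogonal,
# and both eigenvalue phases equal the rotation angle (pi/6 ~ 0.5236)
theta = np.pi / 6
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(orthogonality_error(W) < 1e-12)
print(np.allclose(rotation_angles(W), theta))
```

For a learned map, a small orthogonality error and a concentrated angular distribution would support the structured-alignment hypothesis described in the abstract.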
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2603.08942 [cs.CV]
  (or arXiv:2603.08942v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.08942

Submission history

From: Pranav Mantini
[v1] Mon, 9 Mar 2026 21:26:15 UTC (1,671 KB)