AI Navigate

Harnessing the Power of Foundation Models for Accurate Material Classification

arXiv cs.CV / 3/19/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper proposes a framework that harnesses vision-language foundation models to address data scarcity in material classification.
  • It introduces a robust image generation and auto-labeling pipeline that creates diverse, high-quality, material-centric training data by fusing object semantics and material attributes in text prompts (see the prompt-fusion sketch after this list).
  • It adds a prior-incorporation strategy that distills information from VLMs, plus a joint fine-tuning method that optimizes a pre-trained vision model together with the VLM-derived priors, preserving generalizability while adapting to material-specific features.
  • Experiments on multiple datasets show significant improvements: the synthetic data effectively captures real-world material characteristics, and the VLM-derived priors boost final performance. The authors announce that the source code and dataset will be released.
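
The following is a minimal sketch of the prompt-fusion and auto-labeling idea described above. The object and material vocabularies, the attribute phrases, and the text-to-image step are illustrative assumptions; the summary does not specify the paper's actual lists or generation model.

```python
from itertools import product

# Hypothetical object and material vocabularies; the paper's actual
# lists are not given in this summary.
OBJECTS = ["chair", "mug", "jacket", "table lamp"]
MATERIALS = {
    "wood": "visible grain, matte finish",
    "metal": "specular highlights, brushed surface",
    "fabric": "soft weave, diffuse texture",
    "plastic": "smooth, slightly glossy surface",
}

def fused_prompts():
    """Fuse object semantics with material attributes into text prompts.

    Because the material name is embedded in each prompt, the class label
    for every generated image is known up front -- this is the
    auto-labeling idea: the synthetic set needs no human annotation.
    """
    for obj, (material, attrs) in product(OBJECTS, MATERIALS.items()):
        prompt = (
            f"a close-up photo of a {obj} made of {material}, "
            f"{attrs}, studio lighting, photorealistic"
        )
        yield prompt, material  # (generation prompt, material label)

if __name__ == "__main__":
    for prompt, label in fused_prompts():
        print(f"[{label:8s}] {prompt}")
        # A text-to-image model would be invoked here, e.g. with a
        # diffusers pipeline: image = pipe(prompt).images[0]
        # Each (image, label) pair then joins the synthetic training set.
```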

Abstract

Material classification has emerged as a critical task in computer vision and graphics, supporting the assignment of accurate material properties in a wide range of digital and real-world applications. While traditionally framed as an image classification task, the domain faces significant challenges due to the scarcity of annotated data, which limits the accuracy and generalizability of trained models. Recent advances in vision-language foundation models (VLMs) offer promising avenues to address these issues, yet existing solutions that leverage these models still yield unsatisfactory results on material recognition tasks. In this work, we propose a novel framework that effectively harnesses foundation models to overcome data limitations and enhance classification accuracy. Our method integrates two key innovations: (a) a robust image generation and auto-labeling pipeline that creates a diverse, high-quality training dataset of material-centric images and automatically assigns labels by fusing object semantics and material attributes in text prompts; (b) a prior-incorporation strategy that distills information from VLMs, combined with a joint fine-tuning method that optimizes a pre-trained vision foundation model alongside the VLM-derived priors, preserving broad generalizability while adapting to material-specific features. Extensive experiments demonstrate significant improvements on multiple datasets. We show that our synthetic dataset effectively captures the characteristics of real-world materials, and that integrating priors from vision-language models significantly enhances final performance. The source code and dataset will be released.
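
As one concrete reading of point (b), the sketch below fine-tunes a vision backbone on labeled material data while distilling a frozen VLM's class distribution through a temperature-scaled KL term. The paper's exact prior-incorporation objective is not given in this summary; the loss weights `alpha` and `tau`, and the assumption that the prior arrives as precomputed zero-shot logits (e.g., CLIP image-text similarities over material class names), are illustrative.

```python
import torch
import torch.nn.functional as F

def joint_finetune_step(backbone, head, optimizer, images, labels,
                        vlm_prior_logits, alpha=0.5, tau=2.0):
    """One optimization step mixing supervised loss with a VLM-derived prior.

    vlm_prior_logits: precomputed zero-shot logits from a frozen VLM
    (an assumption; the summary does not state how the priors are
    represented or injected).
    """
    optimizer.zero_grad()
    logits = head(backbone(images))        # jointly fine-tuned vision model
    ce = F.cross_entropy(logits, labels)   # supervised material labels
    # Distill the frozen VLM's softened class distribution into the student;
    # the tau**2 factor keeps gradients comparable across temperatures.
    prior = F.softmax(vlm_prior_logits / tau, dim=-1)
    kl = F.kl_div(F.log_softmax(logits / tau, dim=-1), prior,
                  reduction="batchmean") * tau * tau
    loss = ce + alpha * kl
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping the VLM frozen and blending its soft predictions with the hard labels is one standard way to preserve the foundation model's broad generalization while the backbone adapts to material-specific features.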