EffiMiniVLM: A Compact Dual-Encoder Regression Framework

arXiv cs.CV / 4/6/2026


Key Points

  • EffiMiniVLM is proposed as a compact dual-encoder vision-language regression framework for predicting product quality in cold-start settings, where user interaction history is unavailable and predictions must rely on product images and textual metadata.
  • The approach combines an EfficientNet-B0 image encoder and a MiniLM-based text encoder with a lightweight regression head, aiming to reduce computational cost compared with larger vision-language models.
  • A weighted Huber loss is introduced to improve training sample efficiency by emphasizing more reliable samples using rating-count information.
  • The model is trained on only 20% of the Amazon Reviews 2023 dataset, uses 27.7M parameters and 6.8 GFLOPs, and reports a CES score of 0.40 with the lowest resource cost in the benchmark.
  • The authors find strong scalability, noting that increasing training data to 40% can let EffiMiniVLM outperform other methods that rely on larger models and external datasets.

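The dual-encoder design described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the standard output dimensions of the named backbones (EfficientNet-B0 pools to 1280-d image features; MiniLM sentence encoders typically emit 384-d embeddings) and stands in a hypothetical two-layer MLP for the unspecified "lightweight regression head".

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed embedding sizes: EfficientNet-B0 pooled features are 1280-d,
# MiniLM sentence embeddings are 384-d (standard defaults for these backbones).
IMG_DIM, TXT_DIM, HID = 1280, 384, 256

# Hypothetical lightweight regression head:
# concat(image, text) -> linear -> ReLU -> linear -> scalar quality score.
W1 = rng.normal(0.0, 0.02, (IMG_DIM + TXT_DIM, HID))
b1 = np.zeros(HID)
W2 = rng.normal(0.0, 0.02, (HID, 1))
b2 = np.zeros(1)

def predict_quality(img_emb, txt_emb):
    """Fuse frozen-encoder embeddings and regress a quality score per item."""
    z = np.concatenate([img_emb, txt_emb], axis=-1)   # (batch, 1664)
    h = np.maximum(z @ W1 + b1, 0.0)                  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)                  # (batch,)
```

In a real pipeline the two embedding inputs would come from the pretrained image and text encoders; only the small head on top needs task-specific training, which is what keeps the parameter and FLOP budget low.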
Abstract

Predicting product quality from multimodal item information is critical in cold-start scenarios, where user interaction history is unavailable and predictions must rely on images and textual metadata. However, existing vision-language models typically depend on large architectures and/or extensive external datasets, resulting in high computational cost. To address this, we propose EffiMiniVLM, a compact dual-encoder vision-language regression framework that integrates an EfficientNet-B0 image encoder and a MiniLM-based text encoder with a lightweight regression head. To improve training sample efficiency, we introduce a weighted Huber loss that leverages rating counts to emphasize more reliable samples, yielding consistent performance gains. Trained using only 20% of the Amazon Reviews 2023 dataset, the proposed model contains 27.7M parameters and requires 6.8 GFLOPs, yet achieves a CES score of 0.40 with the lowest resource cost in the benchmark. Despite its small size, it remains competitive with significantly larger models, achieving comparable performance while being approximately 4x to 8x more resource-efficient than other top-5 methods and being the only approach that does not use external datasets. Further analysis shows that scaling the data to 40% alone allows our model to overtake other methods, which use larger models and datasets, highlighting strong scalability despite the model's compact design.
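The weighted Huber loss can be illustrated with a short sketch. The summary does not specify how rating counts are mapped to weights, so the log-based weighting below is a hypothetical choice; only the overall idea, down-weighting items whose target rating is supported by few reviews, follows the text.

```python
import numpy as np

def weighted_huber_loss(preds, targets, rating_counts, delta=1.0):
    """Huber loss with per-sample weights derived from rating counts.

    Items with more ratings have more reliable quality targets, so they
    receive larger weights. The log1p weighting is an assumption; the
    paper's exact scheme is not given in this summary.
    """
    w = np.log1p(rating_counts)
    w = w / w.sum()                      # normalize weights over the batch
    err = preds - targets
    abs_err = np.abs(err)
    quad = 0.5 * err**2                  # quadratic region, |err| <= delta
    lin = delta * (abs_err - 0.5 * delta)  # linear region, |err| > delta
    per_sample = np.where(abs_err <= delta, quad, lin)
    return float(np.sum(w * per_sample))
```

With equal rating counts this reduces to the ordinary mean Huber loss; as one sample's count grows, its error dominates the objective, which is the sample-efficiency mechanism the abstract describes.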