Distilled Large Language Model-Driven Dynamic Sparse Expert Activation Mechanism

arXiv cs.CV / 3/31/2026


Key Points

  • The paper proposes a Distilled LLM-driven Sparse Mixture-of-Experts (DS-MoE) framework to improve visual recognition under challenges like high inter-class similarity and extreme scale variation.
  • DS-MoE uses text-guided dynamic routing to align textual semantics with defect-specific visual patterns, adaptively activating task-relevant experts to reduce inter-class ambiguity.
  • A lightweight MobileSAM encoder enables real-time inference while retaining multi-scale defect details.
  • Experiments on PCB, aluminum foil, and mold defect datasets show DS-MoE outperforms existing pure vision models, including reported mAP@0.5:0.95 gains over YOLO variants.
  • Overall, the work combines cross-modal semantics (text + vision) with sparse expert activation to better generalize across diverse real-world defect data while staying within computational budgets.

Abstract

High inter-class similarity, extreme scale variation, and limited computational budgets hinder reliable visual recognition across diverse real-world data. Existing vision-centric and cross-modal approaches often rely on rigid fusion mechanisms and heavy annotation pipelines, leading to sub-optimal generalization. We propose the Distilled Large Language Model (LLM)-Driven Sparse Mixture-of-Experts (DS-MoE) framework, which integrates text-guided dynamic routing and lightweight multi-scale comprehension. The DS-MoE framework dynamically aligns textual semantics with defect-specific visual patterns through a sparse MoE architecture, where task-relevant experts are adaptively activated based on semantic relevance, resolving inter-class ambiguity. A lightweight MobileSAM encoder enables real-time inference while preserving multi-scale defect details. Extensive experiments on PCB, aluminum foil, and mold defect datasets demonstrate that our framework achieves superior performance compared to existing pure vision models. DS-MoE surpasses YOLOv8/YOLOX with gains of +13.9, +1.4, and +2.0 pp mAP@0.5:0.95 on BBMP, aluminum, and PCB, respectively, while also improving precision and recall.
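To make the "text-guided dynamic routing with sparse expert activation" idea concrete, here is a minimal NumPy sketch. It is not the paper's implementation: the gating function, expert parameterization (plain linear maps), and all names (`text_guided_sparse_moe`, `expert_keys`, `k`) are illustrative assumptions. The sketch shows the core mechanism the abstract describes: gate scores are computed from the similarity between a text embedding and per-expert key vectors, only the top-k semantically relevant experts are activated, and their outputs are combined with gate weights normalized over the active subset.

```python
import numpy as np

def text_guided_sparse_moe(visual_feat, text_emb, expert_keys, expert_weights, k=2):
    """Sketch of text-guided sparse expert activation (illustrative, not the paper's code).

    Gate scores come from the similarity between the text embedding and
    per-expert key vectors; only the top-k experts process the visual
    feature, and their outputs are mixed with softmax-normalized gates.
    """
    # Gating: dot-product similarity between text semantics and expert keys.
    scores = expert_keys @ text_emb                     # shape: (num_experts,)
    topk = np.argsort(scores)[-k:]                      # indices of the k most relevant experts
    gate = np.exp(scores[topk] - scores[topk].max())
    gate /= gate.sum()                                  # softmax over active experts only
    # Sparse activation: only the selected experts run (here, simple linear maps).
    out = np.zeros_like(visual_feat, dtype=float)
    for g, idx in zip(gate, topk):
        out += g * (expert_weights[idx] @ visual_feat)
    return out, topk

# Toy example: 8 experts over a 16-dim feature, only 2 activated per input.
rng = np.random.default_rng(0)
num_experts, dim = 8, 16
expert_keys = rng.normal(size=(num_experts, dim))
expert_weights = rng.normal(size=(num_experts, dim, dim))
visual_feat = rng.normal(size=dim)
text_emb = rng.normal(size=dim)

out, active = text_guided_sparse_moe(visual_feat, text_emb, expert_keys, expert_weights, k=2)
print(len(active))  # 2 of 8 experts activated
```

Because the gate depends on the text embedding rather than the visual feature alone, inputs with different defect semantics route to different expert subsets, which is how the framework aims to reduce inter-class ambiguity while keeping per-input compute sparse.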