Iterative Definition Refinement for Zero-Shot Classification via LLM-Based Semantic Prototype Optimization

arXiv cs.CV / 5/1/2026


Key Points

  • The paper targets zero-shot web content classification for web filtering, arguing that embedding-based methods are highly sensitive to how category definitions are specified and can misclassify when definitions are ambiguous.
  • It proposes a training-free, iterative framework that improves classification by progressively refining category definitions using an LLM as a feedback-driven optimizer, rather than updating the underlying model parameters.
  • Three refinement strategies are explored—example-guided, confusion-aware, and history-aware—each leveraging structured signals from misclassified instances to improve class descriptions.
  • The authors release a human-labeled benchmark with 10 URL categories (1,000 samples each) and evaluate the approach across 13 state-of-the-art embedding foundation models, finding consistent performance gains.
  • The results establish definition quality as a critical yet comparatively underexplored factor for embedding-based zero-shot systems; the dataset is released publicly for further research.
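The core mechanism behind the bullets above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy bag-of-words embedding stands in for the real embedding foundation models, and the category names and definitions are invented examples. The key idea it shows is that label assignment reduces to nearest-definition lookup in a shared semantic space, which is why the quality of each definition matters so much.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding
    # foundation model (the paper evaluates 13 such models).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(content, definitions):
    """Assign the category whose definition embedding is closest to the
    content embedding -- no labeled training data is needed."""
    c = embed(content)
    return max(definitions, key=lambda cat: cosine(c, embed(definitions[cat])))

# Hypothetical category definitions (not from the paper's benchmark).
definitions = {
    "gambling": "sites offering betting casino poker wagering games",
    "news": "sites publishing journalism articles headlines current events",
}
print(zero_shot_classify("live casino poker betting odds", definitions))
```

If two definitions share overlapping vocabulary, their embeddings sit close together and content near that overlap is misclassified, which is exactly the failure mode the refinement framework targets.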

Abstract

Web filtering systems rely on accurate web content classification to block cyber threats, prevent data exfiltration, and ensure compliance. However, classification is increasingly difficult due to the dynamic and rapidly evolving nature of the modern web. Embedding-based zero-shot approaches map content and category descriptions into a shared semantic space, enabling label assignment without labeled training data, but remain highly sensitive to definition quality. Poorly specified or ambiguous definitions create semantic overlap in the embedding space, leading to systematic misclassification. In this paper, we propose a training-free, adaptive iterative definition refinement framework that improves zero-shot web content classification by progressively optimizing category definitions rather than updating model parameters. Using LLMs as feedback-driven definition optimizers, we investigate three refinement strategies, namely example-guided, confusion-aware, and history-aware, each refining class descriptions using structured signals from misclassified instances. Furthermore, we introduce a human-labeled benchmark of 10 URL categories with 1,000 samples per class and evaluate across 13 state-of-the-art embedding foundation models. Results demonstrate that iterative definition refinement consistently improves classification performance across diverse architectures, establishing definition quality as a critical and underexplored factor in embedding-based systems. The dataset is available at https://github.com/naeemrehmat/B2MWT-10C.
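The refinement loop the abstract describes can be sketched as below. This is a hedged sketch under stated assumptions: `classify` is a keyword-overlap stand-in for embedding similarity, `refine_fn` is a stand-in for prompting an LLM (the real system would ask it to rewrite a definition so a misclassified example is no longer confused with the predicted class), and the sample texts and categories are invented. What the sketch preserves is the structure: probe, collect error signals, rewrite definitions, repeat — model parameters are never touched.

```python
def refine_definitions(definitions, classify, samples, refine_fn, rounds=3):
    """Training-free loop: classify probe samples, gather confusion
    signals from the errors, and let an LLM (refine_fn) rewrite the
    ambiguous definitions. The underlying model is never updated."""
    for _ in range(rounds):
        errors = [(text, gold, pred)
                  for text, gold in samples
                  if (pred := classify(text, definitions)) != gold]
        if not errors:
            break  # current definitions already separate the classes
        for text, gold, pred in errors:
            # The (gold, pred) pair is a confusion-aware signal: it tells
            # the optimizer which two definitions overlap semantically.
            definitions[gold] = refine_fn(definitions[gold], text, pred)
    return definitions

# --- toy stand-ins (hypothetical; the paper uses embedding models + an LLM) ---
def classify(text, defs):
    # Keyword-overlap classifier standing in for embedding similarity.
    words = set(text.lower().split())
    return max(defs, key=lambda c: len(words & set(defs[c].lower().split())))

def refine_fn(definition, misclassified_text, confused_with):
    # Stand-in mimicking the example-guided strategy: fold the hard
    # example's vocabulary into the definition. A real LLM would be
    # prompted to rewrite the definition instead.
    return definition + " " + misclassified_text.lower()

defs = {"malware": "malicious software downloads",
        "news": "journalism headlines articles"}
samples = [("breaking phishing scam headlines", "malware"),
           ("daily market headlines", "news")]
refined = refine_definitions(defs, classify, samples, refine_fn)
print(classify("breaking phishing scam headlines", refined))
```

In this toy run the first sample is initially pulled toward "news" by the shared word "headlines"; after refinement both probes are classified correctly, mirroring the paper's claim that better definitions, not better models, drive the gains.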