LINE: LLM-based Iterative Neuron Explanations for Vision Models
arXiv cs.CV / 4/10/2026
Key Points
- The paper introduces LINE, a training-free, iterative method for open-vocabulary labeling and explanation of neuron-level concepts in vision models.
- LINE operates in a strict black-box setting, running an LLM and a text-to-image generator in a closed loop in which concept proposals are guided by the neuron's activation history (see the sketch after this list).
- Experiments report state-of-the-art results across multiple architectures, including AUC gains of up to 0.18 on ImageNet and 0.05 on Places365.
- The method reportedly discovers concepts that large predefined vocabularies miss: on average, 29% of the concepts it finds are absent from those vocabularies.
- LINE also outputs a full generation history and visual explanations, enabling analyses such as polysemanticity evaluation and comparisons to gradient-dependent activation maximization approaches.
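
To make the closed loop concrete, here is a minimal Python sketch of the propose-generate-score cycle described above. All names and interfaces (`propose_concepts`, `generate_image`, `neuron_activation`, the fixed round count, and the argmax selection rule) are illustrative assumptions, not the paper's actual prompts, scoring, or stopping criterion.

```python
# Minimal sketch of a LINE-style closed loop (hypothetical interfaces).
from typing import Callable, List, Tuple

def line_explain(
    propose_concepts: Callable[[List[Tuple[str, float]]], List[str]],  # LLM: history -> candidate concepts
    generate_image: Callable[[str], object],                           # text-to-image generator
    neuron_activation: Callable[[object], float],                      # black-box probe of one neuron
    n_rounds: int = 10,
) -> Tuple[str, List[Tuple[str, float]]]:
    """Iteratively search for the concept that maximally activates a neuron.

    Returns the best concept label plus the full (concept, activation)
    generation history, which supports the visual explanations and
    polysemanticity analyses mentioned above.
    """
    history: List[Tuple[str, float]] = []
    for _ in range(n_rounds):
        # The LLM sees the activation history so far and proposes refined,
        # open-vocabulary candidates (no predefined concept vocabulary).
        for concept in propose_concepts(history):
            image = generate_image(concept)    # synthesize a probe image
            score = neuron_activation(image)   # record the neuron's response
            history.append((concept, score))
    best_concept, _ = max(history, key=lambda pair: pair[1])
    return best_concept, history

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end (assumptions, not real models).
    import random

    def propose_concepts(history: List[Tuple[str, float]]) -> List[str]:
        pool = ["dog", "dog face", "striped fur", "wheel"]
        return random.sample(pool, k=2)

    def generate_image(concept: str) -> str:
        return concept  # stand-in: the "image" is just its prompt

    def neuron_activation(image: str) -> float:
        return float(len(image))  # stand-in: longer prompt -> higher score

    label, hist = line_explain(propose_concepts, generate_image, neuron_activation, n_rounds=3)
    print("best concept:", label)
```

Because the loop only queries activations, this structure stays gradient-free, which is what lets it apply to models exposed purely as black boxes, unlike gradient-dependent activation maximization.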