CodePercept: Code-Grounded Visual STEM Perception for MLLMs
arXiv cs.CV / 3/12/2026
Key Points
- The paper demonstrates that scaling perception yields larger gains than scaling reasoning for STEM visual reasoning in MLLMs, identifying perception as the true bottleneck.
- It introduces ICC-1M, a dataset of one million image-caption-code triplets that treats executable code as the perceptual medium for grounding STEM visuals (a minimal triplet sketch follows this list).
- It proposes Code-Grounded Caption Generation, which uses executable code as the ground truth for image captions, reducing the hallucinations that arise in traditional knowledge distillation.
- It introduces STEM2Code-Eval, a benchmark that evaluates visual perception directly by requiring models to generate reconstruction code for a figure, rather than relying on problem-solving accuracy as a proxy (a scoring sketch also follows this list).
- The authors make the work reproducible by releasing code at https://github.com/TongkunGuan/Qwen-CodePercept, enabling further exploration of code-based perception for MLLMs.
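To make the "code as perceptual medium" idea concrete, here is a minimal sketch of what an ICC-1M-style image-caption-code triplet could look like. The matplotlib rendering, parameter names, and file names are illustrative assumptions, not the paper's actual data schema; the point is that the caption is derived from the same parameters that deterministically render the image, so it cannot hallucinate visual content.

```python
# Illustrative ICC-1M-style triplet (assumed format, not the paper's schema).
# The figure is rendered deterministically from code, so a caption built from
# the same parameters is grounded in the image by construction.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np

# Parameters that fully determine the visual: the perceptual ground truth.
amplitude, frequency = 2.0, 3.0

x = np.linspace(0, 2 * np.pi, 400)
fig, ax = plt.subplots()
ax.plot(x, amplitude * np.sin(frequency * x), color="tab:blue")
ax.set_xlabel("x (rad)")
ax.set_ylabel("y")
ax.set_title(f"y = {amplitude} sin({frequency}x)")
fig.savefig("triplet_image.png", dpi=150)

# Code-grounded caption: generated from the code's parameters, not from a
# lossy visual description pass, so it matches the rendered image exactly.
caption = (
    f"A sine curve y = {amplitude} sin({frequency}x) plotted in blue over "
    "x in [0, 2*pi], with x in radians on the horizontal axis."
)
print(caption)
```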
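Along the same lines, a STEM2Code-Eval-style check might execute the model's reconstruction code and compare the render against the reference image. The function name score_reconstruction, the pixel-difference metric, and the file conventions below are assumptions for illustration; the benchmark's actual harness and scoring metric may differ.

```python
# Sketch of a reconstruction-based perception score (assumed harness, not the
# benchmark's actual implementation). The model emits code that re-renders the
# figure; perception is scored by how closely the render matches the reference.
import subprocess
import numpy as np
from PIL import Image

def score_reconstruction(generated_code: str, reference_png: str) -> float:
    """Run model-emitted plotting code, then compare its render to the reference."""
    with open("recon.py", "w") as f:
        f.write(generated_code)  # assumed convention: the code saves 'recon.png'
    subprocess.run(["python", "recon.py"], check=True, timeout=30)

    ref = np.asarray(Image.open(reference_png).convert("L"), dtype=np.float32)
    rec = np.asarray(
        Image.open("recon.png").convert("L").resize(ref.shape[::-1]),
        dtype=np.float32,
    )
    # Normalized mean pixel agreement in [0, 1]; higher = closer reconstruction.
    return 1.0 - float(np.abs(ref - rec).mean()) / 255.0
```

Scoring the executed render, rather than the code text, keeps the metric tied to what the model actually perceived instead of to any one way of writing the plotting code.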