LLM as a Tool, Not an Agent: Code-Mined Tree Transformations for Neural Architecture Search
arXiv cs.LG / 4/21/2026
Key Points
- The paper introduces LLMasTool, a hierarchical, tree-based Neural Architecture Search (NAS) framework that uses LLMs as a supporting tool for model evolution rather than as fully agentic code generators.
- It automatically extracts reusable neural modules from arbitrary source code and represents candidate architectures as hierarchical trees, so evolution happens via reliable tree transformations instead of generating entire architectures as raw code.
- The approach applies diversity-guided Bayesian modeling at the coarse-planning level to improve exploration efficiency, while the LLM handles the remaining design degrees of freedom to produce executable architectures.
- By shifting from fully agentic LLM proposals toward algorithmic tree transformations, the method aims to reduce over-reliance on patterns biased toward the LLM’s training data.
- Experiments show improved NAS performance versus existing methods, with gains of 0.69 points on CIFAR-10, 1.83 on CIFAR-100, and 2.68 on ImageNet16-120.
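The core idea above can be illustrated with a minimal sketch: represent a candidate architecture as a hierarchical tree of modules and evolve it with a structure-preserving tree transformation, drawing replacements from a library of modules mined from source code. All names here (`ArchNode`, `MODULE_LIBRARY`, `mutate`) are hypothetical illustrations, not the paper's actual API.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ArchNode:
    """A node in an architecture tree: a composite (e.g. 'sequential',
    'residual') with children, or a leaf module mined from source code."""
    op: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Collect all leaf modules in depth-first order."""
        if not self.children:
            return [self]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out

    def render(self):
        """Serialize the tree as a readable expression."""
        if not self.children:
            return self.op
        return f"{self.op}({', '.join(c.render() for c in self.children)})"

# Hypothetical library of reusable modules extracted from arbitrary code.
MODULE_LIBRARY = ["conv3x3", "conv1x1", "depthwise_conv", "se_block", "mlp"]

def mutate(tree, rng):
    """Evolve via a tree transformation: swap one random leaf module for a
    different library module, guaranteeing the result is still a valid tree
    (unlike regenerating the whole architecture as raw code)."""
    leaf = rng.choice(tree.leaves())
    leaf.op = rng.choice([m for m in MODULE_LIBRARY if m != leaf.op])
    return tree

rng = random.Random(0)
parent = ArchNode("sequential", [
    ArchNode("residual", [ArchNode("conv3x3"), ArchNode("conv3x3")]),
    ArchNode("mlp"),
])
child = mutate(parent, rng)
print(child.render())
```

Because every mutation rewrites a subtree in place, each offspring is executable by construction; in the paper's framing, the LLM then only fills in the remaining design degrees of freedom (e.g. channel widths) rather than authoring whole architectures.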