Deep Research of Deep Research: From Transformer to Agent, From AI to AI for Science
arXiv cs.AI · March 31, 2026
Key Points
- The paper surveys how LLM capabilities have progressed from text-based question answering to multimodal interaction and finally to agentic tool use, enabling general-purpose agents.
- It frames “deep research” (DR) as a prototypical vertical application of agentic systems, aimed at assisting humans in problem discovery and potentially surpassing top human scientists.
- The authors propose a clear definition of deep research and unify two perspectives — industry “deep research” efforts and academia’s “AI for Science (AI4S)” — within a single developmental framework.
- It positions LLMs and Stable Diffusion as dual pillars of generative AI and outlines a roadmap from transformer-based methods toward agent-based architectures.
- The paper reviews AI4S progress across scientific disciplines, compares human–AI interaction paradigms and system architectures, highlights remaining challenges and fundamental research questions, and discusses the reciprocal growth of AI and science.