DRACULA: Hunting for the Actions Users Want Deep Research Agents to Execute
arXiv cs.CL / 4/28/2026
Key Points
- DRACULA is presented as a new dataset that captures user feedback not just on final deep research reports, but on intermediate actions proposed by deep research (DR) agents.
- In a five-week study, nineteen expert CS researchers selected preferred intermediate actions for DR-system outputs (e.g., adding a datasets section), producing 8,103 action preferences and 5,230 judgments about whether reports executed the chosen actions.
- The authors evaluate how predictable user-preferred actions are, finding that LLMs struggle initially but improve when given the user’s full selection history rather than partial/self-reported or extrapolated context signals.
- They also show that users' preferred actions vary for the same query because of unstated goals, and they use simulation results to design an online intervention that recommends new actions based on a user's prior interactions; in follow-up studies, users selected these recommended actions most often.
- The work open-sources the DRACULA study design, feedback, and simulation tasks to encourage future research on action feedback for long-horizon agents, highlighting “choosing which actions to execute” as a core challenge.
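The key points above describe predicting a user's preferred next action from their full selection history. As a minimal illustration of that idea (not the paper's actual method — the record schema, field names, and frequency-based heuristic below are all invented for this sketch), one could model each logged preference as a (topic, selected action) pair and predict the action the user most often chose for similar queries:

```python
from collections import Counter

# Hypothetical, simplified preference log: each entry pairs a query topic
# with the intermediate action the user selected. DRACULA's actual schema
# is richer; these names are illustrative only.
history = [
    {"topic": "datasets", "action": "add_datasets_section"},
    {"topic": "datasets", "action": "add_datasets_section"},
    {"topic": "related_work", "action": "expand_citations"},
    {"topic": "datasets", "action": "add_license_notes"},
]

def predict_action(history, topic):
    """Return the action this user most often selected for the given topic,
    falling back to their globally most frequent action for unseen topics."""
    on_topic = [h["action"] for h in history if h["topic"] == topic]
    pool = on_topic or [h["action"] for h in history]
    return Counter(pool).most_common(1)[0][0]

print(predict_action(history, "datasets"))      # → add_datasets_section
print(predict_action(history, "unseen_topic"))  # falls back to global favorite
```

Even this toy baseline shows why full selection history matters: with only partial or self-reported context, the on-topic counts that drive the prediction would be missing.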