From Research Question to Scientific Workflow: Leveraging Agentic AI for Science Automation

arXiv cs.AI / 4/25/2026


Key Points

  • The article argues that scientific workflow automation still falls short because researchers must manually translate research questions into workflow specifications.
  • It proposes an agentic architecture that uses an LLM for semantic intent extraction, deterministic generation for reproducible workflow DAGs, and “Skills” documents from domain experts to encode vocabulary mappings, constraints, and optimization strategies.
  • By confining LLM non-determinism to the intent-extraction step, the system ensures that identical intents produce identical workflows, improving reliability and reproducibility.
  • The approach is implemented and evaluated on the 1000 Genomes population genetics workflow and Hyperflow WMS on Kubernetes, showing large gains in intent accuracy and reductions in data transfer.
  • Reported results from an ablation study on 150 queries indicate intent accuracy increases from 44% to 83% with Skills, while LLM overhead stays under 15 seconds and cost under $0.001 per query.
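The core reliability claim above — confine the LLM to intent extraction, keep workflow generation deterministic — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the intent schema and task names are hypothetical, loosely modeled on a 1000 Genomes-style analysis.

```python
import hashlib
import json

def generate_workflow(intent: dict) -> dict:
    """Deterministically expand a structured intent into a workflow DAG.

    The intent schema and task names here are hypothetical, chosen only
    to illustrate the determinism property: no randomness, no LLM calls,
    and canonical ordering of inputs.
    """
    chroms = sorted(intent["chromosomes"])  # canonical order, not LLM order
    tasks = []
    for c in chroms:
        tasks.append({"name": f"individuals_chr{c}", "deps": []})
        tasks.append({"name": f"frequency_chr{c}",
                      "deps": [f"individuals_chr{c}"]})
    tasks.append({"name": "merge",
                  "deps": [f"frequency_chr{c}" for c in chroms]})
    return {"tasks": tasks}

# Two extractions of the same query may phrase the intent differently,
# but once the structured intent is identical, the DAG is byte-identical.
intent = {"analysis": "allele_frequency", "chromosomes": [22, 21]}
dag1 = generate_workflow(intent)
dag2 = generate_workflow(dict(intent))

h1 = hashlib.sha256(json.dumps(dag1, sort_keys=True).encode()).hexdigest()
h2 = hashlib.sha256(json.dumps(dag2, sort_keys=True).encode()).hexdigest()
assert h1 == h2  # identical intents -> identical workflows
```

Any non-determinism is thus isolated upstream, in the natural-language-to-intent step; everything downstream of the structured intent is reproducible.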

Abstract

Scientific workflow systems automate execution (scheduling, fault tolerance, resource management) but not the semantic translation that precedes it. Scientists still manually convert research questions into workflow specifications, a task requiring both domain knowledge and infrastructure expertise. We propose an agentic architecture that closes this gap through three layers: an LLM interprets natural language into structured intents (semantic layer); validated generators produce reproducible workflow DAGs (deterministic layer); and domain experts author "Skills": markdown documents encoding vocabulary mappings, parameter constraints, and optimization strategies (knowledge layer). This decomposition confines LLM non-determinism to intent extraction: identical intents always yield identical workflows. We implement and evaluate the architecture on the 1000 Genomes population genetics workflow and Hyperflow WMS running on Kubernetes. In an ablation study on 150 queries, Skills raise full-match intent accuracy from 44% to 83%; skill-driven deferred workflow generation reduces data transfer by 92%; and the end-to-end pipeline completes queries on Kubernetes with LLM overhead below 15 seconds and cost under $0.001 per query.
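The abstract describes Skills as expert-authored markdown documents with three kinds of content: vocabulary mappings, parameter constraints, and optimization strategies. A hypothetical fragment (all task names, parameter values, and rules below are illustrative, not taken from the paper) might look like:

```markdown
# Skill: 1000 Genomes allele-frequency analysis (illustrative example)

## Vocabulary mappings
- "allele frequency" -> task `frequency`
- "mutation overlap" -> task `mutation_overlap`

## Parameter constraints
- `chromosome`: integer in 1..22
- `population`: one of {AFR, AMR, EAS, EUR, SAS}

## Optimization strategies
- Defer workflow generation until the target population is known,
  so only the chromosome files that analysis needs are transferred.
```

Because Skills are plain documents rather than code, domain experts can extend the system's vocabulary and constraints without touching the generators.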