Synthetic Computers at Scale for Long-Horizon Productivity Simulation

arXiv cs.AI / 5/1/2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • The paper introduces “Synthetic Computers at Scale,” a methodology for generating user-specific computer environments with realistic folder hierarchies and content-rich artifacts such as documents, spreadsheets, and presentations (see the sketch after this list).
  • It runs long-horizon simulations on each synthetic computer with agents in two roles: one agent drafts month-long, multi-deliverable productivity objectives tailored to the synthetic user, while another performs the work by navigating the filesystem, coordinating with simulated collaborators, and producing professional outputs.
  • Preliminary experiments generate 1,000 synthetic computers and execute simulations that average over 2,000 turns, requiring more than 8 hours of agent runtime per run.
  • The authors report that the resulting experiential learning signals significantly improve agent performance on both in-domain and out-of-domain productivity evaluations, suggesting the approach generalizes beyond the training setup.
  • They argue the technique could scale to millions or even billions of synthetic user worlds, enabling broader coverage of professions and serving as a foundation for agent self-improvement and agentic reinforcement learning.
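
To make the setup concrete, below is a minimal Python sketch of how a persona-conditioned synthetic computer might be represented and populated. All names here (`SyntheticComputer`, `Artifact`, and the `llm.propose_*`/`llm.write_artifact` helpers) are hypothetical illustrations under assumed interfaces, not the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Artifact:
    """A content-rich file on the synthetic computer (document, sheet, or deck)."""
    path: str      # location inside the folder hierarchy
    kind: str      # e.g. "docx", "xlsx", "pptx"
    content: str   # generated body text, tables, or slide outline


@dataclass
class SyntheticComputer:
    """A persona-conditioned user environment: a folder tree plus its artifacts."""
    persona: str
    folders: list[str] = field(default_factory=list)
    artifacts: list[Artifact] = field(default_factory=list)


def build_synthetic_computer(persona: str, llm) -> SyntheticComputer:
    """Ask a generator model for a plausible folder tree for this persona,
    then fill each folder with artifacts whose content fits that user's work.
    The `llm.*` methods are hypothetical placeholders for model calls."""
    folders = llm.propose_folder_hierarchy(persona)
    artifacts = []
    for folder in folders:
        for spec in llm.propose_artifacts(persona, folder):
            artifacts.append(Artifact(
                path=f"{folder}/{spec['name']}",
                kind=spec["kind"],
                content=llm.write_artifact(persona, spec),
            ))
    return SyntheticComputer(persona=persona, folders=folders, artifacts=artifacts)
```

In this framing, each persona yields one filesystem-shaped world, which is what lets the authors argue that coverage grows directly with the number of available personas.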

Abstract

Realistic long-horizon productivity work is strongly conditioned on user-specific computer environments, where much of the work context is stored and organized through directory structures and content-rich artifacts. To scale synthetic data creation for such productivity scenarios, we introduce Synthetic Computers at Scale, a methodology for creating these environments with realistic folder hierarchies and content-rich artifacts (e.g., documents, spreadsheets, and presentations). Conditioned on each synthetic computer, we run long-horizon simulations: one agent creates productivity objectives that are specific to the computer's user and require multiple professional deliverables and about a month of human work; another agent then acts as that user and works within that computer -- for example, navigating the filesystem for grounding, coordinating with simulated collaborators, and producing professional artifacts -- until these objectives are completed. In preliminary experiments, we create 1,000 synthetic computers and run long-horizon simulations on them; each run requires over 8 hours of agent runtime and spans more than 2,000 turns on average. These simulations produce rich experiential learning signals, whose effectiveness is validated by significant improvements in agent performance on both in-domain and out-of-domain productivity evaluations. Because personas are available at billion scale, this methodology can in principle scale to millions or even billions of synthetic user worlds with sufficient compute, enabling broader coverage of diverse professions, roles, contexts, environments, and productivity needs. We argue that scalable synthetic computer creation, together with at-scale simulations, is highly promising as a foundational substrate for agent self-improvement and agentic reinforcement learning in long-horizon productivity scenarios.
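
As a rough illustration of the long-horizon simulation described in the abstract, the loop below pairs an objective-setting agent with a worker agent on one synthetic computer. The interfaces (`create_objectives`, `next_action`, `env.step`, `all_complete`) are assumptions made for this sketch, not the paper's API.

```python
def run_simulation(env, objective_agent, worker_agent, max_turns=2500):
    """Two-role, long-horizon simulation over one synthetic computer (sketch)."""
    # The objective agent reads the persona and filesystem and drafts
    # multi-deliverable goals meant to take roughly a month of human work.
    objectives = objective_agent.create_objectives(env.computer)

    trajectory = []  # (turn, action, observation) records; the experiential signal
    for turn in range(max_turns):
        # The worker agent navigates folders for grounding, messages simulated
        # collaborators, or produces and edits professional artifacts.
        action = worker_agent.next_action(env.computer, objectives, trajectory)
        observation = env.step(action)
        trajectory.append((turn, action, observation))
        if objective_agent.all_complete(objectives, trajectory):
            break
    return trajectory
```

Read this way, each completed trajectory (averaging over 2,000 turns in the reported experiments) is one unit of experiential training data for agent self-improvement or agentic reinforcement learning.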