What an Autonomous Agent Discovers About Molecular Transformer Design: Does It Transfer?

arXiv cs.AI / March 31, 2026


Key Points

  • The study systematically tests whether transformer design choices transfer across molecular (SMILES) strings, protein sequences, and a natural-language control, using autonomous architecture search run over 3,106 experiments on a single GPU.
  • For SMILES, autonomous architecture search is counterproductive, with learning-rate and schedule tuning outperforming full architecture search (p = 0.001).
  • For natural language, architecture changes account for most gains, driving 81% of improvement (p = 0.009), while proteins show intermediate behavior.
  • Although the agent finds domain-specific architectures, innovations transfer across all three domains with less than 1% degradation, suggesting the differences come from the search path rather than domain-specific biological constraints.
  • The authors release a decision framework and an open-source toolkit to help molecular modeling teams choose between autonomous architecture search and simpler hyperparameter tuning approaches.

Abstract

Deep learning models for drug-like molecules and proteins overwhelmingly reuse transformer architectures designed for natural language, yet whether molecular sequences benefit from different designs has not been systematically tested. We deploy autonomous architecture search via an agent across three sequence types (SMILES, protein, and English text as control), running 3,106 experiments on a single GPU. For SMILES, architecture search is counterproductive: tuning learning rates and schedules alone outperforms the full search (p = 0.001). For natural language, architecture changes drive 81% of improvement (p = 0.009). Proteins fall between the two. Surprisingly, although the agent discovers distinct architectures per domain (p = 0.004), every innovation transfers across all three domains with <1% degradation, indicating that the differences reflect search-path dependence rather than fundamental biological requirements. We release a decision framework and open-source toolkit for molecular modeling teams to choose between autonomous architecture search and simple hyperparameter tuning.
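The released decision framework is not specified in detail here, but the reported findings imply a simple per-domain policy. The sketch below is a hypothetical illustration of such a rule, not the authors' actual toolkit API; the function name, domain labels, and strategy names are all assumptions made for clarity.

```python
# Hypothetical sketch of a decision rule consistent with the paper's findings.
# Names and labels are illustrative, not the authors' released toolkit API.

def recommend_strategy(domain: str) -> str:
    """Recommend a search strategy for a sequence-modeling domain.

    Reported findings motivating each branch:
      - SMILES: full architecture search was counterproductive (p = 0.001),
        so tune learning rate and schedule only.
      - Natural language: architecture changes drove 81% of the gains
        (p = 0.009), so full autonomous architecture search pays off.
      - Protein: intermediate behavior, so a hybrid (tune first, then
        search) is a reasonable default.
    """
    recommendations = {
        "smiles": "hyperparameter_tuning",       # LR + schedule only
        "natural_language": "architecture_search",
        "protein": "hybrid",                     # tune first, then search
    }
    try:
        return recommendations[domain.lower()]
    except KeyError:
        raise ValueError(f"unknown domain: {domain!r}")

print(recommend_strategy("SMILES"))  # hyperparameter_tuning
```

Because the paper also finds that discovered innovations transfer across domains with under 1% degradation, the practical cost of choosing the "wrong" branch here is mainly wasted compute rather than a worse final model.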