AI Navigate

SCALE: Scalable Conditional Atlas-Level Endpoint transport for virtual cell perturbation prediction

arXiv cs.LG / 3/19/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • SCALE is a specialized large-scale foundation model for virtual cell perturbation prediction that jointly tackles training/inference bottlenecks and evaluation fidelity.
  • It introduces a BioNeMo-based training and inference framework that achieves about 12.51x speedup in pretraining and 1.29x in inference compared with the prior state of the art under matched system settings.
  • The perturbation prediction task is formulated as conditional transport using a set-aware flow that links LLaMA-based cellular encoding to endpoint-oriented supervision, improving training stability and perturbation recovery.
  • Evaluation on Tahoe-100M with biologically meaningful metrics shows improvements: PDCorr up by 12.02% and DE Overlap up by 10.66% over STATE.
  • The work argues for co-design of scalable infrastructure, stable transport modeling, and biologically faithful evaluation as essential for advancing virtual cell modeling.

Abstract

Virtual cell models aim to enable in silico experimentation by predicting how cells respond to genetic, chemical, or cytokine perturbations from single-cell measurements. In practice, however, large-scale perturbation prediction remains constrained by three coupled bottlenecks: inefficient training and inference pipelines, unstable modeling in high-dimensional sparse expression space, and evaluation protocols that overemphasize reconstruction-like accuracy while underestimating biological fidelity. In this work we present SCALE, a specialized large-scale foundation model for virtual cell perturbation prediction that addresses the above limitations jointly. First, we build a BioNeMo-based training and inference framework that substantially improves data throughput, distributed scalability, and deployment efficiency, yielding a 12.51× speedup on pretraining and 1.29× on inference over the prior SOTA pipeline under matched system settings. Second, we formulate perturbation prediction as conditional transport and implement it with a set-aware flow architecture that couples LLaMA-based cellular encoding with endpoint-oriented supervision. This design yields more stable training and stronger recovery of perturbation effects. Third, we evaluate the model on Tahoe-100M using a rigorous cell-level protocol centered on biologically meaningful metrics rather than reconstruction alone. On this benchmark, our model improves PDCorr by 12.02% and DE Overlap by 10.66% over STATE. Together, these results suggest that advancing virtual cells requires not only better generative objectives, but also the co-design of scalable infrastructure, stable transport modeling, and biologically faithful evaluation.
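The abstract does not define PDCorr or DE Overlap. A common reading is that PDCorr correlates predicted and observed perturbation deltas (mean expression change relative to control) and DE Overlap measures agreement between the top-k differentially expressed genes of prediction and observation. The sketch below implements these hypothetical analogues; the exact definitions in the SCALE paper may differ (e.g. in ranking statistic or k).

```python
import numpy as np

def pdcorr(pred, obs, control):
    """Pearson correlation between predicted and observed perturbation
    deltas (per-gene mean change vs. control). Hypothetical analogue
    of the paper's PDCorr metric."""
    d_pred = pred.mean(axis=0) - control.mean(axis=0)
    d_obs = obs.mean(axis=0) - control.mean(axis=0)
    return float(np.corrcoef(d_pred, d_obs)[0, 1])

def de_overlap(pred, obs, control, k=50):
    """Fraction of top-k 'DE' genes (ranked by |mean delta| vs. control)
    shared between prediction and observation. Hypothetical analogue
    of the paper's DE Overlap metric."""
    def top_k(x):
        delta = np.abs(x.mean(axis=0) - control.mean(axis=0))
        return set(np.argsort(-delta)[:k])
    return len(top_k(pred) & top_k(obs)) / k

# Toy data: a true per-gene effect plus small prediction noise.
rng = np.random.default_rng(1)
control = rng.normal(size=(100, 200))          # 100 control cells, 200 genes
effect = rng.normal(size=200)                  # true perturbation effect
obs = control + effect                         # observed perturbed cells
pred = control + effect + 0.1 * rng.normal(size=(100, 200))

print(pdcorr(pred, obs, control))              # close to 1 for a good prediction
print(de_overlap(pred, obs, control))          # high top-50 overlap
```

Both scores compare effect directions and rankings rather than raw reconstruction error, which is the distinction the abstract draws between biologically meaningful metrics and reconstruction-like accuracy.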