MIMIC: A Generative Multimodal Foundation Model for Biomolecules

arXiv cs.AI / April 28, 2026


Key Points

  • MIMIC is a newly proposed generative multimodal foundation model for biomolecules, trained on the authors' curated and aligned LORE dataset, which links nucleic acid, protein, evolutionary, structural, regulatory, and semantic/contextual modalities within partially observed biomolecular states.
  • Using a split-track encoder-decoder architecture, MIMIC can condition on arbitrary subsets of observed modalities to reconstruct or generate missing components across the genome, transcriptome, and proteome.
  • Multimodal conditioning consistently improves MIMIC's sequence reconstruction over sequence-only inputs, and its learned representations achieve state-of-the-art results on multiple RNA and protein downstream tasks, including splicing prediction, where isoform-aware generative inference further improves performance.
  • The same generative framework supports constrained design: MIMIC identifies corrective edits for a clinically relevant HBB splice-disrupting mutation without reverting it, and generates diverse, high-confidence protein sequences by conditioning on the shape and surface chemistry of PD-L1 and hACE2 binding sites.
  • For assay modeling, MIMIC treats experimental context as semantic conditioning, capturing assay-dependent RNA chemical probing rather than treating context as a fixed output.
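The conditioning scheme described above — encode whichever modalities happen to be observed, then decode the missing ones — can be illustrated with a minimal sketch. This is not the authors' implementation: the modality names are taken from the summary, but `encode`/`decode` and the `MolecularState` container are hypothetical stand-ins for MIMIC's split-track encoder-decoder.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Modality names from the paper summary; the payload representation is assumed.
MODALITIES = ["nucleic_acid", "protein", "evolutionary",
              "structural", "regulatory", "semantic"]

@dataclass
class MolecularState:
    """A partially observed biomolecular state: each modality track
    holds an observed payload or None when the track is missing."""
    tracks: Dict[str, Optional[str]] = field(default_factory=dict)

    def observed(self) -> Dict[str, str]:
        return {m: v for m, v in self.tracks.items() if v is not None}

    def missing(self) -> list:
        return [m for m in MODALITIES if self.tracks.get(m) is None]

def generate_missing(model, state: MolecularState) -> MolecularState:
    """Condition on the observed subset, then generate each missing track.
    `model.encode` / `model.decode` are hypothetical interfaces."""
    context = model.encode(state.observed())      # encoder track: shared context
    filled = dict(state.tracks)
    for modality in state.missing():
        filled[modality] = model.decode(context, modality)  # decoder track
    return MolecularState(filled)

class StubModel:
    """Toy stand-in so the sketch runs; a real model would be a trained network."""
    def encode(self, observed):
        return sorted(observed)
    def decode(self, context, modality):
        return f"gen:{modality}|ctx={','.join(context)}"
```

For example, a state observing only an RNA sequence and a secondary structure would have its protein, evolutionary, regulatory, and semantic tracks generated from that shared context — the same mechanism whether the goal is reconstruction, conditional prediction, or constrained design.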

Abstract

Biological function emerges from coupled constraints across sequence, structure, regulation, evolution, and cellular context, yet most foundation models in biology are trained within one modality or for a fixed forward task. We present MIMIC, a generative multimodal foundation model trained on our newly curated and aligned dataset, LORE, linking nucleic acid, protein, evolutionary, structural, regulatory, and semantic/contextual modalities within partially observed biomolecular states. MIMIC uses a split-track encoder-decoder architecture to condition on arbitrary subsets of observed modalities and reconstruct or generate missing components of molecular state across the genome, transcriptome, and proteome. Multimodal conditioning consistently improves MIMIC's sequence reconstruction relative to sequence-only inputs, while its learned representations enable state-of-the-art performance on RNA and protein downstream tasks. MIMIC achieves state-of-the-art splicing prediction, and its joint generative formulation enables isoform-aware inference that further improves performance. Beyond prediction, the same generative framework supports constrained design. For RNA, MIMIC identifies corrective edits in a clinically relevant HBB splice-disrupting mutation without reverting it by using evolutionary and structural signals. For proteins, jointly conditioning on shape and surface chemistry of PD-L1 and hACE2 binding sites produces diverse, high-confidence sequences with strong in silico support for target binding. Finally, MIMIC uses experimental context as semantic conditioning to model assay-dependent RNA chemical probing, rather than treating context as a fixed output. Together, these results position MIMIC's aligned multimodal generative modeling as a strong foundation for unifying representation learning, conditional prediction, and constrained biomolecular design within a single model.