Tabular foundation models for in-context prediction of molecular properties

arXiv cs.LG · 20 Apr 2026


Key Points

  • The paper proposes tabular foundation models (TFMs) that predict molecular properties via in-context learning, avoiding task-specific fine-tuning and reducing the need for ML expertise.
  • Experiments in low- to medium-data settings on both pharmaceutical benchmark tasks and chemical engineering datasets show strong predictive accuracy with lower computational cost than fine-tuning.
  • The study evaluates TFMs using frozen molecular foundation model embeddings as well as classical descriptors and fingerprints, finding that representation choice strongly affects performance.
  • Using TFMs with CheMeleon embeddings achieves up to a 100% win rate on 30 MoleculeACE tasks, while compact descriptor sets like RDKit2d and Mordred also perform well.
  • Overall, the results suggest TFMs with appropriate molecular representations offer a highly accurate and cost-efficient approach for property prediction in real-world applications such as drug discovery and engineering.

Abstract

Accurate molecular property prediction is central to drug discovery, catalysis, and process design, yet real-world applications are often limited by small datasets. Molecular foundation models provide a promising direction by learning transferable molecular representations; however, they typically involve task-specific fine-tuning, require machine learning expertise, and often fail to outperform classical baselines. Tabular foundation models (TFMs) offer a fundamentally different paradigm: they perform predictions through in-context learning, enabling inference without task-specific training. Here, we evaluate TFMs in the low- to medium-data regime across both standardized pharmaceutical benchmarks and chemical engineering datasets, using both frozen molecular foundation model representations and classical descriptors and fingerprints. Across the benchmarks, the approach shows excellent predictive performance at lower computational cost than fine-tuning, and these advantages also transfer to practical engineering data settings. In particular, combining TFMs with CheMeleon embeddings yields up to 100% win rates on 30 MoleculeACE tasks, while compact RDKit2d and Mordred descriptor sets provide strong descriptor-based alternatives. Molecular representation emerges as a key determinant of TFM performance, with molecular foundation model embeddings and 2D descriptor sets both providing substantial gains over classic molecular fingerprints on many tasks. These results suggest that in-context learning with TFMs provides a highly accurate and cost-efficient alternative for property prediction in practical applications.
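To make the in-context-learning paradigm concrete: a TFM such as TabPFN receives the entire labelled training set (the "context") together with the query rows in a single forward pass of a pretrained transformer, with no gradient updates on the downstream task. The sketch below illustrates only that interface, not the paper's model; a distance-weighted average over the context is a hypothetical stand-in for the transformer's forward pass, and the descriptor vectors are toy values, not real molecular features.

```python
import math

def in_context_predict(context_X, context_y, query_X, temperature=1.0):
    """Predict query targets directly from a labelled context.

    Mirrors the TFM interface: no fit step, no gradient updates; the
    context and queries are consumed together at inference time.

    context_X : list of feature vectors (e.g. molecular descriptors)
    context_y : list of property values for the context rows
    query_X   : list of feature vectors to predict
    """
    preds = []
    for q in query_X:
        # Softmax weights over negative squared distances to context rows
        # (a toy surrogate for the learned attention of a real TFM).
        d2 = [sum((a - b) ** 2 for a, b in zip(q, x)) for x in context_X]
        w = [math.exp(-d / temperature) for d in d2]
        z = sum(w)
        preds.append(sum(wi * yi for wi, yi in zip(w, context_y)) / z)
    return preds

# Usage: three context molecules with 2-D toy descriptors and a scalar property.
ctx_X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
ctx_y = [0.0, 1.0, 2.0]
print(in_context_predict(ctx_X, ctx_y, [[1.0, 1.0]]))  # query matching a context row
```

The point of the interface is what is absent: there is no training loop, optimizer, or saved checkpoint per task, which is where the computational savings over fine-tuning come from.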