
Self-Conditioned Denoising for Atomistic Representation Learning

arXiv cs.LG · March 19, 2026

📰 News · Models & Research

Key Points

  • The paper introduces Self-Conditioned Denoising (SCD), a backbone-agnostic pretraining objective that uses a model's own embeddings (self-embeddings) to condition denoising of atomistic structures (see the sketch after this list).
  • SCD applies across diverse domains, including small molecules, proteins, periodic materials, and non-equilibrium geometries, addressing prior SSL methods that were confined to ground-state geometries or to a single domain.
  • When backbone architecture and pretraining data are held fixed, SCD significantly outperforms previous SSL methods and matches or exceeds supervised force-energy pretraining on downstream benchmarks.
  • A small, fast graph neural network (GNN) pretrained with SCD can match or outperform larger models trained on substantially larger labeled or unlabeled datasets.
  • Code for SCD is available at https://github.com/TyJPerez/SelfConditionedDenoisingAtoms.

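For intuition, here is a minimal, hypothetical sketch of what one self-conditioned denoising step could look like with a generic PyTorch GNN backbone. The `encoder` and `denoiser` callables, the `cond=` conditioning argument, and the fixed-sigma Gaussian noise are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal, hypothetical sketch of one SCD-style pretraining step.
# `encoder` and `denoiser` stand in for any GNN backbone; their
# interfaces, the `cond=` argument, and the noise schedule are
# assumptions for illustration, not the paper's exact recipe.
import torch
import torch.nn.functional as F


def scd_step(encoder, denoiser, pos, atom_types, sigma=0.1):
    """One self-conditioned denoising step on a single structure.

    pos:        (N, 3) tensor of atomic coordinates
    atom_types: (N,)   tensor of atomic numbers
    """
    # 1) Corrupt the geometry with Gaussian coordinate noise.
    noise = sigma * torch.randn_like(pos)
    noisy_pos = pos + noise

    # 2) Self-embedding: run the backbone over the (noisy) structure
    #    to obtain per-atom embeddings. Detached here so gradients do
    #    not flow through the conditioning signal; the paper may
    #    handle this differently.
    with torch.no_grad():
        self_emb = encoder(noisy_pos, atom_types)

    # 3) Conditional denoising: predict the injected noise, with the
    #    self-embedding supplied as a conditioning input.
    pred_noise = denoiser(noisy_pos, atom_types, cond=self_emb)

    # 4) Standard denoising (noise-matching) loss.
    return F.mse_loss(pred_noise, noise)
```

Because the conditioning embedding comes from the model itself rather than from external labels such as DFT forces or energies, a step of this shape can in principle wrap any backbone, which is the sense in which the objective is backbone-agnostic.
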
Abstract

The success of large-scale pretraining in NLP and computer vision has catalyzed growing efforts to develop analogous foundation models for the physical sciences. However, pretraining strategies for atomistic data remain underexplored. To date, large-scale supervised pretraining on DFT force-energy labels has provided the strongest performance gains for downstream property prediction, outperforming existing self-supervised learning (SSL) methods, which remain limited to ground-state geometries and/or single domains of atomistic data. We address these shortcomings with Self-Conditioned Denoising (SCD), a backbone-agnostic reconstruction objective that utilizes self-embeddings for conditional denoising across any domain of atomistic data, including small molecules, proteins, periodic materials, and 'non-equilibrium' geometries. When controlling for backbone architecture and pretraining dataset, SCD significantly outperforms previous SSL methods on downstream benchmarks and matches or exceeds the performance of supervised force-energy pretraining. We show that a small, fast GNN pretrained with SCD achieves competitive or superior performance to larger models pretrained on significantly larger labeled or unlabeled datasets, across tasks in multiple domains. Our code is available at: https://github.com/TyJPerez/SelfConditionedDenoisingAtoms