Hyper-Dimensional Fingerprints as Molecular Representations

arXiv cs.LG / 5/1/2026


Key Points

  • The paper introduces hyper-dimensional fingerprints (HDF) as deterministic molecular representations that avoid task-specific training by using algebraic operations on high-dimensional vectors.
  • Experiments across multiple property prediction benchmarks show HDF generally outperforms conventional hashed fingerprints and is more consistent across datasets and models.
  • HDF embeddings better preserve molecular structural similarity than standard Morgan fingerprints, achieving higher correlation with graph edit distance even at very low dimensions.
  • The authors demonstrate that simple nearest-neighbor regression can remain predictive with as few as 64 HDF components, where hash-based fingerprints degrade.
  • In Bayesian molecular optimization, HDF-based surrogate models improve sample efficiency in settings where Morgan fingerprints are only comparable to random search.
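The core idea above — replacing learned message passing with algebraic operations on high-dimensional vectors — can be sketched with the two standard hyperdimensional-computing primitives, binding (elementwise product) and bundling (majority sign of a sum). The paper's exact HDF construction is not given here; this is a minimal, generic sketch in which atom symbols, the toy molecule, and the single aggregation round are all illustrative assumptions.

```python
import numpy as np
import zlib

DIM = 256  # hypervector dimensionality (illustrative; the paper reports results down to 32-64)

def base_hv(token: str, dim: int = DIM) -> np.ndarray:
    """Deterministic bipolar (+1/-1) hypervector for a symbol -- no training involved.

    zlib.crc32 gives a stable seed across runs, unlike Python's salted hash().
    """
    rng = np.random.default_rng(zlib.crc32(token.encode()))
    return rng.choice([-1.0, 1.0], size=dim)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Binding: elementwise product composes two concepts (e.g. an atom and a neighbor)."""
    return a * b

def bundle(vectors) -> np.ndarray:
    """Bundling: majority sign of the sum superposes a set into one fixed-length vector."""
    return np.sign(np.sum(vectors, axis=0))

# Toy molecule: heavy atoms of ethanol, C-C-O, as an atom list plus bond pairs.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]

# One algebraic "message passing" round: bind bonded atom pairs, then bundle
# everything into a single deterministic molecular fingerprint.
atom_vecs = [base_hv(a) for a in atoms]
bond_vecs = [bind(atom_vecs[i], atom_vecs[j]) for i, j in bonds]
fingerprint = bundle(atom_vecs + bond_vecs)
print(fingerprint.shape)  # (256,)
```

Because every step is deterministic, the same molecule always maps to the same vector, mirroring the training-free property claimed for HDF.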

Abstract

Computational molecular representations underpin virtual screening, property prediction, and materials discovery. Conventional fingerprints are efficient and deterministic but lose structural information through hash-based compression, particularly at low dimensionalities. Learned representations from graph neural networks recover this expressiveness but require task-specific training and substantial computational resources. Here we introduce hyper-dimensional fingerprints (HDF), which replace the learned transformations of message-passing neural networks with algebraic operations on high-dimensional vectors, producing deterministic molecular representations without any training. Across diverse property prediction benchmarks, HDF outperforms conventional fingerprints in the majority of tasks while exhibiting greater consistency across datasets and models. Crucially, HDF embeddings preserve molecular similarity faithfully: at 32 dimensions, distances in HDF space achieve a 0.9 Pearson correlation with graph edit distance, compared to 0.55 for Morgan fingerprints at equivalent size. This structural fidelity persists at low dimensions where hash-based methods degrade, allowing simple nearest-neighbor regression to remain predictive with as few as 64 components. We further demonstrate the practical impact in Bayesian molecular optimization, where HDF-based surrogate models achieve substantially improved sample efficiency in regimes where Morgan fingerprints perform comparably to random search. HDF thus provides a general-purpose, training-free alternative to conventional molecular fingerprints, suggesting that the information loss long accepted as inherent to fixed-length fingerprints is a limitation of the hash-based encoding scheme rather than the fingerprint paradigm itself.
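The abstract's claim that nearest-neighbor regression stays predictive with only 64 components rests on similarity being preserved in the embedding space. A minimal sketch of that pipeline, using cosine similarity and k-nearest-neighbor averaging: the 64-dimensional fingerprints and property values below are random stand-ins, not real HDF encodings or benchmark data.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; the small epsilon guards against zero-norm vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def knn_predict(query: np.ndarray, train_X: np.ndarray,
                train_y: np.ndarray, k: int = 1) -> float:
    """Predict a property as the mean label of the k most similar fingerprints."""
    sims = np.array([cosine_sim(query, x) for x in train_X])
    top = np.argsort(sims)[-k:]  # indices of the k highest similarities
    return float(np.mean(train_y[top]))

# Stand-in data: 100 bipolar 64-dimensional "fingerprints" with scalar labels.
rng = np.random.default_rng(0)
train_X = rng.choice([-1.0, 1.0], size=(100, 64))
train_y = rng.normal(size=100)

# A query identical to a training molecule should recover that molecule's label.
prediction = knn_predict(train_X[3], train_X, train_y, k=1)
```

If low-dimensional distances track graph edit distance as reported, neighbors in fingerprint space are structural neighbors, which is exactly what makes this simple regressor viable.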