Conditioning Protein Generation via Hopfield Pattern Multiplicity

arXiv cs.LG · March 23, 2026


Key Points

  • A single scalar bias added to the sampler's attention logits conditions protein sequence generation toward a user-specified subset, without retraining or changing the model architecture (see the sketch after this list).
  • The conditioning works for any interpretation of the subset (binding, stability, specificity, etc.) and is controlled by a multiplicity ratio that tunes how strongly the subset is favored.
  • A calibration gap can arise because the dimensionality-reduced encoding may not preserve residue-level variation; the gap is predicted by a simple geometric measure of how well the encoding separates the subset from the rest.
  • Experiments on five Pfam families (Kunitz, SH3, WW, Homeobox, Forkhead) demonstrate a monotonic relationship between latent-space separation and the calibration gap.
  • Applied to omega-conotoxin peptides and seeded with 23 characterized binders, the method yields over a thousand candidates that preserve the primary pharmacophore and all experimentally identified binding determinants.

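To make the mechanism concrete, here is a minimal sketch of softmax attention over stored sequence encodings with a scalar bias of log(m) added to the logits of subset members. Adding log(m) to a pattern's logit is mathematically equivalent to storing that pattern m times, which matches the "pattern multiplicity" framing of the title. The function name, the `beta` temperature, and the sampler interface are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def sample_with_multiplicity(stored, query, subset_mask, m, beta=1.0, rng=None):
    """Sample one stored pattern via softmax attention, biased toward a subset.

    stored      : (N, d) array of encoded family sequences
    query       : (d,) current query/state vector
    subset_mask : (N,) boolean array, True for user-supplied subset members
    m           : multiplicity ratio; m = 1 recovers the unconditioned sampler,
                  m > 1 favors the subset (adding log(m) to a logit is the
                  same as storing that pattern m times)
    """
    rng = np.random.default_rng() if rng is None else rng
    logits = beta * (stored @ query)        # standard attention logits
    logits[subset_mask] += np.log(m)        # the single scalar bias
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(stored), p=probs)
```

With m = 1 the sampler treats all stored sequences equally; sweeping m upward continuously shifts attention mass toward the subset, consistent with the abstract's claim that the conditioning is exact at the level of the sampler's internal representation.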
Abstract

Protein sequence generation via stochastic attention produces plausible family members from small alignments without training, but treats all stored sequences equally and cannot direct generation toward a functional subset of interest. We show that a single scalar parameter, added as a bias to the sampler's attention logits, continuously shifts generation from the full family toward a user-specified subset, with no retraining and no change to the model architecture. A practitioner supplies a small set of sequences (for example, hits from a binding screen) and a multiplicity ratio that controls how strongly generation favors them. The method is agnostic to what the subset represents: binding, stability, specificity, or any other property. We find that the conditioning is exact at the level of the sampler's internal representation, but that the decoded sequence phenotype can fall short because the dimensionality reduction used to encode sequences does not always preserve the residue-level variation that defines the functional split. We term this discrepancy the calibration gap and show that it is predicted by a simple geometric measure of how well the encoding separates the functional subset from the rest of the family. Experiments on five Pfam families (Kunitz, SH3, WW, Homeobox, and Forkhead domains) confirm the monotonic relationship between separation and gap across a fourfold range of geometries. Applied to omega-conotoxin peptides targeting a calcium channel involved in pain signaling, the method, with curated seeding from 23 characterized binders, produces over a thousand candidates that preserve the primary pharmacophore and all experimentally identified binding determinants. These results show that stochastic attention enables practitioners to expand a handful of experimentally characterized sequences into diverse candidate libraries without retraining a generative model.
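The abstract says the calibration gap is predicted by "a simple geometric measure" of how well the encoding separates the subset, but does not spell the measure out. The sketch below shows one plausible such statistic, a Fisher-style ratio of between-group centroid distance to pooled within-group spread; the specific formula is an assumption for illustration, not necessarily the measure the paper uses.

```python
import numpy as np

def separation_score(Z, subset_mask):
    """Fisher-style separation of a functional subset in a latent encoding.

    Z           : (N, d) latent encodings of all family sequences
    subset_mask : (N,) boolean array, True for the functional subset

    Returns the distance between subset and non-subset centroids divided
    by the pooled within-group spread. Under the paper's claim, larger
    scores should correspond to smaller calibration gaps.
    """
    sub, rest = Z[subset_mask], Z[~subset_mask]
    mu_s, mu_r = sub.mean(axis=0), rest.mean(axis=0)
    between = np.linalg.norm(mu_s - mu_r)
    within = 0.5 * (np.linalg.norm(sub - mu_s, axis=1).mean()
                    + np.linalg.norm(rest - mu_r, axis=1).mean())
    return between / within
```

A measure of this form is cheap to compute before any generation run, so a practitioner could use it as a quick screen: if the encoding barely separates the functional subset, a large calibration gap is to be expected regardless of the multiplicity ratio.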