AI Navigate

How to make the most of your masked language model for protein engineering

arXiv cs.LG / 3/12/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a stochastic beam search sampling method for masked language models to optimize protein properties during design.
  • It leverages MLMs’ efficiency in evaluating the pseudo-perplexity of the entire 1-edit neighborhood to guide generation with multiple objectives.
  • It reframes generation as entire-sequence evaluation, enabling flexible multi-objective optimization during protein design.
  • In vitro head-to-head experiments on antibody engineering campaigns show that the choice of sampling method can be as impactful as the model itself, underscoring a crucial area for future research.

Abstract

A plethora of protein language models have been released in recent years. Yet comparatively little work has addressed how to best sample from them to optimize desired biological properties. We fill this gap by proposing a flexible, effective sampling method for masked language models (MLMs), and by systematically evaluating models and methods both in silico and in vitro on actual antibody therapeutics campaigns. Firstly, we propose sampling with stochastic beam search, exploiting the fact that MLMs are remarkably efficient at evaluating the pseudo-perplexity of the entire 1-edit neighborhood of a sequence. Reframing generation in terms of entire-sequence evaluation enables flexible guidance with multiple optimization objectives. Secondly, we report results from our extensive in vitro head-to-head evaluation in the antibody engineering setting. This reveals that the choice of sampling method is at least as impactful as the model used, motivating future research into this under-explored area.
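To make the sampling loop described above concrete, here is a minimal, illustrative Python sketch: score the full 1-edit (single-substitution) neighborhood of each beam member, then sample the next beam in proportion to a softmax over negative scores instead of taking a deterministic top-k. The `pseudo_perplexity` function below is a toy stand-in, not the paper's scorer; a real implementation would mask each position and average negative log-likelihoods under a protein MLM, and could sum several weighted objectives for multi-objective guidance. All function names and parameters here are illustrative assumptions, not the authors' API.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def pseudo_perplexity(seq):
    """Toy stand-in scorer (lower is better).

    A real implementation would mask each position of `seq`, score it
    with a protein MLM, and average the negative log-likelihoods; extra
    optimization objectives could be added as weighted terms. Here we
    simply prefer 'A' and 'K' residues so the sketch runs end to end
    without a model.
    """
    favored = set("AK")
    return sum(1.0 if c in favored else 2.0 for c in seq) / len(seq)


def one_edit_neighborhood(seq):
    """Yield every sequence one substitution away from `seq`."""
    for i, orig in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != orig:
                yield seq[:i] + aa + seq[i + 1:]


def stochastic_beam_search(seed, beam_width=4, steps=3,
                           temperature=0.05, rng=None):
    """Illustrative stochastic beam search over 1-edit neighborhoods.

    Each step scores the whole neighborhood of the beam, then *samples*
    the next beam (softmax over negative scores) rather than choosing
    greedily, trading off exploitation against diversity.
    """
    rng = rng or random.Random(0)
    beam, best = [seed], seed
    for _ in range(steps):
        # Sorted for deterministic order given a fixed RNG seed.
        candidates = sorted({s for p in beam for s in one_edit_neighborhood(p)})
        weights = [math.exp(-pseudo_perplexity(s) / temperature)
                   for s in candidates]
        beam = rng.choices(candidates, weights=weights, k=beam_width)
        best = min(beam + [best], key=pseudo_perplexity)
    return best
```

For example, `stochastic_beam_search("MVLS")` walks three substitution steps from the seed and returns the best sequence seen, which under the toy scorer drifts toward 'A'/'K'-rich variants.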