AI Navigate

Novelty-Driven Target-Space Discovery in Automated Electron and Scanning Probe Microscopy

arXiv cs.LG / 3/18/2026


Key Points

  • The authors present BEACON, a deep-kernel-learning framework designed to actively search the target space (spectra and functional responses) rather than optimize only visible image features in automated electron and scanning probe microscopy.
  • They benchmark discovery strategies against classical acquisition methods using pre-acquired ground-truth datasets and define monitoring functions to compare exploration quality, target-space coverage, and surrogate-model behavior.
  • The workflow is demonstrated on scanning transmission electron microscopy (STEM), showing progression from offline validation to real experimental deployment and illustrating practical translation of the method.
  • To support community adoption, the associated notebooks enable reproduction, benchmarking, and adaptation to other instruments and datasets.
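The core idea in the points above, steering acquisition by novelty in the target space of predicted responses rather than by visible image features, can be sketched as a toy loop. Everything below is illustrative: the dataset, RBF-kernel GP surrogate, and minimum-distance novelty score are stand-ins, not BEACON's actual deep-kernel implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth dataset: structural coordinates X and a
# 2-D "spectrum" Y per location, mimicking a pre-acquired benchmark.
X = rng.uniform(-1, 1, size=(400, 2))
Y = np.stack([np.sin(3 * X[:, 0]) + X[:, 1] ** 2,
              np.cos(2 * X[:, 1]) * X[:, 0]], axis=1)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def novelty_step(measured_idx, jitter=1e-6):
    """Pick the location whose *predicted response* lies farthest in
    target space from everything measured so far."""
    Xm, Ym = X[measured_idx], Y[measured_idx]
    K = rbf(Xm, Xm) + jitter * np.eye(len(Xm))
    alpha = np.linalg.solve(K, Ym)       # GP-regression weights
    mu = rbf(X, Xm) @ alpha              # predicted spectra everywhere
    # Novelty = distance from each prediction to its nearest measurement.
    nov = np.min(np.linalg.norm(mu[:, None, :] - Ym[None, :, :], axis=-1), axis=1)
    nov[measured_idx] = -np.inf          # never re-measure a point
    return int(np.argmax(nov))

measured = list(rng.choice(len(X), size=5, replace=False))
for _ in range(20):                      # 20 novelty-driven acquisitions
    measured.append(novelty_step(measured))
```

Swapping the novelty score for a posterior-uncertainty or expected-improvement score recovers the classical acquisition strategies that the paper benchmarks against.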

Abstract

Modern automated microscopy faces a fundamental discovery challenge: in many systems, the most important scientific information resides not in the immediately visible image features but in the target space of sequentially acquired spectra or functional responses. It is therefore essential to develop strategies that actively search for new behaviors rather than simply optimize known objectives. Here, we developed BEACON, a deep-kernel-learning framework explicitly designed to guide discovery in the target space by learning structure-property relationships during the experiment and using the evolving model to seek out diverse response regimes. We first established the method through demonstration workflows built on pre-acquired ground-truth datasets, which enabled direct benchmarking against classical acquisition strategies and allowed us to define a set of monitoring functions for comparing exploration quality, target-space coverage, and surrogate-model behavior in a transparent and reproducible manner. This benchmarking framework provides a practical basis for evaluating discovery-driven algorithms, not just optimization performance. We then operationalized and deployed the workflow on scanning transmission electron microscopy (STEM), showing that the approach can transition from offline validation to real experimental implementation. To support adoption and extension by the broader community, the associated notebooks are available, allowing users to reproduce the workflows, test the benchmarks, and adapt the method to their own instruments and datasets.
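One of the monitoring functions the abstract mentions, target-space coverage, can be approximated by binning the ground-truth responses and counting how many occupied bins an acquisition run has reached. This is a hedged sketch: the grid-binning definition, bin count, and toy data are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target space: correlated 2-D responses for 500 locations,
# standing in for the spectra of a pre-acquired ground-truth dataset.
Y = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.6], [0.0, 0.4]])

def coverage(Y_all, visited_idx, bins=8):
    """Fraction of occupied target-space grid cells reached by the
    visited measurements; comparable across acquisition strategies
    run on the same ground-truth dataset."""
    lo, hi = Y_all.min(0), Y_all.max(0)
    cells = np.floor((Y_all - lo) / (hi - lo + 1e-12) * bins).astype(int)
    keys = cells[:, 0] * (bins + 1) + cells[:, 1]   # flatten 2-D cell index
    occupied = set(keys)                            # cells the full dataset fills
    reached = set(keys[list(visited_idx)])
    return len(reached) / len(occupied)

# Baseline: how much of the target space 50 random measurements cover.
random_run = rng.choice(len(Y), size=50, replace=False)
print(f"coverage after 50 random measurements: {coverage(Y, random_run):.2f}")
```

Plotting such a coverage curve against the number of measurements, for a novelty-driven run versus random or uncertainty-based baselines, is the kind of transparent comparison the benchmarking workflows enable.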