Inverting Foundation Models of Brain Function with Simulation-Based Inference

arXiv cs.LG · April 28, 2026


Key Points

  • The paper explores whether foundation models of brain activity can be used in reverse to recover a stimulus (or its underlying properties) from synthetic brain signals.
  • Using TRIBEv2, the researchers couple a brain emulator with LLMs that generate news headlines from psychological/linguistic parameters such as valence, arousal, and dominance.
  • They apply simulation-based inference to learn a probabilistic mapping from predicted brain maps back to the latent stimulus parameters.
  • The results indicate that the latent parameters can be recovered from the predicted brain maps, supporting the quality of the model’s neural encodings.
  • The study also suggests that LLMs can act as controllable stimulus generators, enabling flexible simulated experiments and a step toward stimulus decoding and inverse design.
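The pipeline above — sample latent parameters, generate a stimulus, emulate a brain map, then infer the parameters back from the map — can be sketched with a toy forward model. Note that this is a minimal illustration, not the paper's method: the random linear projection stands in for the TRIBEv2 emulator plus LLM stimulus generation, and rejection ABC stands in for whichever simulation-based inference algorithm the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the LLM + brain-emulator forward model:
# a fixed random projection from the three latent stimulus parameters
# (valence, arousal, dominance) to a 50-dimensional "brain map".
W = rng.normal(size=(50, 3))

def emulate_brain_map(theta, noise=0.05):
    """Toy forward model: brain map = W @ theta + observation noise."""
    return W @ theta + noise * rng.normal(size=50)

# True latent parameters of one "headline" stimulus.
theta_true = np.array([0.7, -0.3, 0.5])
observed_map = emulate_brain_map(theta_true)

# Simulation-based inference via rejection ABC: draw parameters from a
# uniform prior, simulate brain maps, and keep the draws whose maps lie
# closest to the observed one. The accepted set approximates the
# posterior p(theta | brain map).
n_sims = 20000
theta_prior = rng.uniform(-1.0, 1.0, size=(n_sims, 3))
sims = theta_prior @ W.T + 0.05 * rng.normal(size=(n_sims, 50))
dists = np.linalg.norm(sims - observed_map, axis=1)
accepted = theta_prior[np.argsort(dists)[:200]]  # keep best 1%

posterior_mean = accepted.mean(axis=0)
print(posterior_mean)  # should lie close to theta_true
```

In this sketch the posterior mean recovers the latent parameters because the forward model is informative about them — the same property the paper tests for real brain encodings, where modern SBI methods replace rejection sampling with learned neural posterior estimators.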

Abstract

Foundation models of brain activity promise a new frontier for in silico neuroscience by emulating neural responses to complex stimuli across tasks and modalities. A natural next step is to ask whether these models can also be used in reverse. Can we recover a stimulus or its properties from synthetic brain activity? We study this question in a proof-of-concept setting using TRIBEv2. We pair the brain emulator with large language models (LLMs) that generate news headlines from linguistic parameters such as valence, arousal, and dominance. We then use simulation-based inference to learn a probabilistic mapping from brain maps to latent stimulus parameters. Our results show that these parameters can be recovered from predicted brain maps, validating the quality of neural encodings. They also show that LLMs can serve as controllable stimulus generators for simulated experiments. Together, these findings provide a step toward decoding and inverse design with foundation brain models.