Meta's new AI model predicts how your brain reacts to images, sounds, and speech

THE DECODER / 3/28/2026


Key Points

  • Meta developed an AI model that predicts typical brain responses to multimodal inputs including images, sounds, and speech.
  • In evaluations, the model’s predicted activation patterns matched the typical brain response more closely than scans from any individual subject did.
  • The work suggests progress toward computational models that can infer brain activity from real-world media inputs.
  • The approach positions Meta’s research as a step toward improved brain-signal prediction and broader multimodal neuroscience applications.

[Image: Two black head profiles with color-coded AI predictions of brain activation, showing red-orange heat areas on a grey model.]

Meta built an AI model that predicts how the human brain reacts to images, sounds, and speech. In tests, its predictions matched the typical brain response more closely than an actual scan of any single person.

