Podcast on teaching AI empathy using brain signals

Reddit r/artificial / 4/29/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • A podcast episode with Thorsten Zander discusses “passive” brain-computer interfaces (BCIs) that non-invasively read brain signals to infer a user’s mental state with minimal effort from the user.
  • The discussion clarifies what non-invasive BCIs can and cannot extract from brain activity, emphasizing that this is not the same as reading thoughts or internal monologue.
  • Zander explains how recent hardware and software advances are making passive BCIs more wearable and affordable, enabling continuous neural feedback.
  • The episode argues that continuous neural feedback could improve AI training over approaches that rely mainly on human ratings, and it connects this to a potential route toward solving AI alignment.
  • It also highlights societal risks, warning that social platforms could potentially exploit unconscious brain reactions for manipulation and that regulation alone may not be sufficient.

Podcast episode with Thorsten Zander, professor at Brandenburg University of Technology and co-founder of Zander Labs. He coined the concept of the passive brain-computer interface: a device that reads brain signals to decode a user's mental state, non-invasively and without any effort on the user's part.

Covers:

  • What non-invasive brain-computer interfaces (BCIs) can actually pick up from brain signals, and why that's very different from reading your thoughts or internal monologue
  • The hardware and software breakthroughs that are finally making passive BCIs wearable and affordable
  • How continuous neural feedback could dramatically improve AI training compared to current methods based on human ratings
  • Why Thorsten believes passive BCIs may offer the most concrete path to solving the AI alignment problem
  • The risk of social networks exploiting unconscious brain reactions to manipulate people, and why regulation alone is unlikely to be enough
submitted by /u/JMarty97