Hey guys, I'm the creator of Netryx V2, the geolocation tool. I've been working on something new called COGNEX. It learns how a person reacts to situations, then uses that pattern to simulate how they would respond to something new.

You collect real stimulus–response pairs. A stimulus is an event; a response is what the person said or did. The key is linking them properly. Then you convert both into structured signals instead of raw text.

This is where TRIBE v2 comes in. Released by Meta about two weeks ago and trained on fMRI scan data, it can take text, audio, images, and video and estimate how a human brain would process that input. On its own, it reflects an average brain; it does not know the individual.

COGNEX uses TRIBE to first map every stimulus and response into this shared "brain-like" space. Then the system learns how a specific person deviates from that baseline. Over multiple examples, it picks up patterns like how an individual amplifies threat, suppresses emotion, or shifts toward strategy. So TRIBE gives the common reference frame, and COGNEX fits it to an individual by learning their unique transformation on top of it.

Once that mapping is learned, you can simulate new scenarios or test different messages and see what kind of response they trigger. For intelligence work, this helps with planning: you can model how someone might react before acting, whether it's a threat, negotiation, or public message. It's not perfect, but it gives a structured way to think ahead instead of guessing.

Here's the demo link. Open sourcing soon. [link]
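The "learning an individual's transformation on top of the baseline" step could be sketched roughly as follows. This is a minimal illustration, not COGNEX's actual code: the embeddings here are random stand-ins for whatever vectors TRIBE would produce, the dimension and pair count are invented, and the individual's deviation is modeled as a simple ridge-regularised linear map (one plausible choice among many).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for TRIBE embeddings: in the described pipeline, each stimulus
# and each observed response would be encoded into a shared "brain-like"
# vector space. Random vectors are used here purely to show the fitting step.
D = 32                      # embedding dimension (assumed)
n_pairs = 200               # observed stimulus–response pairs (assumed)
S = rng.normal(size=(n_pairs, D))                      # stimulus embeddings
true_map = np.eye(D) + 0.1 * rng.normal(size=(D, D))   # the person's "transform"
R = S @ true_map.T + 0.05 * rng.normal(size=(n_pairs, D))  # response embeddings

# Model the individual's deviation from baseline as a linear map W with
# response ≈ W @ stimulus, fit by ridge regression:
#   W^T = (S^T S + lam I)^{-1} S^T R
lam = 1e-2
W = np.linalg.solve(S.T @ S + lam * np.eye(D), S.T @ R).T

# With enough clean pairs, the fit roughly recovers the planted transform.
err = np.linalg.norm(W - true_map) / np.linalg.norm(true_map)
print(f"relative recovery error: {err:.3f}")
```

The identity-plus-perturbation form of `true_map` mirrors the post's framing: the person is mostly the population baseline, plus a learned individual deviation.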
Building behavioural response models of public figures using brain scan data (predict their next move using psychological modelling) [P]
Reddit r/MachineLearning / 4/5/2026
Key Points
- A developer describes a project called COGNEX that aims to learn how a specific person responds to situations by training on paired stimulus–response data and converting both into structured signal representations.
- The approach uses Meta’s recently released TRIBE v2 (trained on fMRI scan data) to map new stimuli and observed responses into a shared “brain-like” embedding space, then learns how an individual deviates from the population baseline.
- The system is framed as enabling simulation of how a person might react to new scenarios or different messages by learning individual patterns such as amplifying threat or suppressing emotion.
- The author claims potential applications for planning (including threat assessment, negotiation, and public messaging), while noting the method is not perfect and is meant to provide a structured alternative to intuition.
- A demo is provided via a video link, and the author says they plan to open-source the project soon.
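The simulation step summarised above, testing different messages and seeing what response they trigger, could look like the sketch below. Everything here is hypothetical: `W` stands for an already-learned individual transform, and the stimulus and candidate-response vectors are random placeholders for TRIBE embeddings. Candidates are scored by cosine similarity in the shared space.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 32  # embedding dimension (assumed)

# Hypothetical learned map W: the individual's deviation from the TRIBE
# baseline (a synthetic matrix here, purely for illustration).
W = np.eye(D) + 0.1 * rng.normal(size=(D, D))

def simulate_response(stimulus_emb, candidate_embs, W):
    """Rank candidate responses for a new scenario: push the stimulus
    through the individual's transform, then score each candidate by
    cosine similarity in the shared embedding space."""
    predicted = W @ stimulus_emb
    predicted /= np.linalg.norm(predicted)
    cands = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = cands @ predicted
    return int(np.argmax(scores)), scores

# A new scenario plus three candidate reactions (random stand-ins for
# TRIBE embeddings); candidate 2 is planted near the predicted reaction.
stimulus = rng.normal(size=D)
candidates = rng.normal(size=(3, D))
candidates[2] = W @ stimulus + 0.1 * rng.normal(size=D)

best, scores = simulate_response(stimulus, candidates, W)
print("most likely response:", best)  # the planted candidate scores highest
```

This is only one way to operationalise "see what kind of response a message triggers"; the post does not say how COGNEX actually decodes or ranks predicted responses.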