Large Language Models Align with the Human Brain during Creative Thinking
arXiv cs.CL / 4/7/2026
Key Points
- Using Representational Similarity Analysis (RSA), the study compares fMRI data from 170 participants performing the Alternate Uses Task against large language model (LLM) representations to measure how the two align during creative thinking.
- Brain-LLM alignment is found to scale with LLM size (notably in the default mode network) and with idea originality (in both the default mode and frontoparietal networks), with the strongest alignment effects occurring early in the creative process.
- Different post-training objectives produce distinct, functionally selective changes in alignment: a creativity-optimized Llama variant preserves alignment with high-creativity neural responses while weakening alignment with low-creativity ones.
- A model fine-tuned to human behavior increases alignment with both high- and low-creativity neural responses, whereas a reasoning-trained variant shifts alignment away from the creative neural geometry toward more analytical processing patterns.
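The RSA procedure named above can be sketched at a high level: build a representational dissimilarity matrix (RDM) from each system's per-item representations, then correlate the two RDMs. The snippet below is a minimal illustration with synthetic stand-ins for the paper's LLM embeddings and fMRI voxel patterns; the array shapes and variable names are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one row per generated idea.
n_ideas, emb_dim, n_voxels = 40, 64, 200
llm_embeddings = rng.standard_normal((n_ideas, emb_dim))   # LLM representation of each idea
brain_patterns = rng.standard_normal((n_ideas, n_voxels))  # fMRI voxel pattern for each idea

# RDM: pairwise correlation distance between item representations,
# returned by pdist as a condensed upper-triangle vector.
llm_rdm = pdist(llm_embeddings, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# RSA alignment: rank correlation between the two dissimilarity structures.
alignment, _ = spearmanr(llm_rdm, brain_rdm)
print(f"brain-LLM RSA alignment: {alignment:.3f}")
```

Because RSA compares dissimilarity *structure* rather than raw activations, it sidesteps the dimensionality mismatch between model embeddings and voxel responses, which is why it is a standard choice for brain-model comparisons like this one.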




