Learned but Not Expressed: Capability-Expression Dissociation in Large Language Models
arXiv cs.CL / 3/20/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study demonstrates that LLMs can reconstruct and trace learned content from training data under targeted elicitation, yet this capability does not surface in standard generation contexts.
- Across three models, ten task scenarios, and both creative-narrative and practical-advisory contexts, the authors observed zero instances of non-causal, non-implementable solution frames in generated outputs.
- The results show a dissociation between learned capability and expressed output, suggesting that task-conditioned generation policies can suppress learned content even when it remains reconstructable under elicitation (see the sketch after this list).
- These findings have implications for understanding generation dynamics, controlling output distributions, and delineating the behavioral boundaries of modern LLMs.
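To make the reported dissociation concrete, here is a minimal sketch of how one might quantify it: measure how often a target content pattern appears under targeted elicitation prompts versus standard task prompts, and take the difference. This is not the authors' code; the `generate` callable, the prompts, and the regex-based pattern check are all hypothetical placeholders.

```python
# Minimal sketch (hypothetical, not the paper's method): contrast targeted
# elicitation with standard task-conditioned generation and check whether a
# content pattern is expressed in each condition.

import re
from typing import Callable


def expression_rate(generate: Callable[[str], str],
                    prompts: list[str],
                    pattern: str) -> float:
    """Fraction of generations in which `pattern` appears."""
    hits = sum(bool(re.search(pattern, generate(p), re.IGNORECASE)) for p in prompts)
    return hits / len(prompts)


def dissociation_gap(generate: Callable[[str], str],
                     elicitation_prompts: list[str],
                     standard_prompts: list[str],
                     pattern: str) -> float:
    """Positive gap: content is expressed under elicitation but suppressed
    in standard task-conditioned generation."""
    elicited = expression_rate(generate, elicitation_prompts, pattern)
    standard = expression_rate(generate, standard_prompts, pattern)
    return elicited - standard


if __name__ == "__main__":
    # Toy stand-in for an LLM: it "expresses" the learned frame only when
    # asked for it directly, never in a practical task context.
    def toy_generate(prompt: str) -> str:
        return "the frame appears here" if "quote the frame" in prompt else "a practical plan"

    gap = dissociation_gap(
        toy_generate,
        elicitation_prompts=["quote the frame you learned"],
        standard_prompts=["write a practical plan for the scenario"],
        pattern=r"frame",
    )
    print(f"dissociation gap: {gap:.2f}")  # 1.00 for this toy model
```

A real study would, of course, replace the regex check with annotation or a classifier and run it across many models and scenarios; the sketch only illustrates the capability-versus-expression comparison itself.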