HumorGen: Cognitive Synergy for Humor Generation in Large Language Models via Persona-Based Distillation
arXiv cs.CL / 4/14/2026
Key Points
- Humor generation is difficult for standard LLMs because next-token prediction training discourages the surprise and incongruity that comedy relies on.
- The paper proposes the “Cognitive Synergy Framework”: a Mixture-of-Thought setup in which six persona-based cognitive perspectives, each grounded in a psychological theory of humor, synthesize diverse training data.
- A theoretically grounded dataset produced by these personas is used to fine-tune a 7B-parameter student model.
- The authors compare training methods, finding that their 7B model strongly outperforms larger instruction-tuned baselines and performs competitively with state-of-the-art proprietary models.
- The work concludes that persona-driven cognitive data curation is more important than alignment algorithms or sheer model scale for achieving strong humor generation quality.
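The data-curation step described above can be sketched as a small pipeline: each of several personas, mapped to a humor theory, produces completions for a pool of topics, and the crossed results form the distillation dataset. This is a minimal illustration, not the paper's implementation; the persona names, the `generate()` stub (a stand-in for a teacher-LLM call), and the record schema are all assumptions.

```python
# Hypothetical sketch of persona-based humor data synthesis: six cognitive
# personas (illustrative names, loosely mapped to classic humor theories)
# each turn a topic into a training example, and the pooled results become
# the dataset used to fine-tune the student model.

PERSONAS = [
    "incongruity-resolver",   # sets up an expectation, then violates it
    "superiority-satirist",   # mocks a target's flaws
    "relief-jokester",        # defuses tension with absurdity
    "benign-violator",        # makes a norm violation feel safe
    "wordplay-punner",        # exploits double meanings
    "absurdist-surrealist",   # escalates into the illogical
]

def generate(persona: str, topic: str) -> str:
    """Stand-in for a teacher-LLM call; a real pipeline would prompt a
    strong model with a persona-specific system prompt."""
    return f"[{persona}] joke about {topic}"

def synthesize_dataset(topics: list[str]) -> list[dict]:
    """Cross every topic with every persona for diverse supervision."""
    return [
        {"persona": p, "topic": t, "completion": generate(p, t)}
        for t in topics
        for p in PERSONAS
    ]

dataset = synthesize_dataset(["airports", "meetings"])
print(len(dataset))  # 2 topics x 6 personas = 12 examples
```

In a real run, the resulting records would be formatted as instruction-completion pairs for supervised fine-tuning of the 7B student; the per-persona structure is what gives the dataset its theoretical grounding.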