Learning to Think Like a Cartoon Captionist: Incongruity-Resolution Supervision for Multimodal Humor Understanding

arXiv cs.AI / 4/17/2026


Key Points

  • The paper argues that multimodal humor understanding requires correct reasoning processes, not just accurate black-box predictions on benchmarks like the New Yorker Cartoon Caption Contest (NYCC).
  • It proposes IRS (Incongruity-Resolution Supervision), which breaks humor understanding into incongruity modeling (detect visual mismatches), resolution modeling (form coherent reinterpretations), and preference alignment (score candidates against human judgments).
  • The method uses structured intermediate "reasoning traces" derived from captionist expertise to make the path from perception to humorous interpretation explicit and learnable during training.
  • Experiments across 7B, 32B, and 72B models on NYCC show IRS improves caption matching and ranking over strong multimodal baselines, with the largest model nearing expert-level ranking performance.
  • Zero-shot transfer to other benchmarks suggests IRS captures generalizable reasoning patterns, indicating that supervising reasoning structure can be more important than scaling alone for reasoning-centric tasks.

Abstract

Humor is one of the few cognitive tasks where getting the reasoning right matters as much as getting the answer right. While recent work evaluates humor understanding on benchmarks such as the New Yorker Cartoon Caption Contest (NYCC), it largely treats the task as black-box prediction, overlooking the structured reasoning processes underlying humor comprehension. We introduce IRS (Incongruity-Resolution Supervision), a framework that decomposes humor understanding into three components: incongruity modeling, which identifies mismatches in the visual scene; resolution modeling, which constructs coherent reinterpretations of these mismatches; and preference alignment, which evaluates candidate interpretations under human judgments. Grounded in incongruity-resolution theory and expert captionist practice, IRS supervises intermediate reasoning processes through structured traces that make the path from visual perception to humorous interpretation explicit and learnable. Across 7B, 32B, and 72B models on NYCC, IRS outperforms strong open and closed multimodal baselines across caption matching and ranking tasks, with our largest model approaching expert-level performance on ranking. Zero-shot transfer to external benchmarks shows that IRS learns generalizable reasoning patterns. Our results suggest that supervising reasoning structure, rather than scale alone, is key for reasoning-centric tasks.
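To make the three-component decomposition concrete, the sketch below models an IRS-style structured reasoning trace and a toy preference-alignment step. The `ReasoningTrace` schema, field names, and the win-counting ranker are illustrative assumptions, not the paper's actual data format or training objective; they only show how an incongruity, its resolution, and a candidate caption could be chained and then ordered by human pairwise judgments.

```python
from dataclasses import dataclass

@dataclass
class ReasoningTrace:
    """Hypothetical IRS-style trace: the explicit path from visual
    perception to humorous interpretation (schema is illustrative)."""
    incongruity: str   # mismatch detected in the visual scene
    resolution: str    # coherent reinterpretation of the mismatch
    caption: str       # candidate caption grounded in the resolution

def rank_by_preference(candidates, pairwise_wins):
    """Toy preference alignment: order candidate captions by how often
    human judges preferred them in pairwise comparisons."""
    wins = {c.caption: 0 for c in candidates}
    for winner, _loser in pairwise_wins:
        wins[winner] += 1
    return sorted(candidates, key=lambda c: wins[c.caption], reverse=True)

# Toy example: a cartoon of a dog chairing a board meeting.
traces = [
    ReasoningTrace("A dog sits at the head of a boardroom table",
                   "The dog is treated as a legitimate executive",
                   "Let's circle back after walkies."),
    ReasoningTrace("A dog sits at the head of a boardroom table",
                   "The humans defer to the dog's business instincts",
                   "He has a strong nose for the market."),
]
# One recorded human judgment: the second caption beat the first.
ranked = rank_by_preference(traces, [(traces[1].caption, traces[0].caption)])
```

In the paper's actual setting, the supervision is learned by multimodal models from expert-derived traces rather than computed by a hand-written ranker; this snippet only fixes the data flow the summary describes.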