Learning the Cue or Learning the Word? Analyzing Generalization in Metaphor Detection for Verbs
arXiv cs.CL / 4/16/2026
Key Points
- The paper investigates whether state-of-the-art metaphor detection models generalize via transferable context patterns or rely on lexical memorization of verbs.
- Using RoBERTa as a common backbone and the VU Amsterdam Metaphor Corpus, the authors run a lexical hold-out experiment that removes target verb lemmas from fine-tuning and compares performance on exposed vs. held-out verbs (see the first sketch after this list).
- Results show the model scores highest on exposed lemmas but still performs robustly on held-out lemmas, indicating meaningful generalization beyond seen words.
- Additional analysis finds that sentence-context features largely reproduce full-model performance on held-out lemmas, whereas static verb-level embeddings do not (the second sketch after this list illustrates this kind of probing comparison).
- The findings support a “learning the cue” view as the primary driver of generalization, with “learning the word” acting as an additive benefit when lexical exposure is present.
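The lexical hold-out setup can be pictured with a short sketch. This is a minimal illustration, not the paper's code: the field names (`sentence`, `verb_lemma`, `label`), the hold-out fraction, and the exposed-test reserve are all assumptions.

```python
# Hypothetical sketch of a lexical hold-out split: a fraction of verb
# lemmas is removed from fine-tuning entirely, so the model is tested on
# both exposed and never-seen lemmas. Field names and ratios are assumed.
import random

def lexical_holdout_split(examples, holdout_frac=0.2, seed=0):
    """examples: list of dicts with keys 'sentence', 'verb_lemma', 'label'.
    Returns (train, test_exposed, test_heldout)."""
    rng = random.Random(seed)
    lemmas = sorted({ex["verb_lemma"] for ex in examples})
    rng.shuffle(lemmas)
    heldout = set(lemmas[: int(len(lemmas) * holdout_frac)])

    train, test_exposed, test_heldout = [], [], []
    for ex in examples:
        if ex["verb_lemma"] in heldout:
            # Held-out lemmas never appear in the fine-tuning data.
            test_heldout.append(ex)
        elif rng.random() < 0.1:
            # A small slice of exposed lemmas is reserved for evaluation.
            test_exposed.append(ex)
        else:
            train.append(ex)
    return train, test_exposed, test_heldout
```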
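The second analysis contrasts what the model can do with sentence context alone versus a static representation of the verb. The sketch below is only an assumed stand-in for that comparison: the feature extractors `context_feats` and `lemma_embed` are hypothetical callables (e.g., pooled RoBERTa states over the sentence vs. a static lemma embedding), and a simple logistic-regression probe replaces the full model.

```python
# Hypothetical probing comparison on held-out lemmas: train a linear
# classifier on (a) sentence-context features and (b) static verb-lemma
# embeddings, then compare F1. Extractors are placeholders, not the
# paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def probe_f1(train_X, train_y, test_X, test_y):
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    return f1_score(test_y, clf.predict(test_X))

def run_comparison(train, test_heldout, context_feats, lemma_embed):
    y_tr = np.array([ex["label"] for ex in train])
    y_te = np.array([ex["label"] for ex in test_heldout])
    ctx_f1 = probe_f1(np.stack([context_feats(e) for e in train]), y_tr,
                      np.stack([context_feats(e) for e in test_heldout]), y_te)
    lex_f1 = probe_f1(np.stack([lemma_embed(e) for e in train]), y_tr,
                      np.stack([lemma_embed(e) for e in test_heldout]), y_te)
    return ctx_f1, lex_f1  # context features vs. static verb embeddings
```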