I Came, I Saw, I Explained: Benchmarking Multimodal LLMs on Figurative Meaning in Memes

arXiv cs.CL / 3/25/2026

Key Points

  • The study benchmarks eight state-of-the-art generative multimodal LLMs on detecting and explaining six types of figurative meaning in memes across three datasets.
  • Results show a pervasive bias: models tend to predict figurative meaning even when it is not present in the meme.
  • Human evaluation indicates that model explanations may not reliably support the predicted label and can be insufficiently faithful to the meme’s original content.
  • Qualitative analysis finds that correct label predictions are not necessarily accompanied by high-quality or content-faithful explanations.
  • The work highlights a key limitation of current MLLMs: their figurative interpretations of memes are not reliably grounded in the combined visual and textual content, which limits explainability in real multimodal settings.

Abstract

Internet memes represent a popular form of multimodal online communication and often use figurative elements to convey layered meaning through the combination of text and images. However, it remains largely unclear how multimodal large language models (MLLMs) combine and interpret visual and textual information to identify figurative meaning in memes. To address this gap, we evaluate eight state-of-the-art generative MLLMs across three datasets on their ability to detect and explain six types of figurative meaning. In addition, we conduct a human evaluation of the explanations generated by these MLLMs, assessing whether the provided reasoning supports the predicted label and whether it remains faithful to the original meme content. Our findings indicate that all models exhibit a strong bias toward associating memes with figurative meaning, even when no such meaning is present. Qualitative analysis further shows that correct predictions are not always accompanied by faithful explanations.
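
To make the evaluation protocol concrete, here is a minimal sketch of what a detect-and-explain benchmark loop over memes could look like. This is an illustration under stated assumptions, not the paper's actual harness: `query_mllm`, `MemeExample`, the category list, and the prompt wording are all hypothetical, and the paper's six figurative types are not named in this summary.

```python
"""
Minimal sketch of a detect-and-explain evaluation loop for figurative
meaning in memes. `query_mllm` is a hypothetical stand-in for whatever
API serves the model under test; categories and prompt wording are
illustrative assumptions, not the paper's actual protocol.
"""
from dataclasses import dataclass

# Hypothetical figurative-meaning labels; the paper evaluates six types,
# but their exact names are not given in this summary. "none" marks
# literal memes with no figurative meaning.
CATEGORIES = ["metaphor", "irony", "sarcasm", "hyperbole", "none"]


@dataclass
class MemeExample:
    image_path: str    # path to the meme image
    overlay_text: str  # text embedded in the meme
    gold_label: str    # annotated figurative type, or "none"


def build_prompt(meme: MemeExample) -> str:
    # Ask for a label plus an explanation grounded in the meme itself,
    # mirroring the paper's joint detect-and-explain setup.
    return (
        "You are shown a meme (image plus overlaid text).\n"
        f"Overlaid text: {meme.overlay_text!r}\n"
        f"Which figurative device, if any, does it use? "
        f"Options: {', '.join(CATEGORIES)}.\n"
        "Answer with one option, then a one-sentence explanation grounded "
        "in the meme's actual image and text."
    )


def parse_label(response: str) -> str:
    # Naive parsing: the first category mentioned in the response wins.
    lowered = response.lower()
    for cat in CATEGORIES:
        if cat in lowered:
            return cat
    return "none"


def evaluate(memes, query_mllm):
    """query_mllm(image_path, prompt) -> str is assumed, not a real API."""
    records = []
    for meme in memes:
        response = query_mllm(meme.image_path, build_prompt(meme))
        pred = parse_label(response)
        records.append({
            "gold": meme.gold_label,
            "pred": pred,
            "correct": pred == meme.gold_label,
            # Raw response is kept so the free-text explanation can go
            # to human raters for label-support and faithfulness checks.
            "explanation": response,
        })
    accuracy = sum(r["correct"] for r in records) / len(records)
    return accuracy, records
```

A loop like this also exposes the bias the paper reports: on memes annotated as literal ("none"), the fraction of non-"none" predictions directly measures how often a model hallucinates figurative meaning, while the stored explanations support the separate human evaluation of whether the reasoning backs the label and stays faithful to the meme.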