From Codebooks to VLMs: Evaluating Automated Visual Discourse Analysis for Climate Change on Social Media

arXiv cs.CV / 4/24/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • The paper proposes a framework for using computer vision and vision-language models to analyze climate-change discourse on social media images at scale.
  • It benchmarks six promptable VLMs and 15 zero-shot CLIP-like models on two X (Twitter) datasets, covering five annotation dimensions such as climate actions, consequences, and image context.
  • Gemini-3.1-flash-lite achieves the best overall performance across categories and both datasets, with relatively small performance gaps versus some moderately sized open-weight models.
  • The authors argue that distribution-level evaluation can recover population trends even when per-image accuracy is only moderate, enabling scalable discourse analysis.
  • They report that chain-of-thought prompting hurts performance, while prompt designs tailored to specific annotation dimensions improve results, and they release tweet IDs/labels and code for reproducibility.
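The distribution-level evaluation idea can be illustrated with a small simulation (not the paper's code; the label set, accuracy values, and confusion matrix below are hypothetical). Even when a model labels individual images with only moderate accuracy, the population-level class proportions can be recovered by estimating the model's confusion matrix on a validated subset and inverting it against the raw predicted distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical label set and true population distribution (illustrative only).
labels = ["climate action", "consequences", "other"]
p_true = np.array([0.5, 0.3, 0.2])

# Moderate per-image accuracy, expressed as a row-stochastic confusion matrix:
# C[i, j] = P(model predicts label j | true label i). Diagonal = 0.70.
C = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.70, 0.15],
    [0.10, 0.20, 0.70],
])

# Simulate noisy per-image predictions over a large corpus.
n = 50_000
truths = rng.choice(3, size=n, p=p_true)
preds = np.array([rng.choice(3, p=C[t]) for t in truths])

# The raw predicted distribution is biased toward the noise...
p_pred = np.bincount(preds, minlength=3) / n

# ...but since p_pred ≈ C^T @ p_true, the population-level proportions can
# be recovered by solving the linear system with the estimated C.
p_corrected = np.linalg.solve(C.T, p_pred)
print(dict(zip(labels, p_corrected.round(3))))
```

In this sketch the per-image accuracy is only 70%, yet the corrected proportions land close to the true 0.5/0.3/0.2 split, which is the sense in which moderate instance-level accuracy can still support trend analysis at scale.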

Abstract

Social media platforms have become primary arenas for climate communication, generating millions of images and posts that - if systematically analysed - can reveal which communication strategies mobilise public concern and which fall flat. We aim to facilitate such research by analysing how computer vision methods can be used for social media discourse analysis. This analysis includes application-based taxonomy design, model selection, prompt engineering, and validation. We benchmark six promptable vision-language models and 15 zero-shot CLIP-like models on two datasets from X (formerly Twitter) - a 1,038-image expert-annotated set and a larger corpus of over 1.2 million images, with 50,000 labels manually validated - spanning five annotation dimensions: animal content, climate change consequences, climate action, image setting, and image type. Among the models benchmarked, Gemini-3.1-flash-lite outperforms all others across all super-categories and both datasets, while the gap to open-weight models of moderate size remains relatively small. Beyond instance-level metrics, we advocate for distributional evaluation: VLM predictions can reliably recover population-level trends even when per-image accuracy is moderate, making them a viable starting point for discourse analysis at scale. We find that chain-of-thought reasoning reduces rather than improves performance, and that annotation-dimension-specific prompt design improves performance. We release tweet IDs and labels along with our code at https://github.com/KathPra/Codebooks2VLMs.git.