CHASM: Unveiling Covert Advertisements on Chinese Social Media
arXiv cs.LG / 4/23/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper highlights a gap in existing LLM/social-media moderation benchmarks: they often miss covert advertisements that masquerade as ordinary user posts to covertly steer consumer purchases.
- Researchers introduce CHASM, a manually curated, anonymized multimodal dataset (4,992 instances) drawn from real scenarios on China’s Rednote platform, created with strict privacy and quality controls.
- Evaluation results show that, in both zero-shot and in-context learning settings, today's multimodal LLMs cannot reliably detect covert ads (a minimal sketch of such a zero-shot setup appears after this list).
- While fine-tuning open-source MLLMs on CHASM improves performance, the study finds that models still struggle to pick up subtle cues in comments and to distinguish nuanced visual and textual patterns.
- The authors provide detailed error analysis and call for better defenses from the research community and social-media moderators against this emerging threat.
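
The paper's exact prompts and models are not reproduced here, but conceptually its zero-shot setup amounts to handing a multimodal model a post's caption, image, and comments and asking for a binary verdict. Below is a minimal sketch assuming an OpenAI-style chat API; the prompt wording, the model choice, and the field names (`post_text`, `comments`, `image_path`) are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal zero-shot covert-ad detection sketch. NOT the paper's exact
# prompt or evaluated models; names and wording here are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are a content moderator. The following social-media post may be a "
    "covert advertisement disguised as an ordinary user post. Consider the "
    "caption, the image, and the comments, then answer with exactly one "
    "word: AD or NOT_AD."
)

def encode_image(path: str) -> str:
    """Read a local image file and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

def classify_post(post_text: str, comments: list[str], image_path: str) -> str:
    """Ask a multimodal chat model for a zero-shot AD / NOT_AD verdict."""
    user_content = [
        {"type": "text",
         "text": f"{PROMPT}\n\nPost: {post_text}\n"
                 f"Comments: {' | '.join(comments)}"},
        {"type": "image_url",
         "image_url": {"url": encode_image(image_path)}},
    ]
    resp = client.chat.completions.create(
        model="gpt-4o",  # any multimodal chat model; the paper tests several
        messages=[{"role": "user", "content": user_content}],
        max_tokens=5,
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# Example call on a hypothetical CHASM-style instance:
# label = classify_post("Loving this serum lately!! ✨",
#                       ["where did you buy it?", "link pls 🙏"],
#                       "post.jpg")
```

An in-context learning variant would prepend a few labeled example posts to the same prompt; the paper's finding is that neither mode yields reliable detection.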