Harmful Visual Content Manipulation Matters in Misinformation Detection Under Multimedia Scenarios
arXiv cs.LG / 2026/3/24
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper addresses multimodal misinformation detection (MMD) on social media, arguing that visual manipulation cues, and the intent behind them, are important indicators that many existing methods overlook.
- It proposes learning two feature types, manipulation features (whether the visual content has been altered) and intention features (whether the manipulation is harmful or harmless), to improve misinformation identification.
- Because labels to directly supervise these features are unavailable, the study derives weak supervision signals from supplementary image manipulation detection datasets and formulates both tasks as positive-unlabeled (PU) learning problems.
- Experiments on four widely used MMD datasets show that the proposed HAVC-M4D approach significantly and consistently improves performance over existing MMD methods.
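The PU formulation mentioned above can be illustrated with a minimal sketch. The code below trains a linear scorer on synthetic 1-D data using the unbiased PU risk estimator (risk rewritten so that only labeled positives and unlabeled samples are needed, given an assumed class prior). Everything here, the Gaussian data, the prior `pi`, and the logistic scorer, is a hypothetical stand-in for illustration, not the paper's actual HAVC-M4D model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D features: positives (e.g. known manipulations) vs. negatives.
pi = 0.4                                    # assumed class prior among unlabeled data
x_p = rng.normal(2.0, 1.0, 200)             # labeled positives
n_u = 800
x_n = rng.normal(-2.0, 1.0, n_u - int(pi * n_u))   # hidden negatives
x_u = np.concatenate([rng.normal(2.0, 1.0, int(pi * n_u)), x_n])  # unlabeled mix

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic loss l(z) = log(1 + exp(-z)); its derivative is -sigmoid(-z).
# Unbiased PU risk: R = pi*E_p[l(f)] + E_u[l(-f)] - pi*E_p[l(-f)]
w, b, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    zp, zu = w * x_p + b, w * x_u + b
    gp = -sigmoid(-zp)     # gradient factor for positives scored as +1
    gp_neg = sigmoid(zp)   # gradient factor for positives scored as -1 (chain rule)
    gu_neg = sigmoid(zu)   # gradient factor for unlabeled scored as -1
    dw = pi * np.mean(gp * x_p) + np.mean(gu_neg * x_u) - pi * np.mean(gp_neg * x_p)
    db = pi * np.mean(gp) + np.mean(gu_neg) - pi * np.mean(gp_neg)
    w, b = w - lr * dw, b - lr * db

# The learned scorer should separate the two Gaussians despite having no
# explicit negative labels.
recall_pos = np.mean(w * x_p + b > 0)
recall_neg = np.mean(w * x_n + b <= 0)
```

The key design point is the third risk term: since the unlabeled set is a mixture of positives and negatives, subtracting `pi*E_p[l(-f)]` from the unlabeled "treat-as-negative" risk yields an unbiased estimate of the true risk without any negative labels.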

