CycleCap: Improving VLM Captioning Performance via Self-Supervised Cycle Consistency Fine-Tuning
arXiv cs.CV / March 20, 2026
Key Points
- CycleCap introduces a self-supervised fine-tuning scheme that uses cycle consistency between a vision-language model (VLM) and a text-to-image model to improve image captioning and reduce hallucinations.
- The approach employs Group Relative Policy Optimization (GRPO) with a reward computed online during training: the similarity between the original image and the image reconstructed from the generated caption (see the sketch after this list).
- It eliminates the need for curated image-text datasets by leveraging raw images as the training signal, guiding captions to be more grounded in visual content.
- Across four VLMs ranging from 1B to 7B parameters, CycleCap achieves consistent improvements on captioning and hallucination benchmarks, outperforming state-of-the-art methods that rely on supervised cycle-consistency training.
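The following is a minimal sketch of the training signal described above, assuming a group of candidate captions has already been sampled from the VLM being fine-tuned. The helpers `t2i.generate` (text-to-image reconstruction) and `image_encoder` (e.g., a CLIP-style embedding model) are hypothetical stand-ins, not names from the paper: the reward is the similarity between original and reconstructed image embeddings, and the GRPO advantage normalizes rewards within the group so no learned critic is needed.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_rewards(image, captions, t2i, image_encoder):
    """Score each sampled caption by how well a text-to-image model
    reconstructs the original image from it (hypothetical helpers)."""
    with torch.no_grad():
        ref = image_encoder(image)                 # embed the original image
        rewards = []
        for caption in captions:
            recon = t2i.generate(caption)          # caption -> reconstructed image
            rec = image_encoder(recon)             # embed the reconstruction
            # reward = cosine similarity of the two image embeddings
            rewards.append(F.cosine_similarity(ref, rec, dim=-1).mean())
    return torch.stack(rewards)

def grpo_advantages(rewards, eps=1e-6):
    """GRPO-style advantage: normalize rewards within the group of
    captions sampled for the same image, removing the need for a critic."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)
```

Because advantages are relative within each group, a caption only needs to describe the image better than its sampled siblings, which is what lets raw images, rather than curated image-text pairs, act as the sole supervision.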