ConeSep: Cone-based Robust Noise-Unlearning Compositional Network for Composed Image Retrieval
arXiv cs.CV / 4/23/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper addresses Composed Image Retrieval (CIR), where training depends on costly and error-prone triplet annotations, focusing on the "Noisy Triplet Correspondence" (NTC) problem caused by mislabeled triplets.
- It shows that one particular type of NTC noise, "hard noise" (the reference and target images are very similar but the modification text is wrong), breaks a key assumption relied on by existing noise correspondence learning methods.
- The authors dissect three overlooked difficulties in NTC: Modality Suppression, Negative Anchor Deficiency, and Unlearning Backlash, explaining why prior approaches struggle.
- To overcome these issues, they propose ConeSep, which includes Geometric Fidelity Quantization to estimate a noise boundary, Negative Boundary Learning to learn an explicit opposite anchor, and Boundary-based Targeted Unlearning formulated as an optimal transport problem.
- Experiments on FashionIQ and CIRR benchmarks indicate ConeSep achieves significantly better performance than current state-of-the-art noise-robust CIR methods, demonstrating both accuracy and robustness.
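The noise-screening idea summarized above can be sketched minimally: score each (reference image, modification text, target image) triplet by how well a composed query embedding matches the target embedding, then use an estimated boundary to separate likely-clean from likely-noisy triplets. This is an illustrative sketch under stated assumptions, not the paper's actual Geometric Fidelity Quantization: the `compose` function, the average-based fusion, and the fixed boundary value are all placeholders.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length for cosine-similarity scoring."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def compose(ref_embs, text_embs):
    """Placeholder fusion of reference-image and text embeddings.

    Real CIR models learn this fusion; simple averaging stands in here.
    """
    return l2_normalize(ref_embs + text_embs)

def triplet_scores(ref_embs, text_embs, tgt_embs):
    """Cosine similarity between each composed query and its target."""
    q = compose(ref_embs, text_embs)
    t = l2_normalize(tgt_embs)
    return np.sum(q * t, axis=1)

def split_clean_noisy(scores, boundary):
    """Flag triplets below the estimated noise boundary as suspect."""
    clean = scores >= boundary
    return clean, ~clean

# Toy example: the first triplet's composed query aligns with its target,
# the second points the opposite way (a deliberately mislabeled triplet).
refs = np.array([[1.0, 0.0], [1.0, 0.0]])
texts = np.array([[1.0, 0.0], [1.0, 0.0]])
tgts = np.array([[1.0, 0.0], [-1.0, 0.0]])
scores = triplet_scores(refs, texts, tgts)
clean, noisy = split_clean_noisy(scores, boundary=0.0)
```

In this toy setup the aligned triplet scores near +1 and the mislabeled one near -1, so a boundary of 0 cleanly separates them. The hard-noise case the paper targets is exactly where such a similarity-based split fails, since reference and target are near-identical even though the text is wrong, which is why ConeSep adds an explicit negative anchor and targeted unlearning on top of boundary estimation.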