Leveraging Image Editing Foundation Models for Data-Efficient CT Metal Artifact Reduction
arXiv cs.CV / 4/8/2026
Key Points
- The paper addresses metal artifact reduction in CT scans, noting that high-attenuation implants can severely degrade image quality and overwhelm standard deep learning approaches that need large paired datasets.
- It reframes artifact reduction as an in-context reasoning task by adapting a general-purpose vision-language diffusion foundation model using parameter-efficient LoRA, cutting data needs to just 16–128 paired examples (about two orders of magnitude reduction).
- The authors find that domain adaptation is essential to prevent hallucinations, because without adaptation the foundation model may misinterpret streak artifacts as real objects.
- To better ground restored anatomy, they introduce a multi-reference conditioning strategy that supplies clean anatomical exemplars from other subjects alongside the corrupted input for category-specific inference.
- Experiments on the AAPM CT-MAR benchmark show state-of-the-art performance on perceptual and radiological-feature metrics, and the authors have released their code.
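The parameter-efficient adaptation described above relies on LoRA, which freezes the pretrained weights and learns only a low-rank update. A minimal NumPy sketch of the idea (shapes, rank, and scaling here are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

# Hypothetical layer sizes; the paper adapts a vision-language diffusion model,
# whose real weight matrices are much larger.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 256, 256, 4, 8   # rank r << d_in gives the savings

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # zero-init: adapter starts as a no-op

def lora_forward(x):
    # Base path plus the low-rank update B @ A, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zeroed, the adapted layer reproduces the frozen base exactly.
assert np.allclose(lora_forward(x), W @ x)

# Only A and B are trained -- a small fraction of the full weight count,
# which is what makes fine-tuning on 16-128 paired examples plausible.
full_params, lora_params = W.size, A.size + B.size
print(f"full={full_params}, lora={lora_params}, ratio={lora_params/full_params:.3f}")
```

Because `B` is initialized to zero, training starts from the unmodified foundation model and only gradually injects domain-specific behavior, which is one reason LoRA is well suited to small paired datasets like the 16–128 examples used here.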