ME-IQA: Memory-Enhanced Image Quality Assessment via Re-Ranking
arXiv cs.CV / 3/24/2026
Key Points
- The paper introduces ME-IQA, a plug-and-play, test-time memory-enhanced re-ranking framework for image quality assessment using reasoning-enabled vision-language models (VLMs).
- It builds a memory bank by retrieving semantically and perceptually aligned neighbors based on reasoning summaries, enabling more informative comparisons during inference.
- ME-IQA treats the VLM as a probabilistic comparator to produce pairwise preference probabilities and fuses ordinal evidence with the original scalar score using Thurstone’s Case V model.
- A gated reflection step and memory consolidation improve future decisions and mitigate discrete score collapse, yielding denser, distortion-sensitive predictions.
- Experiments on multiple IQA benchmarks report consistent gains over strong reasoning-induced VLM baselines, non-reasoning IQA methods, and other test-time scaling approaches.
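The ordinal-fusion step in the third bullet can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard Thurstone Case V mapping, under which a pairwise preference probability `p_k = P(query preferred over neighbor_k)` corresponds to a latent score gap `Φ⁻¹(p_k)` (unit comparison variance). Each memory-bank neighbor with known score `s_k` then yields an ordinal estimate `s_k + Φ⁻¹(p_k)` of the query's quality, which is averaged and blended with the VLM's original scalar score; the blending weight `alpha` and the function names are hypothetical.

```python
from statistics import NormalDist


def thurstone_fuse(scalar_score, neighbor_scores, pref_probs, alpha=0.5):
    """Fuse a scalar quality score with ordinal evidence from pairwise
    comparisons, via the Thurstone Case V mapping (a sketch, not the
    paper's exact method).

    scalar_score    -- the VLM's original score for the query image
    neighbor_scores -- known scores of retrieved memory-bank neighbors
    pref_probs      -- P(query preferred over neighbor_k), from the VLM
                       acting as a probabilistic comparator
    alpha           -- blend weight between scalar and ordinal estimates
    """
    inv_cdf = NormalDist().inv_cdf  # Φ⁻¹, the probit function
    eps = 1e-6  # clamp probabilities away from 0/1 to keep Φ⁻¹ finite
    estimates = [
        s_k + inv_cdf(min(max(p, eps), 1.0 - eps))
        for s_k, p in zip(neighbor_scores, pref_probs)
    ]
    ordinal_estimate = sum(estimates) / len(estimates)
    return alpha * scalar_score + (1.0 - alpha) * ordinal_estimate
```

With symmetric neighbors and 50/50 preferences the fused score reproduces the scalar score, while confident preferences shift it continuously, which is how this kind of fusion can densify otherwise discrete score outputs.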