Prototype-Based Knowledge Guidance for Fine-Grained Structured Radiology Reporting
arXiv cs.AI / 3/13/2026
📰 News · Models & Research
Key Points
- ProtoSR integrates free-text-derived knowledge into structured radiology reporting via a multimodal knowledge base of visual prototypes aligned with the reporting template.
- The approach automatically extracts knowledge from 80k+ MIMIC-CXR studies using an instruction-tuned LLM to populate the knowledge base.
- ProtoSR retrieves relevant prototypes for a given image-question pair and augments predictions with a prototype-conditioned residual, acting as a data-driven second opinion.
- On the Rad-ReStruct benchmark, ProtoSR achieves state-of-the-art results, with the largest gains for detailed attribute questions, demonstrating the value of leveraging unstructured text signals for fine-grained image understanding.
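The retrieval-and-residual step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the softmax weighting over the top-k prototypes, and the `alpha`-scaled linear residual head are all assumptions made for the sketch.

```python
import numpy as np

def retrieve_prototypes(query, prototype_bank, k=3):
    """Cosine-similarity top-k retrieval over a bank of visual prototypes.

    query: (d,) embedding of the image-question pair (hypothetical encoder output).
    prototype_bank: (n, d) stored prototype embeddings.
    Returns a softmax-weighted combination of the top-k prototypes and their indices.
    """
    q = query / np.linalg.norm(query)
    bank = prototype_bank / np.linalg.norm(prototype_bank, axis=1, keepdims=True)
    sims = bank @ q                              # cosine similarities, shape (n,)
    top = np.argsort(sims)[-k:][::-1]            # indices of the k most similar prototypes
    w = np.exp(sims[top])
    w /= w.sum()                                 # softmax weights over the top-k (assumed)
    return w @ prototype_bank[top], top

def prototype_residual_logits(base_logits, proto_vec, W, alpha=0.5):
    """Add a prototype-conditioned residual to the base prediction.

    W projects the retrieved prototype into logit space; alpha controls how much
    the "second opinion" can shift the base model (both illustrative choices).
    """
    return base_logits + alpha * (W @ proto_vec)
```

In this reading, the base classifier answers each template question from the image alone, while the retrieved prototypes nudge the logits toward what visually similar training cases reported, which matches the "data-driven second opinion" framing above.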