Learning from Compressed CT: Feature Attention Style Transfer and Structured Factorized Projections for Resource-Efficient Medical Image Analysis
arXiv cs.CV / 5/4/2026
Key Points
- The paper tackles the high computational cost of running AI on uncompressed chest CT volumes by training directly on JPEG-compressed CT data, enabling low-resource deployment and faster data transfer.
- It proposes Feature Attention Style Transfer (FAST), a knowledge-distillation framework that transfers both activation/attention patterns and structural relationships from high-fidelity CT models to encoders trained on compressed inputs.
- It introduces Structured Factorized Projection (SFP), a parameter-efficient projection-head approach using Block Tensor Train decomposition that cuts projection-head parameters by nearly half.
- The authors combine FAST and SFP into a contrastive learning pipeline called CT-Lite with a SigLIP-based multimodal alignment objective, achieving AUROC within 5–7% of an uncompressed-input baseline on multiple CT datasets.
- Overall, the results suggest that compressed CT can support accurate medical image analysis with substantially fewer parameters, improving feasibility for clinical settings with resource constraints.
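The FAST distillation objective described above can be sketched under the assumption that it resembles standard activation-attention transfer (matching per-sample spatial attention maps) combined with a relational term (matching pairwise feature similarities across the batch). The function names, the Gram-matrix relational term, and the loss weights below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def attention_map(feats):
    """Spatial attention map: channel-wise mean of squared activations,
    flattened and L2-normalized. feats has shape (B, C, H, W)."""
    a = (feats ** 2).mean(axis=1)                      # (B, H, W)
    a = a.reshape(a.shape[0], -1)                      # (B, H*W)
    return a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)

def fast_distill_loss(teacher_feats, student_feats, w_attn=1.0, w_rel=1.0):
    """Hypothetical FAST-style loss: an attention term matching teacher and
    student spatial attention maps, plus a structural term matching the
    batch-wise similarity (Gram) matrices of pooled features."""
    l_attn = np.mean(
        (attention_map(teacher_feats) - attention_map(student_feats)) ** 2
    )
    # Structural/relational term: compare pairwise cosine similarities
    # between samples in the batch, teacher vs. student.
    t = teacher_feats.mean(axis=(2, 3))                # (B, C) pooled
    s = student_feats.mean(axis=(2, 3))
    t = t / (np.linalg.norm(t, axis=1, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    l_rel = np.mean((t @ t.T - s @ s.T) ** 2)
    return w_attn * l_attn + w_rel * l_rel
```

In this reading, the student encoder (fed compressed CT) would minimize this loss against frozen features from the high-fidelity teacher, alongside the contrastive objective.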
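The parameter saving claimed for SFP can be illustrated with a minimal two-core tensor-train (TT) factorization of a projection weight. The concrete dimensions (768 → 256), the input/output reshapes, and the TT rank below are assumptions chosen only to show how a ~50% parameter reduction can arise; the paper's Block Tensor Train decomposition may differ in structure and size:

```python
import numpy as np

# Assumed shapes: a 768 -> 256 projection head, with the input dimension
# reshaped as 24*32 and the output as 16*16; rank 100 is illustrative.
M1, M2, N1, N2, RANK = 24, 32, 16, 16, 100

def tt_projection(rng):
    """Two TT cores replacing a dense (768, 256) weight matrix."""
    g1 = rng.normal(scale=0.02, size=(1, M1, N1, RANK))
    g2 = rng.normal(scale=0.02, size=(RANK, M2, N2, 1))
    return g1, g2

def tt_num_params(g1, g2):
    return g1.size + g2.size

def tt_to_dense(g1, g2):
    """Reconstruct the full weight: rows indexed by (i1, i2) over the input
    factors, columns by (j1, j2) over the output factors."""
    w = np.einsum('aijb,bklc->ikjl', g1, g2)           # (M1, M2, N1, N2)
    return w.reshape(M1 * M2, N1 * N2)                 # (768, 256)
```

With these sizes the cores hold 24·16·100 + 100·32·16 = 89,600 parameters versus 768·256 = 196,608 for the dense matrix, about 46% of the original, consistent with the "nearly half" reduction the key point cites.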