MapSR: Prompt-Driven Land Cover Map Super-Resolution via Vision Foundation Models
arXiv cs.CV / 4/17/2026
Key Points
- MapSR tackles the high cost of dense high-resolution (HR) land-cover annotation by upgrading coarse low-resolution (LR) maps into HR outputs via prompt-driven map super-resolution, instead of repeatedly retraining dense predictors on LR labels.
- The method decouples supervision from training: LR labels are used only once, to derive class prompts from frozen vision-foundation-model features via a lightweight linear probe (see the probe sketch after this list).
- HR prediction is then training-free: cosine-similarity matching against the class prompts, followed by graph-based refinement for spatial consistency (both sketched below).
- On the Chesapeake Bay dataset, MapSR reaches 59.64% mIoU without any HR labels, outperforming a fully supervised baseline and staying competitive with the best weakly supervised approach.
- MapSR dramatically reduces compute needs, cutting trainable parameters by four orders of magnitude and shrinking training time from hours to minutes, supporting scalable HR mapping under tight annotation budgets.
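
The digest includes no code, so here is a minimal sketch of what the one-shot linear-probe step could look like. All names and sizes (`num_classes`, `feat_dim`, the training loop) are illustrative assumptions, not the authors' implementation; the only point is that the sole trainable component is a single linear layer over frozen features.

```python
# Hedged sketch: derive per-class prompts from frozen VFM features
# using a lightweight linear probe trained once on coarse LR labels.
import torch
import torch.nn as nn

num_classes, feat_dim = 6, 768          # assumed: 6 land-cover classes, ViT-B-size features

# Placeholder stand-ins for frozen VFM features at LR-labeled pixels,
# flattened to (N, feat_dim), with coarse LR labels of shape (N,).
feats = torch.randn(10_000, feat_dim)
lr_labels = torch.randint(0, num_classes, (10_000,))

# The only trainable component: one linear layer (the probe).
probe = nn.Linear(feat_dim, num_classes, bias=False)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                 # minutes, not hours, consistent with the digest's claim
    opt.zero_grad()
    loss = loss_fn(probe(feats), lr_labels)
    loss.backward()
    opt.step()

# The probe's weight rows then serve as class prompts in feature space:
# one L2-normalized prototype vector per land-cover class.
prompts = nn.functional.normalize(probe.weight.detach(), dim=1)  # (num_classes, feat_dim)
```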
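Given the prompts, the training-free HR prediction step reduces to normalized dot products between dense HR features and the prompts. A minimal sketch, reusing the assumed shapes above (the random tensors stand in for real frozen-VFM features and probe-derived prompts):

```python
# Hedged sketch: training-free HR prediction via cosine-similarity prompt matching.
import torch
import torch.nn.functional as F

H, W, feat_dim, num_classes = 64, 64, 768, 6            # illustrative sizes

hr_feats = F.normalize(torch.randn(H, W, feat_dim), dim=-1)         # dense frozen-VFM features
prompts = F.normalize(torch.randn(num_classes, feat_dim), dim=-1)   # stand-in for probe prompts

# After L2 normalization, cosine similarity is just a dot product.
sims = torch.einsum("hwc,kc->hwk", hr_feats, prompts)   # (H, W, num_classes)
hr_map = sims.argmax(dim=-1)                            # per-pixel class prediction
```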
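The digest names graph-based refinement without giving its formulation. As a loose stand-in, the sketch below smooths the class-similarity scores with a few rounds of label propagation over the 4-connected pixel grid, one common way to enforce spatial consistency; it is not necessarily the paper's method.

```python
# Hedged stand-in for graph-based refinement: iterative propagation of class
# scores over the 4-connected grid graph, blending each pixel with its neighbors.
import torch
import torch.nn.functional as F

def refine(scores: torch.Tensor, iters: int = 10, alpha: float = 0.5) -> torch.Tensor:
    """scores: (H, W, K) class-similarity map; returns a spatially smoothed copy."""
    K = scores.shape[-1]
    base = scores.permute(2, 0, 1).unsqueeze(0)          # (1, K, H, W), kept as the unary term
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]]) / 4.0          # average over the 4-neighborhood
    kernel = kernel.view(1, 1, 3, 3).repeat(K, 1, 1, 1)
    s = base
    for _ in range(iters):
        neigh = F.conv2d(s, kernel, padding=1, groups=K)  # per-class neighbor average
        s = alpha * base + (1 - alpha) * neigh            # pull scores toward neighbors
    return s.squeeze(0).permute(1, 2, 0)                  # back to (H, W, K)

refined = refine(sims)                                    # sims from the matching sketch above
hr_map = refined.argmax(dim=-1)
```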