Unified Multi-Foundation-Model Slide Representation for Pan-Cancer Recognition and Text-Guided Tumor Localization
arXiv cs.CV / 4/28/2026
Key Points
- The paper introduces ASTRA, a pan-cancer framework that unifies fragmented tile-level representations from multiple pathology foundation models into a shared slide-level representation space for clinical-grade slide reasoning.
- ASTRA semantically anchors this shared space using structured pathology metadata fields (classification category, cancer type, and anatomic site), enabling interpretability and text-guided localization.
- The method uses sparse mixture-of-experts contextualization, masked multi-model reconstruction, and contrastive alignment to structured pathology prompts to learn slide representations supporting multi-level classification and weakly supervised tumor localization.
- Trained on 10,359 whole-slide images spanning 16 tumor types from the CHTN cohort, ASTRA improves pan-cancer classification across four foundation-model backbones, reaching macro-AUCs of up to 97.8% (4-category classification), 99.7% (3-class typing), and 99.2% (16-class typing).
- For localization, ASTRA achieves a mean Dice score of 0.897 on an in-domain annotated subset (n=380) and 0.738 on an external TCGA subset (n=1,686), showing strong generalization without pixel-level supervision.
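The contrastive alignment step described above pairs each slide embedding with an embedding of its structured pathology prompt (classification category, cancer type, anatomic site). A minimal CLIP-style sketch of such an alignment objective is below; the function name, temperature value, and symmetric InfoNCE formulation are illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit hypersphere before comparison.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_alignment_loss(slide_emb, prompt_emb, temperature=0.07):
    """Symmetric InfoNCE loss (CLIP-style sketch, not the paper's exact loss).

    slide_emb:  (N, D) slide-level representations.
    prompt_emb: (N, D) text embeddings of the matching structured prompts;
                row i of each array is assumed to be a matched pair.
    """
    s = l2_normalize(slide_emb)
    p = l2_normalize(prompt_emb)
    logits = s @ p.T / temperature        # (N, N) cosine-similarity matrix
    labels = np.arange(len(s))            # matched pairs lie on the diagonal

    def cross_entropy(lg):
        # Numerically stable log-softmax over each row.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the slide-to-prompt and prompt-to-slide directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In this sketch the loss is near zero when each slide embedding is closest to its own prompt embedding and grows as mismatched pairs become more similar, which is what lets the learned space support the text-guided localization the paper reports.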