Why Digital Pathology Needs a Rebuild
A single pathology slide contains billions of pixels, and a pathologist might spend hours manually scanning through a Whole Slide Image (WSI) just to count mitotic figures or delineate tumor boundaries. Multiply this by dozens of cases per day, across different cancer types, with slides from different scanner vendors, and you have a workflow that is slow, subjective, and riddled with format incompatibilities.
My team and I built *[AI Pathology](https://ai-pathology.metaworldos.com)* to attack these three pain points head-on:
- ⚡ Speed: Turn hours of manual analysis into minutes.
- 🧬 Breadth: Support pan-cancer analysis out of the box.
- 🔬 Compatibility: Natively read every major WSI format without conversion.
Here is how we built it.
⚡ Speed: From Hours to Minutes
The most immediate value our platform delivers is raw speed. By offloading inference to GPU-accelerated pipelines (whether in the cloud or on-premise), we compressed tasks that used to take hours into seconds or minutes.
| Task | Traditional Manual Workflow | AI Pathology |
|---|---|---|
| Tumor Region Delineation | 30–60 minutes per slide | < 60 seconds |
| Mitotic Figure Counting | 1–2 hours per slide | < 2 minutes |
| Cell Classification (10k+ cells) | Practically infeasible manually | < 3 minutes |
| Report Drafting | 20–40 minutes | Auto-generated in seconds |
This isn’t just a convenience upgrade—it fundamentally changes what kinds of analyses are feasible at scale. Researchers can now run quantitative studies on thousands of slides, not dozens.
How we achieve this:
- Tile-based parallel inference: WSIs are tiled into patches and processed in parallel across GPU workers.
- Smart region-of-interest (ROI) prefiltering: We skip blank/background tiles to cut compute by up to 70%.
- Streaming results: Users see heatmaps and counts progressively as tiles complete, rather than waiting for the full slide.
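The three steps above can be sketched in a few lines. This is a minimal illustration, not the platform's actual pipeline: the tile size, the brightness cutoff, and names like `run_model` and `analyze` are assumptions made for the example.

```python
# Sketch of the speed pipeline: tile the slide, drop blank tiles,
# run the rest in parallel. `run_model` stands in for a GPU call.
from concurrent.futures import ThreadPoolExecutor

TILE = 512  # tile edge in pixels (illustrative)

def tile_grid(width, height, tile=TILE):
    """Yield top-left corners of every tile covering the slide."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y)

def is_background(tile_pixels, cutoff=0.9):
    """Treat mostly-white tiles (mean brightness above cutoff) as blank."""
    return sum(tile_pixels) / len(tile_pixels) > cutoff

def run_model(coord):
    """Placeholder for per-tile GPU inference."""
    return {"tile": coord, "tumor_prob": 0.5}

def analyze(width, height, read_tile, workers=8):
    # ROI prefiltering: skip background tiles before any GPU work.
    coords = [c for c in tile_grid(width, height)
              if not is_background(read_tile(c))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # In the real system, results stream back to the viewer
        # as tiles complete rather than being collected at the end.
        return list(pool.map(run_model, coords))
```

In production the map step would fan out to GPU workers and push partial results to the client, but the shape of the computation is the same.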
🧬 Pan-Cancer Analysis: One Platform, Many Tumor Types
Most pathology AI tools are narrowly scoped to a single cancer type. We took a different approach: build a modular model hub that covers the full spectrum of common solid tumors and hematologic cases, with a unified API.
Currently supported cancer types include:
| Category | Covered Cancer Types |
|---|---|
| Thoracic | Lung adenocarcinoma, squamous cell carcinoma |
| Breast | Invasive ductal carcinoma, HER2 / Ki-67 quantification |
| Gastrointestinal | Gastric, colorectal, esophageal, liver cancers |
| Urogenital | Prostate (Gleason grading), bladder, kidney cancers |
| Gynecologic | Cervical, endometrial, ovarian cancers |
| Hematologic | Lymphoma subtyping, bone marrow cellularity |
| Dermatologic | Melanoma, basal/squamous cell carcinoma |
Each cancer type has dedicated models for:
- Tumor region segmentation
- Cell-level classification
- Biomarker quantification (e.g., Ki-67, PD-L1, HER2)
- Morphological feature extraction
For rare diseases or research-specific needs, our no-code training pipeline lets labs bring their own annotated datasets and fine-tune custom models directly on the platform.
🔬 Universal Format Support: Breaking Vendor Lock-in
One of the biggest real-world headaches in digital pathology is that every scanner vendor uses its own proprietary format. Labs using multiple scanners typically juggle multiple desktop viewers, or waste hours converting files.
We built a high-performance universal WSI engine that natively handles 20+ formats with zero conversion.
| Vendor / Standard | Formats Supported |
|---|---|
| Aperio (Leica) | .svs, .tif |
| Hamamatsu | .ndpi, .vms, .vmu |
| 3DHISTECH | .mrxs |
| Leica | .scn |
| Olympus | .vsi |
| Philips | .isyntax, .tiff |
| Ventana (Roche) | .bif |
| Zeiss | .czi |
| Sakura | .svslide |
| DICOM | .dcm (WSI DICOM) |
| Generic | .tiff, .tif, pyramidal TIFF |
Under the hood:
- A unified tile server abstracts vendor-specific pyramid structures into a common API.
- Metadata (magnification, MPP, channels) is auto-normalized on ingestion.
- Deep zoom rendering streams tiles on-demand, so opening a 50GB .mrxs feels as smooth as opening a JPEG.
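One way to picture the unified tile server is as a registry of per-format readers behind a single pyramid interface. The sketch below is an assumption about the shape of such an abstraction, not our actual code; the class names, the hardcoded pyramid values, and the registry mechanism are all illustrative.

```python
# A common pyramid interface that viewers and AI pipelines code against,
# with vendor-specific readers (e.g. wrapping OpenSlide) registered per
# file extension. Values returned here are illustrative stand-ins.
from abc import ABC, abstractmethod

class SlideReader(ABC):
    @abstractmethod
    def level_count(self) -> int: ...

    @abstractmethod
    def read_tile(self, level: int, x: int, y: int) -> bytes: ...

    @abstractmethod
    def mpp(self) -> float:
        """Microns per pixel, normalized at ingestion."""

READERS = {}  # extension -> reader class

def register(ext):
    def wrap(cls):
        READERS[ext] = cls
        return cls
    return wrap

@register(".svs")  # Aperio; a real reader would delegate to OpenSlide
class SvsReader(SlideReader):
    def __init__(self, path):
        self.path = path

    def level_count(self):
        return 4          # illustrative pyramid depth

    def read_tile(self, level, x, y):
        return b""        # would decode a JPEG tile from the pyramid

    def mpp(self):
        return 0.25       # typical 40x scan, normalized metadata

def open_slide(path: str) -> SlideReader:
    ext = "." + path.rsplit(".", 1)[-1].lower()
    if ext not in READERS:
        raise ValueError(f"unsupported format: {ext}")
    return READERS[ext](path)
```

Because every reader normalizes magnification and MPP behind the same interface, the caching layer and the AI pipelines never need to know which vendor produced the slide.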
This means a lab can use Aperio for breast cases, Hamamatsu for lung cases, and Philips for prostate cases—and every slide opens in the same viewer with the same AI pipelines ready to run.
🏗️ Cloud & On-Premise: Deployment Built for Healthcare
Healthcare data rarely lives in the public cloud. Hospitals have strict HIPAA/GDPR requirements, and patient data often cannot leave internal networks. So we architected the platform to support both deployment modes from day one:
1. Managed Cloud (Zero Installation)
For independent researchers and small clinics, users access a GPU-backed cloud desktop (powered by Wuying Workspace) directly through the browser. No install, no local GPU needed—just open a tab and start analyzing.
2. On-Premise / Private Deployment
For hospitals and large research institutions, we ship the entire stack as Docker containers (Kubernetes or Docker Compose orchestration). The frontend, backend, database, and AI inference engines all run behind the hospital firewall, using local GPU clusters for inference. 100% data sovereignty, no external dependencies.
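For a sense of what the Compose layout looks like, here is a hedged sketch. The service names, images, and port mappings are assumptions for illustration, not our shipped manifests:

```yaml
# Illustrative on-prem layout: everything behind the hospital firewall.
services:
  web:                       # Next.js frontend + API
    image: ai-pathology/web:latest
    ports: ["443:3000"]
  db:
    image: postgres:16
    volumes: ["pgdata:/var/lib/postgresql/data"]
  inference:                 # GPU worker pool
    image: ai-pathology/inference:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  pgdata:
```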
🛠️ The Tech Stack
- Frontend & API: Next.js (App Router) + TypeScript
- Database & ORM: Prisma + PostgreSQL
- Styling: Tailwind CSS
- AI Inference: PyTorch + ONNX Runtime, served via containerized GPU workers
- WSI Engine: Custom tile server built on top of OpenSlide + libvips
- Reporting: LLM integration (OpenAI) for auto-generated structured reports
- Hosting (Cloud): Vercel for the app layer; GPU nodes for inference
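As a hedged sketch of how the reporting step is wired: quantitative AI outputs are gathered into a structured payload before the LLM turns them into prose. The field names and prompt wording below are illustrative assumptions, not our production schema.

```python
# Collect quantitative findings into a structured payload, then build
# the prompt an LLM would expand into a draft report.
import json

def build_report_prompt(findings: dict) -> str:
    payload = json.dumps(findings, indent=2)
    return (
        "You are a pathology reporting assistant. Draft a structured "
        "report from these quantitative findings:\n" + payload
    )

prompt = build_report_prompt({
    "tumor_area_pct": 34.2,          # illustrative values
    "mitotic_count_per_10hpf": 7,
    "ki67_index_pct": 18.5,
})
```

Keeping the findings structured (rather than pasting raw model output) makes the generated reports auditable: every number in the prose traces back to a field in the payload.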
🤔 Looking for Feedback
We are offering a Free Trial via our Cloud Desktop—you can load a sample WSI, run a tumor segmentation model, and get a full report in under 5 minutes, entirely in your browser.
🔗 Try it here: [AI Pathology](https://ai-pathology.metaworldos.com)
Questions for the Dev.to community:
- Anyone else building tile servers for gigapixel imagery? How do you handle caching across multiple proprietary formats?
- For those shipping both SaaS and on-prem versions of an AI product—how do you handle model updates and versioning for air-gapped deployments?
- Thoughts on using DaaS (Cloud Desktop) as a delivery mechanism for heavy AI applications?
Let me know your thoughts in the comments! 👇