AI Navigate

Meissa: Multi-modal Medical Agentic Intelligence

arXiv cs.AI / 3/11/2026

Tools & Practical Usage · Models & Research

Key Points

  • Meissa is a lightweight 4-billion-parameter multi-modal large language model designed for medical applications that operates fully offline, addressing cost, latency, and privacy issues of API-dependent frontier models.
  • The model introduces unified trajectory modeling to represent reasoning and action traces within a single formalism, enabling generalization across diverse medical environments.
  • Meissa employs three-tier stratified supervision, in which the model's own errors trigger progressive escalation from direct reasoning to tool-augmented and multi-agent interaction, teaching difficulty-aware strategy selection.
  • The approach includes prospective-retrospective supervision, combining forward exploratory traces with hindsight-executed rationales for stable and effective policy learning.
  • Trained on 40,000 curated trajectories, Meissa matches or exceeds leading proprietary agents in 10 of 16 evaluation settings across 13 medical benchmarks, using over 25x fewer parameters and 22x lower end-to-end latency than API-based frontier deployments.
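The unified trajectory modeling described above casts every reasoning or action trace as a sequence of state-action-observation steps. A minimal sketch of what such a formalism might look like is below; all class names, field names, and the serialization format are illustrative assumptions, not the authors' actual data schema.

```python
# Hedged sketch of a state-action-observation trajectory encoding.
# Every name here (Step, Trajectory, to_training_text) is an assumption
# for illustration -- the paper's real format may differ.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    state: str        # current context, e.g. image findings plus dialogue so far
    action: str       # a reasoning step, tool call, or agent handoff
    observation: str  # environment feedback, e.g. tool output or agent reply


@dataclass
class Trajectory:
    task: str
    steps: List[Step] = field(default_factory=list)

    def to_training_text(self) -> str:
        """Serialize the trace into one flat sequence, so a single model
        can be supervised on heterogeneous environments with one format."""
        lines = [f"TASK: {self.task}"]
        for s in self.steps:
            lines += [f"STATE: {s.state}",
                      f"ACTION: {s.action}",
                      f"OBS: {s.observation}"]
        return "\n".join(lines)
```

Flattening diverse environments (radiology tools, pathology viewers, multi-agent chats) into one textual formalism like this is what would let a single 4B model generalize across them.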

Computer Science > Artificial Intelligence

arXiv:2603.09018 (cs)
[Submitted on 9 Mar 2026]

Title: Meissa: Multi-modal Medical Agentic Intelligence

Abstract: Multi-modal large language models (MM-LLMs) have shown strong performance in medical image understanding and clinical reasoning. Recent medical agent systems extend them with tool use and multi-agent collaboration, enabling complex decision-making. However, these systems rely almost entirely on frontier models (e.g., GPT), whose API-based deployment incurs high cost, high latency, and privacy risks that conflict with on-premise clinical requirements. We present Meissa, a lightweight 4B-parameter medical MM-LLM that brings agentic capability offline. Instead of imitating static answers, Meissa learns both when to engage external interaction (strategy selection) and how to execute multi-step interaction (strategy execution) by distilling structured trajectories from frontier models. Specifically, we propose: (1) Unified trajectory modeling: trajectories (reasoning and action traces) are represented within a single state-action-observation formalism, allowing one model to generalize across heterogeneous medical environments. (2) Three-tier stratified supervision: the model's own errors trigger progressive escalation from direct reasoning to tool-augmented and multi-agent interaction, explicitly learning difficulty-aware strategy selection. (3) Prospective-retrospective supervision: pairing exploratory forward traces with hindsight-rationalized execution traces enables stable learning of effective interaction policies. Trained on 40K curated trajectories, Meissa matches or exceeds proprietary frontier agents in 10 of 16 evaluation settings across 13 medical benchmarks spanning radiology, pathology, and clinical reasoning. Using over 25x fewer parameters than typical frontier models like Gemini-3, Meissa operates fully offline with 22x lower end-to-end latency compared to API-based deployment. Data, models, and environments are released at this https URL.
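The abstract's three-tier stratified supervision implies a control flow in which harder cases escalate through increasingly powerful strategies. A minimal sketch of that escalation logic is below; the tier names follow the abstract, but the function signatures and the success checker are assumptions made for illustration.

```python
# Hedged sketch of difficulty-aware strategy escalation: try direct
# reasoning first, escalate to tool use, then to multi-agent interaction
# only when the previous tier fails. The `attempt` and `is_correct`
# callables are hypothetical stand-ins for the real pipeline components.
from typing import Callable, Tuple

TIERS = ("direct_reasoning", "tool_augmented", "multi_agent")


def solve_with_escalation(question: str,
                          attempt: Callable[[str, str], str],
                          is_correct: Callable[[str], bool]) -> Tuple[str, str]:
    """Try each tier in order; return (tier, answer) for the first success.

    Falls back to the final tier's answer if every tier fails, mirroring
    how a curated trajectory would record which strategy was selected.
    """
    answer = ""
    for tier in TIERS:
        answer = attempt(tier, question)
        if is_correct(answer):
            return tier, answer
    return TIERS[-1], answer
```

In the paper's framing, the tier at which the model's own error is finally corrected becomes the supervision signal, so the model learns not just the answer but which strategy a question of that difficulty warrants.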
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09018 [cs.AI]
  (or arXiv:2603.09018v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09018

Submission history

From: Yixiong Chen
[v1] Mon, 9 Mar 2026 23:22:55 UTC (3,973 KB)