
OpenQlaw: An Agentic AI Assistant for Analysis of 2D Quantum Materials

arXiv cs.CV / 3/19/2026


Key Points

  • OpenQlaw is an agentic orchestration system for analyzing 2D quantum materials that decouples visual identification from reasoning by orchestrating the domain-expert MLLM QuPAINT as a specialized node.
  • The system is built on NanoBot, a lightweight agentic framework, which the core agent uses to orchestrate domain experts, enabling dynamic processing of user queries, scale-aware physical computations, and generation of isolated visual annotations.
  • It features a persistent memory that stores physical scale ratios (e.g., 1 pixel = 0.25 μm) and sample preparation methods for efficacy comparison, supporting reproducible analysis (see the sketch after this list).
  • By transforming isolated inferences into a context-aware assistant with lab-floor accessibility via multiple messaging channels, OpenQlaw aims to accelerate high-throughput device fabrication.
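
The persistent-memory behavior described above can be pictured with a minimal sketch: a scale ratio saved once survives across sessions, so later area queries need no re-calibration. This is an illustration only; the JSON file format, function names (save_scale, flake_area_um2), and sample IDs are assumptions, not the actual OpenQlaw implementation.

```python
import json
from pathlib import Path

# Hypothetical persistent memory for per-sample scale ratios, mirroring the
# paper's "1 pixel = 0.25 um" example.
MEMORY_FILE = Path("scale_memory.json")

def save_scale(sample_id: str, um_per_pixel: float) -> None:
    """Persist the physical scale ratio for a sample across sessions."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory[sample_id] = um_per_pixel
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def flake_area_um2(sample_id: str, pixel_count: int) -> float:
    """Convert a flake's pixel area into physical units using the stored scale."""
    um_per_pixel = json.loads(MEMORY_FILE.read_text())[sample_id]
    return pixel_count * um_per_pixel ** 2

# Example: a 12,000-pixel flake at 0.25 um/pixel covers 750 um^2.
save_scale("sample-A", 0.25)
print(f"{flake_area_um2('sample-A', 12_000):.1f} um^2")
```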

Abstract

The transition from optical identification of 2D quantum materials to practical device fabrication requires dynamic reasoning beyond detection accuracy. While recent domain-specific Multimodal Large Language Models (MLLMs) successfully ground visual features using physics-informed reasoning, their outputs are optimized for step-by-step cognitive transparency. This yields verbose candidate enumerations followed by dense reasoning that, while accurate, may induce cognitive overload and lack immediate utility for real-world interaction with researchers. To address this challenge, we introduce OpenQlaw, an agentic orchestration system for analyzing 2D materials. The architecture is built upon NanoBot, a lightweight agentic framework inspired by OpenClaw, and QuPAINT, one of the first Physics-Aware Instruction Multi-modal platforms for Quantum Material Discovery. This makes the system accessible on the lab floor via a variety of messaging channels. OpenQlaw allows the core Large Language Model (LLM) agent to orchestrate a domain-expert MLLM, QuPAINT, as a specialized node, successfully decoupling visual identification from reasoning and deterministic image rendering. By parsing spatial data from the expert, the agent can dynamically process user queries, such as performing scale-aware physical computations or generating isolated visual annotations, and answer in a naturalistic manner. Crucially, the system features a persistent memory that enables the agent to save physical scale ratios (e.g., 1 pixel = 0.25 μm) for area computations and to store sample preparation methods for efficacy comparison. This agentic architecture, with the core agent acting as an orchestrator for domain-specific experts, transforms isolated inferences into a context-aware assistant capable of accelerating high-throughput device fabrication.
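
To make the orchestration pattern in the abstract concrete, here is a minimal sketch of a core agent that routes a query through a visual-identification node and then applies scale-aware reasoning or deterministic rendering to the node's spatial output. All names and signatures below (qupaint_identify, render_annotation, orchestrate, Flake) are hypothetical stand-ins, not the NanoBot or QuPAINT API.

```python
from dataclasses import dataclass

@dataclass
class Flake:
    label: str
    bbox: tuple[int, int, int, int]  # pixel coordinates reported by the expert
    pixel_area: int

def qupaint_identify(image_path: str) -> list[Flake]:
    """Hypothetical stand-in for the domain-expert MLLM node (visual ID only)."""
    return [Flake("monolayer", (40, 60, 180, 220), 12_000)]

def render_annotation(image_path: str, flake: Flake) -> str:
    """Hypothetical stand-in for the deterministic renderer (isolated annotations)."""
    return f"annotated_{flake.label}.png"

def orchestrate(query: str, image_path: str, um_per_pixel: float) -> str:
    """Core agent: parse the expert's spatial output, then reason over it."""
    flakes = qupaint_identify(image_path)            # identification is decoupled...
    largest = max(flakes, key=lambda f: f.pixel_area)
    if "area" in query.lower():                      # ...from scale-aware reasoning
        area_um2 = largest.pixel_area * um_per_pixel ** 2
        return f"The largest {largest.label} flake covers {area_um2:.1f} um^2."
    return render_annotation(image_path, largest)    # or deterministic rendering

print(orchestrate("What is the area of the largest flake?", "chip.png", 0.25))
```

The design choice the sketch illustrates is the separation the abstract emphasizes: the expert node only reports what it sees, while unit conversion and rendering remain deterministic code paths under the core agent's control.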