AnalogRetriever: Learning Cross-Modal Representations for Analog Circuit Retrieval
arXiv cs.CV / 4/28/2026
Key Points
- Analog circuit IP reuse is hindered by the difficulty of searching across heterogeneous formats (SPICE netlists, schematics, and functional descriptions); existing approaches mainly support exact matching within a single modality.
- The paper introduces AnalogRetriever, a unified tri-modal retrieval framework that embeds schematics and functional descriptions with a vision-language model, and netlists with a port-aware relational graph convolutional network, into a shared space trained with curriculum contrastive learning.
- A new high-quality dataset is built on Masala-CHAI using a two-stage repair pipeline that improves netlist compile rate from 22% to 100%, enabling effective training and evaluation.
- Experiments report an average Recall@1 of 75.2% across six cross-modal retrieval directions, outperforming prior baselines.
- When integrated into the AnalogCoder agentic framework as a retrieval-augmented generation module, AnalogRetriever improves functional pass rates and helps complete tasks that were previously unsolved, with code and the dataset planned for release.
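The summary does not spell out the paper's training objective, but aligning three modalities in a shared embedding space typically relies on a contrastive (InfoNCE-style) loss: matched cross-modal pairs are pulled together while mismatched pairs in the batch are pushed apart. A minimal NumPy sketch of that loss for one pair of modalities (the function name, temperature, and data are illustrative assumptions, not from the paper):

```python
import numpy as np

def info_nce(emb_a: np.ndarray, emb_b: np.ndarray, temperature: float = 0.07) -> float:
    """InfoNCE loss for a batch of matched embedding pairs (row i of
    emb_a corresponds to row i of emb_b). Illustrative sketch only."""
    # Normalize so dot products are cosine similarities.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N): row i scores query i against all candidates
    # Log-softmax over each row; diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

In a tri-modal setup such as the one described, this loss would be applied across each pair of modalities (netlist-schematic, netlist-description, schematic-description), with the curriculum governing which pairs or examples are emphasized as training progresses.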
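The "six cross-modal retrieval directions" follow from the three modalities: with netlists, schematics, and descriptions there are 3 × 2 = 6 ordered query-to-gallery pairings. A hedged sketch of how Recall@1 averaged over those directions could be computed (function names and toy data are illustrative, not the paper's evaluation code):

```python
import numpy as np
from itertools import permutations

def recall_at_1(query_emb: np.ndarray, gallery_emb: np.ndarray) -> float:
    """Fraction of queries whose top-1 gallery match is the paired item
    (row i of the gallery is the ground-truth match for query i)."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    top1 = (q @ g.T).argmax(axis=1)
    return float(np.mean(top1 == np.arange(len(q))))

def average_recall_at_1(emb_by_modality: dict[str, np.ndarray]) -> float:
    """Average Recall@1 over all ordered (query modality, gallery modality)
    pairs -- six directions for three modalities."""
    scores = [recall_at_1(emb_by_modality[q], emb_by_modality[g])
              for q, g in permutations(emb_by_modality, 2)]
    return sum(scores) / len(scores)
```

On perfectly aligned embeddings this average is 1.0; the paper's reported 75.2% reflects how well the learned shared space preserves pairing across real netlist, schematic, and description inputs.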