ChipLingo: A Systematic Training Framework for Large Language Models in EDA

arXiv cs.LG / 5/1/2026


Key Points

  • The paper introduces ChipLingo, a systematic training pipeline that domain-adapts large language models for Electronic Design Automation (EDA) use cases.
  • ChipLingo comprises three stages: constructing a curated EDA domain corpus from multi-source data with QA augmentation, domain-adaptive pretraining that compares different parameter-training strategies, and instruction alignment with RAG scenario training under varied retrieval conditions (a sketch of the augmentation step follows this list).
  • The authors create an internal benchmark called EDA-Bench (with plans for public release) that covers representative EDA tool scenarios to evaluate model performance.
  • On EDA-Bench, ChipLingo-8B achieves 59.7% accuracy and ChipLingo-32B reaches 70.02%. Ablations further show that QA augmentation improves domain performance, Partial FT preserves general capability better than LoRA, and RAG scenario training reduces the loss of retrieval utilization that otherwise follows domain training.
  • The study argues that systematic domain training can be a practical foundation for future EDA agents and external-knowledge-driven systems.
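
The paper's QA augmentation step is only named here, not specified. The sketch below shows one plausible shape for it: turning EDA documentation chunks into question-answer training pairs. The chunk size, prompt wording, and the `generate` stub are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

CHUNK_SIZE = 2000  # characters per documentation chunk (assumed, not from the paper)

QA_PROMPT = (
    "You are an expert on EDA tools. Based only on the passage below, write "
    "one question an engineer might ask and its answer, as 'Q: ...' then "
    "'A: ...'.\n\nPassage:\n{chunk}"
)

@dataclass
class QAPair:
    question: str
    answer: str
    source: str  # which document the pair was derived from

def chunk_document(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split raw documentation into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real model endpoint."""
    raise NotImplementedError

def augment_document(doc_text: str, doc_name: str) -> list[QAPair]:
    """Turn one document into QA-style pretraining samples."""
    pairs = []
    for chunk in chunk_document(doc_text):
        reply = generate(QA_PROMPT.format(chunk=chunk))
        q, _, a = reply.partition("\nA:")
        pairs.append(QAPair(q.removeprefix("Q:").strip(), a.strip(), doc_name))
    return pairs
```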

Abstract

With the rapid advancement of semiconductor technology, Electronic Design Automation (EDA) has become an increasingly knowledge-intensive and document-driven engineering domain. Although large language models (LLMs) have shown strong general capabilities, applying them directly to EDA remains challenging due to limited domain expertise, cross-tool knowledge confusion, and degraded retrieval-augmented generation (RAG) performance after domain training. To address these issues, this paper presents ChipLingo, a systematic training pipeline for domain-adapted LLMs tailored to EDA scenarios. ChipLingo consists of three stages: domain corpus construction with multi-source data curation and QA augmentation, domain-adaptive pretraining with comparisons of different parameter training strategies, and instruction alignment with RAG scenario training under diverse retrieval conditions. We also curate an internal benchmark, EDA-Bench, covering representative EDA tool scenarios, with plans for public release. Experiments show that ChipLingo-8B achieves 59.7% accuracy on EDA-Bench, outperforming the same-scale base model and some larger general-purpose models. ChipLingo-32B reaches 70.02%, approaching leading closed-source commercial models. Further analysis shows that QA augmentation improves domain performance, Partial FT offers a better balance between adaptation and general capability retention than LoRA, and explicit RAG scenario training mitigates the decline in retrieval utilization after domain training. These results demonstrate the practical value of systematic domain training for knowledge-intensive EDA tasks and provide a foundation for future EDA agents and external-knowledge-driven systems.
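
The abstract contrasts Partial FT with LoRA but does not say which parameters stay trainable. Below is a minimal PyTorch sketch of partial fine-tuning; the default name patterns (the last two decoder blocks and the LM head of a 32-layer model) are assumptions for illustration, not the paper's configuration.

```python
import torch.nn as nn

def apply_partial_ft(model: nn.Module,
                     patterns: tuple[str, ...] = ("layers.30.", "layers.31.",
                                                  "lm_head")) -> int:
    """Freeze all parameters except those whose names contain a pattern.

    Returns the number of parameters left trainable. The default patterns
    are assumed, not taken from the paper.
    """
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(p in name for p in patterns)
        if param.requires_grad:
            n_trainable += param.numel()
    return n_trainable

# Hand only the unfrozen parameters to the optimizer, e.g.:
#   import torch
#   optim = torch.optim.AdamW(
#       (p for p in model.parameters() if p.requires_grad), lr=1e-5)
```

The abstract also credits explicit RAG scenario training under diverse retrieval conditions with preserving retrieval utilization. One way to read that is mixing instruction-tuning samples whose contexts are relevant, noisy, or empty; the mixture weights and prompt template below are guesses, not the paper's recipe.

```python
import random

TEMPLATE = "Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def build_rag_sample(question: str, answer: str,
                     relevant: list[str], distractors: list[str],
                     rng: random.Random) -> dict:
    """Build one instruction-tuning sample under a random retrieval condition."""
    condition = rng.choices(["relevant", "noisy", "empty"],
                            weights=[0.6, 0.3, 0.1])[0]
    if condition == "relevant":
        context = "\n".join(relevant)
    elif condition == "noisy":
        # Relevant passages interleaved with distractors the model must ignore.
        passages = relevant + rng.sample(distractors, k=min(2, len(distractors)))
        rng.shuffle(passages)
        context = "\n".join(passages)
    else:
        context = "(no passages retrieved)"
    return {"prompt": TEMPLATE.format(context=context, question=question),
            "completion": answer, "condition": condition}
```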
