Causal AI For AMS Circuit Design: Interpretable Parameter Effects Analysis

arXiv cs.AI · March 27, 2026


Key Points

  • The paper proposes a causal-inference framework for analog-mixed-signal (AMS) circuit design that learns a directed acyclic graph (DAG) from SPICE simulation data to model parameter relationships.
  • It estimates parameter impact using Average Treatment Effect (ATE), producing interpretable rankings of design knobs and explicit “what-if” predictions for trade-off analysis.
  • The method is evaluated on three operational-amplifier families (OTA, telescopic, folded-cascode) implemented in TSMC 65nm and is benchmarked against a neural-network regressor.
  • Results show the causal model reproduces simulation-based ATEs with under 25% average absolute error, while the neural network deviates by over 80% and often predicts the wrong direction (sign).
  • The authors argue this demonstrates causal AI’s potential for more accurate and explainable AMS design automation compared with purely data-driven predictors.
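
To make the ATE idea concrete, here is a minimal, self-contained sketch of regression-based ATE estimation with backdoor adjustment on synthetic data. The variable names (bias current, transistor width, gain), the linear data-generating model, and all numbers are illustrative assumptions, not taken from the paper; the point is only that adjusting for a confounder identified by a DAG recovers the causal effect where a naive fit does not.

```python
# Hypothetical sketch: estimating an Average Treatment Effect (ATE) by
# adjusting for a confounder, in the spirit of the paper's pipeline after
# a DAG has been learned. All names and coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic "circuit": bias current Ib confounds both transistor width W and gain.
ib = rng.normal(0.0, 1.0, n)                          # confounder (e.g., bias current)
w = 0.8 * ib + rng.normal(0.0, 1.0, n)                # design knob influenced by Ib
gain = 2.0 * w - 1.5 * ib + rng.normal(0.0, 0.5, n)   # true causal effect of W is +2.0

# Naive estimate: regress gain on W alone (ignores the DAG) -> biased.
naive = np.polyfit(w, gain, 1)[0]

# Backdoor adjustment: also condition on the confounder Ib named by the DAG.
X = np.column_stack([np.ones(n), w, ib])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
ate = beta[1]  # coefficient on W ~ ATE of a unit change in W

print(f"naive slope: {naive:.2f}, adjusted ATE: {ate:.2f} (true effect: 2.0)")
```

With this setup the naive slope is pulled away from the true effect by the W–Ib correlation, while the adjusted estimate lands near +2.0, which is exactly the kind of "what-if" quantity the paper validates against simulation.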

Abstract

Analog-mixed-signal (AMS) circuits are highly non-linear and operate on continuous real-world signals, making them far more difficult to model with data-driven AI than digital blocks. To close the gap between structured design data (device dimensions, bias voltages, etc.) and real-world performance, we propose a causal-inference framework that first discovers a directed acyclic graph (DAG) from SPICE simulation data and then quantifies parameter impact through Average Treatment Effect (ATE) estimation. The approach yields human-interpretable rankings of design knobs and explicit “what-if” predictions, enabling designers to understand trade-offs in sizing and topology. We evaluate the pipeline on three operational-amplifier families (OTA, telescopic, and folded-cascode) implemented in TSMC 65nm and benchmark it against a baseline neural-network (NN) regressor. Across all circuits the causal model reproduces simulation-based ATEs with an average absolute error of less than 25%, whereas the neural network deviates by more than 80% and frequently predicts the wrong sign. These results demonstrate that causal AI provides both higher accuracy and explainability, paving the way for more efficient, trustworthy AMS design automation.
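
The knob-ranking step described above can also be sketched in a few lines. In this toy model the knobs (width W, length L, tail current It) are drawn independently, standing in for a learned DAG with no edges among them, so each ATE reduces to a multiple-regression coefficient; the bandwidth model and all coefficients are invented for illustration and are not from the paper.

```python
# Illustrative sketch (not the paper's code): ranking design knobs by the
# magnitude of their estimated ATE on a performance metric, assuming the
# learned DAG shows no confounding among the knobs.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical knobs: transistor width W, length L, and tail current It.
knobs = {
    "W":  rng.normal(0.0, 1.0, n),
    "L":  rng.normal(0.0, 1.0, n),
    "It": rng.normal(0.0, 1.0, n),
}
# Synthetic bandwidth model: W helps, L hurts, It helps most.
bw = 1.2 * knobs["W"] - 0.7 * knobs["L"] + 2.5 * knobs["It"] + rng.normal(0.0, 0.5, n)

# With mutually independent knobs, each ATE equals the corresponding
# multiple-regression coefficient; its sign gives the effect direction.
X = np.column_stack([np.ones(n)] + list(knobs.values()))
beta, *_ = np.linalg.lstsq(X, bw, rcond=None)
ates = dict(zip(knobs, beta[1:]))

# Rank knobs by |ATE|, largest first.
ranking = sorted(ates, key=lambda k: abs(ates[k]), reverse=True)
for k in ranking:
    print(f"{k}: ATE ≈ {ates[k]:+.2f}")
```

Getting the sign right matters here: a predictor that flips the sign of L's effect, as the abstract reports the NN baseline often does, would tell a designer to move a knob in the wrong direction.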