AI Navigate

CktEvo: Repository-Level RTL Code Benchmark for Design Evolution

arXiv cs.AI / 3/11/2026


Key Points

  • CktEvo is a new benchmark and reference framework designed for repository-level RTL code evolution targeting improvement in Power, Performance, and Area (PPA) while preserving functional correctness.
  • Unlike previous benchmarks focusing on isolated code snippets, CktEvo works with complete IP core repositories, capturing the complex cross-file interactions important for real-world hardware design.
  • The framework couples large language model (LLM)-proposed code edits with feedback from the RTL toolchain, enabling iterative, automated optimization without human intervention.
  • Experiments show that CktEvo can achieve measurable PPA improvements on real hardware design repositories, establishing a practical foundation for LLM-assisted RTL optimization at scale.
  • This work advances the state of LLM applications in hardware design beyond isolated module generation or debugging by addressing repository-level, function-preserving transformations that matter for engineering practice.
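The closed-loop scheme described above can be sketched as a simple propose–evaluate–accept loop. Everything below is an illustrative assumption, not the paper's actual API: the names `propose_edit`, `run_toolchain`, `ppa_score`, and the weighted-sum scoring are hypothetical stand-ins for the LLM editor, the synthesis/verification toolchain, and whatever PPA objective CktEvo actually uses.

```python
# Hypothetical sketch of one closed-loop RTL-evolution run: an LLM proposes a
# repository edit, the toolchain checks functionality and reports PPA, and the
# edit is kept only if it preserves behavior and improves the score.
from dataclasses import dataclass

@dataclass
class Report:
    functional_pass: bool  # did simulation / equivalence checking succeed?
    power: float           # e.g. mW
    delay: float           # e.g. ns of critical path
    area: float            # e.g. um^2

def ppa_score(r: Report, weights=(1.0, 1.0, 1.0)) -> float:
    """Lower is better: a simple weighted sum of the three PPA metrics
    (an assumed objective; the real framework may combine them differently)."""
    wp, wd, wa = weights
    return wp * r.power + wd * r.delay + wa * r.area

def evolve(repo, propose_edit, run_toolchain, iterations=10):
    """Iteratively apply LLM-proposed edits, accepting only those that pass
    functional checks and lower the PPA score; failures are fed back to the
    proposer as repair feedback."""
    best = run_toolchain(repo)
    feedback = None
    for _ in range(iterations):
        candidate = propose_edit(repo, feedback)  # LLM proposes a cross-file edit
        report = run_toolchain(candidate)         # synthesize + verify the candidate
        if report.functional_pass and ppa_score(report) < ppa_score(best):
            repo, best = candidate, report        # accept the improvement
            feedback = None
        else:
            feedback = report                     # reject; report drives repair
    return repo, best
```

In practice `run_toolchain` would wrap simulation or equivalence checking plus synthesis, and `propose_edit` would prompt the LLM with the repository and the latest toolchain feedback.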


arXiv:2603.08718 (cs)
[Submitted on 10 Feb 2026]

Title: CktEvo: Repository-Level RTL Code Benchmark for Design Evolution
Authors: Zhengyuan Shi and 4 other authors
Abstract: Register-Transfer Level (RTL) coding is an iterative, repository-scale process in which Power, Performance, and Area (PPA) emerge from interactions across many files and the downstream toolchain. While large language models (LLMs) have recently been applied to hardware design, most efforts focus on generation or debugging from natural-language prompts, where ambiguity and hallucinations necessitate expert review. A separate line of work begins from formal inputs, yet typically optimizes high-level synthesis or isolated modules and remains decoupled from cross-file dependencies. In this work, we present CktEvo, a benchmark and reference framework for repository-level RTL evolution. Unlike prior benchmarks consisting of isolated snippets, our benchmark targets complete IP cores where PPA emerges from cross-file dependencies. Our benchmark packages several high-quality Verilog repositories from real-world designs. We formalize the task as: given an initial repository, produce edits that preserve functional behavior while improving PPA. We also provide a closed-loop framework that couples LLM-proposed edits with toolchain feedback to enable cross-file modifications and iterative repair at repository scale. Our experiments demonstrate that the reference framework realizes PPA improvements without any human interaction. CktEvo establishes a rigorous and executable foundation for studying LLM-assisted RTL optimization that matters for engineering practice: repository-level, function-preserving, and PPA-driven.
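The task statement in the abstract ("produce edits that preserve functional behavior while improving PPA") implies an acceptance test applied to every candidate edit. One plausible formulation, stated here as an assumption rather than CktEvo's exact rule, is a Pareto criterion: equivalence checking must pass, no PPA metric may regress, and at least one must strictly improve.

```python
# Hypothetical per-edit acceptance check: functional equivalence must hold,
# and the candidate must Pareto-improve PPA (no metric worse, at least one
# strictly better). An illustrative criterion, not necessarily the paper's.

def accepts(baseline: dict, candidate: dict, equivalent: bool) -> bool:
    """baseline/candidate map metric name -> value; lower is better."""
    if not equivalent:  # functional behavior must be preserved first
        return False
    metrics = ("power", "delay", "area")
    no_worse = all(candidate[m] <= baseline[m] for m in metrics)
    some_better = any(candidate[m] < baseline[m] for m in metrics)
    return no_worse and some_better
```

A scalar weighted score is the common alternative to this dominance check; the Pareto form avoids having to pick weights, at the cost of rejecting edits that trade one metric against another.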
Subjects: Hardware Architecture (cs.AR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.08718 [cs.AR]
  (or arXiv:2603.08718v1 [cs.AR] for this version)
  https://doi.org/10.48550/arXiv.2603.08718

Submission history

From: Zhengyuan Shi
[v1] Tue, 10 Feb 2026 02:46:15 UTC (1,223 KB)