AI Navigate

Automating Detection and Root-Cause Analysis of Flaky Tests in Quantum Software

arXiv cs.AI / March 11, 2026

Tools & Practical Usage · Models & Research

Key Points

  • Quantum software testing is challenged by flaky tests that produce inconsistent outcomes due to the probabilistic nature of quantum computing.
  • The paper introduces an automated pipeline leveraging Large Language Models (LLMs) to detect flaky tests and identify their root causes in quantum software repositories; a minimal classification sketch follows this list.
  • Using the pipeline, the researchers expanded the flaky test dataset by 54%, discovering 25 new flaky test cases.
  • Google Gemini was the best-performing model, with F1-scores of 0.9420 for flakiness detection and 0.9643 for root-cause identification, demonstrating that LLMs are effective in this domain.
  • The expanded dataset and tools aim to support the quantum software engineering community, with future work focused on improving detection robustness and exploring test repair automation.
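
To make the classification step above concrete, here is a minimal sketch of LLM-based flakiness triage. It uses the OpenAI Python client as a stand-in for any of the evaluated model suites; the prompt wording, model name, and label set are illustrative assumptions, not the paper's actual prompts.

```python
# Hedged sketch of LLM-based flakiness triage; the prompt and labels
# are our assumptions, not reproduced from the paper.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

def classify_issue(issue_text: str) -> str:
    """Ask an LLM whether an issue report describes a flaky quantum test."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper compares GPT, LLaMA, Gemini, Claude
        messages=[
            {"role": "system",
             "content": "You classify bug reports from quantum software "
                        "repositories. Answer with exactly one word: "
                        "'flaky' or 'not-flaky'."},
            {"role": "user", "content": issue_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()
```

The same pattern extends to root-cause identification by swapping the binary label set for a taxonomy of flakiness causes (e.g., randomness, concurrency, environment).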


arXiv:2603.09029 (cs)
[Submitted on 9 Mar 2026]

Title: Automating Detection and Root-Cause Analysis of Flaky Tests in Quantum Software

Authors: Janakan Sivaloganathan and 3 other authors
Abstract:Like classical software, quantum software systems rely on automated testing. However, their inherently probabilistic outputs make them susceptible to quantum flakiness -- tests that pass or fail inconsistently without code changes. Such quantum flaky tests can mask real defects and reduce developer productivity, yet systematic tooling for their detection and diagnosis remains limited.
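For intuition, the following sketch shows how such flakiness can arise, assuming Qiskit with the qiskit-aer simulator installed; the circuit and the brittle assertion are our illustration, not code from the paper.

```python
# Minimal example of a statistically flaky quantum test (illustrative).
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

def test_hadamard_distribution():
    qc = QuantumCircuit(1, 1)
    qc.h(0)           # put the qubit in an equal superposition
    qc.measure(0, 0)  # measurement collapses to '0' or '1' probabilistically

    counts = AerSimulator().run(qc, shots=100).result().get_counts()

    # Flaky assertion: it expects exactly the ideal 50/50 split, but the
    # sampled counts fluctuate from run to run, so the test passes or
    # fails without any code change.
    assert counts.get("0", 0) == 50
```

A robust version would instead check that the observed frequency lies within a statistical tolerance of the expected probability, which is precisely the kind of repair the root-cause analysis is meant to inform.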
This paper presents an automated pipeline to detect flaky-test-related issues and pull requests in quantum software repositories and to support the identification of their root causes. We aim to expand an existing quantum flaky test dataset and evaluate the capability of Large Language Models (LLMs) for flakiness classification and root-cause identification.
Building on a prior manual analysis of 14 quantum software repositories, we automate the discovery of additional flaky test cases using LLMs and cosine similarity. We further evaluate a variety of LLMs from the OpenAI GPT, Meta LLaMA, Google Gemini, and Anthropic Claude suites for classifying flakiness and identifying root causes from issue descriptions and code context. Classification performance is assessed using standard metrics, including F1-score.
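The paper's exact text representations and prompts are not reproduced here, so the snippet below is only a sketch of the similarity step, using TF-IDF vectors as a stand-in embedding and invented issue texts.

```python
# Sketch of ranking candidate issues by cosine similarity to a known
# flaky-test report; TF-IDF and the texts below are our assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_flaky = "Test fails intermittently because measurement counts vary between shots."
candidates = [
    "CI job passes on rerun; counts differ from the expected distribution.",
    "Documentation typo in the installation guide.",
]

# Embed all texts in one shared vocabulary, then compare candidates
# against the known flaky case.
vectors = TfidfVectorizer().fit_transform([known_flaky] + candidates)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for text, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")  # high-scoring candidates go to manual review
```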
Using our pipeline, we identify 25 previously unknown flaky tests, increasing the original dataset size by 54%. The best-performing model, Google Gemini, achieves an F1-score of 0.9420 for flakiness detection and 0.9643 for root-cause identification, demonstrating that LLMs can provide practical support for triaging flaky reports and understanding their underlying causes in quantum software.
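For readers unfamiliar with the metric, an F1-score like those reported above is the harmonic mean of precision and recall; the snippet below computes it with scikit-learn on invented labels, not the paper's data.

```python
# Computing an F1-score on illustrative labels (1 = flaky, 0 = not flaky).
from sklearn.metrics import f1_score

y_true = [1, 1, 0, 1, 0, 1, 0, 0]  # ground-truth labels (invented)
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]  # model predictions (invented)

print(f"F1 = {f1_score(y_true, y_pred):.4f}")  # 0.7500 for this toy example
```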
The expanded dataset and automated pipeline provide reusable artifacts for the quantum software engineering community. Future work will focus on improving detection robustness and exploring automated repair of quantum flaky tests.
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET)
Cite as: arXiv:2603.09029 [cs.SE]
  (or arXiv:2603.09029v1 [cs.SE] for this version)
  https://doi.org/10.48550/arXiv.2603.09029

Submission history

From: Lei Zhang
[v1] Mon, 9 Mar 2026 23:57:55 UTC (201 KB)