GRAFITE: Generative Regression Analysis Framework for Issue Tracking and Evaluation

arXiv cs.CL / 3/20/2026

Key Points

  • GRAFITE is a continuous LLM evaluation platform that builds and maintains a repository of model issues based on user feedback to enable ongoing testing.
  • It uses a QA-testing pipeline with LLM-as-a-judge and supports side-by-side comparisons of multiple models to detect regressions across releases.
  • The framework provides an end-to-end workflow from issue collection to automated QA tests, enabling scalable, time-aware evaluation of model performance.
  • The project is open-source at IBM/grafite and includes a demo video, offering a practical tool to assess LLMs and mitigate benchmark contamination.

Abstract

Large language models (LLMs) are largely judged by their performance on popular topics and benchmarks at the time of their release. Over time, however, contamination occurs as benchmark data becomes significantly exposed during training, risking inflated estimates of model performance if testing is not carefully executed. To address this challenge, we present GRAFITE, a continuous LLM evaluation platform: a comprehensive system for maintaining and evaluating model issues. Our approach builds a repository of model problems from user feedback over time and offers a pipeline for assessing LLMs against these issues through quality assurance (QA) tests using LLM-as-a-judge. The platform enables side-by-side comparison of multiple models, facilitating regression detection across different releases. The platform is available at https://github.com/IBM/grafite. The demo video is available at www.youtube.com/watch?v=XFZyoleN56k.
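To make the pipeline concrete, here is a minimal sketch of the kind of LLM-as-a-judge regression check the abstract describes: a repository of user-reported issues is replayed against two model releases, and a judge decides which answers pass. All names here (`Issue`, `pass_rate`, `regressions`) are illustrative assumptions, not GRAFITE's actual API; the models and judge are stand-ins for real LLM calls.

```python
# Hypothetical sketch of issue-based regression testing with LLM-as-a-judge.
# Models and the judge are plain callables here; in practice they would wrap
# LLM API calls. This is NOT GRAFITE's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Issue:
    prompt: str       # a user-reported prompt that once exposed a problem
    expectation: str  # what a correct answer must satisfy

Model = Callable[[str], str]
Judge = Callable[[str, str], bool]  # (answer, expectation) -> pass/fail

def pass_rate(model: Model, judge: Judge, issues: list[Issue]) -> float:
    """Fraction of stored issues the model handles correctly per the judge."""
    if not issues:
        return 1.0
    passed = sum(judge(model(i.prompt), i.expectation) for i in issues)
    return passed / len(issues)

def regressions(old_model: Model, new_model: Model,
                judge: Judge, issues: list[Issue]) -> list[Issue]:
    """Issues the previous release passed but the new release fails."""
    return [i for i in issues
            if judge(old_model(i.prompt), i.expectation)
            and not judge(new_model(i.prompt), i.expectation)]
```

Because the issue repository grows from user feedback rather than a fixed benchmark, each new release is scored against questions that were, by construction, problematic in the wild, which is what lets this style of evaluation sidestep benchmark contamination.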