RPRA: Predicting an LLM-Judge for Efficient but Performant Inference

arXiv cs.AI / 4/15/2026


Key Points

  • The paper proposes Predict-Answer/Act (PA) and Reason-Predict-Reason-Answer/Act (RPRA) methods where smaller LLMs predict how an LLM judge would score their output before deciding whether to answer or defer to a larger model.
  • It evaluates three judge-score prediction strategies—zero-shot prediction, in-context “report card” prompting, and supervised fine-tuning—showing different strengths across model sizes and judge types.
  • Results indicate that larger (especially reasoning) models can predict generic LLM judges effectively in a zero-shot setup, while smaller models need fine-tuning or report cards to achieve reliable prediction quality.
  • Across datasets, report cards and supervised fine-tuning improve smaller-model judge prediction accuracy by up to 55% and 52% respectively, supporting more efficient inference without sacrificing performance.
  • The findings suggest that models can learn to recognize their own limitations, enabling more “self-aware” systems that route queries to appropriate model sizes.
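The routing idea behind PA can be sketched as a simple deferral rule: the small model first estimates the judge score its answer would receive, and only answers locally when that estimate clears a threshold. The sketch below is a hypothetical illustration, not the paper's implementation; the `Model` wrapper, the 1-10 score scale, and the threshold value are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Model:
    # The model's own prediction of the judge score its answer would get (1-10).
    predict_judge_score: Callable[[str], float]
    answer: Callable[[str], str]

def route_query(query: str, small: Model, large: Model,
                threshold: float = 7.0) -> Tuple[str, str]:
    """Predict-Answer/Act: the small model answers only when it expects
    a good judge score; otherwise the query is deferred to the large model."""
    if small.predict_judge_score(query) >= threshold:
        return small.answer(query), "small"
    return large.answer(query), "large"

# Toy stand-ins: a small model that is confident only on short queries.
small = Model(lambda q: 9.0 if len(q) < 20 else 3.0, lambda q: f"small: {q}")
large = Model(lambda q: 9.5, lambda q: f"large: {q}")

print(route_query("2+2?", small, large))                           # handled locally
print(route_query("Prove the Riemann hypothesis.", small, large))  # deferred
```

In a real deployment the score prediction would come from the small LLM itself (zero-shot, with a report card, or after fine-tuning), and the threshold would trade off deferral rate against answer quality.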

Abstract

Large language models (LLMs) face a fundamental trade-off between computational efficiency (e.g., number of parameters) and output quality, especially when deployed on computationally limited devices such as phones or laptops. One way to address this challenge is to follow the example of humans and have models ask for help when they believe they cannot solve a problem on their own: smaller models respond to queries when they believe they can provide good responses, and defer to larger models when they do not. To this end, in this paper, we investigate the viability of Predict-Answer/Act (PA) and Reason-Predict-Reason-Answer/Act (RPRA) paradigms, where models predict -- prior to responding -- how an LLM judge would score their output. We evaluate three approaches: zero-shot prediction, prediction using an in-context report card, and supervised fine-tuning. Our results show that larger models (particularly reasoning models) predict generic LLM judges well zero-shot, while smaller models can predict such judges reliably after being fine-tuned or provided with an in-context report card. Both approaches can substantially improve the prediction accuracy of smaller models, with report cards and fine-tuning achieving mean improvements of up to 55% and 52% across datasets, respectively. These findings suggest that models can learn to predict their own performance limitations, paving the way for more efficient and self-aware AI systems.
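Of the three approaches, the in-context report card is the one realizable purely through prompting. A plausible shape for it is a summary of the model's past judge scores per topic, prepended to the query so the model can calibrate its prediction. The report-card format below is invented for illustration; the paper does not specify this layout.

```python
def build_report_card_prompt(query: str, history: dict) -> str:
    """Build a prompt containing an in-context 'report card' of past
    judge scores (hypothetical format), followed by the new query."""
    card = "\n".join(
        f"- {topic}: mean judge score {score:.1f}/10"
        for topic, score in sorted(history.items())
    )
    return (
        "Your past performance (judge scores on prior answers):\n"
        f"{card}\n\n"
        f"Query: {query}\n"
        "First predict the judge score (1-10) your answer would receive, "
        "then answer only if the predicted score is high."
    )

prompt = build_report_card_prompt(
    "Solve this integral.",
    {"arithmetic": 8.7, "calculus": 4.2, "trivia": 9.1},
)
print(prompt)
```

The intent is that a small model reading this prompt would predict a low judge score for a calculus query and trigger deferral, without any fine-tuning.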