What Is the Optimal Ranking Score Between Precision and Recall? We Can Always Find It and It Is Rarely $F_1$

arXiv stat.ML · March 31, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper analyzes how to construct a single global ranking from inherently multidimensional classification performance measures, focusing on the complementary tradeoff between precision and recall.
  • It proves that rankings induced by the $F_\beta$ family of scores are meaningful and relates the precision- and recall-induced orderings via a shortest-path concept.
  • The authors reformulate choosing a precision–recall compromise as an optimization problem using Kendall rank correlations and show that the commonly used $F_1$ score is rarely optimal under this criterion.
  • They derive theoretical tools, including a closed-form way to compute the optimal $\beta$ for any given performance distribution or set, and demonstrate the approach across six case studies.
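To make the tradeoff concrete, here is a minimal sketch of the $F_\beta$ score as the weighted harmonic mean of precision and recall (the standard formula $F_\beta = (1+\beta^2)\,PR/(\beta^2 P + R)$); the precision and recall values are hypothetical, chosen only to show how $\beta$ shifts the emphasis between the two:

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall.

    beta > 1 weights recall more heavily; beta < 1 weights precision more.
    """
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Hypothetical classifier with high precision but modest recall.
p, r = 0.9, 0.5
print(f_beta(p, r, 1.0))  # F_1  ≈ 0.643 (balanced)
print(f_beta(p, r, 2.0))  # F_2  ≈ 0.549 (recall-leaning, penalizes low recall)
print(f_beta(p, r, 0.5))  # F_0.5 ≈ 0.776 (precision-leaning, rewards high precision)
```

The point of the paper is that the choice of $\beta$ is not cosmetic: different values of $\beta$ induce genuinely different rankings over a set of models, and $\beta = 1$ is rarely the best compromise.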

Abstract

Ranking methods or models based on their performance is of prime importance but is tricky because performance is fundamentally multidimensional. In the case of classification, precision and recall are scores with probabilistic interpretations that are both important to consider and complementary. The rankings induced by these two scores are often in partial contradiction. In practice, therefore, it is extremely useful to establish a compromise between the two views to obtain a single, global ranking. Over the last fifty years or so, it has been proposed to take a weighted harmonic mean, known as the $F$-score, $F$-measure, or $F_\beta$. Generally speaking, by averaging basic scores, we obtain a score that is intermediate in terms of values. However, there is no guarantee that these scores lead to meaningful rankings and no guarantee that the rankings are good tradeoffs between these base scores. Given the ubiquity of $F_\beta$ scores in the literature, some clarification is in order. Concretely: (1) We establish that $F_\beta$-induced rankings are meaningful and define a shortest path between precision- and recall-induced rankings. (2) We frame the problem of finding a tradeoff between two scores as an optimization problem expressed with Kendall rank correlations. We show that $F_1$ and its skew-insensitive version are far from being optimal in that regard. (3) We provide theoretical tools and a closed-form expression to find the optimal value for $\beta$ for any distribution or set of performances, and we illustrate their use on six case studies. Code is available at https://github.com/pierard/cvpr-2026-optimal-tradeoff-precision-recall.
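The Kendall-correlation framing of point (2) can be sketched numerically. The snippet below uses a hypothetical set of (precision, recall) performances and a brute-force grid search for the $\beta$ whose $F_\beta$-induced ranking best balances agreement with the precision- and recall-induced rankings (here: maximizing the minimum of the two Kendall correlations). This is only an illustration of the kind of objective involved; the paper's exact criterion and its closed-form solution for the optimal $\beta$ may differ:

```python
from itertools import combinations


def f_beta(p, r, beta):
    # Weighted harmonic mean of precision p and recall r.
    return (1 + beta**2) * p * r / (beta**2 * p + r)


def kendall_tau(a, b):
    # Kendall rank correlation between two score vectors (no tie correction).
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) / 2
    return (concordant - discordant) / n_pairs


# Hypothetical performances of five classifiers: (precision, recall).
perfs = [(0.9, 0.4), (0.7, 0.6), (0.5, 0.8), (0.85, 0.55), (0.6, 0.75)]
precisions = [p for p, _ in perfs]
recalls = [r for _, r in perfs]

# Grid search over beta in [0.1, 5): pick the beta whose F_beta ranking
# maximizes the worse of its two Kendall correlations with the
# precision- and recall-induced rankings.
best = max(
    (b / 100 for b in range(10, 500)),
    key=lambda beta: min(
        kendall_tau([f_beta(p, r, beta) for p, r in perfs], precisions),
        kendall_tau([f_beta(p, r, beta) for p, r in perfs], recalls),
    ),
)
print(best)
```

On a given performance set, the optimizer typically lands away from $\beta = 1$, which is the abstract's central claim: the best compromise depends on the distribution of performances, and $F_1$ is rarely it.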