TRACER: Learn-to-Defer for LLM Classification with Formal Teacher-Agreement Guarantees

Reddit r/MachineLearning / 3/30/2026


Key Points

  • TRACER is a released library that learns cost-efficient routing policies for LLM classification by deferring a subset of calls to a cheaper local surrogate while targeting a minimum surrogate-vs-teacher agreement rate.
  • The approach uses an “acceptor gate” calibrated on held-out data so the system can provide formal teacher-agreement guarantees while maximizing coverage under the agreement constraint.
  • TRACER offers three pipeline families—Global (accept-all), L2D (surrogate + conformal acceptor gate), and RSB (two-stage residual cascade)—and selects among them using an automated Pareto-frontier criterion.
  • In an example on Banking77 intent classification using BGE-M3 embeddings, the method reports 91.4% coverage at a 92% teacher-agreement target and 96.4% end-to-end macro-F1, with the L2D pipeline selected automatically.
  • The project includes a small “model zoo” for surrogate learners and proposes qualitative audits (e.g., slice summaries and boundary-pair comparisons) alongside the formal calibration guarantee.

I'm releasing TRACER (Trace-Based Adaptive Cost-Efficient Routing), a library for learning cost-efficient routing policies from LLM traces.

The setup: you have an LLM handling classification tasks. You want to replace a fraction of calls with a cheap local surrogate, with a formal guarantee that the surrogate agrees with the LLM at least X% of the time on handled traffic.
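To make the setup concrete, here is a minimal sketch of the routing idea. This is not the TRACER API; the function and object names (`route`, `surrogate`, `acceptor`, `call_llm`) are hypothetical, assuming a surrogate classifier plus an acceptor gate that scores how likely the surrogate is to agree with the teacher:

```python
import numpy as np

# Hypothetical sketch of learn-to-defer routing (not the TRACER API):
# a cheap local surrogate classifies, and an acceptor gate decides
# whether to trust it or defer the call to the teacher LLM.
def route(embedding, surrogate, acceptor, threshold, call_llm):
    """Return (label, source), preferring the surrogate when the gate accepts."""
    x = np.asarray(embedding).reshape(1, -1)
    label = surrogate.predict(x)[0]
    # Gate estimates P(surrogate agrees with teacher) for this input.
    p_agree = acceptor.predict_proba(x)[0, 1]
    if p_agree >= threshold:
        return label, "surrogate"
    return call_llm(embedding), "llm"
```

The threshold is what the calibration step below tunes; everything accepted by the gate counts toward coverage, and the guarantee applies only to that accepted traffic.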

Technical core:

  • Three pipeline families: Global (accept-all), L2D (surrogate + conformal acceptor gate), RSB (Residual Surrogate Boosting: two-stage cascade)
  • Acceptor gate predicts surrogate-teacher agreement; calibrated on held-out split
  • Calibration guarantee: coverage is maximized subject to teacher agreement (TA) >= target on the calibration set
  • Model zoo: logistic regression, MLP (1 or 2 hidden layers), decision tree, random forest, ExtraTrees, gradient-boosted trees, XGBoost (optional)
  • Qualitative audit: slice summaries, contrastive boundary pairs, temporal deltas
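The calibration step can be sketched as follows. This is a simplified empirical version, not the TRACER implementation (a conformal variant would add a finite-sample correction); it picks the smallest gate threshold whose accepted subset of the held-out split still meets the teacher-agreement target, which maximizes coverage under the constraint:

```python
import numpy as np

# Simplified sketch of acceptor-gate calibration (not the TRACER code).
# gate_scores: gate's P(agree) for each calibration example.
# agrees: 1 if the surrogate matched the teacher on that example, else 0.
def calibrate_threshold(gate_scores, agrees, target):
    """Return (threshold, coverage) maximizing coverage s.t. empirical
    teacher agreement on accepted examples >= target; (inf, 0.0) if infeasible."""
    order = np.argsort(gate_scores)[::-1]      # highest-confidence first
    scores, hits = gate_scores[order], agrees[order]
    # cum_agree[k] = agreement rate if we accept the top k+1 examples.
    cum_agree = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    best = (np.inf, 0.0)
    for k in range(len(scores)):
        if cum_agree[k] >= target:
            best = (scores[k], (k + 1) / len(scores))
    return best
```

Accepting the top-k examples by gate score corresponds to setting the threshold at the k-th score, so scanning k from small to large and keeping the last feasible point yields the maximum-coverage threshold.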

Results on Banking77 (77-class intent, BGE-M3 embeddings):

  • 91.4% coverage at a 92% teacher-agreement target
  • 96.4% end-to-end macro-F1
  • L2D pipeline selected automatically via the Pareto-frontier criterion

Paper in progress. Feedback welcome.

submitted by /u/Adr-740