[R] 94.42% on BANKING77 Official Test Split with Lightweight Embedding + Example Reranking (strict full-train protocol)
Reddit r/MachineLearning / 4/7/2026
BANKING77 (77 fine-grained banking intents) is a well-established but increasingly saturated intent classification benchmark. Using a lightweight embedding-based classifier plus an example reranking approach (no LLMs involved), I obtained 94.42% accuracy on the official PolyAI test split. A strict full-train protocol was used: hyperparameter tuning and recipe selection were performed via 5-fold stratified CV on the official training set only; the final model was then retrained on 100% of the official training data (recipe frozen), followed by a single evaluation on the held-out official PolyAI test split. Results: accuracy 94.42%, Macro-F1 0.9441, model size ~68 MiB (FP32), inference ~225 ms per query. This is +0.59pp over the commonly cited 93.83% baseline and places the result in clear 2nd place on the public leaderboard (0.52pp behind the current SOTA of 94.94%), unless there is a newer result I am not finding.
💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Models & Research
Key Points
- The post reports a 94.42% accuracy result on the official BANKING77 PolyAI test split using a lightweight embedding-based classifier with an example reranking step (no LLMs involved).
- It emphasizes a strict full-train protocol: recipe selection and hyperparameter tuning via 5-fold stratified CV on the official training set only, followed by retraining on 100% of the official training data before one final evaluation on the held-out test split.
- Reported metrics include Macro-F1 of 0.9441, a model size of about 68 MiB (FP32), and inference latency of roughly 225 ms per query.
- The author claims the result improves on the commonly cited 93.83% baseline by +0.59 percentage points and is positioned as a clear second on the public leaderboard, 0.52pp behind the stated current SOTA of 94.94%.
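The post names neither the embedding model nor the reranking details, but the overall recipe it describes can be sketched end to end. The following is a minimal, hypothetical illustration: toy Gaussian clusters stand in for sentence embeddings, "example reranking" is read as a centroid shortlist refined by a nearest-training-example vote, and the strict protocol is the one the post states, with only `k` tuned via 5-fold stratified CV on the training set before one final test evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for sentence embeddings: 3 "intents" as Gaussian clusters.
# (The post gives no encoder details; this only illustrates the protocol.)
def make_data(n_per_class=40, dim=16, n_classes=3):
    X, y = [], []
    for c in range(n_classes):
        mean = rng.standard_normal(dim) * 3.0
        X.append(mean + rng.standard_normal((n_per_class, dim)))
        y += [c] * n_per_class
    X = np.vstack(X)
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm embeddings
    return X, np.array(y)

# Embedding classifier + example reranking (one plausible reading):
# centroid similarity builds a class shortlist, then the k nearest
# training examples within that shortlist vote on the final label.
def fit(X, y):
    classes = np.unique(y)
    C = np.stack([X[y == c].mean(0) for c in classes])
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    return dict(classes=classes, centroids=C, X=X, y=y)

def predict(m, Xq, top_m=2, k=5):
    out = []
    for x in Xq:
        shortlist = m["classes"][np.argsort(m["centroids"] @ x)[::-1][:top_m]]
        mask = np.isin(m["y"], shortlist)
        nn = np.argsort(m["X"][mask] @ x)[::-1][:k]  # rerank by example sim
        vals, cnt = np.unique(m["y"][mask][nn], return_counts=True)
        out.append(vals[cnt.argmax()])
    return np.array(out)

# Strict full-train protocol: 5-fold stratified CV on the training set
# only, pick the recipe, retrain on 100% of train, then evaluate ONCE
# on the held-out test split.
def stratified_folds(y, n_folds=5):
    folds = [[] for _ in range(n_folds)]
    for c in np.unique(y):
        for i, j in enumerate(rng.permutation(np.flatnonzero(y == c))):
            folds[i % n_folds].append(j)
    return [np.array(f) for f in folds]

X, y = make_data()
test_idx = rng.permutation(len(y))[:30]      # pretend "official test split"
train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
Xtr, ytr, Xte, yte = X[train_idx], y[train_idx], X[test_idx], y[test_idx]

folds = stratified_folds(ytr)
cv_scores = {}
for k in (1, 3, 5):                          # tune k on train folds only
    accs = []
    for i in range(5):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(5) if j != i])
        m = fit(Xtr[trn], ytr[trn])
        accs.append((predict(m, Xtr[val], k=k) == ytr[val]).mean())
    cv_scores[k] = float(np.mean(accs))
best_k = max(cv_scores, key=cv_scores.get)

final = fit(Xtr, ytr)                        # recipe frozen, full retrain
test_acc = (predict(final, Xte, k=best_k) == yte).mean()
print(f"best k={best_k}, CV={cv_scores[best_k]:.3f}, test acc={test_acc:.3f}")
```

The key discipline the post emphasizes is that the test split never influences any choice: `best_k` is selected purely from the CV scores, and the test set is touched exactly once, after the recipe is frozen.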