More Human, More Efficient: Aligning Annotations with Quantized SLMs

arXiv cs.CL / 4/3/2026


Key Points

  • The paper proposes fine-tuning a small (1.7B-parameter) quantized language model to act as a deterministic, human-aligned evaluator and annotator, addressing the gap left by limited human annotation capacity.
  • It addresses issues with proprietary LLM-based evaluation—such as systematic bias, poor reproducibility, and privacy concerns—by training on limited human-labeled data with a custom multi-dimensional rubric and lightweight augmentation/regularization.
  • The method improves agreement with human experts, increasing Krippendorff’s alpha by 0.23 over the best-performing proprietary LLM baseline.
  • The authors show the training pipeline generalizes to a separate emotion classification task, suggesting the approach is not limited to the original annotation domain.
  • They release the fine-tuning approach publicly and claim that task-specific alignment plus 4-bit quantized fine-tuning yields a strong open-source alternative for evaluation/annotation workflows.

Abstract

As Large Language Model (LLM) capabilities advance, the demand for high-quality annotation of exponentially increasing text corpora has outpaced human capacity, leading to the widespread adoption of LLMs in automatic evaluation and annotation. However, proprietary LLMs often exhibit systematic biases that diverge from human expert consensus, lack reproducibility, and raise data privacy concerns. Our work examines the viability of fine-tuning a 1.7B-parameter quantized Small Language Model on limited human-annotated data to serve as a highly aligned, deterministic evaluator and annotator. By implementing a custom, multi-dimensional rubric framework together with simple augmentation and regularization techniques, the proposed approach achieves higher inter-annotator agreement (a 0.23-point increase in Krippendorff's α) than the best-performing state-of-the-art proprietary LLM. We also demonstrate the generalizability of the proposed training pipeline on a separate emotion classification task. The results show that task-specific alignment and efficient 4-bit quantized fine-tuning provide a superior open-source alternative to proprietary models for evaluation and annotation. Our fine-tuning approach is publicly available at https://github.com/jylee-k/slm-judge.
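The reported gain is expressed in Krippendorff's α, which scores inter-annotator agreement as one minus the ratio of observed to chance-expected disagreement (α = 1 means perfect agreement, 0 means chance level). As a minimal illustration of the metric itself (this is not code from the paper's repository), the nominal-data variant can be computed from a coincidence matrix:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    units: list of lists; each inner list holds the labels assigned to
    one item by its annotators (annotators with missing labels are
    simply omitted, so units may have unequal lengths).
    """
    # Coincidence matrix: each ordered pair of labels from different
    # annotators on the same unit contributes weight 1/(m - 1).
    o = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # a single label carries no pairing information
        for c, k in permutations(values, 2):
            o[(c, k)] += 1.0 / (m - 1)

    # Marginal totals n_c and grand total n.
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    if n <= 1:
        return 1.0

    # Observed vs. chance-expected disagreement (delta = 1 iff c != k).
    d_o = sum(w for (c, k), w in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e
```

For two annotators who always agree the function returns 1.0; systematic disagreement drives it below zero, which is why a 0.23-point increase toward human consensus is a substantial shift on this scale.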