Fine-Tuning A Large Language Model for Systematic Review Screening

arXiv cs.CL / 3/27/2026


Key Points

  • The study investigates why prior LLM approaches to systematic review screening have produced inconsistent results, arguing that prompting alone lacks sufficient context for strong performance.
  • Researchers fine-tuned a small 1.2B-parameter open-weight LLM specifically for title and abstract screening using human ratings from a dataset of 8,500+ records.
  • The fine-tuned model substantially outperformed the base model, achieving an 80.79% improvement in weighted F1 score.
  • On the full dataset of 8,277 studies, the fine-tuned model agreed with the human coder on 86.40% of decisions, with a 91.18% true positive rate and an 86.38% true negative rate.
  • The authors report stable behavior across repeated inference runs with perfect agreement, concluding that fine-tuning may be promising for large-scale systematic review workflows.
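
The paper does not publish its training code, model identifier, or hyperparameters, so the following is only a minimal sketch of how a small open-weight LLM could be fine-tuned as a binary include/exclude screener, assuming a Hugging Face Transformers workflow. The model name, label encoding, and hyperparameters are placeholders, not details from the study.

```python
# Minimal sketch: fine-tuning a small open-weight LLM as an include/exclude
# screener for titles and abstracts. All names and settings are assumptions;
# the paper does not report its model, library, or hyperparameters.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "your-1b-open-weight-model"  # hypothetical placeholder

# Each record pairs a title/abstract with the human screening decision.
records = [
    {"text": "TITLE: ... ABSTRACT: ...", "label": 1},  # 1 = include, 0 = exclude
    # ... in the study, 8,500+ human-rated records
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
if tokenizer.pad_token is None:  # many decoder-only LMs ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = Dataset.from_list(records).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="screening-model", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```

In practice the records would also be split into training and validation sets, and the label distribution balanced or weighted, since screening datasets are typically dominated by exclusions.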

Abstract

Systematic reviews have traditionally taken considerable amounts of human time and energy to complete, in part due to the extensive number of titles and abstracts that must be reviewed for potential inclusion. Recently, researchers have begun to explore how to use large language models (LLMs) to make this process more efficient. However, research to date has shown inconsistent results. We posit this is because prompting alone may not provide sufficient context for the model(s) to perform well. In this study, we fine-tune a small 1.2 billion parameter open-weight LLM specifically for study screening in the context of a systematic review in which humans rated more than 8,500 titles and abstracts for potential inclusion. Our results showed strong performance improvements from the fine-tuned model, with the weighted F1 score improving 80.79% compared to the base model. When run on the full dataset of 8,277 studies, the fine-tuned model had 86.40% agreement with the human coder, a 91.18% true positive rate, an 86.38% true negative rate, and perfect agreement across multiple inference runs. Taken together, our results show that there is promise for fine-tuning LLMs for title and abstract screening in large-scale systematic reviews.
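
The evaluation figures quoted in the abstract (percent agreement with the human coder, true positive rate, true negative rate, and weighted F1) are standard classification metrics. As a generic illustration rather than the authors' evaluation code, they can be computed from per-record human labels and model predictions like this:

```python
# Generic sketch of the metrics reported above, computed from human labels and
# model predictions for each screened record; not the authors' evaluation code.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# 1 = include, 0 = exclude; illustrative placeholder data.
human_labels = [1, 0, 0, 1, 0, 0, 1, 0]
model_preds  = [1, 0, 0, 1, 0, 1, 1, 0]

agreement = accuracy_score(human_labels, model_preds)  # agreement with the human coder
tn, fp, fn, tp = confusion_matrix(human_labels, model_preds, labels=[0, 1]).ravel()
tpr = tp / (tp + fn)  # true positive rate (recall on included studies)
tnr = tn / (tn + fp)  # true negative rate (recall on excluded studies)
weighted_f1 = f1_score(human_labels, model_preds, average="weighted")

print(f"agreement={agreement:.2%}  TPR={tpr:.2%}  TNR={tnr:.2%}  weighted F1={weighted_f1:.4f}")
```

Repeating inference and comparing the prediction vectors across runs would likewise quantify the run-to-run stability the authors report.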