TiAb Review Plugin: A Browser-Based Tool for AI-Assisted Title and Abstract Screening

arXiv cs.AI / 4/13/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • TiAb Review Plugin is an open-source, browser-based (Chrome) tool that enables no-code, serverless AI-assisted screening of titles and abstracts for systematic reviews.
  • The plugin uses Google Sheets as a shared database and supports multi-reviewer collaboration without a dedicated server, while requiring users to provide their own Gemini API key stored locally and encrypted.
  • It offers three screening modes: manual review, LLM batch screening, and ML active learning, including an in-browser TypeScript re-implementation of ASReview’s default active learning pipeline (TF-IDF + Naive Bayes).
  • The ML component matched ASReview’s top-100 rankings exactly across six datasets, and the selected LLM configuration (Gemini 3.0 Flash with a low thinking budget and TopP=0.95) achieved 94–100% recall with 2–15% precision and Work Saved over Sampling at 95% recall (WSS@95) of 48.7–87.3%.
  • The study concludes the extension is functional and practical for integrating both LLM screening and ML active learning into a lightweight, collaborative workflow.
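
The TF-IDF + Naive Bayes active-learning mode mentioned above can be sketched roughly as follows. This is an illustrative re-creation of the general idea (certainty sampling with a multinomial Naive Bayes classifier over TF-IDF features, as in ASReview's default setup), not the plugin's or ASReview's actual code; the function names and the toy corpus are invented for the example.

```typescript
// Illustrative sketch: TF-IDF + multinomial Naive Bayes certainty sampling.
// Not the plugin's actual code; all names and data here are invented.

type Vec = Map<string, number>;

function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z]+/g) ?? [];
}

// TF-IDF vectors over the whole corpus (raw tf * log(n / df)).
function tfidfVectors(texts: string[]): Vec[] {
  const tokenized = texts.map(tokenize);
  const df = new Map<string, number>();
  for (const toks of tokenized)
    for (const t of new Set(toks)) df.set(t, (df.get(t) ?? 0) + 1);
  const n = texts.length;
  return tokenized.map((toks) => {
    const tf = new Map<string, number>();
    for (const t of toks) tf.set(t, (tf.get(t) ?? 0) + 1);
    const vec: Vec = new Map();
    for (const [t, f] of tf) vec.set(t, f * Math.log(n / (df.get(t) ?? 1)));
    return vec;
  });
}

// Multinomial Naive Bayes over TF-IDF weights with Laplace smoothing;
// returns a scorer giving the log-odds of class 1 ("relevant").
function trainNB(X: Vec[], y: number[]): (v: Vec) => number {
  const vocab = new Set<string>();
  X.forEach((v) => v.forEach((_, t) => vocab.add(t)));
  const sums: [Vec, Vec] = [new Map(), new Map()];
  const totals = [0, 0];
  const counts = [0, 0];
  X.forEach((v, i) => {
    const c = y[i];
    counts[c]++;
    v.forEach((w, t) => {
      sums[c].set(t, (sums[c].get(t) ?? 0) + w);
      totals[c] += w;
    });
  });
  const V = vocab.size;
  return (v: Vec) => {
    let score = Math.log((counts[1] + 1) / (counts[0] + 1));
    v.forEach((w, t) => {
      const p1 = ((sums[1].get(t) ?? 0) + 1) / (totals[1] + V);
      const p0 = ((sums[0].get(t) ?? 0) + 1) / (totals[0] + V);
      score += w * Math.log(p1 / p0);
    });
    return score;
  };
}

// One active-learning step: train on the labeled records, then rank the
// unlabeled pool by predicted relevance (certainty / "max" sampling).
function rankUnlabeled(texts: string[], labels: (0 | 1 | undefined)[]): number[] {
  const X = tfidfVectors(texts);
  const trainX: Vec[] = [];
  const trainY: number[] = [];
  const pool: number[] = [];
  labels.forEach((l, i) => {
    if (l === undefined) pool.push(i);
    else { trainX.push(X[i]); trainY.push(l); }
  });
  const score = trainNB(trainX, trainY);
  return pool
    .map((i) => [i, score(X[i])] as const)
    .sort((a, b) => b[1] - a[1])
    .map(([i]) => i);
}

// Toy corpus: two labeled seed records, two unlabeled.
const texts = [
  "systematic review screening of titles",      // labeled relevant
  "deep learning for image recognition",        // labeled irrelevant
  "screening titles and abstracts for a systematic review",
  "convolutional networks for images",
];
const order = rankUnlabeled(texts, [1, 0, undefined, undefined]);
```

In a real screening loop this step would repeat: the reviewer labels the top-ranked record, the model retrains, and the pool is re-ranked, which is what lets relevant records surface early.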

Abstract

Background: Server-based screening tools impose subscription costs, while open-source alternatives require coding skills.

Objectives: We developed a browser extension that provides no-code, serverless artificial intelligence (AI)-assisted title and abstract screening and examined its functionality.

Methods: TiAb Review Plugin is an open-source Chrome browser extension (available at https://chromewebstore.google.com/detail/tiab-review-plugin/alejlnlfflogpnabpbplmnojgoeeabij). It uses Google Sheets as a shared database, requiring no dedicated server and enabling multi-reviewer collaboration. Users supply their own Gemini API key, stored locally and encrypted. The tool offers three screening modes: manual review, large language model (LLM) batch screening, and machine learning (ML) active learning. For ML evaluation, we re-implemented the default ASReview active learning algorithm (TF-IDF with Naive Bayes) in TypeScript to enable in-browser execution, and verified equivalence against the original Python implementation using 10-fold cross-validation on six datasets. For LLM evaluation, we compared 16 parameter configurations across two model families on a benchmark dataset, then validated the optimal configuration (Gemini 3.0 Flash, low thinking budget, TopP=0.95) with a sensitivity-oriented prompt on five public datasets (1,038 to 5,628 records, 0.5 to 2.0 percent prevalence).

Results: The TypeScript classifier produced top-100 rankings 100 percent identical to the original ASReview across all six datasets. For LLM screening, recall was 94 to 100 percent with precision of 2 to 15 percent, and Work Saved over Sampling at 95 percent recall (WSS@95) ranged from 48.7 to 87.3 percent.

Conclusions: We developed a functional browser extension that integrates LLM screening and ML active learning into a no-code, serverless environment, ready for practical use in systematic review screening.
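
The WSS@95 metric reported in the results can be computed as in the sketch below: rank records by model score, count how many must be screened to reach the target recall, and compare with screening in random order. This follows the standard metric definition; the function name and the toy data are invented for illustration and are not from the paper.

```typescript
// Minimal sketch of Work Saved over Sampling at a recall target r:
//   WSS@r = (N - screened) / N - (1 - r)
// where "screened" is how many top-ranked records must be read to
// recover the target fraction of relevant records.

function wssAtRecall(scores: number[], labels: (0 | 1)[], target = 0.95): number {
  const n = labels.length;
  const relevant = labels.filter((l) => l === 1).length;
  const needed = Math.ceil(target * relevant);
  // Screen records in descending order of model score.
  const order = labels.map((_, i) => i).sort((a, b) => scores[b] - scores[a]);
  let found = 0;
  let screened = 0;
  for (const i of order) {
    screened++;
    if (labels[i] === 1) found++;
    if (found >= needed) break;
  }
  return (n - screened) / n - (1 - target);
}

// Toy example: 10 records, 2 relevant, both ranked near the top, so
// only 3 of 10 records must be screened to reach 95% recall.
const wss = wssAtRecall(
  [10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
  [1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
); // ≈ 0.65
```

A WSS@95 of 0.65 in this toy case means the ranking saves 65 percentage points of screening work relative to reading records in random order, which is how the paper's 48.7 to 87.3 percent range should be read.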