VIGIL: An Extensible System for Real-Time Detection and Mitigation of Cognitive Bias Triggers

arXiv cs.CL / 4/7/2026


Key Points

  • The paper introduces VIGIL, an extensible browser extension designed to detect and mitigate cognitive-bias triggers in real time while users read online content.
  • VIGIL performs in-situ, scroll-synced detection and uses LLM-powered reformulation that is fully reversible, aiming to reduce manipulation and persuasion effects.
  • The system supports privacy-tiered inference, ranging from fully offline processing to optional cloud-based inference.
  • The extension is built to be extensible with third-party plugins, and the authors report that several plugins validated against NLP benchmarks are already included.
  • VIGIL is presented as the first tool specifically targeting cognitive-bias trigger detection/mitigation, and it is open-sourced on GitHub.
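
The plugin extensibility and reversible reformulation described above can be sketched as follows. All names here (`BiasPlugin`, `ReversibleReformulator`, etc.) are hypothetical illustrations for the general idea, not VIGIL's actual API.

```typescript
// Hypothetical plugin interface: a plugin reports character spans of
// detected bias triggers in a piece of text. Illustrative only.
interface BiasPlugin {
  name: string;
  detect(text: string): Array<{ start: number; end: number; label: string }>;
}

// A toy plugin flagging absolute quantifiers, one common trigger pattern.
const absoluteQuantifiers: BiasPlugin = {
  name: "absolute-quantifiers",
  detect(text) {
    const spans: Array<{ start: number; end: number; label: string }> = [];
    const re = /\b(always|never|everyone|nobody)\b/gi;
    let m: RegExpExecArray | null;
    while ((m = re.exec(text)) !== null) {
      spans.push({ start: m.index, end: m.index + m[0].length, label: "absolute" });
    }
    return spans;
  },
};

// Full reversibility: keep the original text keyed by element id so the
// user can toggle back from the reformulated version at any time.
class ReversibleReformulator {
  private originals = new Map<string, string>();

  apply(id: string, original: string, reformulated: string): string {
    this.originals.set(id, original);
    return reformulated;
  }

  revert(id: string): string | undefined {
    return this.originals.get(id);
  }
}
```

In a real extension, a content script would run detection as paragraphs scroll into view and swap reformulated text into the DOM, keeping the original on hand for reversal.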

Abstract

The rise of generative AI is posing increasing risks to online information integrity and civic discourse. Most concretely, such risks can materialise in the form of mis- and disinformation. As a mitigation, media-literacy and transparency tools have been developed to address the factuality of information and the reliability and ideological leaning of information sources. However, a subtler but possibly no less harmful threat to civic discourse is the use of persuasion or manipulation that exploits human cognitive biases and related cognitive limitations. To the best of our knowledge, no tools exist to directly detect and mitigate the presence of triggers of such cognitive biases in online information. We present VIGIL (VIrtual GuardIan angeL), the first browser extension for real-time cognitive bias trigger detection and mitigation, providing in-situ scroll-synced detection, LLM-powered reformulation with full reversibility, and privacy-tiered inference from fully offline to cloud. VIGIL is built to be extensible with third-party plugins, and several plugins rigorously validated against NLP benchmarks are already included. It is open-sourced at https://github.com/aida-ugent/vigil.
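
The "privacy-tiered inference from fully offline to cloud" idea can be illustrated with a minimal routing sketch. The tier names, interfaces, and detectors below are assumptions for illustration, not VIGIL's actual design.

```typescript
// Hypothetical privacy tiers: "offline" permits only local processing,
// "cloud" additionally allows network-backed inference. Illustrative only.
type PrivacyTier = "offline" | "cloud";

interface Detector {
  tier: PrivacyTier;
  detect(text: string): string[]; // labels of triggers found
}

// Fully offline detector: a local keyword heuristic, no network access.
const offlineDetector: Detector = {
  tier: "offline",
  detect: (text) => (/\bshocking\b/i.test(text) ? ["sensationalism"] : []),
};

// Pick the last (most capable) detector the user's privacy tier permits,
// falling back to the offline heuristic when nothing else is allowed.
function pickDetector(userTier: PrivacyTier, available: Detector[]): Detector {
  const allowed = available.filter(
    (d) => userTier === "cloud" || d.tier === "offline"
  );
  return allowed[allowed.length - 1] ?? offlineDetector;
}
```

Under this scheme, a user who selects the offline tier never routes text to a cloud backend, regardless of which detectors are installed.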
