CatRAG: Functor-Guided Structural Debiasing with Retrieval Augmentation for Fair LLMs
arXiv cs.CL / 3/24/2026
Key Points
- The paper introduces CatRAG, a debiasing framework for large language models that combines functor-guided structural debiasing with retrieval-augmented generation (RAG) to better control bias across the pipeline.
- The functor component draws on category theory to define a structure-preserving projection in embedding space that suppresses bias-associated directions while aiming to retain task-relevant semantics.
- Experiments on the BBQ question-answering benchmark across three open-source LLMs (Llama-3, GPT-OSS, and Gemma-3) show state-of-the-art performance, with accuracy improvements of up to roughly 40% over the base models.
- The method also substantially reduces bias scores, bringing them to near zero from roughly 60% for the base models across gender, nationality, race, and intersectional subgroups.
- The authors argue that prior debiasing approaches often operate at a single stage and can be brittle under distribution shifts, motivating their dual-pronged, structure-preserving pipeline design.
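The embedding-space projection described above can be illustrated with a minimal sketch. Note this is a generic orthogonal-complement debiasing projection for illustration only, not the paper's actual functor construction; the function name, the toy data, and the choice of NumPy are all assumptions.

```python
import numpy as np

def debias_projection(embeddings: np.ndarray, bias_dirs: np.ndarray) -> np.ndarray:
    """Illustrative sketch: project embeddings onto the orthogonal
    complement of bias-associated directions, zeroing their components
    along those directions while leaving the rest of the space untouched.
    embeddings: (n, d) row vectors; bias_dirs: (k, d) bias directions.
    """
    # Orthonormalize the bias directions (QR on their transpose gives
    # an orthonormal basis q of the bias subspace as columns).
    q, _ = np.linalg.qr(bias_dirs.T)          # q: (d, k)
    # I - q q^T is the projector onto the orthogonal complement.
    projector = np.eye(q.shape[0]) - q @ q.T  # (d, d), symmetric
    return embeddings @ projector

# Toy usage: one bias direction along the first axis of a 4-d space.
emb = np.array([[1.0, 2.0, 0.0, 1.0],
                [0.5, 0.0, 1.0, 0.0]])
bias = np.array([[1.0, 0.0, 0.0, 0.0]])
cleaned = debias_projection(emb, bias)
# The component along the bias direction is removed; the other
# coordinates are preserved exactly.
```

The paper's contribution, as summarized above, is to constrain such a projection so that it is structure-preserving (functorial) rather than an ad-hoc linear edit, and to pair it with retrieval augmentation.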