LAG-XAI: A Lie-Inspired Affine Geometric Framework for Interpretable Paraphrasing in Transformer Latent Spaces
arXiv cs.CL / 4/8/2026
Tags: Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes LAG-XAI, a Lie-inspired affine geometric framework that treats paraphrasing as a continuous affine transformation (a geometric flow) in Transformer embedding/latent spaces, rather than as discrete word swaps.
- It introduces a computationally efficient mean-field approximation inspired by local Lie group actions, decomposing paraphrase transitions into interpretable components: rotation, deformation, and translation.
- Experiments on the PIT-2015 noisy Twitter corpus (with Sentence-BERT embeddings) show a “linear transparency” effect: the affine operator reaches an AUC of 0.7713, retaining about 80% of a non-linear baseline’s effective classification capacity.
- The method identifies geometric invariants such as a stable reconfiguration angle (~27.84°) and near-zero deformation (suggesting local isometry), and demonstrates cross-corpus generalization via validation on the TURL dataset.
- As a practical application, LAG-XAI is used for LLM hallucination detection, achieving 95.3% factual distortion detection on HaluEval via a “cheap geometric check” for deviations beyond a semantic corridor.
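The geometric ideas above can be illustrated with a toy NumPy sketch. This is not the paper's code: `fit_affine`, `polar_parts`, and `outside_corridor` are hypothetical helper names, and the 2-D synthetic "embeddings" stand in for Sentence-BERT vectors. The sketch fits a least-squares affine map between source and paraphrase embeddings, splits the linear part into a rotation and a deformation via polar decomposition (near-identity deformation corresponds to the local-isometry observation), and flags pairs whose residual leaves a fixed "semantic corridor" tolerance, in the spirit of the cheap geometric check described for hallucination detection.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(X, Y):
    # Least-squares fit of the affine map Y ≈ X @ A + b.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
    return W[:-1], W[-1]

def polar_parts(A):
    # Polar decomposition A = R @ P: R orthogonal (rotation),
    # P symmetric PSD (deformation/stretch). P ≈ I suggests local isometry.
    U, s, Vt = np.linalg.svd(A)
    R = U @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return R, P

def outside_corridor(x, y, A, b, tol=0.5):
    # "Semantic corridor" check (sketch): a pair whose residual under the
    # fitted affine map exceeds tol is flagged as a potential distortion.
    return np.linalg.norm(y - (x @ A + b)) > tol

# Toy 2-D "embeddings": paraphrases Y are a known 27.84° rotation of the
# sources X plus a translation (mirroring the paper's reported angle).
theta = np.deg2rad(27.84)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(200, 2))
Y = X @ R_true + np.array([0.5, -0.2])

A, b = fit_affine(X, Y)
R, P = polar_parts(A)
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # recovered rotation angle
deformation = np.linalg.norm(P - np.eye(2))        # distance from isometry
print(round(angle, 2), round(deformation, 4))
```

Because the toy data is generated by a pure rotation plus translation, the fit recovers the 27.84° angle with near-zero deformation; on real paraphrase embeddings the deformation term would quantify how far the transition departs from an isometry.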