FairNVT: Improving Fairness via Noise Injection in Vision Transformers
arXiv cs.CV / 4/21/2026
Key Points
- FairNVT is proposed as a lightweight debiasing framework for pretrained transformer-based encoders that targets both representation-level and prediction-level fairness without sacrificing task accuracy.
- The paper argues representation and prediction fairness are closely linked, and that suppressing sensitive information in learned embeddings can directly lead to fairer downstream predictions.
- FairNVT uses lightweight adapters to learn task-relevant and sensitive embeddings, injects calibrated Gaussian noise into the sensitive embedding, and then fuses it with the task representation.
- It further employs orthogonality constraints and fairness regularization to reduce sensitive-attribute leakage and improve fairness metrics such as demographic parity and equalized odds.
- Experiments across three vision-and-language datasets show that FairNVT lowers sensitive-attribute attacker accuracy while maintaining strong task performance, and it is compatible with many pretrained transformer encoders.
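The pipeline described in the key points — project features through two lightweight adapters, inject calibrated Gaussian noise into the sensitive branch, fuse the branches, and penalize overlap between them — can be sketched as below. This is a minimal illustration, not the paper's implementation: the additive fusion, the linear adapters, the `sigma` noise scale, and the cross-covariance form of the orthogonality penalty are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(x, W, b):
    """Lightweight linear adapter on top of frozen encoder features."""
    return x @ W + b

def fair_nvt_forward(h, Wt, bt, Ws, bs, sigma=0.1, train=True, rng=rng):
    """Hypothetical forward pass in the spirit of FairNVT:
    split encoder features into a task embedding and a sensitive
    embedding, add Gaussian noise to the sensitive branch during
    training, then fuse the two (additive fusion is an assumption)."""
    z_task = adapter(h, Wt, bt)   # task-relevant embedding
    z_sens = adapter(h, Ws, bs)   # sensitive-attribute embedding
    if train:
        # calibrated noise injection; sigma is a placeholder value
        z_sens = z_sens + rng.normal(0.0, sigma, size=z_sens.shape)
    return z_task + z_sens, z_task, z_sens

def orthogonality_penalty(z_task, z_sens):
    """One possible orthogonality constraint: penalize the squared
    Frobenius norm of the cross-covariance between the two branches,
    pushing task and sensitive embeddings toward decorrelation."""
    zt = z_task - z_task.mean(axis=0, keepdims=True)
    zs = z_sens - z_sens.mean(axis=0, keepdims=True)
    cross = zt.T @ zs / zt.shape[0]
    return float((cross ** 2).sum())
```

A fairness regularizer (e.g. a demographic-parity gap on the downstream head) would be added to this penalty and the task loss; the paper's actual objective may differ.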