Data-Free Contribution Estimation in Federated Learning using Gradient von Neumann Entropy
arXiv cs.AI / 4/27/2026
Key Points
- The paper proposes a privacy-preserving, data-free way to estimate each client’s contribution in federated learning using the matrix (spectral) von Neumann entropy of final-layer update signals.
- It introduces two practical aggregation schemes: SpectralFed (entropy-based aggregation weights) and SpectralFuse (entropy combined with class-specific alignment via a rank-adaptive Kalman filter), aimed at improving per-round stability.
- Experiments on CIFAR-10/100 and naturally partitioned FEMNIST and FedISIC under diverse non-IID settings show high correlation between the entropy-derived contribution scores and standalone client accuracy without using validation data or client metadata.
- The method is benchmarked against existing data-free contribution estimation baselines, supporting spectral entropy as a useful indicator for fair client importance and reward assignment.
- Overall, the approach aims to remove reliance on server-side validation datasets or potentially manipulable self-reported client information.
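The spectral entropy score at the heart of the key points can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the client's final-layer update is treated as a matrix whose squared singular values, normalized to sum to one, form the spectrum over which von Neumann entropy is computed, and it uses a hypothetical softmax to turn entropies into aggregation weights.

```python
import numpy as np

def von_neumann_entropy(update, eps=1e-12):
    """Spectral (von Neumann) entropy of a client's final-layer update.

    The update matrix is mapped to a density-like spectrum by normalizing
    its squared singular values to sum to 1; the entropy is then
    -sum(p * log p) over that spectrum. This construction is an assumption
    for illustration; the paper's exact definition may differ.
    """
    s = np.linalg.svd(update, compute_uv=False)
    p = s ** 2
    total = p.sum()
    if total < eps:           # all-zero update: define entropy as 0
        return 0.0
    p = p / total
    p = p[p > eps]            # drop numerically zero eigenvalues
    return float(-(p * np.log(p)).sum())

# Hypothetical SpectralFed-style weighting: softmax over client entropies.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=(10, 32)) for _ in range(3)]
entropies = np.array([von_neumann_entropy(u) for u in client_updates])
weights = np.exp(entropies) / np.exp(entropies).sum()  # sums to 1
```

A rank-1 update (a degenerate, low-information signal) yields entropy 0, while a well-spread spectrum approaches the maximum log(min(m, n)), which is the intuition behind using entropy as a data-free contribution score.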