SOMP: Scalable Gradient Inversion for Large Language Models via Subspace-Guided Orthogonal Matching Pursuit
arXiv cs.LG / 3/18/2026
📰 News · Signals & Early Trends · Models & Research
Key Points
- SOMP reframes text recovery from aggregated gradients as a sparse signal recovery problem (see the sketch after this list) and introduces a scalable framework for gradient inversion on LLMs.
- It exploits head-wise geometric structure in transformer gradients and sample-level sparsity to progressively narrow the search space without exhaustive search.
- In experiments across multiple LLM families, model scales, and five languages, SOMP consistently outperforms prior methods in the aggregated-gradient regime.
- For long sequences at batch size B=16, SOMP achieves substantially higher reconstruction fidelity at competitive computational cost, and it stays effective under extreme aggregation up to B=128, implying that privacy leakage can persist even at large batch sizes.
- The work highlights privacy risks in gradient-sharing scenarios and underscores the need for stronger defenses against gradient inversion attacks.
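The sparse-recovery framing in the first point can be made concrete with a short sketch. The code below is plain orthogonal matching pursuit over a token-embedding dictionary, under the simplifying assumption that an aggregated embedding-layer gradient is roughly a sum of the embeddings of the tokens present in the batch. It is not the paper's subspace-guided SOMP; `recover_tokens`, the vocabulary size, and the sparsity level `k` are all illustrative assumptions.

```python
# Minimal sketch: generic orthogonal matching pursuit (OMP) over a
# token-embedding dictionary. Assumes the aggregated gradient `grad`
# behaves like a sparse sum of embedding rows; this is an illustration
# of the sparse-recovery framing, not the paper's SOMP algorithm.
import numpy as np

def recover_tokens(grad: np.ndarray, emb: np.ndarray, k: int) -> list[int]:
    """Greedily pick the k rows of `emb` that best explain `grad`.

    grad: (d,) aggregated gradient treated as a sparse signal.
    emb:  (V, d) dictionary whose rows are token embeddings.
    k:    assumed sparsity level (number of distinct tokens).
    """
    residual = grad.copy()
    support: list[int] = []
    for _ in range(k):
        # Correlate the residual with every dictionary atom, take the best.
        scores = emb @ residual
        support.append(int(np.argmax(np.abs(scores))))
        # Re-fit coefficients on the current support by least squares and
        # subtract the explained part, keeping the residual orthogonal
        # to every atom selected so far.
        A = emb[support].T                      # shape (d, |support|)
        coef, *_ = np.linalg.lstsq(A, grad, rcond=None)
        residual = grad - A @ coef
    return support

# Toy usage: a "gradient" built as the sum of three token embeddings.
rng = np.random.default_rng(0)
E = rng.standard_normal((1000, 64))            # hypothetical 1000-token vocab
true_tokens = [3, 42, 777]
g = E[true_tokens].sum(axis=0)
print(sorted(recover_tokens(g, E, k=3)))       # -> [3, 42, 777]
```

Per the key points above, SOMP's contribution is making this kind of greedy selection tractable at LLM scale, using head-wise gradient geometry and sample-level sparsity to prune the candidate set rather than correlating against the full vocabulary at every step.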
Related Articles
The massive shift toward edge computing and local processing
Dev.to

Self-Refining Agents in Spec-Driven Development
Dev.to

has anyone tried this? Flash-MoE: Running a 397B Parameter Model on a Laptop
Reddit r/LocalLLaMA

M2.7 open weights coming in ~2 weeks
Reddit r/LocalLLaMA

MiniMax M2.7 Will Be Open Weights
Reddit r/LocalLLaMA