SOMP: Scalable Gradient Inversion for Large Language Models via Subspace-Guided Orthogonal Matching Pursuit
arXiv cs.LG / March 18, 2026
Key Points
- SOMP reframes text recovery from aggregated gradients as a sparse signal recovery problem and introduces a scalable framework to tackle gradient inversion for LLMs.
- It exploits head-wise geometric structure in transformer gradients and sample-level sparsity to progressively narrow the search space without exhaustive search.
- In experiments across multiple LLM families, model scales, and five languages, SOMP consistently outperforms prior methods in the aggregated-gradient regime.
- For long sequences at batch size B=16, SOMP achieves substantially higher reconstruction fidelity while staying computationally competitive, and it remains effective under extreme aggregation up to B=128, showing that privacy leakage can persist even at large batch sizes.
- The work highlights privacy risks in gradient-sharing scenarios and underscores the need for stronger defenses against gradient inversion attacks.
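SOMP's name points to orthogonal matching pursuit (OMP), the classic greedy algorithm for sparse signal recovery that the paper adapts to the gradient-inversion setting. The paper's subspace-guided, head-wise variant is not reproduced here; as an illustrative sketch only, the generic OMP loop below shows the underlying idea: repeatedly pick the dictionary atom most correlated with the residual, re-fit on the selected support, and stop after k atoms. All names and the toy dictionary are assumptions for this demo, not the paper's code.

```python
import numpy as np

def omp(A, y, k):
    """Generic orthogonal matching pursuit: recover a k-sparse x with A @ x ≈ y.

    Illustrative sketch only -- not the paper's subspace-guided variant.
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares re-fit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy demo: a 3-sparse signal in a random overcomplete dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(256)
x_true[[10, 40, 200]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = omp(A, y, k=3)
print("recovered support:", np.flatnonzero(x_hat))
```

In the gradient-inversion analogy, the "dictionary" plays the role of candidate token contributions and the "measurement" is the aggregated gradient; SOMP's contribution is narrowing that candidate set via head-wise gradient geometry rather than searching exhaustively.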
Related Articles

Reduce the burden on veterans of training junior staff: generating "ladder diagrams" for PLC control with AI
日経XTECH

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI
TechCrunch

AI Can Write Your Code. Who's Testing Your Thinking?
Dev.to

‘Uncanny Valley’: Nvidia’s ‘Super Bowl of AI,’ Tesla Disappoints, and Meta’s VR Metaverse ‘Shutdown’
Wired
[R] Weekly digest: arXiv AI security papers translated for practitioners -- Cascade (cross-stack CVE+Rowhammer attacks on compound AI), LAMLAD (dual-LLM adversarial ML, 97% evasion), OpenClaw (4 vuln classes in agent frameworks)
Reddit r/MachineLearning