Random Walk Learning and the Pac-Man Attack
arXiv stat.ML / 4/16/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies a stealthy adversarial “Pac-Man” attack on random-walk (RW) based distributed and decentralized learning, where a malicious node probabilistically kills any RW that visits it and thereby silently halts learning.
- It introduces a fully decentralized defense called the Average Crossing (AC) algorithm that duplicates random walks to prevent “RW extinction” under the attack.
- The authors prove that, under AC, the RW population remains almost surely bounded and that RW-based stochastic gradient descent (SGD) still converges under the Pac-Man attack, with a quantifiable deviation from the true optimum.
- Extensive experiments on synthetic and real datasets confirm the theory and reveal a phase transition in extinction probability depending on the RW duplication threshold.
- The work also provides additional theoretical intuition by analyzing a simplified variant of AC to explain the observed phase-transition behavior.
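To make the attack-and-defense dynamic concrete, here is a minimal simulation sketch. It is not the paper's AC algorithm: the graph (a ring), the kill probability, and the duplication rule (spawn a copy of a surviving walk every `dup_threshold` steps, capped at `max_walks`) are all illustrative stand-ins for the Average Crossing criterion, chosen only to show how killing walks drives extinction and how duplication counteracts it while keeping the population bounded.

```python
import random

def simulate(n_nodes=20, pacman=0, kill_prob=0.3, steps=2000,
             duplicate=False, dup_threshold=50, max_walks=64, seed=0):
    """Toy model of random walks on a ring with one 'Pac-Man' node.

    The attacker at node `pacman` absorbs a visiting walk with
    probability `kill_prob`. With duplicate=True, a hypothetical
    duplication rule (NOT the paper's AC algorithm) copies one
    surviving walk every `dup_threshold` steps, up to `max_walks`.
    Returns the number of walks alive after `steps` steps.
    """
    rng = random.Random(seed)
    walks = [rng.randrange(1, n_nodes)]    # one RW, started off the attacker
    since_dup = 0
    for _ in range(steps):
        survivors = []
        for pos in walks:
            pos = (pos + rng.choice([-1, 1])) % n_nodes  # ring step
            if pos == pacman and rng.random() < kill_prob:
                continue                   # walk silently absorbed
            survivors.append(pos)
        walks = survivors
        if not walks:
            return 0                       # extinction: learning halts
        since_dup += 1
        if duplicate and since_dup >= dup_threshold and len(walks) < max_walks:
            walks.append(rng.choice(walks))  # duplicate a surviving walk
            since_dup = 0
    return len(walks)

# A lone RW tends to die out under the attack; with duplication the
# population typically survives yet stays bounded by max_walks.
print(simulate(duplicate=False), simulate(duplicate=True))
```

Varying `dup_threshold` in this toy model is the natural place to look for the extinction-probability phase transition the paper reports: duplicate too rarely and walks die faster than they are replaced; duplicate often enough and the population persists.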