Sparse Goodness: How Selective Measurement Transforms Forward-Forward Learning
arXiv cs.AI / 4/16/2026
Key Points
- The paper analyzes the Forward-Forward (FF) learning algorithm’s “goodness function” design choices, focusing on which activations to measure and how to aggregate them layer-wise.
- It proposes a sparse “top-k goodness” metric that evaluates only the k most active neurons, yielding a large accuracy improvement on Fashion-MNIST over the standard sum-of-squares (SoS) baseline (see the first sketch after this list).
- It introduces “entmax-weighted energy,” which learns a soft sparse alternative to hard top-k selection via an alpha-entmax transformation and delivers further accuracy gains (see the second sketch below).
- By combining sparse goodness with a separate label feature forwarding approach (injecting class hypotheses at every layer through dedicated projections; see the third sketch below), the authors reach 87.1% accuracy on Fashion-MNIST with a 4x2000 architecture (four hidden layers of 2,000 units each).
- Controlled experiments across multiple goodness functions, architectures, and sparsity settings suggest that adaptive sparsity (alpha ≈ 1.5) is the most important design factor for FF networks.
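The standard FF goodness is a layer's sum of squared activations; top-k goodness sums squares over only the k most active units. A minimal PyTorch sketch of both, where `k` is an illustrative choice rather than the paper's exact setting:

```python
import torch

def goodness_sos(h: torch.Tensor) -> torch.Tensor:
    """Standard Forward-Forward goodness: sum of squared activations.
    h: (batch, units) layer activations -> (batch,) score per example."""
    return h.pow(2).sum(dim=1)

def goodness_topk(h: torch.Tensor, k: int = 100) -> torch.Tensor:
    """Sparse top-k goodness: sum of squares over only the k most
    active units, ignoring the rest of the layer."""
    topk_sq, _ = h.pow(2).topk(k, dim=1)  # k largest squared activations
    return topk_sq.sum(dim=1)
```

In FF training, each layer's goodness is pushed above a threshold for positive (real) data and below it for negative data, e.g. via `torch.sigmoid(goodness - theta)` and a binary cross-entropy loss; the goodness function only changes which activations feed that score.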
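“Entmax-weighted energy” replaces the hard top-k cut with a soft selection: alpha-entmax, unlike softmax, assigns exactly zero weight to low-scoring units, so the weighting is sparse yet differentiable. One plausible reading, sketched with the `entmax` package's `entmax15` (alpha = 1.5, the value the experiments favor); weighting the squared activations by the entmax of the activations themselves is an assumption, not a detail stated here:

```python
import torch
from entmax import entmax15  # pip install entmax; the alpha = 1.5 transform

def goodness_entmax(h: torch.Tensor) -> torch.Tensor:
    """Entmax-weighted energy: a soft, sparse alternative to hard
    top-k selection. entmax15 maps activations to a probability
    distribution with many exact zeros, which weights the energy."""
    w = entmax15(h, dim=1)            # sparse weights, rows sum to 1
    return (w * h.pow(2)).sum(dim=1)  # weighted sum of squared activations
```

For other values of alpha, the same package offers `entmax_bisect(h, alpha=a, dim=1)`, which turns alpha into a tunable sparsity knob, matching the paper's framing of adaptive sparsity as the key design factor.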
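Label feature forwarding supplies the class hypothesis to every layer instead of overlaying the label only on the input image, as in Hinton's original FF setup. A minimal sketch under the assumption that “dedicated projections” means each layer adds its own linear projection of the one-hot label to its normalized input; the additive combination and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class FFLayerWithLabel(nn.Module):
    """One FF layer that receives the class hypothesis through a
    dedicated label projection, so every layer sees the label."""

    def __init__(self, in_dim: int, out_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.label_proj = nn.Linear(num_classes, in_dim, bias=False)

    def forward(self, x: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
        # FF convention: length-normalize incoming features so a layer
        # cannot read the previous layer's goodness, only its direction.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        x = x + self.label_proj(y_onehot)  # inject the class hypothesis
        return torch.relu(self.fc(x))

# The paper's best configuration: four hidden layers of 2,000 units
# (784 = flattened 28x28 Fashion-MNIST pixels, 10 classes).
layers = nn.ModuleList(
    [FFLayerWithLabel(784, 2000, 10)]
    + [FFLayerWithLabel(2000, 2000, 10) for _ in range(3)]
)
```

At test time, FF scores each candidate label by running the network once per class hypothesis and summing the per-layer goodness; the label with the highest total is the prediction.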