Structural Generalization on SLOG without Hand-Written Rules
arXiv cs.AI / 4/30/2026
Key Points
- The paper addresses structural generalization in semantic parsing, where models must apply learned compositional rules to new structural combinations without relying on hand-written algebraic rules.
- It proposes a neural cellular automaton (NCA) with a discrete bottleneck that learns compositional operations purely from data via local iterations, avoiding hand-crafted compositional rules.
- On the SLOG benchmark, the method reaches 100% type-exact match on 11 out of 17 structural generalization categories, including cases where AM-Parser performs very poorly (0–74%).
- The authors find that failures concentrate in two specific mechanisms: wh-extraction contexts interacting with reduced verb types, and modifiers appearing on the subject side of verbs.
- An analysis of CCG structural features shows that intermediate scores arise from mixing distinct structural patterns rather than from partial generalization: successes align with operations covered during training, while failures correspond to directed operations absent from training.
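To make the architecture in the key points concrete, the following is a minimal sketch of one neural cellular automaton update with a discrete bottleneck: each cell reads its local neighborhood, applies a small learned transition, and is then snapped to the nearest entry of a finite codebook. All names, shapes, and the nearest-neighbor quantization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def nca_step(state, w_local, codebook):
    """One hypothetical NCA update (not the paper's exact method):
    each cell sees itself plus its left/right neighbors, applies a
    learned linear map, then is quantized to the nearest codebook
    vector -- the discrete bottleneck."""
    # Gather each cell's local neighborhood (zero-padded at the edges).
    padded = np.pad(state, ((1, 1), (0, 0)))
    neigh = np.concatenate([padded[:-2], padded[1:-1], padded[2:]], axis=1)
    # Local transition; a stand-in for a small learned network.
    updated = np.tanh(neigh @ w_local)
    # Discrete bottleneck: replace each cell with its nearest code.
    dists = ((updated[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[dists.argmin(axis=1)]

# Toy run: 5 cells, 4-dim states, 8 discrete codes, 3 local iterations.
rng = np.random.default_rng(0)
state = rng.standard_normal((5, 4))
w_local = rng.standard_normal((12, 4)) * 0.5
codebook = rng.standard_normal((8, 4))
for _ in range(3):
    state = nca_step(state, w_local, codebook)
```

Iterating the step means compositional operations can emerge purely from repeated local updates over discrete states, without any hand-written combination rules.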