Filling in the Mechanisms: How do LMs Learn Filler-Gap Dependencies under Developmental Constraints?
arXiv cs.CL / 4/17/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies whether language models develop transferable representations for filler-gap dependencies across syntactic constructions such as wh-questions and topicalization.
- It uses Distributed Alignment Search (DAS) on LMs trained with different amounts of data from the BabyLM challenge to probe how learning behaves under data-quantity constraints (a minimal sketch of the DAS idea appears after this list).
- The findings indicate that shared, but item-sensitive, mechanisms can emerge even with limited training data.
- However, the models still need substantially more data than human learners to reach comparable generalizations, which the authors read as support for language-specific inductive biases in theories of acquisition.
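
For readers unfamiliar with DAS, the core move is an interchange intervention in a learned rotated basis: rotate a hidden state with a trained orthogonal matrix, swap a low-dimensional slice of the rotated coordinates from a "source" forward pass into a "base" forward pass, and ask whether the base run now behaves like the source run. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the paper's code; the class name `DASIntervention`, the toy dimensions, and the hook-based extraction mentioned in the comments are all assumptions.

```python
import torch
import torch.nn as nn


class DASIntervention(nn.Module):
    """Learn an orthogonal rotation of a hidden state and swap the first
    `subspace_dim` rotated coordinates from a source run into a base run.

    If the swap makes the base run behave like the source run (e.g. it now
    licenses a gap where the source sentence did), the learned subspace is
    evidence for a causal filler-gap mechanism at that layer/position.
    """

    def __init__(self, hidden_dim: int, subspace_dim: int):
        super().__init__()
        self.k = subspace_dim
        # The parametrization keeps the weight orthogonal during training.
        self.rotate = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_dim, hidden_dim, bias=False)
        )

    def forward(self, h_base: torch.Tensor, h_source: torch.Tensor) -> torch.Tensor:
        R = self.rotate.weight                 # (d, d), orthogonal
        z_base = h_base @ R.T                  # rotate into the learned basis
        z_source = h_source @ R.T
        # Interchange intervention: take the candidate subspace from the
        # source run, keep everything else from the base run.
        z = torch.cat([z_source[..., :self.k], z_base[..., self.k:]], dim=-1)
        return z @ R                           # rotate back to model space


# Toy usage: in practice h_base/h_source would come from two forward passes
# of a frozen LM (captured with forward hooks at a chosen layer and token
# position), and R would be trained so the intervened run matches the
# counterfactual output.
das = DASIntervention(hidden_dim=16, subspace_dim=4)
h_new = das(torch.randn(2, 16), torch.randn(2, 16))
print(h_new.shape)  # torch.Size([2, 16])
```

The rotation matters because the causally relevant variable need not align with individual neurons; learning `R` lets the search find a distributed subspace, which is what makes it possible to test whether wh-questions and topicalization reuse the same filler-gap mechanism.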


