AI Navigate

From Formal Language Theory to Statistical Learning: Finite Observability of Subregular Languages

arXiv cs.CL / 3/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The authors prove that all standard subregular language classes are linearly separable when represented by their deciding predicates, establishing finite observability and guaranteeing learnability with simple linear models.
  • Synthetic experiments show perfect separability in noise-free conditions, while real-data experiments on English morphology indicate learned features align with well-known linguistic constraints.
  • The work argues that the subregular hierarchy provides a rigorous and interpretable foundation for modeling natural language structure, bridging formal language theory and practical NLP.
  • The authors provide code for their experiments on GitHub, enabling reproducibility and potential adoption in related NLP modeling efforts.

Abstract

We prove that all standard subregular language classes are linearly separable when represented by their deciding predicates. This establishes finite observability and guarantees learnability with simple linear models. Synthetic experiments confirm perfect separability under noise-free conditions, while real-data experiments on English morphology show that learned features align with well-known linguistic constraints. These results demonstrate that the subregular hierarchy provides a rigorous and interpretable foundation for modeling natural language structure. The code used in our real-data experiments is available at https://github.com/UTokyo-HayashiLab/subregular.
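To make the separability claim concrete, here is a minimal sketch (not the authors' code, and not taken from their repository) of the idea for one subregular class: a strictly 2-local (SL-2) language is decided by the absence of forbidden 2-factors, so indicator features over 2-factors make membership linearly separable, and a plain perceptron recovers a perfect separator. The toy language "strings over {a, b} avoiding the substring 'bb'" and the boundary markers are illustrative choices, not from the paper.

```python
# Sketch under assumptions: SL-2 toy language "no 'bb' substring" over {a, b},
# represented by indicator features over its 2-factors. Membership depends only
# on which factors occur, so a linear threshold unit (weight -1 on 'bb') already
# separates the class; the perceptron below is guaranteed to converge.
from itertools import product

ALPHABET = "ab"
# All possible 2-factors of a boundary-augmented string ">w<".
FACTORS = [x + y for x, y in product(">" + ALPHABET, ALPHABET + "<")]

def features(w):
    """Indicator vector: which 2-factors occur in >w< ?"""
    s = ">" + w + "<"
    present = {s[i:i + 2] for i in range(len(s) - 1)}
    return [1.0 if f in present else 0.0 for f in FACTORS]

def in_language(w):
    """Membership in the SL-2 language banning the factor 'bb'."""
    return "bb" not in w

# Training data: every string over {a, b} up to length 6, labeled by membership.
data = [("".join(t), in_language("".join(t)))
        for n in range(7) for t in product(ALPHABET, repeat=n)]

# Plain perceptron; linear separability guarantees convergence.
wts, bias = [0.0] * len(FACTORS), 0.0
for _ in range(100):
    errors = 0
    for w, label in data:
        x = features(w)
        pred = sum(wi * xi for wi, xi in zip(wts, x)) + bias > 0
        if pred != label:
            errors += 1
            sign = 1.0 if label else -1.0
            wts = [wi + sign * xi for wi, xi in zip(wts, x)]
            bias += sign
    if errors == 0:
        break

acc = sum((sum(wi * xi for wi, xi in zip(wts, features(w))) + bias > 0) == y
          for w, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

The same recipe generalizes across the subregular hierarchy by swapping the feature map: k-factors for strictly local languages, subsequences for strictly piecewise ones, and so on; the linear model stays unchanged.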