From Uniform to Learned Knots: A Study of Spline-Based Numerical Encodings for Tabular Deep Learning
arXiv cs.LG / 4/8/2026
Key Points
- The study examines how explicit spline-based numerical encodings affect tabular deep learning, testing B-splines, M-splines, and I-splines under different knot-placement strategies (uniform, quantile-based, target-aware, and learnable knots); a B-spline encoding sketch with uniform and quantile placements follows this list.
- Learnable-knot encodings use a differentiable knot parameterization that supports stable end-to-end optimization of knot locations jointly with common backbone models (see the learnable-knot sketch after the list).
- Experiments across multiple public regression and classification datasets show that the best encoding strongly depends on the task type, output dimensionality, and the chosen backbone architecture.
- For classification, piecewise-linear encoding (PLE, sketched after the list) is identified as the most robust encoding overall, while spline-based methods remain competitive; for regression, no single encoding consistently dominates.
- The paper also finds that learnable-knot variants can improve training stability but may significantly increase training cost, particularly for M-spline and I-spline expansions, so compute overhead should be considered alongside accuracy.
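To make the knot-placement comparison concrete, here is a minimal sketch of a B-spline numerical encoding, assuming a standard Cox-de Boor basis evaluation over a clamped knot vector built either uniformly or from training quantiles; the function names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def make_knots(feature, n_interior, degree, strategy="quantile"):
    """Clamped knot vector for one feature: degree+1 repeated boundary
    knots plus interior knots placed uniformly or at training quantiles."""
    lo, hi = float(feature.min()), float(feature.max())
    if strategy == "uniform":
        interior = np.linspace(lo, hi, n_interior + 2)[1:-1]
    else:  # quantile-based placement
        qs = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
        interior = np.quantile(feature, qs)
    return np.concatenate([[lo] * (degree + 1), interior, [hi] * (degree + 1)])

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions at x via the Cox-de Boor
    recursion. Returns an array of shape (len(x), len(knots) - degree - 1)."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)

    # Degree 0: indicators of the half-open knot intervals; the last
    # non-degenerate interval is closed so x == t.max() is still covered.
    B = np.zeros((len(x), len(t) - 1))
    for i in range(len(t) - 1):
        B[:, i] = (x >= t[i]) & (x < t[i + 1])
    B[x >= t[-1], len(t) - degree - 2] = 1.0

    # Cox-de Boor recursion: raise the degree one step at a time.
    for d in range(1, degree + 1):
        B_new = np.zeros((len(x), len(t) - d - 1))
        for i in range(len(t) - d - 1):
            left, right = t[i + d] - t[i], t[i + d + 1] - t[i + 1]
            if left > 0:
                B_new[:, i] += (x - t[i]) / left * B[:, i]
            if right > 0:
                B_new[:, i] += (t[i + d + 1] - x) / right * B[:, i + 1]
        B = B_new
    return B

# Encode one skewed numerical column with cubic B-splines and quantile knots.
rng = np.random.default_rng(0)
col = rng.lognormal(size=1000)
knots = make_knots(col, n_interior=8, degree=3, strategy="quantile")
encoded = bspline_basis(col, knots, degree=3)   # shape (1000, 12)
```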
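The learnable-knot point is easiest to see as a parameterization problem: the knots must stay ordered inside the feature range while still receiving gradients. The sketch below uses a softmax-plus-cumsum parameterization in PyTorch and checks gradient flow through a simple linear-spline (ReLU hinge) encoding; this particular parameterization and the class name `LearnableKnots` are assumptions for illustration, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class LearnableKnots(nn.Module):
    """Interior knots kept ordered inside [lo, hi] by construction.

    Unconstrained logits -> softmax -> positive segment lengths summing
    to 1 -> cumulative sum -> increasing fractions of the feature range.
    Gradients reach the logits, so knot positions can be trained jointly
    with the backbone. (Illustrative parameterization, not the paper's.)
    """

    def __init__(self, n_interior: int, lo: float, hi: float):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_interior + 1))
        self.register_buffer("lo", torch.tensor(float(lo)))
        self.register_buffer("hi", torch.tensor(float(hi)))

    def forward(self) -> torch.Tensor:
        fractions = torch.cumsum(torch.softmax(self.logits, dim=0), dim=0)[:-1]
        return self.lo + (self.hi - self.lo) * fractions

# Gradient check with a degree-1 truncated-power (ReLU hinge) spline basis,
# which is differentiable in both the inputs and the knot positions.
knots_mod = LearnableKnots(n_interior=8, lo=0.0, hi=1.0)
x = torch.rand(32)
features = torch.relu(x.unsqueeze(-1) - knots_mod())    # shape (32, 8)
features.sum().backward()
print(knots_mod.logits.grad is not None)                # True: knots are trainable
```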
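For the PLE baseline, the encoding from Gorishniy et al., "On Embeddings for Numerical Features in Tabular Deep Learning", represents a value by how far it has progressed through each of a set of bins, typically built from training quantiles. A minimal NumPy sketch, assuming that quantile-binned variant:

```python
import numpy as np

def ple_encode(x, bin_edges):
    """Piecewise-linear encoding: component t is
    clip((x - b_{t-1}) / (b_t - b_{t-1}), 0, 1), i.e. 1 for bins entirely
    below x, 0 for bins entirely above, and a fraction inside x's own bin."""
    x = np.asarray(x, dtype=float)[:, None]
    left, right = bin_edges[:-1][None, :], bin_edges[1:][None, :]
    width = np.maximum(right - left, 1e-12)   # guard against duplicate quantiles
    return np.clip((x - left) / width, 0.0, 1.0)

# Bin edges estimated from the training split of a single column (8 bins).
rng = np.random.default_rng(0)
train_col = rng.normal(size=5000)
edges = np.quantile(train_col, np.linspace(0.0, 1.0, 9))
encoded = ple_encode(train_col, edges)        # shape (5000, 8)
```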