LMI-Net: Linear Matrix Inequality-Constrained Neural Networks via Differentiable Projection Layers
arXiv cs.LG / 4/8/2026
Key Points
- The paper introduces LMI-Net, a neural network architecture that enforces linear matrix inequality (LMI) constraints by construction, through a differentiable projection layer, rather than via soft penalties.
- It reformulates the feasible LMI set as the intersection of an affine equality constraint with the positive semidefinite (PSD) cone: an LMI F(x) ⪰ 0 with F affine in x becomes the pair "S = F(x)" (affine equality) and "S is PSD" (cone membership). The forward pass projects onto this intersection using Douglas–Rachford splitting, and training backpropagates through the layer via implicit differentiation (see the sketches after this list).
- The authors provide theoretical guarantees that the projection layer's iterations converge to a feasible point, turning a generic neural network into a model whose outputs satisfy the LMI requirements with a formal certificate.
- Experiments on tasks such as invariant ellipsoid synthesis and joint controller-and-certificate design for disturbed linear systems show better constraint feasibility under distribution shift than soft-constrained baselines, while maintaining fast inference.
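
To make the second point concrete, here is a minimal NumPy sketch of what a Douglas–Rachford forward pass over "affine equality ∩ PSD cone" could look like. The constraint parameterization {X : A vec(X) = b} ∩ {X ⪰ 0} and the names `proj_psd`, `proj_affine`, and `dr_project` are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical forward pass: Douglas–Rachford splitting for a point in
# {X : A vec(X) = b} ∩ {X ⪰ 0}. The names and the constraint
# parameterization are illustrative assumptions, not the paper's code.
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone (clip negative eigenvalues)."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

def proj_affine(X, A, b, pinv_AAt):
    """Project vec(X) onto the affine set {x : A x = b} (least-norm correction)."""
    x = X.reshape(-1)
    x = x - A.T @ (pinv_AAt @ (A @ x - b))
    return x.reshape(X.shape)

def dr_project(X0, A, b, iters=500):
    """Douglas–Rachford iteration started from the network's raw output X0."""
    pinv_AAt = np.linalg.pinv(A @ A.T)  # precompute for repeated affine projections
    Z = X0.copy()
    for _ in range(iters):
        Xa = proj_affine(Z, A, b, pinv_AAt)  # prox of the affine-set indicator
        Xp = proj_psd(2 * Xa - Z)            # reflect, then prox of the PSD indicator
        Z = Z + Xp - Xa                      # update of the governing sequence
    return proj_affine(Z, A, b, pinv_AAt)    # feasible up to iteration tolerance

# Toy usage: project a random symmetric matrix onto {trace(X) = 1} ∩ PSD.
n = 4
A = np.eye(n).reshape(1, -1)  # trace(X) = 1 written as one affine row on vec(X)
b = np.array([1.0])
rng = np.random.default_rng(0)
X0 = rng.standard_normal((n, n))
X0 = (X0 + X0.T) / 2
X = dr_project(X0, A, b)
print("min eigenvalue:", np.linalg.eigvalsh(X).min(), "trace:", np.trace(X))
```

The appeal of this split is that both proximal steps are cheap and exact: one eigendecomposition for the PSD cone and one precomputed least-squares correction for the affine part, which is consistent with the fast inference the experiments report.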
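On the training side, "implicit differentiation" means differentiating the fixed point of the iteration via the implicit function theorem instead of unrolling the loop. The toy operator T below is a stand-in chosen so the Jacobians are easy to write by hand; the paper's operator would be the Douglas–Rachford map sketched above.

```python
# Principle behind implicit differentiation of a fixed-point layer
# z* = T(z*, theta): solve (I - dT/dz) dz*/dtheta = dT/dtheta at z*.
# T here is a toy contraction, not the paper's actual operator.
import numpy as np

def T(z, theta, W):
    return np.tanh(W @ z + theta)  # a contraction for small ||W||

rng = np.random.default_rng(0)
n = 3
W = 0.3 * rng.standard_normal((n, n))
theta = rng.standard_normal(n)

# Forward: iterate to (numerical) convergence.
z = np.zeros(n)
for _ in range(300):
    z = T(z, theta, W)

# Backward: implicit function theorem at the fixed point.
s = 1.0 - np.tanh(W @ z + theta) ** 2  # elementwise tanh' at the fixed point
dT_dz = s[:, None] * W                 # Jacobian of T w.r.t. z
dT_dtheta = np.diag(s)                 # Jacobian of T w.r.t. theta
dz_dtheta = np.linalg.solve(np.eye(n) - dT_dz, dT_dtheta)

# Finite-difference check against re-solving the perturbed fixed point.
eps = 1e-6
zp = np.zeros(n)
for _ in range(300):
    zp = T(zp, theta + eps * np.eye(n)[0], W)
print(dz_dtheta[:, 0])   # implicit gradient, first theta coordinate
print((zp - z) / eps)    # numerical gradient; should match closely
```

The benefit the summary alludes to: memory and compute for the backward pass no longer depend on how many forward iterations were run.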