Hyper Input Convex Neural Networks for Shape Constrained Learning and Optimal Transport
arXiv cs.LG / 4/30/2026
Key Points
- The paper introduces Hyper Input Convex Neural Networks (HyCNNs), a new architecture that is guaranteed to be convex with respect to its input by combining Maxout networks with input convex neural networks (ICNNs).
- The authors provide theoretical results showing that HyCNNs can approximate quadratic functions to a given precision with exponentially fewer parameters than standard ICNNs require.
- Experimental results on synthetic convex regression and interpolation tasks indicate HyCNNs outperform both ICNNs and conventional MLPs in predictive performance.
- The method is also applied to learning high-dimensional optimal transport maps, including on single-cell RNA sequencing data, where HyCNNs often outperform ICNN-based neural optimal transport approaches and other baselines.
- Overall, the work argues that HyCNNs offer both stronger theoretical guarantees (parameter efficiency) and more reliable training performance at scale than ICNNs.
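The paper's exact HyCNN parameterization is not spelled out in this summary, but the two ingredients it combines are well known: a maxout unit (a pointwise maximum of affine maps, which is always convex in its input) and an ICNN-style layer stack whose hidden-to-hidden weights are constrained nonnegative. A rough NumPy sketch of both building blocks, under those generic assumptions:

```python
import numpy as np

def maxout_unit(x, W, b):
    """Max over k affine maps of x: a pointwise max of affine
    functions is convex in x by construction."""
    return float(np.max(W @ x + b))

def icnn_forward(x, params):
    """Minimal ICNN-style forward pass (a generic sketch, not the
    paper's HyCNN). The z-path weights pass through abs() so they
    are nonnegative; combined with the convex, nondecreasing ReLU,
    this keeps the scalar output convex in x."""
    z = np.maximum(params["Wx0"] @ x + params["b0"], 0.0)  # first layer: affine in x, then ReLU
    for Wz, Wx, b in params["layers"]:
        # nonnegative z-weights preserve convexity; the direct x-path is unconstrained
        z = np.maximum(np.abs(Wz) @ z + Wx @ x + b, 0.0)
    return float(np.abs(params["w_out"]) @ z)  # nonnegative readout keeps convexity
```

A quick sanity check on either function is the convexity inequality f(λx + (1-λ)y) ≤ λf(x) + (1-λ)f(y), which holds for any x, y, and λ in [0, 1] regardless of how the (otherwise random) weights are drawn.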