Trajectory-Optimized Time Reparameterization for Learning-Compatible Reduced-Order Modeling of Stiff Dynamical Systems
arXiv cs.LG / 3/18/2026
Key Points
- The paper proposes trajectory-optimized time reparameterization (TOTR) as an optimization in arc-length coordinates to mitigate stiffness in neural ODE–based reduced-order models.
- It designs a traversal-speed profile that penalizes acceleration in stretched time, improving the regularity and learnability of the time map.
- The approach is evaluated on three stiff problems—a parameterized stiff linear system, the van der Pol oscillator, and the HIRES chemical kinetics model—showing smoother reparameterizations and better physical-time predictions than existing time-reparameterization (TR) methods under identical training conditions.
- Quantitatively, TOTR achieves loss reductions of one to two orders of magnitude compared with benchmark TR algorithms, indicating robust stiffness mitigation for explicit ML-ROMs.
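The arc-length idea behind the first bullet can be sketched numerically: integrating a stiff ODE in an arc-length-like coordinate bounds the trajectory speed, so fast transients are stretched out and an explicit fixed-step solver stays stable. The sketch below applies this to the van der Pol oscillator; the function names, the stiffness parameter `MU`, and the specific speed normalization are illustrative assumptions, not the paper's TOTR profile (which additionally optimizes the traversal speed to penalize acceleration in stretched time).

```python
# Illustrative sketch (not the paper's method): arc-length time
# reparameterization of the van der Pol oscillator. All names and
# parameter values here are assumptions for demonstration.
import numpy as np

MU = 5.0  # assumed stiffness parameter

def f(y):
    """Van der Pol vector field, state y = (x, v)."""
    x, v = y
    return np.array([v, MU * (1.0 - x**2) * v - x])

def rhs_arclength(z):
    """Augmented dynamics in a stretched coordinate s.
    With speed = ||(1, f(y))||, we set dy/ds = f(y)/speed and
    dt/ds = 1/speed, so ||d(y, t)/ds|| = 1: the system moves at unit
    speed in s, and stiff fast segments are stretched over many steps."""
    y, t = z[:2], z[2]
    fy = f(y)
    speed = np.sqrt(1.0 + fy @ fy)
    return np.concatenate([fy / speed, [1.0 / speed]])

def integrate_rk4(rhs, z0, ds, n_steps):
    """Fixed-step classical RK4 in the stretched coordinate s."""
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for _ in range(n_steps):
        k1 = rhs(z)
        k2 = rhs(z + 0.5 * ds * k1)
        k3 = rhs(z + 0.5 * ds * k2)
        k4 = rhs(z + ds * k3)
        z = z + (ds / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj.append(z.copy())
    return np.array(traj)

# Start on the slow branch; the explicit solver remains stable even
# though the original system in physical time t is stiff for MU = 5.
traj = integrate_rk4(rhs_arclength, [2.0, 0.0, 0.0], ds=0.05, n_steps=4000)
```

Because `dt/ds` is strictly positive, the recovered physical time `traj[:, 2]` is monotone, and a learned model trained in `s` can be mapped back to physical-time predictions, which is the setting the benchmarks above evaluate.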