AsyncMDE: Real-Time Monocular Depth Estimation via Asynchronous Spatial Memory
arXiv cs.CV / 3/12/2026
💬 Opinion / Models & Research
Key Points
- AsyncMDE introduces an asynchronous depth perception system that splits work between a foundation model that produces spatial features in the background and a lightweight foreground model that fuses that memory with current observations to estimate depth.
- The system enables cross-frame feature reuse with complementary fusion and autoregressive memory updates, achieving bounded accuracy degradation across frames.
- It is compact (3.83M parameters) and delivers 237 FPS on an RTX 4090, recovering 77% of the accuracy gap to the foundation model with 25x fewer parameters; it also runs at 161 FPS on a Jetson AGX Orin with TensorRT, demonstrating edge feasibility.
- Validation on indoor static, dynamic, and synthetic extreme-motion benchmarks shows graceful degradation between refreshes and practical real-time performance.
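The asynchronous split described above can be sketched as a producer/consumer pattern: a slow background pass refreshes a shared spatial memory, while a fast foreground pass fuses the latest memory with each incoming frame, down-weighting the memory as it grows stale. This is an illustrative sketch only; the class, the placeholder "inference" arithmetic, and the staleness-weighted fusion rule are assumptions, not the paper's actual architecture.

```python
import threading
from dataclasses import dataclass

@dataclass
class SpatialMemory:
    features: float  # stand-in for a spatial feature tensor
    frame_id: int    # frame the memory was computed from

class AsyncDepthEstimator:
    """Hypothetical sketch of the background/foreground split."""

    def __init__(self):
        self._lock = threading.Lock()
        self._memory = SpatialMemory(features=0.0, frame_id=-1)

    def background_refresh(self, frame_id: int, frame: float) -> None:
        # Slow foundation-model pass (simulated by a trivial transform);
        # in a real system this would run on its own thread or stream.
        features = frame * 2.0  # placeholder for heavy inference
        with self._lock:
            self._memory = SpatialMemory(features, frame_id)

    def foreground_estimate(self, frame_id: int, frame: float) -> float:
        # Fast pass: fuse the latest memory with the current observation.
        with self._lock:
            mem = self._memory
        staleness = max(frame_id - mem.frame_id, 0)
        # Complementary fusion: trust the memory less as it grows stale,
        # which bounds accuracy degradation between refreshes.
        w = 1.0 / (1.0 + staleness)
        return w * mem.features + (1.0 - w) * frame

est = AsyncDepthEstimator()
est.background_refresh(frame_id=0, frame=1.0)      # memory features = 2.0
d = est.foreground_estimate(frame_id=1, frame=1.5)
# staleness = 1, w = 0.5 -> depth = 0.5*2.0 + 0.5*1.5 = 1.75
```

In a real deployment the background refresh would run concurrently (e.g. on a separate CUDA stream), with the lock protecting only the cheap memory swap, so the foreground path never blocks on foundation-model latency.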