MosaicMem: Hybrid Spatial Memory for Controllable Video World Models
arXiv cs.CV · March 19, 2026
Key Points
- MosaicMem introduces a hybrid spatial memory that lifts patches into 3D to improve localization and targeted retrieval while preserving the model's ability to follow prompts during generation.
- It uses a patch-and-compose interface to assemble spatially aligned patches in the queried view, preserving what should persist and allowing the model to inpaint what should evolve.
- The approach adds PRoPE camera conditioning and two memory-alignment methods, achieving better pose adherence than implicit-memory baselines and stronger dynamic modeling than explicit-memory baselines.
- It enables minute-level navigation, memory-based scene editing, and autoregressive rollout, supporting long-horizon, memory-consistent video world modeling.
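The patch-and-compose idea in the second point can be illustrated with a minimal geometric sketch: memory patches carrying depth are lifted to 3D, reprojected into the queried camera, and composited with z-buffering, leaving unfilled pixels masked so a generative model can inpaint them. This is an illustrative assumption about the mechanism, not the paper's actual implementation; all function names, the pinhole-camera model, and the nearest-point compositing rule here are hypothetical.

```python
import numpy as np

def unproject(depth, K):
    """Lift every pixel of a depth map to 3D camera-space points.

    depth: (H, W) depth per pixel; K: 3x3 pinhole intrinsics.
    Returns (H*W, 3) points in row-major pixel order.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T        # pixel -> camera-space ray (z = 1)
    return rays * depth.reshape(-1, 1)     # scale each ray by its depth

def compose_into_view(points_world, feats, K, w2c, H, W):
    """Reproject 3D memory points into the queried camera and composite.

    Nearest point wins per pixel (z-buffer); pixels no memory point hits
    stay masked False, i.e. left for the model to inpaint.
    """
    cam = points_world @ w2c[:3, :3].T + w2c[:3, 3]   # world -> query camera
    valid = cam[:, 2] > 1e-6                          # keep points in front
    proj = cam[valid] @ K.T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    z, f = cam[valid, 2], feats[valid]
    inb = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    canvas = np.zeros((H, W, feats.shape[-1]))
    zbuf = np.full((H, W), np.inf)
    mask = np.zeros((H, W), dtype=bool)
    for (px, py), d, ft in zip(uv[inb], z[inb], f[inb]):
        if d < zbuf[py, px]:                          # keep the nearest point
            zbuf[py, px], canvas[py, px], mask[py, px] = d, ft, True
    return canvas, mask
```

With an identity pose and unit depth, points reproject to their source pixels, so the canvas reproduces the stored features exactly and the mask is fully filled; under a novel pose, the holes in `mask` are what the world model would inpaint.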