FunRec: Reconstructing Functional 3D Scenes from Egocentric Interaction Videos
arXiv cs.CV / 4/8/2026
Key Points
- FunRec is a new research method that reconstructs functional 3D “digital twin” indoor scenes from egocentric RGB-D interaction videos without relying on controlled capture setups or CAD priors.
- The approach automatically discovers articulated parts, estimates their kinematic parameters, tracks 3D motion over time, and reconstructs both static and moving geometry in a canonical space suitable for simulation.
- On newly introduced real and simulated benchmarks, FunRec reports large gains over prior articulated reconstruction methods, including up to +50 mIoU for part segmentation and substantially lower articulation and pose errors.
- The paper demonstrates downstream usability via exports to simulation formats (URDF/USD) and interactive applications like affordance mapping and robot-scene interaction.
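The core idea in the second bullet, estimating a joint's kinematic parameters (an axis and pivot) and then moving part geometry along that joint, can be sketched as below. This is a minimal illustration of a revolute articulation using Rodrigues' rotation formula; the function name and arguments are hypothetical and not from the paper.

```python
import numpy as np

def articulate_revolute(points, axis, pivot, angle):
    """Rotate part geometry about a revolute joint (hypothetical helper).

    points: (N, 3) array of part vertices
    axis:   (3,) direction vector of the joint axis
    pivot:  (3,) point on the axis
    angle:  rotation in radians
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Rodrigues' rotation formula: R = I + sin(a) K + (1 - cos(a)) K^2,
    # where K is the cross-product (skew-symmetric) matrix of the axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    # Rotate about the pivot, not the origin.
    return (np.asarray(points, dtype=float) - pivot) @ R.T + pivot

# Example: a cabinet-door vertex swung 90 degrees about a vertical hinge.
door = np.array([[1.0, 0.0, 0.0]])
opened = articulate_revolute(door, axis=[0, 0, 1], pivot=[0, 0, 0],
                             angle=np.pi / 2)
```

A prismatic joint (e.g. a drawer) would be the translational analogue, sliding points along the axis by a scalar amount; the same axis/pivot parameters also map directly onto a URDF `<joint>` element for the simulation export mentioned above.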