Synthetic Dataset Generation for Partially Observed Indoor Objects
arXiv cs.CV / 4/9/2026
Key Points
- The paper introduces a Unity-based virtual scanning framework that generates realistic synthetic indoor 3D scan data by simulating scanner parameters like resolution, range, and distance-dependent noise.
- It uses ray casting from configurable viewpoints to model occlusion and sensor visibility, producing the partial point clouds needed for learning from partially observed objects.
- The system assigns color to point clouds using panoramic images taken at the virtual scanner pose, improving the realism of the generated scans.
- For scalability, the scanner is connected to a procedural indoor scene generator that creates diverse rooms and furniture layouts automatically.
- The authors release the V-Scan dataset, which includes partial object point clouds, voxel-based occlusion grids, and complete ground-truth geometry for training and evaluating 3D scene reconstruction and object completion methods.
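The core idea in the first two points, ray-based scanning with distance-dependent noise and occlusion-by-first-hit, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual implementation: it assumes a single spherical obstacle and invented parameter names (`noise_per_meter`, `max_range`), and casts a panoramic grid of rays from the scanner pose. Directions that miss or exceed the range yield no point, so the result is naturally a partial point cloud.

```python
import numpy as np

def ray_sphere(origin, dirs, center, radius):
    """Nearest positive hit distance of unit-direction rays with a sphere (NaN on miss)."""
    oc = origin - center
    b = dirs @ oc
    c = oc @ oc - radius ** 2
    disc = b ** 2 - c
    t = np.where(disc >= 0, -b - np.sqrt(np.maximum(disc, 0.0)), np.nan)
    return np.where(t > 0, t, np.nan)

def virtual_scan(scanner_pos, center, radius,
                 h_res=64, v_res=32, max_range=10.0,
                 noise_per_meter=0.002, rng=None):
    """Simulate one panoramic range scan: cast a grid of rays, keep first hits
    within max_range, and perturb each hit along its ray by Gaussian noise whose
    standard deviation grows with distance. Missed rays produce no point, so the
    output is a partial point cloud of the visible surface only."""
    rng = np.random.default_rng() if rng is None else rng
    az = np.linspace(-np.pi, np.pi, h_res, endpoint=False)   # horizontal sweep
    el = np.linspace(-np.pi / 4, np.pi / 4, v_res)           # vertical field of view
    A, E = np.meshgrid(az, el)
    dirs = np.stack([np.cos(E) * np.cos(A),
                     np.cos(E) * np.sin(A),
                     np.sin(E)], axis=-1).reshape(-1, 3)
    t = ray_sphere(scanner_pos, dirs, center, radius)
    valid = np.isfinite(t) & (t <= max_range)
    t, dirs = t[valid], dirs[valid]
    t_noisy = t + rng.normal(0.0, noise_per_meter * t)       # noise scales with range
    return scanner_pos + dirs * t_noisy[:, None]

# Scan a unit sphere 3 m in front of the scanner.
pts = virtual_scan(np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]), 1.0)
```

In the paper's Unity framework the ray queries would hit full room geometry rather than an analytic sphere, and color would be sampled afterwards from a panoramic image rendered at the same pose; only the ray-cast-plus-noise structure is shown here.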