High-fidelity Multi-view Normal Integration with Scale-encoded Neural Surface Representation
arXiv cs.CV / 3/24/2026
Key Points
- The paper identifies a core limitation of existing multi-view normal integration: sampling only one ray per pixel ignores the pixel's spatial coverage, which varies with camera intrinsics and object distance (a minimal footprint computation is sketched after this list).
- When the same object is captured from different distances, normal estimates at corresponding pixels become inconsistent across views, blurring high-frequency surface detail.
- It proposes a scale-encoded neural surface representation that explicitly accounts for per-pixel coverage by associating each 3D point with a spatial scale and deriving normals from a hybrid grid-based encoding (see the scale-conditioned query sketch below).
- The method also adds a scale-aware mesh extraction module that assigns an optimal local scale to each mesh vertex based on the training observations, improving reconstruction under varying capture distances (a per-vertex scale heuristic is sketched below).
- Experiments show the approach reconstructs consistently higher-fidelity surfaces from normals observed at different distances and outperforms prior multi-view normal integration methods.
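
The footprint argument is easy to make concrete under a pinhole model: a pixel of unit width at focal length fx covers roughly depth / fx of world space, so the same point imaged from farther away is averaged over a larger patch. A minimal sketch, taking the geometric mean of the x and y extents as a single isotropic scale; the function name and this exact scale definition are assumptions, not the paper's:

```python
import numpy as np

def pixel_footprint(depth: np.ndarray, fx: float, fy: float) -> np.ndarray:
    """Approximate world-space edge length covered by one pixel.

    Under a pinhole camera, a pixel of width 1 at focal length fx subtends
    roughly depth / fx of world space along x (and depth / fy along y);
    the geometric mean gives one isotropic scale per pixel (assumption).
    """
    return depth * np.sqrt(1.0 / (fx * fy))

# The same point seen from 0.5 m vs 2.0 m covers a 4x larger footprint
# in the far view, so its observed normal averages over a 4x larger patch.
print(pixel_footprint(np.array([0.5, 2.0]), fx=1000.0, fy=1000.0))
# [0.0005 0.002 ]
```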
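The paper's hybrid grid-based encoding is not detailed in this summary, so the following is a hypothetical sketch of one way to condition an implicit surface on scale: multi-resolution feature grids whose levels are softly gated by the query footprint, with normals taken as the normalized gradient of the signed distance. The class name, the sigmoid gate, and the grid layout are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleEncodedSDF(nn.Module):
    """Hypothetical scale-conditioned SDF (not the paper's exact design).

    Dense multi-resolution feature grids stand in for the hybrid grid
    encoding; the query scale softly gates which resolution levels
    contribute, so large footprints read only smoothed geometry.
    """

    def __init__(self, num_levels=3, base_res=16, feat_dim=4):
        super().__init__()
        self.grids = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(1, feat_dim, *(3 * [base_res * 2 ** l])))
            for l in range(num_levels)
        ])
        # World-space cell size per level, assuming geometry in [-1, 1]^3.
        self.register_buffer(
            "cell_size",
            torch.tensor([2.0 / (base_res * 2 ** l) for l in range(num_levels)]),
        )
        self.mlp = nn.Sequential(
            nn.Linear(num_levels * feat_dim, 64),
            nn.Softplus(beta=100),
            nn.Linear(64, 1),
        )

    def forward(self, x, scale):
        """x: (N, 3) points in [-1, 1]^3; scale: (N, 1) pixel footprint."""
        # Soft gate: suppress levels whose cells are finer than the query
        # footprint (assumption: the paper may blend levels differently).
        gate = torch.sigmoid(
            (self.cell_size[None] - scale) / (0.5 * self.cell_size[None])
        )
        pts = x.view(1, -1, 1, 1, 3)  # grid_sample expects (N, D, H, W, 3)
        feats = []
        for l, grid in enumerate(self.grids):
            f = F.grid_sample(grid, pts, align_corners=True)  # (1, C, N, 1, 1)
            feats.append(f.reshape(grid.shape[1], -1).t() * gate[:, l : l + 1])
        return self.mlp(torch.cat(feats, dim=-1))  # signed distance, (N, 1)

def sdf_normals(model, x, scale):
    """Surface normals as the normalized SDF gradient at the query scale."""
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(model(x, scale).sum(), x, create_graph=True)
    return F.normalize(g, dim=-1)
```

Gating by footprint means two views of the same point at different distances query mutually consistent, appropriately smoothed versions of the surface, which is the consistency the key points describe.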
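For the scale-aware extraction step, one plausible reading of "optimal local scale per vertex" is to assign each vertex the finest footprint among the training views that observed it, then query normals at that scale. A hypothetical sketch (the camera dictionary fields and the min-footprint rule are assumptions; occlusion handling is omitted):

```python
import numpy as np

def per_vertex_scale(vertices, cams):
    """Assign each mesh vertex the finest footprint among views that saw it.

    vertices: (V, 3) world-space positions.
    cams: list of dicts with hypothetical fields 'R' (3x3 world-to-camera
    rotation), 't' (3,) translation, and focal lengths 'fx', 'fy'.
    Visibility/occlusion tests are omitted for brevity.
    """
    best = np.full(len(vertices), np.inf)
    for cam in cams:
        z = (vertices @ cam["R"].T + cam["t"])[:, 2]        # camera-space depth
        fp = z * np.sqrt(1.0 / (cam["fx"] * cam["fy"]))     # footprint, as above
        in_front = z > 0
        best[in_front] = np.minimum(best[in_front], fp[in_front])
    return best
```

Vertices never observed keep an infinite scale here; a real pipeline would clamp scales to the representation's resolution range before querying.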