CEI-3D: Collaborative Explicit-Implicit 3D Reconstruction for Realistic and Fine-Grained Object Editing

arXiv cs.CV / 3/13/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • CEI-3D proposes a collaborative explicit-implicit reconstruction pipeline combining an implicit SDF network with a differentiably sampled, locally controllable set of handler points to enable both global geometry and local editing.
  • The implicit network provides a smooth geometric prior, while explicit handler points provide localized control and mutual guidance during editing.
  • A physical properties disentangling module decouples each handler point's color into separate physical properties, enabling independent control of appearance attributes.
  • A dual-diffuse-albedo network processes edited and non-edited regions in separate branches to prevent interference from edits.
  • A spatial-aware editing module with cross-view propagation-based 3D segmentation enables part-wise adjustment of the relevant handler points; experiments show more realistic, fine-grained edits in less editing time, and the code has been released.
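The core explicit-implicit coupling can be sketched in miniature: keep an implicit SDF as the global geometry prior and tie the explicit handler points to it by projecting samples onto the zero level set along the SDF gradient. The snippet below is a toy illustration only; `sphere_sdf` stands in for the learned implicit network, and `sample_handler_points` is a crude stand-in for the paper's differentiable sampling.

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    # Toy analytic SDF standing in for the learned implicit network.
    return np.linalg.norm(p, axis=-1) - radius

def sdf_normal(sdf, p, eps=1e-4):
    # Finite-difference gradient of the SDF, normalized to unit length.
    grad = np.stack([
        (sdf(p + d) - sdf(p - d)) / (2 * eps)
        for d in np.eye(3) * eps
    ], axis=-1)
    return grad / np.linalg.norm(grad, axis=-1, keepdims=True)

def sample_handler_points(sdf, n=256, steps=5, seed=0):
    # Project random points onto the zero level set by Newton-style
    # steps along the SDF gradient, so explicit handler points stay
    # anchored to the implicit surface.
    rng = np.random.default_rng(seed)
    p = rng.uniform(-1.5, 1.5, size=(n, 3))
    for _ in range(steps):
        p = p - sdf(p)[:, None] * sdf_normal(sdf, p)
    return p

points = sample_handler_points(sphere_sdf)
residual = np.abs(sphere_sdf(points)).max()  # distance left to the surface
```

After projection, every handler point sits (numerically) on the SDF zero level set, which is what lets local point edits and the global implicit geometry guide each other.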

Abstract

Existing 3D editing methods often produce unrealistic and unrefined results due to the deeply integrated nature of their reconstruction networks. To address this challenge, this paper introduces CEI-3D, an editing-oriented reconstruction pipeline designed to facilitate realistic and fine-grained editing. Specifically, we propose a collaborative explicit-implicit reconstruction approach, which represents the target object using an implicit SDF network and a differentiably sampled, locally controllable set of handler points. The implicit network provides a smooth and continuous geometry prior, while the explicit handler points offer localized control, enabling mutual guidance between the global 3D structure and user-specified local editing regions. To independently control each attribute of the handler points, we design a physical properties disentangling module that decouples the color of the handler points into separate physical properties. We also propose a dual-diffuse-albedo network in this module to process the edited and non-edited regions through separate branches, thereby preventing undesired interference from editing operations. Building on the reconstructed collaborative explicit-implicit representation with disentangled properties, we introduce a spatial-aware editing module that enables part-wise adjustment of the relevant handler points. This module employs a cross-view propagation-based 3D segmentation strategy, which helps users edit the specified physical attributes of a target part efficiently. Extensive experiments on both real and synthetic datasets demonstrate that our approach achieves more realistic and fine-grained editing results than state-of-the-art (SOTA) methods while requiring less editing time. Our code is available at https://github.com/shiyue001/CEI-3D.
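The disentangled-properties and dual-branch ideas reduce to a simple pattern: each handler point carries independent physical attributes, a 3D segmentation mask selects the part to edit, and edited and non-edited points take separate paths so an edit cannot leak outside its region. The sketch below is hypothetical (attribute names and the spatial mask are illustrative stand-ins for the paper's disentangling module and cross-view segmentation).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
positions = rng.uniform(-1.0, 1.0, size=(n, 3))

# Hypothetical per-point physical properties, disentangled from raw color
# (the paper's exact set of attributes may differ).
props = {
    "albedo":    rng.uniform(0.2, 0.8, size=(n, 3)),
    "roughness": rng.uniform(0.1, 0.9, size=(n, 1)),
}

# Stand-in for cross-view propagation-based 3D segmentation:
# a spatial mask selecting the part to edit (here, points with x > 0).
edit_mask = positions[:, 0] > 0.0

def edit_albedo(albedo, mask, target, blend=0.7):
    # Dual-branch idea in miniature: the edited branch blends selected
    # points toward a target color; the non-edited branch passes
    # everything else through untouched.
    out = albedo.copy()
    out[mask] = (1 - blend) * out[mask] + blend * target
    return out

red = np.array([1.0, 0.0, 0.0])
edited_albedo = edit_albedo(props["albedo"], edit_mask, red)
# Roughness is a separate, disentangled attribute, so it is unaffected
# by the albedo edit.
```

Because each attribute lives in its own array, recoloring a part leaves roughness (and the non-edited points' albedo) bit-for-bit unchanged, which is the interference-free behavior the separate branches are meant to guarantee.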