Beyond Voxel 3D Editing: Learning from 3D Masks and Self-Constructed Data

arXiv cs.CV / 4/16/2026


Key Points

  • The paper addresses key 3D-editing challenges: keeping semantic consistency with prompt-driven edits while preserving local invariance so unedited regions match the original asset.
  • It critiques existing methods: multi-view pipelines lose information when edits are projected back to 3D, and voxel-based editing constrains both which regions can be modified and how large the modifications can be.
  • To overcome dataset scarcity, the authors propose the Beyond Voxel 3D Editing (BVE) framework along with a self-constructed large-scale 3D editing dataset.
  • BVE extends an image-to-3D generative foundation model with lightweight trainable modules that inject textual semantics efficiently, avoiding costly full-model retraining (see the adapter sketch after this list).
  • The framework also introduces an annotation-free 3D masking strategy to preserve unchanged regions during editing, improving faithfulness alongside text alignment in experiments.
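
The summary does not detail BVE's module design, so the following is only a minimal PyTorch sketch of the general technique the fourth point describes: a zero-initialized cross-attention adapter attached to a frozen image-to-3D backbone, so that only the adapter learns to inject text semantics. The class name, dimensions, and helper are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class TextInjectionAdapter(nn.Module):
    """Hypothetical trainable cross-attention adapter for a frozen backbone.

    Backbone hidden states attend to text-prompt embeddings; the
    zero-initialized output projection makes the adapter an identity
    residual at step 0, leaving the pretrained model undisturbed
    before training starts.
    """

    def __init__(self, hidden_dim: int, text_dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=hidden_dim, num_heads=num_heads,
            kdim=text_dim, vdim=text_dim, batch_first=True)
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)
        nn.init.zeros_(self.out_proj.weight)
        nn.init.zeros_(self.out_proj.bias)

    def forward(self, hidden: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N, hidden_dim) backbone tokens
        # text_emb: (B, T, text_dim) prompt embeddings from a text encoder
        attn_out, _ = self.cross_attn(self.norm(hidden), text_emb, text_emb)
        return hidden + self.out_proj(attn_out)

def freeze_backbone(backbone: nn.Module) -> None:
    """Freeze every pretrained weight; only the adapters stay trainable."""
    for p in backbone.parameters():
        p.requires_grad_(False)
```

Training only such adapters keeps the optimizer state and gradient memory proportional to the small injected modules rather than the full foundation model, which is the efficiency argument the key point makes.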

Abstract

3D editing refers to the ability to apply local or global modifications to 3D assets. Effective 3D editing requires maintaining semantic consistency by performing localized changes according to prompts, while also preserving local invariance so that unchanged regions remain consistent with the original. However, existing approaches have significant limitations: multi-view editing methods incur losses when projecting back to 3D, while voxel-based editing is constrained in both the regions that can be modified and the scale of modifications. Moreover, the lack of sufficiently large editing datasets for training and evaluation remains a challenge. To address these challenges, we propose a Beyond Voxel 3D Editing (BVE) framework with a self-constructed large-scale dataset specifically tailored for 3D editing. Building upon this dataset, our model enhances a foundational image-to-3D generative architecture with lightweight, trainable modules, enabling efficient injection of textual semantics without the need for expensive full-model retraining. Furthermore, we introduce an annotation-free 3D masking strategy to preserve local invariance, maintaining the integrity of unchanged regions during editing. Extensive experiments demonstrate that BVE achieves superior performance in generating high-quality, text-aligned 3D assets, while faithfully retaining the visual characteristics of the original input.
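
The abstract leaves the masking mechanism itself unspecified. As a hedged illustration of the underlying idea, the sketch below derives a 3D edit mask without annotations by comparing original and edited latent grids, then blends the original back into untouched regions to preserve local invariance. The function name, tensor layout, and threshold are assumptions for illustration, not BVE's actual procedure.

```python
import torch

def annotation_free_blend(original: torch.Tensor,
                          edited: torch.Tensor,
                          threshold: float = 0.1) -> torch.Tensor:
    """Blend an edited 3D latent with its original using a derived mask.

    original, edited: (C, D, H, W) latent grids for the same asset.
    The mask marks locations the edit actually changed; everywhere
    else the original values are copied back unchanged.
    """
    # Per-location edit magnitude, averaged over channels and
    # normalized to [0, 1] so the threshold is scale-free.
    diff = (edited - original).abs().mean(dim=0, keepdim=True)
    diff = diff / (diff.max() + 1e-8)
    mask = (diff > threshold).float()  # 1 inside the edited region
    return mask * edited + (1.0 - mask) * original
```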