3D-ReGen: A Unified 3D Geometry Regeneration Framework

arXiv cs.CV / 5/1/2026


Key Points

  • The article presents 3D-ReGen, a framework for regenerating 3D objects using 2D images together with an initial 3D shape, aiming to go beyond one-shot text/image-to-3D generation.
  • Unlike typical one-shot generators, which offer limited control, 3D-ReGen is conditioned on an input geometry, so it can enhance, reconstruct, and edit 3D assets, improving them relative to the starting shape.
  • The method introduces a new conditioning mechanism based on VecSet, enabling updates to the input geometry with consistent, fine-grained details.
  • 3D-ReGen learns a broadly applicable regeneration prior from existing (off-the-shelf) 3D datasets using self-supervised pretext tasks and data augmentations, avoiding the need for additional annotations.
  • Experiments evaluate both geometric consistency and fine-detail quality, reporting state-of-the-art results in controllable 3D generation across multiple tasks.
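The paper does not publish implementation details here, but the VecSet-based conditioning can be pictured as letting the tokens of the latent set being generated attend to the latent set of the input shape. The sketch below is a toy, hypothetical illustration in NumPy: `denoise_step`, `attend`, and the random weights are assumptions, not the authors' code; the point is only that the condition tokens are injected as extra keys/values so the output stays consistent with the source geometry.

```python
import numpy as np

def attend(q, kv, wq, wk, wv):
    # Single-head scaled dot-product attention (toy, no masking or bias).
    Q, K, V = q @ wq, kv @ wk, kv @ wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def denoise_step(noisy, cond, params):
    # Hypothetical conditioning: prepend the input shape's latent set to
    # the key/value tokens, so every token being denoised can attend to
    # the source geometry while it is updated.
    kv = np.concatenate([cond, noisy], axis=0)
    return attend(noisy, kv, *params)

rng = np.random.default_rng(0)
d = 16
cond = rng.standard_normal((32, d))    # latent set of the input 3D shape
noisy = rng.standard_normal((32, d))   # latent set being regenerated
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
out = denoise_step(noisy, cond, params)
print(out.shape)  # (32, 16): same set size as the input tokens
```

In a real model this step would sit inside a trained transformer; the sketch only shows why set-latent (VecSet-style) conditioning preserves a token-to-token link between the input geometry and the regenerated one.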

Abstract

We consider the problem of regenerating 3D objects from 2D images and initial 3D shapes. Most 3D generators operate in a one-shot fashion, converting text or images to a 3D object with limited controllability. We introduce instead 3D-ReGen, a 3D regenerator that is conditioned on an initial 3D shape. This conceptually simple formulation allows us to support numerous useful tasks, including 3D enhancement, reconstruction, and editing. 3D-ReGen uses a new conditioning mechanism based on VecSet, which allows the regenerator to update or improve the input geometry with consistent fine-grained details. 3D-ReGen learns a widely applicable regeneration prior from off-the-shelf 3D datasets via self-supervised pretext tasks and augmentations, without additional annotations. We evaluate both the geometric consistency and fine-grained quality of 3D-ReGen, achieving state-of-the-art performance in controllable 3D generation across several tasks.
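The self-supervised recipe described above — learning a regeneration prior from unannotated 3D datasets via pretext tasks and augmentations — amounts to degrading a clean shape and asking the model to restore it. The helper below is a minimal, assumed example (the function name, dropout rate, and jitter scale are illustrative, not from the paper): it builds a (degraded, target) training pair from a single shape, so no extra annotations are required.

```python
import numpy as np

def make_regen_pair(points, rng, drop=0.5, sigma=0.02):
    """Build a (degraded, target) pair from one unannotated shape.

    Degradation = random point dropout + Gaussian jitter; the clean
    shape itself serves as the regeneration target, so the pretext
    task needs no labels (hypothetical augmentation, for illustration).
    """
    keep = rng.random(len(points)) > drop          # drop ~half the points
    degraded = points[keep] + rng.normal(0.0, sigma, (keep.sum(), 3))
    return degraded, points

rng = np.random.default_rng(7)
clean = rng.random((2048, 3))          # stand-in for points sampled from a mesh
degraded, target = make_regen_pair(clean, rng)
print(degraded.shape[0] < target.shape[0])  # True: points were dropped
```

Training on many such pairs drawn from off-the-shelf datasets is what would give the regenerator a broadly applicable prior; the specific corruptions used by 3D-ReGen are not detailed in this summary.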