ReConText3D: Replay-based Continual Text-to-3D Generation
arXiv cs.CV / 4/16/2026
Key Points
- ReConText3D is proposed as the first continual text-to-3D generation framework, aiming to learn new 3D categories from text incrementally while avoiding catastrophic forgetting.
- The authors show that existing text-to-3D models degrade under incremental training, motivating a replay-based approach that preserves performance on previously learned categories.
- ReConText3D builds a compact, diverse replay memory using text-embedding k-Center selection, enabling rehearsal of prior knowledge without changing the underlying generative model architecture.
- The paper introduces Toys4K-CL, a class-incremental benchmark derived from Toys4K with balanced and semantically diverse splits to evaluate continual text-to-3D learning systematically.
- Experiments on Toys4K-CL indicate ReConText3D outperforms baselines across multiple generative backbones, maintaining high-quality generation for both old and newly learned classes.
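The paper does not spell out its selection routine beyond naming it, but the replay-memory construction described above can be sketched as greedy k-Center selection over precomputed text embeddings: repeatedly pick the sample farthest from the current set of centers, so the retained prompts cover the embedding space. The function name and the use of Euclidean distance are illustrative assumptions, not the authors' code.

```python
import numpy as np

def k_center_greedy(embeddings: np.ndarray, k: int, seed: int = 0) -> list[int]:
    """Greedy k-Center selection: choose k indices so that the maximum
    distance from any point to its nearest chosen center is (greedily)
    minimized. A common coreset heuristic for building replay buffers."""
    n = embeddings.shape[0]
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(n))]  # arbitrary first center
    # Distance from every point to its nearest selected center so far.
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest point = best coverage gain
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected
```

In a continual setting, this would run once per task over the text embeddings of that task's prompts, with the selected exemplars appended to the replay memory and rehearsed alongside new-class data.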