**The Struggle of E-commerce Apparel 👕**

In professional apparel photography, the "Ghost Mannequin" (or "hollow man") effect is the gold standard. It makes clothing look 3D and "worn" without a visible model. Traditionally, this requires hours of manual clipping and compositing two separate photos (one with the model, one with the garment inside-out).
At Rewarx Studio AI, we decided this was a perfect problem for Generative AI to solve—but it’s a lot harder than just hitting "Generate."
**The Technical Challenge: Depth & Occlusion**

The hardest part isn't removing the model; it's reconstructing what was behind them. Specifically:
* The inner back of the collar.
* The curvature of the sleeve openings.
* Maintaining lighting consistency inside the "hollow" areas.
**Our 3-Step Pipeline 🛠️**

To solve this, we moved away from generic inpainting and built a specialized pipeline:
1. **Semantic Masking (SAM):** We use the Segment Anything Model to precisely isolate the garment. But we don't just mask the model; we also predict the "inner" bounds where the mannequin would logically end.
2. **Depth Estimation (Depth Anything):** To make the clothing look 3D and not like a flat sticker, we generate a depth map. This tells the AI, "this collar area is 5 cm behind the front zipper," which guides the shading.
3. **Context-Aware Inpainting:** This is where the magic happens. We use a fine-tuned SDXL inpainting model that understands apparel structure.
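The masking and depth steps above can be sketched in a few lines of NumPy. This is a toy illustration under our own naming, not the production pipeline: `inner_bounds_mask` approximates the predicted "inner" bounds by eroding the garment mask, and `depth_guided_shading` turns a relative depth map into a darkening factor for the hollow interior.

```python
import numpy as np

def inner_bounds_mask(garment_mask: np.ndarray, shrink_px: int = 12) -> np.ndarray:
    """Approximate the 'inner' region (collar back, sleeve openings) by
    eroding the binary garment mask. A toy 4-neighbour erosion stands in
    here for a real morphology op (e.g. cv2.erode)."""
    m = garment_mask.astype(bool)
    for _ in range(shrink_px):
        m = (m & np.roll(m, 1, axis=0) & np.roll(m, -1, axis=0)
               & np.roll(m, 1, axis=1) & np.roll(m, -1, axis=1))
    return m.astype(np.uint8)

def depth_guided_shading(depth: np.ndarray, strength: float = 0.6) -> np.ndarray:
    """Map a relative depth map (larger = farther) to a per-pixel brightness
    factor, so surfaces deeper inside the garment (e.g. the inner back of
    the collar) are rendered darker."""
    rng = depth.max() - depth.min()
    d = (depth - depth.min()) / (rng + 1e-8)  # normalize to [0, 1]
    return 1.0 - strength * d  # multiply into the interior's luminance
```

In production, the mask comes from SAM and the depth map from Depth Anything, but the flow is the same: mask, shrink to the inner bounds, then shade the hollow region by depth.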
**Let’s Talk Prompts (The Precision Part) 🔍**
For the AI to understand an "invisible interior," generic terms fail. We inject technical descriptors into the text conditioning to guide the texture:
*Internal Prompt Logic:*

`(3D hollow effect:1.2), (inner garment texture:1.3), invisible mannequin, detailed fabric weave inside collar, consistent studio lighting, photorealistic, 8k, --no floating limbs, --no distorted seams`
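To keep these prompt strings consistent across hundreds of SKUs, they can be assembled programmatically. A minimal sketch (the helper name `build_prompt` is ours, not part of any library; note that most Stable Diffusion pipelines take negatives via a separate `negative_prompt` argument rather than Midjourney-style `--no` flags):

```python
def build_prompt(weighted_terms, plain_terms, negatives):
    """Assemble an SD-style prompt pair: '(term:weight)' attention syntax
    for emphasized terms, plain terms appended, negatives kept separate."""
    positive = ", ".join(
        [f"({t}:{w})" for t, w in weighted_terms] + list(plain_terms)
    )
    negative = ", ".join(negatives)
    return positive, negative

pos, neg = build_prompt(
    [("3D hollow effect", 1.2), ("inner garment texture", 1.3)],
    ["invisible mannequin", "detailed fabric weave inside collar",
     "consistent studio lighting", "photorealistic", "8k"],
    ["floating limbs", "distorted seams"],
)
```

The weighted/plain split makes it easy to tune emphasis per garment category without rewriting the whole string.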
**The Result**

We’ve managed to reduce a process that usually takes a senior retoucher 20–30 minutes per image down to under 15 seconds. For a brand with 500 SKUs, that’s a game-changer.
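The back-of-envelope math behind that claim, assuming the midpoint of the manual estimate:

```python
manual_min_per_image = 25   # midpoint of the 20-30 minute manual estimate
ai_sec_per_image = 15       # stated AI pipeline time per image
skus = 500

manual_hours = skus * manual_min_per_image / 60   # total manual retouching time
ai_hours = skus * ai_sec_per_image / 3600         # total AI pipeline time

print(f"manual: {manual_hours:.0f} h, AI: {ai_hours:.1f} h")
```

Roughly 208 hours of retouching collapses to about 2 hours of compute.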
**What’s your take?**

I’m curious: has anyone else in the community experimented with combining depth maps and inpainting for industrial use cases? I’d love to hear your thoughts on maintaining material texture during high-strength inpainting.
Cheers,
Keble
Founder @ Rewarx Studio AI
