Originally posted on nanowow.ai — reposted here for Dev.to readers.
GPT Image 2 Subject-Lock Editing: A Practical Guide to input_fidelity
GPT Image 2's Subject-Lock editing (via the input_fidelity parameter) is the single most useful feature for ecommerce sellers, fashion operators, and anyone doing variant photography at scale. It's also the one capability DALL-E 3, Midjourney, and Ideogram have no equivalent for.
This guide is practical: what input_fidelity does, what values to use for what jobs, when it fails, and how to build real workflows around it.
If you want to try it while reading, jump to nanowow.ai/gpt-image-2, switch to Edit mode, and upload any reference image.
What Subject-Lock actually does
Every previous image model (DALL-E 3, Midjourney, Stable Diffusion, Ideogram) regenerates from scratch each time. You upload a reference, describe changes, and the model produces a new image that resembles the reference. Small drifts in shape, proportion, color, or detail happen on every regeneration.
GPT Image 2's Edit mode works differently. You upload a reference image and set input_fidelity to a value between 0 and 1:
- `input_fidelity: 1.0` — the subject is preserved near-pixel-perfect. Only the parts you explicitly describe (background, lighting, text, clothing) change.
- `input_fidelity: 0.0` — the reference becomes a loose stylistic suggestion; the model regenerates freely.
- Anywhere in between — a smooth sliding scale.
In practice, three zones matter:
| Zone | Value range | What happens |
|---|---|---|
| Pixel lock | 0.8 – 1.0 | Product / logo / face stays identical across generations. Best for product variant photography, label swaps, background replacement. |
| Shape lock | 0.5 – 0.7 | Overall silhouette and proportions preserved, but textures and finer details can drift. Best for outfit restyling, pose-preserving restyling, lighting-only changes. |
| Inspiration | 0.2 – 0.4 | Loose stylistic borrowing. Best for exploring variations in mood, style, or medium while keeping rough composition. |
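The article doesn't document the exact request format, so here is a minimal sketch of how an Edit-mode call might be assembled, assuming a JSON-style body. The `mode`, `image_url`, and `input_fidelity` field names are assumptions for illustration, not a documented API:

```python
# Hypothetical sketch of an Edit-mode request body. Field names are
# assumptions; only the input_fidelity semantics come from this guide.

def build_edit_request(reference_url: str, prompt: str, input_fidelity: float) -> dict:
    """Assemble a payload for a Subject-Lock edit; validates the fidelity range."""
    if not 0.0 <= input_fidelity <= 1.0:
        raise ValueError("input_fidelity must be between 0.0 and 1.0")
    return {
        "mode": "edit",                    # assumed field name
        "image_url": reference_url,        # the uploaded reference image
        "prompt": prompt,
        "input_fidelity": input_fidelity,  # 0.8-1.0 pixel lock, 0.5-0.7 shape lock
    }

payload = build_edit_request(
    "https://example.com/product.png",
    "Place this product on a marble countertop with morning window light.",
    0.9,
)
```

Validating the range up front saves a wasted generation credit on an out-of-bounds value.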
Where Subject-Lock wins
Ecommerce product photography
The canonical use case. You photograph one product, generate N backgrounds.
Workflow:
- Upload product photo on plain backdrop (any photo, even a phone shot).
- Set `input_fidelity: 0.9`.
- Prompt: "Place this product on a marble countertop with morning window light, natural shadow at 45°, minimalist editorial composition."
- Generate 5 variants — all preserve the product identically, change only the scene.
No Photoshop compositing. No masking. The label text, cap shape, and ceramic material stay exact across generations because the model preserves them rather than regenerating them.
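The variant loop above can be sketched as a small batch helper. The payload shape is an assumption (the guide doesn't document the client API), but the fixed `input_fidelity: 0.9` and the preservation constraint in each prompt follow the workflow described here:

```python
# Sketch: build N scene-variant requests around one pixel-locked product.
# The dict fields are illustrative, not a documented API.

SCENES = [
    "marble countertop with morning window light",
    "rustic oak table with warm evening light",
    "white studio sweep with soft side light",
    "concrete plinth with hard editorial flash",
    "linen tablecloth with shallow depth of field",
]

def variant_requests(reference_url: str, scenes: list[str]) -> list[dict]:
    return [
        {
            "image_url": reference_url,
            "input_fidelity": 0.9,  # pixel-lock zone: product stays identical
            "prompt": f"Place this product on a {scene}. "
                      "Do not alter the product itself.",
        }
        for scene in scenes
    ]

batch = variant_requests("https://example.com/hero.png", SCENES)
```

Ending every prompt with the same "do not alter" constraint keeps the preserved subject consistent across the whole batch.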
Label / packaging swaps
Take an existing product photo, change the label or packaging text without reshooting.
Workflow:
- Upload existing product photo.
- Set `input_fidelity: 0.85`.
- Prompt: "Change the label text to read exactly 'LIMITED EDITION — 500ml — BREWED 2026-04'. Keep product shape, lighting, and background identical."
- The model rewrites just the text on the label and preserves everything else.
This is the single most common request from ecommerce operators, and it previously required a manual retouch or a full reshoot.
Fashion: outfit restyling with pose preservation
Upload a model photo, restyle the outfit while preserving the pose.
Workflow:
- Upload full-body model shot.
- Set `input_fidelity: 0.6` (shape-lock zone — pose preserved, outfit can change).
- Prompt: "Replace outfit with a charcoal Issey Miyake pleated blazer over white shirt, same pose, same lighting."
- Pose and composition locked; outfit redraws from the described garment.
For fashion catalogs generating 20 outfits on the same model, this replaces an entire shoot day with 20 prompts.
Character consistency across a campaign
Shoot one hero image, generate an entire campaign with the same character.
- Same character, 10 different scenes → `input_fidelity: 0.85`
- Same outfit, different models → `input_fidelity: 0.5` + describe the new model
- Same product, different seasons → `input_fidelity: 0.9` + describe the seasonal backdrop
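The campaign mappings above can be encoded as a small lookup so a pipeline picks the right fidelity automatically. The task names are illustrative (not an API); the values come straight from the list above:

```python
# Sketch: map campaign task -> input_fidelity, per the guide's mappings.
# Task names are illustrative labels, not part of any documented API.

FIDELITY_BY_TASK = {
    "same_character_new_scene": 0.85,  # character locked, scene described
    "same_outfit_new_model": 0.5,      # shape lock; describe the new model
    "same_product_new_season": 0.9,    # pixel lock; describe the backdrop
}

def fidelity_for(task: str) -> float:
    """Return the zone value for a known campaign task."""
    try:
        return FIDELITY_BY_TASK[task]
    except KeyError:
        raise ValueError(f"unknown task: {task}") from None
```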
Prompt patterns that work
Pattern 1: Explicit preservation list
Tell the model what NOT to change. GPT Image 2 respects preservation constraints.
Change the background to a minimalist white studio setup with soft side
light. Preserve: product shape, label, ceramic texture, cap color.
Do not alter the product itself.
Pattern 2: Scene + subject separation
Scene: Nordic kitchen countertop with morning light, linen napkin
visible at corner, shallow DoF.
Subject (preserved from reference): [Product] — keep label, proportions,
and finish pixel-identical.
Pattern 3: Material-level lock
Preserve the ribbed glass texture, liquid color, and label typography
exactly as in the reference. Only the wooden background and the
surrounding ingredients may change.
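Pattern 1's explicit preservation list is mechanical enough to template. A minimal sketch, assuming you assemble prompts programmatically (the function and its wording are illustrative, not part of any API):

```python
# Sketch: assemble a Pattern-1 prompt — change description, then an
# explicit preservation list, then the closing "do not alter" constraint.

def preservation_prompt(change: str, preserve: list[str]) -> str:
    return (
        f"{change} "
        f"Preserve: {', '.join(preserve)}. "
        "Do not alter the subject itself."
    )

prompt = preservation_prompt(
    "Change the background to a minimalist white studio setup with soft side light.",
    ["product shape", "label", "ceramic texture", "cap color"],
)
```

Keeping the preservation list machine-generated means every variant in a batch carries identical constraints, so nothing silently drops out of the prompt between generations.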
Where Subject-Lock struggles
Three scenarios where input_fidelity doesn't work well. Know them before you build a pipeline.
1. Real human faces
GPT Image 2 is routed through fal.ai, which enforces the upstream content policy on real-person likenesses. Uploading a photo with an identifiable face frequently triggers content_policy_violation errors. Use stylized characters, illustration-based references, or crop faces out for product-focused shots.
2. Small / low-resolution reference images
If your reference is 512×512 or smaller, fine details are lost to the model's pre-processing. Upload at least 1024×1024 references when label or typography accuracy matters.
3. Conflicting prompts
Setting input_fidelity: 0.9 and then asking for a major stylistic transformation ("turn this product into a watercolor painting") produces muddy results. High fidelity is for scene/light/text changes around a preserved subject, not for re-rendering the subject itself.
Advanced: combining Subject-Lock with structured text
The most powerful workflow combines input_fidelity: 0.9 with GPT Image 2's text-rendering capability. You preserve a product and change only the text on it.
Example — label text swap:
Change the label to read exactly "Limited Edition 2026 - #0147 of 500".
Keep bottle shape, glass color, cork, and background identical.
Font: same as reference, matching weight and kerning.
The model preserves the bottle pixel-perfect, rewrites only the label text, and matches the existing typography. For limited-edition drops, serial-numbered products, or personalized SKUs, this scales one hero photo into infinite variants.
Quick-start checklist
Before your first Subject-Lock generation:
- Reference image ≥ 1024×1024, PNG or JPEG, under 30 MB.
- No real human faces in the reference (unless intentionally illustration/stylized).
- `input_fidelity` picked from the zone table above based on what you're preserving.
- Prompt describes scene/light/text changes, not subject transformations.
- Preservation list at the end — what should NOT change.
Try a first generation at input_fidelity: 0.9 and adjust down if the model is too rigid, up if it's drifting.
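The checklist above can be sketched as a preflight function. The 1024 px minimum and 30 MB cap come from this guide; the function itself and its field names are illustrative:

```python
# Sketch: preflight checks mirroring the quick-start checklist.
# Limits (1024 px minimum, 30 MB cap) are from this guide; the
# function is an illustrative helper, not part of any documented API.

MIN_SIDE = 1024
MAX_BYTES = 30 * 1024 * 1024

def preflight(width: int, height: int, size_bytes: int, input_fidelity: float) -> list[str]:
    """Return a list of problems; an empty list means ready to generate."""
    problems = []
    if min(width, height) < MIN_SIDE:
        problems.append(f"reference is {width}x{height}; use at least {MIN_SIDE}x{MIN_SIDE}")
    if size_bytes > MAX_BYTES:
        problems.append("reference exceeds the 30 MB upload limit")
    if not 0.0 <= input_fidelity <= 1.0:
        problems.append("input_fidelity must be between 0.0 and 1.0")
    return problems

assert preflight(2048, 2048, 5_000_000, 0.9) == []
```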
Where to go next
- Browse 40 curated GPT Image 2 prompts with real outputs: nanowow.ai/gpt-image-2/prompts
- Try Subject-Lock free (5 credits on signup): nanowow.ai/gpt-image-2 → switch to Edit mode
- Full model comparison (vs DALL-E 3, Nano Banana 2, Ideogram): nanowow.ai/compare/gpt-image-2-vs-dall-e-3
- Prompt structure deep-dive: Best GPT Image 2 Prompts (2026)
FAQ
Q: Can Subject-Lock edit photos of real people?
Mostly no — fal.ai's upstream content policy flags real-person likenesses. Stylized characters, illustrations, and product/object photos work fine.
Q: What's the credit cost for Edit mode?
Slightly higher than text-to-image at the same size/quality (roughly +1-2 credits per generation for the reference image processing).
Q: Can I upload multiple reference images?
Yes — GPT Image 2 accepts an array of reference images. Useful for character + outfit preservation, or start + end frames (for video-adjacent workflows).
Q: Does it work with transparent backgrounds?
Yes. Combine `background: "transparent"` with Subject-Lock to swap backgrounds while preserving the subject.
Q: How different is this from ChatGPT's inpainting?
Fundamentally different. ChatGPT inpainting regenerates the masked region every time — no subject preservation guarantee. Subject-Lock preserves at the pixel level by design.
Try Subject-Lock now: nanowow.ai/gpt-image-2 (Edit mode). Browse 40 curated prompts: nanowow.ai/gpt-image-2/prompts.
Questions? Reply below.