Visually-Guided Controllable Medical Image Generation via Fine-Grained Semantic Disentanglement
arXiv cs.CV / 3/12/2026
Key Points
- The paper presents a Visually-Guided Text Disentanglement framework to improve controllability in medical image generation by addressing the modality gap between detailed visuals and abstract clinical text.
- It introduces a cross-modal latent alignment mechanism that uses visual priors to disentangle unstructured text into independent semantic representations.
- A Hybrid Feature Fusion Module (HFFM) injects these features into a Diffusion Transformer through separated channels, enabling fine-grained structural control.
- Experiments on three datasets show improved generation quality and better downstream classification performance compared with existing methods.
- The authors release their source code on GitHub to support reproducibility and further research.
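The fusion step described above can be sketched in code. The snippet below is a minimal, illustrative mock-up of an HFFM-style injection: each disentangled semantic representation is projected through its own ("separated") channel before being fused additively with the image tokens of a diffusion transformer block. All names, shapes, and the additive fusion rule are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64          # token dimension inside the transformer (assumed)
n_tokens = 16         # number of latent image-patch tokens (assumed)
semantic_parts = 3    # e.g. anatomy / pathology / style factors (assumed)

# Disentangled text features: one vector per semantic factor.
text_feats = rng.standard_normal((semantic_parts, d_model))

# One projection matrix per factor -> separated injection channels.
proj = rng.standard_normal((semantic_parts, d_model, d_model)) / np.sqrt(d_model)

def hffm_inject(image_tokens, text_feats, proj):
    """Fuse per-factor text features into image tokens.

    Each factor k is projected through its own channel proj[k] and
    broadcast additively over all tokens, so factors stay independent
    until the final sum (a hypothetical stand-in for the paper's HFFM).
    """
    fused = image_tokens.copy()
    for k in range(text_feats.shape[0]):
        fused = fused + text_feats[k] @ proj[k]
    return fused

image_tokens = rng.standard_normal((n_tokens, d_model))
out = hffm_inject(image_tokens, text_feats, proj)
print(out.shape)  # (16, 64)
```

Keeping one projection per semantic factor is what makes the control fine-grained in this sketch: editing a single factor's text feature perturbs the image tokens without touching the other factors' channels.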