Dual-Control Frequency-Aware Diffusion Model for Depth-Dependent Optical Microrobot Microscopy Image Generation

arXiv cs.RO / 4/14/2026


Key Points

  • The paper introduces Du-FreqNet, a dual-control, frequency-aware diffusion model that generates depth-dependent microscopy images of optical microrobots actuated by optical tweezers.
  • It addresses shortcomings of prior GAN-based augmentation by enforcing physically consistent optical characteristics, especially diffraction and defocus effects that vary with depth.
  • Du-FreqNet uses two separate ControlNet branches to encode microrobot 3D point clouds and depth-specific mesh layers, enabling controllable image synthesis conditioned on 3D structure.
  • The method adds an adaptive frequency-domain loss that reweights frequency components according to distance from the focal plane and applies differentiable FFT-based supervision to better match real optical frequency distributions.
  • Experiments indicate strong performance with limited data (e.g., ~80 images per pose), including a reported 20.7% SSIM improvement over baselines and gains in downstream 3D pose/depth estimation for improved closed-loop microrobotic control.
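The adaptive frequency-domain loss can be sketched roughly as follows. This is a hedged illustration, not the paper's implementation: the exponential weighting scheme, the `alpha` decay rate, and the magnitude-spectrum L1 comparison are all assumptions made for clarity. The idea is that defocus suppresses high spatial frequencies, so components are reweighted by distance from the focal plane before comparison; in a training loop one would use `torch.fft.fft2` so the loss stays differentiable.

```python
import numpy as np

def adaptive_freq_loss(gen, real, depth_offset, alpha=0.1):
    """Sketch of a depth-adaptive frequency-domain loss.

    gen, real: 2D single-channel images.
    depth_offset: |z - z_focus|, distance to the focal plane.
    alpha: hypothetical decay rate for down-weighting high
           frequencies as the image moves away from focus.
    """
    # Centered Fourier spectra (swap in torch.fft.fft2 for autograd)
    G = np.fft.fftshift(np.fft.fft2(gen))
    R = np.fft.fftshift(np.fft.fft2(real))

    # Normalized radial frequency grid: 0 at DC, 1 at the corners
    h, w = gen.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    radius /= radius.max()

    # Far from focus, defocus removes high-frequency content, so
    # high frequencies get less weight; in focus, weights are ~1.
    weight = np.exp(-alpha * depth_offset * radius)

    # Weighted L1 distance between magnitude spectra
    return np.mean(weight * np.abs(np.abs(G) - np.abs(R)))
```

With this weighting, the same pixel-level mismatch is penalized less at high frequencies when the image is far from focus, matching the physical expectation that defocused images carry mostly low-frequency content.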

Abstract

Optical microrobots actuated by optical tweezers (OT) are important for cell manipulation and microscale assembly, but their autonomous operation depends on accurate 3D perception. Developing such perception systems is challenging because large-scale, high-quality microscopy datasets are scarce, owing to complex fabrication processes and labor-intensive annotation. Although generative AI offers a promising route for data augmentation, existing generative adversarial network (GAN)-based methods struggle to reproduce key optical characteristics, particularly depth-dependent diffraction and defocus effects. To address this limitation, we propose Du-FreqNet, a dual-control, frequency-aware diffusion model for physically consistent microscopy image synthesis. The framework features two independent ControlNet branches to encode microrobot 3D point clouds and depth-specific mesh layers, respectively. We introduce an adaptive frequency-domain loss that dynamically reweights high- and low-frequency components based on the distance to the focal plane. By leveraging differentiable FFT-based supervision, Du-FreqNet captures physically meaningful frequency distributions often missed by pixel-space methods. Trained on a limited dataset (e.g., 80 images per pose), our model achieves controllable, depth-dependent image synthesis, improving SSIM by 20.7% over baselines. Extensive experiments demonstrate that Du-FreqNet generalizes effectively to unseen poses and significantly enhances downstream tasks, including 3D pose and depth estimation, thereby facilitating robust closed-loop control in microrobotic systems.
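The dual-control idea in the abstract can be illustrated with a toy sketch. Everything below is hypothetical (shapes, the linear-plus-tanh encoders, and the feature dimensions are invented for illustration, and real ControlNet branches are zero-initialized convolutional copies of U-Net encoder blocks); it only shows the structural pattern of two independently encoded conditions each contributing an additive residual to frozen backbone features.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_branch(condition, weights):
    # Toy conditioning encoder: project condition features into
    # a residual that is added onto the backbone activations.
    return np.tanh(condition @ weights)

# Hypothetical condition inputs: point-cloud features of the
# microrobot and features of a depth-specific mesh layer.
point_cloud_feat = rng.normal(size=(16, 8))
mesh_layer_feat = rng.normal(size=(16, 8))

# Independent (toy) weights for the two branches.
W_pc = rng.normal(size=(8, 4))
W_mesh = rng.normal(size=(8, 4))

# Stand-in for frozen diffusion-backbone activations.
base_features = rng.normal(size=(16, 4))

# Each branch is encoded separately and injected additively,
# mirroring ControlNet-style residual conditioning.
conditioned = (base_features
               + control_branch(point_cloud_feat, W_pc)
               + control_branch(mesh_layer_feat, W_mesh))
```

Keeping the two branches separate lets the 3D structure condition and the depth-layer condition be varied independently at sampling time, which is what makes the synthesis controllable in both pose and depth.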