Dual-Domain Representation Alignment: Bridging 2D and 3D Vision via Geometry-Aware Architecture Search

arXiv cs.AI / 3/23/2026


Key Points

  • EvoNAS provides an efficient distributed approach to multi-objective evolutionary architecture search, reducing candidate evaluation cost while preserving Pareto-optimal accuracy-efficiency trade-offs.
  • It uses a hybrid supernet that combines Vision State Space (VSS) blocks with Vision Transformer (ViT) modules and introduces Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) to boost shared representational capacity and ranking consistency.
  • A Distributed Multi-Model Parallel Evaluation (DMMPE) framework with GPU pooling and asynchronous scheduling further speeds up large-scale validation, achieving over 70% efficiency gains versus traditional data-parallel methods.
  • Experiments on COCO, ADE20K, KITTI, and NYU-Depth v2 show EvoNets achieve Pareto-optimal trade-offs with lower inference latency and higher throughput under fixed budgets, while maintaining strong generalization on downstream tasks like novel view synthesis.
  • Code is available on GitHub, enabling replication and adoption of EvoNets in resource-constrained deployment scenarios.
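
The DMMPE idea above, dispatching many candidate models onto a shared pool of GPUs instead of running them one by one, can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: `evaluate_on_gpu` and the candidate names are hypothetical stand-ins.

```python
import queue
import threading

def parallel_evaluate(candidates, num_gpus, evaluate_on_gpu):
    """Score each candidate architecture concurrently, drawing GPU ids
    from a shared pool so evaluations overlap across devices."""
    gpu_pool = queue.Queue()
    for gpu_id in range(num_gpus):
        gpu_pool.put(gpu_id)

    results = {}
    lock = threading.Lock()

    def worker(cand_id, cand):
        gpu_id = gpu_pool.get()          # block until a GPU is free
        try:
            score = evaluate_on_gpu(cand, gpu_id)
        finally:
            gpu_pool.put(gpu_id)         # return the GPU to the pool
        with lock:
            results[cand_id] = score

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(candidates)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy usage: the "evaluation" just echoes the candidate and its GPU.
scores = parallel_evaluate(["net_a", "net_b", "net_c"], num_gpus=2,
                           evaluate_on_gpu=lambda c, g: (c, g))
```

In a real setup the worker would pin the model to `cuda:{gpu_id}` and run a validation pass; the pooling pattern is what lets asynchronous scheduling keep every device busy.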

Abstract

Modern computer vision requires balancing predictive accuracy with real-time efficiency, yet the high inference cost of large vision models (LVMs) limits deployment on resource-constrained edge devices. Although Evolutionary Neural Architecture Search (ENAS) is well suited for multi-objective optimization, its practical use is hindered by two issues: expensive candidate evaluation and ranking inconsistency among subnetworks. To address them, we propose EvoNAS, an efficient distributed framework for multi-objective evolutionary architecture search. We build a hybrid supernet that integrates Vision State Space and Vision Transformer (VSS-ViT) modules, and optimize it with a Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) strategy. By coupling the computational efficiency of VSS blocks with the semantic expressiveness of ViT modules, CA-DDKD improves the representational capacity of the shared supernet and enhances ranking consistency, enabling reliable fitness estimation during evolution without extra fine-tuning. To reduce the cost of large-scale validation, we further introduce a Distributed Multi-Model Parallel Evaluation (DMMPE) framework based on GPU resource pooling and asynchronous scheduling. Compared with conventional data-parallel evaluation, DMMPE improves efficiency by over 70% through concurrent multi-GPU, multi-model execution. Experiments on COCO, ADE20K, KITTI, and NYU-Depth v2 show that the searched architectures, termed EvoNets, consistently achieve Pareto-optimal trade-offs between accuracy and efficiency. Compared with representative CNN-, ViT-, and Mamba-based models, EvoNets deliver lower inference latency and higher throughput under strict computational budgets while maintaining strong generalization on downstream tasks such as novel view synthesis. Code is available at https://github.com/EMI-Group/evonas.
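
The Pareto-optimal accuracy-efficiency trade-off the abstract refers to can be made concrete with a small non-dominated filter. This is a generic sketch of the selection criterion, not the paper's evolutionary algorithm; the model names and numbers are invented for illustration.

```python
def pareto_front(models):
    """models: list of (name, accuracy, latency_ms); higher accuracy and
    lower latency are better. Returns the non-dominated subset, i.e. the
    models for which no other model is at least as good on both objectives
    and strictly better on one."""
    front = []
    for name, acc, lat in models:
        dominated = any(
            (a2 >= acc and l2 <= lat) and (a2 > acc or l2 < lat)
            for _, a2, l2 in models
        )
        if not dominated:
            front.append((name, acc, lat))
    return front

# Hypothetical candidates: (name, accuracy, latency in ms).
candidates = [
    ("model_fast",     0.78, 4.0),  # fastest, lower accuracy
    ("model_balanced", 0.82, 7.5),  # best accuracy here
    ("model_slow",     0.80, 9.0),  # dominated: less accurate AND slower
]
front = pareto_front(candidates)  # model_slow is filtered out
```

An evolutionary search like EvoNAS repeatedly applies this kind of dominance test to rank candidates, which is why reliable fitness estimates from the supernet (the role of CA-DDKD) matter: noisy scores flip dominance relations and corrupt the front.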