A Gesture-Based Visual Learning Model for Acoustophoretic Interactions using a Swarm of AcoustoBots

arXiv cs.RO / 4/22/2026


Key Points

  • The paper introduces a gesture-based visual learning framework to enable intuitive, contactless human control of a multimodal AcoustoBot swarm.
  • It combines ESP32-CAM gesture capture, PhaseSpace motion tracking, centralized processing, and an OpenCLIP-based VLM (with linear probing) to recognize three hand gestures (a classification sketch follows this list).
  • The recognized gestures are mapped to three modalities—mid-air haptics, directional audio, and acoustic levitation—on the AcoustoBots.
  • Gesture classification performance improves from about 67% on a small dataset to nearly 98% on the largest dataset, and integrated two-robot tests show 87.8% gesture-to-modality switching accuracy over 90 trials.
  • The system’s average end-to-end latency is 3.95 seconds, and the authors note key limitations including centralized processing, a fixed gesture set, and evaluation in controlled environments.
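
The linear-probing step described above is simple to prototype: frozen OpenCLIP image embeddings are extracted for each camera frame and a small classifier is fit on top of them. The sketch below is a minimal illustration, not the authors' code; the checkpoint ("ViT-B-32" / "laion2b_s34b_b79k"), the gesture label names, and the data handling are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): linear probing on frozen OpenCLIP
# image embeddings for a 3-class gesture classifier. The checkpoint name,
# gesture labels, and data handling are assumptions for illustration only.
import torch
import open_clip
from sklearn.linear_model import LogisticRegression

# Hypothetical gesture names; the paper's exact gesture vocabulary may differ.
GESTURES = ["open_palm", "fist", "point"]

# Load a frozen CLIP image encoder (checkpoint choice is an assumption).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

@torch.no_grad()
def embed(images):
    """Encode a batch of preprocessed frames into L2-normalized CLIP features."""
    feats = model.encode_image(images)
    return (feats / feats.norm(dim=-1, keepdim=True)).cpu().numpy()

def fit_linear_probe(train_frames, train_labels):
    """Fit a logistic-regression probe on frozen embeddings (frames already preprocessed)."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(embed(train_frames), train_labels)
    return probe

def classify(probe, frame):
    """Map one preprocessed camera frame to a gesture label."""
    return GESTURES[int(probe.predict(embed(frame.unsqueeze(0)))[0])]
```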

Abstract

AcoustoBots are mobile acoustophoretic robots capable of delivering mid-air haptics, directional audio, and acoustic levitation, but existing implementations rely on scripted commands and lack an intuitive interface for real-time human control. This work presents a gesture-based visual learning framework for contactless human-swarm interaction with a multimodal AcoustoBot platform. The system combines ESP32-CAM gesture capture, PhaseSpace motion tracking, centralized processing, and an OpenCLIP-based visual learning model (VLM) with linear probing to classify three hand gestures and map them to haptics, audio, and levitation modalities. Validation accuracy improved from about 67% with a small dataset to nearly 98% with the largest dataset. In integrated experiments with two AcoustoBots, the system achieved an overall gesture-to-modality switching accuracy of 87.8% across 90 trials, with an average end-to-end latency of 3.95 seconds. These results demonstrate the feasibility of using a vision-language-model-based gesture interface for multimodal human-swarm interaction. While the current system is limited by centralized processing, a static gesture set, and controlled-environment evaluation, it establishes a foundation for more expressive, scalable, and accessible swarm robotic interfaces.
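
To make the gesture-to-modality mapping concrete, the following minimal Python sketch shows how a centralized controller might broadcast a modality switch to a two-robot swarm once a gesture is recognized. The AcoustoBot class, its command interface, and the specific gesture-to-modality assignments are hypothetical placeholders; the paper's actual control interface is not reproduced here.

```python
# Minimal sketch (not the paper's implementation): centralized dispatch that
# maps a recognized gesture to a modality switch on each AcoustoBot. The
# AcoustoBot class, its command interface, and the gesture-to-modality
# assignments below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AcoustoBot:
    """Stand-in for one networked robot; the real transport is unspecified here."""
    name: str

    def set_modality(self, modality: str) -> None:
        # Placeholder: a real controller would send this command to the
        # robot's phased-array hardware (e.g. over Wi-Fi).
        print(f"{self.name}: switching to {modality}")

# Assumed mapping of three gestures to the three modalities.
GESTURE_TO_MODALITY = {
    "open_palm": "mid-air haptics",
    "fist": "directional audio",
    "point": "acoustic levitation",
}

def dispatch(gesture: str, swarm: list) -> None:
    """Broadcast the modality implied by a recognized gesture to the swarm."""
    modality = GESTURE_TO_MODALITY.get(gesture)
    if modality is None:
        return  # ignore gestures outside the fixed set
    for bot in swarm:
        bot.set_modality(modality)

if __name__ == "__main__":
    swarm = [AcoustoBot("bot_1"), AcoustoBot("bot_2")]
    dispatch("fist", swarm)  # both robots switch to directional audio
```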