BioVITA: Biological Dataset, Model, and Benchmark for Visual-Textual-Acoustic Alignment

arXiv cs.CV / 3/26/2026


Key Points

  • The paper introduces BioVITA, a new multimodal framework that aligns visual, textual, and acoustic data for biological species understanding.
  • It builds a large training dataset with 1.3M audio clips and 2.3M images across 14,133 species, annotated with 34 ecological trait labels.
  • BioVITA extends BioCLIP2 with a two-stage training approach to align audio representations with both visual and textual representations.
  • It also releases a cross-modal retrieval benchmark covering all six retrieval directions among image, audio, and text, evaluated at the Family, Genus, and Species taxonomic levels.
  • Experiments indicate the method learns a shared representation space that captures species-level semantics beyond taxonomy, advancing multimodal biodiversity understanding.
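The summary above does not spell out the alignment objective, but frameworks that align a new modality (here audio) to a pretrained image-text model like BioCLIP2 typically use a CLIP-style symmetric contrastive loss. The sketch below is an illustrative assumption, not the paper's actual training code; function names, batch shapes, and the temperature value are made up for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_infonce(audio, anchor, temperature=0.07):
    """Symmetric InfoNCE between a batch of audio embeddings and matched
    anchor embeddings (visual or textual), paired by row index.
    (Illustrative sketch; hyperparameters are not from the paper.)"""
    a = l2_normalize(audio)
    b = l2_normalize(anchor)
    logits = a @ b.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(a))              # matched pairs sit on the diagonal

    def cross_entropy(lg):
        # Row-wise softmax cross-entropy against the diagonal positives.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the two retrieval directions (audio->anchor and anchor->audio).
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy usage: random "audio" and "vision" batches of 8 embeddings.
rng = np.random.default_rng(0)
loss = symmetric_infonce(rng.normal(size=(8, 64)), rng.normal(size=(8, 64)))
```

In a two-stage setup of this kind, the image and text towers would usually stay frozen while the audio encoder is trained against them, so the shared space of the pretrained model is preserved.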

Abstract

Understanding animal species from multimodal data poses an emerging challenge at the intersection of computer vision and ecology. While recent biological models, such as BioCLIP, have demonstrated strong alignment between images and textual taxonomic information for species identification, the integration of the audio modality remains an open problem. We propose BioVITA, a novel visual-textual-acoustic alignment framework for biological applications. BioVITA involves (i) a training dataset, (ii) a representation model, and (iii) a retrieval benchmark. First, we construct a large-scale training dataset comprising 1.3 million audio clips and 2.3 million images, covering 14,133 species annotated with 34 ecological trait labels. Second, building upon BioCLIP2, we introduce a two-stage training framework to effectively align audio representations with visual and textual representations. Third, we develop a cross-modal retrieval benchmark that covers all possible retrieval directions across the three modalities (i.e., image-to-audio, audio-to-text, text-to-image, and their reverse directions), at three taxonomic levels: Family, Genus, and Species. Extensive experiments demonstrate that our model learns a unified representation space that captures species-level semantics beyond taxonomy, advancing multimodal biodiversity understanding. The project page is available at: https://dahlian00.github.io/BioVITA_Page/
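Benchmarks of this shape are usually scored with retrieval metrics such as Recall@K, computed per direction and per taxonomic level. The sketch below shows one plausible way to score a single direction (e.g. audio-to-image) at a chosen level; the function, label scheme, and toy data are illustrative assumptions, not BioVITA's actual evaluation code.

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, query_labels, gallery_labels, k=1):
    """For each query (e.g. an audio clip), rank the gallery (e.g. images)
    by cosine similarity and count a hit if any top-k gallery item shares
    the query's label at the chosen level (Family, Genus, or Species).
    (Illustrative sketch of a standard retrieval metric.)"""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                              # (num_queries, num_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]     # indices of k nearest items
    hits = [any(gallery_labels[j] == ql for j in row)
            for row, ql in zip(topk, query_labels)]
    return float(np.mean(hits))

# Toy usage: two queries, three gallery items, species-level labels.
queries = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
r1 = recall_at_k(queries, gallery,
                 ["species_a", "species_b"],
                 ["species_a", "species_b", "species_c"], k=1)
```

Evaluating at coarser levels (Genus, Family) only changes the label arrays passed in, which is one reason a single shared embedding space makes this benchmark convenient to run in all six directions.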