A Human-Inspired Decoupled Architecture for Efficient Audio Representation Learning

arXiv cs.AI / March 30, 2026


Key Points

  • The paper introduces HEAR (Human-inspired Efficient Audio Representation), a decoupled audio model designed to reduce the parameter count and quadratic compute cost of standard Transformer-based self-supervised learning.
  • HEAR separates processing into an Acoustic Model for local feature extraction and a Task Model for global semantic integration, inspired by how humans disentangle local acoustic cues from broader context.
  • It uses an Acoustic Tokenizer trained with knowledge distillation to support robust Masked Audio Modeling (MAM).
  • Experiments report strong efficiency: about 15M parameters and 9.47 GFLOPs at inference, a fraction of the 85M–94M parameters typical of audio foundation models, while maintaining competitive results on multiple audio classification benchmarks.
  • The authors provide code and pre-trained models via the linked GitHub repository to facilitate reuse and further experimentation.
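The decoupled pipeline and MAM objective described above can be sketched end to end in a few lines. This is an illustrative toy, not the paper's implementation: the window size, feature dimension, mask ratio, and codebook size are all invented for the example, and fixed random projections stand in for the learned Acoustic Model, Task Model, and distilled tokenizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mel-spectrogram: 100 frames x 64 mel bins (shapes are illustrative).
spec = rng.standard_normal((100, 64))

# --- Acoustic Model (local): encode each short window independently ---
# A fixed random projection stands in for the learned local encoder.
win, dim = 4, 32
W_local = rng.standard_normal((win * 64, dim)) / np.sqrt(win * 64)
windows = spec.reshape(-1, win * 64)          # 25 non-overlapping windows
local_feats = np.tanh(windows @ W_local)      # (25, 32)

# --- Masked Audio Modeling: hide a random subset of windows ---
mask = rng.random(len(local_feats)) < 0.3
masked = local_feats.copy()
masked[mask] = 0.0                            # a learnable mask token in practice

# --- Task Model (global): one attention pass over all window summaries ---
scores = masked @ masked.T / np.sqrt(dim)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
global_feats = attn @ masked                  # (25, 32) contextualized features

# --- Acoustic Tokenizer targets: nearest codebook entry per window ---
# In HEAR the tokenizer is trained with knowledge distillation; here a
# random codebook of 16 discrete "acoustic tokens" stands in for it.
codebook = rng.standard_normal((16, dim))
targets = np.argmin(
    ((local_feats[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

# Training would push predictions at masked positions toward `targets[mask]`.
print(global_feats.shape, targets.shape)
```

The key structural point survives even in this toy: the local encoder's cost grows linearly with audio length, and only the much shorter sequence of window summaries reaches the global model.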

Abstract

While self-supervised learning (SSL) has revolutionized audio representation, the excessive parameterization and quadratic computational cost of standard Transformers limit their deployment on resource-constrained devices. To address this bottleneck, we propose HEAR (Human-inspired Efficient Audio Representation), a novel decoupled architecture. Inspired by the human cognitive ability to isolate local acoustic features from global context, HEAR splits the processing pipeline into two dedicated modules: an Acoustic Model for local feature extraction and a Task Model for global semantic integration. Coupled with an Acoustic Tokenizer trained via knowledge distillation, our approach enables robust Masked Audio Modeling (MAM). Extensive experiments demonstrate that HEAR requires only 15M parameters and 9.47 GFLOPs for inference, operating at a fraction of the computational cost of conventional foundation models (which typically require 85M-94M parameters). Despite this high efficiency, HEAR achieves highly competitive performance across diverse audio classification benchmarks. The code and pre-trained models are available at https://github.com/HarunoriKawano/HEAR.
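The abstract's efficiency claim hinges on avoiding full-length quadratic self-attention. A back-of-envelope cost model makes the scaling argument concrete; the sequence length, width, and window size below are illustrative assumptions, not the paper's actual configuration.

```python
# Rough cost model for the quadratic-attention bottleneck. All numbers
# are illustrative assumptions, not taken from the paper.

def attn_flops(seq_len: int, dim: int) -> int:
    """Approximate FLOPs of one self-attention layer's QK^T and attn@V:
    two (T x T x d) matmuls at 2 FLOPs per multiply-accumulate."""
    return 2 * 2 * seq_len * seq_len * dim

T, d = 1000, 768          # e.g. ~10 s of audio at 100 frames/s, ViT-Base width
full = attn_flops(T, d)   # global attention over every frame

# Decoupled alternative: local processing in windows of 20 frames,
# then global attention over the 50 resulting window summaries.
win = 20
local = (T // win) * attn_flops(win, d)   # attention confined to each window
global_ = attn_flops(T // win, d)         # attention over window summaries
decoupled = local + global_

print(f"full: {full / 1e9:.2f} GFLOPs, decoupled: {decoupled / 1e9:.3f} GFLOPs")
print(f"speedup factor: {full / decoupled:.0f}")
```

Under these toy numbers the decoupled layout cuts attention FLOPs by more than an order of magnitude, which is the qualitative effect behind HEAR's reported 9.47 GFLOPs at inference.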