AI Navigate

EDMFormer: Genre-Specific Self-Supervised Learning for Music Structure Segmentation

arXiv cs.AI / March 11, 2026

Models & Research

Key Points

  • EDMFormer is a transformer-based model designed for music structure segmentation specialized for Electronic Dance Music (EDM), addressing the weakness of existing models that rely on lyrical or harmonic similarity.
  • The model uses self-supervised audio embeddings trained on EDM-98, a newly created genre-specific dataset of 98 professionally annotated EDM tracks that reflect the genre's distinctive structural elements such as buildups, drops, and breakdowns.
  • EDMFormer substantially improves boundary detection and section labelling over existing approaches, particularly for drop and buildup sections.
  • The results show that combining learned representations with genre-specific data and structural priors drives the performance gains, and suggest applicability to other specialized music genres and broader audio analysis tasks.

Computer Science > Sound

arXiv:2603.08759 (cs)
[Submitted on 8 Mar 2026]

Title:EDMFormer: Genre-Specific Self-Supervised Learning for Music Structure Segmentation

Authors: Sahal Sajeer and 3 other authors
Abstract:Music structure segmentation is a key task in audio analysis, but existing models perform poorly on Electronic Dance Music (EDM). This problem exists because most approaches rely on lyrical or harmonic similarity, which works well for pop music but not for EDM. EDM structure is instead defined by changes in energy, rhythm, and timbre, with different sections such as buildup, drop, and breakdown. We introduce EDMFormer, a transformer model that combines self-supervised audio embeddings using an EDM-specific dataset and taxonomy. We release this dataset as EDM-98: a group of 98 professionally annotated EDM tracks. EDMFormer improves boundary detection and section labelling compared to existing models, particularly for drops and buildups. The results suggest that combining learned representations with genre-specific data and structural priors is effective for EDM and could be applied to other specialized music genres or broader audio domains.
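The paper's architecture is not detailed in this abstract, but a common baseline for the boundary-detection subtask it evaluates is Foote-style checkerboard-kernel novelty over a self-similarity matrix of frame embeddings. The sketch below (function names and parameters are our own, not from the paper) illustrates that idea for any embedding sequence, such as self-supervised audio features:

```python
import numpy as np

def novelty_curve(embeddings, kernel_size=16):
    """Checkerboard-kernel novelty over a cosine self-similarity matrix.

    embeddings: (n_frames, dim) array of per-frame features.
    Returns an (n_frames,) novelty score; peaks suggest section boundaries.
    """
    # Cosine self-similarity matrix of the frame embeddings.
    norm = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-9)
    ssm = norm @ norm.T

    # Checkerboard kernel: +1 for within-block quadrants, -1 for cross-block.
    half = kernel_size // 2
    sign = np.sign(np.arange(kernel_size) - half + 0.5)
    kernel = np.outer(sign, sign)

    # Slide the kernel along the SSM diagonal; high response means the
    # frames before and after i are internally similar but differ from
    # each other, i.e. a likely section boundary.
    n = ssm.shape[0]
    novelty = np.zeros(n)
    for i in range(half, n - half):
        patch = ssm[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(patch * kernel)
    return novelty

def pick_boundaries(novelty, threshold=0.5):
    """Return indices of local maxima above threshold * max(novelty)."""
    thresh = threshold * novelty.max()
    return [i for i in range(1, len(novelty) - 1)
            if novelty[i] > thresh
            and novelty[i] >= novelty[i - 1]
            and novelty[i] >= novelty[i + 1]]
```

On a toy sequence of two homogeneous sections (e.g. 50 frames near one vector, then 50 near another), the novelty curve peaks at the change point. A learned model like EDMFormer would replace this hand-crafted kernel with a trained boundary predictor, but the self-similarity structure of the embeddings is what both approaches exploit.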
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.08759 [cs.SD]
  (or arXiv:2603.08759v1 [cs.SD] for this version)
  https://doi.org/10.48550/arXiv.2603.08759

Submission history

From: Oscar Chung [view email]
[v1] Sun, 8 Mar 2026 15:56:37 UTC (522 KB)