AI Navigate

The Radio-Frequency Transformer for Signal Separation

arXiv cs.LG / 11 Mar 2026

Signals & Early Trends · Models & Research

Key Points

  • The paper proposes a fully data-driven approach to the signal-separation problem of estimating a signal of interest (SOI) contaminated by unknown non-Gaussian interference.
  • The method improves on conventional mean-squared-error (MSE) training by pairing a discrete tokenizer with an end-to-end transformer trained on a cross-entropy loss.
  • The tokenizer modifies Google's SoundStream, adding transformer layers and switching to finite scalar quantization (FSQ), which yields a substantial performance gain.
  • Evaluated on the MIT RF Challenge dataset, the method reduces bit-error rate (BER) by up to 122x over prior techniques when separating a QPSK signal from 5G interference.
  • The learned representation generalizes well to unseen interference types without side information, suggesting applications beyond radio-frequency signals, such as gravitational-wave detection and other scientific sensing tasks.
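The cross-entropy objective over discrete SOI tokens described above can be illustrated with a minimal NumPy sketch. This is not the authors' code; the shapes and names (`T` time steps, vocabulary size `V`) are assumptions for illustration:

```python
import numpy as np

def token_cross_entropy(logits, target_ids):
    """Cross-entropy over discrete SOI tokens, in place of MSE on waveforms.

    logits: (T, V) unnormalized scores over a vocabulary of V token ids.
    target_ids: (T,) integer ids of the clean SOI's tokens.
    """
    # Numerically stable log-softmax per time step.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of the correct token at each step.
    return -log_probs[np.arange(len(target_ids)), target_ids].mean()
```

Unlike MSE, which averages over plausible waveforms and blurs the estimate, this loss lets the model place probability mass on discrete hypotheses and commit to one at decode time.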

Computer Science > Machine Learning

arXiv:2603.09201 (cs)
[Submitted on 10 Mar 2026]

Title: The Radio-Frequency Transformer for Signal Separation

Abstract: We study the problem of signal separation: estimating a signal of interest (SOI) contaminated by an unknown non-Gaussian background/interference. Given training data consisting of examples of the SOI and the interference, we show how to build a fully data-driven signal separator. To that end, we learn a good discrete tokenizer for the SOI and then train an end-to-end transformer on a cross-entropy loss. Training with cross-entropy shows substantial improvements over the conventional mean-squared error (MSE). Our tokenizer is a modification of Google's SoundStream that incorporates additional transformer layers and switches from VQ-VAE to finite scalar quantization (FSQ). Across real and synthetic mixtures from the MIT RF Challenge dataset, our method achieves competitive performance, including a 122x reduction in bit-error rate (BER) over prior state-of-the-art techniques for separating a QPSK signal from 5G interference. The learned representation adapts to the interference type without side information and shows zero-shot generalization to unseen mixtures at inference time, underscoring its potential beyond RF. Although we instantiate our approach on radio-frequency mixtures, we expect the same architecture to apply to gravitational-wave data (e.g., LIGO strain) and other scientific sensing problems that require data-driven modeling of background and noise.
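The switch from VQ-VAE to finite scalar quantization (FSQ) mentioned in the abstract can be sketched as follows. This is a generic FSQ illustration, not the paper's tokenizer (which is a SoundStream-style network); the level counts and helper names are assumptions:

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite scalar quantization: bound each latent channel, then round
    it to a small fixed grid. levels[i] is the number of quantization
    levels for channel i; no learned codebook is needed."""
    z = np.tanh(np.asarray(z, dtype=float))  # squash each channel into (-1, 1)
    cols = []
    for i, L in enumerate(levels):
        half = (L - 1) / 2.0
        cols.append(np.round(z[..., i] * half) / half)  # grid of L values
    return np.stack(cols, axis=-1)

def fsq_token_id(q, levels):
    """Map one quantized vector to a single integer token id; the implied
    codebook has prod(levels) entries."""
    idx = 0
    for i, L in enumerate(levels):
        half = (L - 1) / 2.0
        digit = int(round(float(q[i]) * half + half))  # 0 .. L-1
        idx = idx * L + digit
    return idx
```

Because the grid is fixed, FSQ avoids the codebook-collapse and commitment-loss machinery of VQ-VAE while still producing the discrete token ids the transformer's cross-entropy loss requires.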
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09201 [cs.LG]
  (or arXiv:2603.09201v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09201

Submission history

From: Egor Lifar
[v1] Tue, 10 Mar 2026 05:22:02 UTC (13,454 KB)