
A Multi-Prototype-Guided Federated Knowledge Distillation Approach in AI-RAN Enabled Multi-Access Edge Computing System

arXiv cs.LG / 3/11/2026

Models & Research

Key Points

  • The paper addresses challenges in federated learning (FL) within AI-native Radio Access Network (AI-RAN) enabled Multi-Access Edge Computing (MEC) systems, particularly handling non-independent and identically distributed (non-IID) data across edge devices.
  • A novel multi-prototype-guided federated knowledge distillation (MP-FedKD) approach is proposed, integrating self-knowledge distillation with a multi-prototype strategy to reduce the information loss caused by averaging embedding vectors into a single prototype.
  • The approach introduces a conditional hierarchical agglomerative clustering (CHAC) method and a prototype alignment scheme to better represent data distributions locally and globally.
  • A new loss function (LEMGP loss) is designed to strengthen the relationship between global prototypes and local embeddings, enhancing model accuracy.
  • Extensive experiments demonstrate that MP-FedKD significantly improves accuracy and reduces error metrics (RMSE and MAE) across various non-IID data scenarios compared to state-of-the-art baselines.
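The multi-prototype idea above can be sketched in code. Instead of averaging all of a class's embedding vectors into one prototype, the vectors are merged bottom-up (agglomeratively) into several clusters, each yielding its own prototype. This is a minimal single-linkage-style sketch, not the paper's conditional CHAC method; the function name `multi_prototypes` and the fixed cluster count are illustrative assumptions.

```python
import numpy as np

def multi_prototypes(embeddings, n_prototypes):
    """Agglomeratively cluster one class's embedding vectors and return
    one mean prototype per cluster, rather than a single class average."""
    # start with each embedding as its own cluster (lists of row indices)
    clusters = [[i] for i in range(len(embeddings))]
    while len(clusters) > n_prototypes:
        # centroids of the current clusters
        cents = [embeddings[c].mean(axis=0) for c in clusters]
        # find and merge the closest pair of clusters
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(cents[a] - cents[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    # one prototype (mean embedding) per remaining cluster
    return np.stack([embeddings[c].mean(axis=0) for c in clusters])

# two well-separated groups of embeddings within one class
emb = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
protos = multi_prototypes(emb, 2)
```

With two separated groups, the two prototypes land near the group centers (0 and 10), whereas a single averaged prototype would sit at 5, representing neither group well.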

Computer Science > Machine Learning

arXiv:2603.09727 (cs)
[Submitted on 10 Mar 2026]

Title:A Multi-Prototype-Guided Federated Knowledge Distillation Approach in AI-RAN Enabled Multi-Access Edge Computing System

Abstract: With the development of wireless networks, Multi-Access Edge Computing (MEC) and the Artificial Intelligence (AI)-native Radio Access Network (RAN) have attracted significant attention. In particular, the integration of AI-RAN and MEC is envisioned to transform network efficiency and responsiveness, making AI-RAN enabled MEC systems valuable to investigate. Federated learning (FL) is emerging as a promising approach for such systems, as it enables edge devices to train a global model cooperatively without revealing their raw data. However, conventional FL struggles with non-independent and identically distributed (non-IID) data. A single prototype, obtained by averaging the embedding vectors of each class, can be employed in FL to handle this data heterogeneity; nevertheless, the averaging operation may discard useful information. Therefore, this paper proposes a multi-prototype-guided federated knowledge distillation (MP-FedKD) approach. Specifically, self-knowledge distillation is integrated into FL to deal with the non-IID issue. To mitigate the information loss caused by the single-prototype strategy, a multi-prototype strategy is adopted, comprising a conditional hierarchical agglomerative clustering (CHAC) approach and a prototype alignment scheme. Additionally, a novel loss function (called LEMGP loss) is designed for each local client, emphasizing the relationship between global prototypes and local embeddings. Extensive experiments over multiple datasets with various non-IID settings show that the proposed MP-FedKD approach outperforms the considered state-of-the-art baselines in accuracy, average accuracy, and error metrics (RMSE and MAE).
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09727 [cs.LG]
  (or arXiv:2603.09727v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09727

Submission history

From: Luyao Zou
[v1] Tue, 10 Mar 2026 14:32:41 UTC (2,959 KB)