Towards Privacy-Preserving Machine Translation at the Inference Stage: A New Task and Benchmark

arXiv cs.CL / 3/17/2026

Key Points

  • The paper proposes Privacy-Preserving Machine Translation (PPMT) to protect user text during model inference, addressing privacy leakage in online translation services.
  • It highlights the lack of a defined privacy-protection task, dedicated evaluation datasets, metrics, and benchmarks for MT inference privacy.
  • The authors construct three benchmark datasets, define corresponding evaluation metrics, and propose baseline benchmark methods as a starting point for this task.
  • By focusing on protecting the privacy of named entities in text, which often carry personal information and commercial secrets, the work aims to lay a solid foundation for privacy protection in machine translation.

Abstract

Current online translation services require sending user text to cloud servers, posing a risk of privacy leakage when the text contains sensitive information. This risk hinders the adoption of online translation services in privacy-sensitive scenarios. One way to mitigate it is to introduce privacy-protection mechanisms targeting the inference stage of translation models. However, compared to subfields of NLP such as text classification and summarization, the machine translation community has explored inference-stage privacy protection only to a limited extent: the field lacks a clearly defined privacy-protection task for the inference stage, dedicated evaluation datasets and metrics, and reference benchmark methods. The absence of these elements has seriously constrained in-depth exploration of this direction. To bridge this gap, this paper proposes a novel "Privacy-Preserving Machine Translation" (PPMT) task, aiming to protect private information in text during the model inference stage. For this task, we construct three benchmark test datasets, design corresponding evaluation metrics, and propose a series of benchmark methods as a starting point. The definition of privacy is complex and diverse; considering that named entities often carry personal information and commercial secrets, we focus our research on protecting only the privacy of named entities in the text. We expect this work to provide a new perspective and a solid foundation for the privacy-protection problem in machine translation.