From Synthetic to Native: Benchmarking Multilingual Intent Classification in Logistics Customer Service

arXiv cs.CL · March 25, 2026


Key Points

  • The paper argues that many multilingual intent-classification benchmarks use machine-translated text that is cleaner than real customer queries, leading to inflated estimates of robustness in logistics customer service.
  • It introduces a new public hierarchical multilingual intent-classification benchmark built from real de-identified logistics customer-service logs, including ~30K curated queries from historical data.
  • The dataset uses a two-level taxonomy (13 parent intents, 17 leaf intents) and includes English, Spanish, and Arabic as seen languages, with test-only languages (e.g., Indonesian, Chinese) enabling zero-shot evaluation.
  • To quantify the synthetic-to-real gap, the authors provide paired native and machine-translated test sets and evaluate multilingual encoders, embedding models, and small language models in both flat and hierarchical settings.
  • Experimental results show that translated test sets significantly overestimate performance on noisy native queries, particularly for long-tail intents and cross-lingual transfer, highlighting the need for more realistic benchmarks.
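Because the paper's paired native/machine-translated test sets share the same gold intents, the "synthetic-to-real gap" can be read as a simple difference in a metric such as macro-F1 between the two conditions. The sketch below is a minimal illustration of that idea, not the paper's evaluation code; the label names and helper functions are hypothetical.

```python
def macro_f1(gold, pred):
    """Macro-averaged F1 over the label set observed in the gold labels."""
    labels = set(gold)
    f1_scores = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)

def synthetic_to_real_gap(gold, pred_translated, pred_native):
    """Positive gap => the cleaner translated test set overestimates
    what the model achieves on noisy native queries."""
    return macro_f1(gold, pred_translated) - macro_f1(gold, pred_native)

# Toy example: the model gets everything right on translated text,
# but confuses one native query.
gold = ["track", "refund", "track", "address"]
pred_mt = ["track", "refund", "track", "address"]
pred_native = ["track", "refund", "address", "address"]
gap = synthetic_to_real_gap(gold, pred_mt, pred_native)
```

Evaluating long-tail intents separately (as the paper does) would amount to restricting `gold`/`pred` to the rare labels before computing the same statistic.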

Abstract

Multilingual intent classification is central to customer-service systems on global logistics platforms, where models must process noisy user queries across languages and hierarchical label spaces. Yet most existing multilingual benchmarks rely on machine-translated text, which is typically cleaner and more standardized than native customer requests and can therefore overestimate real-world robustness. We present a public benchmark for hierarchical multilingual intent classification constructed from real logistics customer-service logs. The dataset contains approximately 30K de-identified, stand-alone user queries curated from 600K historical records through filtering, LLM-assisted quality control, and human verification, and is organized into a two-level taxonomy with 13 parent and 17 leaf intents. English, Spanish, and Arabic are included as seen languages, while Indonesian, Chinese, and additional test-only languages support zero-shot evaluation. To directly measure the gap between synthetic and real evaluation, we provide paired native and machine-translated test sets and benchmark multilingual encoders, embedding models, and small language models under flat and hierarchical protocols. Results show that translated test sets substantially overestimate performance on noisy native queries, especially for long-tail intents and cross-lingual transfer, underscoring the need for more realistic multilingual intent benchmarks.
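The hierarchical protocol mentioned in the abstract, predicting a parent intent first and then a leaf only among that parent's children, can be sketched as follows. The taxonomy fragment and the toy keyword classifiers are hypothetical stand-ins (the real benchmark has 13 parent and 17 leaf intents, which the summary does not enumerate).

```python
# Hypothetical two-level taxonomy fragment; placeholder intent names only.
TAXONOMY = {
    "shipment": ["track_package", "report_delay"],
    "billing": ["request_refund", "dispute_charge"],
}

def hierarchical_predict(query, parent_clf, leaf_clf):
    """Two-stage protocol: choose a parent intent first, then pick a
    leaf restricted to that parent's children in the taxonomy."""
    parent = parent_clf(query)
    leaves = TAXONOMY[parent]
    if len(leaves) == 1:
        return parent, leaves[0]
    scores = leaf_clf(query, leaves)  # scores only over candidate leaves
    return parent, max(leaves, key=scores.get)

# Toy classifiers for illustration; a real system would use the
# encoders, embedding models, or small LMs benchmarked in the paper.
def toy_parent(query):
    return "billing" if ("refund" in query or "charge" in query) else "shipment"

def toy_leaf(query, leaves):
    # Score a leaf by whether any word of its name appears in the query.
    return {leaf: float(any(w in query for w in leaf.split("_")))
            for leaf in leaves}

print(hierarchical_predict("where is my package", toy_parent, toy_leaf))
```

A flat protocol, by contrast, would score all leaf intents at once and ignore the parent level; the paper compares models under both settings.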