Reheat Nachos for Dinner? Evaluating AI Support for Cross-Cultural Communication of Neologisms

arXiv cs.CL / 4/28/2026


Key Points

  • The study investigates whether AI tools help non-native speakers understand and use English neologisms/slang in informal cross-cultural communication.
  • In a human-subjects experiment (N=234), participants learned neologisms with one of three AI supports (definitions, simplified rewrites, or explanations of meaning and usage), which the authors compared with a non-AI dictionary condition.
  • AI explanations produced the largest improvements in communicative competence as rated by native speakers, while contextual-appropriateness judgments did not differ significantly across support types.
  • The findings reveal a gap between participants’ self-perceived competence and native-speaker ratings, as well as a persistent difference between non-native and native-produced writing, suggesting that current AI tools still have limitations.
  • The paper concludes with design implications for future tools that better support culturally appropriate, context-sensitive use of neologisms.

Abstract

Neologisms and emerging slang are central to daily conversation, yet challenging for non-native speakers (NNS) to interpret and use appropriately in cross-cultural communication with native speakers (NS). NNS increasingly turn to Artificial Intelligence (AI) tools to learn these words. We study the utility of such tools in mediating an informal communication scenario through a human-subjects study (N=234): NNS participants learn English neologisms with AI support, write messages using the learned word to an NS friend, and judge the contextual appropriateness of the neologism in two provided writing samples. Using both NS evaluator-rated communicative competence of NNS-produced writing and NNS' contextual appropriateness judgments, we compare three AI-based support conditions (AI Definition, AI Rewrite into simpler English, and AI Explanation of meaning and usage) with a Non-AI Dictionary condition. We show that AI Explanation yields the largest gains over no support in NS-rated competence, while contextual appropriateness judgments do not differ significantly across support conditions. NNS participants' self-reported perceptions tend to overestimate NS ratings, revealing a mismatch between perceived and actual competence. We further observe a significant gap between NNS- and NS-produced writing, highlighting the limitations of current AI tools and informing the design of future tools.