Cross-Lingual Attention Distillation with Personality-Informed Generative Augmentation for Multilingual Personality Recognition

arXiv cs.CL / 4/13/2026


Key Points

  • The paper introduces ADAM, a multilingual personality recognition method that uses cross-lingual attention distillation (CLAD) to learn personality traits across languages despite limited multilingual data.
  • It addresses dataset scarcity by starting from an English personality dataset and using an LLM for translation-based generative data augmentation, further improved with Personality-Informed Generative Augmentation (PIGA).
  • The approach generates augmented training data for multiple languages (Japanese, Chinese, Malay, and French) and includes analyses and an ablation study to validate the contributions of the augmentation components.
  • Experimental results report that CLAD trained with PIGA augmentation outperforms a standard BCE baseline across languages and traits, with average BA score gains of +0.0573 on the Essays dataset and +0.0968 on the Kaggle dataset.
  • The authors provide a repository with model weights, dataset, and code to support reproducibility and benchmarking.
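This summary does not spell out how CLAD combines its losses. As a purely hypothetical sketch of what cross-lingual attention distillation typically looks like, a student model trained on translated text can be penalized for diverging from a teacher's attention distribution over the English source, on top of the standard BCE trait-classification loss. Every name, shape, and the loss weighting below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_distillation_loss(teacher_logits, student_logits):
    """KL divergence between teacher and student attention
    distributions (an assumed form of the CLAD objective)."""
    t = softmax(teacher_logits)
    s = softmax(student_logits)
    return float(np.sum(t * (np.log(t + 1e-12) - np.log(s + 1e-12))))

def bce_loss(probs, labels):
    # Standard binary cross-entropy over per-trait labels (the baseline loss).
    probs = np.clip(probs, 1e-12, 1 - 1e-12)
    return float(-np.mean(labels * np.log(probs)
                          + (1 - labels) * np.log(1 - probs)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 4))                   # teacher attention logits (English input)
student = teacher + 0.1 * rng.normal(size=(4, 4))   # student logits (translated input)

kd = attention_distillation_loss(teacher, student)
cls = bce_loss(np.array([0.8, 0.3, 0.6, 0.2, 0.9]),  # predicted trait probabilities
               np.array([1, 0, 1, 0, 1]))            # binary trait labels
total = cls + 0.5 * kd  # 0.5 is an arbitrary illustrative weight
print(round(total, 4))
```

The design intuition is that the distillation term transfers *where* the English teacher attends to the multilingual student, while the BCE term keeps the student accurate on the trait labels; the actual loss form and weighting in ADAM may differ.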

Abstract

While significant work has been done on personality recognition, the lack of multilingual datasets remains an unresolved challenge. To address this, we propose ADAM (Cross-Lingual (A)ttention (D)istillation with Personality-Informed Generative (A)ugmentation for (M)ultilingual Personality Recognition), a state-of-the-art approach designed to advance multilingual personality recognition. Our approach leverages an existing English-language personality dataset as the primary source and employs a large language model (LLM) for translation-based augmentation, enhanced by Personality-Informed Generative Augmentation (PIGA), to generate high-quality training data in multiple languages, including Japanese, Chinese, Malay, and French. We provide a thorough analysis to justify the effectiveness of these augmentation techniques. Building on these advancements, ADAM integrates Cross-Lingual Attention Distillation (CLAD) to train a model capable of understanding and recognizing personality traits across languages, bridging linguistic and cultural gaps in personality analysis. This research presents a thorough evaluation of the proposed augmentation method, incorporating an ablation study on recognition performance to ensure fair comparisons and robust validation. Overall, with PIGA augmentation, the findings demonstrate that CLAD significantly outperforms the standard BCE baseline across all languages and personality traits, achieving notable improvements in average BA scores: 0.6332 (+0.0573) on the Essays dataset and 0.7448 (+0.0968) on the Kaggle dataset. The CLAD-trained model also demonstrated strong generalizability and achieved benchmark performance comparable to current leading encoder models. The model weights, dataset, and algorithm repository are available at https://research.jingjietan.com/?q=ADAM.
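The BA scores reported above presumably refer to balanced accuracy (the mean of per-class recall), a common metric for the imbalanced binary trait labels in the Essays and Kaggle datasets. The sketch below shows the metric under that assumption; it is not taken from the paper's code.

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: mean of sensitivity (recall on the positive
    class) and specificity (recall on the negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)               # number of positive examples
    neg = len(y_true) - pos         # number of negative examples
    return 0.5 * (tp / pos + tn / neg)

# Toy binary predictions for a single Big Five trait.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
print(round(balanced_accuracy(y_true, y_pred), 4))  # → 0.7333
```

Unlike plain accuracy, balanced accuracy is not inflated by a majority class, which makes per-language, per-trait comparisons such as the +0.0573 and +0.0968 gains above easier to interpret.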