AI Navigate

Indirect Question Answering in English, German and Bavarian: A Challenging Task for High- and Low-Resource Languages Alike

arXiv cs.CL / 3/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper presents two multilingual IQA corpora, InQA+ and GenIQA, covering English, Standard German, and Bavarian, with InQA+ being hand-annotated and GenIQA generated via GPT-4o-mini.
  • It shows IQA is pragmatically hard, with low performance even in English and signs of severe overfitting, indicating that data quality and size are critical.
  • Experiments with multilingual transformers (mBERT, XLM-R, mDeBERTa) reveal that label ambiguity, label-set choices, and dataset size strongly influence results.
  • The authors offer recommendations to address these challenges and highlight that larger training datasets improve IQA performance, while GPT-4o-mini may not yield high-quality IQA data.

Abstract

Indirectness is a common feature of daily communication, yet it is underexplored in NLP research for both low- and high-resource languages. Indirect Question Answering (IQA) aims to classify the polarity of indirect answers. In this paper, we present two multilingual IQA corpora of varying quality, both covering English, Standard German and Bavarian, a German dialect without a standard orthography: InQA+, a small, high-quality evaluation dataset with hand-annotated labels, and GenIQA, a larger training dataset containing artificial data generated by GPT-4o-mini. Based on several experimental variations with multilingual transformer models (mBERT, XLM-R and mDeBERTa), we find that IQA is a pragmatically hard task that comes with various challenges, and we suggest and employ recommendations to tackle them. Our results reveal low performance, even for English, and severe overfitting. We analyse various factors that influence these results, including label ambiguity, label set and dataset size. We find that IQA performance is poor in both high-resource (English, German) and low-resource (Bavarian) languages, and that a large amount of training data is beneficial. Further, GPT-4o-mini does not possess enough pragmatic understanding to generate high-quality IQA data in any of our tested languages.