Multilingual Cognitive Impairment Detection in the Era of Foundation Models
arXiv cs.CL / 4/9/2026
Key Points
- The study evaluates cognitive impairment (CI) classification from speech transcripts in English, Slovene, and Korean using both zero-shot LLM classifiers and supervised tabular models under a leave-one-out setup.
- Zero-shot LLMs serve as competitive no-training baselines, but supervised tabular approaches generally outperform them, especially when engineered linguistic features are included and fused with transcript embeddings.
- The experiments compare three input settings—transcript-only, linguistic-features-only, and combined—and show that integrating structured linguistic signals improves robustness across languages.
- Few-shot tests indicate that the value of limited labeled data varies by language: some languages benefit substantially from a small amount of supervision, while others see little gain without richer feature representations.
- The authors conclude that, in small-data CI detection, structured linguistic features and fusion-based classifiers remain reliable and strong compared with purely LLM-driven approaches.
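The fusion setup described in the key points, combining transcript embeddings with engineered linguistic features in a tabular classifier, can be sketched roughly as below. This is a minimal illustration, not the paper's pipeline: the feature dimensions, synthetic data, and the logistic-regression head are all assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_features(embeddings: np.ndarray, linguistic: np.ndarray) -> np.ndarray:
    """Concatenate transcript embeddings with engineered linguistic features
    into a single tabular feature matrix (one row per transcript)."""
    return np.hstack([embeddings, linguistic])

rng = np.random.default_rng(0)
n = 40
emb = rng.normal(size=(n, 16))   # stand-in for transcript embeddings
ling = rng.normal(size=(n, 4))   # stand-in for engineered linguistic features
# Synthetic binary CI labels, loosely tied to one linguistic feature
y = (ling[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

X = fuse_features(emb, ling)                      # combined input setting
clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)                             # training accuracy only
```

In practice, the embeddings would come from a sentence encoder over the speech transcripts, and the linguistic features from measures such as lexical diversity or syntactic complexity; any supervised tabular model could replace the logistic-regression head.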