Right at My Level: A Unified Multilingual Framework for Proficiency-Aware Text Simplification
arXiv cs.CL / 4/8/2026
Key Points
- The paper proposes Re-RIGHT, a unified reinforcement-learning framework for proficiency-aware, adaptive multilingual text simplification that requires no parallel-corpus supervision.
- It finds that prompt-based lexical simplification conditioned on target proficiency levels (CEFR/JLPT/TOPIK/HSK) performs poorly for easier proficiency bands and for non-English languages, even with strong LLMs.
- To overcome this, the authors collect 43K vocabulary-level data points across English, Japanese, Korean, and Chinese and train a compact 4B-parameter policy model.
- Re-RIGHT uses three reward modules (vocabulary coverage, semantic preservation, and coherence) to better target readability levels while preserving meaning and fluency.
- Experiments show improved lexical coverage at target proficiency levels over strong LLM baselines, while maintaining semantic and fluency quality.
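To make the three-module reward concrete, here is a minimal sketch of how such rewards are typically combined during RL training. The weights, the toy vocabulary list, and the stand-in `semantic_score`/`coherence_score` inputs are illustrative assumptions, not the paper's actual reward models.

```python
# Hedged sketch of a proficiency-aware reward: a weighted sum of
# vocabulary coverage, semantic preservation, and coherence.
# Weights and the allowed-vocabulary set are hypothetical.

def vocabulary_coverage(tokens, allowed_vocab):
    """Fraction of output tokens that fall within the target-level vocabulary."""
    if not tokens:
        return 0.0
    return sum(t in allowed_vocab for t in tokens) / len(tokens)

def combined_reward(tokens, allowed_vocab, semantic_score, coherence_score,
                    weights=(0.4, 0.4, 0.2)):
    """Weighted sum of the three reward signals (weights are illustrative)."""
    w_vocab, w_sem, w_coh = weights
    return (w_vocab * vocabulary_coverage(tokens, allowed_vocab)
            + w_sem * semantic_score
            + w_coh * coherence_score)

# Example: a simplified sentence scored against a toy beginner-level vocabulary.
toy_vocab = {"the", "dog", "runs", "fast", "very"}
tokens = ["the", "dog", "runs", "very", "fast"]
reward = combined_reward(tokens, toy_vocab,
                         semantic_score=0.9, coherence_score=0.8)
print(round(reward, 3))  # → 0.92
```

In practice the semantic and coherence terms would come from learned scorers (e.g. a sentence-similarity model and a fluency model); the sketch only shows how coverage at the target proficiency band enters the reward alongside them.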