ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
arXiv cs.CL · March 17, 2026
📰 News · Models & Research
Key Points
- ExPosST proposes explicit position allocation to resolve the positional mismatch when using decoder-only LLMs for simultaneous machine translation (SimulMT).
- It reserves fixed positional slots for incoming source tokens, so decoding can reuse the KV cache efficiently across different positional encoding schemes (see the first sketch after this list).
- The authors introduce a policy-consistent fine-tuning strategy that aligns training-time attention with inference-time decoding behavior, closing the gap between fine-tuning and streaming inference (see the second sketch below).
- Experiments on multiple language pairs show that ExPosST enables simultaneous translation under diverse read/write policies and improves compatibility with various positional encoding methods.
- The framework aims to improve inference efficiency, positional consistency, and broad model compatibility in LLM-based SimulMT.
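To make the positional-slot idea concrete, here is a minimal sketch of what explicit position allocation could look like, assuming the core mechanism is reserving a fixed block of position ids for source tokens so that target-side positions stay stable as the source streams in. The constant `MAX_SRC_SLOTS` and the function `allocate_positions` are illustrative names, not from the paper.

```python
# Minimal sketch of explicit position allocation for streaming input.
# Assumption: source tokens always occupy positions [0, MAX_SRC_SLOTS)
# and target tokens occupy positions from MAX_SRC_SLOTS onward, no
# matter how source and target tokens interleave during streaming.

MAX_SRC_SLOTS = 512  # assumed upper bound on source length


def allocate_positions(num_src_seen: int, num_tgt_emitted: int):
    """Return position ids for all tokens currently in context.

    Source token i always gets position i (inside the reserved block);
    target token j always gets position MAX_SRC_SLOTS + j.
    """
    src_positions = list(range(num_src_seen))
    tgt_positions = [MAX_SRC_SLOTS + j for j in range(num_tgt_emitted)]
    return src_positions, tgt_positions
```

Under such a scheme, a token's position id never changes when later source tokens arrive, so its cached key/value vectors remain valid and the KV cache can be reused rather than recomputed at every read step.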
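Likewise, a hedged sketch of a policy-consistent training mask, assuming a simple wait-k read/write policy; the paper's actual policies and masking details may differ, and `waitk_attention_mask` is a hypothetical helper.

```python
import torch

def waitk_attention_mask(src_len: int, tgt_len: int, k: int) -> torch.Tensor:
    """Boolean mask of shape (tgt_len, src_len) for wait-k training.

    mask[j, i] is True iff target step j may attend to source token i,
    i.e. i < min(j + k, src_len) — exactly the source prefix the model
    would have read at inference time under wait-k.
    """
    visible = (torch.arange(tgt_len).unsqueeze(1) + k).clamp(max=src_len)
    src_idx = torch.arange(src_len).unsqueeze(0)
    return src_idx < visible


# Example: with k=3, target step 0 sees 3 source tokens, step 1 sees 4, ...
mask = waitk_attention_mask(src_len=10, tgt_len=8, k=3)
```

Applying masks like this during fine-tuning means the model is trained on the same partial-source views it will face when decoding, which is the consistency the paper's training strategy is aiming for.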


