ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
arXiv cs.CL / 3/17/2026
📰 News · Models & Research
Key Points
- ExPosST proposes explicit position allocation to resolve the positional mismatch when using decoder-only LLMs for simultaneous machine translation (SimulMT).
- It reserves fixed positional slots for incoming source tokens, so decoding can reuse the KV cache efficiently across different positional encoding schemes (a minimal sketch follows this list).
- The authors introduce a policy-consistent fine-tuning strategy that aligns training with inference-time decoding behavior, closing the gap between fine-tuning and inference (see the masking sketch below).
- Experiments on multiple language pairs show that ExPosST supports simultaneous translation under diverse policies and remains compatible with various positional encoding schemes.
- The framework targets inference efficiency, positional consistency, and broad model compatibility in LLM-based SimulMT.
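To make the reserved-slot idea concrete, here is a minimal sketch in PyTorch. The function name `allocate_positions` and the slot budget are hypothetical, not the authors' implementation; the point is simply that target positions are anchored past a fixed source window, so they never shift as source tokens stream in and the target-side KV cache stays valid.

```python
import torch

def allocate_positions(num_src_slots: int, num_tgt: int, src_seen: int):
    """Hypothetical position allocation: source tokens occupy reserved
    slots [0, num_src_slots); target tokens start at num_src_slots
    regardless of how many source tokens have actually arrived.

    Because a late-arriving source token fills its pre-reserved slot,
    previously assigned target positions (and their KV-cache entries)
    never shift when the policy reads more source text.
    """
    src_pos = torch.arange(src_seen)  # slots filled by the source read so far
    tgt_pos = torch.arange(num_src_slots, num_src_slots + num_tgt)
    return torch.cat([src_pos, tgt_pos])

# With 16 reserved source slots, target positions are identical whether
# 3 or 7 source tokens have streamed in:
print(allocate_positions(16, 4, 3))
# tensor([ 0,  1,  2, 16, 17, 18, 19])
print(allocate_positions(16, 4, 7))
# tensor([ 0,  1,  2,  3,  4,  5,  6, 16, 17, 18, 19])
```

Anchoring targets past a fixed window is what makes the scheme agnostic to the positional encoding: whatever encoding the base LLM uses, the position IDs it sees are stable across READ actions.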
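The summary does not spell out the masking details, so the following is a hedged sketch of one plausible policy-consistent attention mask, assuming a wait-k read/write schedule as the example policy (the paper targets diverse policies). During fine-tuning, each target row exposes only the source prefix that the policy would have read at that step, so training matches inference-time decoding behavior.

```python
import torch

def wait_k_mask(num_src: int, num_tgt: int, k: int) -> torch.Tensor:
    """Hypothetical policy-consistent mask for a wait-k schedule:
    target token j may attend to source tokens 0..min(j + k, num_src) - 1,
    mirroring what would have been read at inference time, plus the usual
    causal mask over previously generated target tokens.
    """
    size = num_src + num_tgt
    mask = torch.zeros(size, size, dtype=torch.bool)
    # Source side: standard causal self-attention over the source prefix.
    mask[:num_src, :num_src] = torch.tril(
        torch.ones(num_src, num_src, dtype=torch.bool)
    )
    for j in range(num_tgt):
        row = num_src + j
        visible_src = min(j + k, num_src)
        mask[row, :visible_src] = True        # source read under the policy
        mask[row, num_src:row + 1] = True     # causal over target tokens
    return mask

# Target token 0 under wait-3 sees only the first 3 of 5 source tokens:
m = wait_k_mask(num_src=5, num_tgt=4, k=3)
print(m[5].int())
# tensor([1, 1, 1, 0, 0, 1, 0, 0, 0])
```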
Related Articles
[D] Matryoshka Representation Learning
Reddit r/MachineLearning
Two new Qwen3.5 "Neo" fine-tunes focused on fast, efficient reasoning
Reddit r/LocalLLaMA
HKIC, Gobi Partners and HKU team up for fund backing university research start-ups
SCMP Tech
Yann LeCun’s New LeWorldModel (LeWM) Research Targets JEPA Collapse in Pixel-Based Predictive World Modeling
MarkTechPost
Streaming experts
Simon Willison's Blog