Beyond Sequential Distance: Inter-Modal Distance Invariant Position Encoding
arXiv cs.CV / 3/12/2026
Key Points
- The paper identifies that the distance-based inductive bias of Multimodal RoPE increasingly penalizes inter-modal attention as the text sequence grows, causing visual information to fade in long-context generation.
- It proposes inter-modal Distance Invariant Position Encoding (DIPE), which disentangles position encoding by modality to preserve intra-modal locality while anchoring inter-modal proximity.
- DIPE, when combined with Multimodal RoPE, mitigates the inter-modal distance penalty and keeps visual signals perceptually grounded across long contexts.
- Experimental results show preserved performance on short-context benchmarks alongside significantly improved long-context visual grounding, with code available at the linked GitHub repository.
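The idea summarized above can be sketched in code. The following is an illustrative toy, not the paper's actual formulation: it assumes a hypothetical `relative_positions` helper and a constant `anchor` parameter to show how intra-modal pairs could keep their sequential RoPE offsets (preserving locality) while inter-modal pairs are pinned to a fixed distance, so the vision-to-text penalty no longer grows with text length.

```python
# Illustrative sketch only; DIPE's real construction is defined in the paper.
import numpy as np

def relative_positions(modality_ids, anchor=1):
    """modality_ids: 1-D array, e.g. 0 = vision token, 1 = text token.
    Returns an (n, n) matrix of relative offsets that would feed a
    RoPE-style rotation. Same-modality pairs keep their true sequential
    offsets; cross-modality pairs use the constant `anchor` (hypothetical),
    making inter-modal distance invariant to sequence length."""
    pos = np.arange(len(modality_ids))
    rel = pos[:, None] - pos[None, :]                  # standard RoPE offsets
    same = modality_ids[:, None] == modality_ids[None, :]
    return np.where(same, rel, np.sign(rel) * anchor)  # anchor cross-modal pairs

# 2 vision tokens followed by 4 text tokens
m = np.array([0, 0, 1, 1, 1, 1])
rel = relative_positions(m)
```

However long the text continuation becomes, every text token sits at offset `anchor` from every vision token, while text-to-text and vision-to-vision offsets remain sequential.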