Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling
arXiv cs.AI · Apr 28, 2026
Key Points
- The paper argues that Rotary Positional Embeddings (RoPE) provide an often-overlooked “rotation manifold”: a second, expressive dimension of attention beyond the semantic embeddings themselves.
- It reframes token embeddings as encoding the semantic (real) component, while the rotation dimension encodes the dynamic/relational (imaginary) component across time, position, and context (see the minimal RoPE sketch after this list).
- The authors propose SIREN-RoPE, which makes the rotation dimension learnable and signal-conditioned by injecting heterogeneous inputs (continuous timestamps, cyclical patterns, and categorical metadata) through a dual-branch SIREN (Sinusoidal Representation Network); a hedged sketch of this conditioning follows the RoPE example below.
- In a proof-of-concept evaluation on a large-scale social-network news-feed dataset, with a generative recommender as the ranking model, the approach improves both calibration and ranking quality while adding negligible computational overhead.
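
For reference, here is a minimal NumPy sketch of standard RoPE, illustrating the “rotation manifold” the paper builds on: each even/odd pair of embedding dimensions is treated as a complex number and rotated by a position-dependent angle. This is background, not the paper's code; the name `rope_rotate` and the `base` constant follow common RoPE implementations.

```python
import numpy as np

def rope_rotate(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary positional encoding to x of shape (seq_len, dim)."""
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies, as in the original RoPE formulation:
    # theta_i = base^(-i / (dim/2)) for pair index i.
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = positions[:, None] * freqs[None, :]       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    # Rotate each (even, odd) pair: the "imaginary" rotation acts on top
    # of the "real" semantic content of the embedding.
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Usage: rotate 8 token embeddings of width 64 by their integer positions.
x = np.random.randn(8, 64).astype(np.float32)
x_rot = rope_rotate(x, positions=np.arange(8, dtype=np.float32))
```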
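
Below is a hedged PyTorch sketch of the SIREN-RoPE conditioning idea: a dual-branch network with sine activations (SIREN) maps heterogeneous signals to learnable per-pair rotation angles. The module names, branch structure, additive fusion with the standard RoPE angles, and omission of SIREN's specialized weight initialization are all assumptions for illustration; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN (omega=30 is
    the value used in the original SIREN paper; init scheme omitted here)."""
    def __init__(self, in_dim: int, out_dim: int, omega: float = 30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega = omega

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega * self.linear(x))

class SirenRopeAngles(nn.Module):
    """Hypothetical module: map (timestamp, cyclical features, category id)
    to one learnable angle offset per rotary pair."""
    def __init__(self, num_categories: int, cyc_dim: int, half_dim: int, hidden: int = 64):
        super().__init__()
        # Branch 1: continuous/cyclical signals (timestamp + sin/cos features).
        self.cont_branch = nn.Sequential(
            SirenLayer(1 + cyc_dim, hidden), SirenLayer(hidden, hidden))
        # Branch 2: categorical metadata via an embedding table.
        self.cat_emb = nn.Embedding(num_categories, hidden)
        self.cat_branch = SirenLayer(hidden, hidden)
        # Fuse both branches into one angle offset per rotary pair.
        self.head = nn.Linear(2 * hidden, half_dim)

    def forward(self, timestamp, cyclical, category):
        cont = self.cont_branch(torch.cat([timestamp, cyclical], dim=-1))
        cat = self.cat_branch(self.cat_emb(category))
        return self.head(torch.cat([cont, cat], dim=-1))  # (..., half_dim)

# Usage: angle offsets for 8 tokens, which would be added to the standard
# position-based RoPE angles before applying the rotation shown above.
net = SirenRopeAngles(num_categories=10, cyc_dim=4, half_dim=32)
offsets = net(torch.rand(8, 1), torch.rand(8, 4), torch.randint(0, 10, (8,)))
```

Under this reading, the rotation angle becomes a learned function of context rather than a fixed function of position alone, which is why the conditioning can add signal without changing the attention mechanism itself.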