Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling

arXiv cs.AI · April 28, 2026


Key Points

  • The paper argues that Rotary Positional Embeddings (RoPE) provide an often-overlooked “rotation manifold” that represents a second, expressive dimension in attention beyond semantic embeddings.
  • It reframes token embeddings as encoding the semantic (real) component, while the rotation dimension encodes the dynamic/relational (imaginary) component across time, position, and context.
  • The authors propose SIREN-RoPE, which makes the rotation dimension learnable and signal-conditioned by injecting heterogeneous inputs (continuous timestamps, cyclical patterns, and categorical metadata) via a dual-branch SIREN (Sinusoidal Representation Network).
  • In a proof-of-concept evaluation on a large-scale social network news-feed dataset using a generative recommender as the ranking model, the approach improves calibration and ranking while adding negligible computational overhead.
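The mechanism in the third point can be sketched in a few lines. The toy NumPy illustration below is my own reading, not the paper's implementation: all names (`siren_layer`, `rope_rotate`, `signals`), the single-layer branches, and the random weights are assumptions. It shows the core move, a dual-branch SIREN mapping heterogeneous per-token signals to rotation angles, which then drive a RoPE-style rotation of consecutive feature pairs instead of the usual fixed position-index angles.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(x, W, b, w0=30.0):
    # One SIREN layer: sine activation with frequency scale w0.
    return np.sin(w0 * (x @ W + b))

def rope_rotate(x, angles):
    # RoPE-style rotation of consecutive (even, odd) feature pairs.
    # x: (seq, d), angles: (seq, d // 2)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

seq, d, d_sig = 4, 8, 3
# Hypothetical per-token signals standing in for the paper's inputs:
# e.g. a normalized timestamp, a cyclical (hour-of-day) phase, and a
# scalar from a categorical-metadata embedding.
signals = rng.normal(size=(seq, d_sig))

# Toy "dual-branch" SIREN: two independent branches whose outputs are
# summed into one angle per rotation pair (the paper's actual branch
# structure and combination rule are not specified here).
W_a, b_a = rng.normal(size=(d_sig, d // 2)) / d_sig, np.zeros(d // 2)
W_b, b_b = rng.normal(size=(d_sig, d // 2)) / d_sig, np.zeros(d // 2)
angles = siren_layer(signals, W_a, b_a) + siren_layer(signals, W_b, b_b)

q = rng.normal(size=(seq, d))
q_rot = rope_rotate(q, angles)

# Rotation preserves each pair's norm, hence each token vector's norm:
# the "semantic" magnitude is untouched while relational phase changes.
print(np.allclose(np.linalg.norm(q_rot, axis=1), np.linalg.norm(q, axis=1)))
```

The norm check at the end illustrates why the authors can treat rotation as an orthogonal degree of freedom: conditioning the angles on signals reshapes how tokens relate under attention without altering the magnitude of the semantic embedding itself.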

Abstract

Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted upon by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure, populated only by discrete ordinal indices. We argue that this rotation space is a largely overlooked second dimension of expressivity in the attention mechanism, one whose systematic exploration may open a new door for attention-based architectures. The analogy to complex numbers is instructive: just as introducing the imaginary axis -- orthogonal to and independent of the real line -- unlocked new algebraic structure once believed impossible, treating the rotation manifold as a learnable, signal-conditioned space opens an orthogonal degree of freedom in attention. In this framing, the token embedding encodes the semantic (real) component of a representation -- what a token means -- while the rotation encodes its dynamic (imaginary) component -- how it relates to every other token across time, position, and context. We introduce SIREN-RoPE, a concrete instantiation of this idea, which populates the rotation dimension with heterogeneous signals -- continuous timestamps, cyclical temporal patterns, and categorical metadata -- via a dual-branch Sinusoidal Representation Network (SIREN). As a proof of concept, we evaluate on a production-scale news feed dataset from a major social network using a generative recommender as the ranking model, demonstrating that activating this hidden dimension yields consistent improvements across calibration and ranking objectives with negligible computational overhead. We invite the community to view the rotation space not as a solved positional-encoding detail, but as an untapped axis whose rich structure may prove as consequential for attention as the imaginary unit proved for algebra.
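The complex-number framing in the abstract rests on a standard property of RoPE that is easy to verify numerically: viewing feature pairs as complex numbers, rotation by position-scaled angles makes query-key dot products depend only on the relative offset between positions. A minimal self-contained check follows, using the standard RoPE frequency schedule; this is background on RoPE itself, not code from the paper.

```python
import numpy as np

def rot(x, m, theta):
    # Rotate feature pairs of x, viewed as complex numbers, by angles
    # m * theta (the standard RoPE rotation at position m).
    z = x[0::2] + 1j * x[1::2]
    z = z * np.exp(1j * m * theta)
    out = np.empty_like(x)
    out[0::2], out[1::2] = z.real, z.imag
    return out

rng = np.random.default_rng(1)
d = 8
# Standard RoPE frequencies: theta_i = 10000^(-i / (d/2)).
theta = 10000.0 ** (-np.arange(d // 2) / (d // 2))
q, k = rng.normal(size=d), rng.normal(size=d)

# Shifting both positions by the same amount leaves the score unchanged:
# <R(m)q, R(n)k> depends only on m - n.
s1 = rot(q, 5, theta) @ rot(k, 2, theta)
s2 = rot(q, 105, theta) @ rot(k, 102, theta)
print(np.allclose(s1, s2))  # True: the score depends only on 5 - 2
```

In the paper's framing, SIREN-RoPE keeps this rotational structure but replaces the fixed position-scaled angles `m * theta` with learned, signal-conditioned ones, which is what turns the rotation manifold from a hand-crafted positional detail into a second expressive axis.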