Light Cones For Vision: Simple Causal Priors For Visual Hierarchy
arXiv cs.LG · March 27, 2026
Key Points
- The paper argues that standard vision models represent objects as independent Euclidean points and therefore struggle to capture hierarchical “parts within wholes” structure.
- It introduces Worldline Slot Attention, which represents objects as persistent trajectories (worldlines) in spacetime with multiple slots across hierarchy levels, sharing spatial position but differing in temporal coordinates.
- Experiments show Euclidean worldlines perform poorly (0.078 accuracy, below random chance), while Lorentzian worldlines reach substantially higher accuracy (0.479–0.661), a roughly 6x improvement the authors report replicating across 20+ runs.
- The authors find that Lorentzian (causal/light-cone) geometry outperforms hyperbolic embeddings, suggesting visual hierarchy formation depends on asymmetric causal/temporal structure rather than purely tree-like radial branching.
- The method is described as requiring only 11K parameters, with code released on GitHub for further exploration.
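The asymmetry the fourth point highlights, that causal (light-cone) structure is directional while hyperbolic tree distance is symmetric, can be sketched with a minimal Minkowski-interval check. This is an illustrative sketch only: the function names and coordinates are hypothetical and not taken from the paper.

```python
import numpy as np

def euclidean_dist2(x, y):
    """Standard squared Euclidean distance between embeddings (symmetric)."""
    return float(np.sum((x - y) ** 2))

def lorentzian_interval(x, y):
    """Squared Minkowski interval with signature (-, +, +, ...):
    coordinate 0 is time, the rest are spatial.
    Negative => timelike separation (causal contact possible),
    positive => spacelike (causally disconnected)."""
    d = y - x
    return float(-d[0] ** 2 + np.sum(d[1:] ** 2))

def causally_precedes(x, y):
    """x can influence y iff y lies in x's future light cone:
    timelike/lightlike separation AND a later time coordinate.
    Note the asymmetry: swapping x and y flips the answer."""
    return lorentzian_interval(x, y) <= 0.0 and y[0] > x[0]

# Two slots sharing a spatial position but differing in time, echoing the
# worldline construction described above (values are illustrative).
part  = np.array([0.0, 1.0, 2.0])   # (t, spatial coords...)
whole = np.array([1.0, 1.0, 2.0])

# Same spatial point, different times => timelike separated,
# so "part" precedes "whole" but not vice versa.
print(causally_precedes(part, whole))   # asymmetric, unlike Euclidean distance
print(causally_precedes(whole, part))
```

The key contrast with Euclidean or hyperbolic distance is that `causally_precedes` is an order relation, not a metric: it can encode "part before whole" directly, which is the kind of asymmetric structure the paper credits for the Lorentzian variant's advantage.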