Match-Any-Events: Zero-Shot Motion-Robust Feature Matching Across Wide Baselines for Event Cameras
arXiv cs.CV / 4/22/2026
Key Points
- The paper proposes the first event-camera feature matching model that performs zero-shot wide-baseline correspondence across datasets without target-domain fine-tuning or adaptation.
- It introduces a motion-robust, computationally efficient attention backbone that learns multi-timescale features from event streams, along with sparsity-aware event token selection to keep large-scale training feasible.
- To overcome the lack of wide-baseline supervision, the authors build a robust event motion synthesis framework that generates large-scale training datasets with varied viewpoints, modalities, and motions.
- Experiments on multiple benchmarks show a 37.7% improvement over the prior best event feature matching methods, and the authors release code and data publicly.
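The paper does not detail its sparsity-aware event token selection, but the general idea of that family of techniques can be sketched as follows: partition the sensor plane into patches, score each patch by event density, and keep only the densest patches as tokens before attention. Everything below (function name, patch size, keep ratio) is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch of sparsity-aware event token selection.
# The paper's actual method is not specified here; this only illustrates
# keeping the densest spatio-temporal patches of an event stream as tokens.
import numpy as np

def select_event_tokens(events, sensor_hw=(64, 64), patch=8, keep_ratio=0.25):
    """events: (N, 4) array of (x, y, t, polarity).
    Returns (kept patch indices, per-patch event counts)."""
    H, W = sensor_hw
    gy, gx = H // patch, W // patch
    # Count events per spatial patch (density as a proxy for informativeness).
    px = np.clip(events[:, 0].astype(int) // patch, 0, gx - 1)
    py = np.clip(events[:, 1].astype(int) // patch, 0, gy - 1)
    counts = np.bincount(py * gx + px, minlength=gy * gx)
    # Keep only the densest patches; sparse regions are dropped before
    # attention, which is what keeps large-scale training feasible.
    k = max(1, int(keep_ratio * gy * gx))
    kept = np.argsort(counts)[::-1][:k]
    return kept, counts

# Toy usage: 500 random events on a 64x64 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 64, 500), rng.integers(0, 64, 500),
                      rng.random(500), rng.integers(0, 2, 500)]).astype(float)
kept, counts = select_event_tokens(ev)
```

With a 0.25 keep ratio on an 8x8 patch grid, attention runs over 16 tokens instead of 64, cutting the quadratic attention cost by roughly 16x in this toy setting.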