Efficient Onboard Spacecraft Pose Estimation with Event Cameras and Neuromorphic Hardware

arXiv cs.RO / 4/7/2026

Key Points

  • The paper proposes an end-to-end 6-DoF spacecraft relative pose estimation pipeline that combines event-camera vision with the BrainChip Akida neuromorphic processor to handle space imagery challenges like saturation, high contrast, and fast motion.
  • It trains compact MobileNet-style keypoint regression networks using event-frame representations, then applies quantization-aware training (8/4-bit) and converts the networks into Akida-compatible spiking neural networks.
  • Benchmarks on the SPADES dataset evaluate multiple event representations and show real-time, low-power inference on Akida V1 hardware.
  • For improved accuracy, the authors also design a heatmap-based model for Akida V2 and validate it using Akida Cloud, reporting better pose accuracy than prior variants.
  • The work claims to be the first end-to-end demonstration of spacecraft pose estimation running directly on Akida neuromorphic hardware, positioning it as a practical approach for future low-latency onboard autonomy.
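The event-frame representations mentioned above convert an asynchronous event stream into dense tensors that a conventional CNN can consume. As a minimal sketch of the idea (the function name and the two-channel polarity layout are illustrative assumptions; the paper benchmarks several representation variants, not necessarily this one):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (x, y, t, polarity) events into a simple 2-channel
    event frame: channel 0 counts negative-polarity events, channel 1
    counts positive-polarity events at each pixel. A hypothetical
    baseline representation, not the paper's exact encoding."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _t, p in events:
        frame[1 if p > 0 else 0, y, x] += 1.0
    return frame

# Example: two events at pixel (x=3, y=2) with opposite polarity
events = [(3, 2, 0.01, +1), (3, 2, 0.02, -1)]
frame = events_to_frame(events, height=4, width=4)
```

Because such frames are sparse wherever the scene is static, they pair naturally with spiking hardware that skips zero activations.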

Abstract

Reliable relative pose estimation is a key enabler for autonomous rendezvous and proximity operations, yet space imagery is notoriously challenging due to extreme illumination, high contrast, and fast target motion. Event cameras provide asynchronous, change-driven measurements that can remain informative when frame-based imagery saturates or blurs, while neuromorphic processors can exploit sparse activations for low-latency, energy-efficient inference. This paper presents a spacecraft 6-DoF pose-estimation pipeline that couples event-based vision with the BrainChip Akida neuromorphic processor. Using the SPADES dataset, we train compact MobileNet-style keypoint regression networks on lightweight event-frame representations, apply quantization-aware training (8/4-bit), and convert the models to Akida-compatible spiking neural networks. We benchmark three event representations and demonstrate real-time, low-power inference on Akida V1 hardware. We additionally design a heatmap-based model targeting Akida V2 and evaluate it on Akida Cloud, yielding improved pose accuracy. To our knowledge, this is the first end-to-end demonstration of spacecraft pose estimation running on Akida hardware, highlighting a practical route to low-latency, low-power perception for future autonomous space missions.
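The 8/4-bit quantization-aware training step can be understood through the standard fake-quantization trick: weights are quantized and immediately dequantized in the forward pass, so the network learns to tolerate the rounding error before deployment. A generic NumPy sketch under the assumption of symmetric per-tensor quantization (the paper uses BrainChip's own QAT tooling, whose exact scheme may differ):

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Symmetric per-tensor fake quantization: map weights to
    `bits`-bit integers, then map back to floats, so the forward
    pass sees the deployed-precision values. Gradients typically
    pass through unchanged (straight-through estimator).
    An illustrative sketch, not BrainChip's implementation."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8-bit, 7 for 4-bit
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([1.0, -1.0, 0.3])
w8 = fake_quantize(w, bits=8)   # near-lossless at 8 bits
w4 = fake_quantize(w, bits=4)   # coarser grid at 4 bits
```

Dropping from 8 to 4 bits shrinks the quantization grid from 255 to 15 levels, which is why the 4-bit variants trade some pose accuracy for lower memory and energy on the Akida device.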