SpikingBrain2.0: Brain-Inspired Foundation Models for Efficient Long-Context and Cross-Platform Inference

arXiv cs.LG / 4/27/2026

📰 News · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • SpikingBrain2.0 (SpB2.0) is a 5B-parameter brain-inspired (spiking) foundation model designed to keep both performance and computational efficiency at long context lengths, strengthening the architecture and training efficiency of its predecessor.
  • Dual-Space Sparse Attention (DSSA) hybridizes Sparse Softmax Attention (MoBA) and Sparse Linear Attention (SSE) across layers, targeting a better performance/efficiency trade-off for long-context modeling (a sketch follows this list).
  • A "dual quantization path" of INT8 spiking coding and FP8 coding combines efficient event-driven computation with accelerated inference on modern GPUs.
  • On the training side, an optimized Transformer-to-Hybrid (T2H) pipeline with dual conversion paths for LLMs and VLMs uses curated open-source data, recovering performance while keeping training cost low.
  • Experiments report a 10.13x TTFT speedup at a 4M context and processing of over 10M tokens on eight A100 GPUs under vLLM, so the model remains usable in memory-constrained long-context settings; performance and power-efficiency gains are also reported for FP8 GPU inference and neuromorphic execution.
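
The bullet on DSSA describes an inter-layer hybrid of block-sparse softmax attention (MoBA) and linear attention (SSE). Below is a minimal sketch of how such a hybrid could be wired; the layer schedule, block size, top-k routing, and feature map are illustrative assumptions, not the paper's configuration.

```python
# Illustrative inter-layer hybrid of MoBA-style block-sparse softmax attention
# and kernelized linear attention. All hyperparameters are assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moba_block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Each query attends only to the top_k key blocks ranked by similarity
    between the query and the block's mean key (MoBA-style routing)."""
    n, d = q.shape
    n_blocks = n // block_size
    k_blocks = k[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    block_keys = k_blocks.mean(axis=1)                 # (n_blocks, d)
    routing = q @ block_keys.T                         # (n, n_blocks)
    chosen = np.argsort(-routing, axis=1)[:, :top_k]   # (n, top_k)

    out = np.zeros_like(q)
    for i in range(n):
        idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size)
                              for b in chosen[i]])
        scores = softmax(q[i] @ k[idx].T / np.sqrt(d))
        out[i] = scores @ v[idx]
    return out

def linear_attention(q, k, v):
    """Kernelized linear attention: phi(q) (phi(k)^T v), O(n * d^2)."""
    phi = lambda x: np.maximum(x, 0) + 1e-6            # simple positive feature map
    kv = phi(k).T @ v                                   # (d, d)
    z = phi(k).sum(axis=0)                              # (d,)
    return (phi(q) @ kv) / (phi(q) @ z)[:, None]

def hybrid_layer(q, k, v, layer_idx, softmax_every=2):
    """Alternate the two attention types across depth (illustrative schedule)."""
    if layer_idx % softmax_every == 0:
        return moba_block_sparse_attention(q, k, v)
    return linear_attention(q, k, v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
    for layer in range(4):
        print(layer, hybrid_layer(q, k, v, layer).shape)
```

In the actual model the softmax/linear mix, routing rule, and normalization would be learned or tuned rather than fixed as above; the sketch only shows the structural idea of mixing the two attention families across layers.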

Abstract

Scaling context length is reshaping large-model development, yet full-attention Transformers suffer from prohibitive computation and inference bottlenecks at long sequences. A key challenge is to design foundation models that maintain performance and long-context efficiency with minimal training overhead. We introduce SpikingBrain2.0 (SpB2.0), a 5B model that advances both architecture and training efficiency of its predecessor. Our contributions are two-fold. (1) Architectural Innovation: We propose Dual-Space Sparse Attention (DSSA), an inter-layer hybrid of Sparse Softmax Attention (MoBA) and Sparse Linear Attention (SSE), achieving an improved performance-efficiency trade-off for long-context modeling. SpB2.0 further supports dual quantization paths: INT8-Spiking coding enables sparse event-driven computation, while FP8 coding accelerates inference on modern GPUs. (2) Enhanced Training Strategy: We develop an optimized Transformer-to-Hybrid (T2H) pipeline with dual conversion paths for LLMs and VLMs using curated open-source data. Empirically, SpB2.0-5B and SpB2.0-VL-5B recover most of the base Transformer (Qwen3-4B) capability with under 7k A100 GPU hours. SpB2.0 achieves a 10.13x TTFT speedup at 4M context and supports over 10M tokens on 8 A100 GPUs under vLLM, where full-attention models exceed memory limits. It also demonstrates strong cross-platform compatibility, enabling FP8 GPU inference (2.52x speedup at 250k) and efficient neuromorphic execution (64.31% sparsity, with 70.6% and 46.5% area and power reduction at 500MHz). Overall, SpikingBrain2.0 provides a practical pathway for lightweight, multimodal, spiking foundation models, highlighting the potential of combining brain-inspired mechanisms with efficient architectures for resource-constrained and edge scenarios.
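
The abstract describes dual quantization paths: INT8-Spiking coding for sparse event-driven computation and FP8 coding for GPU inference. The sketch below shows what such a dual path could look like in principle; the thresholding rule, per-tensor scale, and the crude FP8 emulation (`fp8_like_quant`) are illustrative assumptions, not SpB2.0's actual coding schemes.

```python
# Illustrative dual quantization path: sparse INT8 "spike-count" coding vs.
# an FP8-style rounding scheme. All details are assumptions for illustration.
import numpy as np

def int8_spike_coding(x, threshold=0.05):
    """Rate-code activations as signed INT8 spike counts.
    Values below the threshold emit no spikes, yielding event sparsity."""
    x = np.where(np.abs(x) < threshold, 0.0, x)
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    spikes = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    sparsity = float((spikes == 0).mean())
    return spikes, scale, sparsity

def fp8_like_quant(x, mantissa_bits=3, exp_bits=4):
    """Crude emulation of an E4M3-style FP8 format by rounding the mantissa."""
    m, e = np.frexp(x)                                  # x = m * 2**e
    m = np.round(m * 2**mantissa_bits) / 2**mantissa_bits
    e = np.clip(e, -2**(exp_bits - 1), 2**(exp_bits - 1) - 1)
    return np.ldexp(m, e)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    act = rng.standard_normal(1024) * np.exp(-np.abs(rng.standard_normal(1024)))
    spikes, scale, sparsity = int8_spike_coding(act)
    print(f"spike sparsity: {sparsity:.1%}, dequant error: "
          f"{np.abs(act - spikes.astype(np.float32) * scale).mean():.4f}")
    print(f"fp8-like error: {np.abs(act - fp8_like_quant(act)).mean():.4f}")
```

The point of the dual path is that the same weights can feed either an event-driven backend (which benefits from the zero-heavy INT8 spike stream) or a standard GPU kernel (which benefits from native FP8 arithmetic), without retraining.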
