AssemLM: Spatial Reasoning Multimodal Large Language Models for Robotic Assembly

arXiv cs.RO / April 13, 2026


Key Points

  • The paper introduces AssemLM, a spatial reasoning multimodal LLM designed to improve robotic assembly by performing explicit 3D geometric reasoning for fine-grained manipulation tasks.
  • AssemLM combines assembly manuals, point clouds, and textual instructions to predict task-critical 6D assembly poses, using a specialized point-cloud encoder to capture detailed geometric and rotational features.
  • It also presents AssemBench, a new large-scale dataset and benchmark with over 900K multimodal samples and precise 6D pose annotations, extending the evaluation of 3D spatial inference beyond common 2D or grounding-focused benchmarks (see the pose-metric sketch after this list).
  • The paper reports state-of-the-art 6D pose reasoning performance across varied assembly scenarios, with real-robot tests indicating support for fine-grained, multi-step assembly in real-world conditions.
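
For readers unfamiliar with how 6D pose reasoning is scored, here is a minimal NumPy sketch of the error metrics commonly used for such benchmarks: geodesic rotation error and Euclidean translation error. The paper does not state which metrics AssemBench uses, so treat these as standard-practice assumptions rather than the benchmark's actual protocol.

```python
# Illustrative sketch (not from the paper): a 6D pose as rotation + translation,
# scored with the error metrics typically used in 6D pose benchmarks.
import numpy as np


def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic distance between two 3x3 rotation matrices, in degrees."""
    # trace(R_pred^T R_gt) = 1 + 2*cos(theta) for the relative rotation angle theta
    cos_theta = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return float(np.degrees(np.arccos(cos_theta)))


def translation_error(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance between predicted and ground-truth translations."""
    return float(np.linalg.norm(t_pred - t_gt))


if __name__ == "__main__":
    # Ground truth: a 30-degree rotation about z and a small translation
    theta = np.radians(30.0)
    R_gt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
    t_gt = np.array([0.10, 0.00, 0.25])

    R_pred, t_pred = np.eye(3), np.array([0.12, -0.01, 0.24])
    print(rotation_error_deg(R_pred, R_gt))  # ~30.0 degrees
    print(translation_error(t_pred, t_gt))   # ~0.0245 (same units as t, e.g. meters)
```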

Abstract

Spatial reasoning is a fundamental capability for embodied intelligence, especially for fine-grained manipulation tasks such as robotic assembly. While recent vision-language models (VLMs) exhibit preliminary spatial awareness, they largely rely on coarse 2D perception and lack the ability to perform accurate reasoning over 3D geometry, which is crucial for precise assembly operations. To address this limitation, we propose AssemLM, a spatial multimodal large language model tailored for robotic assembly. AssemLM integrates assembly manuals, point clouds, and textual instructions to reason about and predict task-critical 6D assembly poses, enabling explicit geometric understanding throughout the assembly process. To effectively bridge raw 3D perception and high-level reasoning, we adopt a specialized point cloud encoder to capture fine-grained geometric and rotational features, which are then integrated into the multimodal language model to support accurate 3D spatial reasoning for assembly tasks. In addition, we construct AssemBench, a large-scale dataset and benchmark for assembly-oriented spatial reasoning, comprising over 900K multimodal samples with precise 6D pose annotations. AssemBench extends spatial reasoning evaluation beyond 2D and grounding tasks into full 3D geometric inference, filling a critical gap in existing embodied AI benchmarks. Extensive experiments demonstrate that AssemLM achieves state-of-the-art performance in 6D pose reasoning across diverse assembly scenarios. Furthermore, real-robot evaluations show that our model can support fine-grained and multi-step assembly execution in real-world settings, demonstrating its potential for robotic assembly applications.
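
The abstract's description of bridging a point-cloud encoder into a multimodal language model follows a pattern common in 3D-aware MLLMs: encode the cloud into a small set of geometric tokens, project them into the LLM's embedding space, and prepend them to the text-token embeddings. Below is a minimal PyTorch sketch of that pattern; every module, dimension, and the grouped max-pooling tokenizer are illustrative assumptions, not AssemLM's actual architecture.

```python
# Minimal sketch of a point-cloud-to-LLM bridge (illustrative, not AssemLM's design).
import torch
import torch.nn as nn


class PointCloudTokenizer(nn.Module):
    """Stand-in encoder: per-point MLP followed by grouped max-pooling into K tokens."""

    def __init__(self, num_tokens: int = 32, feat_dim: int = 256):
        super().__init__()
        self.num_tokens = num_tokens
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) xyz coordinates; assumes N divisible by num_tokens
        B, N, _ = points.shape
        feats = self.point_mlp(points)                                   # (B, N, D)
        groups = feats.view(B, self.num_tokens, N // self.num_tokens, -1)
        return groups.max(dim=2).values                                  # (B, K, D)


class GeometryToLLMBridge(nn.Module):
    """Projects geometric tokens into the LLM embedding space and prepends them."""

    def __init__(self, feat_dim: int = 256, llm_dim: int = 4096):
        super().__init__()
        self.encoder = PointCloudTokenizer(feat_dim=feat_dim)
        self.proj = nn.Linear(feat_dim, llm_dim)

    def forward(self, points: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        geom_tokens = self.proj(self.encoder(points))                    # (B, K, llm_dim)
        return torch.cat([geom_tokens, text_embeds], dim=1)              # (B, K+T, llm_dim)


if __name__ == "__main__":
    bridge = GeometryToLLMBridge()
    pts = torch.randn(2, 1024, 3)    # two point clouds, 1024 points each
    txt = torch.randn(2, 16, 4096)   # 16 text-token embeddings
    print(bridge(pts, txt).shape)    # torch.Size([2, 48, 4096])
```

The prepend-as-tokens design lets the language model attend jointly over geometry and instructions without architectural changes to the LLM itself, which is one plausible way to realize the "explicit geometric understanding" the abstract describes.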