ThermoAct: Thermal-Aware Vision-Language-Action Models for Robotic Perception and Decision-Making

arXiv cs.RO / 3/27/2026


Key Points

  • The paper argues that human-robot collaboration can benefit from integrating non-visual sensors, highlighting thermal data as a key but underused signal for robot safety and efficiency.
  • It introduces a thermal-aware Vision-Language-Action (VLA) framework where a Vision-Language Model (VLM) functions as a high-level planner that interprets natural-language commands and decomposes them into sub-tasks.
  • By incorporating thermal information rather than relying only on RGB/vision, the robot can better perceive physical properties and proactively maintain environmental safety during execution.
  • The authors report real-world experiments that validate the feasibility of the approach and suggest improvements in task success rates and safety over purely vision-based systems.
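The planner-plus-thermal-gating loop described in the points above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the hard-coded sub-task decomposition (standing in for the VLM planner), and the 60 °C safety threshold are all assumptions made for the example.

```python
# Illustrative sketch of a thermal-aware VLA control loop.
# All names and values below are hypothetical assumptions.

SAFE_TEMP_C = 60.0  # assumed contact-safety threshold, not from the paper


def plan_subtasks(command: str) -> list[str]:
    """Toy stand-in for the VLM high-level planner: decompose a
    natural-language command into simpler sub-tasks."""
    if command == "serve the coffee":
        return ["locate cup", "grasp cup", "hand cup to user"]
    return [command]


def thermally_safe(task: str, thermal_map: dict[str, float]) -> bool:
    """Flag a sub-task as unsafe if it involves any object whose
    thermal reading exceeds the threshold."""
    return all(
        temp <= SAFE_TEMP_C
        for obj, temp in thermal_map.items()
        if obj in task
    )


def execute(command: str, thermal_map: dict[str, float]) -> list[str]:
    """Run each planned sub-task, gating execution on thermal safety
    instead of relying on RGB perception alone."""
    log = []
    for task in plan_subtasks(command):
        if thermally_safe(task, thermal_map):
            log.append(f"EXECUTE: {task}")
        else:
            log.append(f"CAUTION: {task} (hot object detected)")
    return log
```

For example, `execute("serve the coffee", {"cup": 85.0})` would flag every cup-related sub-task with a caution, whereas a 30 °C cup would let all three sub-tasks run. The point of the sketch is the ordering: language-driven decomposition happens first, and thermal readings gate each low-level action before it executes.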

Abstract

In recent human-robot collaboration environments, there is a growing focus on integrating diverse sensor data beyond visual information to enable safer and more intelligent task execution. Although thermal data can be crucial for enhancing robot safety and operational efficiency, its integration has been relatively overlooked in prior research. This paper proposes a novel Vision-Language-Action (VLA) framework that incorporates thermal information for robot task execution. The proposed system leverages a Vision-Language Model (VLM) as a high-level planner to interpret complex natural language commands and decompose them into simpler sub-tasks. This approach facilitates efficient data collection and robust reasoning for complex operations. Unlike conventional methods that rely solely on visual data, our approach integrates thermal information, enabling the robot to perceive physical properties and proactively ensure environmental safety. Experimental results from real-world task scenarios validate the feasibility of our proposed framework, suggesting its potential to enhance task success rates and safety compared to existing vision-based systems.