Human-in-the-Loop Uncertainty Analysis in Self-Adaptive Robots Using LLMs

arXiv cs.RO / 5/6/2026


Key Points

  • The paper presents RoboULM, a human-in-the-loop methodology and tool that helps practitioners systematically explore uncertainties in self-adaptive robots during the design stage using LLMs.
  • It introduces an uncertainty taxonomy that catalogs different sources, impacts, and mitigation-related dimensions of uncertainty in self-adaptive robotic systems.
  • The authors argue that systematically identifying and addressing these uncertainties is crucial to avoiding safety violations and operational failures in dynamic, unpredictable environments.
  • In an evaluation with 16 practitioners across four industrial use cases, RoboULM was rated as both useful and easy to understand, with strong appreciation for structured prompting and iterative refinement.
  • Overall, the study suggests RoboULM could enable more systematic uncertainty analysis for complex, rapidly evolving robotic technologies.
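The paper itself describes RoboULM only at the level of structured prompting and iterative refinement; the sketch below is a hypothetical illustration of what such a human-in-the-loop loop could look like, not the authors' implementation. The dimension names (`source`, `impact`, `mitigation`) come from the taxonomy described above, while `llm`, `get_feedback`, `build_prompt`, and `max_rounds` are all assumed names invented for this example.

```python
# Hypothetical sketch of a human-in-the-loop uncertainty-analysis loop
# in the spirit of RoboULM; NOT the paper's actual code or API.

# Dimensions drawn from the uncertainty taxonomy in the summary above.
TAXONOMY_DIMENSIONS = ["source", "impact", "mitigation"]

def build_prompt(scenario: str, dimension: str, feedback: list) -> str:
    """Assemble a structured prompt for one taxonomy dimension,
    folding in practitioner feedback from earlier iterations."""
    lines = [
        f"Robot scenario: {scenario}",
        f"List uncertainties along the '{dimension}' dimension.",
    ]
    if feedback:
        lines.append("Refine the previous answer using this feedback:")
        lines.extend(f"- {note}" for note in feedback)
    return "\n".join(lines)

def analyze(scenario, llm, get_feedback, max_rounds=3):
    """For each taxonomy dimension: prompt the LLM, show the answer to a
    practitioner, and re-prompt with their feedback until they accept
    (get_feedback returns None) or the round budget runs out."""
    report = {}
    for dim in TAXONOMY_DIMENSIONS:
        feedback, answer = [], None
        for _ in range(max_rounds):
            answer = llm(build_prompt(scenario, dim, feedback))
            note = get_feedback(dim, answer)
            if note is None:  # practitioner accepts this answer
                break
            feedback.append(note)  # iterative refinement
        report[dim] = answer
    return report
```

With an LLM client and a feedback UI substituted for the `llm` and `get_feedback` callables, each taxonomy dimension gets its own structured prompt and its own refinement loop, which mirrors the structured-prompting and iterative-refinement support the participants reportedly valued.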

Abstract

Self-adaptive robots operate in dynamic, unpredictable environments where unaddressed uncertainties can lead to safety violations and operational failures. However, systematically identifying and analyzing these uncertainties, including their sources, impacts, and mitigation strategies, remains a significant challenge given the inherent complexity of real-world environments, dynamic robotic behavior, and the rapid evolution of robotic technologies. To address this, we introduce RoboULM, a human-in-the-loop methodology and tool that supports practitioners in systematically exploring uncertainties at the design stage using large language models (LLMs). Moreover, we present an uncertainty taxonomy that provides a detailed catalog of uncertainties in self-adaptive robots. We evaluated RoboULM with 16 practitioners from four industrial use cases. The results show that RoboULM was perceived as both useful and easy to understand, with the participants particularly valuing structured prompting and iterative refinement support. These findings demonstrate the potential of RoboULM as a viable solution for systematic uncertainty analysis in complex robots.