Hazard Management in Robot-Assisted Mammography Support

arXiv cs.RO / 4/8/2026

Key Points

  • The paper proposes a safety-focused hazard management methodology for MammoBot, a robot-assisted system supporting patients during X-ray mammography in close-contact clinical settings.
  • It combines stakeholder-guided process modeling with SHARD and STPA to systematically capture human-robot interactions and analyze both technical deviations and unsafe control actions caused by user interaction.
  • The workflow is defined collaboratively with clinicians, roboticists, and patient representatives to ensure that real interaction patterns and risks are reflected in the analysis.
  • The findings indicate that many hazards stem from timing mismatches, premature actions, and misinterpretation of system state rather than component failures.
  • The paper translates the identified hazards into refined and additional safety requirements that constrain system behavior and reduce dependence on correct human timing or interpretation alone; the resulting approach is traceable and potentially reusable for other assistive clinical robots.
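To make the STPA step concrete, the sketch below shows one way a hazard-to-requirement traceability record could be structured: each unsafe control action (UCA) is categorised by an STPA guide word and linked to the safety requirement derived from it. All names, guide words, and the example entry are illustrative assumptions, not details taken from the MammoBot paper.

```python
from dataclasses import dataclass, field

# Standard STPA categories for how a control action can become unsafe.
GUIDE_WORDS = (
    "not_provided",
    "provided_unsafely",
    "too_early_or_late",
    "stopped_too_soon_or_applied_too_long",
)

@dataclass
class UnsafeControlAction:
    controller: str   # who issues the action (e.g. patient, clinician, robot)
    action: str       # the control action under analysis
    guide_word: str   # STPA categorisation of how it becomes unsafe
    context: str      # conditions under which the action is hazardous

@dataclass
class SafetyRequirement:
    req_id: str
    text: str
    # Back-links to the UCAs this requirement mitigates (traceability).
    derived_from: list = field(default_factory=list)

# Hypothetical example: a premature patient confirmation (a timing
# mismatch), constrained by a requirement that removes reliance on
# correct human timing.
uca = UnsafeControlAction(
    controller="patient",
    action="confirm_positioning",
    guide_word="too_early_or_late",
    context="positioning support not yet in a stable contact state",
)
req = SafetyRequirement(
    req_id="SR-07",
    text="The system shall not accept positioning confirmation until "
         "its sensors verify a stable contact state.",
    derived_from=[uca],
)

assert uca.guide_word in GUIDE_WORDS
print(req.req_id, "<-", req.derived_from[0].action)
```

A record like this keeps every requirement answerable to the specific unsafe interaction that motivated it, which is the traceability property the paper emphasises.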

Abstract

Robotic and embodied-AI systems have the potential to improve accessibility and quality of care in clinical settings, but their deployment in close physical contact with vulnerable patients introduces significant safety risks. This paper presents a hazard management methodology for MammoBot, an assistive robotic system designed to support patients during X-ray mammography. To ensure safety from early development stages, we combine stakeholder-guided process modelling with Software Hazard Analysis and Resolution in Design (SHARD) and System-Theoretic Process Analysis (STPA). The robot-assisted workflow is defined collaboratively with clinicians, roboticists, and patient representatives to capture key human-robot interactions. SHARD is applied to identify technical and procedural deviations, while STPA is used to analyse unsafe control actions arising from user interaction. The results show that many hazards arise not from component failures, but from timing mismatches, premature actions, and misinterpretation of system state. These hazards are translated into refined and additional safety requirements that constrain system behaviour and reduce reliance on correct human timing or interpretation alone. The work demonstrates a structured and traceable approach to safety-driven design with potential applicability to assistive robotic systems in clinical environments.