A Framework for Low-Latency, LLM-driven Multimodal Interaction on the Pepper Robot

arXiv cs.AI / 3/24/2026


Key Points

  • The paper introduces an open-source Android framework for Pepper that addresses the high latency and loss of paralinguistic cues typical of cascaded STT→LLM→TTS pipelines.
  • It uses end-to-end Speech-to-Speech (S2S) models to support low-latency interaction while preserving prosody and enabling adaptive intonation.
  • The framework extends LLM usage by adding robust Function Calling so the LLM can act as an agentic planner coordinating navigation, gaze control, and tablet interaction.
  • It integrates multimodal feedback channels, including vision, touch, and system state, to improve embodied HRI control and perception.
  • The system is designed to run on Pepper’s tablet but is also portable to standard Android devices, easing development and experimentation independent of robot hardware.
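The agentic Function Calling described above can be pictured as a dispatcher that maps tool names emitted by the LLM onto robot actions and returns their results as feedback. The sketch below is purely illustrative; the class, tool names, and signatures are assumptions, not the framework's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of an LLM tool-call dispatcher. The LLM emits a
// tool name plus an argument string; the dispatcher runs the matching
// robot action and returns its result so it can be fed back to the LLM
// as system-state feedback. All names here are illustrative.
public class ToolDispatcher {
    private final Map<String, Function<String, String>> tools = new HashMap<>();

    public void register(String name, Function<String, String> action) {
        tools.put(name, action);
    }

    public String dispatch(String name, String args) {
        Function<String, String> tool = tools.get(name);
        if (tool == null) {
            return "error: unknown tool " + name;
        }
        return tool.apply(args);
    }

    public static void main(String[] args) {
        ToolDispatcher d = new ToolDispatcher();
        // Placeholder actions standing in for navigation and gaze control.
        d.register("navigate_to", target -> "navigating to " + target);
        d.register("set_gaze", direction -> "gaze set to " + direction);
        System.out.println(d.dispatch("navigate_to", "charging dock")); // navigating to charging dock
        System.out.println(d.dispatch("set_gaze", "user")); // gaze set to user
    }
}
```

In a real implementation each registered action would call into the robot's Android SDK and could block or stream progress; the point here is only the planner-to-action mapping.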

Abstract

Despite recent advances in integrating Large Language Models (LLMs) into social robotics, two weaknesses persist. First, existing implementations on platforms like Pepper often rely on cascaded Speech-to-Text (STT) → LLM → Text-to-Speech (TTS) pipelines, resulting in high latency and the loss of paralinguistic information. Second, most implementations fail to fully leverage the LLM's capabilities for multimodal perception and agentic control. We present an open-source Android framework for the Pepper robot that addresses these limitations through two key innovations. First, we integrate end-to-end Speech-to-Speech (S2S) models to achieve low-latency interaction while preserving paralinguistic cues and enabling adaptive intonation. Second, we implement extensive Function Calling capabilities that elevate the LLM to an agentic planner, orchestrating robot actions (navigation, gaze control, tablet interaction) and integrating diverse multimodal feedback (vision, touch, system state). The framework runs on the robot's tablet but can also be built to run on regular Android smartphones or tablets, decoupling development from robot hardware. This work provides the HRI community with a practical, extensible platform for exploring advanced LLM-driven embodied interaction.
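The cascaded pipeline the abstract criticizes can be seen as a strict sequential composition of three stages, where the speech signal is flattened to text at the first step. The sketch below uses placeholder stages (the real ones would call STT/LLM/TTS services); it only illustrates why latency accumulates and where prosody is lost.

```java
import java.util.function.Function;

// Illustrative sketch of a cascaded STT → LLM → TTS pipeline.
// Each stage is a placeholder; real stages would call external services.
public class CascadedPipeline {
    public static void main(String[] args) {
        Function<String, String> stt = audio -> "transcript(" + audio + ")";
        Function<String, String> llm = text -> "reply(" + text + ")";
        Function<String, String> tts = text -> "audio(" + text + ")";

        // Sequential composition: each stage waits for the previous one,
        // so per-stage latencies add up, and intonation in the input audio
        // is already discarded at the STT stage.
        Function<String, String> pipeline = stt.andThen(llm).andThen(tts);
        System.out.println(pipeline.apply("userUtterance"));
        // → audio(reply(transcript(userUtterance)))
    }
}
```

An end-to-end S2S model, by contrast, replaces this chain with a single audio-to-audio stage, which is what enables the low latency and preserved paralinguistic cues claimed in the abstract.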