AI Navigate

Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots

arXiv cs.AI / 3/18/2026


Key Points

  • The authors propose bounded calibration with contestability as a front-end pattern for allocating scarce assistance among multiple users in LLM-enabled robots.
  • The pattern constrains prioritization to a governance-approved menu of admissible modes and keeps the active mode legible at the point of deferral.
  • It provides an outcome-specific contest pathway without renegotiating the global rule to accommodate pluralism and LLM variability.
  • The paper outlines an evaluation agenda focused on legibility, procedural legitimacy, and actionability, including risks of automation bias and uneven usability of contest channels, illustrated by a public-concourse robot vignette.

Abstract

LLM-enabled robots prioritizing scarce assistance in social settings face pluralistic values and LLM behavioral variability: reasonable people can disagree about who is helped first, while LLM-mediated interaction policies vary across prompts, contexts, and groups in ways that are difficult to anticipate or verify at the point of contact. Yet user-facing guardrails for real-time, multi-user assistance allocation remain under-specified. We propose bounded calibration with contestability, a procedural front-end pattern that (i) constrains prioritization to a governance-approved menu of admissible modes, (ii) keeps the active mode legible in interaction-relevant terms at the point of deferral, and (iii) provides an outcome-specific contest pathway without renegotiating the global rule. Treating pluralism and LLM uncertainty as standing conditions, the pattern avoids both silent defaults that hide implicit value skews and wide-open user-configurable "value settings" that shift burden under time pressure. We illustrate the pattern with a public-concourse robot vignette and outline an evaluation agenda centered on legibility, procedural legitimacy, and actionability, including risks of automation bias and uneven usability of contest channels.
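The three constraints of the pattern can be sketched as a small front-end interface. This is a minimal illustration, not the paper's implementation: the mode names, messages, and `AssistanceAllocator` class are hypothetical, and a real system would route the contest record to a governance process rather than a local list.

```python
from dataclasses import dataclass, field

# Hypothetical governance-approved menu of admissible prioritization modes,
# each paired with an interaction-relevant, user-facing description.
ADMISSIBLE_MODES = {
    "first_come_first_served": "requests are served in arrival order",
    "need_weighted": "requests flagged as urgent are served before others",
}

@dataclass
class AssistanceAllocator:
    mode: str = "first_come_first_served"
    contests: list = field(default_factory=list)

    def set_mode(self, mode: str) -> None:
        # (i) Bounded calibration: only modes on the approved menu
        # are accepted; arbitrary "value settings" are rejected.
        if mode not in ADMISSIBLE_MODES:
            raise ValueError(f"mode {mode!r} is not governance-approved")
        self.mode = mode

    def explain_active_mode(self) -> str:
        # (ii) Legibility at the point of deferral: the active mode is
        # surfaced in plain interaction terms, not internal policy jargon.
        return f"Currently prioritizing: {ADMISSIBLE_MODES[self.mode]}."

    def contest(self, outcome_id: str, reason: str) -> dict:
        # (iii) Outcome-specific contest pathway: the objection is logged
        # against one allocation decision; the global rule is unchanged.
        record = {"outcome": outcome_id, "reason": reason, "mode": self.mode}
        self.contests.append(record)
        return record

allocator = AssistanceAllocator()
allocator.set_mode("need_weighted")
print(allocator.explain_active_mode())
allocator.contest("req-042", "I arrived first but was deferred")
```

Note that contesting an outcome leaves `allocator.mode` untouched, which is the point: users get an actionable channel for a specific decision without reopening the global prioritization rule at the point of service.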