Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots
arXiv cs.AI / 3/18/2026
Key Points
- The authors propose bounded calibration with contestability as a front-end pattern for allocating scarce assistance among multiple users in LLM-enabled robots.
- The pattern constrains prioritization to a governance-approved menu of admissible modes and keeps the active mode legible at the point of deferral.
- To accommodate pluralism and LLM variability, the pattern routes disagreement through an outcome-specific contest pathway rather than a renegotiation of the global allocation rule (a minimal sketch follows this list).
- The paper outlines an evaluation agenda focused on legibility, procedural legitimacy, and actionability, flags risks such as automation bias and uneven usability of contest channels, and illustrates the pattern with a public-concourse robot vignette.
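To make the pattern concrete, here is a minimal sketch, not the authors' implementation: all names (`AdmissibleMode`, `AllocationGuardrail`, `ContestTicket`) and the scoring logic are hypothetical illustrations of a governance-approved mode menu, mode legibility at the point of deferral, and an outcome-scoped contest pathway.

```python
# Hypothetical sketch of "bounded calibration with contestability".
# Names and scoring functions are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable
import itertools

@dataclass(frozen=True)
class AdmissibleMode:
    """One governance-approved prioritization mode (a menu entry)."""
    mode_id: str
    description: str                # human-readable rationale shown at deferral
    rank: Callable[[dict], float]   # scores a request; higher is served first

@dataclass
class ContestTicket:
    """Outcome-specific contest record: scoped to one decision,
    never a renegotiation of the global allocation rule."""
    ticket_id: int
    decision_id: int
    requester: str
    reason: str
    status: str = "open"

class AllocationGuardrail:
    def __init__(self, menu: list[AdmissibleMode], active_mode_id: str):
        self.menu = {m.mode_id: m for m in menu}
        if active_mode_id not in self.menu:
            raise ValueError("active mode must come from the approved menu")
        self.active_mode_id = active_mode_id
        self._decision_ids = itertools.count()
        self._ticket_ids = itertools.count()
        self.contests: list[ContestTicket] = []

    def set_mode(self, mode_id: str) -> None:
        # Bounded calibration: switching is allowed, but only within the menu.
        if mode_id not in self.menu:
            raise ValueError(f"{mode_id!r} is not an admissible mode")
        self.active_mode_id = mode_id

    def allocate(self, requests: list[dict]) -> dict:
        # Pick who gets assistance, and keep the active mode legible
        # at the point of deferral by returning it with the outcome.
        if not requests:
            raise ValueError("no requests to allocate")
        mode = self.menu[self.active_mode_id]
        ranked = sorted(requests, key=mode.rank, reverse=True)
        return {
            "decision_id": next(self._decision_ids),
            "served": ranked[0]["user"],
            "deferred": [r["user"] for r in ranked[1:]],
            "active_mode": mode.mode_id,
            "mode_description": mode.description,  # shown to deferred users
        }

    def contest(self, decision_id: int, requester: str, reason: str) -> ContestTicket:
        # Contest pathway: flags one specific outcome for human review.
        ticket = ContestTicket(next(self._ticket_ids), decision_id, requester, reason)
        self.contests.append(ticket)
        return ticket

# Example: a concourse robot choosing between two waiting users.
menu = [
    AdmissibleMode("first-come", "Serve requests in arrival order",
                   rank=lambda r: -r["arrival"]),
    AdmissibleMode("need-based", "Serve the highest self-reported need",
                   rank=lambda r: r["need"]),
]
robot = AllocationGuardrail(menu, active_mode_id="need-based")
outcome = robot.allocate([
    {"user": "A", "arrival": 1, "need": 2},
    {"user": "B", "arrival": 2, "need": 5},
])
ticket = robot.contest(outcome["decision_id"], "A", "My need was under-scored")
```

The design point of the sketch is the separation of scopes: `set_mode` can only move within the approved menu, and `contest` is tied to a single `decision_id`, so neither pathway reopens the global rule.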