Evaluating Generative Models as Interactive Emergent Representations of Human-Like Collaborative Behavior

arXiv cs.RO / 5/6/2026


Key Points

  • The paper investigates whether embodied foundation-model agents develop emergent, human-like collaborative behaviors that suggest they form internal “mental models” of human collaborators.
  • It introduces a 2D color-matching collaborative game where LLM agents and humans coordinate, defining five behavioral indicators (perspective-taking, collaborator-aware planning, introspection, theory of mind, and clarification).
  • Using an LLM-based automated judge system, the study detects these behaviors, achieving fair-to-substantial agreement with human annotations.
  • The results suggest foundation models can show these collaboration behaviors consistently even without explicit training for them, with behavior frequencies and patterns varying across different LLMs.
  • A user study found participants generally had positive experiences and perceived collaboration as effective, while requesting improvements such as faster response times and more human-like interaction pacing.
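The "fair-to-substantial agreement" between the LLM judges and human annotators is conventionally measured with a chance-corrected statistic such as Cohen's kappa, whose values map to the Landis-Koch labels (0.21-0.40 "fair", 0.61-0.80 "substantial"). A minimal stdlib sketch; the per-turn labels below are invented for illustration, not taken from the paper:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Toy example: did "perspective-taking" occur in each of 8 dialogue turns?
judge = [1, 1, 0, 1, 0, 0, 1, 0]  # LLM-judge labels (hypothetical)
human = [1, 1, 0, 0, 0, 0, 1, 1]  # human annotations (hypothetical)
print(round(cohens_kappa(judge, human), 3))  # → 0.5 ("moderate" agreement)
```

Raw percent agreement (6/8 = 0.75 here) overstates reliability for skewed labels, which is why kappa subtracts the agreement two raters would reach by chance alone.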

Abstract

Human-AI collaboration requires AI agents to understand human behavior for effective coordination. While foundation models show promising capabilities in understanding and exhibiting human-like behavior, their application in embodied collaborative settings remains underexplored. This work examines whether embodied foundation-model agents exhibit emergent collaborative behaviors indicating underlying mental models of their collaborators, an important aspect of effective coordination. We develop a 2D collaborative game environment in which large language model agents and humans complete color-matching tasks that require coordination. We define five collaborative behaviors as indicators of emergent mental-model representation: perspective-taking, collaborator-aware planning, introspection, theory of mind, and clarification. An automated behavior detection system using LLM-based judges identifies these behaviors, achieving fair-to-substantial agreement with human annotations. Its results show that foundation models consistently exhibit emergent collaborative behaviors without being explicitly trained to do so; these behaviors occur at varying frequencies across collaboration stages, with distinct patterns across different LLMs. A user study evaluating human satisfaction and perceived collaboration effectiveness indicates positive collaboration experiences: participants appreciated the agents' task focus, plan verbalization, and initiative, while suggesting improvements in response times and more human-like interaction pacing. This work contributes an experimental framework for human-AI collaboration, empirical evidence of collaborative behaviors in embodied LLM agents, a validated behavioral analysis methodology, and an assessment of collaboration effectiveness.