Cooperation in Human and Machine Agents: Promise Theory Considerations

arXiv cs.AI / 4/14/2026


Key Points

  • The paper proposes using Promise Theory to analyze and design cooperation across systems involving human and machine agents, including semi-automated efforts and mixed socio-technical setups.
  • It frames agent coordination around abstract properties such as signalling, comprehension, trust, risk, and feedback to address how components can adhere to intended purposes.
  • It revisits established principles of agent cooperation in the context of the renewed “agent paradigm,” specifically relating these ideas to modern AI agents.
  • The work aims to unify organizational and functional design considerations for cooperation across humans, hardware, software, and AI—whether or not management is present.

Abstract

Agent-based systems are more common than we may think. A Promise Theory perspective on cooperation in systems of human and machine agents offers a unified view of organization and functional design for semi-automated efforts, in terms of the abstract properties of autonomous agents. This applies to human efforts, hardware systems, software, and artificial intelligence, with and without management. One may ask: how does a reasoning system of components keep to an intended purpose? As the agent paradigm is now being revived in connection with artificial intelligence agents, I revisit established principles of agent cooperation as applied to humans, machines, and their mutual interactions. Promise Theory represents the fundamentals of signalling, comprehension, trust, risk, and feedback between agents, and offers some lessons about success and failure.
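To make the abstract's vocabulary concrete, here is a minimal, hypothetical sketch (not from the paper) of two Promise Theory primitives: a promise as a voluntary declaration by one autonomous agent to another, and trust as each agent's private assessment, updated by feedback on whether promises were kept. The class names, the neutral prior of 0.5, and the update rate are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Promise:
    # In Promise Theory, a promise is a voluntary declaration of
    # intent by one autonomous agent to another; it cannot be imposed.
    promiser: str
    promisee: str
    body: str  # what is promised, e.g. "deliver report daily"

@dataclass
class Agent:
    name: str
    # Trust is this agent's private assessment of other agents,
    # revised from observed outcomes (feedback).
    trust: dict = field(default_factory=dict)

    def assess(self, promise: Promise, kept: bool, rate: float = 0.2):
        """Move trust in the promiser toward 1 (kept) or 0 (broken)."""
        t = self.trust.get(promise.promiser, 0.5)  # neutral prior (assumed)
        target = 1.0 if kept else 0.0
        self.trust[promise.promiser] = t + rate * (target - t)

alice, bob = Agent("alice"), Agent("bob")
p = Promise(promiser="bob", promisee="alice", body="deliver report daily")
# Feedback loop: alice revises her trust in bob after each outcome.
for kept in [True, True, False, True]:
    alice.assess(p, kept)
print(round(alice.trust["bob"], 3))  # → 0.635
```

The key design point the sketch tries to capture is autonomy: bob cannot set alice's trust value; only alice's own assessments of observed outcomes change it.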