More Is Different: Toward a Theory of Emergence in AI-Native Software Ecosystems

arXiv cs.AI / April 23, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that multi-agent AI systems can degrade software ecosystems in ways that traditional software engineering theories cannot fully explain, even when individual agents behave correctly.
  • It proposes treating AI-native software ecosystems as complex adaptive systems (CAS), claiming that emergent issues such as architectural entropy, cascade failures, and comprehension debt arise primarily from agent interactions rather than from any single component.
  • The authors map Holland’s core CAS properties to observable ecosystem dynamics and differentiate AI-native ecosystems from conventional microservices architectures or typical open-source networks.
  • They introduce a measurement approach for “causal emergence,” defining micro-level state variables and coarse-graining functions to make ecosystem-level investigation more tractable.
  • The work presents seven falsifiable propositions that could challenge or extend Lehman's laws; if confirmed, they would make ecosystem-level monitoring the primary governance mechanism for AI-native systems.

Abstract

Software engineering faces a fundamental challenge: multi-agent AI systems fail in ways that defy explanation by traditional theories. While individual agents perform correctly, their interactions degrade entire ecosystems, revealing a gap in our understanding of software evolution. This paper argues that AI-native software ecosystems must be studied as complex adaptive systems (CAS), where emergent properties like architectural entropy, cascade failures, and comprehension debt arise not from individual components, but from their interactions. We map Holland's six CAS properties onto observable ecosystem dynamics, distinguishing these systems from microservices or open-source networks. To measure causal emergence, we define micro-level state variables, coarse-graining functions, and a tractable measurement framework. Seven falsifiable propositions link CAS theory to software evolution, challenging or extending Lehman's laws where agent-level assumptions fail. If confirmed, these findings would demand a radical shift: ecosystem-level monitoring as the primary governance mechanism for AI-native systems. If refuted, existing theories may only need incremental updates. Either way, this work forces us to ask: Can software engineering's core assumptions survive the age of autonomous agents?
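The measurement framework sketched above (micro-level state variables, coarse-graining functions, causal emergence) can be illustrated with a small numerical example. The sketch below follows the Hoel-style formulation of causal emergence, where effective information (EI) is the mutual information between a maximum-entropy intervention on the current state and the resulting next-state distribution, and emergence is the gain in EI after coarse-graining. The specific transition matrix, the row-averaging coarse-graining rule, and the state partition are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def effective_information(tpm):
    """EI of a transition probability matrix: mutual information between
    a uniform (max-entropy) intervention on the current state and the
    induced next-state distribution."""
    n = tpm.shape[0]
    avg = tpm.mean(axis=0)  # next-state distribution under uniform intervention
    ei = 0.0
    for i in range(n):
        for j in range(n):
            p = tpm[i, j]
            if p > 0:
                ei += (1.0 / n) * p * np.log2(p / avg[j])
    return ei

def coarse_grain(tpm, partition):
    """Coarse-grain a micro TPM into a macro TPM.
    `partition[i]` is the macro state of micro state i. Macro rows average
    the grouped micro rows; macro columns sum the grouped micro columns
    (one simple, assumption-laden choice of coarse-graining function)."""
    m = max(partition) + 1
    macro = np.zeros((m, m))
    for a in range(m):
        rows = [i for i, g in enumerate(partition) if g == a]
        row = tpm[rows].mean(axis=0)
        for b in range(m):
            cols = [j for j, g in enumerate(partition) if g == b]
            macro[a, b] = row[cols].sum()
    return macro

# Hypothetical micro system: states 0-2 wander uniformly among themselves
# (degenerate micro dynamics); state 3 is absorbing.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
# Coarse-graining: lump the noisy block into one macro state.
macro = coarse_grain(micro, [0, 0, 0, 1])

ei_micro = effective_information(micro)
ei_macro = effective_information(macro)
print(f"EI(micro) = {ei_micro:.3f} bits")
print(f"EI(macro) = {ei_macro:.3f} bits")
print(f"causal emergence = {ei_macro - ei_micro:.3f} bits")
```

Here the macro dynamics are a perfect two-state copy (EI = 1 bit) while the micro dynamics waste capacity on within-block noise, so coarse-graining strictly increases effective information. A positive difference is the operational signal that ecosystem-level description carries more causal power than agent-level description.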