More Is Different: Toward a Theory of Emergence in AI-Native Software Ecosystems
arXiv cs.AI / 4/23/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that multi-agent AI systems can degrade software ecosystems in ways that traditional software engineering theories cannot fully explain, even when individual agents behave correctly.
- It proposes treating AI-native software ecosystems as complex adaptive systems (CAS), claiming that emergent issues such as architectural entropy, cascade failures, and comprehension debt arise primarily from agent interactions rather than from any single component.
- The authors map Holland’s core CAS properties to observable ecosystem dynamics and differentiate AI-native ecosystems from conventional microservices architectures or typical open-source networks.
- They introduce a measurement approach for “causal emergence,” defining micro-level state variables and coarse-graining functions to make ecosystem-level investigation more tractable.
- The work presents seven falsifiable propositions that could either challenge or extend Lehman's laws and, depending on confirmation, may require ecosystem-level monitoring to become the primary governance mechanism for AI-native systems.
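The paper's "causal emergence" measurement is described only abstractly here. As a rough illustration of the general idea behind such measures (not the authors' actual method), a Hoel-style effective-information calculation compares a micro-level Markov model of system state against a coarse-grained macro model; if the macro model has higher effective information, the macro description is causally more informative. All state definitions and the toy transition matrix below are hypothetical:

```python
import numpy as np

def effective_information(tpm):
    """Effective information of a Markov transition matrix: the mutual
    information between a uniform (maximum-entropy) intervention over
    states and the resulting next-state distribution."""
    n = tpm.shape[0]
    avg = tpm.mean(axis=0)  # next-state distribution under uniform interventions
    ei = 0.0
    for row in tpm:
        mask = row > 0
        ei += np.sum(row[mask] * np.log2(row[mask] / avg[mask]))
    return ei / n

def coarse_grain(tpm, mapping):
    """Coarse-grain a micro transition matrix given a mapping from each
    micro state to a macro state, averaging uniformly over the micro
    states inside each macro state."""
    macros = sorted(set(mapping))
    idx = {m: [i for i, g in enumerate(mapping) if g == m] for m in macros}
    macro = np.zeros((len(macros), len(macros)))
    for a, m in enumerate(macros):
        row = tpm[idx[m]].mean(axis=0)           # average over micro states in m
        for b, m2 in enumerate(macros):
            macro[a, b] = row[idx[m2]].sum()     # lump probability into macro states
    return macro

# Hypothetical 4-state micro system: states 0-2 churn noisily among
# themselves, state 3 is absorbing. Grouping {0,1,2} into one macro
# state yields a deterministic 2-state macro model with higher EI.
micro_tpm = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
macro_tpm = coarse_grain(micro_tpm, [0, 0, 0, 1])
print(effective_information(micro_tpm))  # ≈ 0.81 bits
print(effective_information(macro_tpm))  # 1.0 bit: causal emergence
```

The gap between the two values (macro EI minus micro EI) is the signature of causal emergence: the coarse description is not just more compact but causally more effective, which is the property the paper argues makes ecosystem-level investigation tractable.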