Shadow-Loom: Causal Reasoning over Graphical World Model of Narratives
arXiv cs.AI / 5/5/2026
Key Points
- The Shadow-Loom framework converts a narrative into a versioned graphical world model on which two reasoning engines operate.
- One engine performs causal reasoning using Pearl’s ladder of causation, while the other uses a counterfactual calculus built for Ancestral Multi-World Networks.
- The framework also introduces a “narrative physics” component that scores the same graph across four reader-oriented structural states (mystery, dramatic irony, suspense, and surprise).
- Large language models are used only at the boundary for tasks like extraction, rendering, and auditing, while core identification, intervention, and counterfactual reasoning are executed in typed code over the graph.
- The authors release Shadow-Loom as an experimental open-source research artifact with code, fixtures, and a pipeline, explicitly positioning it as a research prototype rather than a benchmarked NLP model.
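The summary does not show Shadow-Loom's actual schema or APIs, but the division of labor it describes (typed code over a graph handling intervention and counterfactuals, rungs 2 and 3 of Pearl's ladder) can be illustrated with a minimal, hypothetical structural causal model over narrative events. All names below (`WorldModel`, the toy variables `threat`, `warning`, `survives`) are illustrative assumptions, not the paper's interface:

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """Hypothetical sketch of a typed graphical world model.

    `parents` maps each variable to its causal parents; `mechanisms` maps
    each variable to a structural equation f(parent_values, noise) -> value.
    """
    parents: dict
    mechanisms: dict

    def simulate(self, noise, interventions=None):
        # Rung 2: do(var := x) severs the variable from its mechanism.
        interventions = interventions or {}
        values = {}
        for var in self._topo_order():
            if var in interventions:
                values[var] = interventions[var]
            else:
                pv = {p: values[p] for p in self.parents[var]}
                values[var] = self.mechanisms[var](pv, noise.get(var))
        return values

    def counterfactual(self, recovered_noise, interventions):
        # Rung 3 (assuming abduction has already recovered the noise):
        # replay the same exogenous noise under the intervention.
        return self.simulate(recovered_noise, interventions)

    def _topo_order(self):
        order, seen = [], set()
        def visit(v):
            if v in seen:
                return
            seen.add(v)
            for p in self.parents[v]:
                visit(p)
            order.append(v)
        for v in self.parents:
            visit(v)
        return order

# Toy narrative: a threat arises; the hero may be warned; survival
# depends on either no threat or a warning.
wm = WorldModel(
    parents={"threat": [], "warning": ["threat"], "survives": ["threat", "warning"]},
    mechanisms={
        "threat": lambda pv, u: u,
        "warning": lambda pv, u: pv["threat"] and u,
        "survives": lambda pv, u: (not pv["threat"]) or pv["warning"],
    },
)
noise = {"threat": True, "warning": True}
factual = wm.simulate(noise)                              # hero is warned, survives
cf = wm.counterfactual(noise, {"warning": False})         # "what if no warning?"
```

In this toy run the factual world has the hero surviving, while the counterfactual world (same exogenous noise, `do(warning := False)`) does not. This is the kind of query that, per the summary, Shadow-Loom executes in typed code over the graph rather than delegating to an LLM.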