Automating Domain-Driven Design: Experience with a Prompting Framework

arXiv cs.AI / 3/30/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper proposes a prompting framework that uses structured LLM interactions to automate key Domain-Driven Design (DDD) activities via five sequential steps, from creating a ubiquitous language to mapping technical architecture.
  • In a case study using FTAPI’s enterprise platform requirements, the framework produced useful, usable artifacts for the early stages (Steps 1–3), including outputs like glossaries and context identification.
  • The authors found that inaccuracies in later steps (Steps 4–5) can propagate and accumulate, making the resulting artifacts impractical, which limits the framework’s ability to achieve full automation.
  • Overall, the framework is positioned as a collaborative “sparring partner” that reduces overhead and effort for DDD documentation while keeping critical trade-offs under human expert control.
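The five-step pipeline described above can be sketched as a simple sequential prompt chain. This is an illustrative assumption, not the authors' actual prompts or interface: the step instructions and the `call_llm` callable are hypothetical stand-ins, and in practice each step would be replaced by the paper's structured prompts.

```python
# Hypothetical sketch of the five sequential DDD prompting steps.
# Step names follow the paper; the prompt wording and the `call_llm`
# interface are illustrative assumptions.
STEPS = [
    ("ubiquitous_language", "Build a glossary of domain terms from these requirements."),
    ("event_storming", "Simulate an event storming session: list domain events, commands, and actors."),
    ("bounded_contexts", "Group the events into bounded contexts and draft a context map."),
    ("aggregates", "Design aggregates with invariants for each bounded context."),
    ("technical_architecture", "Map the aggregates and contexts to a technical architecture."),
]

def run_ddd_pipeline(requirements: str, call_llm) -> dict:
    """Run the five DDD steps in sequence, feeding each step's output forward."""
    artifacts = {}
    context = requirements
    for name, instruction in STEPS:
        prompt = f"{instruction}\n\nInput:\n{context}"
        artifacts[name] = call_llm(prompt)
        # Each step consumes the previous step's artifact, which is why
        # inaccuracies introduced early on propagate into Steps 4 and 5.
        context = artifacts[name]
    return artifacts
```

The strictly sequential dependency is what makes the error accumulation in Steps 4 and 5 visible: there is no human review gate between steps, so a "sparring partner" workflow would instead pause for expert correction after each artifact.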

Abstract

Domain-driven design (DDD) is a powerful design technique for architecting complex software systems. This paper introduces a prompting framework that automates core DDD activities through structured large language model (LLM) interactions. We decompose DDD into five sequential steps: (1) establishing a ubiquitous language, (2) simulating event storming, (3) identifying bounded contexts, (4) designing aggregates, and (5) mapping to technical architecture. In a case study, we validated the prompting framework against real-world requirements from FTAPI's enterprise platform. In our evaluation, Steps 1 to 3 consistently generated valuable, usable artifacts, but minor errors and inaccuracies propagated and accumulated, rendering the artifacts generated from Steps 4 and 5 impractical. Overall, the framework excels as a collaborative sparring partner for building actionable documentation, such as glossaries and context maps, rather than as a tool for full automation; this allows experts to concentrate their discussion on the critical trade-offs. Our findings show that LLMs can enhance, but not replace, architectural expertise, offering a practical tool to reduce the effort and overhead of DDD while preserving human-centric decision-making.