Over the past year, we've all seen the magical "build any app from one prompt" demos. They are fantastic for landing pages and generic scaffolding. But if you've ever tried asking an LLM to iteratively build and maintain a massive, complex business system, you know the harsh reality: the context window fills up, the AI hallucinates bad architecture, and your repo turns into unmaintainable spaghetti code.
I was tired of endlessly prompting agents to rewrite sprawling React and Java files, so I took a different route.
I built Loj, an open-source, AI-native DSL family explicitly designed for generating full-stack business systems.
The Core Bet: Narrowing the AI's Focus
The core philosophy behind Loj is simple: LLMs are incredibly good at working against a narrow, stable, and schema-checked domain language. They are notoriously bad at managing 50,000 lines of imperative UI state and backend boilerplate over time.
Instead of making the AI directly manipulate a large framework codebase, Loj provides a smaller, structured semantic surface. You (or your AI agent) write the business intent using extremely dense files like .web.loj, .api.loj, and .rules.loj.
Then, our deterministic compilers expand that narrow intent into standard, production-ready framework code. Right now, a single Loj project can generate:
- Frontend: React / TypeScript
- Backend: Spring Boot (Java) or FastAPI (Python)
And because we know no DSL can cover 100% of real-world edge cases, Loj embraces explicit escape hatches. Instead of pretending you'll "never write code again," we push the handwritten code to a much smaller, clearer edge where the compiler cleanly wires it up for you.
The Disappearing Bridge: Full-stack Workflow Atomicity
When you implement a "Ticket Approval" flow in a traditional stack, you usually write the same logic in four places:
- Backend: an interceptor checking if status == 'PENDING' && role == 'ADMIN'.
- Frontend: conditional rendering to show/hide the "Approve" button.
- API: a dedicated /approve endpoint definition.
- State management: manually refreshing the list or redirecting after the call.
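To make the duplication concrete, here is a minimal TypeScript sketch of those four places as they'd look in a hand-written stack. All names (Ticket, approveGuard, the endpoint path) are illustrative, not taken from Loj or its generated output:

```typescript
// Hypothetical sketch of the logic a traditional stack duplicates.
type Role = "ADMIN" | "MEMBER";
interface Ticket { id: string; status: "PENDING" | "APPROVED" | "REJECTED"; }

// 1. Backend-style interceptor check (would really live in Java/Python).
function approveGuard(ticket: Ticket, role: Role): boolean {
  return ticket.status === "PENDING" && role === "ADMIN";
}

// 2. Frontend: the same condition, re-implemented for conditional rendering.
function showApproveButton(ticket: Ticket, role: Role): boolean {
  return ticket.status === "PENDING" && role === "ADMIN"; // duplicate of #1
}

// 3. API: a dedicated endpoint the client must know about.
const APPROVE_ENDPOINT = (id: string) => `/tickets/${id}/approve`;

// 4. State management: a manual refresh after the call succeeds.
async function approve(ticket: Ticket, refresh: () => Promise<void>) {
  await fetch(APPROVE_ENDPOINT(ticket.id), { method: "POST" });
  await refresh(); // easy to forget; drifts out of sync with #1 and #2
}
```

The bug surface is exactly the drift between #1 and #2: change the rule in one place, forget the other, and the UI and backend disagree.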
In Loj, that "bridge" code simply disappears.
You define a single transition: approve in your .flow.loj. The Loj compiler acts as an invisible weaver, simultaneously injecting the visibility logic into your React components and the security interceptors into your Spring or FastAPI controllers.
To an engineer, this can feel almost "counter-intuitive" at first because we are trained to look for those manual if statements. But once you embrace the Atomicity of Business Logic, you realize that .flow.loj is effectively a "Full-stack Constitution."
If the constitution allows a transition, the entire stack—from the UI button to the database write—unconditionally complies. It eliminates that classic, frustrating category of bugs where the UI allows an action but the backend returns a 403.
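The pattern the compiler effectively produces can be sketched as a single shared guard. This is illustrative TypeScript under my own naming, not actual Loj compiler output:

```typescript
// Single-source-of-truth sketch: one transition guard, consumed by both
// the UI and the server handler. In Loj this rule would come from
// .flow.loj; here it is hand-written for illustration.
interface Ctx { status: string; role: string; }

// The one place the rule lives.
const canApprove = (ctx: Ctx): boolean =>
  ctx.status === "PENDING" && ctx.role === "ADMIN";

// UI side: button visibility derives from the same guard.
const approveButtonVisible = (ctx: Ctx): boolean => canApprove(ctx);

// Server side: the handler rejects anything the guard forbids, so the UI
// can never show an action the backend would answer with a 403.
function handleApprove(ctx: Ctx): { status: number } {
  if (!canApprove(ctx)) return { status: 403 };
  return { status: 200 }; // perform the transition
}
```

Because both sides derive from the same predicate, the "UI shows a button, backend returns 403" class of bugs is impossible by construction.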
Why not just use another AI App Builder?
A lot of current AI tooling is optimized for visual websites or raw code manipulation. That’s useful, but it wasn't the problem I needed to solve.
I care about heavy business systems: booking platforms, procurement trackers, internal approval workflows, and customer portals. These systems aren't hard because they need "creative UI." They are hard because they rely heavily on structured domain models, rigid eligibility rules, and complex handoffs between screens and backend states.
When you define a form or an API endpoint in Loj, the intent stays dense. If a production bug occurs, you don't patch an opaque, generated blob. You trace the bug to the predictable .loj source or your explicit native escape hatch. It brings traceability back to AI generation.
The Flight-Booking Proof of Concept
To prove this isn't just a toy, the canonical example in the repo right now is a full-stack flight-booking system. It includes search flows, member history, selection-driven handoffs, and complex read models.
I track "semantic escape" — the fraction of handwritten native code relative to the total generated output — because I want to know if the DSL is actually doing its job. In the current booking proof, the combined semantic escape across front and back ends sits at roughly 1%.
That means ~1,390 lines of Loj DSL expand into roughly 11k–13k lines of generated React and Spring Boot code, with only a tiny fraction of native handwritten escape code needed for mock data and bespoke wiring.
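As a back-of-envelope check on those figures (all numbers assumed from the paragraph above, with the 11k–13k range taken at its midpoint, not re-measured here):

```typescript
// Rough arithmetic on the reported figures; these constants are the
// post's claimed numbers, not independently measured.
const dslLines = 1390;        // hand-written Loj DSL
const generatedLines = 12000; // midpoint of the 11k-13k range
const escapeFraction = 0.01;  // ~1% semantic escape

// Each DSL line expands into roughly this many lines of framework code.
const expansionRatio = generatedLines / dslLines; // ≈ 8.6x

// Handwritten native code needed at the edges, under the ~1% claim.
const handwrittenLines = generatedLines * escapeFraction; // ≈ 120 lines
```

So under these assumptions, maintaining the system means owning ~1,390 dense DSL lines plus ~120 native lines, instead of ~12,000 lines of framework code.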
What Loj is (and isn't) good at
I don’t want to oversell this. Loj is a highly opinionated tool.
It IS exceptionally good at:
- Back-office admin panels
- Workflow-heavy CRUD-plus systems
- Rule-driven transactional apps where frontend/backend coordination is painful
It is NOT a good fit for:
- High-brand, visually bespoke marketing sites
- Deeply interactive 3D/Canvas apps
- Highly custom real-time collaboration tools (like Figma clones)
Try it out (and let the AI do the work)
If you are building an AI agent or using tools like Windsurf/Cursor, you can install the loj-authoring skill bundle, which teaches your LLM exactly how to write the DSL for you. A beta VS Code extension with syntax highlighting is also available.
Install the CLI:
npm install -g @loj-lang/cli
loj --help
# Or without global install: npx @loj-lang/cli --help
Install the AI Authoring Skill (for Codex/Agents):
npx @loj-lang/cli agent add codex --from https://github.com/juliusrl/loj/releases/download/v0.5.0/loj-authoring-0.5.0.tgz
Check out the Code:
The repo is live at github.com/juliusrl/loj.
If you take a look, I'd love to hear your thoughts on architecture:
- Does a "DSL-first" approach feel more robust to you for managing AI generation long-term?
- Do you prefer explicit escape hatches over direct AI refactoring?
Let me know in the comments!


