Most software projects lie. Not maliciously — structurally.
A file called auth.py exists, so authentication is "done." A button renders on screen, so the feature is "shipped." A test file named test_payments.py exists, so payments are "tested." A README says "fully integrated with Stripe," so billing is "working."
None of that is necessarily true. File existence is not feature reality. UI existence is not backend implementation. Test names are not coverage. Docs are not proof.
I kept running into this across every project I worked on — AI apps, trading systems, automation tools, research platforms. The same pattern everywhere: optimistic claims, missing wiring, security bolted on as an afterthought (or not at all), and zero governance over what AI agents were actually allowed to do.
So I built something to fix it.
The problem is deeper than bad documentation
The real issue is that modern software projects — especially AI-powered ones — have no operating model. They have code. They have features. They have a README. But they don't have:
- Truth tracking. Nobody records what is actually working vs. what is claimed, partial, stubbed, dead, or misleading.
- Control governance. AI agents act without permission models, approval gates, or audit trails. "Autonomous" becomes a synonym for "ungoverned."
- Memory management. Every session starts from scratch. What was tried, what failed, what was decided — all lost.
- Evaluation discipline. There's no systematic way to verify that what was built matches what was intended. Claims float without evidence.
- Security architecture. Security is either absent or exists as documentation theater — files that nobody reads and nothing enforces.
These problems compound. A project that can't tell the truth about itself can't be secured. A project that can't be secured can't be trusted. A project that can't be trusted can't scale.
What I built
Khaeldur ProjectOS is a singular AI operating system that adds truth, control, memory, evaluation, security, and workflow governance to software projects.
It is not a scaffold generator. It does not create a folder structure and walk away. It defines a living operating model that governs how a project behaves.
The core idea is what I call the Singularity Principle: everything operates under one universal model.
- One truth vocabulary. Every feature is tracked as WORKING, PARTIAL, STUB, DEAD, MISLEADING, MISSING, or NOT VERIFIED. No project gets to invent its own status language.
- One layer model. Every project is reasoned about through the same eight layers: truth, architecture, operations, intelligence, control, business, documentation, and security.
- One manifest. Every project resolves into a single projectos.yaml contract that declares its identity, layers, rules, and governance posture.
- One extension logic. Domain packs can extend the core for trading, OSINT, medical, AI apps, or any other domain, but they can never fork it.
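To make the idea concrete, here is a minimal sketch of what such a manifest could look like. The field names and layout below are my own illustrative assumptions, not the published schema:

```yaml
# Hypothetical projectos.yaml sketch -- field names are illustrative,
# not taken from the actual ProjectOS schema.
identity:
  name: example-trading-system
  domain: trading
layers:
  truth:
    features:
      - name: order-execution
        status: WORKING        # one of the seven universal statuses
      - name: risk-limits
        status: PARTIAL
  security:
    threat_model: docs/threat-model.md
governance:
  approval_gates: true
  audit_trail: true
```

The point is not the exact keys but the shape: identity, layers, and governance posture declared in one place, using the one truth vocabulary.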
This means a trading system and an AI chatbot and a research platform can all be audited, compared, and governed through the same universal operating model.
What exists today
The v0.1 public foundation is 89 files across 16,000+ lines of real content. Not placeholders. Not aspirational stubs. Real, structured, usable material:
- 47 governance documents covering architecture, singularity definition, feature truth matrix, universal rules, security architecture, threat modeling, misuse modeling, access control, incident response, ISO alignment matrix, AI risk register, quality model, and more.
- 12 Python audit tools that run locally with zero dependencies: repo structure audit, stub scanner, secret scanner, singularity alignment checker, release readiness verifier, integrity checker, wiring audit, and others.
- 5 JSON schemas for the manifest, skills, agents, workflows, and feature truth records.
- 8 universal prompt files for bootstrapping new projects, auditing existing repos, enforcing singularity, and establishing security posture.
- 9 domain pack stubs for AI apps, trading systems, OSINT, medical/vision, content/marketing, automation, local assistants, research, and agentic SaaS.
- A GitHub Actions CI pipeline that runs syntax checks, repo audit, stub scan, secret scan, and release readiness on every push.
The entire framework is designed to be ISO-aligned (mapped to ISO/IEC 42001, 23894, 27001, and 25010) without claiming certification. Audit-friendly, not audit-theater.
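The CI pipeline described above could be sketched roughly as follows. The workflow structure is real GitHub Actions syntax, but the script paths and job names are my assumptions, not the repo's actual file names:

```yaml
# Hypothetical workflow sketch in the spirit of the pipeline above;
# the tools/*.py paths are illustrative assumptions.
name: projectos-audit
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Syntax check
        run: python -m compileall -q .
      - name: Repo audit
        run: python tools/repo_audit.py
      - name: Stub scan
        run: python tools/stub_scan.py
      - name: Secret scan
        run: python tools/secret_scan.py
      - name: Release readiness
        run: python tools/release_readiness.py
```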
Who this is for
- Teams building AI applications who need governance from day one, not as a retrofit
- Solo developers who want structured control without enterprise overhead
- Organizations migrating messy repositories toward a universal operating model
- Builders of autonomous agents who need permission boundaries and safety controls
- Anyone shipping software where truth, trust, and traceability matter
What comes next
The v0.1 foundation is documentation, tools, and schemas. The roadmap is concrete:
- Near-term: CLI tool for project bootstrapping and auditing. Brownfield adapter for migrating existing repos non-destructively. Extended domain pack content.
- Medium-term: Agent runtime with permission enforcement. Workflow engine with approval gates. Memory persistence layer. Evaluation harness.
- Long-term: Multi-repo governance. Continuous truth verification in CI/CD. ISO-aligned audit evidence generation.
The repo
Everything is open source under MIT:
https://github.com/Khaeldur/khaeldur-project-os
Run the audit tools against your own projects. Read the singularity definition. Look at the truth matrix. If your project can't tell the truth about itself, this might be useful.
Contributions welcome — especially domain packs, tool improvements, and real-world governance patterns from teams who have felt this pain.
Khaeldur ProjectOS — one system, one truth, one direction.