A few projects have started calling themselves "AI-native" or "AI-first" languages. The pitch is usually the same: fewer tokens, one way to write things, simpler syntax. The metric is input cost — how cheaply can an LLM produce a file.
The bottleneck is not generation cost. It's comprehension cost: can the next agent that touches this code understand what it does, what it's allowed to do, and why it was written this way — without reading every line? Generation is fast and getting cheaper. Understanding is slow and getting more expensive, because the information an agent needs is almost never in the source text itself.
Aver is an AI-native language built around this problem. Not "fewer tokens in." More understanding out.
Every codebase is an archaeology site
Give Claude or GPT a Python project and ask it to add a feature. It reads files — lots of files. It guesses which ones matter. It infers intent from variable names and comments that may be outdated. It has no reliable way to know which functions talk to the network, which ones are pure, and which architectural choices were made deliberately vs. inherited from a Stack Overflow answer in 2019.
The AI is reconstructing information that the original author had but didn't encode in the artifact. Intent, constraints, design rationale — all of it exists only as implicit patterns in the code, if it exists at all.
The code has no API for the thing that reads it most often.
Token efficiency is real. It's just not the whole problem.
There's a popular thesis in the AI-first language space: fewer options, fewer libraries, a shorter spec for the model to memorize. Reduce choice paralysis, reduce token cost, and the language becomes better for AI.
That's a real optimization target. A language that takes 900 tokens to produce a CRUD endpoint instead of 1,800 is genuinely cheaper to generate. But a short program is not a legible program. You can win on token efficiency and still end up with code where the next agent doesn't know the intent behind a function, can't tell which calls have side effects, doesn't know why one approach was chosen over another, has no expected behavior to compare against, and has to regex-parse error output to figure out what went wrong.
Token efficiency helps code get written. A semantic surface helps code get understood, reviewed, repaired, and evolved. Aver doesn't compete primarily on the fewest tokens to produce a program; it competes on how much meaning survives after the program exists.
What an AI-first language exposes to agents
The language encodes intent, effects, architectural decisions, and expected behavior as part of its grammar — parsed, type-checked, enforced, and exportable:
fn fetchUser(id: String) -> Result<HttpResponse, String>
? "Fetches a user record by ID from the external API."
! [Http.get]
Http.get("https://api.example.com/users/{id}")
The ? line is a description literal — part of the function's signature, parsed by the compiler, exported by tooling. It's not a comment next to code; it's a declaration about the code that the toolchain knows about.
! [Http.get] is a declared effect — enforced statically and at runtime. If the function body calls Disk.writeText without declaring it, that's a compile error. An agent reading this signature knows the complete set of side effects without reading the body.
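The effect check itself is conceptually a set comparison between what a function declares and what its body actually calls. A rough Python sketch of the idea — not Aver's actual compiler, and the diagnostic wording here is invented for illustration:

```python
# Conceptual sketch of a static effect check, not Aver's implementation.
# A function declares its effects; the checker compares that declaration
# against the effectful calls found in the body and rejects any mismatch.

def check_effects(declared: set[str], called: set[str]) -> list[str]:
    """Return diagnostics for effects used in the body but not declared."""
    return [
        f"error[undeclared-effect]: body calls {effect} "
        f"but it is not listed in the effect declaration"
        for effect in sorted(called - declared)
    ]

# fetchUser declares `! [Http.get]`, but suppose the body also writes to disk:
diagnostics = check_effects(
    declared={"Http.get"},
    called={"Http.get", "Disk.writeText"},
)
# One diagnostic: Disk.writeText is called but never declared.
```

The point of the set-difference framing is that the signature stays a complete, trustworthy summary: anything the body does that the declaration doesn't cover is an error, not a silent omission.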
These ideas aren't novel individually. What matters is that they compose into a single exportable surface — and that there's a concrete command that exports it.
aver context: where an agent enters the codebase
This is the central piece.
When an AI agent starts working on an Aver project, it doesn't read source files. It runs aver context. This is the intended interface — the front door to the codebase for any agent.
aver context examples/core/calculator.av
Output:
## Module: Calculator
> Safe calculator demonstrating Result types, match expressions,
> and co-located verification. Errors are values, not exceptions.
### `safeDivide(a: Int, b: Int) -> Result<Int, String>`
> Safe integer division. Returns Err when divisor is zero.
verify: `safeDivide(7, 0) => Result.Err("Division by zero")`,
`safeDivide(0, 5) => Result.Ok(0)`,
`safeDivide(9, 3) => Result.Ok(3)`
### `safeRoot(n: Int) -> Result<Int, String>`
> Returns Err for negative input, Ok otherwise. Uses match on a bool expression.
verify: `safeRoot(0 - 1) => Result.Err("Cannot take root of negative number")`,
`safeRoot(0 - 99) => Result.Err("Cannot take root of negative number")`,
`safeRoot(0) => Result.Ok(0)`
### Decision: NoExceptions (2024-01-15)
**Chosen:** "Result" — **Rejected:** "Exceptions", "Nullable"
> Exceptions make error paths invisible at the call site.
> Result forces the caller to acknowledge failure explicitly,
> which is essential when AI tooling reads cod…
impacts: `safeDivide`, `safeRoot`
No implementation details. Signatures, descriptions, effects, expected behavior from verify blocks, and the design decisions that constrain the module. In ~2k tokens the agent gets the contracts before the implementation. Compare that with dumping 50k tokens of raw source into a context window.
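Each verify line in that output is an executable expectation: a call and the value it must produce. A conceptual Python stand-in for what a verify runner checks — Ok/Err here model Aver's Result type, and none of this is Aver's actual machinery:

```python
# Python stand-ins for the Aver examples above; Ok/Err model Result values.
def Ok(v): return ("Ok", v)
def Err(e): return ("Err", e)

def safe_divide(a: int, b: int):
    """Mirrors the safeDivide contract: Err on zero divisor, Ok otherwise."""
    return Err("Division by zero") if b == 0 else Ok(a // b)

# Each verify line is a (call, expected) pair; a runner evaluates the call
# and compares the result against the expectation.
cases = [
    (lambda: safe_divide(7, 0), Err("Division by zero")),
    (lambda: safe_divide(0, 5), Ok(0)),
    (lambda: safe_divide(9, 3), Ok(3)),
]
failures = [exp for call, exp in cases if call() != exp]  # empty when all pass
```

Because the expectations live next to the signature, an agent can check its mental model of a function against concrete behavior without ever opening the body.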
aver context app.av --budget 10kb
aver context app.av --focus processOrder
aver context app.av --decisions-only
Start high, zoom in. The agent reads the contract map first, then drills into the functions that actually need attention.
Failures are parseable, not just readable
When something breaks, aver check emits structured diagnostics with repair suggestions and source snippets:
error[type-error]: Function 'wrongReturn': body returns String but declared return type is Int
at: test_errors.av:30:1
|
30 | fn wrongReturn() -> Int
| ^^^ declared Int
31 | ? "Returns wrong type (type checker error)."
32 | "oops"
| ^^^^^^ returns String
warning[perf-string-concat]: string concatenation with `acc` in recursive call
at: lint_demo.av:20:31
in-fn: repeat
repair: O(n²) per iteration; consider collecting into a list and joining
Every diagnostic has a machine-readable slug, a source location down to the column, and a repair suggestion. With --json, the same diagnostics come out as NDJSON — one schema across check, verify, and replay — so an agent can categorize errors and apply fixes programmatically.
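Given one JSON object per line, agent-side triage is a short script. The field names below ("severity", "slug", "at", "repair") are assumptions mirroring the human-readable output above, not Aver's documented schema:

```python
import json
from collections import defaultdict

# Hypothetical NDJSON as a diagnostics tool might emit it; the field names
# are illustrative assumptions, echoing the human-readable output.
ndjson = """\
{"severity": "error", "slug": "type-error", "at": "test_errors.av:30:1", "repair": null}
{"severity": "warning", "slug": "perf-string-concat", "at": "lint_demo.av:20:31", "repair": "collect into a list and join"}
"""

# Group diagnostics by slug so fixes can be applied category by category.
by_slug: dict[str, list[dict]] = defaultdict(list)
for line in ndjson.splitlines():
    diag = json.loads(line)
    by_slug[diag["slug"]].append(diag)

# Errors block the build; warnings can be queued behind them.
errors = [d for ds in by_slug.values() for d in ds if d["severity"] == "error"]
```

This is the difference a stable slug makes: the agent branches on `diag["slug"]`, not on a regex over prose.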
Decisions survive across sessions
The hardest information to preserve across AI sessions is why. An agent can re-derive what the code does by reading it. It cannot re-derive why it was written that way.
Aver's decision blocks encode rationale as first-class syntax:
decision TailRecurrenceForPerformance
date = "2026-02-24"
reason =
"Naive fib(n-1)+fib(n-2) is exponential and easy for AI to generate."
"Tail recursion makes fib linear time and predictable."
chosen = "TailRecursion"
rejected = ["NaiveRecursion"]
impacts = [fib, fibTR]
Parsed, validated, exported via aver context --decisions-only. When an agent touches fib three months from now, it reads this block and knows the exponential version was considered and rejected — with an explicit reason.
I haven't seen many languages treat architectural rationale as grammar rather than convention. That doesn't mean it's a solved problem — enforcing decision quality is still on the author. But the toolchain exposes rationale the same way it exposes types and effects.
Record/replay closes the loop on effects
Pure functions have verify blocks. Effectful code has recordings. aver run --record captures every effectful interaction with caller, arguments, and outcome. aver replay --test --diff re-executes against that recording deterministically. If the code drifts, you get a structured diagnostic — which effect changed, in which function, at which step.
What this adds up to
Aver composes intent, effects, decisions, expected behavior, and structured failures into one exportable surface. aver context is the entry point. The rest of the toolchain feeds into it. The agent gets contracts before implementation, rationale before refactoring, and parseable failures when things break.
Aver is early and incomplete. But this semantic surface already works today. The repo is here, the manifesto is here. cargo install aver-lang, point aver context at a module, and read what comes out.
Previous posts: A prompt is a request, a language is the law | The most boring games you have Aver seen | I gave my language VM four memory lanes


