Wire ll-lang into Claude Code, Cursor, or Zed in 30 Seconds

Dev.to / 4/27/2026


Key Points

  • The article argues that getting an LLM to work effectively with ll-lang is best done via MCP rather than relying on parsing raw shell/terminal output.
  • ll-lang includes a built-in MCP server that can be launched with the command `lllc mcp`, enabling editors/agents to call compiler and project tooling as structured tools.
  • Once configured, agents can request actionable, tool-backed answers such as whether code compiles, what specific error codes mean, where symbols are defined, and which build targets are available, receiving structured JSON responses.
  • The guide provides a configuration example for wiring the MCP server into clients like Claude Code, Cursor, or Zed, including where MCP config files typically live and how to confirm the server appears after restart.
  • It notes that the existing ll-lang documentation (README and `docs/user-guide/09-mcp.md`) enumerates many MCP tools (around 30) covering core compile/check and other IDE-style workflows for an AI coding loop.


If you want an LLM to write ll-lang productively, the best setup is not "open a terminal and hope the model can parse shell output." The better setup is MCP.

ll-lang ships with a built-in MCP server, started with lllc mcp, so your editor can call the compiler and project tooling as structured tools.

That means your agent can ask questions like:

  • does this compile?
  • what does E005 mean?
  • where is this symbol defined?
  • what targets can I build to?

And get structured JSON back instead of scraped terminal text.
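To make that concrete, here is what consuming such a reply might look like. The exact schema is not specified in this article, so the field names below (ok, diagnostics, code, message, line) are illustrative assumptions, not ll-lang's actual wire format:

```python
import json

# A hypothetical reply to a "does this compile?" (check_source) call.
# Field names here are illustrative assumptions, not ll-lang's real schema.
reply = json.loads("""
{
  "ok": false,
  "diagnostics": [
    {"code": "E005", "message": "untagged value where a tagged value was required", "line": 12}
  ]
}
""")

if reply["ok"]:
    print("compiles cleanly")
else:
    # The agent can act on exact fields instead of scraping terminal text.
    for d in reply["diagnostics"]:
        print(f'{d["code"]} at line {d["line"]}: {d["message"]}')
```

The point is not this particular shape but that there is a shape at all: the agent branches on a boolean and iterates a list, rather than regex-matching compiler output.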

1. Add the MCP server

Use the current config shape from the ll-lang README and MCP user guide:

{
  "mcpServers": {
    "ll-lang": {
      "command": "lllc",
      "args": ["mcp"]
    }
  }
}

If lllc is not on your $PATH, replace the command value with the absolute path to the binary. In a local repo checkout, you can also point it at the bootstrap wrapper if that is your preferred install path.
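For example, the absolute-path variant of the same config looks like this (the path below is a placeholder; substitute wherever your lllc binary actually lives):

```json
{
  "mcpServers": {
    "ll-lang": {
      "command": "/home/you/.local/bin/lllc",
      "args": ["mcp"]
    }
  }
}
```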

Typical locations:

  • Claude Code: ~/.config/claude/mcp.json
  • Cursor: project .cursor/mcp.json or equivalent MCP settings path
  • Zed: your MCP server settings file

The important part is the command itself: lllc mcp.
The server name key is up to you: the README currently uses lllc, while the MCP guide uses ll-lang.

2. Restart the client and confirm the server appears

Once the client reloads MCP servers, ll-lang should show up as an available tool provider.

At that point your agent can call ll-lang directly instead of inferring behavior from shell commands.

3. What tools you get

The current README and docs/user-guide/09-mcp.md document 30 tools. They cover the workflow you actually want in an AI coding loop:

  • Core compile/check: compile_source, check_source, compile_file, check_file
  • Diagnostics and repair: diagnose_source, diagnose_file, explain_error, fix_suggest, apply_fix_preview
  • Formatting and AST inspection: format_source, format_file, parse_source, typed_ast
  • Project-level operations: project_graph, check_project, build_project
  • Symbol navigation: symbols, definition, references
  • Dependency helpers: mod_add, mod_tidy, mod_why
  • FFI helpers: ffi_inspect, ffi_validate
  • Test helpers: test_list, test_run
  • Catalog/meta: stdlib_search, list_errors, lookup_error, list_targets

That is enough for a serious inner loop. The model does not just "write code"; it can inspect the project, ask for diagnostics, search the stdlib, and repair errors with structured feedback.

4. The shortest useful workflow

Once MCP is wired in, the productive loop is simple:

  1. Ask the model to draft a small ll-lang module.
  2. Call check_source or check_file.
  3. If an error comes back, call lookup_error or explain_error.
  4. Apply the fix and re-run check_source.
  5. When clean, use build_project or compile_file.

That loop is much tighter than:

write -> run shell command -> inspect mixed stdout/stderr -> guess what failed
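In agent-framework terms, the five steps above are just a bounded retry around check_source. The sketch below substitutes a stand-in call_tool function for a real MCP client, with invented response fields and a canned fix, purely to show the control flow:

```python
def call_tool(name, **args):
    # Stand-in for a real MCP client call. Returns canned responses so the
    # loop below is runnable; the field names are invented, not ll-lang's.
    if name == "check_source":
        ok = "Str[UserId]" in args["source"]
        return {"ok": ok, "diagnostics": [] if ok else [{"code": "E005"}]}
    if name == "lookup_error":
        return {"code": args["code"], "summary": "a tagged value was required"}
    raise ValueError(f"unknown tool: {name}")

source = "fn greet(name: Str) -> Str { ... }"    # step 1: draft from the model
for _ in range(3):                                # bounded repair loop
    result = call_tool("check_source", source=source)   # step 2
    if result["ok"]:
        break
    hint = call_tool("lookup_error", code=result["diagnostics"][0]["code"])  # step 3
    # Step 4: a real agent would rewrite `source` using `hint`;
    # here we apply the "fix" mechanically.
    source = source.replace("Str)", "Str[UserId])")

print("clean" if result["ok"] else "still failing")
```

Bounding the loop matters: without a retry cap, a model that keeps producing the same error will spin forever against the checker.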

5. Why this setup matters

ll-lang is designed for LLM code generation, so the compiler output is part of the authoring experience, not just a final gate.

Examples:

  • E005 TagViolation tells the agent it passed an untagged value where a tagged value was required.
  • E004 UnitMismatch tells it that incompatible units were mixed in an arithmetic expression.
  • lookup_error can turn an error code into a short explanation without making the model search docs manually.

Because the protocol is structured, the model can route on exact fields instead of trying to interpret prose or terminal formatting.
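As a toy illustration of field-based routing (the diagnostic shape is again an assumption on my part, though the E004/E005 meanings come from the article above), an agent can dispatch on the error code directly:

```python
# Dispatch on the machine-readable error code rather than parsing prose.
# E004/E005 meanings are from the ll-lang docs; the diagnostic shape is assumed.
HANDLERS = {
    "E004": lambda d: f"unit mismatch ({d['message']}): insert a unit conversion",
    "E005": lambda d: f"tag violation ({d['message']}): tag the value before passing it",
}

def route(diagnostic):
    handler = HANDLERS.get(diagnostic["code"])
    if handler is None:
        return f"unknown code {diagnostic['code']}: call lookup_error"
    return handler(diagnostic)

print(route({"code": "E005", "message": "untagged value passed"}))
print(route({"code": "E999", "message": "?"}))
```

The fallback branch is the interesting part: an unrecognized code becomes a lookup_error call, not a guess.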

6. Minimal prompt to test it

After MCP is connected, try this:

Create a small ll-lang module with tag UserId, a function that accepts Str[UserId], and then check whether it compiles. If it fails, fix it using the ll-lang MCP tools.

If the wiring is correct, the model should use ll-lang tools directly rather than defaulting to shell output parsing.

7. Install and repo links

Repo: https://github.com/Neftedollar/ll-lang
Landing page: https://neftedollar.com/ll-lang/

Bootstrap path from the README:

git clone https://github.com/Neftedollar/ll-lang.git
cd ll-lang
LLLC_BOOTSTRAP_REINSTALL=1 ./tools/check-selfhost-ci.sh

Then keep the MCP config in place:

{
  "mcpServers": {
    "ll-lang": {
      "command": "lllc",
      "args": ["mcp"]
    }
  }
}
