If you’ve spent any time around developer tooling, you’ve probably internalized the idea that code intelligence means LSP.
That makes sense. The Language Server Protocol changed the industry by giving editors a standard way to talk to language-aware tooling. Hover, autocomplete, go-to-definition, diagnostics, rename, references: LSP turned that entire class of features into a portable interface.
But AI coding assistants are not just faster humans sitting inside an editor.
That difference matters.
At Zaguán Blade, we’ve been building ZLP, the Zaguán Language Protocol, because we think AI coding needs a different contract than the one LSP was designed for. Not because LSP is bad. Quite the opposite. LSP is excellent at what it was made for.
ZLP exists because AI has a different job to do.
What ZLP is
At a high level, ZLP is an AI-first protocol for code analysis and validation.
It gives an AI assistant a fast, structured way to ask questions like:
- Is this code syntactically valid?
- What functions, types, or imports are in this file?
- If I change this symbol, what else might break?
- What is the minimal context I need to understand this part of the codebase?
- Can I validate this edit before anything touches disk?
That last point is especially important.
A lot of AI coding pain comes from a simple pattern: the model makes an edit, writes it, discovers it is broken, edits again, writes again, and so on. That loop is noisy, brittle, and expensive.
ZLP is designed to tighten that loop.
Inside Blade, it can parse code quickly, extract structure, run native validation tools, assemble targeted context, and support validated mutation flows so the assistant can check work before committing it.
In practical terms, that means ZLP is less about “help me type code” and more about:
- Validate generated code
- Surface structure without excess ceremony
- Give the model just enough context
- Preserve edit intent when applying changes
What ZLP is not
Just as important: ZLP is not our attempt to reinvent the IDE stack.
It is not:
- A drop-in replacement for LSP
- A universal editor protocol for hover menus and autocomplete popups
- A full clone of language-server behavior
- A bet on huge, always-on, state-heavy background indexing as the default model
We are not trying to reproduce every feature a human developer expects from a traditional editor.
That is intentional.
LSP was built for a world where a human is writing code keystroke by keystroke and the machine’s job is to assist that human in real time.
Blade’s world is different. In our loop, the AI is often proposing entire edits, validating them, revising them, and reasoning over code structure at a much larger granularity than a single cursor movement.
So instead of asking, “How do we expose every IDE affordance to the model?” we asked a different question:
What is the smallest, fastest, most useful protocol surface for an AI that already understands programming, but still needs grounded feedback from the actual codebase and toolchain?
That question leads you somewhere other than full LSP.
Why we didn’t go all the way to LSP
There are a few reasons.
1. AI does not need the same kind of hand-holding as a human editor user
A human developer benefits from completion menus, tooltip documentation, and a constant UI feedback loop.
An AI model usually does not need that same presentation layer.
What it needs is something much more direct:
- the current code
- the relevant symbols
- the real errors
- the likely impact of a change
- a way to validate before writing
In other words, AI often benefits more from clean machine-readable structure and grounded validation than from a feature-rich interactive editor protocol.
2. We wanted a protocol that is AI-native, not UI-native
LSP is fundamentally shaped by IDE workflows: initialize a server, maintain long-lived state, support a wide range of editor features, and optimize for interactive human use.
ZLP is shaped around AI iteration loops instead.
That means favoring things like:
- fast parse-and-validate cycles
- stateless request/response patterns
- minimal context assembly
- impact analysis before edits
- validation wrappers around writes and patches
Those are not side features for us. They are the center of the design.
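To make the stateless point concrete, here is a hedged sketch of what one request/response cycle could look like. The request and response shapes are assumptions for illustration, not the real wire format. The property that matters: everything the handler needs arrives in the request, so there is no initialize handshake, no session, and replaying a request is always safe.

```python
# Illustrative only: a ZLP-style stateless handler. The request and
# response shapes here are assumptions, not the actual protocol.
import ast

def handle(request: dict) -> dict:
    # Everything needed arrives in the request: no session, no handshake.
    source = request["source"]
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return {"ok": False, "errors": [f"line {e.lineno}: {e.msg}"]}
    functions = [n.name for n in ast.walk(tree)
                 if isinstance(n, ast.FunctionDef)]
    return {"ok": True, "functions": functions}

# Statelessness means identical requests always get identical answers.
req = {"op": "parse", "source": "def add(a, b):\n    return a + b"}
assert handle(req) == handle(req)
```

Contrast that with an LSP server, where the answer to a request depends on the documents and configuration synced into the session beforehand.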
3. We care a lot about token efficiency
One of the recurring problems in AI coding is overfeeding context.
Full files become many full files. Many full files become a giant prompt. Then the model spends tokens rediscovering relationships it only needed in summary form.
ZLP is meant to help Blade avoid that trap.
Instead of assuming the answer is “load more code,” ZLP introduces the idea that the system should be able to ask for targeted structure or minimal supporting context for a symbol, file, or proposed change.
That is a better fit for AI systems where token budgets, latency, and precision all matter at once.
4. We wanted one server-side implementation that stays broadly language-agnostic
A classic language-server ecosystem tends to pull you toward deep per-language implementations and feature-parity battles.
ZLP takes a different path.
Its foundation is deliberately simple:
- use parsing infrastructure for structure
- use native language toolchains for authoritative validation
- keep the protocol surface general where possible
That lets us support multiple languages without turning the protocol into a sprawling collection of editor-specific behaviors.
How ZLP works, without going too far into the weeds
The short version is this:
Blade handles the conversation and file access. ZLP provides code intelligence and validation.
In the current implementation, ZLP lives server-side and exposes operations for things like:
- parsing
- structure extraction
- validation
- environment checks
- context assembly
- impact analysis
- symbol indexing and lookup
Under the hood, the architecture combines a few different ideas.
Fast structural understanding
For structural analysis, ZLP uses tree-sitter-style parsing to quickly understand the shape of code: functions, types, imports, variables, and similar symbols.
That gives the model a fast way to orient itself without needing a full heavyweight language-server session.
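As a self-contained sketch of what that structural summary looks like: tree-sitter itself needs per-language grammars, so Python's stdlib ast module stands in here. The output format is illustrative, not ZLP's actual schema.

```python
# Sketch of structure extraction. A real implementation would use
# tree-sitter grammars per language; Python's ast module stands in here,
# and the summary format is illustrative, not ZLP's actual schema.
import ast

def outline(source: str) -> dict[str, list[str]]:
    # Walk the syntax tree once, collecting the symbols a model needs
    # to orient itself: functions, classes, and imports.
    tree = ast.parse(source)
    summary = {"functions": [], "classes": [], "imports": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            summary["functions"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            summary["classes"].append(node.name)
        elif isinstance(node, ast.Import):
            summary["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            summary["imports"].append(node.module or "")
    return summary
```

A summary like this is a few dozen tokens, where the file itself might be thousands.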
Real validation from real toolchains
Syntax is not enough.
The more important question is whether the code is actually valid for the project it belongs to.
So ZLP is designed to run native validation tools where appropriate: compiler checks, lints, and other language-specific verification steps. That way the AI is not grading its own homework. It is getting feedback from the real toolchain.
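A minimal sketch of what "feedback from the real toolchain" means, with CPython's byte-compiler playing the role of the native tool (the function name is an assumption; a real deployment would shell out to the project's own compiler or linter):

```python
# Sketch: delegate validation to an actual toolchain instead of
# re-implementing language rules in the protocol layer. Here the
# "toolchain" is CPython's byte-compiler via the stdlib py_compile.
import os
import py_compile
import tempfile

def toolchain_validate(source: str) -> list[str]:
    # An empty list means the real compiler accepted the code.
    fd, path = tempfile.mkstemp(suffix=".py")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(source)
        py_compile.compile(path, doraise=True)
        return []
    except py_compile.PyCompileError as e:
        return [str(e)]
    finally:
        os.unlink(path)
```

The diagnostics come back verbatim from the tool, which is exactly the point: the model reads the compiler's verdict, not its own.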
Context that tries to stay small on purpose
One of the more AI-specific parts of the design is context assembly.
Instead of assuming the model should ingest entire files or broad swaths of the repo every time, ZLP can assemble narrower context around a symbol or file and keep the result within a token budget.
That matters because good AI systems are not just smart. They are selective.
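The idea can be sketched as follows. Everything here is hypothetical: the function name, the policy of "full source for the target, one-line stubs for neighbors," and the naive whitespace token count are all stand-ins for whatever the real assembler does.

```python
# Hypothetical context assembler: full source for the target symbol,
# one-line stubs for its neighbors, trimmed to a rough token budget.
# The whitespace "tokenizer" is a deliberate simplification.
import ast

def assemble_context(source: str, symbol: str, budget: int = 50) -> str:
    tree = ast.parse(source)
    target, stubs = None, []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            if node.name == symbol:
                # Keep the target symbol's full source text.
                target = ast.get_source_segment(source, node)
            elif isinstance(node, ast.FunctionDef):
                stubs.append(f"def {node.name}(...): ...")
            else:
                stubs.append(f"class {node.name}: ...")
    parts, used = [], 0
    for part in ([target] if target else []) + stubs:
        cost = len(part.split())
        if used + cost > budget:
            break  # stay inside the token budget
        parts.append(part)
        used += cost
    return "\n\n".join(parts)
```

The model sees everything it needs about the symbol it is editing, and only signatures for the rest.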
Validation-aware editing
We are especially interested in workflows where the model can preserve the type of edit it intended to make.
For example, there is a meaningful difference between:
- applying a targeted patch to an existing file
- writing or rewriting a full file
ZLP-backed validated mutation wrappers keep that distinction intact. A validated patch should still behave like a patch. A validated full write should still behave like a full write.
That may sound subtle, but it matters for safety, concurrency, and keeping edits faithful to the model’s actual intent.
The real philosophical difference
If you had to reduce the whole idea to one sentence, it would be this:
LSP helps humans write code. ZLP helps AI prove that its code changes make sense.
That is obviously a simplification, but it captures the center of gravity.
LSP is optimized for an interactive human typing loop.
ZLP is optimized for an AI generation-validation-revision loop.
Those are adjacent problems, not identical ones.
What makes ZLP interesting to a technical audience
We think ZLP is interesting not because it replaces existing standards, but because it treats AI coding as its own systems problem.
A few parts of that are especially compelling:
- It is grounded. The model can get real structural and validation feedback from the codebase instead of relying only on its prior knowledge.
- It is pragmatic. It does not pretend every problem needs a gigantic semantic engine running all the time.
- It is shaped by AI constraints. Token budgets, latency, edit semantics, and self-correction loops are first-class concerns.
- It aims to be language-agnostic where that genuinely pays off. The goal is not to build a separate miniature universe for every language if the protocol contract can stay clean.
That combination puts ZLP in a different category from traditional editor plumbing.
What ZLP still is not trying to do
It is worth repeating this, because it is easy to misunderstand.
We are not saying:
- “LSP is obsolete”
- “AI no longer needs editor tooling”
- “A protocol like this should absorb every language-specific capability”
- “The right answer is to expose every internal mechanism publicly”
ZLP is a focused layer inside Blade.
Its job is to make AI coding more reliable, more grounded, and more efficient.
That is a narrower ambition than “replace the language-server ecosystem,” and we think that focus is a strength.
Why this matters for Blade
Blade is not just trying to be a chat box that sometimes edits files.
The bigger goal is to give the assistant better operational footing inside real projects.
That means the assistant should be able to:
- inspect structure quickly
- validate work before writing
- understand likely ripple effects
- request the smallest useful slice of context
- recover from errors without wandering blindly
ZLP helps provide that footing.
It gives Blade a more purpose-built substrate for AI coding workflows than a generic “just stuff more files into the prompt” approach.
Where this goes next
We think AI-native developer infrastructure is still in its early days.
For years, the prevailing assumption across the ecosystem was: “Take human tooling, add an LLM.”
That gets you part of the way.
But if AI becomes a real participant in the development loop, then some of the underlying protocols should probably be rethought around that reality.
That is what ZLP represents for us.
Not a rejection of what came before.
A recognition that AI coding deserves some primitives of its own.
Final thought
ZLP is our answer to a simple question:
If you were designing code intelligence for AI from scratch today, what would you keep from the old world, and what would you rebuild for the new one?
We do not think the answer is “just rebuild LSP.”
We think the answer is something leaner, faster, more validation-centered, and more honest about what AI actually needs.
That is the space ZLP is meant to occupy.
And we are just getting started.