RUN AI LOCALLY. OR USE CLOUD MODELS.
Your code stays on your machine. No API key required. Switch to cloud when you need speed.
Built for long sessions without losing context.
Use Ollama locally or connect to any OpenAI-compatible endpoint.
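Since both local Ollama and cloud providers speak the same OpenAI-style API, switching is just a base-URL change. A minimal sketch in Go (the model name `llama3` and prompt are placeholders; the `/v1` path is Ollama's documented OpenAI-compatible endpoint, and this generic client is not Blade's internal code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest is the minimal OpenAI-style chat payload that both
// Ollama (http://localhost:11434/v1) and cloud providers accept.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// newChatRequest targets any OpenAI-compatible base URL, local or remote.
func newChatRequest(baseURL, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Local: Ollama's OpenAI-compatible endpoint. Swap the base URL
	// (and add an Authorization header) to point at a cloud provider.
	req, _ := newChatRequest("http://localhost:11434/v1", "llama3", "Explain this diff.")
	fmt.Println(req.Method, req.URL) // POST http://localhost:11434/v1/chat/completions
}
```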
Built for Control
Not Bloat
Zaguán Blade was built because modern AI editors feel bloated, opaque, and cloud-dependent.
Blade separates the UI from the AI daemon, keeps resource usage low, and respects Git as the source of truth. It's built for engineers who care about control and hate silent failures.
Used daily to develop Zaguán Blade and zcoderd. Tested on real production projects.
THE LIGHTWEIGHT DIFFERENCE
Engineers trust metrics. Here is what idle resource usage looks like:
| Tool | Idle RAM |
|---|---|
| Zaguán Blade | ~320 MB |
| VSCode + AI | ~1.9 GB |
| Windsurf | ~2.8 GB |
NOT FOR EVERYONE
- → If you want a VSCode clone, this isn't it.
- → If you need heavy UI plugins, this isn't it.
- → If you want lightweight, controlled AI workflows — this is for you.
Engineers Think
in Workflows
WHAT IT'S ACTUALLY GOOD AT
- + Refactoring across multiple files.
- + Updating changelogs and build configs.
- + Reviewing AI patches safely with diff control.
- + Running locally with your own models.
TYPICAL FLOW IN BLADE
- Ask for a change.
- AI calls project tools.
- Diff appears.
- You review.
- Accept or revert via Git.
No magic behind the scenes. Just predictable operations and complete transparency.
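The review step above can be sketched as a tiny decision: nothing touches your files until you accept. The `Patch` type and `applyIfAccepted` helper below are hypothetical illustrations, not zcoderd internals:

```go
package main

import "fmt"

// Patch is a hypothetical stand-in for an AI-proposed change;
// Blade presents it as a diff before anything is written.
type Patch struct {
	File    string
	Updated string
}

// applyIfAccepted mirrors the flow above: the patch is applied only
// on an explicit accept, otherwise the original content stands,
// which is exactly what a Git revert would restore anyway.
func applyIfAccepted(current string, p Patch, accepted bool) string {
	if accepted {
		return p.Updated
	}
	return current // rejected: the working tree stays as Git knows it
}

func main() {
	original := "version = 1"
	patch := Patch{File: "build.cfg", Updated: "version = 2"}

	fmt.Println(applyIfAccepted(original, patch, true))  // accepted → "version = 2"
	fmt.Println(applyIfAccepted(original, patch, false)) // rejected → "version = 1"
}
```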
Zaguán Blade (UI)
Tauri v2 + Vite + React
User interface only.
Lightning fast.
zcoderd (AI daemon)
Go server
State management
Tool execution & AI
Model Provider
Local or remote.
Ollama, Anthropic, OpenAI, etc.
You choose.