Vibe Coding has a reputation problem.
Depending on who you ask, it is either the future of software development or the reckless act of letting an AI spray code into a repository while the human nods along.
The truth is more interesting.
Vibe Coding works when the loop is fast, grounded, and shaped by good engineering taste. It fails when the loop becomes vibes only: no constraints, no verification, no memory of how similar work should be done, and no discipline around when to ask questions instead of making assumptions.
That is why Zaguán Blade has a Skills catalog.
Skills are not decorative prompt packs. They are reusable operating procedures for agentic coding. They give the model a way to temporarily load the right discipline for the task in front of it: how to debug, how to review, how to build a feature, how to design a UI, how to audit security, how to write a PRD, how to notice blindspots, how to avoid generic output.
In other words: skills are how we keep Vibe Coding from becoming improv theater.
What a Skill Is
A skill is a focused body of instructions the agent can load when the task calls for it.
The base system prompt gives the assistant its default behavior: be useful, inspect the codebase, avoid destructive actions, validate work, communicate clearly. That foundation matters, but it cannot contain every specialized workflow without becoming huge, stale, and contradictory.
Skills solve a different problem.
They are loaded on demand. A debugging task can pull in a root-cause workflow. A security audit can pull in a threat-modeling methodology. A UI task can pull in design constraints. A vague product request can pull in workflow discovery or PRD writing.
This gives us a clean separation:
- System prompt: stable behavioral foundation
- Tools: concrete capabilities in the workspace
- Skills: domain-specific judgment and process
That separation is important. If every bit of expertise lives in the always-on prompt, the agent carries too much context all the time. If every task is handled only by tools, the agent has capability without a method. Skills sit in the middle: light enough to load when needed, strong enough to change the quality of the work.
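That separation can be sketched in code. This is a minimal toy, not Blade's implementation: the `Skill` shape, the catalog entries, and `build_context` are all hypothetical names chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A reusable operating procedure the agent can load on demand."""
    name: str
    trigger: str       # short description of when this skill applies
    instructions: str  # the focused body of instructions

# A toy catalog. The names mirror this article; the contents are placeholders.
CATALOG = {
    "debugging": Skill(
        name="debugging",
        trigger="a bug report or failing behavior",
        instructions="Reproduce, isolate, and find the root cause before fixing.",
    ),
    "security-audit": Skill(
        name="security-audit",
        trigger="a request to assess security posture",
        instructions="Trace trust boundaries and attacker reachability first.",
    ),
}

def build_context(base_prompt: str, task_skills: list[str]) -> str:
    """Compose the context: the stable base prompt plus only the skills
    the current task needs, instead of one monolithic always-on prompt."""
    loaded = [CATALOG[s].instructions for s in task_skills if s in CATALOG]
    return "\n\n".join([base_prompt, *loaded])

# Only the debugging discipline is loaded for a debugging task;
# the security-audit methodology stays out of context.
ctx = build_context("Be useful. Validate work.", ["debugging"])
```

The point of the sketch is the last line: the agent carries its stable foundation everywhere, while specialized discipline enters the context only when the task calls for it.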
The Shape of the Catalog
The current catalog is intentionally broad, but it is not random.
It covers four kinds of work we see constantly in AI-assisted development.
1. Getting the Work Framed Correctly
This includes skills like:
- Planning Mode
- Workflow Discovery
- Feature Development
- PRD
- Disposable MVP
These skills exist because many failures happen before the first line of code.
The model rushes into implementation. The user’s request has multiple valid interpretations. The product goal is fuzzy. A prototype quietly becomes production code. A feature gets built around a solution before the actual problem is understood.
That is not a model intelligence issue. It is a workflow issue.
Planning and discovery skills slow the agent down at the exact moments where speed is dangerous. They teach it to ask the highest-value question, identify scope boundaries, understand the existing system, and get the implementation path into a shape where coding is actually the right next move.
The Disposable MVP skill is especially important in a Vibe Coding environment. Fast prototypes are one of the best uses of AI. But a prototype should be an instrument for learning, not a foundation poured in a hurry. That skill forces the distinction: build the first version to learn, then decide what gets kept, rewritten, or thrown away.
2. Doing Core Engineering Work Properly
This includes:
- Debugging
- Bug Fixing Discipline
- Testing
- Refactoring
- Code Review
- PR Review
- Code Refinement
- Git Operations
- Git Commit
These are the everyday rails.
AI coding does not remove the need for engineering discipline. It raises the cost of not having it. A model can make changes very quickly, which means it can also spread a bad assumption very quickly.
The debugging skill pushes toward root cause instead of symptom treatment. Bug Fixing Discipline rejects timeout band-aids, silent failures, and downstream workarounds. Testing gives the model a structured way to think about verification instead of sprinkling assertions around. Refactoring and Code Refinement focus on preserving behavior and subtracting complexity rather than performing cosmetic churn.
The review-oriented skills matter because Vibe Coding changes the review problem. When an AI produces a large diff, the human reviewer needs help finding the behavioral risk, not a summary of files changed. Code Review and PR Review are designed to prioritize bugs, regressions, missing tests, security issues, and performance risks.
The Git Operations and Git Commit skills are less glamorous, but they matter for the same reason. Agentic coding should leave a clean operational trail. A good commit is not just a save point. It is a claim about intent.
3. Handling Specialized Domains
This includes:
- Security
- Security Audit
- Error Handling
- Type Design
- API Design
- Performance
- Documentation
- Web Research
These skills exist because domain work has sharp edges.
Security is not “look for scary functions.” A good audit traces trust boundaries, attacker reachability, exploit value, and concrete mitigations. Error handling is not “add try/catch.” It is deciding what failures must be surfaced, logged, retried, rolled back, or allowed to stop the operation. Performance is not “use caching.” It is profiling, identifying the bottleneck, and proving the change helped.
Type Design and API Design are in the catalog because a lot of software quality is decided at the boundary. Bad types let illegal states leak everywhere. Bad APIs force every client to understand internal complexity. AI can generate code that compiles, but the shape of the contract still matters.
Web Research is also in this group because modern development depends on information that changes. Library behavior, platform rules, model capabilities, browser support, dependency advisories, pricing, and best practices drift. A skill that teaches the agent how to gather and grade sources is different from simply giving it network access.
4. Improving Product and Interface Quality
This includes:
- Frontend Design
- UI Anti-Generic Design
- Micro-Mobile PWA
Frontend work exposes a very specific AI failure mode: the output often looks polished from ten meters away and wrong from one meter away.
It has cards inside cards. Rounded pills everywhere. Generic dashboards. Fake metrics. Gradient-heavy surfaces. Hero sections where an internal tool needs a dense work surface. Mobile layouts that technically shrink but do not actually work at 320 pixels.
The Frontend Design skill helps the model build complete, usable interfaces. The UI Anti-Generic Design skill exists as a restraint layer against obvious AI defaults. Micro-Mobile PWA is even more specific: it treats small screens as the base product, not a breakpoint afterthought.
We included these because Vibe Coding is especially tempting in frontend work. You can see progress quickly. But visible progress is not the same as product quality. The catalog needs skills that encode taste, restraint, and ergonomic detail.
Why This Selection
The catalog is selected around failure modes, not around academic categories.
We did not start by asking, “What are all the possible things an AI assistant could know?”
We asked:
- Where do AI coding agents repeatedly make bad assumptions?
- Where does generic advice create the most damage?
- Where does a small amount of process dramatically improve outcomes?
- Where does expertise go stale if it is buried in a monolithic prompt?
- Where does the human need the agent to challenge the frame, not just execute it?
That led to a catalog with a few different layers.
There are workflow skills for framing and sequencing work.
There are discipline skills for common engineering tasks.
There are specialist skills for domains where shallow pattern matching fails.
There are taste and product skills for the parts of software where technically correct output can still feel wrong.
There are also skills like Blindspot Radar that do not fit neatly into classic development categories. That is intentional. One of the highest-value things an AI can do is surface what the developer did not think to ask. Unknown unknowns are not solved by a bigger autocomplete menu.
Skills Make Vibe Coding Less Fragile
The best version of Vibe Coding is not “the human stops thinking.”
It is closer to this:
- The human sets intent.
- The agent gathers context.
- The right skill shapes the workflow.
- The agent makes a focused change.
- Tools and tests ground the result.
- The human reviews decisions instead of micromanaging syntax.
Skills make that loop less fragile because they give the model reusable judgment at the moment it needs it.
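The loop above can be sketched as a single function. Every name here is a hypothetical stand-in for illustration, not a Blade API; the point is only the ordering of the steps.

```python
def coding_loop(intent, pick_skill, gather_context, make_change, verify, review):
    """One pass of the Vibe Coding loop described above."""
    context = gather_context(intent)              # the agent gathers context
    skill = pick_skill(intent, context)           # the right skill shapes the workflow
    change = make_change(intent, context, skill)  # the agent makes a focused change
    result = verify(change)                       # tools and tests ground the result
    return review(change, result)                 # the human reviews decisions, not syntax

# A dry run with trivial stand-ins for each stage.
out = coding_loop(
    "fix login bug",
    pick_skill=lambda intent, ctx: "debugging",
    gather_context=lambda intent: "relevant files and history",
    make_change=lambda intent, ctx, skill: f"focused change via {skill}",
    verify=lambda change: "tests pass",
    review=lambda change, result: (change, result),
)
```

Note what the skill does and does not do in this sketch: it does not replace any stage, it shapes how the change is made between context gathering and verification.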
Without skills, the model has to infer the workflow from the prompt and its general training. Sometimes that works. Sometimes it writes a feature before clarifying the product goal, treats a same-user desktop behavior as a critical RCE, adds broad error handling that hides the real failure, or generates a beautiful UI that does not match the product.
With skills, the model can load a more precise frame:
- This is a debugging problem; find root cause before fixing.
- This is a security audit; trace attacker reachability before assigning severity.
- This is a vague feature request; clarify scope before coding.
- This is a prototype; document shortcuts before they become architecture.
- This is UI work; avoid generic AI styling and design for the actual workflow.
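The list above is essentially a mapping from task classification to loaded frame. A minimal sketch, with hypothetical classifications and frame strings that echo the bullets rather than any real Blade internals:

```python
# Hypothetical task-kind -> frame mapping; the strings paraphrase the
# disciplines named in this article.
FRAMES = {
    "debugging": "Find the root cause before fixing.",
    "security-audit": "Trace attacker reachability before assigning severity.",
    "vague-feature": "Clarify scope before coding.",
    "prototype": "Document shortcuts before they become architecture.",
    "ui": "Avoid generic AI styling; design for the actual workflow.",
}

def frame_for(task_kind: str) -> str:
    """Return the precise frame for a recognized task, or fall back
    to the base system prompt's general behavior."""
    return FRAMES.get(task_kind, "Proceed with the base system prompt only.")
```

The fallback branch is the honest part of the sketch: when no skill matches, the agent is back to inferring the workflow from the prompt and its general training.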
That does not make the agent perfect. It makes the agent easier to steer.
Skills Are Also a Product Surface
There is another reason the catalog matters: skills are how a coding environment expresses what it values.
A tool that ships only “write code faster” workflows is making a statement. It is saying speed is the center of the product.
We care about speed, but not speed alone.
The Blade catalog says something more specific:
- Ask better questions before coding.
- Preserve behavior when changing code.
- Treat security findings as evidence-backed claims.
- Separate prototypes from production work.
- Make UI specific to the product, not the model’s defaults.
- Surface blindspots instead of only answering direct prompts.
- Prefer grounded validation over confident guesses.
That is the posture we want for AI coding.
The Catalog Will Keep Changing
The catalog is not finished.
Skills should evolve as the failure modes evolve. Some will get sharper. Some will split into narrower variants. Some may disappear if their lessons belong in the base prompt or in tools instead. New ones will appear when we see repeatable workflows that deserve first-class treatment.
That is part of the point.
A skill is a maintainable unit of coding judgment. It can be reviewed, versioned, tested against real work, and improved without rewriting the entire assistant.
For AI coding systems, that matters as much as any model upgrade. Better models help. Better tools help. But the work still needs process. It still needs taste. It still needs context-specific discipline.
Vibe Coding is powerful because it lets you move at the speed of intent.
Skills matter because they make sure intent is not the only thing in the room.