If you’ve been following agentic AI systems like Codex-style coding agents, you’ve probably seen references to things like agents.md, skills.md, or “tool catalogs.” At first glance, these can look like extra ceremony — more files, more config, more abstraction.
They’re not.
They’re the reason agentic systems work at all.
This post explains what agents and skills really are, why people keep several of them around, and how this pattern is essentially RBAC (Role-Based Access Control) for AI. We’ll finish with a concrete, real-world example of telling an agent who it is, what it’s allowed to do, and assigning it a task.
The Core Idea (In Plain English)
Agentic AI systems separate three things:
- Reasoning – what the model thinks and plans
- Authority – what the agent is responsible for
- Capability – what actions the system allows it to take
The last two are where agents.md and skills.md come in.
What agents.md Really Represents
An agent is not just a prompt.
It’s a long-lived role with intent and boundaries.
An agent definition answers questions like:
- Who am I?
- What is my responsibility?
- What am I optimizing for?
- What am I explicitly not allowed to do?
- When should I act autonomously vs. ask for help?
In human terms, this is a job description.
“You are a backend engineer. You own this service. You do not deploy to production without approval.”
That’s what agents.md encodes for AI.
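To make this concrete, a minimal agents.md might look like the following. There is no single canonical schema, so treat the section names and wording here as illustrative, not as a required format:

```markdown
# Agent: Backend Engineer

## Responsibility
You own the orders service. Keep it correct, tested, and documented.

## Boundaries
- You do not deploy to production without approval.
- You do not modify infrastructure.

## Escalation
Ask a human before any schema migration or breaking API change.
```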
What skills.md Really Represents
Skills are explicit, callable capabilities.
They are not abstract abilities like “knows Python.”
They are concrete actions like:
- read files
- write files
- run tests
- commit code
- open a pull request
- deploy to staging
If an agent does not have a skill, it should not be able to perform that action — even if it can reason about it.
In human terms, skills are permissions and tools.
“You can read the repo and run tests, but you cannot deploy or modify infra.”
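The "no skill, no action" rule can be enforced mechanically rather than by convention. Here is a minimal sketch in Python; the `Agent` class and skill names are hypothetical, not any particular framework's API:

```python
class SkillNotGrantedError(PermissionError):
    """Raised when an agent attempts an action it has no skill for."""

class Agent:
    def __init__(self, name, skills):
        self.name = name
        # The skill set is the agent's entire action surface.
        self.skills = frozenset(skills)

    def invoke(self, skill):
        # Reasoning about an action is free; performing it is gated.
        if skill not in self.skills:
            raise SkillNotGrantedError(f"{self.name} has no '{skill}' skill")
        return f"{skill} executed"

reviewer = Agent("reviewer", {"read_files", "run_tests"})
reviewer.invoke("run_tests")   # allowed
# reviewer.invoke("deploy")    # raises SkillNotGrantedError
```

The agent can reason about deployment all it wants; the `invoke` gate means the reasoning never becomes an action.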
Why People Keep Multiple Agents and Skill Sets
Single-agent systems don’t scale.
They become:
- unsafe (too much power)
- unpredictable (conflicting goals)
- impossible to audit
So real systems split responsibilities.
Multiple agents (roles)
- Planner agent
- Coder agent
- Reviewer agent
- Ops agent
Each has different authority.
Multiple skill bundles (permissions)
- read-only
- codegen
- git
- CI
- deploy
Agents are granted only the skills they need.
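Bundles compose naturally: define each named permission set once, then grant every agent the union of the bundles it needs. A sketch, assuming hypothetical bundle and agent names:

```python
# Named skill bundles, analogous to permission groups in RBAC.
BUNDLES = {
    "read_only": {"read_files", "search_repo"},
    "codegen":   {"write_files"},
    "git":       {"commit_code", "open_pull_request"},
    "ci":        {"run_tests"},
    "deploy":    {"deploy_staging"},
}

def grant(*bundle_names):
    """Union the named bundles into one flat skill set."""
    skills = set()
    for name in bundle_names:
        skills |= BUNDLES[name]
    return frozenset(skills)

AGENTS = {
    "planner":  grant("read_only"),
    "coder":    grant("read_only", "codegen", "git", "ci"),
    "reviewer": grant("read_only", "ci"),
    "ops":      grant("read_only", "ci", "deploy"),
}

# Only the ops agent can reach staging.
assert "deploy_staging" in AGENTS["ops"]
assert "deploy_staging" not in AGENTS["coder"]
```

Changing what "coder" is allowed to do becomes a one-line edit to a data structure, not a rewrite of prompts.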
This is exactly how RBAC works in traditional systems — just applied to AI.
This Is RBAC for AI (Literally)
If this feels familiar, that’s because it is.
| RBAC Concept | Agentic AI Equivalent |
|---|---|
| Role | Agent |
| Permission | Skill |
| Policy | Agent definition |
| Access boundary | Skill assignment |
Agentic systems are rediscovering a lesson software engineering learned decades ago:
Reason freely. Act narrowly.
A Concrete Example (End-to-End)
Let’s say you want an AI agent to update documentation for a repo.
Agent definition (agents.md)
You are a Documentation Agent.
Your responsibility is to improve clarity and correctness of documentation.
You do not modify application code.
You do not deploy or change infrastructure.
You should ask the user before making large structural changes.
Skills assigned (skills.md subset)
- read_files
- write_files
- search_repo
- open_pull_request
No deploy. No CI control. No database access.
The instruction you give the system
You are the Documentation Agent.
You may use read_files, search_repo, and write_files.
Please review the README and API docs for outdated references to `main` and update them to reflect the `testing` branch.
Open a pull request with your changes and summarize what you updated.
What happens next:
- The agent reasons about the task
- It chooses which skills to invoke
- It reads the files
- It makes scoped edits
- It opens a PR
- It reports back
At no point can it “decide” to deploy, modify infra, or touch unrelated systems — because it literally does not have those skills.
Why This Pattern Keeps Winning
This separation gives you:
- Safety – agents can’t do what they aren’t permitted to do
- Clarity – you can explain why an agent behaved a certain way
- Auditability – actions are explicit and logged
- Composability – agents and skills evolve independently
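Because every action flows through an explicit skill call, auditability falls out almost for free: wrap the invocation path and record who did what. A minimal sketch, with hypothetical agent and skill names:

```python
audit_log = []

def audited(agent_name, skill_fn):
    """Wrap a skill so every invocation is recorded before it runs."""
    def wrapper(*args, **kwargs):
        audit_log.append((agent_name, skill_fn.__name__))
        return skill_fn(*args, **kwargs)
    return wrapper

def read_files(path):
    return f"contents of {path}"

doc_read = audited("documentation_agent", read_files)
doc_read("README.md")
doc_read("docs/api.md")

# The log now explains exactly which agent took which action.
assert audit_log == [
    ("documentation_agent", "read_files"),
    ("documentation_agent", "read_files"),
]
```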
And crucially:
The model can be powerful without being dangerous.
Final Takeaway
agents.md and skills.md are not documentation fluff.
They are the control plane for agentic AI.
- Agents define intent and responsibility
- Skills define permitted action
- Together, they form RBAC for AI
Once you see this, modern agentic systems stop looking mysterious — they start looking like well-designed software.
And that’s exactly the point.