I've spent countless hours building and studying AI agent systems.
The best ones follow a clear set of principles.
Single LLM call → Workflow → Single Agent → Multi-Agent.
That's the progression. Move up only when you absolutely have to.
More tasks than you'd think can be solved with a single well-crafted prompt.
"Agent vs workflow" is a false dichotomy.
The answer is almost always: both.
- Build reliable workflows
- Wrap them as tools
- Let agents decide when to use them
Predictability where you need it, flexibility where you want it.
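Here's the shape of that pattern in Python. Everything below is illustrative: the workflow, the tool name, and the schema layout are assumptions, not a specific SDK.

```python
def summarize_and_tag(text: str) -> dict:
    """A deterministic workflow: every step is plain code you can test."""
    summary = text[:200]  # stand-in for a real summarization step
    tags = sorted({w for w in text.lower().split() if len(w) > 8})[:5]
    return {"summary": summary, "tags": tags}

# The same workflow, described as a tool so an agent can decide when to call it.
summarize_tool = {
    "name": "summarize_and_tag",
    "description": (
        "Summarize a document and extract up to five topic tags. "
        "Use for any request to condense or label a text."
    ),
    "input_schema": {
        "type": "object",
        "properties": {"text": {"type": "string", "description": "Document to process"}},
        "required": ["text"],
    },
}
```

The workflow stays testable on its own; the agent only decides *when* to invoke it.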
1. Workflows before agents
If you can define the steps in code, do it.
Why?
- More predictable
- Cheaper to run
- Easier to debug
Save the agent for when you genuinely can't predict the path.
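A minimal sketch of what "define the steps in code" means. `call_llm` is a hypothetical stand-in for whatever model client you use:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for your model client; swap in a real API call.
    return f"<model output for: {prompt[:40]}...>"

def draft_release_notes(diff: str) -> str:
    # Three fixed steps, in a fixed order: a workflow, not an agent.
    outline = call_llm(f"List the user-facing changes in this diff:\n{diff}")
    draft = call_llm(f"Write release notes from this outline:\n{outline}")
    return call_llm(f"Tighten the prose, keep it under 150 words:\n{draft}")
```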
2. Choose your reasoning style deliberately
Two main approaches:
- ReAct: Think one step at a time. Adapt as you go. Great for exploration.
- Plan-and-Execute: Plan upfront, then execute. Great for complex, well-defined tasks.
Pick based on the problem, not the hype.
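The difference is control flow. A rough sketch, with model and tool calls stubbed out and all names illustrative:

```python
def call_llm(prompt: str) -> str:
    return "<model output>"  # stand-in for a real model call

def run_tool(action: str) -> str:
    return "<tool result>"  # stand-in for real tool dispatch

def react(task: str, max_steps: int = 5) -> str:
    # Think -> act -> observe, deciding each step from the latest observation.
    history = task
    for _ in range(max_steps):
        thought = call_llm(f"Given progress so far, what next?\n{history}")
        if "DONE" in thought:
            break
        history += f"\nThought: {thought}\nObservation: {run_tool(thought)}"
    return history

def plan_and_execute(task: str) -> list[str]:
    # Commit to a plan upfront, then run it; no mid-course replanning.
    plan = call_llm(f"Write a numbered plan for: {task}").splitlines()
    return [call_llm(f"Execute this step: {step}") for step in plan]
```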
3. Multi-agent is expensive
We're talking roughly 15x the token cost of a single chat.
It's justified only when:
- Subtasks are truly parallelizable
- Context won't fit in one window
- The value of the task warrants the cost
Complexity should be earned, not assumed.
4. Invest in your tools
Good tool descriptions matter as much as good prompts.
- Write clear descriptions—the LLM reads them to decide when to act
- Keep parameters simple
- Test how the model actually uses your tools
Anthropic's own team said they spent more time on tools than prompts.
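What that looks like in practice. Both definitions below are made up; the point is the contrast in the description field:

```python
# A vague description leaves the model guessing when to call the tool.
bad_tool = {"name": "search", "description": "Searches stuff."}

# A clear one states what it does, when to use it, and what comes back.
good_tool = {
    "name": "search_orders",
    "description": (
        "Search customer orders by email address or order ID. "
        "Use when the user asks about an order's status, contents, or delivery. "
        "Returns up to 10 matching orders as JSON."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Email address or order ID"},
        },
        "required": ["query"],
    },
}
```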
5. Let agents recover from errors
When a tool call fails, feed the error back. Let the agent retry.
- Include the actual error message
- Suggest alternatives
- Set a max retry limit
LLMs adapt well to clear feedback. Use that.
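A minimal retry loop, assuming you supply the tool function and a hypothetical `revise_args` callback that asks the model for corrected arguments:

```python
def call_tool_with_retry(tool, args: dict, revise_args, max_retries: int = 3):
    """Feed failures back so the agent can fix its own call."""
    for _ in range(max_retries):
        try:
            return tool(**args)
        except Exception as err:
            # Include the actual error text and point at alternatives.
            feedback = (f"Tool call failed: {err}. "
                        "Fix the arguments or pick a different tool.")
            args = revise_args(feedback)  # model proposes corrected arguments
    raise RuntimeError(f"Gave up after {max_retries} attempts")
```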
6. Memory is hard. Start small.
LLMs are stateless. Memory must be engineered.
Start with conversation history. Add long-term memory only when you need persistence across sessions.
Beware: irrelevant memories add noise. Stale memories cause errors.
Don't over-engineer this until you have to.
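Starting small can be this small. A conversation buffer using only the standard library:

```python
from collections import deque

class ConversationMemory:
    """Short-term memory: keep recent turns, drop the oldest automatically."""
    def __init__(self, max_turns: int = 20):
        self.turns: deque = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_messages(self) -> list:
        return list(self.turns)  # pass straight into the next model call

memory = ConversationMemory()
memory.add("user", "What's our refund window?")
memory.add("assistant", "30 days from delivery.")
```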
7. Add human checkpoints for high-stakes actions
Not everything should be automated end-to-end.
- Notify for awareness
- Question for missing info
- Review before risky actions
Build trust before you remove the guardrails.
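A simple gate works. This sketch pauses for terminal approval before any tool on a hypothetical risky list:

```python
RISKY_TOOLS = {"issue_refund", "delete_record", "send_email"}  # illustrative names

def execute(tool_name: str, args: dict, run_tool):
    # Gate high-stakes actions behind an explicit human yes/no.
    if tool_name in RISKY_TOOLS:
        answer = input(f"Agent wants to run {tool_name}({args}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action rejected by human reviewer."  # fed back to the agent
    return run_tool(tool_name, args)
```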
8. Observe everything
Log decisions. Log tool calls. Log agent interactions.
You can't improve what you can't see.
When something breaks (and it will), your logs are the only thing standing between you and chaos.
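Even a thin wrapper pays for itself. A sketch using the standard library, assuming `run_tool` is your dispatch function:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def logged_tool_call(tool_name: str, args: dict, run_tool):
    """Record every tool call: what ran, with what, and how long it took."""
    start = time.monotonic()
    result = run_tool(tool_name, args)
    log.info(json.dumps({
        "event": "tool_call",
        "tool": tool_name,
        "args": args,
        "result_preview": str(result)[:200],
        "latency_s": round(time.monotonic() - start, 3),
    }))
    return result
```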
__
Building agents isn't rocket science—but it does require intention.
Start simple. Compose patterns. Add complexity only when the problem demands it.
The goal isn't to use agents. The goal is to solve the problem.
__
Enjoy this? ♻️ Repost it to your network and follow me for more.