Agentic Systems

AI systems where software agents perceive their environment, make decisions, and take actions autonomously — within defined boundaries — to accomplish goals.


What is it?

Most AI you interact with today is reactive: you ask a question, it gives an answer. A chatbot waits for your prompt, generates a response, and stops. An agentic system goes further — it can plan, decide, and act across multiple steps without waiting for human input at every turn.1

The word “agentic” comes from agency — the capacity to act independently. An agentic AI system doesn’t just respond to a single request; it breaks a goal into sub-tasks, selects tools, executes actions, evaluates results, and adjusts its approach — all within boundaries set by its designer.2

This is not science fiction autonomy. Today’s agentic systems are practical and bounded: a coding agent that writes, tests, and debugs code across multiple files; a research agent that searches the web, synthesises findings, and produces a report; a customer service agent that accesses databases, processes refunds, and escalates edge cases to humans.3

The key insight is that agency is a spectrum, not a binary.4 A simple chatbot has almost no agency. A coding assistant with tool access has some. A fully autonomous agent managing a deployment pipeline has a lot. Understanding where a system sits on this spectrum — and where it should sit — is the core challenge of designing agentic systems.

In plain terms

A chatbot is like a reference librarian — you ask a question, they answer it, and they wait for your next question. An agentic system is like a research assistant — you give them a goal (“find me everything on this topic and write a summary”), and they go off, make decisions about where to look, what to include, and how to structure it, coming back with a finished result.


How does it work?

1. The perception-reasoning-action loop

Every agentic system follows a fundamental cycle: perceive the current state, reason about what to do next, and act on that decision. Then repeat.1

A coding agent perceives the current error message, reasons that the bug is in a specific function, acts by editing the code, perceives the test results, and continues until the tests pass. This loop is what separates an agent from a one-shot response.

Think of it like...

A thermostat is a simple agent. It perceives the temperature (sensor), reasons by comparing it to the target (threshold logic), and acts by turning the heating on or off (actuator). Repeat forever. An AI agent does the same thing, but with far more complex perception (reading documents, APIs, databases) and reasoning (language models, planning algorithms).
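
The thermostat version of the loop fits in a few lines. This is a minimal sketch, not any real framework's API; the names (`Thermostat`, `step`) are illustrative:

```python
# A minimal perceive-reason-act loop, using the thermostat example.
class Thermostat:
    def __init__(self, target: float):
        self.target = target      # the goal the agent works toward
        self.heating_on = False   # the actuator's current state

    def step(self, temperature: float) -> bool:
        # Perceive: the current temperature is the observation.
        # Reason: compare the observation to the target.
        # Act: switch the heater and report the new state.
        self.heating_on = temperature < self.target
        return self.heating_on

agent = Thermostat(target=20.0)
agent.step(17.5)  # cold room: heating turns on
agent.step(21.0)  # warm room: heating turns off
```

An AI agent replaces the comparison with a language model's reasoning and the sensor with documents, APIs, and tool outputs, but the cycle is identical.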

2. Tool use

An agent without tools is just a language model generating text. Tools are what give agents real-world impact: calling APIs, querying databases, reading files, executing code, sending emails.2

Tool use follows a pattern: the agent decides which tool to call, formats the input, receives the output, and incorporates it into its reasoning. This is sometimes called the ReAct pattern (Reasoning + Acting) — the agent alternates between thinking about what to do and doing it.5

Think of it like...

A chef (the agent) has a kitchen full of equipment (tools). The chef decides what to cook (reasoning), reaches for the right knife (tool selection), chops the vegetables (execution), tastes the result (evaluation), and adjusts seasoning (next action). The chef’s skill is not just in knowing recipes — it’s in knowing which tool to use when.
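
The reason/act alternation can be sketched as a scripted trace. In a real ReAct agent the "thoughts" come from a language model; here they are hard-coded so the pattern is visible, and the tools and trace are illustrative assumptions:

```python
# Toy tools the agent can choose between.
def search(query: str) -> str:
    return f"results for '{query}'"

def calculate(expr: str) -> str:
    return str(eval(expr))  # toy only: never eval untrusted input

TOOLS = {"search": search, "calculate": calculate}

def run_trace(trace):
    """Alternate between a (scripted) thought and a tool call."""
    observations = []
    for thought, tool, tool_input in trace:
        # Reason: the thought explains why this tool was chosen.
        # Act: call the tool; the observation feeds the next step.
        observations.append(TOOLS[tool](tool_input))
    return observations

run_trace([
    ("Find current pricing", "search", "widget prices"),
    ("Total the two quotes", "calculate", "19 + 24"),
])
```

The interesting design question is the dispatch step: the agent must name a tool, produce valid input for it, and then actually read the observation rather than hallucinate one.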

Concept to explore

See llm-pipelines for how language models are connected to tools and data sources in production systems.

3. Planning and decomposition

Complex goals require planning — breaking a big task into smaller, manageable steps. A research agent given “analyse the competitive landscape” might decompose this into: identify competitors, gather pricing data, compare features, synthesise findings, write report.3

Planning is where agentic systems are most fragile. A bad plan leads to wasted effort or wrong conclusions. Good agentic design includes checkpoints where the system evaluates its progress and can revise its plan.
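
One way to sketch plan-plus-checkpoints is a step queue that can be revised mid-run. The step names and the "insert a retry" recovery rule are illustrative assumptions, not a specific planner:

```python
def run_plan(plan, execute, check):
    """Execute steps in order; revise the plan when a checkpoint fails."""
    results = []
    queue = list(plan)
    while queue:
        step = queue.pop(0)
        result = execute(step)
        if not check(step, result):
            # Checkpoint failed: put a recovery step at the front
            # of the queue instead of blindly continuing.
            queue.insert(0, f"retry:{step}")
            continue
        results.append(result)
    return results

done = run_plan(
    ["identify competitors", "gather pricing", "write report"],
    execute=lambda step: f"done: {step}",
    check=lambda step, result: step != "gather pricing",  # force one revision
)
```

The checkpoint is the important part: without `check`, a bad early step silently poisons everything downstream.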

4. Boundaries and guardrails

Autonomy without boundaries is dangerous. Agentic systems need guardrails — constraints that prevent the agent from taking harmful, expensive, or irreversible actions.4

Common guardrails include:

  • Scope limits: The agent can only access certain tools and data sources
  • Approval gates: High-stakes actions (spending money, deleting data, contacting customers) require human confirmation
  • Budget constraints: Limits on API calls, compute time, or financial spending
  • Output validation: Checking the agent’s work before it’s delivered
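
Two of these guardrails, a budget constraint and an approval gate, can be sketched as a wrapper around tool calls. The tool names, the `approve` callback, and the `GuardedTools` class are all illustrative assumptions, not a specific framework's API:

```python
class GuardrailError(Exception):
    """Raised when an action violates a guardrail."""

class GuardedTools:
    def __init__(self, max_calls, approve, high_stakes=("refund", "delete")):
        self.calls_left = max_calls      # budget constraint
        self.approve = approve           # human-approval callback
        self.high_stakes = set(high_stakes)

    def call(self, tool: str, arg: str) -> str:
        if self.calls_left <= 0:
            raise GuardrailError("budget exhausted")
        if tool in self.high_stakes and not self.approve(tool, arg):
            raise GuardrailError(f"approval denied for {tool}")
        self.calls_left -= 1
        return f"{tool}({arg}) executed"

tools = GuardedTools(max_calls=10, approve=lambda tool, arg: False)
tools.call("search", "pricing")   # routine call passes
# tools.call("refund", "#123")    # would raise: approval denied
```

Note that the guardrail lives outside the agent's reasoning: the agent can ask for anything, but the wrapper decides what actually executes.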

Key distinction

Autonomy is how much the agent can do without asking. Authority is what the agent is allowed to do. Good agentic design keeps autonomy high within clearly defined authority. An agent should be free to choose how to accomplish a task, but constrained in what tasks it can attempt.

5. Multi-agent systems

As tasks grow in complexity, a single agent may not be enough. Multi-agent systems use multiple specialised agents that collaborate — one does research, another writes code, a third reviews the output.3

Coordination between agents introduces its own challenges: Who decides the plan? How do agents share information? What happens when agents disagree? This is where orchestration becomes critical.
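
The simplest coordination pattern is a fixed pipeline: an orchestrator passes each specialist's output to the next. The researcher/writer/reviewer roles mirror the example above; everything here is an illustrative sketch, with each agent reduced to a plain function:

```python
# Specialised agents as functions (a real system would back each
# with its own model, tools, and instructions).
def researcher(task: str) -> str:
    return f"notes on {task}"

def writer(notes: str) -> str:
    return f"draft based on {notes}"

def reviewer(draft: str) -> str:
    return f"approved: {draft}"

def orchestrate(task: str) -> str:
    # A fixed pipeline: research, then write, then review.
    # Real orchestrators route dynamically and handle disagreement.
    return reviewer(writer(researcher(task)))

orchestrate("competitive landscape")
```

Even this toy version makes the open questions concrete: the pipeline order is the plan, the return values are the shared information, and there is no mechanism yet for the reviewer to send a draft back.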

Concept to explore

See orchestration for how multiple agents are coordinated, scheduled, and managed in production systems.


Why do we use it?

Key reasons

1. Handling complexity. Some tasks require dozens of decisions across multiple steps. A human in the loop at every step creates bottlenecks. Agentic systems handle the routine decisions autonomously, escalating only the exceptions.2

2. Consistency and tirelessness. An agent follows its instructions the same way every time — no fatigue, no Friday-afternoon shortcuts. For repetitive, multi-step workflows, this consistency is transformative.

3. Speed. An agent can execute a multi-step research task in minutes that would take a human hours. Not because it’s smarter, but because it can call tools, process results, and move to the next step without pausing.3

4. Scalability. One well-designed agent can handle thousands of concurrent tasks. Scaling human teams for the same work is orders of magnitude more expensive.


When do we use it?

  • When a task involves multiple steps that depend on each other (research, analyse, synthesise, report)
  • When the task requires tool use — calling APIs, querying databases, processing files
  • When human involvement at every step is a bottleneck but full automation without any oversight is too risky
  • When you need to scale a workflow that currently requires skilled human judgement at each step
  • When building systems that must adapt their approach based on intermediate results

Rule of thumb

If the task can be completed in a single prompt-response exchange, you don’t need an agent — a chatbot is fine. If the task requires multiple steps, tool access, and decisions along the way, you’re in agent territory.


How can I think about it?

The intern with a checklist

Imagine you hire a capable intern and give them a detailed checklist for a task: “Research these five companies, fill in this spreadsheet template, flag anything unusual, and send me the result.”

  • The checklist = the agent’s instructions and plan
  • The intern = the language model doing the reasoning
  • Their laptop and access credentials = the tools (APIs, databases, file system)
  • “Flag anything unusual” = a guardrail (escalate edge cases to a human)
  • The spreadsheet template = the expected output format
  • Your review of their work = output validation

A good intern follows the checklist but uses judgement on details. A good agent does the same — it follows instructions but makes reasonable micro-decisions without asking about every one. And just like an intern, an agent needs clear instructions, appropriate access, and someone reviewing the output.

The autopilot analogy

An airplane’s autopilot is an agentic system. It perceives (altitude, speed, heading via instruments), reasons (compare current state to flight plan), and acts (adjust throttle, ailerons, rudder) — continuously, without pilot intervention for routine flight.

  • The flight plan = the agent’s goal and constraints
  • Instruments = perception (data inputs, tool outputs)
  • Control surfaces = tools the agent can use to affect the world
  • Altitude and speed limits = guardrails and boundaries
  • The pilot taking over for landing = human-in-the-loop for high-stakes moments

Autopilot handles most of a routine flight autonomously. But it doesn’t decide the destination, and the pilot takes over when conditions are unusual. The best agentic systems follow this model: high autonomy for routine work, human control for critical decisions.


Concepts to explore next

| Concept | What it covers | Status |
| --- | --- | --- |
| llm-pipelines | How language models are connected to tools and data in production | stub |
| orchestration | Coordinating multiple agents and managing complex workflows | stub |
| autonomy-spectrum | The chatbot-to-autonomous-agent progression and how to choose the right level | complete |
| agent-memory | How agentic systems remember across turns and sessions | complete |
| multi-agent-systems | Architectures where multiple specialised agents collaborate on complex tasks | complete |

Some cards don't exist yet

A broken link is a placeholder for future learning, not an error.


Where this concept fits

Position in the knowledge graph

graph TD
    AIML[AI and Machine Learning] --> KE[Knowledge Engineering]
    AIML --> AS[Agentic Systems]
    AS --> LLM[LLM Pipelines]
    AS --> ORCH[Orchestration]
    AS --> AUS[Autonomy Spectrum]
    AS --> AMEM[Agent Memory]
    AS --> TU[Tool Use]
    AS --> GR[Guardrails]
    AS --> MAS[Multi-Agent Systems]
    style AS fill:#4a9ede,color:#fff

Related concepts:

  • software-architecture — agentic systems require architectural decisions about how agents interact with tools, data, and each other
  • iterative-development — agentic systems are built iteratively; you start with a simple agent and add capabilities as you validate each layer
  • knowledge-engineering — structured knowledge makes agents more reliable by grounding their reasoning in verified facts rather than probabilistic generation

Footnotes

  1. MIT Sloan. (2026). Agentic AI, Explained. MIT Sloan Management Review.

  2. DigitalOcean. (2024). What is Agentic AI? Beyond Chatbots and Simple Automation. DigitalOcean.

  3. Fello AI. (2026). How AI Agents Actually Work: The Complete Technical Guide. Fello AI.

  4. Falconer, S. (2025). The Practical Guide to the Levels of AI Agent Autonomy. Medium.

  5. Agent Wiki. (2026). Agent Design Patterns. Agent Wiki.