Artificial Intelligence is evolving beyond simple question-answering systems into autonomous agents capable of complex reasoning, tool usage, and multi-step problem solving. This shift toward “agentic AI” represents a fundamental change in how we build AI applications. In this blog series, I’ll guide you through building your first AI agent with modern frameworks like Strands and LangGraph, deploying it to the web with Amazon Bedrock AgentCore, and extending its capabilities with RAG knowledge bases and custom MCP tools. Strap in and join me on this multi-part blog series!
TL;DR: This series teaches you to build AI agents using Strands and LangGraph, deploy them to Amazon Bedrock AgentCore, and extend them with RAG and MCP tools. Post 1 covers the fundamentals and the framework landscape. Code examples are on GitHub.
Before diving into agents, let’s briefly cover the foundation: Large Language Models (LLMs). These are AI models trained on vast amounts of text data that can understand and generate human-like text. Examples include OpenAI’s GPT models, Anthropic’s Claude, and Meta’s open-weight Llama family.
LLMs are powerful for answering questions and generating text, but they have limitations: they can’t access real-time data, perform calculations reliably, or take actions in the real world. This is where agentic AI comes in.
Agentic AI refers to AI systems that can autonomously plan, reason, and take actions to achieve goals. Unlike traditional chatbots that simply respond to prompts, agents can:

- Break a goal down into a plan of smaller steps
- Use tools to gather information or take actions
- Observe results and adjust their approach
- Iterate until the goal is achieved
Think of an agent as an AI assistant that doesn’t just answer questions but can actually help you accomplish tasks by orchestrating multiple steps and tools.
Instead of just hallucinating an answer or telling you that it cannot assist, an agent can check a weather API, analyze your calendar, and suggest whether you should reschedule your outdoor meeting.
At the heart of every agent is the agent loop - a cycle of reasoning, action, and observation:

1. **Reason** - the LLM analyzes the goal and decides on the next step
2. **Act** - the agent invokes a tool or produces an answer
3. **Observe** - the tool’s result is fed back into the context
4. **Repeat** - until the goal is met or a step limit is reached
This loop enables agents to handle complex, multi-step workflows that would be impossible with a single LLM call.
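The loop above can be sketched in a few lines of plain Python. This is a deliberately minimal illustration, not any framework’s API: the `fake_model` function is a hypothetical stand-in for the LLM’s decision-making, and `get_weather` is a toy tool.

```python
# A minimal sketch of the agent loop: reason -> act -> observe -> repeat.
# "fake_model" is a hypothetical stand-in; a real agent delegates this
# decision to an LLM via function calling.

def get_weather(city: str) -> str:
    """Toy tool; a real one would call a weather API."""
    return f"Sunny and 24C in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(goal: str, observations: list[str]) -> dict:
    """Stand-in for an LLM: returns either a tool call or a final answer."""
    if not observations:
        return {"action": "get_weather", "args": {"city": "Cape Town"}}
    return {"final": f"Based on '{observations[-1]}', keep your outdoor meeting."}

def agent_loop(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        decision = fake_model(goal, observations)   # 1. reason
        if "final" in decision:
            return decision["final"]                # goal achieved
        tool = TOOLS[decision["action"]]
        result = tool(**decision["args"])           # 2. act
        observations.append(result)                 # 3. observe
    return "Step limit reached without an answer."

print(agent_loop("Should I reschedule my outdoor meeting?"))
```

A real loop adds error handling, conversation history, and token budgets, but the reason/act/observe skeleton stays the same.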
Tools extend an agent’s capabilities beyond text generation. An agent can use tools to:

- Search the web or query APIs for up-to-date information
- Run calculations or execute code
- Read and write files or databases
- Trigger actions in external systems such as email or calendars
Modern LLMs support function calling, allowing them to intelligently select and invoke the right tools based on the task at hand. We’ll explore building agents with tools in the next post of this series.
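The function-calling flow can be sketched in plain Python. The schema shape below loosely mirrors common provider formats but is simplified, and the `calculate` tool is a toy (never `eval` untrusted input in real code):

```python
# Sketch of function calling: tools are described to the model as JSON
# schemas; the model replies with a tool name plus arguments, and the
# agent executes the matching Python function.

import json

def calculate(expression: str) -> str:
    """Toy calculator tool. eval() is used only for this illustration."""
    return str(eval(expression, {"__builtins__": {}}))

TOOL_SCHEMAS = [
    {
        "name": "calculate",
        "description": "Evaluate an arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    }
]

TOOL_FUNCTIONS = {"calculate": calculate}

def handle_tool_call(model_reply: str) -> str:
    """The model returns a tool call as JSON; look it up and run it."""
    call = json.loads(model_reply)
    func = TOOL_FUNCTIONS[call["name"]]
    return func(**call["arguments"])

# Pretend the model decided to call the calculator:
reply = '{"name": "calculate", "arguments": {"expression": "19 * 4 + 2"}}'
print(handle_tool_call(reply))  # → 78
```

Frameworks like Strands and LangGraph generate the schemas from your function signatures and handle this dispatch for you; the point here is just that "function calling" is schema in, structured tool call out.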
Building agents from scratch is complex. Fortunately, a rich ecosystem of frameworks has emerged to abstract away the boilerplate and provide battle-tested patterns. While this series focuses on Strands and LangGraph, it’s worth knowing about some of the options available to you:
- **Strands (AWS)** - A production-ready framework with minimal boilerplate, native Bedrock integration, and built-in deployment utilities for AWS. Emphasizes simplicity and rapid development.
- **LangGraph (LangChain)** - Graph-based agent workflows with fine-grained control over state transitions and decision flows. Part of the extensive LangChain ecosystem with broad community support.
- **CrewAI** - A role-playing multi-agent framework where agents work together like a crew, each with specific roles and responsibilities. Ideal for collaborative task execution.
- **AutoGen (Microsoft)** - Enables building multi-agent conversations with customizable and conversable agents. Strong focus on agent-to-agent communication patterns.
- **Semantic Kernel (Microsoft)** - An SDK that integrates LLMs with conventional programming languages, emphasizing enterprise integration and plugin architecture.
- **LlamaIndex** - Originally focused on data indexing and retrieval, now includes powerful agentic capabilities with data agents that can reason over structured and unstructured data.
- **Swarm (OpenAI)** - A lightweight experimental framework for multi-agent orchestration, emphasizing simplicity and handoffs between agents.
There are too many frameworks to list them all (Pydantic AI, Smolagents, …) and each framework has its strengths. The choice depends on your specific use case, existing tech stack, and architectural preferences. For this series, we’ll focus on Strands and LangGraph due to their strong AWS integration and production-ready features.
Strands is a Python framework designed for building production-ready AI agents with minimal code. Key features include:

- A model-first approach: describe the goal and let the agent reason about the steps
- Native Amazon Bedrock integration, with support for other providers such as Ollama
- Built-in MCP support and a large library of prebuilt tools
- Deployment utilities for AWS
Strands is ideal if you want to quickly build and deploy agents without getting bogged down in implementation details. This was my first framework and I still enjoy its simplicity.
Try it yourself: Run the example code to see real agents in action with both Strands and LangGraph.
```python
# Simple Strands agent with Ollama
from strands import Agent
from strands.models.ollama import OllamaModel

model = OllamaModel(host="http://localhost:11434", model_id="llama3.2")
agent = Agent(model=model)

response = agent("Tell me about Cape Town")
print(response)
```
LangGraph, part of the LangChain ecosystem, provides a graph-based approach to building agents:

- Workflows are modeled as graphs with explicit nodes (steps) and edges (transitions)
- Shared state flows through the graph and drives decision-making
- Fine-grained control over branching, loops, and multi-agent coordination
LangGraph offers more flexibility and control, making it suitable for complex, custom agent architectures.
LangChain vs LangGraph: LangChain is a framework for building LLM applications with chains and components. LangGraph extends this with stateful, graph-based workflows specifically designed for agentic systems. Think of LangChain as the foundation and LangGraph as the specialized tool for building agents with complex decision flows.
```python
# Simple LangGraph agent with Ollama
from langchain.agents import create_agent
from langchain_ollama import ChatOllama

model = ChatOllama(model="llama3.2", base_url="http://localhost:11434")
agent = create_agent(model, tools=[])

response = agent.invoke({"messages": [("user", "Tell me about Cape Town")]})
print(response["messages"][-1].content)
```
| Feature | Strands | LangGraph |
|---|---|---|
| Learning Curve | Easiest - simple API, model-first approach | Steeper - requires graph thinking |
| Architecture | Model-first: describe goals, agent reasons | Graph-based: explicit nodes and edges |
| AWS Integration | Native Bedrock integration, AWS-optimized | Works with AWS but not native |
| Tool Integration | Built-in MCP support, massive tool library | LangChain ecosystem integration |
| Multi-Agent | Orchestration, swarm, workflow patterns | Graph-based coordination |
| Production Ready | AWS-backed, used internally at Amazon | Battle-tested, widely deployed |
| Deployment | Local, AgentCore, self-hosted | Local, AgentCore, LangGraph Cloud, self-hosted |
| Code Style | Minimal boilerplate, declarative | More explicit control, imperative |
| Best For | Rapid development, portable deployments | Complex workflows, fine-grained control |
Once you’ve built an agent, you need somewhere to run it. You can run agents anywhere, but serverless (💙) platforms like Amazon Bedrock AgentCore eliminate infrastructure management:

- No servers to provision, patch, or scale - the platform handles it
- Isolated sessions that scale automatically with demand
- Built-in supporting services for memory, identity, and observability
AgentCore bridges the gap between development and production, providing enterprise-grade infrastructure for agentic applications.
RAG (Retrieval-Augmented Generation) enhances agents with domain-specific knowledge by:

- **Indexing** - your documents are split into chunks and stored as embeddings in a vector store
- **Retrieving** - the most relevant chunks are fetched for each query
- **Augmenting** - the retrieved context is added to the prompt before the LLM generates its answer
This allows agents to answer questions about your specific data without fine-tuning models, which is costly and compute-intensive. And search is no longer simple string matching as in the early days of search engines: agents can perform semantic search over your knowledge base, which works especially well when paired with a good embedding model. More on that later in this series!
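To see why semantic search beats string matching, here is a toy illustration of the retrieval step. The 3-dimensional "embeddings" are hand-made stand-ins chosen for this example; a real system would call an embedding model and use a vector store, but the ranking idea - cosine similarity between query and document vectors - is the same.

```python
# Toy semantic search: rank documents by cosine similarity between
# their embedding and the query's embedding. The vectors here are
# fabricated stand-ins for real embedding-model output.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# (document text, fake embedding)
documents = [
    ("Our refund policy allows returns within 30 days.", [0.9, 0.1, 0.0]),
    ("The office is closed on public holidays.",         [0.1, 0.9, 0.1]),
    ("Invoices are emailed on the first of the month.",  [0.2, 0.1, 0.9]),
]

# Fake embedding for the query "Can I return my order?" - note it shares
# no keywords with the refund document, yet lands near it in vector space.
query_embedding = [0.85, 0.15, 0.05]

best_doc = max(documents, key=lambda d: cosine_similarity(d[1], query_embedding))
print(best_doc[0])
```

The query never mentions the word "refund", but its vector is closest to the refund-policy document, so that chunk is what gets added to the agent's prompt.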
Backstory: It was May 2025 and my LinkedIn feed was flooded with this new “MCP thing”. It piqued my interest, and a few vibe-coding sessions later I was hooked on the realm of agentic AI!
MCP (Model Context Protocol) has become the standard for connecting AI agents to external tools and data sources. With MCP:

- Tools and data sources are exposed through standardized servers
- Agents discover available tools at runtime instead of relying on hard-coded integrations
- The same server can be reused across different agents, frameworks, and assistants
FastMCP makes it easy to build custom MCP servers in Python, enabling you to expose any API or functionality as an agent tool.
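To make the idea concrete, here is a framework-free toy sketch of the two things an MCP server fundamentally offers an agent: tool discovery and tool invocation. This is not the real MCP wire protocol (which runs over JSON-RPC), and the `ToyToolServer` class is invented for illustration; with FastMCP you would decorate plain functions and let the library handle the protocol.

```python
# Conceptual, NOT the real MCP protocol: an MCP server lets agents
# (1) discover its tools and (2) invoke them by name with arguments.

class ToyToolServer:
    """Holds named tools with descriptions, like an MCP server does."""

    def __init__(self):
        self._tools = {}  # name -> (description, function)

    def tool(self, description: str):
        """Register a function as a tool (decorator, FastMCP-style)."""
        def register(func):
            self._tools[func.__name__] = (description, func)
            return func
        return register

    def list_tools(self) -> dict:
        """Discovery: agents ask the server what tools it offers."""
        return {name: desc for name, (desc, _) in self._tools.items()}

    def call_tool(self, name: str, **kwargs):
        """Invocation: agents call a tool by name with arguments."""
        return self._tools[name][1](**kwargs)

server = ToyToolServer()

@server.tool("Convert a temperature from Celsius to Fahrenheit.")
def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(server.list_tools())
print(server.call_tool("celsius_to_fahrenheit", celsius=20))  # → 68.0
```

Because discovery and invocation are standardized, any MCP-capable agent can use any MCP server without custom glue code - that is the whole appeal of the protocol.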
As multi-agent systems become more common, standardized communication between agents is crucial. The Agent-to-Agent (A2A) protocol enables:

- Discovery of other agents and their capabilities
- Delegation of tasks between agents
- Standardized message exchange across frameworks and vendors
We’ll explore A2A in depth when we cover multi-agent architectures later in this series.
This is the first post in a series on building agentic AI applications. I’ll be publishing new posts weekly, covering topics including:

- Building your first agent with Strands and LangGraph
- Adding tools and custom capabilities
- Deploying agents to Amazon Bedrock AgentCore
- Extending agents with RAG knowledge bases
- Building custom MCP tools with FastMCP
- Multi-agent architectures and the A2A protocol
Each post will include practical code examples that you can follow along with. All code will be available in a GitHub repository.
To follow along with this series, you’ll need:

- A working Python environment and basic familiarity with Python
- A code editor and terminal access
Choose your model provider (pick one or more):

- **Ollama** - run open models such as Llama locally for free; the examples in this post use it
- **Amazon Bedrock** - managed access to foundation models on AWS
In the next post, we’ll dive into building your first agent with Strands, starting with a simple conversational agent and progressively adding tools and capabilities.
Agentic AI is rapidly becoming the standard for building sophisticated AI applications. Whether you’re building customer support systems, data analysis tools, or automation workflows, understanding how to build and deploy agents is an essential skill for modern developers.
The frameworks and platforms covered in this series represent the cutting edge of AI development, and mastering them will position you to build the next generation of intelligent applications.