Day 2: Agents vs Workflows

Day 2 of the 14-Day AI Product Engineer Course

Welcome to Day 2 of the 14-Day AI Product Engineer Course!

P.S. We are still figuring out how to automate the delivery of this course, so thank you for your patience with the Day 2 lesson.

If you missed the Day 1 lesson, where we introduced Agents, follow this link.

Today we will cover "Agents" and "Workflows" with the OpenAI Agent SDK. If you want the full tutorial with code snippets, please visit our in-depth article here.

This free 14-day course is companion material for the next AI Product Engineer Bootcamp, which starts in June 2025! Use coupon code "EARLY" for $210 off.

"Agent" and "Workflow" get tossed around a lot, but they mean very different things. According to Anthropic’s article, Building effective agents:

Workflows follow predefined code paths to orchestrate and execute LLM calls.

Agents, by contrast, dynamically direct their own execution paths, including tool use, handoffs, and on-the-fly decision-making.

Put simply:

  • Workflows = scripted if/then paths.

  • Agents = dynamic, self-directed behavior.

Workflows are ideal for tasks that require precision and repetition. Agents thrive in complex, changing environments. Most agentic systems in 2025 are workflows.

When to Use Agents vs. Workflows

Choosing between an agent and a workflow depends on the complexity and variability of the task at hand. Anthropic suggests:

  • Opt for Workflows when:

    • Tasks are repetitive and well-defined.

    • Consistency and reliability are paramount.

    • There's minimal need for decision-making or adaptation.

  • Opt for Agents when:

    • Tasks are complex and require reasoning.

    • The environment is dynamic, and decisions need to be made in real-time.

    • There's a need for the system to learn and adapt to the environment.

Now that you understand the difference between agents and workflows, it’s time to put that knowledge into practice.

In this article we will focus on Workflows, because 80% of current agentic solutions are really workflows under the hood.

In the next section, we’ll explore how to build effective agentic systems using OpenAI’s Agent SDK. We will cover the 5 most common workflows conceptually, but if you want more hands-on implementation, read this article.

Building Blocks

The OpenAI Agent SDK introduces some old and some new concepts to define and manage agent workflows:

  • Agents: LLMs configured with instructions, tools, guardrails, and handoffs

  • Tools: Functions that agents can use to take actions beyond just generating text

  • Handoffs: A specialized tool used for transferring control between agents

  • Context: Memory of past actions, plus custom state passed along to tools and other components during a run

  • Guardrails: Configurable safety checks for input and output validation

*The Agent SDK also has a tracing module that allows you to view, debug, and optimize your workflows inside OpenAI’s developer dashboard.

How do you define an agent?

from agents import Agent

basic_agent = Agent(
    name="My First Agent",
    instructions="You are a helpful assistant.",
    model="gpt-4o"  # Optional: defaults to "gpt-4o" if not specified
)

Agent as a feedback loop with access to your custom environment

At the center of the OpenAI Agent SDK is the Agent class. It has 3 main components: name, instructions, and model.

Additionally, you can select and define more attributes, like tools, output_type, and handoffs. See the documentation for more details.

Workflow Patterns

A workflow is a structured system where an LLM (or, in our case, an instance of the Agent class from the OpenAI Agent SDK) follows a predefined sequence of steps to complete a task. Unlike fully autonomous agents, which dynamically decide how to act, workflows operate on a fixed path, which makes them great for reliability, speed, and repeatability.

In this lesson we will focus on the 5 most common workflow patterns:

  • Prompt Chaining: Breaks complex tasks into a sequence of steps, using each LLM output as the next input.

  • Routing: Classifies the user’s input and sends it down the right specialized workflow.

  • Parallelization: Runs different agents in parallel on independent subtasks, or the same agent multiple times in parallel (voting).

  • Orchestrator–Workers: A lead agent breaks down tasks and assigns them to “worker” agents dynamically based on the problem.

  • Evaluator–Optimizer: One model generates, another critiques in a feedback loop.

Prompt Chaining

Sometimes, solving a problem takes more than one step, especially when each step builds on the last. That’s where prompt chaining comes in. This workflow creates a linear chain of agents, where each one takes the output of the previous agent and pushes the task further.

When to use prompt chaining:

Prompt chaining is ideal when a task has clear sequential steps that depend on one another. It’s especially useful for workflows that require validation or review before moving forward (see "gate" in the diagram below).
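
The pattern can be sketched in a few lines of plain Python. This is not SDK code; `call_llm` and `gate` are illustrative stand-ins for a real model call and a real validation step:

```python
# Prompt chaining sketch: each step feeds its output to the next,
# with a "gate" check between steps. `call_llm` is a stub standing
# in for a real model call.
def call_llm(prompt: str) -> str:
    # Stub: pretend the model transforms the prompt.
    return f"output({prompt})"

def gate(text: str) -> bool:
    # Validation step: only continue if the output looks sane.
    return len(text) > 0

def prompt_chain(task: str, steps: list[str]) -> str:
    result = task
    for step in steps:
        result = call_llm(f"{step}: {result}")
        if not gate(result):
            raise ValueError(f"Gate failed after step {step!r}")
    return result

final = prompt_chain("Write a product blurb", ["draft", "edit", "translate"])
```

The key property is linearity: each call sees only the previous call's output, so every stage can stay focused on one transformation.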

Routing

Not every problem can be solved with the same kind of agent. That’s where routing comes in. Instead of relying on one generalist agent, this workflow assigns incoming requests to the right specialist based on the user’s query.

When to use routing:

Routing is powerful when your system supports multiple domains or expertise areas, and you want to give users the best possible answer from the most relevant source. It’s perfect for coding help, customer support, or language-based queries.

By giving each agent a focused role and letting a coordinator handle direction, this pattern keeps your AI system organized.
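
As a minimal sketch of the idea (the classifier and handlers here are stubs, not SDK calls; in a real system the classifier would itself be an LLM and each handler a specialized agent):

```python
# Routing sketch: a classifier picks the specialist that handles the query.
def classify(query: str) -> str:
    # Stub for an LLM-based classifier.
    q = query.lower()
    if "python" in q or "bug" in q:
        return "coding"
    if "refund" in q or "order" in q:
        return "support"
    return "general"

# Each handler stands in for a specialized agent.
HANDLERS = {
    "coding": lambda q: f"[coding agent] {q}",
    "support": lambda q: f"[support agent] {q}",
    "general": lambda q: f"[general agent] {q}",
}

def route(query: str) -> str:
    # Classify first, then dispatch to the matching specialist.
    return HANDLERS[classify(query)](query)
```

Note that the router only decides *where* the query goes; the specialists own *how* it gets answered.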

Parallelization

Agentic workflows can handle tasks simultaneously, and their outputs can be combined or evaluated afterward. This approach is known as parallelization, and it comes in two main forms:

  • Sectioning: Split a task into sub-tasks that are independent from one another and run them in parallel.

  • Voting: Run the same task multiple times in parallel to generate diverse outputs, then select the best one.

When to use parallelization:

  • The task can be broken down into parts that can be done faster in parallel.

  • You want multiple perspectives or repeated attempts to boost the quality or confidence of the final result.

For complex problems with several factors to consider, it's often better to let each LLM call focus on just one factor. This leads to clearer, more thoughtful outputs and improves overall performance.
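
Both forms can be sketched with a thread pool and a stubbed model call (`call_llm` is an illustrative stand-in; with real API calls you would typically use async calls instead):

```python
# Parallelization sketch: sectioning runs independent subtasks concurrently;
# voting runs the same task several times and keeps the most common answer.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"answer({prompt})"

def sectioning(subtasks: list[str]) -> list[str]:
    # Run independent subtasks in parallel; results keep input order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(call_llm, subtasks))

def voting(task: str, n: int = 3) -> str:
    # Run the same task n times, then keep the majority output.
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(call_llm, [task] * n))
    return Counter(outputs).most_common(1)[0][0]
```

With a deterministic stub every vote agrees, but with a real model at non-zero temperature the runs diverge, which is exactly what makes voting useful.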

Orchestrator-Workers

When tasks are too complex to break down ahead of time, orchestrator-workers steps in: a central LLM decomposes the request, spins up worker Agents for each subtask, then gathers and unifies their outputs into a single answer.

When to use orchestrator-workers:

  • Perfect when the problem requires a complex multi-agent approach and you can’t define the subtasks before seeing the user’s query.

  • When the orchestrator needs to make multiple changes across different environments, or to use different sub-agents and tools each time.
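
Structurally, the pattern is decompose, fan out, then merge. The sketch below stubs all three LLM roles (planner, worker, synthesizer); the function names are illustrative, not SDK API:

```python
# Orchestrator-workers sketch: the orchestrator decomposes the request into
# subtasks at runtime, workers handle each one, and a synthesis step merges
# the results into a single answer.
def orchestrate(request: str) -> list[str]:
    # Stub for an LLM that plans subtasks based on the request.
    return [f"research: {request}", f"draft: {request}", f"review: {request}"]

def worker(subtask: str) -> str:
    # Stub for a worker agent handling one subtask.
    return f"done({subtask})"

def synthesize(results: list[str]) -> str:
    # Stub for the step that merges worker outputs.
    return " | ".join(results)

def orchestrator_workers(request: str) -> str:
    subtasks = orchestrate(request)           # decompose
    results = [worker(s) for s in subtasks]   # fan out
    return synthesize(results)                # merge
```

The difference from plain parallelization is that the subtask list is decided by the orchestrator at runtime, not fixed in code ahead of time.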

Evaluator-Optimizer

When you need to iteratively polish output, evaluator-optimizer loops between a generator LLM and an evaluator LLM: one proposes a response, the other critiques it, and the cycle repeats until it meets your standards.

When to use evaluator-optimizer:

  • When you have clear evaluation criteria and measurable metrics by which to judge improvements.

  • When you want your outputs to be improved with feedback.

  • When you need to prioritize accuracy for the final LLM output.

Congratulations on finishing the Day 2 lesson!

If you want to learn how to implement these workflows to production-grade standards, join our upcoming AI Product Engineering Bootcamp, which starts in June 2025!

In this bootcamp, you will:

  1. Design a custom multi-agent workflow tailored to a real-world problem.

  2. Implement and test agents using the OpenAI Agents SDK, MCPs and/or AI SDK from Vercel and tracing tools.

  3. Deploy your multi-agent system and monitor performance with real-time traces.

    AND SO MUCH MORE!