
How to Create a Support Agent Using Hatchet

Many real-world workflows become difficult to manage once they involve multiple steps, long waits, human replies, and escalation rules. Support is one example, but the same pattern also shows up in onboarding, approvals, incident response, and other operational flows. In this cookbook, we will build a simple support agent that triages a ticket, generates an initial reply, and then waits for either a customer response or a timeout. If the customer replies, the workflow resolves. If no reply arrives in time, the workflow escalates the ticket to a human support agent.

What this example builds

This example implements a durable support workflow: triage an incoming ticket, generate an initial reply, wait for either a customer reply or a timeout, and then resolve or escalate the ticket.

Hatchet’s durable execution model helps keep the whole interaction in one workflow rather than scattering it across separate queue jobs and ad hoc timers.

Setup

Prepare your environment

To run this example, you will need:

  • a working local Hatchet environment or access to Hatchet Cloud
  • a Hatchet SDK example environment (see the Quickstart)
  • optionally, an ANTHROPIC_API_KEY for live LLM replies

Without ANTHROPIC_API_KEY, the example runs using a fixed fallback reply. To use the live Claude path, you also need the Anthropic SDK installed for your language.

Define the models

Start by defining the types for the workflow input and task outputs.
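A minimal sketch of such models, assuming Pydantic and illustrative field names (the actual example's models may differ):

```python
from pydantic import BaseModel


# Workflow input: the ticket to handle (field names are illustrative).
class TicketInput(BaseModel):
    ticket_id: str
    customer_email: str
    message: str


# Output of the triage task.
class TriageResult(BaseModel):
    category: str
    priority: str


# Output of the reply-generation task.
class ReplyResult(BaseModel):
    reply: str


# Final workflow output: "resolved" or "escalated".
class TicketOutcome(BaseModel):
    status: str
    ticket_id: str
```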

The models keep the inputs and outputs for each task explicit, which makes the workflow easier to inspect and test.

Add the workflow tasks

The durable workflow delegates its work to a few small tasks.

First, add a task to classify the incoming ticket:
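The classification itself can be as simple as keyword matching. A hypothetical sketch of that logic as a plain function (a real task would wrap this in a Hatchet task definition):

```python
# Illustrative keyword-based triage; categories and keywords are assumptions,
# not the example's actual rules.
def triage_ticket(message: str) -> dict:
    lowered = message.lower()
    if any(word in lowered for word in ("refund", "charge", "invoice")):
        category = "billing"
    elif any(word in lowered for word in ("error", "crash", "bug")):
        category = "technical"
    else:
        category = "general"
    # Escalate priority when the customer flags urgency explicitly.
    priority = "high" if "urgent" in lowered else "normal"
    return {"category": category, "priority": priority}
```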

Next, add a task to generate the initial support reply. When ANTHROPIC_API_KEY is set, the task calls Claude to produce the reply. Otherwise it returns a fixed fallback response.
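A sketch of that branching logic as a plain function, assuming the Anthropic Python SDK; the model name, prompt, and fallback text here are illustrative, not the example's actual values:

```python
import os

# Illustrative fallback text, not the example's actual canned reply.
FALLBACK_REPLY = (
    "Thanks for reaching out! We've received your request and a member "
    "of our team is looking into it."
)


def generate_reply(message: str) -> str:
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        # No key configured: use the fixed fallback response.
        return FALLBACK_REPLY

    # With a key set, call Claude via the Anthropic SDK.
    import anthropic

    client = anthropic.Anthropic(api_key=api_key)
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed model alias
        max_tokens=300,
        messages=[
            {"role": "user", "content": f"Draft a short support reply to: {message}"}
        ],
    )
    return response.content[0].text
```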

Finally, add a task to represent escalation to the support team:
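In this example the escalation task only needs to record the handoff. A hypothetical sketch, with the routing rule and field names as assumptions:

```python
def escalate_ticket(ticket_id: str, category: str) -> dict:
    # In a real system this would page on-call support or open a task in a
    # ticketing tool; here it just returns a structured result the workflow
    # can surface as its output.
    return {
        "status": "escalated",
        "ticket_id": ticket_id,
        "assigned_team": f"{category}-support",  # illustrative routing rule
    }
```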

Keeping triage, reply generation, and escalation as separate tasks keeps the workflow itself small and makes each piece easier to reason about.

Build the durable workflow

Now tie everything together in a durable Hatchet workflow. A durable workflow is a good fit here because this interaction may stay open for some time while waiting for a customer reply. Hatchet persists the workflow state and its wait conditions, so the workflow can survive long delays, worker restarts, or even a worker crash, then continue later on another worker. That gives you a straightforward way to model the whole interaction without adding custom recovery logic.

The workflow runs triage first, generates an initial reply, and then waits for one of two things to happen: either a customer reply event arrives for that ticket, or the timeout fires. From there, the workflow either resolves the ticket or escalates it.

The detail that matters most here is the lookback window on the reply event condition. A customer reply could arrive while the workflow is still finishing triage or generating the first response. By using a lookback window (consider_events_since in Python, considerEventsSince in TypeScript), the workflow can still pick up that reply once the wait becomes active instead of missing it because the event arrived slightly early.
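Independent of Hatchet's API, the race between a reply event and a timeout can be sketched in plain asyncio. The key difference is that Hatchet persists this wait durably instead of holding it in a worker's memory:

```python
import asyncio


async def wait_for_reply_or_timeout(reply: asyncio.Event, timeout_s: float) -> str:
    # Race the customer-reply signal against the deadline.
    try:
        await asyncio.wait_for(reply.wait(), timeout=timeout_s)
        return "resolved"
    except asyncio.TimeoutError:
        return "escalated"


async def demo() -> tuple[str, str]:
    # Branch 1: the reply arrives before the deadline.
    replied = asyncio.Event()
    replied.set()
    first = await wait_for_reply_or_timeout(replied, timeout_s=0.1)

    # Branch 2: no reply ever arrives, so the wait times out.
    silent = asyncio.Event()
    second = await wait_for_reply_or_timeout(silent, timeout_s=0.1)
    return first, second
```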

Register and start the worker

To run this workflow, register the workflow and its tasks on a Hatchet worker, then start it.

In TypeScript, workflows are registered through the shared example worker rather than a per-example registration file.

With the worker running, you can trigger the workflow and observe either the resolved or escalated outcome.

Trigger the workflow

The example also includes a small trigger script that starts the workflow, pushes a scoped reply event, and waits for the result.

Because the workflow uses a lookback window, the trigger can push the reply event immediately after starting the support agent.

Test it

This example includes two end-to-end tests against a live Hatchet instance:

  • a resolved path, where the customer reply event arrives before the timeout
  • a timeout path, where no reply arrives and the workflow escalates

If you are running the SDK examples locally:

```shell
pytest examples/support_agent/test_support_agent.py
```

Together, these tests validate both branches of the workflow and confirm that early reply events are handled safely without coordination sleeps.

Why Hatchet fits this workflow

The interesting part of this example is not the LLM call. It is the combination of waiting, branching, and keeping the full interaction in one place. A support flow like this usually needs to preserve state across several steps, wait for human input, and react differently depending on whether a reply arrives before a deadline. Hatchet fits that pattern well because you can express the event wait and timeout branch directly in the workflow. That makes the control flow easier to inspect, easier to test, and easier to extend as the interaction becomes more complex.

Next steps

A natural next step would be to connect this workflow to a real ticketing system and carry the conversation beyond a single reply. You could also make escalation depend on the content of the customer response instead of only on timeout. For this cookbook, though, the smaller version is enough to show the core pattern: start work immediately, wait safely for a reply, and escalate when the deadline passes.