Introduction to Hatchet
Welcome to the Hatchet User Guide! Hatchet is a distributed, fault-tolerant task queue designed to solve scaling problems like concurrency, fairness, and rate limiting. Instead of managing your own task queue or pub/sub system, you can use Hatchet to distribute your functions between a set of workers with minimal configuration/infrastructure.
Concepts
You run your workers, we manage the rest.
Hatchet is an orchestrator, which means it manages the execution of your workflows. However, the individual tasks comprising each workflow are executed by your own workers (don’t worry, each SDK comes with a worker implementation). This means you can run your workers in your own infrastructure, and Hatchet will manage the scheduling, retries, and monitoring of your workflows.
Hatchet also has a Managed Compute offering, which makes it even easier to run your workers. With Managed Compute, you can run your workers and let us manage the infrastructure.
Interested in Managed Compute? Ask us about it!
What is a workflow?
In Hatchet, the fundamental unit of invocable work is a Workflow. Each workflow is a collection of Tasks, which are atomic functions. The simplest workflow can be a single task.
Workflows can be run directly, or triggered by an external event, schedule, or API call. For example, if you’d like to send notifications to a user after they’ve signed up, you could create a workflow for that.
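As a rough illustration, here is a minimal sketch of what such a workflow might look like with the Python SDK's decorator-based API. The event name `user:signed-up`, the class `PostSignupWorkflow`, and the step body are hypothetical, and the exact decorators may differ between SDK versions:

```python
from hatchet_sdk import Hatchet, Context

hatchet = Hatchet()  # reads credentials (e.g. HATCHET_CLIENT_TOKEN) from the environment

# A workflow that runs whenever a "user:signed-up" event is pushed.
# Event name and step logic are illustrative, not prescriptive.
@hatchet.workflow(on_events=["user:signed-up"])
class PostSignupWorkflow:
    @hatchet.step()
    def send_welcome_notification(self, context: Context):
        user = context.workflow_input()
        # ... call your notification or email provider here ...
        return {"notified": user.get("email")}
```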
Why is that useful?
Instead of processing background tasks and functions in your application handlers, which can lead to complex code, hard-to-debug errors, and resource contention, you can distribute these workflows between a set of workers. Workers are long-running processes which listen for events and execute the functions defined in your workflows.
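Continuing the hypothetical workflow above, and assuming the Python SDK's worker API, a worker process is a small script that registers the workflows it can execute and then polls for work. The worker name below is illustrative:

```python
from hatchet_sdk import Hatchet

# PostSignupWorkflow is the hypothetical workflow class defined earlier,
# e.g. imported from the module where you declared it.
from my_workflows import PostSignupWorkflow

hatchet = Hatchet()

# Register the workflow(s) this worker can run, then block and listen for work.
worker = hatchet.worker("post-signup-worker")
worker.register_workflow(PostSignupWorkflow())
worker.start()
```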
A managed queue
Hatchet is built on top of a low-latency queue, which means it can handle real-time interactions and business-critical tasks. This is particularly useful if you’re building a real-time application, or if you’re running tasks which need to be completed quickly. It can scale to millions of queued tasks and can handle hundreds of tasks per second. We are continuously working to improve our throughput and latency.
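For example, enqueueing work from an application handler is a single event push. This is a sketch assuming the Python SDK's event client; the client accessor and method may differ by SDK version, and the event name and payload are hypothetical:

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

# Push an event; any workflow listening for "user:signed-up" is queued for execution.
hatchet.client.event.push("user:signed-up", {"user_id": 123, "email": "ada@example.com"})
```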
Quick Starts
We have a number of quick start tutorials to get you up and running with Hatchet:
We also have a number of guides for getting started with the Hatchet SDKs: