Introduction to Hatchet
Welcome to the Hatchet User Guide! Hatchet is a platform for running background tasks at scale. Instead of managing your own task queue or pub/sub system, you can use Hatchet to distribute your functions across a set of workers with minimal configuration or infrastructure. Hatchet supports the following features:
- 📥 Queues
- 🎻 Task Orchestration (DAGs and durable execution)
- 🚦 Flow Control (concurrency and rate limiting)
- 📅 Scheduling (cron jobs and scheduled tasks)
- 🚏 Task routing (sticky execution and affinity)
- ⚡️ Event triggers and listeners
- 🖥️ Real-time Observability Dashboard
Concepts
Background tasks
Background tasks are functions that run outside of the main request/response cycle of your application. They are typically invoked from your application code, from an external event (like a webhook), or on a schedule (like a cron job). Background tasks are useful for offloading work from your application, and for running complex, long-running, or resource-intensive tasks.
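As a rough sketch of what this looks like in practice, the example below defines a background task with the Hatchet Python SDK and enqueues it from application code without waiting for the result. The `input_validator` argument and `run_no_wait` call follow the Python SDK quickstart, but exact names and signatures can differ between SDK versions, so treat this as illustrative rather than canonical:

```python
# Illustrative sketch based on the Hatchet Python SDK quickstart;
# exact signatures may vary between SDK versions.
from pydantic import BaseModel

from hatchet_sdk import Context, Hatchet

hatchet = Hatchet()  # reads the Hatchet client token from the environment


class WelcomeEmailInput(BaseModel):
    user_id: str


@hatchet.task(name="send-welcome-email", input_validator=WelcomeEmailInput)
def send_welcome_email(input: WelcomeEmailInput, ctx: Context) -> dict:
    # The task body runs on a worker, outside the request/response cycle.
    # Your own email-sending logic would go here.
    return {"status": "sent", "user_id": input.user_id}


# From your application code (e.g. a signup handler), enqueue the task
# without waiting for the result:
# send_welcome_email.run_no_wait(WelcomeEmailInput(user_id="user-123"))
```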
Workers
Hatchet is responsible for invoking tasks, which run on workers. Workers are long-running processes that connect to Hatchet and execute the functions defined in your tasks. They can be run on your own infrastructure, or on Hatchet’s managed compute offering.
One of the design goals of Hatchet is to ensure that workers can be run anywhere, from a PaaS like Heroku to a Kubernetes cluster running in your own data center.
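A worker is typically just a small entrypoint script that registers your tasks and starts polling for work. The sketch below again follows the Python SDK quickstart; the `hatchet.worker(...)` signature and the `my_app.tasks` module are assumptions for illustration:

```python
# Illustrative worker entrypoint; run this process wherever you deploy
# workers (a container, a VM, a PaaS, Kubernetes, or Hatchet's managed compute).
from my_app.tasks import hatchet, send_welcome_email  # hypothetical module from the previous sketch


def main() -> None:
    # Register the tasks this worker can execute, then block and poll
    # Hatchet for work assigned to this worker.
    worker = hatchet.worker("welcome-email-worker", workflows=[send_welcome_email])
    worker.start()


if __name__ == "__main__":
    main()
```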
What is a task?
A task is a unit of work that can be executed by Hatchet. Tasks can be run directly, or executed in response to an external trigger (an event, schedule, or API call). For example, if you’d like to send notifications to a user after they’ve signed up, you could create a task for that. Tasks can be spawned from within another task, or composed into a workflow structured as a directed acyclic graph (DAG), as sketched below.
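For the DAG case, the sketch below wires two tasks into a single workflow, with the second declaring the first as its parent so it only runs after the first succeeds. The `hatchet.workflow(...)` factory and the `parents=[...]` argument are taken from the Python SDK's DAG examples; treat the exact names as assumptions and check the SDK reference:

```python
# Illustrative DAG workflow sketch; names follow the Python SDK's DAG
# examples and may differ between SDK versions.
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

onboarding = hatchet.workflow(name="user-onboarding")


@onboarding.task()
def create_account(input: EmptyModel, ctx: Context) -> dict:
    # First node in the DAG.
    return {"account_id": "acct-123"}


@onboarding.task(parents=[create_account])
def notify_user(input: EmptyModel, ctx: Context) -> dict:
    # Runs only after create_account has completed successfully.
    return {"status": "sent"}
```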
Durable queue
Hatchet is built on top of a durable, low-latency queue, which means it can handle real-time interactions and business-critical tasks. This is particularly useful if you’re building a real-time application, or if you’re running tasks which need to be completed quickly. It can scale to millions of queued tasks and handle thousands of tasks per second. We are continuously working to improve our throughput and latency.
Quick Starts
We have a number of quick start tutorials to help you get up and running with Hatchet:
We also have a number of guides for getting started with the Hatchet SDKs: