
Workers

Workers are the processes that actually execute your tasks. Each worker is a long-running process in your infrastructure that maintains a persistent gRPC connection to the Hatchet engine. Workers receive task assignments, run your code, and report results back. You can run them locally during development, in containers, or on VMs - and scale them independently from the rest of your stack.

Declaring a worker

A worker needs a name and a set of tasks to handle. Call the worker method on the Hatchet client with both.

When a worker starts, it registers each of its tasks with the Hatchet engine. From that point on, Hatchet knows to route matching tasks to that worker. Multiple workers can register the same task - Hatchet distributes work across all of them.
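As a mental model, the registration-and-routing behavior described above can be sketched in a few lines of Python. This is illustrative only, not the Hatchet engine or SDK: `Engine`, `register`, and `assign` are hypothetical names, and the real engine's scheduling is more sophisticated than simple round-robin.

```python
# Illustrative sketch of task registration and routing (not Hatchet's code).
from collections import defaultdict
from itertools import cycle

class Engine:
    def __init__(self):
        self.registrations = defaultdict(list)  # task name -> worker names
        self._cursors = {}

    def register(self, worker_name, task_names):
        # A worker announces the tasks it can run when it starts.
        for task in task_names:
            self.registrations[task].append(worker_name)

    def assign(self, task_name):
        # Distribute runs across every worker that registered the task.
        if task_name not in self._cursors:
            self._cursors[task_name] = cycle(self.registrations[task_name])
        return next(self._cursors[task_name])

engine = Engine()
engine.register("worker-1", ["simpletask:step1"])
engine.register("worker-2", ["simpletask:step1"])

# Four runs of the same task alternate between the two registered workers.
assignments = [engine.assign("simpletask:step1") for _ in range(4)]
```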

Starting a worker

The fastest way to run a worker during development is with the Hatchet CLI. This handles authentication and hot-reloads your worker when code changes:

hatchet worker dev

Once the worker starts, you will see logs confirming it is connected:

[INFO]  🪓 -- STARTING HATCHET...
[DEBUG] 🪓 -- 'test-worker' waiting for ['simpletask:step1']
[DEBUG] 🪓 -- acquired action listener: efc4aaf2-...
[DEBUG] 🪓 -- sending heartbeat

If you are self-hosting Hatchet, you may need to set additional gRPC configuration options. See the Self-Hosting docs for details.

Worker lifecycle

A worker moves through three states during its lifetime:

  • ACTIVE - the worker is connected and accepting tasks.
  • INACTIVE - the engine has not received a heartbeat within the expected window. Tasks assigned to this worker will be reassigned.
  • STOPPED - the worker shut down gracefully. In-flight tasks are allowed to complete before the process exits.

Hatchet uses heartbeats to monitor worker health. Workers send a heartbeat every 4 seconds. If the engine does not receive a heartbeat for 30 seconds, the worker is marked INACTIVE and its in-flight tasks are re-queued for other workers to pick up.
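The heartbeat rule above can be written as a small sketch. The function and argument names are hypothetical; the 4-second interval and 30-second timeout come from the text.

```python
# Sketch of the liveness rule: ACTIVE while heartbeats are recent,
# INACTIVE once the timeout window lapses. Names are illustrative.
HEARTBEAT_INTERVAL = 4   # seconds between heartbeats
LIVENESS_TIMEOUT = 30    # seconds of silence before a worker is marked INACTIVE

def worker_status(now: float, last_heartbeat_at: float) -> str:
    """Classify a worker from the time since its last heartbeat."""
    return "ACTIVE" if now - last_heartbeat_at <= LIVENESS_TIMEOUT else "INACTIVE"

worker_status(now=100.0, last_heartbeat_at=98.0)   # 2s since last beat
worker_status(now=100.0, last_heartbeat_at=60.0)   # 40s of silence
```

Note that a worker can miss several consecutive heartbeats (at a 4-second interval) before the 30-second window lapses, so a brief network blip does not immediately mark it INACTIVE.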

Common reasons a worker misses heartbeats:

  • Process crash - the worker process exits unexpectedly (OOM kill, unhandled exception, SIGKILL).
  • Network disruption - the connection between the worker and the Hatchet engine is interrupted (DNS failure, firewall change, cloud network blip).
  • Blocked main thread - a long-running synchronous computation (e.g. CPU-intensive work, a blocking FFI call) starves the heartbeat loop and prevents it from sending on time.
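The third failure mode is easy to reproduce with plain asyncio (this is not Hatchet SDK code, just a demonstration of the mechanism): a blocking call on the event loop's thread prevents a heartbeat coroutine from running until the call returns.

```python
# Demonstration: a blocking call starves a heartbeat loop sharing its event loop.
import asyncio
import time

async def heartbeat(beats, interval=0.05):
    while True:
        beats.append(time.monotonic())   # stand-in for "send heartbeat"
        await asyncio.sleep(interval)

async def main():
    beats = []
    hb = asyncio.create_task(heartbeat(beats))
    await asyncio.sleep(0)        # yield so the first heartbeat is sent
    time.sleep(0.5)               # blocking work: the event loop cannot run
    blocked = len(beats)          # no new heartbeats were sent while blocked
    await asyncio.sleep(0.3)      # yielding work: heartbeats resume
    hb.cancel()
    return blocked, len(beats)

blocked, total = asyncio.run(main())
```

Offloading blocking work to a thread or process pool (for example via `asyncio.to_thread`) keeps the heartbeat loop responsive.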

Slots

Every worker has a fixed number of slots that control how many tasks it can run concurrently. You configure them with the slots option on the worker. If you set slots=5, the worker will run up to five tasks at the same time. Any additional tasks wait in the queue until a slot opens up.
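Conceptually, a slot behaves like a permit from a semaphore: a task occupies one slot for as long as it runs. The sketch below is illustrative, not Hatchet's implementation; it caps twenty submitted tasks at five concurrent runs.

```python
# Conceptual sketch of slots as a local concurrency limit (not Hatchet's code).
import threading

SLOTS = 5
slots = threading.BoundedSemaphore(SLOTS)
lock = threading.Lock()
active = 0
peak = 0

def run_task():
    global active, peak
    with slots:                  # a task occupies one slot while it runs
        with lock:
            active += 1
            peak = max(peak, active)
        # ... the task's actual work would happen here ...
        with lock:
            active -= 1

# Submit 20 tasks; at most SLOTS of them are ever in flight at once.
threads = [threading.Thread(target=run_task) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```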

Slots are a local limit - they protect the individual worker process from overcommitting its CPU, memory, or event loop. Concurrency controls are a global limit across your entire fleet - use them to prevent a single tenant or use-case from monopolizing capacity, or to respect the limits of an external resource like a third-party API or database connection pool. The two work together: concurrency controls decide how many runs Hatchet will allow to be active; slots decide how many of those runs each individual worker is willing to accept.

Choosing a slot count

Start with a slot count that matches the degree of parallelism your worker can sustain. For CPU-heavy tasks, that is typically the number of available cores. For I/O-heavy tasks (HTTP calls, database queries), you can safely go higher because most of the time is spent waiting.
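One way to encode this rule of thumb as a starting point (the `io_multiplier` of 4 is an assumption for illustration, not a Hatchet default; tune it against your own measurements):

```python
# Hedged heuristic: CPU-bound work scales with core count,
# I/O-bound work can safely run several tasks per core.
import os

def suggested_slots(workload: str, io_multiplier: int = 4) -> int:
    cores = os.cpu_count() or 1
    if workload == "cpu":
        return cores                  # roughly one task per core
    if workload == "io":
        return cores * io_multiplier  # most time is spent waiting
    raise ValueError(f"unknown workload: {workload}")
```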

Adding slots helps only until the worker becomes bottlenecked by another resource. If your worker is CPU-bound, memory-bound, or saturating network I/O, more slots just increase contention. Monitor memory usage and event loop lag after changing slot counts; if either climbs, you have gone too far.

Scaling workers

You can increase throughput in two ways: add more slots to a single worker, or run more worker processes. In most workloads, horizontal scaling (more workers) is the simplest path because each worker brings its own pool of slots and its own resources.

When running in Kubernetes or a similar orchestrator, you can autoscale workers based on queue depth using the Task Stats API. Hatchet also supports KEDA integration for event-driven autoscaling.
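A minimal queue-depth scaling rule might look like the sketch below. The function and its bounds are hypothetical; in practice the queue depth would come from the Task Stats API and the replica count would be applied by your orchestrator.

```python
# Sketch of a queue-depth autoscaling rule (illustrative names and bounds).
import math

def desired_workers(queue_depth: int, slots_per_worker: int,
                    min_workers: int = 1, max_workers: int = 20) -> int:
    # Enough workers to cover the backlog, clamped to operational bounds.
    needed = math.ceil(queue_depth / slots_per_worker)
    return max(min_workers, min(needed, max_workers))

desired_workers(42, 5)   # 42 queued runs at 5 slots each need 9 workers
```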

Task assignment

By default, Hatchet distributes tasks to any available worker that has registered the task. You can influence this behavior in several ways:

  • Worker Affinity - Prefer or require specific workers based on labels and weights.
  • Sticky Assignment - Pin related tasks in a workflow to the same worker.
  • Manual Slot Release - Free a worker slot before the task function returns.

These are useful when a worker has specialized hardware (a GPU, a loaded ML model), or when co-locating related tasks on the same worker avoids redundant setup.

Running in production

In development, the fastest way to run a worker is hatchet worker dev, which handles authentication and hot-reloads your code on changes. In production, you’ll run workers as standalone processes or containers.

  • Running with Docker - Containerize workers for deployment.
  • Autoscaling Workers - Scale workers dynamically based on queue depth.
  • Worker Health Checks - Expose /health and /metrics endpoints for monitoring.
  • Preparing for Production - Operational best practices for monitoring, error handling, and scaling.

Workers and tasks

Workers and tasks have a many-to-many relationship. A single worker can register many tasks, and a single task can be registered on many workers. This means you can organize your workers by resource requirements, deployment boundary, or any other criterion - and Hatchet handles routing tasks to the right place.
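A toy illustration of that many-to-many mapping, with workers grouped by resource requirements (all worker and task names here are made up):

```python
# Illustrative only: one worker can register many tasks,
# and one task can be registered on many workers.
workers = {
    "gpu-worker":   ["embed-text", "run-inference"],
    "cpu-worker-1": ["send-email", "embed-text"],
    "cpu-worker-2": ["send-email"],
}

def workers_for(task: str) -> list[str]:
    """Every worker eligible to run the given task."""
    return sorted(name for name, tasks in workers.items() if task in tasks)

workers_for("embed-text")   # registered on both a GPU and a CPU worker
```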

If you haven’t already, read about tasks to understand how work is defined and configured.