Child Spawning
A task can spawn child tasks at runtime, including other durable tasks or entire DAG workflows. Children run independently on any available worker, and the parent can wait for their results.
Both durable tasks and DAG tasks support child spawning with the same core API. The key difference is that durable tasks free the parent’s worker slot while waiting (via eviction), whereas DAG tasks hold their slot for the duration of execution.
Spawning from Durable Tasks
A durable task can spawn child tasks at runtime. This is one of the core reasons to choose durable tasks over DAGs: the shape of work is decided as the task runs, not declared upfront.
Waiting for child results puts the parent task into an evictable state: the worker slot is freed, and the parent is re-queued when results are available.
Because the parent is evicted while children execute:
- No slot waste — the parent doesn’t hold a worker slot while N children run across your fleet.
- No deadlocks — because the parent is evicted, it can’t starve its own children for slots.
- Dynamic N — you decide how many children to spawn based on runtime data (input size, API responses, agent reasoning).
Spawning child tasks
Use the context object to spawn a child task from within a durable task. The child runs independently on any available worker.
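The exact spawn call depends on the SDK you use, so the sketch below models only the control flow in plain asyncio. `resize_image` is a hypothetical stand-in for a registered child task that would run on another worker:

```python
import asyncio

# Hypothetical child task: in Hatchet this would be a registered task
# picked up by any available worker; here it is modeled as a coroutine.
async def resize_image(url: str) -> dict:
    await asyncio.sleep(0)  # stand-in for real work on another worker
    return {"url": url, "status": "resized"}

async def parent(input: dict) -> dict:
    # Spawn the child and wait for its result. In a durable task, the
    # parent is evicted from its worker slot during this await.
    return await resize_image(input["url"])

print(asyncio.run(parent({"url": "s3://bucket/cat.png"})))
# → {'url': 's3://bucket/cat.png', 'status': 'resized'}
```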
Parallel fan-out
Spawn many children at once and wait for all results. The parent is evicted during the wait, so it consumes no resources while children run.
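The fan-out/wait-all shape can be sketched with `asyncio.gather` (again a model of the pattern, not the SDK API; `process_item` is hypothetical):

```python
import asyncio

async def process_item(item: int) -> int:
    await asyncio.sleep(0)  # stand-in for a child task on another worker
    return item * item

async def parent(items: list[int]) -> list[int]:
    # Spawn one child per item, then wait for all results at once.
    # In Hatchet, the parent is evicted for the duration of this wait,
    # so it holds no worker slot while the children run.
    children = [process_item(i) for i in items]
    return await asyncio.gather(*children)

print(asyncio.run(parent([1, 2, 3])))
# → [1, 4, 9]
```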
What children can be
A durable task can spawn any runnable:
| Child type | Example |
|---|---|
| Regular task | Spawn a stateless task for a quick computation or API call. |
| Durable task | Spawn another durable task that has its own checkpoints, sleeps, and event waits. |
| DAG workflow | Spawn an entire multi-task workflow and wait for its final output. |
Error handling
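Typically, a child failure surfaces as an exception at the parent’s await, so the parent can catch it and decide whether to retry, substitute a fallback, or fail outright. A minimal sketch of that shape (hypothetical names, plain asyncio):

```python
import asyncio

async def flaky_child(x: int) -> int:
    if x < 0:
        raise ValueError("negative input")
    return x * 2

async def parent(items: list[int]) -> dict:
    # Await each child defensively; a failed child raises here,
    # and the parent chooses how to proceed.
    ok, failed = [], []
    for item in items:
        try:
            ok.append(await flaky_child(item))
        except ValueError:
            failed.append(item)  # e.g. skip, retry, or dead-letter
    return {"ok": ok, "failed": failed}

print(asyncio.run(parent([1, -2, 3])))
# → {'ok': [2, 6], 'failed': [-2]}
```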
Common patterns
Dynamic fan-out / fan-in
Process a list of items whose length is only known at runtime. Spawn one child per item, collect all results, then continue. Document processing and batch processing are canonical examples: when a batch of files arrives, a parent fans out to one child per document; each child parses, extracts, and validates its document in parallel across your worker fleet.
Concurrency controls how many children run simultaneously. Hatchet distributes child tasks across available workers, so adding workers increases throughput without code changes. For rate-limited external services (OCR, LLM APIs), combine with Rate Limits to throttle child execution across all workers.
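A bounded fan-out can be modeled with a semaphore capping how many children run at once, which mirrors a concurrency limit on a rate-limited service (illustrative names, plain asyncio):

```python
import asyncio

async def ocr_page(page: int, sem: asyncio.Semaphore) -> str:
    # The semaphore caps simultaneous children, standing in for a
    # concurrency or rate limit on an external OCR/LLM service.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the external call
        return f"page-{page}-text"

async def process_batch(pages: list[int], max_parallel: int = 4) -> list[str]:
    sem = asyncio.Semaphore(max_parallel)
    # Results come back in input order even though execution overlaps.
    return await asyncio.gather(*(ocr_page(p, sem) for p in pages))

print(asyncio.run(process_batch([1, 2, 3])))
# → ['page-1-text', 'page-2-text', 'page-3-text']
```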
Agent loops
To implement an agent loop, a durable task spawns a new child run of itself with updated input until a termination condition is met. Each iteration is a separate child task, giving full observability in the dashboard. AI agents use this pattern when they reason about what to do, spawn a subtask (or a sub-workflow), inspect the result, and decide whether to continue, branch, or stop. See AI Agents for a detailed guide.
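The loop’s shape can be sketched as follows, with the self-respawn modeled as a loop over a state dict (hypothetical `agent_step`, plain asyncio):

```python
import asyncio

async def agent_step(state: dict) -> dict:
    # One iteration: reason, act, and return updated state.
    # In Hatchet, this would be a fresh child run of the same task.
    return {"count": state["count"] + 1}

async def agent_loop(state: dict, max_steps: int = 10) -> dict:
    # Re-spawn with updated input until the termination condition holds;
    # max_steps guards against runaway loops.
    for _ in range(max_steps):
        if state["count"] >= 3:  # termination condition
            break
        state = await agent_step(state)
    return state

print(asyncio.run(agent_loop({"count": 0})))
# → {'count': 3}
```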
Recursive workflows
A durable task spawns child durable tasks, each of which may spawn their own children. This creates a tree of work that’s entirely driven by runtime logic, useful for crawlers, recursive search, and tree-structured computations.
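The tree-of-work shape can be sketched recursively: each call stands in for a durable child task that spawns its own children based on runtime data (here, a hypothetical crawl frontier):

```python
import asyncio

async def crawl(node: str, tree: dict[str, list[str]]) -> int:
    # Each call models a durable child task; its children's children
    # form a tree of work driven entirely by runtime data.
    children = tree.get(node, [])
    counts = await asyncio.gather(*(crawl(c, tree) for c in children))
    return 1 + sum(counts)  # pages visited in this subtree

site = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
print(asyncio.run(crawl("root", site)))
# → 5
```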
Use cases
- Dynamic fan-out processing — When the number of parallel tasks is determined at runtime. See Batch Processing and Document Processing.
- Reusable workflow components — Create modular workflows that can be reused across different parent workflows.
- Resource-intensive operations — Spread computation across multiple workers.
- Agent-based systems — Allow AI agents to spawn new workflows based on their reasoning. See AI Agents.
- Long-running operations — Break down long operations into smaller, trackable units of work.