# Running Tasks
With your task defined and a worker running, you can import the task wherever you need it and invoke it.
## Run and wait
Call a task and block until you get the result back. Use this for synchronous workflows like fan-out, LLM calls, or any time you need the output before continuing.
You can run a task and wait for it to complete by calling the `run` method on your `Task` object. This method blocks until the task completes and returns the result.
You can also await the result of `aio_run`:
Note that the type of input here is a Pydantic model that matches the input schema of your workflow.
## Spawning tasks from within a task
You can spawn tasks from within a task. This is useful for composing tasks together, fanning out batched tasks, or creating conditional workflows.
You can run a task from within another task by calling the `aio_run` method on the task object inside the parent task's function. This associates the runs in the dashboard for easier debugging.
The parent task will run and spawn the child task, then collect the results.
## Running tasks in parallel
Since the `aio_run` method returns a coroutine, you can spawn multiple tasks in parallel and await them with `asyncio.gather`.
While you can run multiple tasks in parallel using the `run` method, this is not recommended for large numbers of tasks. Instead, use the bulk run methods for large parallel task execution.
## Fire and forget
Enqueue a task without waiting for the result. Use this for background jobs like sending emails, processing uploads, or kicking off long-running pipelines.
You can enqueue a task by calling the `run_no_wait` method on your task object. This returns a `WorkflowRunRef` without waiting for the result.
You can also await the result of `aio_run_no_wait`:
Note that the type of input here is a Pydantic model that matches the input schema of your task.
## Subscribing to results later
The `run_no_wait` method returns a `WorkflowRunRef`, which includes a listener for the result of the task, so you can subscribe at a later time.
Use `ref.result()` to block until the result is available:
Or await `aio_result`:
## Triggering from the dashboard
In the Hatchet Dashboard, navigate to “Task Runs” in the left sidebar and click “Trigger Run” at the top right. You can specify run parameters such as Input, Additional Metadata, and the Scheduled Time.

## Where you can trigger from
- Same codebase or monorepo - import your task and call `run`, `run_no_wait`, or other trigger methods directly. Your API server, CLI, or another service in the same repo can use the same task definition.
- External API or separate service (polyrepo) - when the triggering code can't import the task definition (different repo, language, or microservice), use a stub: a Hatchet task with the same name and input/output types but no implementation. See Inter-Service Triggering for details.
- From the CLI - use the `hatchet run` command to trigger tasks from the command line.
- From the Dashboard - use the Hatchet dashboard to trigger tasks from the web interface.
## Other trigger styles
Hatchet supports additional trigger patterns for more advanced use cases:
| Style | Use case | Doc |
|---|---|---|
| Scheduled | Run once at a specific time in the future | Scheduled Trigger |
| Cron | Run on a recurring schedule (daily, weekly, etc.) | Cron Trigger |
| Events | Run when an event is emitted (e.g. webhooks, queues) | Event Trigger |
| Bulk | Run the same task many times with different inputs | Bulk Run Many |
| Webhooks | Let external systems trigger workflows via HTTP | Webhooks |
## Next steps
Now that you can run tasks, explore Durable Workflows to compose multiple tasks into pipelines with dependencies and checkpointing.