SDK Improvements in V1
The Hatchet SDKs have seen considerable improvements with the V1 release.
The examples in our documentation now use the V1 SDKs, so following individual examples will help you get familiar with the new SDKs and understand how to migrate from V0.
Highlights
The Python SDK has a number of notable highlights to showcase for V1. Many of them have been covered elsewhere, such as in the migration guide, on the Pydantic page, and in various examples. Here, we'll list each of them, along with their motivations and benefits.
First and foremost: Many of the changes in the V1 Python SDK are motivated by improved support for type checking and validation across large codebases and in production use-cases. With that in mind, the main highlights in the V1 Python SDK are:
- Workflows are now declared with `hatchet.workflow`, which returns a `Workflow` object, or with `hatchet.task` (for simple cases), which returns a `Standalone` object. Workflows then have their corresponding tasks registered with `Workflow.task`. The `Workflow` object (and the `Standalone` object) can be reused easily across the codebase, and has wrapper methods like `run` and `schedule` that make it easy to run workflows. In these wrapper methods, inputs to the workflow are type checked, and you no longer need to specify the name of the workflow to run as a magic string.
- Tasks have their inputs type checked, and inputs are now Pydantic models. The `input` field is either the model you provide to the workflow as the `input_validator`, or an `EmptyModel`, which is a helper Pydantic model Hatchet provides and uses as a default.
- In the new SDK, we define the `parents` of a task as a list of `Task` objects, as opposed to a list of strings. This also allows us to use `ctx.task_output(my_task)` to access the output of the `my_task` task in a downstream task, while allowing that output to be type checked correctly.
- In the new SDK, inputs are injected directly into the task as the first positional argument, so a task's signature is now a callable of `(YourWorkflowInputType, Context)`. This replaces the old method of accessing workflow inputs via `context.workflow_input()`.
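The pattern above can be sketched without the SDK at all: a workflow object that carries its own input type, so `run` can validate inputs and callers never reference the workflow by a magic string. This is a toy stdlib illustration of the idea, not the Hatchet API — the `Workflow` class, `SimpleInput` model, and `lower` task here are all made up for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


@dataclass
class SimpleInput:
    message: str


class Workflow(Generic[T]):
    """Toy stand-in for the idea behind a reusable, typed workflow object."""

    def __init__(self, name: str, input_type: type[T]) -> None:
        self.name = name
        self.input_type = input_type
        self._tasks: list[Callable[[T], dict]] = []

    def task(self, fn: Callable[[T], dict]) -> Callable[[T], dict]:
        # Tasks are registered against the workflow object, not by name.
        self._tasks.append(fn)
        return fn

    def run(self, input: T) -> list[dict]:
        # Inputs are checked against the declared type before anything runs.
        if not isinstance(input, self.input_type):
            raise TypeError(f"{self.name!r} expects {self.input_type.__name__}")
        return [fn(input) for fn in self._tasks]


wf = Workflow("simple", SimpleInput)


@wf.task
def lower(input: SimpleInput) -> dict:
    return {"transformed": input.message.lower()}


print(wf.run(SimpleInput(message="HELLO")))  # [{'transformed': 'hello'}]
```

Because callers hold the `wf` object itself, a typo in a workflow name becomes an `AttributeError` at import time rather than a runtime lookup failure.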
Other Breaking Changes
There have been a number of other breaking changes throughout the SDK in V1.
Typing improvements:
- External-facing protobuf objects, such as `StickyStrategy` and `ConcurrencyLimitStrategy`, have been replaced by native Python enums to make working with them easier.
- All external-facing types that are used for triggering workflows, scheduling workflows, and so on are now Pydantic objects, as opposed to `TypedDict`s.
- The return type of each `Task` is restricted to a `JSONSerializableMapping` or a Pydantic model, to better align with what the Hatchet Engine expects.
- The `ClientConfig` now uses Pydantic Settings, and we've removed the static methods on the Client for `from_environment` and `from_config` in favor of passing configuration in directly.
- The REST API wrappers, which previously lived under `hatchet.rest`, have been completely overhauled.
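To picture the enum change, compare passing a raw protobuf integer constant with a native `Enum` member, which type checkers can verify and which prints readably. This is an illustrative stdlib sketch — the member names below mirror the spirit of `StickyStrategy` but are assumptions, not the SDK's actual definitions.

```python
from enum import Enum


class StickyStrategy(Enum):
    # Illustrative members only; the real SDK defines its own set.
    SOFT = "SOFT"
    HARD = "HARD"


def describe(strategy: StickyStrategy) -> str:
    # A native enum gives a readable repr and a type-checkable parameter,
    # instead of an opaque protobuf integer.
    return f"sticky strategy: {strategy.value}"


print(describe(StickyStrategy.SOFT))  # sticky strategy: SOFT
```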
Naming changes:
- We no longer have nested `aio` clients for async methods. Instead, async methods throughout the entire SDK are prefixed with `aio_`, similar to Langchain's use of the `a` prefix to indicate async. For example, to run a workflow, you may now use either `workflow.run()` or `workflow.aio_run()`.
- All functions on Hatchet clients are now verbs. For instance, if something was named `hatchet.nounVerb` before, it will now be something more like `hatchet.verb_noun`. For example, `hatchet.runs.get_result` gets the result of a workflow run.
- `timeout`, the execution timeout of a task, has been renamed to `execution_timeout` for clarity.
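The `aio_` convention can be mirrored in plain Python: a client exposes a blocking method and a coroutine of the same name with the `aio_` prefix, so callers pick the variant that matches their context. This is a sketch of the naming pattern only — `FakeRunsClient` is a made-up class, not part of the SDK.

```python
import asyncio


class FakeRunsClient:
    """Made-up client illustrating the sync/async naming convention."""

    def get_result(self, run_id: str) -> dict:
        # Blocking variant.
        return {"run_id": run_id, "status": "SUCCEEDED"}

    async def aio_get_result(self, run_id: str) -> dict:
        # Async variant: same name, aio_ prefix.
        await asyncio.sleep(0)  # stand-in for awaiting a remote call
        return self.get_result(run_id)


runs = FakeRunsClient()
print(runs.get_result("abc"))
print(asyncio.run(runs.aio_get_result("abc")))
```

Keeping both spellings side by side on the same client avoids the old pattern of drilling into a separate nested `aio` client to find the async twin of a method.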
Removals:
- `sync_to_async` has been removed. We recommend reading our asyncio documentation for our recommendations on handling blocking work in otherwise-async tasks.
- The `AdminClient` has been removed and refactored into individual clients. For example, if you absolutely need to create a workflow run manually without using `Workflow.run` or `Standalone.run`, you can use `hatchet.runs.create`. This replaces the old `hatchet.admin.run_workflow`.
Other miscellaneous changes:
- As shown in the Pydantic example above, there is no longer a `spawn_workflow(s)` method on the `Context`. `run` is now the preferred method for spawning workflows, and it will automatically propagate the parent's metadata to the child workflow.
- All times and durations, such as `execution_timeout` and `schedule_timeout`, now allow `datetime.timedelta` objects instead of only allowing strings (e.g. `"10s"` can be `timedelta(seconds=10)`).
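The duration change is easy to check with the standard library: a `timedelta` carries the same information as a `"10s"`-style string, without any string parsing. In this small stdlib sketch, `parse_duration` is a hypothetical helper written for illustration, not an SDK function.

```python
from datetime import timedelta


def parse_duration(s: str) -> timedelta:
    # Hypothetical helper: convert "10s" / "5m" / "2h" strings to timedelta.
    units = {"s": "seconds", "m": "minutes", "h": "hours"}
    value, unit = int(s[:-1]), s[-1]
    return timedelta(**{units[unit]: value})


# Both spellings describe the same timeout:
assert parse_duration("10s") == timedelta(seconds=10)
assert parse_duration("2h") == timedelta(hours=2)
print(timedelta(seconds=90))  # 0:01:30
```

Passing `timedelta` objects directly also lets type checkers catch malformed durations that a string like `"10x"` would only surface at runtime.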
Other New Features
There are a handful of other new features that will make interfacing with the SDK easier, which are listed below.
- Concurrency keys that use the `input` to a workflow are now checked for validity at runtime. If the workflow's `input_validator` does not contain a field that's used in a key, Hatchet will reject the workflow when it's created. For example, if the key is `input.user_id`, the `input_validator` Pydantic model must contain a `user_id` field.
- There is now an `on_success_task` on the `Workflow` object, which works just like an on-failure task, but runs after all upstream tasks in the workflow have succeeded.
- We've exposed feature clients on the Hatchet client to make it easier to interact with and control your environment.
For example, you can write scripts that fetch the IDs of all runs matching certain criteria and then bulk-replay or bulk-cancel those runs, or cancel runs directly by passing filters.
The Hatchet client also has feature clients for workflows (declarations), schedules, crons, metrics (e.g. queue depth), events, and workers.
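The concurrency-key validation described earlier in this section can be approximated with a model's declared fields: given a key like `input.user_id`, reject the workflow if the input model has no `user_id` field. This stdlib sketch uses dataclasses in place of Pydantic, and `validate_concurrency_key` is a hypothetical function written for illustration.

```python
from dataclasses import dataclass, fields


@dataclass
class WorkflowInput:
    user_id: str
    payload: dict


def validate_concurrency_key(key: str, input_type: type) -> None:
    # Hypothetical check: "input.user_id" requires a user_id field on the model.
    prefix, _, field_name = key.partition(".")
    if prefix != "input":
        return  # only input-based keys are checked in this sketch
    declared = {f.name for f in fields(input_type)}
    if field_name not in declared:
        raise ValueError(
            f"concurrency key {key!r} references unknown field {field_name!r}"
        )


validate_concurrency_key("input.user_id", WorkflowInput)  # ok
try:
    validate_concurrency_key("input.tenant_id", WorkflowInput)
except ValueError as e:
    print(e)
```

Failing fast at workflow creation, rather than when the first run arrives, is the point of the runtime check.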