Simple Task Retries
Hatchet provides a simple and effective way to handle failures in your workflow tasks using the task-level retry configuration. This feature allows you to specify the number of times a task should be retried if it fails, helping to improve the reliability and resilience of your workflows.
Task-level retries can be added to both Standalone Tasks and Workflow Tasks.
How it works
When a task in your workflow fails (i.e. throws an error or returns a non-zero exit code), Hatchet can automatically retry the task based on the `retries` configuration defined in the task object. Here’s how it works:
- If a task fails and `retries` is set to a value greater than 0, Hatchet will catch the error and retry the task.
- The task will be retried up to the specified number of times, with each retry executed after a short delay to avoid overwhelming the system.
- If the task succeeds during any of the retries, the workflow will continue to the next task as normal.
- If the task continues to fail after exhausting all the specified retries, the workflow will be marked as failed.
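The retry flow above amounts to a simple loop. As a rough sketch of the mechanism (plain Python, not the Hatchet SDK; `run_with_retries` and `my_task` are illustrative names):

```python
import time


def run_with_retries(task, retries, delay=1.0):
    """Run `task` up to 1 + `retries` times, pausing briefly between attempts."""
    attempt = 0
    while True:
        try:
            return task()  # success: the workflow continues as normal
        except Exception:
            if attempt >= retries:
                raise  # retries exhausted: the workflow is marked as failed
            attempt += 1
            time.sleep(delay)  # short delay to avoid overwhelming the system


# A flaky task that fails twice before succeeding.
calls = {"n": 0}


def my_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"


result = run_with_retries(my_task, retries=3, delay=0.01)
```

With `retries=3`, the task here fails twice and succeeds on its second retry, so the workflow would proceed normally.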
This simple retry mechanism can help to mitigate transient failures, such as network issues or temporary unavailability of external services, without requiring complex error handling logic in your workflow code.
How to use task-level retries
To enable retries for a task in your workflow, add the `retries` property to the task object in your workflow definition.
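The SDK snippet that originally accompanied this section is not reproduced here; as a rough, non-Hatchet sketch of the shape of the configuration (the `Task` class and field names are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    """Minimal stand-in for a workflow task definition."""
    name: str
    fn: Callable[[], object]
    retries: int = 0  # number of times to retry on failure; 0 disables retries


# A task that will be retried up to 3 times if it fails.
fetch = Task(name="fetch-data", fn=lambda: "payload", retries=3)
```

Consult the Hatchet SDK reference for your language for the exact task-definition syntax.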
You can add the `retries` property to any task in your workflow, and Hatchet will handle the retry logic automatically.
It’s important to note that task-level retries are not suitable for all types of failures. For example, if a task fails due to a programming error or an invalid configuration, retrying the task will likely not resolve the issue. In these cases, you should fix the underlying problem in your code or configuration rather than relying on retries.
Additionally, if a task interacts with external services or databases, you should ensure that the operation is idempotent (i.e. can be safely repeated without changing the result) before enabling retries. Otherwise, retrying the task could lead to unintended side effects or inconsistencies in your data.
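For example, a write can be made safely repeatable by keying it on a unique identifier, so a retried attempt becomes a no-op. An illustrative sketch (not Hatchet-specific; the in-memory `ledger` stands in for an external store):

```python
processed = set()
ledger = []


def charge(idempotency_key, amount):
    """Record a charge at most once per key, so a retry cannot double-bill."""
    if idempotency_key in processed:
        return  # already applied on an earlier attempt: do nothing
    ledger.append((idempotency_key, amount))
    processed.add(idempotency_key)


charge("order-42", 100)
charge("order-42", 100)  # a retry of the same operation has no further effect
```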
Accessing the Retry Count in a Task
If you need to access the current retry count within a task, you can use the `retryCount` method available in the task context.
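In a sketch, the context carries the attempt number into each invocation (the `Context` class and accessor name here are illustrative; check your SDK for the exact name and casing):

```python
class Context:
    """Minimal stand-in for a task context that tracks the retry count."""

    def __init__(self):
        self._retry_count = 0

    def retry_count(self):
        return self._retry_count


def run(task, ctx, retries):
    for attempt in range(retries + 1):
        ctx._retry_count = attempt  # 0 on the first attempt, 1 on the first retry, ...
        try:
            return task(ctx)
        except Exception:
            if attempt == retries:
                raise


seen = []


def task(ctx):
    seen.append(ctx.retry_count())
    if ctx.retry_count() < 2:
        raise RuntimeError("transient failure")
    return "done"


result = run(task, Context(), retries=3)
```

This is useful, for example, to log which attempt is running or to change behavior on the final retry.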
Exponential Backoff
Hatchet also supports exponential backoff for retries, which can be useful for handling failures in a more resilient manner. Exponential backoff increases the delay between retries exponentially, giving the failing service more time to recover before the next retry.
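The schedule itself is simple: the delay grows as `base * factor ** attempt`, usually capped at a maximum. A generic sketch of the computation (Hatchet's actual backoff configuration options are not shown here):

```python
def backoff_delays(retries, base=1.0, factor=2.0, max_delay=60.0):
    """Return the delay (in seconds) before each retry, growing exponentially."""
    return [min(base * factor ** attempt, max_delay) for attempt in range(retries)]


delays = backoff_delays(retries=5)  # 1s, 2s, 4s, 8s, 16s
```

In practice, backoff is often combined with random jitter so that many failing tasks don't all retry at the same instant.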
Bypassing Retry Logic
The Hatchet SDKs each expose a `NonRetryable` exception, which allows you to bypass pre-configured retry logic for the task. If your task raises this exception, it will not be retried, letting you circumvent the default retry behavior in cases where you don’t want to or cannot safely retry. Some examples where this might be useful include:
- A task that calls an external API which returns a 4XX response code.
- A task that contains a single non-idempotent operation that can fail but cannot safely be rerun on failure, such as a billing operation.
- A failure that requires manual intervention to resolve.
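The bypass behavior can be sketched with a dedicated exception type that the retry loop re-raises immediately (illustrative names; see your SDK for the exact exception class):

```python
class NonRetryableError(Exception):
    """Raised by a task to signal that it must not be retried."""


def run_with_retries(task, retries):
    for attempt in range(retries + 1):
        try:
            return task()
        except NonRetryableError:
            raise  # bypass the retry logic entirely
        except Exception:
            if attempt == retries:
                raise


attempts = {"n": 0}


def billing_task():
    attempts["n"] += 1
    raise NonRetryableError("payment API returned 402")


try:
    run_with_retries(billing_task, retries=3)
except NonRetryableError:
    pass  # the task ran exactly once despite retries=3
```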
In these cases, even though `retries` is set to a non-zero number (meaning the task would ordinarily retry), Hatchet will not retry.
Conclusion
Hatchet’s task-level retry feature is a simple and effective way to handle transient failures in your workflow tasks, improving the reliability and resilience of your workflows. By specifying the number of retries for each task, you can ensure that your workflows can recover from temporary issues without requiring complex error handling logic.
Remember to use retries judiciously and only for tasks that are idempotent and can safely be repeated. For more advanced retry strategies, such as circuit breaking, stay tuned for future updates to Hatchet’s retry capabilities.