
Tasks

Everything you run in Hatchet is a task - a named function that you can trigger, retry, schedule, and observe. Tasks can be configured to handle the problems that come up in real systems: transient failures, resource contention, overloaded downstream services, and more.

Defining a task

At minimum, a task needs a name and a function. The returned object is a runnable - you’ll use it directly to trigger the task.

When you define a task, you are telling Hatchet: “here is a piece of work that a worker can pick up.” The task carries a name, the function to run, and optional configuration. Tasks are registered on workers, which are the long-running processes that actually execute them.
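The shape of this contract can be sketched in plain Python. This is a conceptual illustration only, not the Hatchet SDK; `Task` and `define_task` here are hypothetical names standing in for "a named function that returns a runnable":

```python
from dataclasses import dataclass
from typing import Any, Callable

# Conceptual sketch only, not the Hatchet SDK: a task pairs a name with a
# function, and the object you get back is the thing you trigger.
@dataclass
class Task:
    name: str
    fn: Callable[[dict], Any]

    def run(self, input: dict) -> Any:
        # In Hatchet, triggering would enqueue the task for a worker to
        # pick up; here we call the function directly to show the shape.
        return self.fn(input)

def define_task(name: str, fn: Callable[[dict], Any]) -> Task:
    return Task(name, fn)

greet = define_task("greet", lambda input: {"message": f"Hello, {input['name']}!"})
print(greet.run({"name": "Hatchet"}))  # {'message': 'Hello, Hatchet!'}
```

In the real system, the function body runs on a worker rather than in the caller's process, but the caller still interacts with the same runnable object.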

Task lifecycle

When you trigger a task, it moves through three phases: queued, then running, then a terminal state (such as completed, failed, or cancelled).

A task can also be CANCELLED at any point - either explicitly or by a timeout expiring.
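The lifecycle above can be modeled as a small state machine. This is a conceptual sketch, assuming the generic terminal states named above rather than Hatchet's exact internal state names:

```python
from enum import Enum

# Conceptual sketch of the lifecycle: queued -> running -> terminal,
# with cancellation reachable from any non-terminal state.
class State(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"

# Legal transitions; terminal states have no outgoing edges.
TRANSITIONS = {
    State.QUEUED: {State.RUNNING, State.CANCELLED},
    State.RUNNING: {State.COMPLETED, State.FAILED, State.CANCELLED},
}

def advance(current: State, nxt: State) -> State:
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt
```

Note that `CANCELLED` appears in both transition sets: whether the task is still waiting in the queue or already executing, cancellation moves it straight to a terminal state.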

Triggering a task

The runnable returned by a task definition supports several trigger methods:

  • Run: Trigger the task and wait for the result.
  • Run no wait: Enqueue the task and return immediately.
  • Schedule: Schedule the task to run at a specific time.
  • Cron: Run the task on a recurring schedule.
  • Bulk run: Trigger many instances of the task at once.
  • On event: Trigger the task automatically when an event is pushed.
  • Webhook: Trigger the task from an external HTTP request.
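The difference between the two most common trigger modes, waiting versus fire-and-forget, can be sketched in plain Python. The method names below mirror the concepts above, not the exact SDK methods:

```python
from concurrent.futures import Future, ThreadPoolExecutor

# Conceptual sketch, not the Hatchet SDK: "run" blocks until the task
# finishes; "run no wait" enqueues the work and returns a handle.
class Runnable:
    def __init__(self, fn):
        self.fn = fn
        self._pool = ThreadPoolExecutor(max_workers=1)

    def run(self, input: dict):
        # Block until the task finishes and return its output.
        return self.fn(input)

    def run_no_wait(self, input: dict) -> Future:
        # Enqueue and return immediately; the Future resolves later.
        return self._pool.submit(self.fn, input)

double = Runnable(lambda inp: inp["x"] * 2)
print(double.run({"x": 2}))       # 4
handle = double.run_no_wait({"x": 3})
print(handle.result())            # 6
```

Schedule, cron, bulk run, event, and webhook triggers all funnel into the same underlying enqueue step; they differ only in what causes the enqueue to happen.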

Configuring a task

Tasks can be configured to handle common problems in distributed systems. For example, you might want to automatically retry a task when an external API returns a transient error, or limit how many instances of a task run at the same time to avoid overwhelming a downstream service.

  • Retries: Retry the task on failure, with optional backoff.
  • Timeouts: Limit how long a task may wait to be scheduled or to run.
  • Concurrency: Limit how many runs of this task execute at once.
  • Rate limits: Throttle task execution over a time window.
  • Priority: Influence scheduling order relative to other queued tasks.
  • Worker affinity: Prefer or require specific workers for this task.
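As an illustration of the first row, here is what retry-with-backoff looks like in principle. This is a generic sketch of the technique, not Hatchet's implementation; the parameter names are hypothetical:

```python
import random
import time

# Conceptual sketch of retries with exponential backoff: re-run a failing
# call up to max_retries times, doubling the delay between attempts.
def run_with_retries(fn, max_retries=3, base_delay=0.01):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the failure
            # Exponential backoff with jitter, as a scheduler might apply.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient error")
    return "ok"

print(run_with_retries(flaky))  # "ok" on the third attempt
```

With a managed queue, the retry loop and the backoff delays are handled by the platform rather than inside your function, so the task body only needs to raise on failure.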

Input and output

Every task receives an input - a JSON-serializable object passed when the task is triggered. The value you return from the task function becomes the task’s output, which callers receive when they await the result.

When a task is part of a workflow, its output is also available to downstream tasks through the context object, so data flows naturally from one step to the next. See Accessing Parent Task Outputs for details.
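The data flow can be sketched as follows. This is a conceptual illustration, assuming hypothetical step names and a plain dict in place of the real context object:

```python
import json

# Conceptual sketch: inputs and outputs are JSON-serializable, and in a
# workflow a step's output is exposed to downstream steps via the context.
def step_fetch(input: dict) -> dict:
    # Output of this step; "score" is made-up illustrative data.
    return {"user": input["user_id"], "score": 42}

def step_report(input: dict, parent_outputs: dict) -> dict:
    fetched = parent_outputs["step_fetch"]  # read the upstream output
    return {"summary": f"user {fetched['user']} scored {fetched['score']}"}

# The trigger input must survive a JSON round-trip.
trigger_input = json.loads(json.dumps({"user_id": 7}))

outputs = {"step_fetch": step_fetch(trigger_input)}
print(step_report(trigger_input, outputs))  # {'summary': 'user 7 scored 42'}
```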

The context object

Every task function receives a context alongside its input. The context is your handle to the Hatchet runtime while the task is executing. Through it you can:

  • Read runtime information such as the task’s run ID, workflow ID, and more.
  • Check for cancellation and respond to it gracefully (Cancellation).
  • Refresh timeouts if a long-running operation needs more time (Timeouts).
  • Release a worker slot early to free capacity for other tasks (Manual Slot Release).
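The cancellation item above is worth seeing concretely. Here is a minimal sketch of a context carrying run metadata and a cancellation flag that the task polls; the class and method names are illustrative, not the SDK's:

```python
import threading

# Conceptual sketch of a task context: run metadata plus a cancellation
# flag that the task function can poll to exit gracefully.
class Context:
    def __init__(self, run_id: str):
        self.run_id = run_id
        self._cancelled = threading.Event()

    def cancel(self) -> None:
        self._cancelled.set()

    def cancelled(self) -> bool:
        return self._cancelled.is_set()

def long_task(input: dict, ctx: Context) -> dict:
    processed = []
    for item in input["items"]:
        if ctx.cancelled():
            # Respond to cancellation gracefully: stop early and report
            # partial progress instead of being killed mid-write.
            return {"status": "cancelled", "processed": processed}
        processed.append(item * 2)
    return {"status": "done", "processed": processed}

ctx = Context(run_id="run-123")
print(long_task({"items": [1, 2, 3]}, ctx))
```

Checking the flag at loop boundaries keeps the task responsive to cancellation without interrupting it at an arbitrary instruction.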

How tasks execute on workers

Tasks don’t run on their own - they are assigned to and executed by workers. A worker is a long-running process in your infrastructure that registers one or more tasks with Hatchet. When a task is triggered, Hatchet places it in a queue and assigns it to an available worker that has registered that task.

Each worker has a fixed number of slots that determine how many tasks it can run concurrently. When all slots are occupied, new tasks stay queued until a slot opens up. You can control this behavior further with concurrency limits, rate limits, and priority.
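Slot-bounded execution is essentially a counting semaphore. The sketch below simulates it in-process (it is not how Hatchet's workers are implemented, just the idea): with two slots, no more than two tasks ever run at once, and the rest wait.

```python
import threading
import time

# Conceptual sketch of worker slots: a semaphore with N permits bounds
# how many tasks run concurrently; extra tasks block until a slot frees.
class Worker:
    def __init__(self, slots: int):
        self._slots = threading.BoundedSemaphore(slots)
        self._lock = threading.Lock()
        self._active = 0
        self.peak = 0  # highest concurrency observed

    def execute(self, fn):
        with self._slots:  # blocks while all slots are occupied
            with self._lock:
                self._active += 1
                self.peak = max(self.peak, self._active)
            try:
                return fn()
            finally:
                with self._lock:
                    self._active -= 1

worker = Worker(slots=2)
threads = [
    threading.Thread(target=worker.execute, args=(lambda: time.sleep(0.05),))
    for _ in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(worker.peak)  # never exceeds 2
```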

If you need tasks to run on specific workers - for example, because a worker has a GPU or a particular model loaded in memory - you can use worker affinity or sticky assignment to influence where tasks are placed.

Tasks vs. workflows

A task on its own is a standalone runnable - you can trigger it, wait for its result, schedule it, or fire it off without waiting. When you need to coordinate multiple tasks together (run B after A, fan out across N inputs, etc.), you compose them into a workflow. Both share the same trigger interface - the difference is scope. A task does one thing; a workflow orchestrates many things.

Next, read about how tasks compose into workflows.