
Concurrency Limits and Fairness

By default, there are no concurrency limits for Hatchet workflows: workflow runs are executed as soon as they are triggered (by an event, cron, or schedule). You can enforce a concurrency limit by adding a concurrency configuration to your workflow declaration. This configuration includes a key, a function that returns a concurrency group key, which is a string used to group concurrent executions. Note that this function should not also be registered as a hatchet.step. For example, the following workflow allows at most 5 concurrent runs of the workflow, since maxRuns is set to 5 and the key is statically set to concurrency-key:

import { Workflow } from "@hatchet-dev/typescript-sdk";

// Helper used below to simulate work in step1
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

const workflow: Workflow = {
  id: "concurrency-example",
  description: "test",
  on: {
    event: "concurrency:create",
  },
  concurrency: {
    name: "basic-concurrency",
    maxRuns: 5,
    key: (ctx) => "concurrency-key",
  },
  steps: [
    {
      name: "step1",
      run: async (ctx) => {
        const { data } = ctx.workflowInput();
        const { signal } = ctx.controller;
 
        if (signal.aborted) throw new Error("step1 was aborted");
 
        console.log("starting step1 and waiting 5 seconds...", data);
        await sleep(5000);
 
        if (signal.aborted) throw new Error("step1 was aborted");
 
        // NOTE: the AbortController signal can be passed to many http libraries to cancel active requests
        // fetch(url, { signal })
        // axios.get(url, { signal })
 
        console.log("executed step1!");
        return { step1: `step1 results for ${data}!` };
      },
    },
    {
      name: "step2",
      parents: ["step1"],
      run: (ctx) => {
        console.log(
          "executed step2 after step1 returned ",
          ctx.stepOutput("step1"),
        );
        return { step2: "step2 results!" };
      },
    },
  ],
};

The limitStrategy field of the concurrency configuration can be set to either CANCEL_IN_PROGRESS (the default, used implicitly in the example above) or GROUP_ROUND_ROBIN. See the documentation for the GROUP_ROUND_ROBIN strategy below.
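
If you want to make the default explicit, the strategy can be spelled out in the configuration. A minimal sketch, assuming ConcurrencyLimitStrategy and Workflow are importable from @hatchet-dev/typescript-sdk, as the round-robin example below also assumes:

import { ConcurrencyLimitStrategy, Workflow } from "@hatchet-dev/typescript-sdk";

const workflow: Workflow = {
  id: "concurrency-example-cancel",
  description: "test",
  on: {
    event: "concurrency:create",
  },
  concurrency: {
    name: "basic-concurrency",
    maxRuns: 5,
    // Explicitly select the default strategy: when the limit is exceeded,
    // in-progress runs in the group are cancelled via the abort signal.
    limitStrategy: ConcurrencyLimitStrategy.CANCEL_IN_PROGRESS,
    key: (ctx) => "concurrency-key",
  },
  // Rest of the workflow configuration
};

This is why the first example checks ctx.controller's abort signal: a run that is cancelled in favor of a newer one sees signal.aborted set to true.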

Use-Case: Enforcing Per-User Concurrency Limits

You can use a custom concurrency key function to enforce per-user concurrency limits. For example, the following workflow will only allow 1 concurrent execution per user:

const workflow: Workflow = {
  id: "concurrency-example",
  description: "test",
  on: {
    event: "concurrency:create",
  },
  concurrency: {
    name: "basic-concurrency",
    maxRuns: 1,
    key: (ctx) => ctx.workflowInput().userId,
  },
  // Rest of the workflow configuration
};

This same approach can be used for:

  • Setting concurrency limits for a specific user session by session_id (e.g. multiple chat messages sent in quick succession)
  • Limiting data or document ingestion by keying on an input hash or a per-file key (see the sketch after this list)
  • Rudimentary fairness rules, limiting each tenant's group to a certain number of concurrent executions
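
As a sketch of the input-hash case mentioned above, the key function can hash the workflow input so that identical payloads land in the same concurrency group. The workflow id, event name, and use of node's built-in crypto module here are illustrative assumptions, not part of the SDK:

import { createHash } from "crypto";
import { Workflow } from "@hatchet-dev/typescript-sdk";

const ingestWorkflow: Workflow = {
  id: "ingest-example",
  description: "test",
  on: {
    event: "document:ingest",
  },
  concurrency: {
    name: "ingest-by-input-hash",
    maxRuns: 1,
    // Identical payloads hash to the same group key, so duplicate
    // ingestion requests run one at a time.
    key: (ctx) =>
      createHash("sha256")
        .update(JSON.stringify(ctx.workflowInput()))
        .digest("hex"),
  },
  // Rest of the workflow configuration
};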

Use-Case: Group Round Robin

You can distribute workflows fairly between tenants using the GROUP_ROUND_ROBIN option for limitStrategy. This ensures that each distinct group gets a fair share of the concurrency limit. For example, let's say 15 runs are queued in quick succession, 5 for each of the keys A, B, and C:

A, A, A, A, A, B, B, B, B, B, C, C, C, C, C

If there is a maximum of 2 concurrent executions, the execution order will be:

A, B, C, A, B, C, A, B, C, A, B, C, A, B, C

This can be set in the concurrency configuration as follows:

import { ConcurrencyLimitStrategy, Workflow } from '@hatchet-dev/typescript-sdk';

const workflow: Workflow = {
  id: 'concurrency-example-rr',
  description: 'test',
  on: {
    event: 'concurrency:create',
  },
  concurrency: {
    name: 'multi-tenant-fairness',
    key: (ctx) => ctx.workflowInput().group,
    maxRuns: 2,
    limitStrategy: ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
  },
  steps: [...],
};
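
To run a workflow with a concurrency configuration, register it on a worker as usual; the concurrency settings are applied when the workflow is registered. A minimal sketch, assuming the standard Hatchet.init() and worker registration pattern from the SDK's worker examples (the worker name is an illustrative choice):

import Hatchet from '@hatchet-dev/typescript-sdk';

const hatchet = Hatchet.init();

async function main() {
  // Register the round-robin workflow defined above and start polling for runs.
  const worker = await hatchet.worker('concurrency-example-worker');
  await worker.registerWorkflow(workflow);
  worker.start();
}

main();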