Managed Compute in TypeScript
This feature is currently in beta and may be subject to change.
Overview
Hatchet Managed Compute lets you define and manage compute resources directly in your TypeScript code. This guide shows you how to configure compute instances, create workflows, and manage workers using the Hatchet TypeScript SDK.
This guide assumes you are already familiar with the basics of Hatchet and have a local workflow running with Docker. If you are not yet at that point, please see the Getting Started Guide and Docker Guide first.
Basic Configuration
Compute Configuration
Hatchet provides TypeScript interfaces for defining compute requirements. You can create multiple compute configurations and assign them to individual steps, allowing you to optimize resources for different parts of your workflow.
import {
  PerformanceCPUCompute,
  SharedCPUCompute,
} from "@hatchet-dev/typescript-sdk/clients/worker/compute/compute-config";
import { ManagedWorkerRegion } from "@hatchet-dev/typescript-sdk/clients/rest/generated/cloud/data-contracts";
// Define a basic compute configuration
const basicCompute: SharedCPUCompute = {
cpuKind: "shared",
memoryMb: 1024,
numReplicas: 1,
cpus: 1,
regions: [ManagedWorkerRegion.Ewr],
};
// Define a more powerful compute configuration
const performanceCompute: PerformanceCPUCompute = {
cpuKind: "performance",
memoryMb: 2048,
numReplicas: 2,
cpus: 2,
regions: [ManagedWorkerRegion.Ewr],
};
For a full list of configuration options, see the Compute API documentation.
GPU Support
GPU compute has limited region availability and additional constraints. See the GPU docs for more information.
Hatchet Managed Compute supports GPU instances. You can define GPU compute configurations using the GPUCompute interface:
import { GPUCompute } from "@hatchet-dev/typescript-sdk/clients/worker/compute/compute-config";
import { ManagedWorkerRegion } from "@hatchet-dev/typescript-sdk/clients/rest/generated/cloud/data-contracts";
const gpuCompute: GPUCompute = {
cpuKind: "shared",
gpuKind: "l40s",
memoryMb: 1024,
numReplicas: 1,
cpus: 2,
gpus: 1,
regions: [ManagedWorkerRegion.Ewr],
};
For a full list of GPU configuration options, see the Compute API documentation.
Defining Workflows with Compute Requirements
The compute configurations defined above can be attached to individual steps in your workflow:
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "managed-workflow",
description: "Workflow with managed compute",
on: {
event: "user:create",
},
steps: [
{
name: "step1",
compute: basicCompute,
run: async (ctx) => {
console.log("Executing step1 with basic compute");
return { result: "Step 1 complete" };
},
},
{
name: "step2",
parents: ["step1"],
compute: gpuCompute,
run: async (ctx) => {
const step1Result = ctx.stepOutput("step1");
console.log("Executing step2 with GPU compute after", step1Result);
return { result: "Step 2 complete" };
},
},
],
};
Worker Management
Setting Up Workers
Configure and start workers to execute your workflows:
async function main() {
// Initialize worker
const worker = await hatchet.worker("managed-worker");
// Register workflow
await worker.registerWorkflow(workflow);
// Start the worker
worker.start();
}
main();
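Once the worker is running, you can trigger the workflow by pushing the event it listens for. The snippet below is a minimal sketch based on the standard Hatchet event-push examples: the hatchet.event.push accessor may differ between SDK versions, and the payload shown is an arbitrary example.
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
// Push the "user:create" event that the workflow above is triggered on.
// The payload shape is arbitrary here; steps receive it as workflow input.
hatchet.event.push("user:create", {
  userId: "example-user",
});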
Complete Example
Here's a complete example of a workflow using different compute configurations:
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
import { ManagedWorkerRegion } from "@hatchet-dev/typescript-sdk/clients/rest/generated/cloud/data-contracts";
import {
GPUCompute,
SharedCPUCompute,
} from "@hatchet-dev/typescript-sdk/clients/worker/compute/compute-config";
const hatchet = Hatchet.init();
// Define compute configurations
const cpuCompute: SharedCPUCompute = {
cpuKind: "shared",
memoryMb: 1024,
numReplicas: 1,
cpus: 1,
regions: [ManagedWorkerRegion.Ewr],
};
const gpuCompute: GPUCompute = {
cpuKind: "shared",
gpuKind: "l40s",
memoryMb: 1024,
numReplicas: 1,
cpus: 2,
gpus: 1,
regions: [ManagedWorkerRegion.Ewr],
};
// Define workflow
const workflow: Workflow = {
id: "simple-workflow",
description: "Mixed compute workflow example",
on: {
event: "user:create",
},
steps: [
{
name: "step1",
compute: cpuCompute,
run: async (ctx) => {
console.log("executed step1!");
return { step1: "step1 results!" };
},
},
{
name: "step2",
parents: ["step1"],
compute: gpuCompute,
run: (ctx) => {
console.log(
"executed step2 after step1 returned ",
ctx.stepOutput("step1"),
);
return { step2: "step2 results!" };
},
},
],
};
// Start worker
async function main() {
const worker = await hatchet.worker("managed-worker");
await worker.registerWorkflow(workflow);
worker.start();
}
main();
A complete example of a workflow that uses managed compute can be found here.