# Hatchet Managed Compute
> ⚠️ This feature is currently in beta and may be subject to change.
## Overview
Hatchet Managed Compute provides the simplicity of serverless while delivering the performance and control of traditional infrastructure, making it ideal for long-lived or data-intensive AI applications and background job processing. It scales dynamically while eliminating common serverless limitations such as cold starts and execution timeouts.
## High-Availability Computing
- Sub-100ms Instance Provisioning: Pre-warms instances ahead of demand, so new tasks start without a cold boot
- Distributed Architecture: Built on Hatchet Queue for reliable workload distribution
- Multi-Region Support: Deploy across regions for fault tolerance and data locality
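The pre-warming idea behind sub-100ms provisioning can be sketched as a pool that keeps idle instances booted ahead of demand, so acquiring one is a queue pop rather than a slow boot. This is an illustrative model only; the class and method names are hypothetical, not Hatchet's implementation.

```python
import collections


class WarmPool:
    """Illustrative warm-instance pool (hypothetical, not Hatchet's internals):
    instances are provisioned ahead of demand so acquiring one is fast."""

    def __init__(self, target_warm: int):
        self.target_warm = target_warm
        self._warm = collections.deque()
        self._booted = 0
        self.replenish()

    def _boot_instance(self) -> str:
        # Stand-in for a slow provisioning step (VM boot, image pull, ...).
        self._booted += 1
        return f"instance-{self._booted}"

    def replenish(self) -> None:
        # Pre-warm until the pool holds `target_warm` idle instances.
        while len(self._warm) < self.target_warm:
            self._warm.append(self._boot_instance())

    def acquire(self) -> str:
        # Fast path: hand out a pre-warmed instance, then top the pool up
        # in the background so the next acquire is also fast.
        instance = self._warm.popleft() if self._warm else self._boot_instance()
        self.replenish()
        return instance


pool = WarmPool(target_warm=2)
first = pool.acquire()  # served from the pre-warmed pool, no boot on the hot path
```

The key property is that the boot cost is paid before a task arrives, which is what removes the cold start from the request path.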
## Available Compute Classes
- Shared CPUs
- Performance CPUs
- GPU instances
- Customizable worker pools
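To make the compute classes concrete, the sketch below models what selecting one might look like as configuration. The field names (`cpu_kind`, `cpus`, `memory_mb`, `gpus`, `regions`) are assumptions for illustration, not Hatchet's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ComputeConfig:
    """Illustrative model of a compute-class selection; field names are
    hypothetical and not Hatchet's actual configuration schema."""
    cpu_kind: str = "shared"  # "shared", "performance", or "gpu"
    cpus: int = 1
    memory_mb: int = 1024
    gpus: int = 0
    regions: list = field(default_factory=lambda: ["ewr"])

    def validate(self) -> None:
        if self.cpu_kind not in {"shared", "performance", "gpu"}:
            raise ValueError(f"unknown cpu kind: {self.cpu_kind}")
        if self.cpu_kind == "gpu" and self.gpus < 1:
            raise ValueError("gpu compute class needs at least one GPU")


# A performance-CPU pool for background jobs and a GPU pool for inference.
cpu_pool = ComputeConfig(cpu_kind="performance", cpus=4, memory_mb=8192)
gpu_pool = ComputeConfig(cpu_kind="gpu", cpus=8, memory_mb=32768, gpus=1)
cpu_pool.validate()
gpu_pool.validate()
```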
## Smart Workload Management
- State-Aware: Routes tasks to instances with preloaded models/resources using worker labels
- Burstable Capacity: Scales dynamically based on queue depth
- Sticky Assignment: Routes follow-up tasks to the same instance when possible, preserving in-memory state
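State-aware routing and sticky assignment can be sketched as a small scheduling function: prefer the previously used worker when it still matches the task's label requirements, otherwise pick the least-loaded matching worker. This is a hypothetical sketch of the idea, not Hatchet's scheduler.

```python
def pick_worker(workers, task_labels, sticky_id=None):
    """Illustrative scheduler (hypothetical, not Hatchet's implementation):
    honor sticky assignment first, then fall back to label matching."""

    def matches(w):
        # State-aware: a worker is eligible only if its labels (e.g. which
        # model it has preloaded) satisfy every task requirement.
        return all(w["labels"].get(k) == v for k, v in task_labels.items())

    if sticky_id is not None:
        for w in workers:
            if w["id"] == sticky_id and matches(w):
                return w["id"]  # sticky: reuse warm in-memory state

    candidates = [w for w in workers if matches(w)]
    if not candidates:
        return None
    # Otherwise balance load across the eligible workers.
    return min(candidates, key=lambda w: w["load"])["id"]


workers = [
    {"id": "w1", "labels": {"model": "llama-70b"}, "load": 3},
    {"id": "w2", "labels": {"model": "llama-70b"}, "load": 1},
    {"id": "w3", "labels": {"model": "whisper"}, "load": 0},
]

# State-aware: only workers with the model preloaded are eligible.
pick_worker(workers, {"model": "llama-70b"})                   # -> "w2"
# Sticky: the previous worker wins even though it is busier.
pick_worker(workers, {"model": "llama-70b"}, sticky_id="w1")   # -> "w1"
```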
## Infrastructure as Code
Hatchet Managed Compute is defined directly in your workflow code, so compute resources are declared, versioned, and reviewed alongside the workflows that use them.
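The shape of "compute defined in workflow code" might look like the stand-in decorator below, which attaches a compute spec to an ordinary function. All names here (`with_compute`, the spec fields) are hypothetical; the real Hatchet SDK API differs.

```python
# Illustrative only: a stand-in decorator showing the shape of declaring
# compute alongside workflow code. Not the real Hatchet SDK API.
def with_compute(**spec):
    def wrap(fn):
        fn.compute = spec  # attach the desired compute spec to the task
        return fn
    return wrap


@with_compute(cpu_kind="performance", cpus=4, memory_mb=8192, regions=["ewr"])
def transcribe(audio_url: str) -> str:
    # The function body is ordinary application code; the platform reads
    # the attached spec to provision matching workers.
    return f"transcript for {audio_url}"


transcribe.compute["cpus"]  # -> 4
```

Because the spec lives next to the code it serves, changing a worker's resources is a normal code change: it goes through review and lands in version history like any other commit.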
## Deployment
- GitOps Integration: Automatic builds and deployments on commit
- Zero-Ops: Managed infrastructure eliminates operational overhead
- Version Control: Infrastructure changes tracked in code
## Advantages Over Serverless
- No cold starts or execution timeouts
- Predictable performance
- Cost-effective for sustained workloads
- Fine-grained control over compute resources
- Better suited for AI and data processing tasks
## Getting Started
Reach out to support@hatchet.dev to get access to managed compute.