Guarantees & Tradeoffs
Hatchet is designed as a modern task orchestration platform that bridges the gap between simple job queues and complex workflow engines. Understanding where it excels—and where it doesn’t—will help you determine if it’s the right fit for your needs.
Good Fit
✅ Real-time Requests - sub-25ms task dispatch for hot workers with thousands of concurrent tasks
✅ Workflow Orchestration with dependencies and error handling
✅ Reliable Task Processing where durability matters
✅ Moderate Throughput (hundreds to low 10,000s of tasks/second)
✅ Multi-Language Workers or polyglot teams
✅ Operational Simplicity if your team is already using PostgreSQL
✅ Cloud or Air-Gapped Environments for flexible deployment options (Hatchet Cloud and self-hosting)
Not a Good Fit
❌ Extremely High Throughput (consistently 10,000+ tasks/second)
❌ Sub-Millisecond Latency requirements
❌ Memory-Only Queuing where persistence or durability isn’t needed
❌ Serverless Environments such as AWS Lambda, Google Cloud Functions, or Azure Functions
Core Reliability Guarantees
Hatchet is designed with the following core reliability guarantees:
Every task will execute at least once. Hatchet ensures that no task gets lost, even during system failures, network outages, or deployments. Failed tasks automatically retry according to your configuration, and all tasks persist through restarts and network issues.
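The at-least-once guarantee means a task may run more than once after a failure, so handlers should be idempotent. The pattern can be sketched as a retry loop with exponential backoff; this is a conceptual illustration only (Hatchet performs retries server-side and persists state in PostgreSQL), and the function names here are made up for the example.

```python
import time

def run_with_retries(task, max_retries=3, base_delay=0.01):
    """Run a task, retrying on failure with exponential backoff.

    Illustrative sketch of at-least-once semantics, not Hatchet's API.
    """
    attempt = 0
    while True:
        try:
            return task()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # retries exhausted; the task is marked failed
            time.sleep(base_delay * (2 ** (attempt - 1)))

# A flaky task that fails twice before succeeding: under at-least-once
# semantics it executes three times in total.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retries(flaky)
```

Because the handler can observe repeated invocations, side effects (writes, emails, charges) should be guarded with idempotency keys or upserts.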
Consistent state management. All workflow state changes happen within PostgreSQL transactions, ensuring that your workflow dependencies resolve consistently and no tasks are lost during failures or deployments.
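The transactional guarantee can be illustrated with a minimal sketch, using SQLite as a stand-in for PostgreSQL: marking a task complete and unblocking its dependent happen in one transaction, so a crash between the two statements can never leave the workflow half-updated. The table and task names are invented for the example.

```python
import sqlite3

# SQLite used purely for illustration; Hatchet uses PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO tasks VALUES (?, ?)",
    [("extract", "running"), ("load", "blocked")],
)

with conn:  # both statements commit or roll back as one unit
    conn.execute("UPDATE tasks SET status = 'succeeded' WHERE id = 'extract'")
    conn.execute("UPDATE tasks SET status = 'queued' WHERE id = 'load'")

statuses = dict(conn.execute("SELECT id, status FROM tasks"))
```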
Predictable execution order. The default task assignment strategy is First In First Out (FIFO) which can be modified with concurrency policies, rate limits, and priorities.
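FIFO dispatch with a priority override can be sketched with a heap keyed on (priority, insertion order): within the same priority, tasks dequeue in the order they arrived, while a higher priority jumps the line. The class, task names, and priority values below are illustrative, not Hatchet configuration.

```python
import heapq
import itertools

class PriorityFifoQueue:
    """Sketch of FIFO dispatch with an optional priority override."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # insertion order breaks priority ties

    def push(self, task, priority=1):
        # Negate priority so a higher number is dispatched sooner
        # (heapq pops the smallest tuple first).
        heapq.heappush(self._heap, (-priority, next(self._seq), task))

    def pop(self):
        _, _, task = heapq.heappop(self._heap)
        return task

q = PriorityFifoQueue()
q.push("a")
q.push("b")                   # same priority as "a": FIFO order holds
q.push("urgent", priority=3)  # higher priority jumps the line
order = [q.pop(), q.pop(), q.pop()]
```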
Operational resilience. The engine and API servers are stateless, allowing them to restart without losing state and enabling horizontal scaling by simply adding more instances. Workers automatically reconnect after network issues and can be deployed anywhere—containers, VMs, or local development environments.
Performance Expectations
Understanding Hatchet’s performance characteristics helps you plan your implementation and set realistic expectations.
Typical time-to-start latency for task dispatch is sub-50ms with PostgreSQL storage, and hot workers in tuned setups can reach ~25ms at P95. Network latency between your workers and the Hatchet engine adds directly to dispatch times, so consider deployment topology when latency matters.
Throughput capacity varies significantly based on your setup. A single engine instance with PostgreSQL-only storage typically handles hundreds of tasks per second. When you need higher throughput, adding RabbitMQ as a message queue can substantially increase capacity, though your database will eventually become the bottleneck at very high scales. Through tuning and sharding, we can support throughputs of tens of thousands of tasks per second.
Concurrent processing scales well — Hatchet supports thousands of concurrent workers, with worker-level concurrency controlled through slot configuration. The depth of your queues is limited by your database storage capacity rather than memory constraints.
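Worker-level slot configuration is essentially a bounded-concurrency limit: a worker advertising N slots runs at most N tasks at once. A minimal sketch using a semaphore, where the slot count and task body are assumptions for illustration rather than Hatchet configuration:

```python
import threading
import time

SLOTS = 3  # illustrative slot count; in Hatchet this is worker configuration
slot_semaphore = threading.BoundedSemaphore(SLOTS)
lock = threading.Lock()
active = 0
peak = 0

def run_task():
    global active, peak
    with slot_semaphore:  # blocks while all slots are taken
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # stand-in for real task work
        with lock:
            active -= 1

threads = [threading.Thread(target=run_task) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds SLOTS, regardless of how many tasks are queued
```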
Performance optimization comes through several strategies: RabbitMQ for high-throughput workloads, read replicas for analytics queries, connection pooling with tools like PgBouncer, and shorter retention periods for execution history. Conversely, performance can be limited by database connection limits, large task payloads (over 1MB), complex dependency graphs, and cross-region network latency.
Not seeing expected performance?
If you’re not seeing the performance you expect, please reach out to us or join our community to explore tuning options.
Ready to Get Started?
Now that you understand Hatchet’s capabilities and limitations, explore the technical details:
Quick Start - Set up your first Hatchet worker.
Self-Hosting - Learn how to deploy Hatchet on your own infrastructure with appropriate sizing for your needs.