
Parallelization

Parallelization spawns multiple independent tasks at the same time and aggregates results before continuing. Inside an agent loop, this typically means running several tool calls concurrently when they don’t depend on each other. At a system level, it can mean running the same input through multiple evaluators and picking the best (or majority) result.
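As a minimal sketch of the agent-loop case, using plain asyncio rather than any specific SDK (the tool calls here are hypothetical stand-ins for real API clients):

```python
import asyncio

# Hypothetical tool calls -- stand-ins for real weather/news/stock clients.
async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"weather({city})"

async def fetch_news(topic: str) -> str:
    await asyncio.sleep(0.01)
    return f"news({topic})"

async def fetch_stock(ticker: str) -> str:
    await asyncio.sleep(0.01)
    return f"stock({ticker})"

async def gather_context() -> list[str]:
    # The three calls are independent, so run them concurrently;
    # total latency is roughly the max of the three, not the sum.
    return list(await asyncio.gather(
        fetch_weather("Berlin"),
        fetch_news("markets"),
        fetch_stock("ACME"),
    ))

results = asyncio.run(gather_context())
```

The aggregation step (here, just collecting the list) runs only after every branch has completed.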

Hatchet distributes child runs across all running workers where the task is registered. The parent’s slot is freed while children execute, so you don’t hold resources during parallel work.

There are two common variants:

  • Sectioning: different tasks handle different concerns in parallel (e.g., content generation + safety check).
  • Voting: the same task runs N times and results are aggregated by majority vote or best score.

When to use

| Scenario | Fit |
| --- | --- |
| Agent calls 3 independent APIs (weather, news, stock) | Good: no dependencies between calls; latency drops to the max of the three |
| Content generation + safety guardrail in parallel | Good: sectioning; both run at once, block if unsafe |
| Multiple evaluators vote on content quality | Good: voting; aggregate for more reliable decisions |
| Processing a batch of items (100+ documents) | Good: see Batch Processing for large-scale fan-out |
| Steps depend on each other (output of A feeds B) | Skip: run sequentially |
| Provider rate limits are tight | Careful: parallel calls may hit limits; use Rate Limits |

How it maps to Hatchet

The parent task spawns child tasks, each of which runs on any available worker. Because the parent's slot is released while the children execute, the fan-out does not hold worker resources. When all children complete, the parent resumes and aggregates their results.

Step-by-step walkthrough

Define the parallel tasks

Create separate tasks for each concern. These run independently and can be composed in different patterns.
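For illustration, a sketch of two per-concern tasks in plain Python (the function names and bodies are hypothetical; in a real system each would wrap an LLM or moderation-API call and be registered with your task runner):

```python
import asyncio

# Hypothetical per-concern tasks -- stand-ins for model calls.
async def generate_content(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate model latency
    return f"draft for: {prompt}"

async def check_safety(prompt: str) -> bool:
    await asyncio.sleep(0.01)
    return "forbidden" not in prompt.lower()

# Each task also runs standalone, which keeps the patterns composable.
draft = asyncio.run(generate_content("product launch"))
ok = asyncio.run(check_safety("product launch"))
```

Because each task is independent, the same definitions can be composed as sectioning or voting without modification.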

Sectioning (parallel concerns)

Sectioning runs different concerns in parallel: for example, generating content and checking its safety at the same time. If the safety check fails, the content is blocked even though generation succeeded.

Voting (parallel consensus)

Voting runs the same evaluation N times and aggregates by majority or average score. This produces more reliable decisions than a single evaluation.
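A sketch of majority voting with plain asyncio; the evaluator here is a deterministic stand-in (real evaluators would be independent model calls, possibly with different prompts or temperatures):

```python
import asyncio
from collections import Counter

async def evaluate(content: str, evaluator_id: int) -> str:
    await asyncio.sleep(0.01)
    # Stand-in verdict logic; evaluator 0 is deliberately "noisy"
    # to show how majority voting absorbs a single bad run.
    if evaluator_id == 0:
        return "reject"
    return "approve" if "good" in content else "reject"

async def vote(content: str, n: int = 5) -> str:
    # Run the same evaluation n times in parallel and take the
    # majority verdict, which smooths out individual-run noise.
    verdicts = await asyncio.gather(*(evaluate(content, i) for i in range(n)))
    winner, _ = Counter(verdicts).most_common(1)[0]
    return winner
```

For score-based evaluators, the same shape works with an average or max over numeric scores instead of `Counter`.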

Run the worker

Register all tasks and start the worker.

For large-scale parallelism (hundreds or thousands of items), see the Batch Processing guide, which covers fan-out with concurrency control.

Next Steps