Workers in Hatchet

Workers are the backbone of Hatchet: they execute the individual steps defined in your workflows. They run autonomously across different nodes in your infrastructure, enabling distributed and scalable task execution. Understanding how to deploy and manage workers effectively is key to getting the most out of Hatchet.

How Workers Operate

In Hatchet, workers are long-running processes that listen for instructions from the Hatchet engine. They receive the steps assigned to them, execute them, and report the results back to the engine.

Here are the key characteristics of workers in Hatchet:

  1. Distributed Execution: Workers can be deployed across multiple systems or even different cloud environments, enabling distributed task execution.

  2. Language Agnostic: Workers can be implemented in various programming languages, as long as they can communicate with the Hatchet engine and execute the required steps.

  3. Scalability: By adding more workers, you can scale your system horizontally to handle increased loads and distribute tasks across a wider set of resources.
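
To make this concrete, the sketch below shows a minimal workflow that a worker could execute, written against the decorator-based Python SDK. The event name ("user:create"), the MyWorkflow class, and the step body are illustrative assumptions rather than required values; check the SDK reference for your version for the exact decorator options. This MyWorkflow class is the one referenced in the registration example further down.

from hatchet_sdk import Hatchet

# The client reads its connection settings (e.g. HATCHET_CLIENT_TOKEN) from the environment.
hatchet = Hatchet()

# An illustrative workflow triggered by a hypothetical "user:create" event.
@hatchet.workflow(on_events=["user:create"])
class MyWorkflow:
    @hatchet.step()
    def step1(self, context):
        # Each step receives a context carrying the triggering event's payload.
        print("running step1 with input:", context.workflow_input())
        return {"status": "step1 completed"}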

Registering Workflows and Starting Workers

To use workers effectively, you need to register your workflows with the worker and start the worker process. Here's how you can do it with the Python SDK:

workflow = MyWorkflow()
# Create a worker with a unique name; max_runs caps how many step runs it executes concurrently.
worker = hatchet.worker('test-worker', max_runs=4)
# Register every workflow this worker is able to execute.
worker.register_workflow(workflow)
# Start the worker so it begins listening for step runs from the Hatchet engine.
worker.start()

In the above example:

  1. We create a worker instance, giving it a unique name.
  2. We register the workflow(s) that the worker is capable of executing using the register_workflow method.
  3. Finally, we start the worker process using the start method, allowing it to begin listening for tasks from the Hatchet engine.

Run your worker process from the command line with the relevant environment variables set. Refer to the quick start for more details on how to set up your worker.
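
For example, assuming the worker code above is saved as worker.py and your API token is exported as HATCHET_CLIENT_TOKEN (both illustrative choices), starting the worker might look like this:

HATCHET_CLIENT_TOKEN="<token>" python worker.py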

Best Practices for Managing Workers

To ensure a robust and efficient Hatchet implementation, consider the following best practices when managing your workers:

  1. Reliability: Deploy workers in a stable environment with sufficient resources to avoid resource contention and ensure reliable execution.

  2. Monitoring and Logging: Implement robust monitoring and logging mechanisms to track worker health, performance, and task execution status.

  3. Error Handling: Design workers to handle errors gracefully, report execution failures to Hatchet, and retry tasks based on configured policies (see the sketch after this list).

  4. Secure Communication: Ensure secure communication between workers and the Hatchet engine, especially when distributed across different networks.

  5. Lifecycle Management: Implement proper lifecycle management for workers, including automatic restarts on critical failures and graceful shutdown procedures.

  6. Scalability: Plan for scalability by designing your system to easily add or remove workers based on demand, leveraging containerization, orchestration tools, or cloud auto-scaling features.

  7. Consistent Updates: Keep worker implementations up to date with the latest Hatchet SDKs and ensure compatibility with the Hatchet engine version.
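
As a sketch of the error-handling practice above, the snippet below shows a step that logs a transient failure and re-raises it so the Hatchet engine can record the failure and apply its retry policy. The OrderWorkflow class, the "order:created" event, the flaky_api_call helper, and the retries/timeout arguments on the step decorator are assumptions for illustration; confirm the retry options supported by your SDK version.

import logging


def flaky_api_call(payload):
    # Stand-in for an external call; here it always fails to illustrate the retry path.
    raise ConnectionError("upstream temporarily unavailable")


@hatchet.workflow(on_events=["order:created"])
class OrderWorkflow:
    # Assumed: the step decorator accepts a retry count and a timeout string.
    @hatchet.step(retries=3, timeout="30s")
    def charge(self, context):
        try:
            return {"charge": flaky_api_call(context.workflow_input())}
        except ConnectionError as err:
            # Log for observability, then re-raise so Hatchet marks the step
            # as failed and retries it according to the configured policy.
            logging.warning("charge step failed, will retry: %s", err)
            raise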

Conclusion

Workers are the essential components that execute the tasks orchestrated by the Hatchet engine. By deploying well-managed, efficient workers, you can build a reliable, scalable, and high-performing distributed task execution system. Follow the best practices above and take advantage of the features Hatchet provides to keep your worker infrastructure robust.