Streaming in Hatchet

Hatchet tasks can stream data back to a consumer in real-time. This has a number of valuable uses, such as streaming the results of an LLM call back from a Hatchet worker to a frontend or sending progress updates as a task chugs along.

Publishing Stream Events

You can stream data out of a task run by using the put_stream (or equivalent) method on the Context.
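For example, a minimal sketch with the Python SDK might look like the following. The @hatchet.task decorator and EmptyModel input type are assumptions based on the v1 Python SDK and may differ in your version; the relevant call is ctx.put_stream.

```python
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

# The opening line of Anna Karenina, split into small chunks to stand in for
# the token-by-token output of an LLM call.
TEXT = "Happy families are all alike; every unhappy family is unhappy in its own way."
CHUNKS = [TEXT[i : i + 10] for i in range(0, len(TEXT), 10)]


@hatchet.task(name="stream-example")
async def stream_task(input: EmptyModel, ctx: Context) -> None:
    # Publish each chunk as a stream event on this task run.
    for chunk in CHUNKS:
        ctx.put_stream(chunk)
```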

This task will stream small chunks of content through Hatchet, which can then be consumed elsewhere. Here we use some text as an example, but this is intended to replicate streaming the results of an LLM call back to a consumer.

Consuming Streams

You can easily consume stream events by using the stream method on the workflow run ref that the various fire-and-forget methods return.
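A sketch of a consumer, assuming the aio_run_no_wait fire-and-forget method from the Python SDK and the stream_task defined above. The exact method names, and whether the stream yields raw payloads or event objects, may vary by SDK version.

```python
import asyncio


async def consume() -> None:
    # Fire-and-forget: trigger the task and hold on to the workflow run ref.
    ref = await stream_task.aio_run_no_wait()

    # Subscribe to the run's stream and print each chunk as it arrives.
    async for chunk in ref.stream():
        print(chunk, end="", flush=True)


asyncio.run(consume())
```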

In the examples above, this will result in the famous text below being gradually printed to the console, bit by bit.

Happy families are all alike; every unhappy family is unhappy in its own way.

Everything was in confusion in the Oblonskys' house. The wife had discovered that the husband was carrying on an intrigue with a French girl, who had been a governess in their family, and she had announced to her husband that she could not go on living in the same house with him.
❗️ You must begin consuming the stream before any events are published; any events published before a consumer is initialized will be dropped. In practice this is rarely an issue, but adding a short sleep in the task before it starts publishing stream events can help ensure the consumer is ready.
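For example, reusing the publishing sketch from above, the hypothetical task could pause briefly before its first put_stream call:

```python
import asyncio


@hatchet.task(name="stream-example-with-delay")
async def stream_task_with_delay(input: EmptyModel, ctx: Context) -> None:
    # Give a consumer that subscribes immediately after triggering the run
    # a moment to initialize before the first event is published.
    await asyncio.sleep(1)

    for chunk in CHUNKS:
        ctx.put_stream(chunk)
```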

Streaming to a Web Application

It’s common to want to stream events out of a Hatchet task and back to the frontend of your application for consumption by an end user. As mentioned above, two clear cases where this is useful are streaming progress updates from a long-running task for a customer to monitor, and streaming back the results of an LLM call.

In both cases, we recommend using your application’s backend as a proxy for the stream, where you would subscribe to the stream of events from Hatchet, and then stream events through to the frontend as they’re received by the backend.

For example, with FastAPI, you’d do the following:
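This is a minimal sketch that reuses stream_task from above; the Hatchet-side run and stream method names are assumptions based on the v1 Python SDK, while the FastAPI StreamingResponse pattern is standard.

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


@app.get("/stream")
async def proxy_stream() -> StreamingResponse:
    # Trigger the Hatchet task without waiting for it to complete.
    ref = await stream_task.aio_run_no_wait()

    # Forward each stream event from Hatchet as a chunk of the HTTP response.
    async def generator():
        async for chunk in ref.stream():
            yield chunk

    return StreamingResponse(generator(), media_type="text/plain")
```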

Then, assuming you run the server on port 8000, running curl -N -X GET http://localhost:8000/stream would result in the text streaming back to your console from Hatchet through your FastAPI proxy.