This example demonstrates how to implement custom retry logic when using third-party services in your Upstash Workflow.
We'll use OpenAI as an example of such a third-party service. Our retry logic uses response status codes and headers to control when to retry, sleep, or store the third-party API response.
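At a high level, the workflow wraps the OpenAI call in a bounded retry loop. The sketch below is a rough outline only, assuming a Next.js route handler, a `prompt` field in the request payload, and two illustrative constants, `MAX_RETRIES` and `BASE_DELAY`; the individual branches are covered step by step in the rest of this example.

```typescript
import { serve } from "@upstash/workflow/nextjs"

// Illustrative constants: how many attempts we make, and the fallback
// delay (in seconds) between attempts when no rate-limit header is available.
const MAX_RETRIES = 3
const BASE_DELAY = 5

export const { POST } = serve<{ prompt: string }>(async (context) => {
  const { prompt } = context.requestPayload

  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    // 1. call OpenAI via context.api.openai.call
    // 2. status < 300: store the response in the database and stop retrying
    // 3. status 429:   sleep until the rate limit resets, then continue
    // 4. otherwise:    wait BASE_DELAY seconds before the next attempt
  }
})
```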
We use context.api.openai.call to send a request to OpenAI.
context.api.openai.call uses context.call under the hood, and using context.call to request data from an API is one of the most powerful Upstash Workflow features: the request can take much longer than any function timeout would normally allow, completely bypassing platform-specific timeout limits.
Our request to OpenAI includes an auth header, model parameters, and the data to be processed by the AI. The response from this function call (response) is used to determine our retry logic based on its status code and headers.
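A minimal sketch of that call, assuming the OpenAI integration accepts a token, an operation name, and a request body, and that the API key is available as the OPENAI_API_KEY environment variable; the model and messages are placeholders:

```typescript
const response = await context.api.openai.call("call-openai", {
  token: process.env.OPENAI_API_KEY!, // the auth header is built from this token
  operation: "chat.completions.create",
  body: {
    model: "gpt-4o", // placeholder model
    messages: [{ role: "user", content: prompt }],
  },
})

// response.status and response.header drive the retry decisions below
```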
3. Processing a Successful Response (Status Code < 300)
If the OpenAI response is successful (status code under 300), we store the response in our database. We create a new workflow step (context.run) to do this for maximum reliability.
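A sketch of that step, where saveToDatabase is a hypothetical helper standing in for your own persistence logic (it is not part of the Workflow SDK):

```typescript
// Hypothetical helper: replace with your own persistence logic
// (e.g. write to Redis, Postgres, etc.)
async function saveToDatabase(data: unknown) {
  // ...
}

if (response.status < 300) {
  await context.run("save-response-in-db", async () => {
    await saveToDatabase(response.body)
  })
  break // success: leave the retry loop
}
```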
4. Handling Rate Limit Errors (Status Code 429)

If the API response indicates a rate limit error (status code 429), we retrieve the rate limit reset values from the response headers, calculate how long until the rate limit resets, and pause execution (context.sleep) for that duration.
```typescript
if (response.status === 429) {
  const resetTime =
    response.header["x-ratelimit-reset-tokens"]?.[0] ||
    response.header["x-ratelimit-reset-requests"]?.[0] ||
    BASE_DELAY

  // assuming `resetTime` is in seconds
  await context.sleep("sleep-until-retry", Number(resetTime))
  continue
}
```
5. Pausing Before the Next Retry

To avoid making too many requests in a short period and possibly overloading the OpenAI API, we pause the workflow for a base delay (e.g., 5 seconds) before the next retry attempt, regardless of rate limits.
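As a sketch, this is a single context.sleep at the end of each loop iteration, reusing the illustrative BASE_DELAY constant (in seconds) from the outline above:

```typescript
// Not successful and not rate limited: wait a fixed base delay
// before the next attempt.
await context.sleep("pause-before-retry", BASE_DELAY)
```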