How to configure and avoid OpenAI rate limiting when ingesting files? #1601
Describe the problem
I uploaded a JSON file with around 4,000 entries. While monitoring the process, I realized OpenAI was enforcing rate limits, and the application became unresponsive as it kept retrying the failed calls to OpenAI.
What is the recommended way to avoid running into this problem?
To Reproduce
Create a JSON file with a large number of entries (e.g., 4,000 rows) and ingest it.
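For example, a file like that can be generated with a few lines of Python (the field names and file name here are arbitrary):

```python
import json

# Write a JSON file with ~4,000 entries to reproduce the rate-limit behavior.
entries = [
    {"id": i, "text": f"Sample entry number {i} with some content to embed."}
    for i in range(4000)
]

with open("large_dataset.json", "w") as f:
    json.dump(entries, f)
```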
Expected behavior
Graph generation proceeds while taking OpenAI's API rate limits into account.
Comments

Edit: I didn't realize this was a graph process. If you're using the full version and a single job fails, you can retry that job; look for the orchestration cookbook in the docs. Given that this is a JSON file, it might make sense to upload the entries as chunks rather than as a single document. The embedding requests are sent in batches with exponential backoff, though, so I suspect this will eventually succeed. If you're using the full version and something fails, you can always retry the job, which is especially helpful when you've broken the file up or have many files.

Even using Hatchet with smaller chunks, it can technically run into the same rate-limit issue, no?

I think what you're looking for, then, is the batch_size parameter in the configuration file. The default is 256. Changing it would only affect future graphs, though.
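If the configuration file is TOML, lowering batch_size for future graph builds might look like the sketch below. The section and key names are assumptions, not confirmed by this thread; check the project's sample configuration for the exact layout.

```toml
# Hypothetical config sketch; section and key names are assumptions.
[embedding]
batch_size = 128  # default is 256; a smaller value sends fewer texts per embedding request
```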
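For reference, the exponential backoff mentioned in the first comment follows a standard pattern. Below is a minimal, self-contained sketch of that pattern against the OpenAI embeddings API; it is not the project's actual retry code, and the model name and retry budget are placeholders:

```python
import random
import time

import openai  # pip install openai

client = openai.OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed_batch(texts, max_retries=6):
    """Embed one batch of texts, backing off exponentially on rate limits."""
    for attempt in range(max_retries):
        try:
            resp = client.embeddings.create(
                model="text-embedding-3-small",  # placeholder model name
                input=texts,
            )
            return [d.embedding for d in resp.data]
        except openai.RateLimitError:
            # Wait 1s, 2s, 4s, ... plus jitter so parallel workers spread out.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"batch still rate-limited after {max_retries} retries")
```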