perf: speed up clean operation #2326
Conversation
Thanks, this was a very clever optimization. I wrote a small comment.
lib/commands/cleanJobsInSet-3.lua (outdated diff)
@@ -63,7 +65,14 @@ while ((limit <= 0 or deletedCount < limit) and next(jobIds, nil) ~= nil) do
     jobTS = rcall("HGET", jobKey, "timestamp")
     if (not jobTS or jobTS < maxTimestamp) then
       if isList then
-       rcall("LREM", setKey, 0, jobId)
+       if deletionMarker == nil then
ok, so the risk is picking a deletion marker that could be a jobId we are not going to delete, and that is why you pick the first jobId. This is a bit unfortunate: if we could have a short, special value we would 1) skip this "if test" for every item and 2) avoid a potentially very large jobId that consumes more time and memory to store and delete. Did you try using the Lua value nil as a marker?
Alternatively, the value 0 could be used, but then we need to give an error if the user tries to use a custom Id of 0. I am quite sure nobody does this but it should be in place to avoid possible errors.
That's a good point about possibly large job ids. Since Redis list values are strings, a natural empty value would be the empty string. Luckily, it looks like it's already impossible to add a job whose id is the empty string. Testing with

queue.add({ some: 'data' }, { jobId: '' })

silently ignores the jobId and generates a numeric id instead. This is a consequence of the way the addJob Lua script uses the empty string as a signal that no jobId was passed. I'll make that change right away.
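For context, here is a rough sketch of what that empty-string convention looks like on the Redis side. This is not Bull's actual addJob.lua; the key and argument layout are assumptions for illustration only:

```lua
-- Hypothetical illustration of the convention described above, not the real
-- addJob.lua. KEYS[1] = the queue's id counter; ARGV[1] = the requested job
-- id, passed as '' when the caller did not supply a custom one.
local customId = ARGV[1]
if customId == "" then
  -- empty string signals "no custom id": generate an auto-incremented one
  return redis.call("INCR", KEYS[1])
end
return customId
```

Because of this, '' can never end up stored as a real job id in the list, which is what makes it safe to use as the deletion marker.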
Thanks for the review, @manast! I changed the deletion marker to the empty string and reran the tests.
I added a delay to the new test in the hope that it will help it pass in CI.
## [4.8.1](v4.8.0...v4.8.1) (2022-03-21)

### Performance Improvements

* speed up clean operation ([#2326](#2326)) ([ef5f471](ef5f471))
🎉 This PR is included in version 4.8.1 🎉

The release is available on:

Your semantic-release bot 📦🚀
Hi there,

Thanks for this very useful queue library. We've been using it successfully for a while and it's been very robust. However, we recently managed to freeze our Redis server when we tried to clean around 1 million jobs from a wait queue using the `clean` operation. This PR is a suggestion for improving the speed of the clean operation so that it takes seconds instead of hours on queues with millions of jobs.

The clean operation on sets backed by lists (wait, active, paused) quickly gets very slow when the list is large. This is because each job deletion scans the whole list in a LREM call, resulting in O(N * M) complexity, where N is the number of jobs in the list and M is the number of jobs to delete.
With this change, the deletion is done in two passes. The first pass sets each item that should be deleted to a special value. The second pass deletes all items with that special value in a single LREM call. This results in O(N) complexity.
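To make the mark-and-sweep idea concrete, here is a minimal, self-contained Lua sketch. It is not the actual cleanJobsInSet-3.lua (which also handles limits, reverse batched traversal, and cleanup of related job data); the key names, argument layout, and the `DEL` of the job hash are illustrative assumptions:

```lua
-- Illustrative sketch only, not the real cleanJobsInSet-3.lua.
-- KEYS[1] = list backing the set (e.g. "bull:myqueue:wait")
-- ARGV[1] = key prefix for job hashes (e.g. "bull:myqueue:")
-- ARGV[2] = maxTimestamp; jobs older than this are cleaned
local listKey = KEYS[1]
local prefix = ARGV[1]
local maxTimestamp = tonumber(ARGV[2])
local marker = ""  -- a value that can never be a real job id
local deleted = 0

-- Pass 1: overwrite every item that should be deleted with the marker
-- (one LSET per job), instead of issuing a full-list LREM per job.
local jobIds = redis.call("LRANGE", listKey, 0, -1)
for i, jobId in ipairs(jobIds) do
  local ts = redis.call("HGET", prefix .. jobId, "timestamp")
  if not ts or tonumber(ts) < maxTimestamp then
    redis.call("LSET", listKey, i - 1, marker)  -- LSET uses 0-based indexes
    redis.call("DEL", prefix .. jobId)          -- drop the job's own hash too
    deleted = deleted + 1
  end
end

-- Pass 2: remove all marked slots with a single LREM scan of the list.
if deleted > 0 then
  redis.call("LREM", listKey, 0, marker)
end

return deleted
```

The key property is that the marker can never collide with a stored job id, which is why the empty string (rejected by addJob, as discussed above) works well.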
Benchmarks
I ran some (not super accurate) benchmarks on my laptop to show the effect when cleaning either all jobs or 1000 jobs in queues of different sizes:
My benchmark script, for reference:
Alternative implementation
Figuring out the index for the LSET that sets the deletion marker is a bit tricky because we traverse the list in batches from the end. This reverse traversal was introduced in #2205 to speed up the clean operation when a limit is given. The script could be slightly simplified if we traversed the list in batches from the beginning. I'd be happy to have a go at it if that makes more sense.
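For anyone reading along, a hypothetical sketch (not the PR's code) of the index bookkeeping the paragraph above alludes to, i.e. recovering the absolute index needed by LSET while walking the list in batches from the tail; all names and the batch layout are assumptions:

```lua
-- Hypothetical illustration of reverse batched traversal, not the PR's code.
-- KEYS[1] = the list; ARGV[1] = batch size.
local listKey = KEYS[1]
local batchSize = tonumber(ARGV[1])
local len = redis.call("LLEN", listKey)

local batch = 1
while (batch - 1) * batchSize < len do
  -- Negative LRANGE offsets count from the tail; the start is clamped to the
  -- head, so the final batch may be shorter than batchSize.
  local items = redis.call("LRANGE", listKey, -(batch * batchSize),
                           -((batch - 1) * batchSize) - 1)
  local base = math.max(len - batch * batchSize, 0)
  for i = 1, #items do
    local absoluteIndex = base + i - 1  -- 0-based index usable by LSET
    -- ...decide here whether items[i] should be overwritten with the marker,
    -- e.g. redis.call("LSET", listKey, absoluteIndex, "")
  end
  batch = batch + 1
end
return batch - 1  -- number of batches processed
```

Traversing from the beginning would reduce `base` to a simple running offset, which is the simplification suggested above.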