
Good ideas for queue #185

Closed
alphanso opened this issue Aug 10, 2022 · 4 comments · Fixed by #196
Labels
question Further information is requested

Comments

@alphanso

Hi,

I have been using quill and it works well for my use case. I just wanted to share another implementation of an SPSC queue that I found interesting.

https://github.com/rigtorp/SPSCQueue
https://rigtorp.se/ringbuffer/

Please feel free to close this issue.

@alphanso (Author)

Also, it would be a good idea to add a raw memcpy baseline to the benchmark. This would give us a lower bound on how fast a logger can be.
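
A rough sketch of what such a baseline could look like (this is not quill's actual benchmark code; the message size, slot count, and iteration count are made up for illustration):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical raw-memcpy baseline: copy a log-record-sized payload into a
// preallocated buffer in a tight loop and report the average cost per copy.
// Whatever a logger's hot path does, it cannot be cheaper than this.
int main() {
  constexpr std::size_t msg_size = 64;           // assumed typical log record size
  constexpr std::size_t slots = 1024;            // reuse a small ring of destinations
  constexpr std::size_t iterations = 10'000'000;

  std::vector<char> src(msg_size, 'x');
  std::vector<char> dst(msg_size * slots);

  auto start = std::chrono::steady_clock::now();
  for (std::size_t i = 0; i < iterations; ++i) {
    std::memcpy(dst.data() + (i % slots) * msg_size, src.data(), msg_size);
  }
  auto end = std::chrono::steady_clock::now();

  // Touch the destination so the compiler cannot discard the copies entirely.
  unsigned long checksum = 0;
  for (char c : dst) checksum += static_cast<unsigned char>(c);

  auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
  std::printf("avg %.2f ns per %zu-byte memcpy (checksum %lu)\n",
              static_cast<double>(ns) / iterations, msg_size, checksum);
}
```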

@odygrd (Owner) commented Aug 16, 2022

Hello,

Thanks for sharing and for using quill.

There are multiple ways to implement an SPSC queue.

When designing the queue, it mainly boils down to two options:

a) a single-typed queue, where you specify the element type as part of the queue, e.g. `queue<int>`

b) a variant queue, where you do not specify the type of the object you are pushing; instead you reserve and write a number of bytes in the queue. You can write any object this way, but you also need to know how to retrieve that object later.

One possible implementation of a single-typed queue is the rigtorp one above.
However, a single-typed queue would not work well for a logger: the logger does not know in advance what types the user will log. You also don't want to wrap multiple potential types in a union or reserve a maximum slot size per message (which a single-typed queue requires), as that would waste an unnecessary amount of space.
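
To make the contrast concrete, here is a minimal sketch of the byte-oriented idea in option (b). The class and function names are made up for illustration and this is not quill's actual queue; in particular it never wraps around or reuses space, and it omits the full/empty handling a real SPSC queue needs:

```cpp
#include <atomic>
#include <cstddef>
#include <cstring>
#include <utility>
#include <vector>

// Sketch of a byte-oriented SPSC buffer: the producer reserves N bytes,
// memcpy's any object into them, and the consumer later reads the bytes back.
class ByteQueue {
public:
  explicit ByteQueue(std::size_t capacity) : buf_(capacity) {}

  // Producer: reserve `n` contiguous bytes, or nullptr if there is no space left.
  std::byte* prepare_write(std::size_t n) {
    std::size_t const w = write_pos_.load(std::memory_order_relaxed);
    return (w + n <= buf_.size()) ? buf_.data() + w : nullptr;
  }

  // Producer: publish the previously reserved bytes to the consumer.
  void commit_write(std::size_t n) {
    write_pos_.store(write_pos_.load(std::memory_order_relaxed) + n,
                     std::memory_order_release);
  }

  // Consumer: pointer to the next unread byte and how many bytes are readable.
  std::pair<const std::byte*, std::size_t> prepare_read() {
    std::size_t const w = write_pos_.load(std::memory_order_acquire);
    return {buf_.data() + read_pos_, w - read_pos_};
  }

  // Consumer: mark `n` bytes as consumed.
  void finish_read(std::size_t n) { read_pos_ += n; }

private:
  std::vector<std::byte> buf_;
  std::atomic<std::size_t> write_pos_{0};
  std::size_t read_pos_{0};  // only touched by the consumer thread
};

// Producer side: any trivially copyable object can go in, regardless of type.
inline bool push_double(ByteQueue& q, double value) {
  if (std::byte* dst = q.prepare_write(sizeof value)) {
    std::memcpy(dst, &value, sizeof value);
    q.commit_write(sizeof value);
    return true;
  }
  return false;
}
```

In practice the producer would also write a small header (e.g. a type tag or decode function) into the reserved region so the consumer thread knows how to interpret and format each record later.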

@odygrd odygrd added the question Further information is requested label Aug 16, 2022
@alphanso (Author)

I am not suggesting that you use the rigtorp one above. I really like the idea it uses to avoid false sharing: it gives writer_index and reader_index each their own cache line. It also keeps a local copy of the writer index and reader index, so that each core can cache those index values.
It may be possible to borrow those ideas from that implementation. This would help the workload when the reader is lagging far behind the writer.
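
For reference, a sketch of that layout idea (taken from rigtorp's design, not from quill's code; the names and the assumed 64-byte cache line are illustrative):

```cpp
#include <atomic>
#include <cstddef>

// Each index lives on its own (assumed 64-byte) cache line, so the producer's
// stores to write_idx never invalidate the line the consumer spins on, and
// vice versa. Each side also keeps a plain, non-atomic cached copy of the
// other side's index and only re-reads the shared atomic when the cached
// value makes the queue look full (producer) or empty (consumer).
struct Indices {
  alignas(64) std::atomic<std::size_t> write_idx{0};
  std::size_t cached_read_idx{0};   // producer-side stale copy of read_idx

  alignas(64) std::atomic<std::size_t> read_idx{0};
  std::size_t cached_write_idx{0};  // consumer-side stale copy of write_idx
};

// Producer fast path for a ring of `capacity` slots: most pushes never touch
// the consumer's atomic at all.
inline bool can_push(Indices& idx, std::size_t capacity) {
  std::size_t const w = idx.write_idx.load(std::memory_order_relaxed);
  if (w - idx.cached_read_idx == capacity) {
    idx.cached_read_idx = idx.read_idx.load(std::memory_order_acquire);
    if (w - idx.cached_read_idx == capacity) {
      return false;  // genuinely full; otherwise the cached value was just stale
    }
  }
  return true;
}
```

The consumer side would do the mirror-image check for empty using its cached copy of the writer index.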

@odygrd odygrd closed this as completed Sep 22, 2022
@odygrd odygrd linked a pull request Oct 26, 2022 that will close this issue
@odygrd (Owner) commented Oct 26, 2022

A local cache for the consumer was added in #196
