
Performance degradation with MaxMessageBytes #328

Closed
sagarkrkv opened this issue Aug 1, 2019 · 6 comments

@sagarkrkv (Contributor)

Describe the bug
Our application produces a large volume of messages in async mode. Since we control the creation of these messages, we don't need the ability to set MaxMessageBytes, but we are paying the performance penalty for this feature.
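
For context, our writer setup is roughly the following (a sketch only: the broker address, topic, and payload are placeholders, not our actual config):

```go
package main

import (
	"context"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	w := kafka.NewWriter(kafka.WriterConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "events",
		Async:   true, // fire-and-forget: WriteMessages doesn't wait for acks
	})
	defer w.Close()

	// We generate these messages ourselves, so their size is already bounded
	// and we never rely on MaxMessageBytes to reject anything.
	_ = w.WriteMessages(context.Background(),
		kafka.Message{Value: []byte("payload")},
	)
}
```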

v0.2.4 (profiling screenshot)

I noticed we don't need to create a channel when producing messages in async mode, so I created a PR (#290) to fix that.

So I expected v0.2.5 to perform better than the previous version, since this unnecessary allocation was removed. Instead, I was surprised to see a new expensive function show up in the profile, especially since we didn't change any config to enable this feature.

v0.2.5 (profiling screenshot)

Kafka Version: N/A

To Reproduce
This was found during benchmark testing of our application. If needed, I can probably write a simple benchmark to demonstrate this.
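
Roughly the shape of the benchmark I have in mind (a sketch only; it assumes a local broker on localhost:9092 and an existing bench topic):

```go
package kafka_test

import (
	"context"
	"testing"

	kafka "github.com/segmentio/kafka-go"
)

func BenchmarkAsyncWriteMessages(b *testing.B) {
	w := kafka.NewWriter(kafka.WriterConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "bench",
		Async:   true,
	})
	defer w.Close()

	msg := kafka.Message{Value: make([]byte, 1024)} // 1 KiB payload
	ctx := context.Background()

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := w.WriteMessages(ctx, msg); err != nil {
			b.Fatal(err)
		}
	}
}
```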

Expected behavior
Users who don't set MaxMessageBytes shouldn't pay the cost for it.

Additional context
Maybe we can look into reducing the cost of calculating the message size.
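
To make concrete what computing the size involves, here's an illustrative approximation (the helper and the overhead constant are placeholders of mine, not the library's actual accounting): it's a walk over key, value, and headers for every message, which adds up at high produce rates.

```go
package main

import (
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

// approxMessageSize illustrates the kind of per-message accounting a byte
// limit requires. The 26-byte overhead is a stand-in for record framing,
// not kafka-go's real constant.
func approxMessageSize(msg kafka.Message) int {
	const recordOverhead = 26
	size := recordOverhead + len(msg.Key) + len(msg.Value)
	for _, h := range msg.Headers {
		size += len(h.Key) + len(h.Value)
	}
	return size
}

func main() {
	fmt.Println(approxMessageSize(kafka.Message{Value: make([]byte, 512)}))
}
```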

cc @ShaneSaww @achille-roussel @stevevls

@sagarkrkv added the bug label on Aug 1, 2019
@achille-roussel (Contributor)

@sagarkrkv thanks for reporting! I'll look into this shortly.

Other changes landed between 0.2.4 and 0.2.5, and we've since also released 0.3. A simple example highlighting the problem would be very helpful for tracking down which commit introduced the regression.

@sagarkrkv (Contributor, Author)

@achille-roussel I believe this was caused by #208. I created a PR to fix this issue, PTAL.

@sagarkrkv (Contributor, Author)

I think #318 has reduced this problem to a large extent. I created #329 based on what I observed in 0.2.5; there's still a slight performance benefit with my new PR.

I'm totally willing to abandon the PR if you think a small performance benefit is not enough to justify the new function.

Cheers.

@achille-roussel (Contributor)

Improvements are always welcome!

@ShaneSaww (Contributor)

IMO you should always be setting some sort of MaxMessageBytes (aka max.request.size), as without it you could end up sending messages to Kafka that don't fit and will be dropped.
FWIW, I don't think you can disable max.request.size in the Java client, and I think we should follow that pattern.

> Maybe we can look into reducing the cost of calculating the message size.

I think this is the best route and looking at #329 it should help.
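
To illustrate the point about a limit always applying somewhere: if the writer doesn't enforce one, an equivalent check has to happen before producing. A hypothetical client-side guard (not a kafka-go API; the 1 MiB constant mirrors the Java client's default max.request.size):

```go
package kafkautil

import (
	"context"
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

// maxRequestSize mirrors the Java client's default max.request.size (1 MiB).
const maxRequestSize = 1 << 20

// writeIfFits is an illustrative helper (my naming, not part of kafka-go):
// it rejects oversized messages up front instead of letting them be dropped
// on the broker side.
func writeIfFits(ctx context.Context, w *kafka.Writer, msg kafka.Message) error {
	if n := len(msg.Key) + len(msg.Value); n > maxRequestSize {
		return fmt.Errorf("message of %d bytes exceeds the %d-byte limit", n, maxRequestSize)
	}
	return w.WriteMessages(ctx, msg)
}
```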

@sagarkrkv (Contributor, Author)

Fixed by #329
