This repository has been archived by the owner on Mar 11, 2020. It is now read-only.

Expose max message sizes and check boundaries on Read/Write #29

Conversation

simonferquel
Contributor

This implements correct msize handling, so that Read/Write operations cannot read/write more than msize-[message envelope size] bytes at once.
For easier processing, this behavior is encapsulated in an io.Reader / io.Writer implementation that chunks reads and writes when needed.
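As a rough sketch of that idea (all names here are illustrative, not the PR's actual API), the Reader half caps each Read at the channel's per-message limit, so no single 9p message can exceed msize minus the envelope overhead:

```go
package main

import "fmt"

// chunkReader sketches the Reader half of the idea described above.
// max stands in for msize minus the message envelope size; tread
// stands in for one Tread/Rread round-trip on the transport.
type chunkReader struct {
	max   int
	tread func(p []byte) (int, error)
}

// Read clamps the request to max bytes before issuing it, so callers
// with large buffers transparently get partial (chunked) reads.
func (r *chunkReader) Read(p []byte) (int, error) {
	if len(p) > r.max {
		p = p[:r.max]
	}
	return r.tread(p)
}

func main() {
	r := &chunkReader{max: 4, tread: func(p []byte) (int, error) { return len(p), nil }}
	n, _ := r.Read(make([]byte, 10))
	fmt.Println(n) // 4: the 10-byte request was clamped to max
}
```

Callers that need the full buffer filled can wrap this in io.ReadFull, which loops over the chunked reads.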

@codecov-io

codecov-io commented Oct 4, 2016

Current coverage is 15.33% (diff: 0.00%)

Merging #29 into master will decrease coverage by 0.62%

@@             master        #29   diff @@
==========================================
  Files            13         15     +2   
  Lines          1128       1174    +46   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
  Hits            180        180          
- Misses          900        946    +46   
  Partials         48         48          

Powered by Codecov. Last update f717cf6...f9c7ab2

…nts an io.Reader and io.Writer handling this limit correctly

Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
@simonferquel simonferquel force-pushed the expose_max_message_sizes branch from 483d407 to 31dd0e8 on October 4, 2016 at 19:30
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
simonferquel added a commit to simonferquel/datakit that referenced this pull request Oct 6, 2016
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
simonferquel added a commit to simonferquel/datakit that referenced this pull request Oct 6, 2016
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
simonferquel added a commit to simonferquel/datakit that referenced this pull request Oct 21, 2016
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
func (c *client) Read(ctx context.Context, fid Fid, p []byte, offset int64) (n int, err error) {
if len(p) > c.transport.channel().MaxReadSize() {
Contributor

I don't see how read can be handled on the client side. The message flow would look something like this:

  1. Client requests data of a given size at a given offset.
  2. Server responds with data up to msize - overhead.
  3. Client returns from this method with len(resp.Data).

I think we just need to ensure that the server doesn't respond with too large of a message for the channel.
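A minimal sketch of the server-side guard described here, using hypothetical names (clampRread, overhead are not the library's actual identifiers): before encoding an Rread response, trim the payload so that payload plus envelope never exceeds the negotiated msize.

```go
package main

import "fmt"

// clampRread trims the data payload of a hypothetical Rread response
// so that payload + envelope overhead never exceeds msize. The real
// overhead value would come from the 9p message encoding.
func clampRread(data []byte, msize, overhead int) []byte {
	max := msize - overhead
	if max < 0 {
		max = 0
	}
	if len(data) > max {
		return data[:max]
	}
	return data
}

func main() {
	data := make([]byte, 100)
	// With msize=64 and 24 bytes of envelope, at most 40 bytes fit.
	fmt.Println(len(clampRread(data, 64, 24))) // 40
}
```

The client then simply returns len(resp.Data) from its Read, as step 3 above describes.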

Contributor Author

The problem is that the wording in the man pages seems to indicate that the client should not send a message expecting a response longer than msize. What we could do, though, instead of failing, is create a request that respects msize and return the number of bytes actually read.
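That suggestion might look roughly like this (a sketch with stand-in names; maxReadSize stands in for c.transport.channel().MaxReadSize(), and fetch stands in for the Tread/Rread round-trip):

```go
package main

import "fmt"

// maxReadSize stands in for the channel's negotiated per-message limit.
const maxReadSize = 8

// readClamped sketches the suggested behavior: instead of failing when
// the caller asks for more than the channel allows, shrink the request
// to the limit and report the number of bytes actually read.
func readClamped(p []byte, fetch func(count int) []byte) int {
	count := len(p)
	if count > maxReadSize {
		count = maxReadSize
	}
	data := fetch(count) // stands in for one Tread/Rread exchange
	return copy(p, data)
}

func main() {
	p := make([]byte, 32)
	n := readClamped(p, func(count int) []byte { return make([]byte, count) })
	fmt.Println(n) // 8: the 32-byte request was clamped to maxReadSize
}
```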

Contributor

That is basically the right fix. This can mostly be done before the message is sent. Modifying Twrite and Rread inline is okay, since the return value will provide enough information for the sender to know they didn't send all the data.

We still need to handle the server side problem, as well.

We also need a way to set an optimal buffer size but I'd like to see how this is being used. Mostly, I'm worried that something is wrong since it took a year to find such a simple bug. For the most part, we haven't seen these conditions until now. Go's io package can be subtle, so I suspect a problem there.

@stevvooe
Contributor

#30 handles the client-side truncation of Twrite. I think there are a few other things we must do to protect the client.

Please break out the reader/writer from this PR into a separate PR.

@stevvooe stevvooe closed this Oct 25, 2016
@simonferquel
Contributor Author

I am just a bit worried about the write counterpart: per the io.Writer documentation, Write must return n == len(p) unless err is non-nil. Partial writes do not fit well with that contract.
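One way to reconcile the two (a sketch with illustrative names, not the PR's code): wrap the size-limited per-message write in a loop, so the outer Write only returns short when an error occurs, which is exactly what the io.Writer contract permits.

```go
package main

import (
	"fmt"
	"io"
)

// chunkWriter adapts a transport with a per-message payload limit to
// the io.Writer contract. sendTwrite stands in for one Twrite/Rwrite
// round-trip and may write fewer than len(p) bytes.
type chunkWriter struct {
	max        int
	sendTwrite func([]byte) (int, error)
}

// Write loops until all of p is sent, so it returns len(p) unless an
// error interrupts the loop.
func (w *chunkWriter) Write(p []byte) (int, error) {
	var total int
	for total < len(p) {
		chunk := p[total:]
		if len(chunk) > w.max {
			chunk = chunk[:w.max]
		}
		n, err := w.sendTwrite(chunk)
		total += n
		if err != nil {
			return total, err
		}
		if n == 0 {
			return total, io.ErrShortWrite
		}
	}
	return total, nil
}

func main() {
	var rounds int
	w := &chunkWriter{max: 4, sendTwrite: func(b []byte) (int, error) {
		rounds++
		return len(b), nil
	}}
	n, err := w.Write([]byte("hello world"))
	fmt.Println(n, err, rounds) // 11 <nil> 3
}
```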


@stevvooe
Contributor

@simonferquel Could you discuss on #30?

simonferquel added a commit to simonferquel/datakit that referenced this pull request Nov 15, 2016
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
simonferquel added a commit to simonferquel/datakit that referenced this pull request Nov 15, 2016
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
djs55 pushed a commit to djs55/datakit that referenced this pull request Mar 16, 2017
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
simonferquel added a commit to simonferquel/datakit that referenced this pull request Jun 16, 2017
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>