network: discard unrequested or stale block messages #5431
Conversation
network/wsPeer.go
Outdated
// Skip the message if it's a response to a request we didn't make or has timed out
if msg.Tag == protocol.TopicMsgRespTag && !wp.hasOutstandingRequests() {
	// We never requested anything from this peer so sending a response is breach protocol -- disconnect
	if wp.getPeerData(lastSentRequestTime) == nil {
This is a weak condition, since lastSentRequestTime is never removed from the peer data map.
I also traced the context all the way up and do not see it being cancelled by a timeout - could you point it out? The only cancellation I found in catchup.innerFetch is case <-ledgerWaitCh, when the ledger received the block by other means.
It appears that the existence of a hash in responseChannels is a pretty good indication that a request was sent, and an empty responseChannels indicates the opposite, which is a good opportunity to drop the message.
It would be more complex but more error-proof to keep a complementary data structure alongside responseChannels - for example, tracking which topic request tags have been sent - but that is complicated to manage, and there are lots of block requests during catchup.
Edit: agreed on the importance of lastSentRequestTime, but it needs to be cleared after some period of time.
Instead of clearing it out, added a synchronous check and disconnect if the response is more than a minute late.
Codecov Report
@@            Coverage Diff             @@
##           master    #5431      +/-   ##
==========================================
- Coverage   55.61%   48.80%    -6.81%
==========================================
  Files         447      447
  Lines       63410    79843   +16433
==========================================
+ Hits        35265    38969    +3704
- Misses      25757    38498   +12741
+ Partials     2388     2376      -12
... and 416 files with indirect coverage changes
network/wsPeer.go
Outdated
// If the peer sends us a block response after this threshold
// we should disconnect from it
const blockResponseDisconnectThreshold = 60 * time.Second
What is the rationale for punishing slow nodes by disconnecting from them? They are not malicious, just slow, so why is it a protocol violation to send you what you asked for, just a little late?
It seems like this would lead to a lot of unnecessary disconnections if the network got into some kind of poorly-performing state: peers would be requesting blocks from each other while bogged down and exhausting resources for other reasons, and on top of that they would start disconnecting from each other.
That's a valid point; we can increase this or drop it.
Alternatively, we could potentially make it a protocol validation by checking our receive time and not sending anything out if we know that the requesting peer is guaranteed to no longer be expecting it.
The code below will fail to serve the request anyway, since the handler to write the response to is removed after 4 seconds:
childCtx, cancelFunc := context.WithTimeout(ctx, time.Duration(w.config.CatchupGossipBlockFetchTimeoutSec)*time.Second)
Maybe 4 seconds is too short for full blocks.
Yeah for a slow connection, 4 seconds seems kind of aggressive for a full block + cert
Alternatively we could potentially make it a protocol validation by checking our receive time and not sending anything out if we know that the requesting peer is guaranteed to not be expecting it any longer.
If my node is on a slow connection or a slow box, or my node ran out of CPU/memory/diskIO for a little while, it could completely be my fault that I'm receiving messages slowly, not the other end — it doesn't seem to me like you can reliably bring the timing of messages into protocol rules...
Agreed, and removed any timestamp logic in favor of counting the number of topic requests we've sent to the peer.
Should we change the timeout as part of this PR?
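For reference, a minimal self-contained sketch of the counter-based approach described here (the name outstandingTopicRequests comes from the later discussion; everything else is illustrative and not the actual diff):

package main

import (
	"fmt"
	"sync/atomic"
)

// peer stands in for wsPeer and holds only the counter discussed above.
type peer struct {
	// outstandingTopicRequests counts TS (topic) requests sent to this peer
	// that have not yet been answered.
	outstandingTopicRequests int64
}

// sentTopicRequest is called whenever a topic request is written to the peer.
func (p *peer) sentTopicRequest() {
	atomic.AddInt64(&p.outstandingTopicRequests, 1)
}

// gotTopicResponse is called when a TS response arrives; a negative result
// means the peer sent a response we never asked for (the PR disconnects in
// that case).
func (p *peer) gotTopicResponse() (requested bool) {
	return atomic.AddInt64(&p.outstandingTopicRequests, -1) >= 0
}

func main() {
	p := &peer{}
	p.sentTopicRequest()
	fmt.Println(p.gotTopicResponse()) // true: a request was outstanding
	fmt.Println(p.gotTopicResponse()) // false: unsolicited response
}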
network/wsNetwork_test.go
Outdated
// Stop and confirm that we hit the case of disconnecting a peer for sending an unrequested block response
netB.Stop()
lg := logBuffer.String()
require.Contains(t, lg, "sent TS response without a request")
check the outstandingTopicRequests counter here?
At this point netB has already disconnected from netA, so we can't check that, since the counter would have been on the peer. Trying to monitor it flipping to negative while the peer is in the process of disconnecting would cause a data race, I believe.
you can't cause a race on an atomic counter though?
The relevant counter here is on netB's peer struct representing netA. The race wouldn't be on the atomic counter but on the fact that the peer whose counter I'm trying to check is in the process of being destroyed. Either way, I removed the offending log check for this one but kept it for the next case with netC so far.
The last update was just to merge in master and resolve the conflict.
The logic for holding the counter and disconnecting looks fine. Just some comments on testing and a suggestion for changing hasOutstandingRequests.
network/wsNetwork_test.go
Outdated
// Stop and confirm that we hit the case of disconnecting a peer for sending an unrequested block response
netB.Stop()
lg := logBuffer.String()
require.Contains(t, lg, "sent TS response without a request")
Checking log content seems brittle if the message changes. Is the concern here to confirm that it didn't disconnect for any other reason? If so, it might justify having counters for disconnect reasons, which you could extend with this reason.
Yeah it is somewhat brittle but is also an easy fix if the log message changes. I'm open to adding counters anyhow but we do use this pattern elsewhere in the code already.
You are already bumping networkConnectionsDroppedTotal.Inc(map[string]string{"reason": "protocol"})
so you could check that?
But isn't it good enough to assert that the disconnect happens (with the new reason code, for example)? Asserting actual behavior rather than log output seems better.
Agreed that it's good enough, and I've changed the reason code.
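For what it's worth, a fragment of what asserting the behavior (rather than the log) could look like in the existing test; the GetPeers/PeersConnectedIn call and the timeouts are assumptions about the test harness, not the actual test code:

// Wait for netA to drop the peer that sent the unrequested TS response,
// instead of grepping the log buffer for the warning message.
require.Eventually(t, func() bool {
	return len(netA.GetPeers(PeersConnectedIn)) == 0
}, 5*time.Second, 50*time.Millisecond,
	"expected netA to disconnect the peer that sent an unrequested TS response")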
// Stop and confirm that we hit the case of disconnecting a peer for sending a stale block response
netC.Stop()
netA and netB are stopped via defer; if the test fails, will this netC be left open?
There is a defer for netC as well, on line 4009.
The reason I'm also stopping manually (doing the same for netB as well) is that otherwise reading the log would be a data race.
Maybe we should not read the log then?
I removed one of the log-reading checks from netB, since I agree that the disconnect reason is a good enough check. But this case doesn't warrant a disconnect, since we did make the request (or more); we are just no longer interested in the response.
Do you want me to introduce a new counter here and check that instead of the log?
network/wsPeer.go
Outdated
func (wp *wsPeer) hasOutstandingRequests() bool {
	wp.responseChannelsMutex.Lock()
	defer wp.responseChannelsMutex.Unlock()
	return len(wp.responseChannels) > 0
}
Since you're doing all the work of taking the lock, could you instead return the len directly, and let the caller decide to compare it with 0?
I think I have a slight preference for the way it is currently, but I'm happy to change it if there's a +1.
I just don't think we will be checking the length of this outside of this use case, and to me this parses slightly easier in the conditional.
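If we did go the other way, the change would be small; a sketch of the suggested variant, with outstandingRequestCount as a purely illustrative name:

// outstandingRequestCount returns the number of in-flight requests and lets
// the caller decide what to compare it against.
func (wp *wsPeer) outstandingRequestCount() int {
	wp.responseChannelsMutex.Lock()
	defer wp.responseChannelsMutex.Unlock()
	return len(wp.responseChannels)
}

The call site would then read outstandingRequestCount() == 0 instead of !hasOutstandingRequests().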
// Stop and confirm that we hit the case of disconnecting a peer for sending a stale block response
netC.Stop()
lg = logBuffer.String()
require.Contains(t, lg, "wsPeer readLoop: received a TS response for a stale request ")
Rather than asserting log behavior, why not assert actual behavior, like that the message was discarded?
I don't think anything else happens here as a side effect that I could check. We aren't disconnecting or bumping any counters. I wanted to distinguish it from the fall-through case of going through the unmarshalling process in the switch statement below, though.
The behavior is that we're dropping the message, so the handler is not called... there are some similar tests that register handlers for certain tags and count the number of calls.
oh this tag doesn't use a handler.. it's handled inline. so weird
Indeed. Should we make a TS handler to make it more consistent as part of this?
	wp.net.log.Warnf("wsPeer readloop: could not discard timed-out TS message from %s : %s", wp.conn.RemoteAddr().String(), err)
	continue
}
wp.net.log.Warnf("wsPeer readLoop: received a TS response for a stale request from %s. %d bytes discarded", wp.conn.RemoteAddr().String(), n)
Warnf goes to telemetry by default, but this doesn't seem very important. Could we make this Infof?
Sure, but it's nice to have telemetry on how often this happens. The current behavior actually logs this case to telemetry, but not until after it unmarshalls the message on line 581. This doesn't increase the number of telemetry messages we are expecting to receive, but even so, I'm happy to downgrade it if others agree, and do the same for the other place where we log this.
network/wsPeer.go
Outdated
// Peer sent us a response to a request we made but we've already timed out -- discard
n, err = io.Copy(io.Discard, reader)
if err != nil {
	wp.net.log.Warnf("wsPeer readloop: could not discard timed-out TS message from %s : %s", wp.conn.RemoteAddr().String(), err)
Similarly, maybe this is not important enough to be Warnf level? io.Discard.Write() can't fail, but I guess reader.Read() could return an err...
Oh I see, we seem to have wp.reportReadErr(err) just for that, and it has its own special handling of when and how to log read errors from peers.
also, shouldn't you disconnect here? that's what happens for other reader Read errors?
Agreed, made the change
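Roughly, the change discussed here amounts to the following shape (a sketch of the discard path only; reportReadErr and the log calls are from the existing code, while the bare return standing in for however readLoop actually exits and triggers the disconnect is an assumption):

// Peer sent us a response to a request we made but we've already timed out -- discard
n, err = io.Copy(io.Discard, reader)
if err != nil {
	// A failed read is handled like any other read error from the peer:
	// report it and fall out of readLoop, which disconnects.
	wp.reportReadErr(err)
	return
}
wp.net.log.Warnf("wsPeer readLoop: received a TS response for a stale request from %s. %d bytes discarded", wp.conn.RemoteAddr().String(), n)
continue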
The comments I had were all addressed. I have reviewed the changes to the tests regarding eventually and log checking.
Summary
This PR adds special handling immediately after reading the incoming message Tag when receiving a TS message, which is used to send catchup blocks.
It handles the following two cases:
1. The peer sent us a TS response we never requested. This is treated as a protocol violation, and we disconnect from the peer.
2. We did request the block, but the request has timed out (config.CatchupGossipBlockFetchTimeoutSec) and there is no point in trying to process the message. Discard it and move on.
Test Plan
Added a new test to wsNetwork_test to confirm that we hit the disconnect case for unsolicited blocks.
I'm open to suggestions for cleaner testing of this and of the second case, where we did request a block but the handler has since timed out.
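To make the two cases concrete, here is a small, self-contained sketch of the decision logic described above (identifiers other than responseChannels, outstandingTopicRequests and CatchupGossipBlockFetchTimeoutSec are stand-ins; the real change lives inline in wsPeer's readLoop and is not this code):

package main

import (
	"bytes"
	"fmt"
	"io"
	"sync"
	"sync/atomic"
)

// peer is a stripped-down stand-in for wsPeer.
type peer struct {
	outstandingTopicRequests int64 // TS requests sent and not yet answered
	responseChannelsMutex    sync.Mutex
	responseChannels         map[uint64]chan []byte // keyed by request hash
}

// onTopicResponse mirrors the two cases: disconnect on an unrequested
// response, discard (without unmarshalling) when the request already
// timed out and its response channel was removed.
func (p *peer) onTopicResponse(body io.Reader) string {
	if atomic.AddInt64(&p.outstandingTopicRequests, -1) < 0 {
		// Case 1: we never asked this peer for anything -- protocol breach.
		return "disconnect"
	}
	p.responseChannelsMutex.Lock()
	waiting := len(p.responseChannels) > 0
	p.responseChannelsMutex.Unlock()
	if !waiting {
		// Case 2: the request timed out (CatchupGossipBlockFetchTimeoutSec);
		// drain the payload and move on.
		n, _ := io.Copy(io.Discard, body)
		return fmt.Sprintf("discarded %d stale bytes", n)
	}
	return "deliver to the waiting response channel"
}

func main() {
	p := &peer{responseChannels: map[uint64]chan []byte{}}
	fmt.Println(p.onTopicResponse(bytes.NewBufferString("unsolicited block"))) // disconnect
}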