control/controlclient: remove optimization that was more convoluted than useful #14514
Conversation
force-pushed from 4750dc4 to 6dd8157
(In Borat voice) Very nice!
Importing the ~deprecated golang.org/x/exp/maps as "xmaps" to not
shadow the std "maps" was getting ugly. And using slices.Collect on
an iterator is verbose & allocates more. So copy (x)maps.Keys+Values
into our slicesx package instead.

Updates #cleanup
Updates #12912
Updates #14514 (pulled out of that change)

Change-Id: I5e68d12729934de93cf4a9cd87c367645f86123a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
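For context, a minimal sketch of what copying (x)maps.Keys and Values into slicesx amounts to (the function names here are illustrative assumptions; the identifiers the actual commit adds may differ):

```go
package slicesx

// MapKeys returns the keys of m as a freshly allocated slice,
// in an unspecified order, matching x/exp/maps.Keys semantics.
func MapKeys[K comparable, V any](m map[K]V) []K {
	ret := make([]K, 0, len(m))
	for k := range m {
		ret = append(ret, k)
	}
	return ret
}

// MapValues returns the values of m as a freshly allocated slice,
// in an unspecified order, matching x/exp/maps.Values semantics.
func MapValues[K comparable, V any](m map[K]V) []V {
	ret := make([]V, 0, len(m))
	for _, v := range m {
		ret = append(ret, v)
	}
	return ret
}
```

Unlike slices.Collect over an iterator, these size the result up front with make(..., 0, len(m)), so each call does a single allocation.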
util/slicesx/slicesx.go (outdated)
// the std "maps" package. This version exists for clarity | ||
// when reading call sites. |
I feel like this is a weak argument -- it's easy enough to import it as xmaps. The allocation optimization seems more compelling, so maybe that should be mentioned first?
It's actually not that easy to import it as xmaps... goimports doesn't help (as much / as quickly) in that case, and you're back to adding the alias manually like a caveperson. That's often what bugs me more than the alloc.
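To illustrate the friction being discussed, a runnable sketch (the map contents are arbitrary):

```go
package main

import (
	"fmt"
	"maps" // std: iterator-based Keys/Values (Go 1.23+)
	"slices"

	xmaps "golang.org/x/exp/maps" // slice-returning Keys/Values; goimports won't add this alias for you
)

func main() {
	m := map[string]int{"a": 1, "b": 2}

	// Iterator route: verbose, and slices.Collect grows its result
	// without knowing len(m) up front.
	k1 := slices.Collect(maps.Keys(m))

	// x/exp route: shorter, but the "xmaps" alias has to be typed by hand.
	k2 := xmaps.Keys(m)

	fmt.Println(len(k1), len(k2)) // 2 2
}
```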
force-pushed from 6dd8157 to 45bbd13
```diff
@@ -778,14 +753,19 @@ func peerChangeDiff(was tailcfg.NodeView, n *tailcfg.Node) (_ *tailcfg.PeerChang
	return ret, true
}

func (ms *mapSession) sortedPeers() []tailcfg.NodeView {
```
If we wanted to keep the memoized version of this, we could put it in a lazy value and clear it any time the peer list is written to.
That's the sort of accounting (invalidating X on all changes) I'm trying to remove to simplify the code.
Turns out there's exactly one caller of this func that I want to eliminate entirely anyway, so memoizing it wouldn't pay off.
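For reference, the memoize-and-invalidate pattern the first comment suggests looks roughly like this (a self-contained sketch with hypothetical types, not Tailscale's actual code):

```go
package main

import (
	"fmt"
	"slices"
)

// peerSet caches a sorted view of its peer IDs and invalidates the
// cache on every write: the kind of bookkeeping the reply above is
// arguing against keeping.
type peerSet struct {
	peers  map[int]string
	sorted []int // lazily computed; nil means stale
}

func (p *peerSet) set(id int, name string) {
	if p.peers == nil {
		p.peers = make(map[int]string)
	}
	p.peers[id] = name
	p.sorted = nil // invalidate on every write
}

func (p *peerSet) sortedIDs() []int {
	if p.sorted == nil {
		p.sorted = make([]int, 0, len(p.peers))
		for id := range p.peers {
			p.sorted = append(p.sorted, id)
		}
		slices.Sort(p.sorted)
	}
	return p.sorted
}

func main() {
	var p peerSet
	p.set(3, "c")
	p.set(1, "a")
	fmt.Println(p.sortedIDs()) // [1 3]
	p.set(2, "b")              // cache invalidated
	fmt.Println(p.sortedIDs()) // [1 2 3]
}
```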
force-pushed from 45bbd13 to a545684
force-pushed from a545684 to 6c69e76
While working on #13390, I ran across this non-idiomatic
pointer-to-view and parallel-sorted-map accounting code that was all
just to avoid a sort later.
But the sort later when building a new netmap.NetworkMap is already a
drop in the bucket of CPU compared to how much work & allocs
mapSession.netmap and LocalBackend's spamming of the full netmap
(potentially tens of thousands of peers, MBs of JSON) out to IPNBus
clients for any tiny little change (node changing online status, etc).
Removing the parallel sorted slice let everything be simpler to reason
about, so this does that. The sort might take a bit more CPU time now
in theory, but in practice for any netmap size for which it'd matter,
the quadratic netmap IPN bus spam (which we need to fix soon) will
overshadow that little sort.
Updates #13390
Updates #1909

Change-Id: I3092d7c67dc10b2a0f141496fe0e7e98ccc07712
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
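Given that description, the simplified sortedPeers plausibly collects and sorts on demand, along these lines (a hedged reconstruction assuming ms.peers is a map keyed by node ID and that "cmp" and "slices" are imported; the real field names and types may differ):

```go
// Sketch only: no parallel sorted slice to keep in sync with the
// peers map; just build and sort the result when it's asked for.
func (ms *mapSession) sortedPeers() []tailcfg.NodeView {
	ret := make([]tailcfg.NodeView, 0, len(ms.peers))
	for _, n := range ms.peers {
		ret = append(ret, n)
	}
	slices.SortFunc(ret, func(a, b tailcfg.NodeView) int {
		return cmp.Compare(a.ID(), b.ID())
	})
	return ret
}
```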