The Zero Allocations project #18849
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Would be interesting to see this benchmarked to be able to reason about risk vs reward (where the ideal reward would be a significant, measurable, and user-perceivable performance improvement) :)
I'd be very interested to see the benchmarks of this :)
One of the hottest allocations that I found that might be the "easiest" to take a stab at is shown in the profile (screenshot not reproduced here; it looks like the CCoinsMap). One thing I'm trying now is a custom arena allocator for this map to see if that helps.
This is a bit higher than I expected, but yes, this is why gamedevs typically ban dynamic allocations from their main game loop: it absolutely kills performance (heap fragmentation, cache incoherency, etc.). Most high-performing games arena-allocate or use custom allocators. I'm going into this from a gamedev mindset: I see IBD as our main loop. Let's minimize time spent per frame and maximize FPS (in our case, blocks/txs per second 😅). If we can arena/custom allocate some of these hotspots it might help a bunch.
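As a rough illustration of the arena idea in generic C++ (not Bitcoin Core code; the `Arena` class and its members below are hypothetical): allocations are bumped out of one preallocated buffer and released all at once, so the hot loop never calls malloc/free per object.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical bump arena: hand out memory from one big preallocated buffer
// and release everything with a single Reset() instead of per-object frees.
class Arena {
public:
    explicit Arena(std::size_t bytes) : buf_(bytes), offset_(0) {}

    void* Allocate(std::size_t bytes, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);  // round up to alignment
        if (aligned + bytes > buf_.size()) return nullptr;           // arena exhausted
        offset_ = aligned + bytes;
        return buf_.data() + aligned;
    }

    void Reset() { offset_ = 0; }  // "free" everything at once

private:
    std::vector<std::byte> buf_;
    std::size_t offset_;
};

int main() {
    Arena arena(1 << 20);  // 1 MiB reserved up front, outside the hot loop
    auto* n = static_cast<int*>(arena.Allocate(sizeof(int), alignof(int)));
    if (n) *n = 42;
    arena.Reset();         // no per-allocation free() calls
    return 0;
}
```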
@jb55 The CCoinsMap is where the actual UTXO cache is stored - it should be one allocation per UTXO. That's a huge number, because we cache a huge number of UTXOs, but I don't expect it will be easy to reduce. IBD performance heavily relies on being able to delete cache entries if they're created and spent without a flush to disk in between, which seems incompatible with arena allocation.
> @jb55 The CCoinsMap is where the actual UTXO cache is stored - it
> should be one allocation per UTXO. That's a huge number, because we
> cache a huge number of UTXOs, but I don't expect it will be easy to
> reduce.
yes, in this case the only thing we could do better is a custom allocator that maintains an efficient memory layout for cache friendliness. This is something I wanted to try as well, in addition to reducing allocations where possible.
We could also replace the hashmap with one that preserves cache locality, like SwissTable (CC #16718)
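A rough sketch of the cache-locality argument, assuming Abseil's `absl::flat_hash_map` as a SwissTable implementation (illustrative only; this is not the type CCoinsMap uses today):

```cpp
#include <absl/container/flat_hash_map.h>
#include <cstdint>
#include <string>

int main() {
    // Open addressing keeps entries in one contiguous slot array, so inserts do
    // not heap-allocate a node per entry and probes touch fewer cache lines than
    // a node-based std::unordered_map.
    absl::flat_hash_map<uint64_t, std::string> cache;
    cache.reserve(1 << 20);     // one table allocation up front
    cache.emplace(42, "coin");  // no per-entry node allocation
    cache.erase(42);            // erasing marks the slot; no free() per entry
    return 0;
}
```

One trade-off to keep in mind: unlike std::unordered_map, a flat table invalidates pointers and references to entries on rehash, which would matter for how the UTXO cache is accessed.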
re: allocator, looks like there is some prior art here: #16801
This matches a change in the C++20 std::span proposal.
This prevents constructing a Span<A> given two pointers into an array of B (where B is a subclass of A), at least without explicit cast to pointers to A.
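To make the hazard concrete, here is a toy example with a naive span type (`SimpleSpan` is hypothetical, not the real Span class): if a span of A could be built from pointers into an array of B, indexing would stride by sizeof(A) through elements that are sizeof(B) apart.

```cpp
#include <cstddef>

struct A { int x; };
struct B : A { int extra; };  // sizeof(B) > sizeof(A)

// Simplified stand-in for a span type: operator[] strides by sizeof(T).
template <typename T>
struct SimpleSpan {
    T* data;
    std::size_t size;
    T& operator[](std::size_t i) const { return data[i]; }
};

int main() {
    B arr[4] = {};
    // With a naive span the next line would compile, because B* converts
    // implicitly to A*, but span[1] would then point into the middle of arr[0]
    // (stride sizeof(A), elements sizeof(B) apart). The change discussed above
    // rejects this unless the caller explicitly casts the pointers to A*.
    // SimpleSpan<A> dangerous{arr, 4};
    SimpleSpan<B> ok{arr, 4};
    ok[1].extra = 7;
    return 0;
}
```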
@jamesob Unless you're running with -par=1, I believe there is no change in behaviour (and even then, it's probably trivial).
@jamesob thanks for running that, is there an easy way to run these myself as I hack on this branch? I'm guessing most perf would be dominated by IO factors/secp? Another motivating factor for this PR was to investigate heap usage in general. Right now my core node uses a gig of RAM and I'm looking at ways to reduce that outside of IBD.
actually, I'm just going to try to reproduce this locally: pruned IBD on an in-memory filesystem, and I'll just run it a bunch of times up to some height and then compare the time-to-height.
looks like the current changes don't affect IBD much at all (benchmark screenshot not reproduced here).
looks like I'll have to try harder! but now I have a pretty good testbed with this bpftrace script + linux userspace tracepoints: https://jb55.com/s/ibd.bt.txt
Consider that testing reindex or IBD up to height 100,000 is uncharacteristic because it happens almost instantaneously relative to the rest of the chain. You may find my previous work in bitcoinperf helpful, if not a little hard to set up (happy to try and improve this with you). Here's an example and accompanying output of benching a subset of IBD within a meaningful chain region.
@jamesob I tried a bit higher and started to see what might be a performance improvement: #18847 (comment) The only IBD-focused commit so far is ZAP1, linked above. I'm still looking into UTXO heap improvements, which might be more interesting, as the UTXO map is the #1 allocator during IBD.
…key-prevector' into zeroalloc
🐙 This pull request conflicts with the target branch and needs rebase.
…tion [ZAP1]

83a425d compressor: use a prevector in compressed script serialization (William Casarin)

Pull request description:

This function was doing millions of unnecessary heap allocations during IBD. I'm starting to catalog unnecessary heap allocations as a pet project of mine: as-zero-as-possible-alloc IBD. This is one small step.

before: ![May01-174536](https://user-images.githubusercontent.com/45598/80850964-9a38de80-8bd3-11ea-8eec-08cd38ee1fa1.png)
after: ![May01-174610](https://user-images.githubusercontent.com/45598/80850974-a91f9100-8bd3-11ea-94a1-e2077391f6f4.png)

~~should I type alias this?~~ *I type aliased it*

This is a part of the Zero Allocations Project #18849 (ZAP1). This code came up as a place where many allocations occur.

ACKs for top commit:
  Empact: ACK 83a425d
  elichai: tACK 83a425d
  sipa: utACK 83a425d

Tree-SHA512: f0ffa6ab0ea1632715b0b76362753f9f6935f05cdcc80d85566774401155a3c57ad45a687942a1806d3503858f0bb698da9243746c8e2edb8fdf13611235b0e0
Sad to see this closed; I love the idea of exploring how we can reduce allocations in general, since it seems like (in addition to any incidental performance benefits) fewer allocations probably make behavior more predictable... although I guess while we might be able to reasonably preallocate certain things (coinsCache comes to mind), others (the block index) might not be possible/desirable to preallocate, so maybe "Zero Allocations" is somewhat of a misnomer.
it's generally really hard to get changes like this through; e.g. personally my epoch mempool stuff also aims to eliminate short-lived allocations in many of the mempool algorithms, but it is (from my perspective) stalled out for lack of sufficient review. If big refactorings like ZAP or the Mempool Project are going to make the kind of headway you'd want to see, they'd probably need to be prioritized during a specific release cycle in order for contributors like myself or @jb55 (who I can't speak for as to why he closed it) to feel it's worthwhile investing effort in rebasing/keeping them alive... I know there's also a difficult balance: contributors with full-time or part-time jobs outside of Bitcoin Core are probably more often writing code than reviewing others' code (i.e., to scratch one's own itch or fix a specific problem), which leads to less 'review karma' or whatever...
This is an octomerge staging PR for tracking various heap-allocation reductions during IBD and regular use. Reducing heap allocations improves cache coherency and should improve performance. Maybe. You can use this PR to bench IBD.
compressor: use prevector in CompressScript serialization
[ZAP2] #18848 - threadnames: don't allocate memory in ThreadRename
[ZAP3] #18985 - bloom: use Span instead of std::vector for insert and contains (a generic sketch follows below)
netaddress: return a prevector from CService::GetKey()
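A generic sketch of the Span-instead-of-std::vector idea behind the [ZAP3] item above (the `ByteSpan` type and `Contains` function are hypothetical, not the actual CBloomFilter API): taking a read-only pointer+length view lets callers pass existing bytes without copying them into a temporary std::vector, so the call site does no heap allocation.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical read-only byte view, standing in for Span<const unsigned char>
// (or C++20 std::span).
struct ByteSpan {
    const uint8_t* data;
    std::size_t size;
};

// Sketch of a contains()-style API taking a view instead of a std::vector:
// callers hand over any contiguous bytes for free.
bool Contains(ByteSpan key) {
    return key.size > 0;  // placeholder for the real filter lookup
}

int main() {
    uint8_t raw[32] = {};
    // No temporary std::vector<uint8_t>(raw, raw + 32) needed at the call site.
    return Contains(ByteSpan{raw, sizeof(raw)}) ? 0 : 1;
}
```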