
Custom storagepool #23

Merged · 3 commits · Dec 29, 2019

Conversation

@dan-and (Contributor) commented Dec 26, 2019

Hi Kjeld,

Merry Christmas!
I added a custom zpool device configuration option: it makes it possible to run the tests against slow physical drives as well, to see the effect of high compression vs. slow drives.

Let me try it out for a bit and I will share some test results.
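Purely as an illustration of the idea (the variable name and script invocation below are hypothetical, not what this PR actually adds; the zpool commands themselves are standard ZFS), a custom-device run could look roughly like this:

```bash
# Illustrative sketch only: TEST_POOL and the script name are hypothetical,
# not the PR's actual option. The zpool commands are standard ZFS.

# Create a pool backed by a real (slow) physical drive instead of the ramdisk:
zpool create -f testpool /dev/disk/by-id/ata-EXAMPLE_SLOW_DRIVE

# Run the compression benchmark against that pool:
TEST_POOL=testpool ./run-compression-tests.sh

# Tear the pool down afterwards:
zpool destroy testpool
```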

@PrivatePuffin (Owner) commented

Hi Dan,
Merry Christmas to you too!

Interesting. What you are trying to say is basically:
Let's test whether higher compression increases throughput on a disk-speed-limited pool vs. the CPU-limited ramdisk pool.

That's indeed one of the scenarios compression gets used for, so it's worthwhile!

@PrivatePuffin (Owner) commented

I looked into this, but it isn't going to work as-is.
The tests also need to be modified to use psync instead of libaio.
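As a rough illustration of the change being discussed, here is a hedged sketch of how a fio invocation might be switched from libaio to psync; the job parameters are placeholders, not the project's actual test definitions.

```bash
# Sketch only: illustrative fio parameters, not the repo's actual job files.
# libaio-based job (asynchronous I/O, queue depth > 1):
fio --name=seqwrite --ioengine=libaio --iodepth=16 --rw=write \
    --bs=1M --size=4G --directory=/testpool/fio --group_reporting

# psync-based equivalent (synchronous pread/pwrite, effective queue depth of 1):
fio --name=seqwrite --ioengine=psync --rw=write \
    --bs=1M --size=4G --directory=/testpool/fio --group_reporting
```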

@dan-and (Contributor, Author) commented Dec 29, 2019

I checked out both psync and libaio, but what exactly is the goal here? The values are not that different. It is a synthetic test that gives you a rough idea of what the effect would be for real applications.

I did both psync and libaio tests for ramdisk and drives: https://github.com/dan-and/zfs-compression-test/tree/test_results/test_results/pr23_libaio_vs_psync

@PrivatePuffin (Owner) commented

@dan-and
Okay, what I find interesting are the reads: those are clearly cached in both. Which is understandable, as it seems the ARC ignores direct-I/O requests from the I/O APIs.

Anyway, I'll leave this as-is and will make some test changes accordingly.
I realised we can just use psync everywhere (even in the RAM-based tests).

So I need to do some restructuring and also bring this closer to ZFS's built-in tests, because in the end I want (a version of) this built into the ZFS perf suite, and it would be preferable to reuse the ZFS fio standards where possible.
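To make the caching observation concrete, here is a hedged read-test sketch using psync: even when direct I/O is requested via fio's direct flag, reads can still come back from the ARC, which matches what is described above. The parameters are illustrative, not the project's actual jobs.

```bash
# Illustrative only: psync read job with direct I/O requested.
# As noted above, the ARC may still serve these reads from cache,
# so results can look much faster than the underlying disk.
fio --name=seqread --ioengine=psync --rw=read --bs=1M --size=4G \
    --direct=1 --directory=/testpool/fio
```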

PrivatePuffin merged commit e753d0d into PrivatePuffin:master on Dec 29, 2019
@PrivatePuffin (Owner) commented

@dan-and I found the primary cause of the unreasonably high speeds, even with sync and spinning rust:
randomisation of the buffer (and the compressibility calculation) is by default done in 512-byte chunks.

I think this leads to unreasonably fast (de)compression when the block size is larger than 512 bytes, for the same reason ZFS uses a 4K buffer size for fio. Working on fixing this :)
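A hedged illustration of the knob involved: fio's buffer_compress_chunk option (which defaults to 512 bytes) controls the granularity at which compressible buffer contents are generated, so raising it towards the block size avoids artificially easy compression. The specific values below are assumptions for illustration, not the fix that was committed.

```bash
# Illustrative only: generate compressible data at a coarser granularity than
# fio's 512-byte default so each written block has realistic compressibility.
fio --name=compwrite --ioengine=psync --rw=write --bs=128k --size=4G \
    --refill_buffers --buffer_compress_percentage=50 \
    --buffer_compress_chunk=4k \
    --directory=/testpool/fio
```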

@PrivatePuffin (Owner) commented

@dan-and
I updated most of the tests and also tested sync vs. async writes.

If you want sync writes you can add sync=1 to the tests.
But this absolutely DESTROYS performance, even on the ramdisk; I have no idea why, because in theory it shouldn't be that bad.

Anyway, the performance metrics are now much more realistic, so much so that I don't think the sync=1 flag is needed anymore.
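For reference, a hedged sketch of what adding fio's sync=1 option (O_SYNC on every write) might look like; the remaining parameters are placeholders rather than the repo's actual jobs.

```bash
# Illustrative only: force synchronous writes (O_SYNC) via fio's sync flag.
# Expect a large drop in throughput, as described above.
fio --name=syncwrite --ioengine=psync --rw=write --bs=128k --size=4G \
    --sync=1 --directory=/testpool/fio
```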
