tikv: add storage flow control doc #6173

Merged: 4 commits, Aug 18, 2021
29 changes: 29 additions & 0 deletions tikv-configuration-file.md

@@ -312,6 +312,35 @@ Configuration items related to the sharing of block cache among multiple RocksDB
+ Default value: 45% of the size of total system memory
+ Unit: KB|MB|GB

## storage.flow-control

Configuration items related to the flow control mechanism in TiKV. This mechanism replaces the write stall mechanism in RocksDB and controls flow at the scheduler layer, which avoids the QPS drop caused by stuck Raftstore or Apply threads when the write traffic is high.

### `enable`

+ Determines whether to enable the flow control mechanism. After it is enabled, TiKV automatically disables the write stall mechanism of KvDB and the write stall mechanism of RaftDB (excluding memtable).
+ Default value: `true`

### `memtables-threshold`

+ When the number of memtables in KvDB reaches this threshold, the flow control mechanism starts to work.
+ Default value: `5`

### `l0-files-threshold`

+ When the number of L0 files in KvDB reaches this threshold, the flow control mechanism starts to work.
+ Default value: `9`

### `soft-pending-compaction-bytes-limit`

+ When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism starts to reject some write requests and reports the `ServerIsBusy` error.
+ Default value: `"192GB"`

### `hard-pending-compaction-bytes-limit`

+ When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism rejects all write requests and reports the `ServerIsBusy` error.
+ Default value: `"1024GB"`
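
Taken together, the items above live under the `[storage.flow-control]` table of the TiKV configuration file. A minimal sketch of the section, using the default values documented here (the exact file layout follows the TiKV TOML configuration convention):

```toml
[storage.flow-control]
# Enable flow control at the scheduler layer; when enabled, TiKV disables
# the write stall mechanism of KvDB and of RaftDB (excluding memtable).
enable = true
# Flow control starts to work when KvDB memtables reach this count.
memtables-threshold = 5
# Flow control starts to work when KvDB L0 files reach this count.
l0-files-threshold = 9
# Start rejecting some writes (`ServerIsBusy`) at this pending compaction size.
soft-pending-compaction-bytes-limit = "192GB"
# Reject all writes (`ServerIsBusy`) at this pending compaction size.
hard-pending-compaction-bytes-limit = "1024GB"
```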

## storage.io-rate-limit

Configuration items related to the I/O rate limiter.