Replace Chinese punctuation #8940

Merged
merged 1 commit on Jun 14, 2022
2 changes: 1 addition & 1 deletion agg-distinct-optimization.md
@@ -9,7 +9,7 @@ This document introduces the `distinct` optimization in the TiDB query optimizer

## `DISTINCT` modifier in `SELECT` statements

The `DISTINCT` modifier specifies removal of duplicate rows from the result set. `SELECT DISTINCT` is transformed to `GROUP BY`, for example：
The `DISTINCT` modifier specifies removal of duplicate rows from the result set. `SELECT DISTINCT` is transformed to `GROUP BY`, for example:

```sql
mysql> explain SELECT DISTINCT a from t;
10 changes: 5 additions & 5 deletions alert-rules.md
@@ -234,7 +234,7 @@ This section gives the alert rules for the PD component.

* Description:

The number of Region replicas is smaller than the value of `max-replicas`. When a TiKV machine is down and its downtime exceeds `max-down-time`, it usually leads to missing replicas for some Regions during a period of time.
The number of Region replicas is smaller than the value of `max-replicas`. When a TiKV machine is down and its downtime exceeds `max-down-time`, it usually leads to missing replicas for some Regions during a period of time.

* Solution:

@@ -425,7 +425,7 @@ This section gives the alert rules for the TiKV component.

* Alert rule:

`sum(increase(tikv_gcworker_gc_tasks_vec{task="gc"}[1d])) < 1 and (sum(increase(tikv_gc_compaction_filter_perform[1d])) < 1 and sum(increase(tikv_engine_event_total{db="kv", cf="write", type="compaction"}[1d])) >= 1)`
`sum(increase(tikv_gcworker_gc_tasks_vec{task="gc"}[1d])) < 1 and (sum(increase(tikv_gc_compaction_filter_perform[1d])) < 1 and sum(increase(tikv_engine_event_total{db="kv", cf="write", type="compaction"}[1d])) >= 1)`

* Description:

@@ -435,7 +435,7 @@ This section gives the alert rules for the TiKV component.

1. Perform `SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME = "tikv_gc_leader_desc"` to locate the `tidb-server` corresponding to the GC leader;
2. View the log of the `tidb-server`, and grep gc_worker tidb.log;
3. If you find that the GC worker has been resolving locks (the last log is "start resolve locks") or deleting ranges (the last log is start delete {number} ranges) during this time, it means the GC process is running normally. Otherwise, contact [support@pingcap.com](mailto:support@pingcap.com) to resolve this issue.
3. If you find that the GC worker has been resolving locks (the last log is "start resolve locks") or deleting ranges (the last log is "start delete {number} ranges") during this time, it means the GC process is running normally. Otherwise, contact [support@pingcap.com](mailto:support@pingcap.com) to resolve this issue.

### Critical-level alerts

@@ -633,7 +633,7 @@ This section gives the alert rules for the TiKV component.

* Alert rule:

`histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type=tick}[1m])) by (le, instance, type)) > 2`
`histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, instance, type)) > 2`

* Description:

@@ -751,7 +751,7 @@ This section gives the alert rules for the TiKV component.

* Solution:

The speed of splitting Regions is slower than the write speed. To alleviate this issue, youd better update TiDB to a version that supports batch-split (>= 2.1.0-rc1). If it is not possible to update temporarily, you can use `pd-ctl operator add split-region <region_id> --policy=approximate` to manually split Regions.
The speed of splitting Regions is slower than the write speed. To alleviate this issue, you'd better update TiDB to a version that supports batch-split (>= 2.1.0-rc1). If it is not possible to update temporarily, you can use `pd-ctl operator add split-region <region_id> --policy=approximate` to manually split Regions.

## TiFlash alert rules

2 changes: 1 addition & 1 deletion benchmark/benchmark-tidb-using-sysbench.md
@@ -27,7 +27,7 @@ There are multiple Column Families on TiKV cluster which are mainly used to stor

Default CF : Write CF = 4 : 1

Configuring the block cache of RocksDB on TiKV should be based on the machines memory size, in order to make full use of the memory. To deploy a TiKV cluster on a 40GB virtual machine, it is recommended to configure the block cache as follows:
Configuring the block cache of RocksDB on TiKV should be based on the machine's memory size, in order to make full use of the memory. To deploy a TiKV cluster on a 40GB virtual machine, it is recommended to configure the block cache as follows:

```yaml
server_configs:
2 changes: 1 addition & 1 deletion benchmark/online-workloads-and-add-index-operations.md
@@ -30,7 +30,7 @@ This test runs in a Kubernetes cluster deployed with 3 TiDB instances, 3 TiKV in
| TiKV | `4151dc8878985df191b47851d67ca21365396133` |
| PD | `811ce0b9a1335d1b2a049fd97ef9e186f1c9efc1` |

Sysbench version：1.0.17
Sysbench version: 1.0.17

### TiDB parameter configuration

2 changes: 1 addition & 1 deletion benchmark/v4.0-performance-benchmarking-with-tpcc.md
@@ -105,7 +105,7 @@ set global tidb_disable_txn_auto_retry=0;

2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data.

1. Compile BenchmarkSQL：
1. Compile BenchmarkSQL:

{{< copyable "bash" >}}

2 changes: 1 addition & 1 deletion benchmark/v5.0-performance-benchmarking-with-tpcc.md
@@ -122,7 +122,7 @@ set global tidb_enable_clustered_index = 1;

2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data.

1. Compile BenchmarkSQL：
1. Compile BenchmarkSQL:

{{< copyable "bash" >}}

6 changes: 3 additions & 3 deletions best-practices/java-app-best-practices.md
@@ -84,7 +84,7 @@ This section introduces parameters related to `Prepare`.

##### `useServerPrepStmts`

`useServerPrepStmts` is set to `false` by default, that is, even if you use the Prepare API, the prepare operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`.
`useServerPrepStmts` is set to `false` by default, that is, even if you use the Prepare API, the "prepare" operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`.

To verify that this setting already takes effect, you can do:

@@ -129,7 +129,7 @@ To verify that this setting already takes effect, you can do:
While processing batch writes, it is recommended to configure `rewriteBatchedStatements=true`. After using `addBatch()` or `executeBatch()`, JDBC still sends SQL one by one by default, for example:

```java
pstmt = prepare(insert into t (a) values(?));
pstmt = prepare("insert into t (a) values(?)");
pstmt.setInt(1, 10);
pstmt.addBatch();
pstmt.setInt(1, 11);
@@ -198,7 +198,7 @@ In addition, because of a [client bug](https://bugs.mysql.com/bug.php?id=96623),

Through monitoring, you might notice that although the application only performs `INSERT` operations to the TiDB cluster, there are a lot of redundant `SELECT` statements. Usually this happens because JDBC sends some SQL statements to query the settings, for example, `select @@session.transaction_read_only`. These SQL statements are useless for TiDB, so it is recommended that you configure `useConfigs=maxPerformance` to avoid extra overhead.

`useConfigs=maxPerformance` configuration includes a group of configurations：
`useConfigs=maxPerformance` configuration includes a group of configurations:

```ini
cacheServerConfiguration=true
6 changes: 3 additions & 3 deletions best-practices/pd-scheduling-best-practices.md
@@ -139,8 +139,8 @@ You can use store commands of pd-ctl to query balance status of each store.

The **Grafana PD/Statistics - hotspot** page shows the metrics about hot regions, among which:

- Hot write regions leader/peer distribution: the leader/peer distribution in hot write regions
- Hot read regions leader distribution: the leader distribution in hot read regions
- Hot write region's leader/peer distribution: the leader/peer distribution in hot write regions
- Hot read region's leader distribution: the leader distribution in hot read regions

You can also query the status of hot regions using pd-ctl with the following commands:

@@ -297,4 +297,4 @@ If a TiKV node fails, PD defaults to setting the corresponding node to the **down**

Practically, if a node failure is considered unrecoverable, you can immediately take it offline. This makes PD replenish replicas soon in another node and reduces the risk of data loss. In contrast, if a node is considered recoverable, but the recovery cannot be done in 30 minutes, you can temporarily adjust `max-store-down-time` to a larger value to avoid unnecessary replenishment of the replicas and resources waste after the timeout.

In TiDB v5.2.0, TiKV introduces the mechanism of slow TiKV node detection. By sampling the requests in TiKV, this mechanism works out a score ranging from 1 to 100. A TiKV node with a score higher than or equal to 80 is marked as slow. You can add [`evict-slow-store-scheduler`](/pd-control.md#scheduler-show--add--remove--pause--resume--config) to detect and schedule slow nodes. If only one TiKV is detected as slow, and the slow score reaches the upper limit (100 by default), the leader in this node will be evicted (similar to the effect of `evict-leader-scheduler`).
In TiDB v5.2.0, TiKV introduces the mechanism of slow TiKV node detection. By sampling the requests in TiKV, this mechanism works out a score ranging from 1 to 100. A TiKV node with a score higher than or equal to 80 is marked as slow. You can add [`evict-slow-store-scheduler`](/pd-control.md#scheduler-show--add--remove--pause--resume--config) to detect and schedule slow nodes. If only one TiKV is detected as slow, and the slow score reaches the upper limit (100 by default), the leader in this node will be evicted (similar to the effect of `evict-leader-scheduler`).
2 changes: 1 addition & 1 deletion best-practices/three-nodes-hybrid-deployment.md
@@ -41,7 +41,7 @@ tikv:
gc.max-write-bytes-per-sec: 300K
rocksdb.max-background-jobs: 3
rocksdb.max-sub-compactions: 1
rocksdb.rate-bytes-per-sec: 200M
rocksdb.rate-bytes-per-sec: "200M"

tidb:
performance.max-procs: 8
2 changes: 1 addition & 1 deletion br/br-batch-create-table.md
@@ -59,7 +59,7 @@ This section describes the test information about the Batch Create Table feature
The test result is as follows:

```
[2022/03/12 22:37:49.060 +08:00] [INFO] [collector.go:67] ["Full restore success summary"] [total-ranges=751760] [ranges-succeed=751760] [ranges-failed=0] [split-region=1h33m18.078448449s] [restore-ranges=542693] [total-take=1h41m35.471476438s] [restore-data-size(after-compressed)=8.337TB] [Size=8336694965072] [BackupTS=431773933856882690] [total-kv=148015861383] [total-kv-size=16.16TB] [average-speed=2.661GB/s]
'[2022/03/12 22:37:49.060 +08:00] [INFO] [collector.go:67] ["Full restore success summary"] [total-ranges=751760] [ranges-succeed=751760] [ranges-failed=0] [split-region=1h33m18.078448449s] [restore-ranges=542693] [total-take=1h41m35.471476438s] [restore-data-size(after-compressed)=8.337TB] [Size=8336694965072] [BackupTS=431773933856882690] [total-kv=148015861383] [total-kv-size=16.16TB] [average-speed=2.661GB/s]'
```

From the test result, you can see that the average speed of restoring one TiKV instance is as high as 181.65 MB/s (which equals to `average-speed`/`tikv_count`).
2 changes: 1 addition & 1 deletion br/use-br-command-line-tool.md
@@ -482,7 +482,7 @@ br restore full -f 'mysql.usertable' -s $external_storage_url --ratelimit 128
> Although you can back up system tables (such as `mysql.tidb`) using the BR tool, BR ignores the following system tables even if you use the `--filter` setting to perform the restoration:
>
> - Statistical information tables (`mysql.stat_*`)
> - System variable tables (`mysql.tidb`、`mysql.global_variables`)
> - System variable tables (`mysql.tidb`, `mysql.global_variables`)
> - User information tables (such as `mysql.user` and `mysql.columns_priv`)
> - [Other system tables](https://github.com/pingcap/tidb/blob/v5.4.0/br/pkg/restore/systable_restore.go#L31)

2 changes: 1 addition & 1 deletion choose-index.md
@@ -74,7 +74,7 @@ mysql> SHOW WARNINGS;

Skyline-pruning is a heuristic filtering rule for indexes, which can reduce the probability of wrong index selection caused by wrong estimation. To judge an index, the following three dimensions are needed:

- How many access conditions are covered by the indexed columns. An access condition is a where condition that can be converted to a column range. And the more access conditions an indexed column set covers, the better it is in this dimension.
- How many access conditions are covered by the indexed columns. An "access condition" is a where condition that can be converted to a column range. And the more access conditions an indexed column set covers, the better it is in this dimension.

- Whether it needs to retrieve rows from a table when you select the index to access the table (that is, the plan generated by the index is IndexReader operator or IndexLookupReader operator). Indexes that do not retrieve rows from a table are better on this dimension than indexes that do. If both indexes need TiDB to retrieve rows from the table, compare how many filtering conditions are covered by the indexed columns. Filtering conditions mean the `where` condition that can be judged based on the index. If the column set of an index covers more access conditions, the smaller the number of retrieved rows from a table, and the better the index is in this dimension.

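To make the "access condition" idea in the hunk above concrete, here is a minimal sketch; the table and index are hypothetical and not taken from the changed file.

```sql
-- Hypothetical table used only for illustration.
CREATE TABLE t (a INT, b INT, c VARCHAR(10), KEY idx_a_b (a, b));

-- `a = 1` and `b > 10` can be converted into ranges on idx_a_b, so they are
-- access conditions for that index. `c LIKE '%x%'` cannot be converted to a
-- column range, so it only acts as a filtering condition.
SELECT * FROM t WHERE a = 1 AND b > 10 AND c LIKE '%x%';
```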
2 changes: 1 addition & 1 deletion clinic/clinic-data-instruction-for-tiup.md
@@ -65,7 +65,7 @@ This section lists the types of diagnostic data that can be collected by Diag fr
| :------ | :------ |:-------- |
| Log | `tiflash.log` | `--include=log` |
| Error log | `tiflash_stderr.log` | `--include=log` |
| Configuration file | `tiflash-learner.toml`、`tiflash-preprocessed.toml`、`tiflash.toml` | `--include=config` |
| Configuration file | `tiflash-learner.toml`, `tiflash-preprocessed.toml`, `tiflash.toml` | `--include=config` |
| Real-time configuration | `config.json` | `--include=config` |
| Performance data | `cpu_profile.proto` | `--include=perf` |

4 changes: 2 additions & 2 deletions develop/dev-guide-connection-parameters.md
@@ -149,7 +149,7 @@ This section introduces parameters related to `Prepare`.

- **useServerPrepStmts**

**useServerPrepStmts** is set to `false` by default, that is, even if you use the Prepare API, the prepare operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`.
**useServerPrepStmts** is set to `false` by default, that is, even if you use the Prepare API, the "prepare" operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`.

To verify that this setting already takes effect, you can do:

@@ -265,7 +265,7 @@ In addition, because of a [client bug](https://bugs.mysql.com/bug.php?id=96623),

Through monitoring, you might notice that although the application only performs `INSERT` operations to the TiDB cluster, there are a lot of redundant `SELECT` statements. Usually this happens because JDBC sends some SQL statements to query the settings, for example, `select @@session.transaction_read_only`. These SQL statements are useless for TiDB, so it is recommended that you configure `useConfigs=maxPerformance` to avoid extra overhead.

`useConfigs=maxPerformance` configuration includes a group of configurations：
`useConfigs=maxPerformance` configuration includes a group of configurations:

```ini
cacheServerConfiguration=true
6 changes: 3 additions & 3 deletions develop/dev-guide-insert-data.md
@@ -20,15 +20,15 @@ Before reading this document, you need to prepare the following:

There are two ways to insert multiple rows of data. For example, if you need to insert **3** players' data.

- A **multi-line insertion statement**：
- A **multi-line insertion statement**:

{{< copyable "sql" >}}

```sql
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (1, 1000, 1), (2, 230, 2), (3, 300, 5);
```

- Multiple **single-line insertion statements**：
- Multiple **single-line insertion statements**:

{{< copyable "sql" >}}

@@ -160,7 +160,7 @@ In this case, you **cannot** use SQL like the following to insert:
INSERT INTO `bookshop`.`users` (`id`, `balance`, `nickname`) VALUES (1, 0.00, 'nicky');
```

An error will occur：
An error will occur:

```
ERROR 8216 (HY000): Invalid auto random: Explicit insertion on auto_random column is disabled. Try to set @@allow_auto_random_explicit_insert = true.
```
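As the error message above suggests, explicit inserts into an `AUTO_RANDOM` column work once the named session variable is enabled. A minimal sketch under that assumption (normally you would simply omit the `id` column and let TiDB generate it):

```sql
-- Allow explicit values on an AUTO_RANDOM column for the current session,
-- as suggested by the error message.
SET @@allow_auto_random_explicit_insert = true;

-- The insert from the example above now succeeds with an explicit id.
INSERT INTO `bookshop`.`users` (`id`, `balance`, `nickname`) VALUES (1, 0.00, 'nicky');
```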
2 changes: 1 addition & 1 deletion develop/dev-guide-join-tables.md
@@ -5,7 +5,7 @@ summary: This document describes how to use multi-table join queries.

# Multi-table Join Queries

In many scenarios，you need to use one query to get data from multiple tables. You can use the `JOIN` statement to combine the data from two or more tables.
In many scenarios, you need to use one query to get data from multiple tables. You can use the `JOIN` statement to combine the data from two or more tables.

## Join types

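As a quick illustration of combining data from two or more tables with `JOIN`, here is a minimal sketch; the `authors`, `book_authors`, and `books` tables are illustrative assumptions, not taken from this hunk.

```sql
-- Inner join across three illustrative tables: list each book with its authors.
SELECT a.name, b.title
FROM authors a
JOIN book_authors ba ON ba.author_id = a.id
JOIN books b ON b.id = ba.book_id;
```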
6 changes: 3 additions & 3 deletions develop/dev-guide-sql-development-specification.md
@@ -29,17 +29,17 @@ This document introduces some general development specifications for using SQL.
```sql
SELECT gmt_create
FROM ...
WHERE DATE_FORMAT(gmt_create，'%Y%m%d %H:%i:%s') = '20090101 00:00:0'
WHERE DATE_FORMAT(gmt_create, '%Y%m%d %H:%i:%s') = '20090101 00:00:0'
```

Recommended:

{{< copyable "sql" >}}

```sql
SELECT DATE_FORMAT(gmt_create，'%Y%m%d %H:%i:%s')
SELECT DATE_FORMAT(gmt_create, '%Y%m%d %H:%i:%s')
FROM .. .
WHERE gmt_create = str_to_date('20090101 00:00:00'，'%Y%m%d %H:%i:s')
WHERE gmt_create = str_to_date('20090101 00:00:00', '%Y%m%d %H:%i:s')
```

## Other specifications
4 changes: 2 additions & 2 deletions develop/dev-guide-transaction-restraints.md
@@ -17,7 +17,7 @@ The isolation levels supported by TiDB are **RC (Read Committed)** and **SI (Sna

The `SI` isolation level of TiDB can avoid **Phantom Reads**, but the `RR` in ANSI/ISO SQL standard cannot.

The following two examples show what **phantom reads** is.
The following two examples show what **phantom reads** is.

- Example 1: **Transaction A** first gets `n` rows according to the query, and then **Transaction B** changes `m` rows other than these `n` rows or adds `m` rows that match the query of **Transaction A**. When **Transaction A** runs the query again, it finds that there are `n+m` rows that match the condition. It is like a phantom, so it is called a **phantom read**.

@@ -154,7 +154,7 @@ public class EffectWriteSkew {
}
```

SQL log：
SQL log:

{{< copyable "sql" >}}

2 changes: 1 addition & 1 deletion develop/dev-guide-transaction-troubleshoot.md
@@ -38,7 +38,7 @@ In TiDB pessimistic transaction mode, if two clients execute the following state

After client-B encounters a deadlock error, TiDB automatically rolls back the transaction in client-B. Updating `id=2` in client-A will be executed successfully. You can then run `COMMIT` to finish the transaction.

### Solution 1：avoid deadlocks
### Solution 1: avoid deadlocks

To get better performance, you can avoid deadlocks at the application level by adjusting the business logic or schema design. In the example above, if client-B also uses the same update order as client-A, that is, they update books with `id=1` first, and then update books with `id=2`. The deadlock can then be avoided:

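The example that follows this sentence is cut off in this view. As a sketch of the consistent update order it describes, where the `books` table and `id` values follow the surrounding text and the `stock` column is an illustrative assumption:

```sql
-- Both client-A and client-B update rows in the same order (id = 1, then id = 2),
-- so neither transaction waits for a lock the other holds in the reverse order.
BEGIN;
UPDATE books SET stock = stock - 1 WHERE id = 1;
UPDATE books SET stock = stock - 1 WHERE id = 2;
COMMIT;
```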
2 changes: 1 addition & 1 deletion develop/dev-guide-update-data.md
@@ -277,7 +277,7 @@ In each iteration, `SELECT` queries in order of the primary key. It selects prim

In Java (JDBC), a bulk-update application might be similar to the following:

**Code**
**Code:**

{{< copyable "" >}}

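The Java code itself is cut off in this view. As a SQL-level sketch of the iteration pattern described above (the table name, columns, and batch size of 1000 are illustrative assumptions):

```sql
-- One iteration of the batch-update loop: fetch the next batch of primary keys
-- in primary-key order, starting after the last key handled previously.
SELECT id FROM t WHERE id > 0 ORDER BY id LIMIT 1000;   -- 0 = last id of the previous batch

-- Update only the rows inside that key range, record the largest id returned,
-- then repeat until the SELECT returns no rows.
UPDATE t SET col = col + 1 WHERE id > 0 AND id <= 1000; -- bounds taken from the SELECT above
```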
4 changes: 2 additions & 2 deletions develop/dev-guide-use-common-table-expression.md
@@ -5,7 +5,7 @@ summary: Learn the CTE feature of TiDB, which help you write SQL statements more

# Common Table Expression

In some transaction scenarios, due to application complexity, you might need to write a single SQL statement of up to 2,000 lines. The statement probably contains a lot of aggregations and multi-level subquery nesting. Maintaining such a long SQL statement can be a developers nightmare.
In some transaction scenarios, due to application complexity, you might need to write a single SQL statement of up to 2,000 lines. The statement probably contains a lot of aggregations and multi-level subquery nesting. Maintaining such a long SQL statement can be a developer's nightmare.

To avoid such a long SQL statement, you can simplify queries by using [Views](/develop/dev-guide-use-views.md) or cache intermediate query results by using [Temporary tables](/develop/dev-guide-use-temporary-tables.md).

@@ -183,7 +183,7 @@ WITH RECURSIVE <query_name> AS (
SELECT ... FROM <query_name>;
```

A classic example is to generate a set of [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) with recursive CTE：
A classic example is to generate a set of [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) with recursive CTE:

{{< copyable "sql" >}}

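The copyable block is cut off in this view. As a hedged sketch of what a Fibonacci-generating recursive CTE typically looks like in MySQL-compatible syntax (the column names and the 10-term cutoff are illustrative, not taken from the document):

```sql
WITH RECURSIVE fib (n, fib_n, next_fib_n) AS (
  -- Anchor member: the first Fibonacci pair.
  SELECT 1, 0, 1
  UNION ALL
  -- Recursive member: shift the pair forward one step.
  SELECT n + 1, next_fib_n, fib_n + next_fib_n
  FROM fib
  WHERE n < 10   -- illustrative cutoff: stop after 10 terms
)
SELECT n, fib_n FROM fib;
```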