This repository has been archived by the owner on Sep 30, 2024. It is now read-only.

Fix misc spelling #1506

Open. Wants to merge 1 commit into base branch `master`.
12 changes: 6 additions & 6 deletions docker/resources/systemctl.py
@@ -1949,7 +1949,7 @@ def do_start_unit_from(self, conf):
                     service_result = "failed"
                     break
             if service_result in [ "success" ] and mainpid:
-                logg.debug("okay, wating on socket for %ss", timeout)
+                logg.debug("okay, waiting on socket for %ss", timeout)
                 results = self.wait_notify_socket(notify, timeout, mainpid)
                 if "MAINPID" in results:
                     new_pid = results["MAINPID"]
@@ -2756,7 +2756,7 @@ def get_substate_from(self, conf):
         else:
             return "dead"
     def is_failed_modules(self, *modules):
-        """ [UNIT]... -- check if these units are in failes state
+        """ [UNIT]... -- check if these units are in failed state
         implements True if any is-active = True """
         units = []
         results = []
@@ -3618,7 +3618,7 @@ def syntax_check_service(self, conf):
                 + "\n\t\t\tUse ' ; ' for multiple commands (ExecReloadPost or ExedReloadPre do not exist)", unit)
         if len(usedExecReload) > 0 and "/bin/kill " in usedExecReload[0]:
             logg.warning(" %s: the use of /bin/kill is not recommended for ExecReload as it is asychronous."
-                + "\n\t\t\tThat means all the dependencies will perform the reload simultanously / out of order.", unit)
+                + "\n\t\t\tThat means all the dependencies will perform the reload simultaneously / out of order.", unit)
         if conf.getlist("Service", "ExecRestart", []): #pragma: no cover
             logg.error(" %s: there no such thing as an ExecRestart (ignored)", unit)
         if conf.getlist("Service", "ExecRestartPre", []): #pragma: no cover
@@ -3854,7 +3854,7 @@ def enabled_default_system_services(self, sysv = "S", default_target = None, ign
                     default_services.append(unit)
         for folder in [ self.rc3_root_folder() ]:
             if not os.path.isdir(folder):
-                logg.warning("non-existant %s", folder)
+                logg.warning("non-existent %s", folder)
                 continue
             for unit in sorted(os.listdir(folder)):
                 path = os.path.join(folder, unit)
@@ -3960,7 +3960,7 @@ def init_modules(self, *modules):
            it was never enabled in the system.
            /// SPECIAL: when using --now then only the init-loop is started,
            with the reap-zombies function and waiting for an interrupt.
-           (and no unit is started/stoppped wether given or not).
+           (and no unit is started/stopped wether given or not).
        """
        if self._now:
            return self.init_loop_until_stop([])
@@ -4387,7 +4387,7 @@ def logg_debug(*msg): pass
     _o.add_option("--reverse", action="store_true",
         help="Show reverse dependencies with 'list-dependencies' (ignored)")
     _o.add_option("--job-mode", metavar="MODE",
-        help="Specifiy how to deal with already queued jobs, when queuing a new job (ignored)")
+        help="Specify how to deal with already queued jobs, when queuing a new job (ignored)")
     _o.add_option("--show-types", action="store_true",
         help="When showing sockets, explicitly show their type (ignored)")
     _o.add_option("-i","--ignore-inhibitors", action="store_true",
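Aside on the ExecReload warning touched above: the pattern it discourages is a unit file that reloads via an asynchronous kill signal. A minimal illustration (editor's sketch, not a file from this repository):

```ini
# Hypothetical unit file fragment. /bin/kill -HUP returns immediately,
# so the reload completes asynchronously and dependent units may
# observe it out of order -- exactly the caveat the warning describes.
[Service]
ExecStart=/usr/sbin/mydaemon
ExecReload=/bin/kill -HUP $MAINPID
```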
2 changes: 1 addition & 1 deletion docs/configuration-discovery-pseudo-gtid.md
@@ -12,7 +12,7 @@ See [Pseudo GTID](pseudo-gtid.md)
   "AutoPseudoGTID": true,
 }
 ```
-And you may ignore any other Pseudo-GTID related configuration (they will all be implicitly overriden by `orchestrator`).
+And you may ignore any other Pseudo-GTID related configuration (they will all be implicitly overridden by `orchestrator`).
 
 You will further need to grant the following on your MySQL servers:
 ```sql
2 changes: 1 addition & 1 deletion docs/deployment-shared-backend.md
@@ -54,7 +54,7 @@ To interact with orchestrator from shell/automation/scripts, you may choose to:
 - The [orchestrator command line](executing-via-command-line.md).
   - Deploy the `orchestrator` binary (you may use the `orchestrator-cli` distributed package) on any box from which you wish to interact with `orchestrator`.
   - Create `/etc/orchestrator.conf.json` on those boxes, populate with credentials. This file should generally be the same as for the `orchestrator` service boxes. If you're unsure, use exact same file content.
-  - The `orchestrator` binary will access the shared backend DB. Make sure to give it access. Typicaly this will be port `3306`.
+  - The `orchestrator` binary will access the shared backend DB. Make sure to give it access. Typically this will be port `3306`.
 
 It is OK to run `orchestrator` CLI even while the `orchestrator` service is operating, since they will all coordinate on the same backend DB.
 
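For orientation (editor's illustration, not part of this diff): a minimal `/etc/orchestrator.conf.json` pointing the CLI at the shared backend might look like the sketch below. The field names follow orchestrator's backend-DB settings; the host and credentials are placeholders.

```json
{
  "MySQLOrchestratorHost": "shared-backend.example.com",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orchestrator",
  "MySQLOrchestratorUser": "orchestrator_user",
  "MySQLOrchestratorPassword": "change_me"
}
```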
2 changes: 1 addition & 1 deletion docs/developers.md
@@ -1,6 +1,6 @@
 # Developers
 
-To build, test and contribute to `orchestrator`, please refer t othe following pages:
+To build, test and contribute to `orchestrator`, please refer to the following pages:
 
 - [Understanding CI](ci.md)
 - [Building and testing](build.md)
2 changes: 1 addition & 1 deletion docs/docker.md
@@ -40,7 +40,7 @@ file is bind mounted into container at `/etc/orchestrator.conf.json`
 * `ORC_USER`: defaults to `orc_server_user`
 * `ORC_PASSWORD`: defaults to `orc_server_password`
 
-To set these variables you could add these to an environment file where you add them like `key=value` (one pair per line). You can then pass this enviroment file to the docker command adding `--env-file=path/to/env-file` to the `docker run` command.
+To set these variables you could add these to an environment file where you add them like `key=value` (one pair per line). You can then pass this environment file to the docker command adding `--env-file=path/to/env-file` to the `docker run` command.
 
 ## Create package files
 
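To make the env-file mechanic concrete (editor's sketch; the image name is a placeholder, not taken from the docs):

```sh
# orchestrator.env -- one key=value pair per line:
#   ORC_USER=orc_server_user
#   ORC_PASSWORD=s3cr3t

docker run -d \
  --env-file=path/to/orchestrator.env \
  -p 3000:3000 \
  <orchestrator-image>
```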
2 changes: 1 addition & 1 deletion docs/high-availability.md
@@ -44,7 +44,7 @@ This setup provides semi HA for `orchestrator`. Two variations available:
 - The proxy always directs to same server (e.g. `first` algorithm for `HAProxy`) unless that server is dead.
 - Death of the active master causes `orchestrator` to talk to other master, which may be somewhat behind. `orchestrator` will typically self reapply the missing changes by nature of its continuous discovery.
 - `orchestrator` queries guarantee `STATEMENT` based replication will not cause duplicate errors, and master-master setup will always achieve consistency.
-- `orchestrator` will be able to recover the death of a backend master even if in the middle of runnign a recovery (recovery will re-initiate on alternate master)
+- `orchestrator` will be able to recover the death of a backend master even if in the middle of running a recovery (recovery will re-initiate on alternate master)
 - **Split brain is possible**. Depending on your setup, physical locations, type of proxy, there can be different `orchestrator` service nodes speaking to different backend `MySQL` servers. This scenario can lead two two `orchestrator` services which consider themselves as "active", both of which will run failovers independently, which would lead to topology corruption.
 
 To access your `orchestrator` service you may speak to any healthy node.
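As an editor's illustration of the `first` algorithm mentioned above (not from the docs; hostnames are placeholders), an HAProxy backend that always prefers the first live backend-DB server might look like:

```
backend orchestrator_backend_db
    mode tcp
    balance first                     # always use the first listed live server
    server db1 db1.example.com:3306 check
    server db2 db2.example.com:3306 check
```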
2 changes: 1 addition & 1 deletion docs/risks.md
@@ -7,7 +7,7 @@ Most of the time `orchestrator` only reads status from your topologies. Default
 You may use `orchestrator` to refactor your topologies: move replicas around and change the replication tree. `orchestrator` will do its best to:
 
 1. Make sure you only move an instance to a location where it is valid for it to replicate (e.g. that you don't put a 5.5 server below a 5.6 server)
-2. Make sure you move an instance at the right time (ie the instance and whichever affected servers are not lagging badly, so that operation can compeletely in a timely manner).
+2. Make sure you move an instance at the right time (ie the instance and whichever affected servers are not lagging badly, so that operation can completely in a timely manner).
 3. Do the math correctly: stop the replica at the right time, roll it forward to the right position, `CHANGE MASTER` to the correct location & position.
 
 The above is well tested.
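For reference (editor's example; hostnames are hypothetical), such a refactoring is typically driven through commands like `relocate`, letting orchestrator pick the safest move method:

```sh
orchestrator -c relocate -i replica.to.move.example.com -d new.master.example.com
```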
6 changes: 3 additions & 3 deletions docs/using-the-web-api.md
@@ -139,11 +139,11 @@ The structure of an Instance evolves and documentation will always fall behind.
 * `ReplicationLagSeconds`: when `ReplicationLagQuery` provided, the computed replica lag; otherwise same as `SecondsBehindMaster`
 * `Replicas`: list of MySQL replicas _hostname & port_)
 * `ClusterName`: name of cluster this instance is associated with; uniquely identifies cluster
-* `DataCenter`: (metadata) name of data center, infered by `DataCenterPattern` config variable
-* `PhysicalEnvironment`: (metadata) name of environment, infered by `PhysicalEnvironmentPattern` config variable
+* `DataCenter`: (metadata) name of data center, inferred by `DataCenterPattern` config variable
+* `PhysicalEnvironment`: (metadata) name of environment, inferred by `PhysicalEnvironmentPattern` config variable
 * `ReplicationDepth`: distance from the master (master is `0`, direct replica is `1` and so on)
 * `IsCoMaster`: true when this instanceis part of a master-master pair
-* `IsLastCheckValid`: whether last attempt at reading this instane succeeeded
+* `IsLastCheckValid`: whether last attempt at reading this instance succeeeded
 * `IsUpToDate`: whether this data is up to date
 * `IsRecentlyChecked`: whether a read attempt on this instance has been recently made
 * `SecondsSinceLastSeen`: time elapsed since last successfully accessed this instance
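To visualize the fields documented above, an abbreviated Instance fragment might look like the following (editor's sketch; values and exact field shapes are illustrative, not captured from a live API):

```json
{
  "ClusterName": "mycluster.example.com:3306",
  "DataCenter": "dc1",
  "PhysicalEnvironment": "prod",
  "ReplicationDepth": 1,
  "IsCoMaster": false,
  "IsLastCheckValid": true,
  "IsUpToDate": true,
  "IsRecentlyChecked": true,
  "SecondsSinceLastSeen": 3
}
```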
6 changes: 3 additions & 3 deletions go/agent/agent_dao.go
@@ -63,7 +63,7 @@ func InitHttpClient() {
     httpClient = &http.Client{Transport: httpTransport}
 }
 
-// httpGet is a convenience method for getting http response from URL, optionaly skipping SSL cert verification
+// httpGet is a convenience method for getting http response from URL, optionally skipping SSL cert verification
 func httpGet(url string) (resp *http.Response, err error) {
     return httpClient.Get(url)
 }
@@ -683,7 +683,7 @@ func executeSeed(seedId int64, targetHostname string, sourceHostname string) err
 
     seedStateId, _ = submitSeedStateEntry(seedId, fmt.Sprintf("Checking MySQL status on target %s", targetHostname), "")
     if targetAgent.MySQLRunning {
-        return updateSeedStateEntry(seedStateId, errors.New("MySQL is running on target host. Cowardly refusing to proceeed. Please stop the MySQL service"))
+        return updateSeedStateEntry(seedStateId, errors.New("MySQL is running on target host. Cowardly refusing to proceed. Please stop the MySQL service"))
     }
 
     seedStateId, _ = submitSeedStateEntry(seedId, fmt.Sprintf("Looking up available snapshots on source %s", sourceHostname), "")
@@ -711,7 +711,7 @@ func executeSeed(seedId int64, targetHostname string, sourceHostname string) err
         return updateSeedStateEntry(seedStateId, err)
     }
 
-    seedStateId, _ = submitSeedStateEntry(seedId, fmt.Sprintf("Aquiring target host datadir free space on %s", targetHostname), "")
+    seedStateId, _ = submitSeedStateEntry(seedId, fmt.Sprintf("Acquiring target host datadir free space on %s", targetHostname), "")
     targetAgent, err = GetAgent(targetHostname)
     if err != nil {
         return updateSeedStateEntry(seedStateId, err)
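Relatedly, the "optionally skipping SSL cert verification" behavior named in the fixed comment is, in standard `net/http`, wired through the transport's TLS config. A self-contained sketch (editor's illustration; orchestrator's own `InitHttpClient` wiring may differ):

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// newHTTPClient returns a client that skips TLS certificate verification
// when skipVerify is true -- the usual mechanism behind "optionally
// skipping SSL cert verification".
func newHTTPClient(skipVerify bool) *http.Client {
	transport := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: skipVerify},
	}
	return &http.Client{Transport: transport}
}
```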
26 changes: 13 additions & 13 deletions go/app/command_help.go
@@ -208,7 +208,7 @@ func init() {
 CommandHelp["match-replicas"] = `
 Matches all replicas of a given instance under another (destination) instance. This is a (faster) shortcut
 to matching said replicas one by one under the destination instance. In fact, this bulk operation is highly
-optimized and can execute in orders of magnitue faster, depeding on the nu,ber of replicas involved and their
+optimized and can execute in orders of magnitue faster, depending on the nu,ber of replicas involved and their
 respective position behind the instance (the more replicas, the more savings).
 The instance itself may be crashed or inaccessible. It is not contacted throughout the operation. Examples:
 
@@ -254,7 +254,7 @@ func init() {
 local master of its siblings, using Pseudo-GTID. It is uncertain that there *is* a replica that will be able to
 become master to all its siblings. But if there is one, orchestrator will pick such one. There are many
 constraints, most notably the replication positions of all replicas, whether they use log_slave_updates, and
-otherwise version compatabilities etc.
+otherwise version compatibilities etc.
 As many replicas that can be regrouped under promoted slves are operated on. The rest are untouched.
 This command is useful in the event of a crash. For example, in the event that a master dies, this operation
 can promote a candidate replacement and set up the remaining topology to correctly replicate from that
@@ -324,7 +324,7 @@ func init() {
 Undo a detach-replica operation. Reverses the binlog change into the original values, and
 resumes replication. Example:
 
-orchestrator -c reattach-replica -i detahced.replica.whose.replication.will.amend.com
+orchestrator -c reattach-replica -i detached.replica.whose.replication.will.amend.com
 
 Issuing this on an attached (i.e. normal) replica will do nothing.
 `
@@ -340,7 +340,7 @@ func init() {
 Undo a detach-replica-master-host operation. Reverses the hostname change into the original value, and
 resumes replication. Example:
 
-orchestrator -c reattach-replica-master-host -i detahced.replica.whose.replication.will.amend.com
+orchestrator -c reattach-replica-master-host -i detached.replica.whose.replication.will.amend.com
 
 Issuing this on an attached (i.e. normal) replica will do nothing.
 `
@@ -397,7 +397,7 @@ func init() {
 Get binlog file:pos of entry given by --pattern (exact full match, not a regular expression) in a given instance.
 This will search the instance's binary logs starting with most recent, and terminate as soon as an exact match is found.
 The given input is not a regular expression. It must fully match the entry (not a substring).
-This is most useful when looking for uniquely identifyable values, such as Pseudo-GTID. Example:
+This is most useful when looking for uniquely identifiable values, such as Pseudo-GTID. Example:
 
 orchestrator -c find-binlog-entry -i instance.to.search.on.com --pattern "insert into my_data (my_column) values ('distinct_value_01234_56789')"
 
@@ -480,7 +480,7 @@ func init() {
 -i not given, implicitly assumed local hostname
 
 Instance must be already known to orchestrator. Topology is generated by orchestrator's mapping
-and not from synchronuous investigation of the instances. The generated topology may include
+and not from synchronous investigation of the instances. The generated topology may include
 instances that are dead, or whose replication is broken.
 `
 CommandHelp["all-instances"] = `
@@ -612,7 +612,7 @@ func init() {
 assuming some_alias is a known cluster alias (see ClusterNameToAlias or DetectClusterAliasQuery configuration)
 `
 CommandHelp["instance-status"] = `
-Output short status on a given instance (name, replication status, noteable configuration). Example2:
+Output short status on a given instance (name, replication status, notable configuration). Example2:
 
 orchestrator -c instance-status -i instance.to.investigate.com
 
@@ -631,7 +631,7 @@ func init() {
 
 CommandHelp["discover"] = `
 Request that orchestrator cotacts given instance, reads its status, and upsert it into
-orchestrator's respository. Examples:
+orchestrator's repository. Examples:
 
 orchestrator -c discover -i instance.to.discover.com:3306
 
@@ -655,7 +655,7 @@ func init() {
 `
 CommandHelp["begin-maintenance"] = `
 Request a maintenance lock on an instance. Topology changes require placing locks on the minimal set of
-affected instances, so as to avoid an incident of two uncoordinated operations on a smae instance (leading
+affected instances, so as to avoid an incident of two uncoordinated operations on a same instance (leading
 to possible chaos). Locks are placed in the backend database, and so multiple orchestrator instances are safe.
 Operations automatically acquire locks and release them. This command manually acquires a lock, and will
 block other operations on the instance until lock is released.
@@ -680,7 +680,7 @@ func init() {
 Mark an instance as downtimed. A downtimed instance is assumed to be taken care of, and recovery-analysis does
 not apply for such an instance. As result, no recommendation for recovery, and no automated-recovery are issued
 on a downtimed instance.
-Downtime is different than maintanence in that it places no lock (mainenance uses an exclusive lock on the instance).
+Downtime is different than maintenance in that it places no lock (mainenance uses an exclusive lock on the instance).
 It is OK to downtime an instance that is already downtimed -- the new begin-downtime command will override whatever
 previous downtime attributes there were on downtimes instance.
 Note that orchestrator automatically assumes downtime to be expired after MaintenanceExpireMinutes (hard coded value).
@@ -801,17 +801,17 @@ func init() {
 CommandHelp["register-candidate"] = `
 Indicate that a specific instance is a preferred candidate for master promotion. Upon a dead master
 recovery, orchestrator will do its best to promote instances that are marked as candidates. However
-orchestrator cannot guarantee this will always work. Issues like version compatabilities, binlog format
+orchestrator cannot guarantee this will always work. Issues like version compatibilities, binlog format
 etc. are limiting factors.
 You will want to mark an instance as a candidate when: it is replicating directly from the master, has
 binary logs and log_slave_updates is enabled, uses same binlog_format as its siblings, compatible version
 as its siblings. If you're using DataCenterPattern & PhysicalEnvironmentPattern (see configuration),
 you would further wish to make sure you have a candidate in each data center.
 Orchestrator first promotes the best-possible replica, and only then replaces it with your candidate,
-and only if both in same datcenter and physical enviroment.
+and only if both in same datcenter and physical environment.
 An instance needs to continuously be marked as candidate, so as to make sure orchestrator is not wasting
 time with stale instances. Orchestrator periodically clears candidate-registration for instances that have
-not been registeres for over CandidateInstanceExpireMinutes (see config).
+not been registers for over CandidateInstanceExpireMinutes (see config).
 Example:
 
 orchestrator -c register-candidate -i candidate.instance.com
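Two of the commands whose help text is touched above have their usage examples cut off by the diff view; typical invocations look like the following (editor's sketch based on the command descriptions; hostnames, duration, and the downtime flags are assumptions, not quoted from the diff):

```sh
# bulk-match all replicas of one instance under a destination instance:
orchestrator -c match-replicas -i instance.whose.replicas.move.com -d destination.master.com

# downtime an instance so that recovery-analysis ignores it:
orchestrator -c begin-downtime -i instance.to.silence.com --duration=30m --owner=dba --reason="maintenance"
```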
2 changes: 1 addition & 1 deletion go/cmd/orchestrator/main.go
@@ -32,7 +32,7 @@ import (
 
 var AppVersion, GitCommit string
 
-// main is the application's entry point. It will either spawn a CLI or HTTP itnerfaces.
+// main is the application's entry point. It will either spawn a CLI or HTTP interfaces.
 func main() {
     configFile := flag.String("config", "", "config file name")
     command := flag.String("c", "", "command, required. See full list of commands via 'orchestrator -c help'")