[MNT] fix several spelling mistakes (#5639)
Diagnoses #5638 

Depends on #5638
yarnabrina authored Jan 6, 2024
1 parent 14c33c7 commit 3fc99f6
Showing 25 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion examples/01c_forecasting_hierarchical_global.ipynb
Original file line number Diff line number Diff line change
@@ -2717,7 +2717,7 @@
"* focus on modeling individual products\n",
"* hierarchical information is provided as exgoneous information. \n",
"\n",
"For the M5 competition, winning solution used exogeneous features about the hierarchies like `\"dept_id\"`, `\"store_id\"` etc. to capture similarities and dissimilarities of the products. Other features include holiday events and snap days (specific assisstance program of US social security paid on certain days)."
"For the M5 competition, winning solution used exogeneous features about the hierarchies like `\"dept_id\"`, `\"store_id\"` etc. to capture similarities and dissimilarities of the products. Other features include holiday events and snap days (specific assistance program of US social security paid on certain days)."
]
},
{
2 changes: 1 addition & 1 deletion examples/partition_based_clustering.ipynb
@@ -134,7 +134,7 @@
" <br>\n",
" These three cluster initialisation algorithms have been implemented and can\n",
" be chosen to use when constructing either k-means or k-medoids partitioning\n",
" algorithms by parsing the string values 'random' for random iniitialisation,\n",
" algorithms by parsing the string values 'random' for random initialisation,\n",
" 'forgy' for forgy and 'k-means++' for k-means++.\n",
"\n",
"### Assignment (distance measure)\n",
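The notebook text above names three initialisation strategies (random, Forgy, k-means++). As a side note, k-means++ seeding can be sketched in plain numpy; this is an illustrative standalone helper with a made-up name, not sktime's implementation:

```python
import numpy as np

def kmeans_plusplus_init(X, n_clusters, rng=None):
    """Pick initial centres via k-means++: each new centre is sampled
    with probability proportional to its squared distance from the
    nearest centre chosen so far."""
    rng = np.random.default_rng(rng)
    n_samples = X.shape[0]
    centres = [X[rng.integers(n_samples)]]
    for _ in range(n_clusters - 1):
        # squared distance of every point to its closest existing centre
        d2 = np.min(
            ((X[:, None, :] - np.asarray(centres)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        # points already coincident with a centre get zero probability
        centres.append(X[rng.choice(n_samples, p=d2 / d2.sum())])
    return np.asarray(centres)
```

Because coincident points get zero sampling weight, two well-separated clusters are guaranteed to each contribute a centre, which is the spreading behaviour k-means++ is designed for.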
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -149,7 +149,7 @@ all_extras_pandas2 = [
]

# single-task dependencies, e.g., forecasting, classification, etc.
- # manually curated and intentionally smaller to avoid dependeny conflicts
+ # manually curated and intentionally smaller to avoid dependency conflicts
# names are identical with the names of the modules and estimator type strings
# dependency sets are selected to cover the most popular estimators in each module
# (this is a subjective choice, and may change over time as the ecosystem evolves,
2 changes: 1 addition & 1 deletion sktime/_contrib/notebooks/windows_installation.ipynb
@@ -340,7 +340,7 @@
"source": [
"# Section 4: Configuring `pre-commit` with PyCharm\n",
"\n",
"`pre-commit` is a very useful package for checking your code for simple stye errors at the commit stage. This is very useful when working on large collaborative projects as it allows code reviewers to focus on the function of new code rather than conformity to style. For example, consider the following code: \n",
"`pre-commit` is a very useful package for checking your code for simple style errors at the commit stage. This is very useful when working on large collaborative projects as it allows code reviewers to focus on the function of new code rather than conformity to style. For example, consider the following code: \n",
"\n",
"![21_pre-commit_examples.png](img/windows_installation/21_pre-commit_examples.png)\n",
"\n",
2 changes: 1 addition & 1 deletion sktime/annotation/clasp.py
@@ -191,7 +191,7 @@ class ClaSPSegmentation(BaseSeriesAnnotator):
fmt : str {"dense", "sparse"}, optional (default="sparse")
Annotation output format:
* If "sparse", a pd.Series of the found Change Points is returned
* If "dense", a pd.IndexSeries with the Segmenation of X is returned
* If "dense", a pd.IndexSeries with the Segmentation of X is returned
exclusion_radius : int
Exclusion Radius for change points to be non-trivial matches
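The relation between the "sparse" and "dense" output formats described in this docstring can be illustrated with a small hypothetical helper (not sktime's ClaSP code; the function name is made up for illustration):

```python
import numpy as np
import pandas as pd

def sparse_to_dense(change_points, n_timepoints):
    """Expand a sparse list of change-point positions into a dense
    per-timepoint segment-label series: segment 0 runs up to the first
    change point, segment 1 up to the second, and so on."""
    labels = np.zeros(n_timepoints, dtype=int)
    for seg_id, cp in enumerate(sorted(change_points), start=1):
        # everything from the change point onwards belongs to the next segment
        labels[cp:] = seg_id
    return pd.Series(labels)
```

So a single change point at index 4 in a series of length 8 yields the dense labelling `0 0 0 0 1 1 1 1`.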
2 changes: 1 addition & 1 deletion sktime/base/_base_panel.py
@@ -279,7 +279,7 @@ def _check_y(self, y=None, return_to_mtype=False):
y_inner : object of sktime compatible time series type
can be Series, Panel, Hierarchical
y_metadata : dict
- metadata of y, retured by check_is_scitype
+ metadata of y, returned by check_is_scitype
y_mtype : str, only returned if return_to_mtype=True
mtype of y_inner, after convert
"""
2 changes: 1 addition & 1 deletion sktime/classification/early_classification/base.py
@@ -644,7 +644,7 @@ def _check_y(self, y=None, return_to_mtype=False):
y_inner : object of sktime compatible time series type
can be Series, Panel, Hierarchical
y_metadata : dict
- metadata of y, retured by check_is_scitype
+ metadata of y, returned by check_is_scitype
y_mtype : str, only returned if return_to_mtype=True
mtype of y_inner, after convert
"""
2 changes: 1 addition & 1 deletion sktime/datasets/_readers_writers/arff.py
@@ -17,7 +17,7 @@
# ==================================================================================================


- # TODO: original author didnt add test for this function
+ # TODO: original author didn't add test for this function
# Refactor the nested loops
def load_from_arff_to_dataframe(
full_file_path_and_name,
2 changes: 1 addition & 1 deletion sktime/datasets/_readers_writers/long.py
@@ -8,7 +8,7 @@
from sktime.datatypes._panel._convert import from_long_to_nested


- # TODO: original author didnt add test for this function, for research purposes?
+ # TODO: original author didn't add test for this function, for research purposes?
def load_from_long_to_dataframe(full_file_path_and_name, separator=","):
"""Load data from a long format file into a Pandas DataFrame.
2 changes: 1 addition & 1 deletion sktime/datasets/_readers_writers/tsf.py
@@ -29,7 +29,7 @@ def _convert_tsf_to_hierarchical(
tsf file metadata
freq : str, optional
pandas compatible time frequency, by default None
- if not speciffied it's automatically mapped from the tsf frequency to a pandas
+ if not specified it's automatically mapped from the tsf frequency to a pandas
frequency
value_column_name: str, optional
The name of the column that contains the values, by default "series_value"
2 changes: 1 addition & 1 deletion sktime/datasets/_readers_writers/tsv.py
@@ -6,7 +6,7 @@
import pandas as pd


- # TODO: original author didnt add test for this function
+ # TODO: original author didn't add test for this function
def load_from_ucr_tsv_to_dataframe(
full_file_path_and_name, return_separate_X_and_y=True
):
2 changes: 1 addition & 1 deletion sktime/forecasting/trend/_pwl_trend_forecaster.py
@@ -12,7 +12,7 @@

class ProphetPiecewiseLinearTrendForecaster(_ProphetAdapter):
"""
- Forecast time series data with a piecwise linear trend, fitted via prophet.
+ Forecast time series data with a piecewise linear trend, fitted via prophet.
The forecaster uses Facebook's prophet algorithm [1]_ and extracts the piecewise
linear trend from it. Only hyper-parameters relevant for the trend modelling are
2 changes: 1 addition & 1 deletion sktime/forecasting/trend/tests/test_pwl_trend.py
@@ -60,7 +60,7 @@ def test_pred_errors_against_linear():
"""Check prediction performance on airline dataset.
For a small value of changepoint_prior_scale like 0.001 the
- ProphetPiecewiseLinearTrendForecaster must return a single straigth trendline.
+ ProphetPiecewiseLinearTrendForecaster must return a single straight trendline.
Raises
------
2 changes: 1 addition & 1 deletion sktime/param_est/compose/_func_fitter.py
@@ -43,7 +43,7 @@ class FunctionParamFitter(BaseParamFitter):
Examples
--------
- This class could be used to contruct a parameter estimator that
+ This class could be used to construct a parameter estimator that
selects a forecaster based on the input data's length. The
selected forecaster can be stored in the ``selected_forecaster_``
attribute, which can be then passed down to a
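The docstring above describes selecting a forecaster based on the input data's length. A parameter function of that shape might look like the following sketch (names, threshold, and labels are entirely hypothetical, not sktime API):

```python
def select_forecaster_name(y, short_cutoff=12):
    """Hypothetical parameter function: pick a forecaster label from the
    input series length. Short series get a simple model, longer series
    a richer one; the labels here are illustrative placeholders."""
    return "naive" if len(y) < short_cutoff else "arima"
```

A function like this could then be wrapped by a parameter fitter, with the chosen label stored for a downstream composition step.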
2 changes: 1 addition & 1 deletion sktime/transformations/base.py
@@ -1214,7 +1214,7 @@ def _convert_output(self, X, metadata, inverse=False):
# Input has always to be Panel
X_output_mtype = "pd.DataFrame"
else:
- # Input can be Panel or Hierachical, since it is supported
+ # Input can be Panel or Hierarchical, since it is supported
# by the used mtype
output_scitype = X_input_scitype
# Xt_mtype = metadata["mtype"]
4 changes: 2 additions & 2 deletions sktime/transformations/bootstrap/_mbb.py
@@ -40,7 +40,7 @@ class STLBootstrapTransformer(BaseTransformer):
Parameters
----------
n_series : int, optional
- The number of bootstraped time series that will be generated, by default 10.
+ The number of bootstrapped time series that will be generated, by default 10.
sp : int, optional
Seasonal periodicity of the data in integer form, by default 12.
Must be an integer >= 2
@@ -422,7 +422,7 @@ class MovingBlockBootstrapTransformer(BaseTransformer):
Parameters
----------
n_series : int, optional
- The number of bootstraped time series that will be generated, by default 10
+ The number of bootstrapped time series that will be generated, by default 10
block_length : int, optional
The length of the block in the MBB method, by default None.
If not provided, the following heuristic is used, the block length will the
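The moving block bootstrap (MBB) method this transformer is named after can be sketched minimally in numpy. This is an illustrative reimplementation of the idea only, not sktime's `MovingBlockBootstrapTransformer` code:

```python
import numpy as np

def moving_block_bootstrap(x, block_length, rng=None):
    """Generate one bootstrap replicate of ``x`` by concatenating
    randomly chosen contiguous blocks of length ``block_length``,
    then truncating to the original series length."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_length))
    # valid starting positions so every block is full length
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [x[s : s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]
```

Sampling contiguous blocks rather than individual points is what lets the replicates preserve the short-range autocorrelation of the original series.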
@@ -21,7 +21,7 @@ class MiniRocketMultivariate(BaseTransformer):
This transformer fits one set of paramereters per individual series,
and applies the transform with fitted parameter i to the i-th series in transform.
- Vanilla use requies same number of series in fit and transform.
+ Vanilla use requires same number of series in fit and transform.
To fit and transform series at the same time,
without an identification of fit/transform instances,
@@ -27,7 +27,7 @@ class MiniRocketMultivariateVariable(BaseTransformer):
This transformer fits one set of paramereters per individual series,
and applies the transform with fitted parameter i to the i-th series in transform.
- Vanilla use requies same number of series in fit and transform.
+ Vanilla use requires same number of series in fit and transform.
To fit and transform series at the same time,
without an identification of fit/transform instances,
2 changes: 1 addition & 1 deletion sktime/transformations/panel/rocket/_multirocket.py
@@ -22,7 +22,7 @@ class MultiRocket(BaseTransformer):
This transformer fits one set of paramereters per individual series,
and applies the transform with fitted parameter i to the i-th series in transform.
- Vanilla use requies same number of series in fit and transform.
+ Vanilla use requires same number of series in fit and transform.
To fit and transform series at the same time,
without an identification of fit/transform instances,
@@ -18,7 +18,7 @@ class MultiRocketMultivariate(BaseTransformer):
This transformer fits one set of paramereters per individual series,
and applies the transform with fitted parameter i to the i-th series in transform.
- Vanilla use requies same number of series in fit and transform.
+ Vanilla use requires same number of series in fit and transform.
To fit and transform series at the same time,
without an identification of fit/transform instances,
2 changes: 1 addition & 1 deletion sktime/transformations/panel/rocket/_rocket.py
@@ -20,7 +20,7 @@ class Rocket(BaseTransformer):
This transformer fits one set of paramereters per individual series,
and applies the transform with fitted parameter i to the i-th series in transform.
- Vanilla use requies same number of series in fit and transform.
+ Vanilla use requires same number of series in fit and transform.
To fit and transform series at the same time,
without an identification of fit/transform instances,
@@ -5,7 +5,7 @@


def _make_augmentation_pipeline(augmentation_list):
"""Buids an sklearn pipeline of augmentations from a tuple of strings.
"""Build an sklearn pipeline of augmentations from a tuple of strings.
Parameters
----------
@@ -44,7 +44,7 @@ class SignatureTransformer(BaseTransformer):
"post": Rescales the output signature by multiplying the depth-d term by d!.
Aim is that every term becomes ~O(1).
sig_tfm: str, one of ``['signature', 'logsignature']``. default: ``'signature'``
- The type of signature transform to use, plain or logaritmic.
+ The type of signature transform to use, plain or logarithmic.
depth: int, default=4
Signature truncation depth.
backend: str, one of: ``'esig'`` (default), or ``'iisignature'``.
2 changes: 1 addition & 1 deletion sktime/transformations/series/scaledasinh.py
@@ -16,7 +16,7 @@ class ScaledAsinhTransformer(BaseTransformer):
Known as variance stabilizing transformation,
Combined with an sktime.forecasting.compose.TransformedTargetForecaster,
- can be usefull in time series that exhibit spikes [1]_, [2]_
+ can be useful in time series that exhibit spikes [1]_, [2]_
Parameters
----------
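The variance-stabilising transform this docstring refers to is the (scaled) inverse hyperbolic sine, which behaves linearly near zero and logarithmically for large spikes. A generic round-trippable sketch, with a hypothetical scale parameter rather than sktime's exact parametrisation:

```python
import numpy as np

def scaled_asinh(x, scale=1.0):
    """Forward transform: asinh of the rescaled input.
    Compresses large spikes while leaving small values nearly unchanged."""
    return np.arcsinh(np.asarray(x) / scale)

def inv_scaled_asinh(y, scale=1.0):
    """Inverse transform: sinh, then undo the rescaling."""
    return np.sinh(np.asarray(y)) * scale
```

Unlike a log transform, asinh is defined for zero and negative values, which is why it suits spiky series such as electricity prices.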
2 changes: 1 addition & 1 deletion sktime/utils/mlflow_sktime.py
@@ -28,7 +28,7 @@
`{"predict_method": {"predict": {}, "predict_interval": {"coverage": [0.1, 0.9]}}`.
`Dict[str, list]`, with default parameters in predict method, for example
`{"predict_method": ["predict", "predict_interval"}` (Note: when including
- `predict_proba` method the former appraoch must be followed as `quantiles`
+ `predict_proba` method the former approach must be followed as `quantiles`
parameter has to be provided by the user). If no prediction config is defined
`pyfunc.predict()` will return output from sktime `predict()` method.
"""
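The two prediction-config shapes this docstring describes can be written out as plain Python dicts (braces balanced here for validity; the variable names are illustrative):

```python
# Full form: a dict per predict method, carrying its keyword arguments
pyfunc_predict_conf = {
    "predict_method": {
        "predict": {},
        "predict_interval": {"coverage": [0.1, 0.9]},
    }
}

# Short form: a plain list when every method uses its default parameters
pyfunc_predict_conf_short = {"predict_method": ["predict", "predict_interval"]}
```

Per the docstring, the full form is required whenever a method needs explicit parameters, such as `quantiles` for `predict_proba`.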
