[MNT] Fix spellings using codespell and typos (#6799)
- used `typos --locale en --write-changes .` (**automated**)
- used `codespell --write-changes .` (**automated**)
- changed `monotonous` to `monotonic` (**manual**) -> fixes #6775
yarnabrina authored Jul 20, 2024
1 parent 6cc99aa commit 06427ff
Showing 39 changed files with 53 additions and 53 deletions.
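The two automated passes are plain CLI invocations and can be re-run locally. Below is a minimal sketch, assuming the `codespell` and `typos` executables are installed (e.g., `pip install codespell`, `cargo install typos-cli`); the arguments are copied verbatim from the commit message above:

```python
# Hedged sketch: re-run the automated spelling passes from this commit.
import subprocess

for cmd in (
    ["typos", "--locale", "en", "--write-changes", "."],
    ["codespell", "--write-changes", "."],
):
    # check=False: a non-zero exit only means typos were found (and fixed)
    subprocess.run(cmd, check=False)
```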
2 changes: 1 addition & 1 deletion docs/source/Makefile
@@ -20,7 +20,7 @@ clean:
@echo "Deleted directory $(BUILDDIR)."

# $(O) is meant as a shortcut for custom options.
-# i.e to log stderr into a seprate file:
+# i.e to log stderr into a separate file:
# make build O="--no-color 2> build_warnings.log"
html:
# ./symlink_examples.sh
2 changes: 1 addition & 1 deletion docs/source/changelog.rst
@@ -7837,7 +7837,7 @@ Refactored
* [ENH] renamed ``fit-in-transform`` and ``fit-in-predict`` to ``fit_is_empty`` (:pr:`2250`) :user:`fkiraly`
* [ENH] refactoring `test_all_classifiers` to test class architecture (:pr:`2257`) :user:`fkiraly`
* [ENH] test parameter refactor: all classifiers (:pr:`2288`) :user:`MatthewMiddlehurst`
-* [ENH] test paraneter refactor: ``Arsenal`` (:pr:`2273`) :user:`dionysisbacchus`
+* [ENH] test parameter refactor: ``Arsenal`` (:pr:`2273`) :user:`dionysisbacchus`
* [ENH] test parameter refactor: ``RocketClassifier`` (:pr:`2166`) :user:`dionysisbacchus`
* [ENH] test parameter refactor: ``TimeSeriesForestClassifier`` (:pr:`2277`) :user:`lielleravid`
* [ENH] ``FeatureUnion`` refactor - moved to ``transformations``, tags, dunder method (:pr:`2231`) :user:`fkiraly`
2 changes: 1 addition & 1 deletion docs/source/get_involved/code_of_conduct.rst
@@ -86,7 +86,7 @@ genetic information, sexual orientation, disability status, physical appearance,
body size, citizenship, nationality, national origin, ethnic or social origin, pregnancy,
familial status, family background, veteran status, trade union membership,
religion or belief (or lack thereof), membership of a national minority, property, age,
-socio-economic status, neurotypicality or -atypicality, education, and experience level.
+socioeconomic status, neurotypicality or -atypicality, education, and experience level.

Everyone who participates in the sktime project activities is required
to conform to this Code of Conduct. This Code of Conduct applies to all
6 changes: 3 additions & 3 deletions examples/01c_forecasting_hierarchical_global.ipynb
@@ -99,7 +99,7 @@
"\n",
"In the `\"pd.DataFrame\"` mtype, time series are represented by an in-memory container `obj: pandas.DataFrame` as follows.\n",
"\n",
"* structure convention: `obj.index` must be monotonous, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.\n",
"* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.\n",
"* variables: columns of `obj` correspond to different variables\n",
"* variable names: column names `obj.columns`\n",
"* time points: rows of `obj` correspond to different, distinct time points\n",
@@ -270,7 +270,7 @@
"\n",
"In the `\"pd-multiindex\"` mtype, time series panels are represented by an in-memory container `obj: pandas.DataFrame` as follows.\n",
"\n",
"* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonous.\n",
"* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic.\n",
"* instances: rows with the same `\"instances\"` index correspond to the same instance; rows with different `\"instances\"` index correspond to different instances.\n",
"* instance index: the first element of pairs in `obj.index` is interpreted as an instance index. \n",
"* variables: columns of `obj` correspond to different variables\n",
@@ -407,7 +407,7 @@
"source": [
"#### Hierarchical time series - `Hierarchical` scitype, `\"pd_multiindex_hier\"` mtype\n",
"\n",
"* structure convention: `obj.index` must be a 3 or more level multi-index of type `(RangeIndex, ..., RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonous. We call the last index the \"time-like\" index\n",
"* structure convention: `obj.index` must be a 3 or more level multi-index of type `(RangeIndex, ..., RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic. We call the last index the \"time-like\" index\n",
"* hierarchy level: rows with the same non-time-like indices correspond to the same hierarchy unit; rows with different non-time-like index combination correspond to different hierarchy unit.\n",
"* hierarchy: the non-time-like indices in `obj.index` are interpreted as a hierarchy identifying index. \n",
"* variables: columns of `obj` correspond to different variables\n",
2 changes: 1 addition & 1 deletion extension_templates/transformer.py
@@ -217,7 +217,7 @@ class MyTransformer(BaseTransformer):
#
# skip-inverse-transform = is inverse-transform skipped when called?
"skip-inverse-transform": False,
-# if False, capability:inverse_transform tag behaviour is as per devault
+# if False, capability:inverse_transform tag behaviour is as per default
# if True, inverse_transform is the identity transform and raises no exception
# this is useful for transformers where inverse_transform
# may be called but should behave as the identity, e.g., imputers
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -376,7 +376,7 @@ ignore=[
"C414", # Unnecessary `list` call within `sorted()`
"S301", # pickle and modules that wrap it can be unsafe
"C416", # Unnecessary list comprehension - rewrite as a generator
"S310", # Audit URL open for permited schemes
"S310", # Audit URL open for permitted schemes
"S202", # Uses of `tarfile.extractall()`
"S307", # Use of possibly insecure function
"C417", # Unnecessary `map` usage (rewrite using a generator expression)
@@ -386,7 +386,7 @@ ignore=[
"S105", # Possible hardcoded password
"PT018", # Checks for assertions that combine multiple independent condition
"S602", # sub process call with shell=True unsafe
"C419", # Unnecessary list comprehension, some are flaged yet are not
"C419", # Unnecessary list comprehension, some are flagged yet are not
"C409", # Unnecessary `list` literal passed to `tuple()` (rewrite as a `tuple` literal)
"S113", # Probable use of httpx call withour timeout
]
4 changes: 2 additions & 2 deletions sktime/annotation/base/_base.py
@@ -37,7 +37,7 @@ class BaseSeriesAnnotator(BaseEstimator):
task : str {"segmentation", "change_point_detection", "anomaly_detection"}
The main annotation task:
* If ``segmentation``, the annotator divides timeseries into discrete chunks
-based on certain criteria. The same label can be applied at mulitple
+based on certain criteria. The same label can be applied at multiple
disconnected regions of the timeseries.
* If ``change_point_detection``, the annotator finds points where the
statistical properties of the timeseries change significantly.
@@ -537,7 +537,7 @@ def _sparse_segments_to_dense(y_sparse, index):
"""
if y_sparse.index.is_overlapping:
raise NotImplementedError(
"Cannot convert overlapping segments to a dense formet yet."
"Cannot convert overlapping segments to a dense format yet."
)

interval_indexes = y_sparse.index.get_indexer(index)
2 changes: 1 addition & 1 deletion sktime/annotation/hmm_learn/gaussian.py
@@ -17,7 +17,7 @@ class GaussianHMM(BaseHMMLearn):
----------
n_components : int
Number of states
covariance_type : {"sperical", "diag", "full", "tied"}, optional
covariance_type : {"spherical", "diag", "full", "tied"}, optional
The type of covariance parameters to use:
* "spherical" --- each state uses a single variance value that
applies to all features.
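A hedged usage sketch for the corrected value, not part of the diff: it assumes the `hmmlearn` soft dependency is installed and that the annotator's `fit_predict` returns one hidden-state label per time point:

```python
# Illustrative only: segment two-regime data with a spherical-covariance
# GaussianHMM (hmmlearn backend).
import numpy as np
import pandas as pd
from sktime.annotation.hmm_learn import GaussianHMM

rng = np.random.default_rng(0)
y = pd.Series(np.r_[rng.normal(0, 1, 50), rng.normal(5, 1, 50)])

model = GaussianHMM(n_components=2, covariance_type="spherical")
states = model.fit_predict(y)  # hidden-state label per observation
```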
2 changes: 1 addition & 1 deletion sktime/annotation/hmm_learn/gmm.py
@@ -19,7 +19,7 @@ class GMMHMM(BaseHMMLearn):
Number of states in the model.
n_mix : int
Number of states in the GMM.
covariance_type : {"sperical", "diag", "full", "tied"}, optional
covariance_type : {"spherical", "diag", "full", "tied"}, optional
The type of covariance parameters to use:
* "spherical" --- each state uses a single variance value that
applies to all features.
2 changes: 1 addition & 1 deletion sktime/base/_base.py
@@ -200,7 +200,7 @@ def _handle_numpy2_softdeps(self):
are in NOT_NP2_COMPATIBLE, this is a hard-coded
list of soft dependencies that are not numpy 2 compatible
* if any are found, adds a numpy<2.0 soft dependency to the list,
-and sets it as a dynamic overide of the python_dependencies tag
+and sets it as a dynamic override of the python_dependencies tag
"""
from packaging.requirements import Requirement

2 changes: 1 addition & 1 deletion sktime/benchmarking/_lib_mini_kotsu/__init__.py
@@ -7,7 +7,7 @@
The module is currently undergoing a refactor with the following targets:
-* remove kotsu as a soft dependeny. The package is no longer maintained and
+* remove kotsu as a soft dependency. The package is no longer maintained and
a dependency liability for sktime, maintainers are non-responsive.
* replace kotsu with sktime's own benchmarking module, with an API that is closer
to sklearn's estimator API (e.g., no separation of class and params).
2 changes: 1 addition & 1 deletion sktime/benchmarking/_lib_mini_kotsu/registration.py
@@ -125,7 +125,7 @@ def make(self, id: str, **kwargs) -> Entity:
raise KeyError(f"No registered entity with ID {id}")

def all(self):
"""Return all the entitys in the registry."""
"""Return all the entities in the registry."""
return self.entity_specs.values()

def register(
4 changes: 2 additions & 2 deletions sktime/classification/dictionary_based/_boss_pyts.py
@@ -66,8 +66,8 @@ class BOSSVSClassifierPyts(_PytsAdapter, BaseClassifier):
alphabet are used.
numerosity_reduction : bool (default = True)
-If True, delete sample-wise all but one occurence of back to back
-identical occurences of the same words.
+If True, delete sample-wise all but one occurrence of back to back
+identical occurrences of the same words.
use_idf : bool (default = True)
Enable inverse-document-frequency reweighting.
2 changes: 1 addition & 1 deletion sktime/datasets/_readers_writers/utils.py
@@ -299,6 +299,6 @@ def get_path(path: Union[str, pathlib.Path], suffix: str) -> str:
if not p_.suffix:
# Checks if a file with the same name exists
if not os.path.exists(resolved_path):
-# adds the specified extention to the path
+# adds the specified extension to the path
resolved_path += suffix
return resolved_path
2 changes: 1 addition & 1 deletion sktime/datatypes/_series_as_panel/_convert.py
@@ -25,7 +25,7 @@ def convert_Series_to_Panel(obj, store=None, return_to_mtype=False):
"""Convert series to a single-series panel.
Adds a dummy dimension to the series.
-For pd.Series or DataFrame, this results in a list of DataFram (dim added is list).
+For pd.Series or DataFrame, this results in a list of DataFrame (dim added is list).
For numpy array, this results in a third dimension being added.
Assumes input is conformant with one of the three Series mtypes.
2 changes: 1 addition & 1 deletion sktime/dists_kernels/dtw/_dtw_sktime.py
@@ -36,7 +36,7 @@ class DtwDist(BasePairwiseTransformerPanel):
Note that the more flexible options above may be less performant.
The algorithms are also available as alignment estimators
-``sktime.alignmnent.dtw_numba``, producing alignments aka alignment paths.
+``sktime.alignment.dtw_numba``, producing alignments aka alignment paths.
DTW was originally proposed in [1]_, DTW computes the distance between two
time series by considering their alignments during the calculation.
4 changes: 2 additions & 2 deletions sktime/forecasting/autots.py
@@ -72,7 +72,7 @@ class AutoTS(BaseForecaster):
The initial template to use for the forecast.
'Random' - randomly generates starting template, 'General' uses template
included in package, 'General+Random' - both of previous.
-Also can be overriden with import_template()
+Also can be overridden with import_template()
random_seed (int):
The random seed for reproducibility.
Random seed allows (slightly) more consistent results.
@@ -101,7 +101,7 @@ class AutoTS(BaseForecaster):
drop_most_recent (int):
Option to drop n most recent data points. Useful, say, for monthly sales
data where the current (unfinished) month is included. occurs after any
-aggregration is applied, so will be whatever is specified by frequency,
+aggregation is applied, so will be whatever is specified by frequency,
will drop n frequencies
drop_data_older_than_periods (int):
The threshold for dropping old data points.
6 changes: 3 additions & 3 deletions sktime/forecasting/base/adapters/_neuralforecast.py
@@ -269,8 +269,8 @@ def _fit(
"""
# A. freq is given {use this}
# B. freq is auto
-# B1. freq is infered from fh {use this}
-# B2. freq is not infered from fh
+# B1. freq is inferred from fh {use this}
+# B2. freq is not inferred from fh
# B2.1. y is date-like {raise exception}
# B2.2. y is not date-like
# B2.2.1 equispaced integers {use diff in time}
@@ -291,7 +291,7 @@

if self.freq != "auto": # A: freq is given as non-auto
self._freq = self.freq
-elif fh.freq: # B1: freq is infered from fh
+elif fh.freq: # B1: freq is inferred from fh
self._freq = fh.freq
elif isinstance(y.index, pandas.DatetimeIndex): # B2.1: y is date-like
raise ValueError(
2 changes: 1 addition & 1 deletion sktime/forecasting/base/adapters/_pytorchforecasting.py
@@ -32,7 +32,7 @@ class _PytorchForecastingAdapter(_BaseGlobalForecaster):
parameters to initialize `TimeSeriesDataSet` [1]_ from `pandas.DataFrame`
max_prediction_length will be overwrite according to fh
time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals
-will be infered from data, so you do not have to pass them
+will be inferred from data, so you do not have to pass them
train_to_dataloader_params : Dict[str, Any] (default=None)
parameters to be passed for `TimeSeriesDataSet.to_dataloader()`
by default {"train": True}
2 changes: 1 addition & 1 deletion sktime/forecasting/compose/_pipeline.py
@@ -1507,7 +1507,7 @@ def _check_unknown_exog(
# either columns explicitly specified through the `columns` argument
# or all columns in the `X` argument passed in `fit` call are future-unknown
if self.columns is None or len(self.columns) == 0:
-# `self._X` is guranteed to exist and be a DataFrame at this point
+# `self._X` is guaranteed to exist and be a DataFrame at this point
# ensured by `self.X_was_None_` check in `_get_forecaster_X_prediction`
unknown_columns = self._X.columns
else:
2 changes: 1 addition & 1 deletion sktime/forecasting/compose/_reduce.py
@@ -669,7 +669,7 @@ def pool_preds(y_preds):
return y_pred

def _coerce_to_numpy(y_pred):
"""Coerce predictions to numpy array, assumes pd.DataFram or numpy."""
"""Coerce predictions to numpy array, assumes pd.DataFrame or numpy."""
if isinstance(y_pred, pd.DataFrame):
return y_pred.values
else:
@@ -207,7 +207,7 @@ def test_get_params(
fallback_forecaster=fallback_forecaster,
)

-# Fit the forecaster and then fetch the generated paramaters
+# Fit the forecaster and then fetch the generated parameters
forecaster.fit(y=series_generator(), fh=horizon)
forecaster_params = forecaster.get_params()

2 changes: 1 addition & 1 deletion sktime/forecasting/enbpi.py
@@ -18,7 +18,7 @@ class EnbPIForecaster(BaseForecaster):
"""
Ensemble Bootstrap Prediction Interval Forecaster.
-The forecaster combines sktime forecasters, with tsbootstrap bootsrappers
+The forecaster combines sktime forecasters, with tsbootstrap bootstrappers
and the EnbPI algorithm [1] implemented in fortuna using the
tutorial from this blogpost [2].
2 changes: 1 addition & 1 deletion sktime/forecasting/hf_transformers_forecaster.py
@@ -207,7 +207,7 @@ def _fit(self, y, X, fh):
)
else:
raise ValueError(
"The model type is not inferrable from the config."
"The model type is not inferable from the config."
"Thus, the model cannot be loaded."
)
# Load model with the updated config
4 changes: 2 additions & 2 deletions sktime/forecasting/pytorchforecasting.py
@@ -24,7 +24,7 @@ class PytorchForecastingTFT(_PytorchForecastingAdapter):
parameters to initialize `TimeSeriesDataSet` [2]_ from `pandas.DataFrame`
max_prediction_length will be overwrite according to fh
time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals
-will be infered from data, so you do not have to pass them
+will be inferred from data, so you do not have to pass them
train_to_dataloader_params : Dict[str, Any] (default=None)
parameters to be passed for `TimeSeriesDataSet.to_dataloader()`
by default {"train": True}
@@ -216,7 +216,7 @@ class PytorchForecastingNBeats(_PytorchForecastingAdapter):
parameters to initialize `TimeSeriesDataSet` [2]_ from `pandas.DataFrame`
max_prediction_length will be overwrite according to fh
time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals
-will be infered from data, so you do not have to pass them
+will be inferred from data, so you do not have to pass them
train_to_dataloader_params : Dict[str, Any] (default=None)
parameters to be passed for `TimeSeriesDataSet.to_dataloader()`
by default {"train": True}
2 changes: 1 addition & 1 deletion sktime/forecasting/tests/test_reconcile.py
@@ -41,7 +41,7 @@ def test_reconciler_fit_predict(method, flatten, no_levels):
Raises
------
-This test asserts that the output of ReconcilerForecaster is actually hierarhical
+This test asserts that the output of ReconcilerForecaster is actually hierarchical
in that the predictions sum together appropriately. It also tests the index
and columns of the fitted s and g matrix from each method and finally tests
if the method works for both named and unnamed indexes
2 changes: 1 addition & 1 deletion sktime/libs/pykalman/sqrt/bierman.py
@@ -1,4 +1,4 @@
"""Bierman's verion of the Kalman Filter.
"""Bierman's version of the Kalman Filter.
=====================================
Inference for Linear-Gaussian Systems
2 changes: 1 addition & 1 deletion sktime/libs/pykalman/tests/test_standard.py
@@ -30,7 +30,7 @@ def data():
)
class TestKalmanFilter:
"""All of the actual tests to check against an implementation of the usual
-Kalman Filter. Abstract so that sister implementations can re-use these
+Kalman Filter. Abstract so that sister implementations can reuse these
tests.
"""

@@ -989,7 +989,7 @@ class AUCalibration(_BaseDistrForecastingMetric):
Computes the unsigned area between the calibration curve and the diagonal.
The calibration curve is the cumulative curve of the sample of
-predictive cumulative distibution functions evaluated at the true values.
+predictive cumulative distribution functions evaluated at the true values.
Mathematically, let :math:`d_1, \dots, d_N` be the predictive distributions,
let :math:`y_1, \dots, y_N` be the true values, and let :math:`F_i` be the
6 changes: 3 additions & 3 deletions sktime/registry/_tags.py
@@ -479,7 +479,7 @@ class capability__feature_importance(_BaseTag):
If the tag is ``True``, the estimator can produce feature importances.
-Feature importances are queriable by the fitted parameter interface
+Feature importances are queryable by the fitted parameter interface
via ``get_fitted_params``, after calling ``fit`` of the respective estimator.
If the tag is ``False``, the estimator does not produce feature importances.
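A hedged sketch of the fitted-parameter interface this tag refers to; the estimator and dataset are illustrative assumptions, and the exact key holding feature importances varies by estimator:

```python
# Illustrative only: query fitted parameters after fit; estimators with
# capability:feature_importance expose importances through this dict.
from sktime.classification.interval_based import TimeSeriesForestClassifier
from sktime.datasets import load_unit_test

X_train, y_train = load_unit_test(split="train", return_X_y=True)
clf = TimeSeriesForestClassifier(n_estimators=5)
clf.fit(X_train, y_train)

fitted = clf.get_fitted_params()  # dict: fitted-parameter name -> value
print(sorted(fitted))             # inspect which keys this estimator exposes
```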
@@ -544,7 +544,7 @@ class capability__train_estimate(_BaseTag):
produce and store an estimate of their own statistical performance,
e.g., via out-of-bag estimates, or cross-validation.
-Training performance estimates are queriable by the fitted parameter interface
+Training performance estimates are queryable by the fitted parameter interface
via ``get_fitted_params``, after calling ``fit`` of the respective estimator.
If the tag is ``False``, the estimator does not produce
@@ -616,7 +616,7 @@ class capability__exogeneous(_BaseTag):
that can be used to improve forecasting accuracy.
If the forecaster uses exogeneous data (``ignore-exogeneous-X=False``),
-the ``X`` parmameter in ``fit``, ``predict``, and other methods
+the ``X`` parameter in ``fit``, ``predict``, and other methods
can be used to pass exogeneous data to the forecaster.
If the ``X-y-must-have-same-index`` tag is ``True``,