diff --git a/docs/source/Makefile b/docs/source/Makefile index f6a880f8fa0..384d175ca03 100644 --- a/docs/source/Makefile +++ b/docs/source/Makefile @@ -20,7 +20,7 @@ clean: @echo "Deleted directory $(BUILDDIR)." # $(O) is meant as a shortcut for custom options. -# i.e to log stderr into a seprate file: +# i.e., to log stderr into a separate file: # make build O="--no-color 2> build_warnings.log" html: # ./symlink_examples.sh diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst index 726c7f2f2fe..bedfdc6b73f 100644 --- a/docs/source/changelog.rst +++ b/docs/source/changelog.rst @@ -7837,7 +7837,7 @@ Refactored * [ENH] renamed ``fit-in-transform`` and ``fit-in-predict`` to ``fit_is_empty`` (:pr:`2250`) :user:`fkiraly` * [ENH] refactoring `test_all_classifiers` to test class architecture (:pr:`2257`) :user:`fkiraly` * [ENH] test parameter refactor: all classifiers (:pr:`2288`) :user:`MatthewMiddlehurst` -* [ENH] test paraneter refactor: ``Arsenal`` (:pr:`2273`) :user:`dionysisbacchus` +* [ENH] test parameter refactor: ``Arsenal`` (:pr:`2273`) :user:`dionysisbacchus` * [ENH] test parameter refactor: ``RocketClassifier`` (:pr:`2166`) :user:`dionysisbacchus` * [ENH] test parameter refactor: ``TimeSeriesForestClassifier`` (:pr:`2277`) :user:`lielleravid` * [ENH] ``FeatureUnion`` refactor - moved to ``transformations``, tags, dunder method (:pr:`2231`) :user:`fkiraly` diff --git a/docs/source/get_involved/code_of_conduct.rst b/docs/source/get_involved/code_of_conduct.rst index fda83431fb1..7038fa1d95c 100644 --- a/docs/source/get_involved/code_of_conduct.rst +++ b/docs/source/get_involved/code_of_conduct.rst @@ -86,7 +86,7 @@ genetic information, sexual orientation, disability status, physical appearance, body size, citizenship, nationality, national origin, ethnic or social origin, pregnancy, familial status, family background, veteran status, trade union membership, religion or belief (or lack thereof), membership of a national minority, property, age, 
-socio-economic status, neurotypicality or -atypicality, education, and experience level. +socioeconomic status, neurotypicality or -atypicality, education, and experience level. Everyone who participates in the sktime project activities is required to conform to this Code of Conduct. This Code of Conduct applies to all diff --git a/examples/01c_forecasting_hierarchical_global.ipynb b/examples/01c_forecasting_hierarchical_global.ipynb index 63c84a79939..f5671b93440 100644 --- a/examples/01c_forecasting_hierarchical_global.ipynb +++ b/examples/01c_forecasting_hierarchical_global.ipynb @@ -99,7 +99,7 @@ "\n", "In the `\"pd.DataFrame\"` mtype, time series are represented by an in-memory container `obj: pandas.DataFrame` as follows.\n", "\n", - "* structure convention: `obj.index` must be monotonous, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.\n", + "* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.\n", "* variables: columns of `obj` correspond to different variables\n", "* variable names: column names `obj.columns`\n", "* time points: rows of `obj` correspond to different, distinct time points\n", @@ -270,7 +270,7 @@ "\n", "In the `\"pd-multiindex\"` mtype, time series panels are represented by an in-memory container `obj: pandas.DataFrame` as follows.\n", "\n", - "* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonous.\n", + "* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic.\n", "* instances: rows with the same `\"instances\"` index correspond to the same instance; rows with different `\"instances\"` index correspond to different instances.\n", "* instance index: the first element of pairs in `obj.index` is 
interpreted as an instance index. \n", "* variables: columns of `obj` correspond to different variables\n", @@ -407,7 +407,7 @@ "source": [ "#### Hierarchical time series - `Hierarchical` scitype, `\"pd_multiindex_hier\"` mtype\n", "\n", - "* structure convention: `obj.index` must be a 3 or more level multi-index of type `(RangeIndex, ..., RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonous. We call the last index the \"time-like\" index\n", + "* structure convention: `obj.index` must be a 3 or more level multi-index of type `(RangeIndex, ..., RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic. We call the last index the \"time-like\" index\n", "* hierarchy level: rows with the same non-time-like indices correspond to the same hierarchy unit; rows with different non-time-like index combination correspond to different hierarchy unit.\n", "* hierarchy: the non-time-like indices in `obj.index` are interpreted as a hierarchy identifying index. \n", "* variables: columns of `obj` correspond to different variables\n", diff --git a/extension_templates/transformer.py b/extension_templates/transformer.py index 00347e6d4d5..a093634fab7 100644 --- a/extension_templates/transformer.py +++ b/extension_templates/transformer.py @@ -217,7 +217,7 @@ class MyTransformer(BaseTransformer): # # skip-inverse-transform = is inverse-transform skipped when called? 
"skip-inverse-transform": False, - # if False, capability:inverse_transform tag behaviour is as per devault + # if False, capability:inverse_transform tag behaviour is as per default # if True, inverse_transform is the identity transform and raises no exception # this is useful for transformers where inverse_transform # may be called but should behave as the identity, e.g., imputers diff --git a/pyproject.toml b/pyproject.toml index a1c0702a2d0..de252a2dae9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -376,7 +376,7 @@ ignore=[ "C414", # Unnecessary `list` call within `sorted()` "S301", # pickle and modules that wrap it can be unsafe "C416", # Unnecessary list comprehension - rewrite as a generator - "S310", # Audit URL open for permited schemes + "S310", # Audit URL open for permitted schemes "S202", # Uses of `tarfile.extractall()` "S307", # Use of possibly insecure function "C417", # Unnecessary `map` usage (rewrite using a generator expression) @@ -386,7 +386,7 @@ ignore=[ "S105", # Possible hardcoded password "PT018", # Checks for assertions that combine multiple independent condition "S602", # sub process call with shell=True unsafe - "C419", # Unnecessary list comprehension, some are flaged yet are not + "C419", # Unnecessary list comprehension, some are flagged yet are not "C409", # Unnecessary `list` literal passed to `tuple()` (rewrite as a `tuple` literal) "S113", # Probable use of httpx call withour timeout ] diff --git a/sktime/annotation/base/_base.py b/sktime/annotation/base/_base.py index 4ae5852764c..2557a0caa5d 100644 --- a/sktime/annotation/base/_base.py +++ b/sktime/annotation/base/_base.py @@ -37,7 +37,7 @@ class BaseSeriesAnnotator(BaseEstimator): task : str {"segmentation", "change_point_detection", "anomaly_detection"} The main annotation task: * If ``segmentation``, the annotator divides timeseries into discrete chunks - based on certain criteria. The same label can be applied at mulitple + based on certain criteria. 
The same label can be applied at multiple disconnected regions of the timeseries. * If ``change_point_detection``, the annotator finds points where the statistical properties of the timeseries change significantly. @@ -537,7 +537,7 @@ def _sparse_segments_to_dense(y_sparse, index): """ if y_sparse.index.is_overlapping: raise NotImplementedError( - "Cannot convert overlapping segments to a dense formet yet." + "Cannot convert overlapping segments to a dense format yet." ) interval_indexes = y_sparse.index.get_indexer(index) diff --git a/sktime/annotation/hmm_learn/gaussian.py b/sktime/annotation/hmm_learn/gaussian.py index 49096937d90..099bc3dddfa 100644 --- a/sktime/annotation/hmm_learn/gaussian.py +++ b/sktime/annotation/hmm_learn/gaussian.py @@ -17,7 +17,7 @@ class GaussianHMM(BaseHMMLearn): ---------- n_components : int Number of states - covariance_type : {"sperical", "diag", "full", "tied"}, optional + covariance_type : {"spherical", "diag", "full", "tied"}, optional The type of covariance parameters to use: * "spherical" --- each state uses a single variance value that applies to all features. diff --git a/sktime/annotation/hmm_learn/gmm.py b/sktime/annotation/hmm_learn/gmm.py index 6bb2fe2ce08..8c0f066bab2 100644 --- a/sktime/annotation/hmm_learn/gmm.py +++ b/sktime/annotation/hmm_learn/gmm.py @@ -19,7 +19,7 @@ class GMMHMM(BaseHMMLearn): Number of states in the model. n_mix : int Number of states in the GMM. - covariance_type : {"sperical", "diag", "full", "tied"}, optional + covariance_type : {"spherical", "diag", "full", "tied"}, optional The type of covariance parameters to use: * "spherical" --- each state uses a single variance value that applies to all features. 
diff --git a/sktime/base/_base.py b/sktime/base/_base.py index 20dc4fd82c3..75ca6eeba44 100644 --- a/sktime/base/_base.py +++ b/sktime/base/_base.py @@ -200,7 +200,7 @@ def _handle_numpy2_softdeps(self): are in NOT_NP2_COMPATIBLE, this is a hard-coded list of soft dependencies that are not numpy 2 compatible * if any are found, adds a numpy<2.0 soft dependency to the list, - and sets it as a dynamic overide of the python_dependencies tag + and sets it as a dynamic override of the python_dependencies tag """ from packaging.requirements import Requirement diff --git a/sktime/benchmarking/_lib_mini_kotsu/__init__.py b/sktime/benchmarking/_lib_mini_kotsu/__init__.py index 0a16ca149f5..516ae49e3e6 100644 --- a/sktime/benchmarking/_lib_mini_kotsu/__init__.py +++ b/sktime/benchmarking/_lib_mini_kotsu/__init__.py @@ -7,7 +7,7 @@ The module is currently undergoing a refactor with the following targets: -* remove kotsu as a soft dependeny. The package is no longer maintained and +* remove kotsu as a soft dependency. The package is no longer maintained and a dependency liability for sktime, maintainers are non-responsive. * replace kotsu with sktime's own benchmarking module, with an API that is closer to sklearn's estimator API (e.g., no separation of class and params). 
diff --git a/sktime/benchmarking/_lib_mini_kotsu/registration.py b/sktime/benchmarking/_lib_mini_kotsu/registration.py index f9dd999b07b..8e87e9686e2 100644 --- a/sktime/benchmarking/_lib_mini_kotsu/registration.py +++ b/sktime/benchmarking/_lib_mini_kotsu/registration.py @@ -125,7 +125,7 @@ def make(self, id: str, **kwargs) -> Entity: raise KeyError(f"No registered entity with ID {id}") def all(self): - """Return all the entitys in the registry.""" + """Return all the entities in the registry.""" return self.entity_specs.values() def register( diff --git a/sktime/classification/dictionary_based/_boss_pyts.py b/sktime/classification/dictionary_based/_boss_pyts.py index 69d77d68674..f244516bb4d 100644 --- a/sktime/classification/dictionary_based/_boss_pyts.py +++ b/sktime/classification/dictionary_based/_boss_pyts.py @@ -66,8 +66,8 @@ class BOSSVSClassifierPyts(_PytsAdapter, BaseClassifier): alphabet are used. numerosity_reduction : bool (default = True) - If True, delete sample-wise all but one occurence of back to back - identical occurences of the same words. + If True, delete sample-wise all but one occurrence of back to back + identical occurrences of the same words. use_idf : bool (default = True) Enable inverse-document-frequency reweighting. 
diff --git a/sktime/datasets/_readers_writers/utils.py b/sktime/datasets/_readers_writers/utils.py index 915d535d62c..0ff2c1bb9c5 100644 --- a/sktime/datasets/_readers_writers/utils.py +++ b/sktime/datasets/_readers_writers/utils.py @@ -299,6 +299,6 @@ def get_path(path: Union[str, pathlib.Path], suffix: str) -> str: if not p_.suffix: # Checks if a file with the same name exists if not os.path.exists(resolved_path): - # adds the specified extention to the path + # adds the specified extension to the path resolved_path += suffix return resolved_path diff --git a/sktime/datatypes/_series_as_panel/_convert.py b/sktime/datatypes/_series_as_panel/_convert.py index 685dc76924b..50a84f0e45d 100644 --- a/sktime/datatypes/_series_as_panel/_convert.py +++ b/sktime/datatypes/_series_as_panel/_convert.py @@ -25,7 +25,7 @@ def convert_Series_to_Panel(obj, store=None, return_to_mtype=False): """Convert series to a single-series panel. Adds a dummy dimension to the series. - For pd.Series or DataFrame, this results in a list of DataFram (dim added is list). + For pd.Series or DataFrame, this results in a list of DataFrame (dim added is list). For numpy array, this results in a third dimension being added. Assumes input is conformant with one of the three Series mtypes. diff --git a/sktime/dists_kernels/dtw/_dtw_sktime.py b/sktime/dists_kernels/dtw/_dtw_sktime.py index dab9d983d48..5dd30512f11 100644 --- a/sktime/dists_kernels/dtw/_dtw_sktime.py +++ b/sktime/dists_kernels/dtw/_dtw_sktime.py @@ -36,7 +36,7 @@ class DtwDist(BasePairwiseTransformerPanel): Note that the more flexible options above may be less performant. The algorithms are also available as alignment estimators - ``sktime.alignmnent.dtw_numba``, producing alignments aka alignment paths. + ``sktime.alignment.dtw_numba``, producing alignments aka alignment paths. DTW was originally proposed in [1]_, DTW computes the distance between two time series by considering their alignments during the calculation. 
diff --git a/sktime/forecasting/autots.py b/sktime/forecasting/autots.py index 112d1f69001..cb10cd3eb5b 100644 --- a/sktime/forecasting/autots.py +++ b/sktime/forecasting/autots.py @@ -72,7 +72,7 @@ class AutoTS(BaseForecaster): The initial template to use for the forecast. 'Random' - randomly generates starting template, 'General' uses template included in package, 'General+Random' - both of previous. - Also can be overriden with import_template() + Also can be overridden with import_template() random_seed (int): The random seed for reproducibility. Random seed allows (slightly) more consistent results. @@ -101,7 +101,7 @@ class AutoTS(BaseForecaster): drop_most_recent (int): Option to drop n most recent data points. Useful, say, for monthly sales data where the current (unfinished) month is included. occurs after any - aggregration is applied, so will be whatever is specified by frequency, + aggregation is applied, so will be whatever is specified by frequency, will drop n frequencies drop_data_older_than_periods (int): The threshold for dropping old data points. diff --git a/sktime/forecasting/base/adapters/_neuralforecast.py b/sktime/forecasting/base/adapters/_neuralforecast.py index a7e24fdf7ef..5d90a0a32cd 100644 --- a/sktime/forecasting/base/adapters/_neuralforecast.py +++ b/sktime/forecasting/base/adapters/_neuralforecast.py @@ -269,8 +269,8 @@ def _fit( """ # A. freq is given {use this} # B. freq is auto - # B1. freq is infered from fh {use this} - # B2. freq is not infered from fh + # B1. freq is inferred from fh {use this} + # B2. freq is not inferred from fh # B2.1. y is date-like {raise exception} # B2.2. 
y is not date-like # B2.2.1 equispaced integers {use diff in time} @@ -291,7 +291,7 @@ def _fit( if self.freq != "auto": # A: freq is given as non-auto self._freq = self.freq - elif fh.freq: # B1: freq is infered from fh + elif fh.freq: # B1: freq is inferred from fh self._freq = fh.freq elif isinstance(y.index, pandas.DatetimeIndex): # B2.1: y is date-like raise ValueError( diff --git a/sktime/forecasting/base/adapters/_pytorchforecasting.py b/sktime/forecasting/base/adapters/_pytorchforecasting.py index 763c7ba1d86..62f3b84749e 100644 --- a/sktime/forecasting/base/adapters/_pytorchforecasting.py +++ b/sktime/forecasting/base/adapters/_pytorchforecasting.py @@ -32,7 +32,7 @@ class _PytorchForecastingAdapter(_BaseGlobalForecaster): parameters to initialize `TimeSeriesDataSet` [1]_ from `pandas.DataFrame` max_prediction_length will be overwrite according to fh time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals - will be infered from data, so you do not have to pass them + will be inferred from data, so you do not have to pass them train_to_dataloader_params : Dict[str, Any] (default=None) parameters to be passed for `TimeSeriesDataSet.to_dataloader()` by default {"train": True} diff --git a/sktime/forecasting/compose/_pipeline.py b/sktime/forecasting/compose/_pipeline.py index 711a79d8b49..18955df0d29 100644 --- a/sktime/forecasting/compose/_pipeline.py +++ b/sktime/forecasting/compose/_pipeline.py @@ -1507,7 +1507,7 @@ def _check_unknown_exog( # either columns explicitly specified through the `columns` argument # or all columns in the `X` argument passed in `fit` call are future-unknown if self.columns is None or len(self.columns) == 0: - # `self._X` is guranteed to exist and be a DataFrame at this point + # `self._X` is guaranteed to exist and be a DataFrame at this point # ensured by `self.X_was_None_` check in `_get_forecaster_X_prediction` unknown_columns = self._X.columns else: diff --git a/sktime/forecasting/compose/_reduce.py 
b/sktime/forecasting/compose/_reduce.py index 2bf5f58ba51..503715aca5e 100644 --- a/sktime/forecasting/compose/_reduce.py +++ b/sktime/forecasting/compose/_reduce.py @@ -669,7 +669,7 @@ def pool_preds(y_preds): return y_pred def _coerce_to_numpy(y_pred): - """Coerce predictions to numpy array, assumes pd.DataFram or numpy.""" + """Coerce predictions to numpy array, assumes pd.DataFrame or numpy.""" if isinstance(y_pred, pd.DataFrame): return y_pred.values else: diff --git a/sktime/forecasting/compose/tests/test_transformer_select_forecaster.py b/sktime/forecasting/compose/tests/test_transformer_select_forecaster.py index 58c3ae6442d..6c61fd7857d 100644 --- a/sktime/forecasting/compose/tests/test_transformer_select_forecaster.py +++ b/sktime/forecasting/compose/tests/test_transformer_select_forecaster.py @@ -207,7 +207,7 @@ def test_get_params( fallback_forecaster=fallback_forecaster, ) - # Fit the forecaster and then fetch the generated paramaters + # Fit the forecaster and then fetch the generated parameters forecaster.fit(y=series_generator(), fh=horizon) forecaster_params = forecaster.get_params() diff --git a/sktime/forecasting/enbpi.py b/sktime/forecasting/enbpi.py index 591129e8baf..e486e20d871 100644 --- a/sktime/forecasting/enbpi.py +++ b/sktime/forecasting/enbpi.py @@ -18,7 +18,7 @@ class EnbPIForecaster(BaseForecaster): """ Ensemble Bootstrap Prediction Interval Forecaster. - The forecaster combines sktime forecasters, with tsbootstrap bootsrappers + The forecaster combines sktime forecasters with tsbootstrap bootstrappers and the EnbPI algorithm [1] implemented in fortuna using the tutorial from this blogpost [2]. 
diff --git a/sktime/forecasting/hf_transformers_forecaster.py b/sktime/forecasting/hf_transformers_forecaster.py index 054fb43f225..feefa9cc4c1 100644 --- a/sktime/forecasting/hf_transformers_forecaster.py +++ b/sktime/forecasting/hf_transformers_forecaster.py @@ -207,7 +207,7 @@ def _fit(self, y, X, fh): ) else: raise ValueError( - "The model type is not inferrable from the config." + "The model type is not inferable from the config. " "Thus, the model cannot be loaded." ) # Load model with the updated config diff --git a/sktime/forecasting/pytorchforecasting.py b/sktime/forecasting/pytorchforecasting.py index 81e58ff73f7..fcd675ccbb0 100644 --- a/sktime/forecasting/pytorchforecasting.py +++ b/sktime/forecasting/pytorchforecasting.py @@ -24,7 +24,7 @@ class PytorchForecastingTFT(_PytorchForecastingAdapter): parameters to initialize `TimeSeriesDataSet` [2]_ from `pandas.DataFrame` max_prediction_length will be overwrite according to fh time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals - will be infered from data, so you do not have to pass them + will be inferred from data, so you do not have to pass them train_to_dataloader_params : Dict[str, Any] (default=None) parameters to be passed for `TimeSeriesDataSet.to_dataloader()` by default {"train": True} @@ -216,7 +216,7 @@ class PytorchForecastingNBeats(_PytorchForecastingAdapter): parameters to initialize `TimeSeriesDataSet` [2]_ from `pandas.DataFrame` max_prediction_length will be overwrite according to fh time_idx, target, group_ids, time_varying_known_reals, time_varying_unknown_reals - will be infered from data, so you do not have to pass them + will be inferred from data, so you do not have to pass them train_to_dataloader_params : Dict[str, Any] (default=None) parameters to be passed for `TimeSeriesDataSet.to_dataloader()` by default {"train": True} diff --git a/sktime/forecasting/tests/test_reconcile.py b/sktime/forecasting/tests/test_reconcile.py index 
f498d1d4a12..c941622a053 100644 --- a/sktime/forecasting/tests/test_reconcile.py +++ b/sktime/forecasting/tests/test_reconcile.py @@ -41,7 +41,7 @@ def test_reconciler_fit_predict(method, flatten, no_levels): Raises ------ - This test asserts that the output of ReconcilerForecaster is actually hierarhical + This test asserts that the output of ReconcilerForecaster is actually hierarchical in that the predictions sum together appropriately. It also tests the index and columns of the fitted s and g matrix from each method and finally tests if the method works for both named and unnamed indexes diff --git a/sktime/libs/pykalman/sqrt/bierman.py b/sktime/libs/pykalman/sqrt/bierman.py index e41a837caf0..efc77ad31d4 100644 --- a/sktime/libs/pykalman/sqrt/bierman.py +++ b/sktime/libs/pykalman/sqrt/bierman.py @@ -1,4 +1,4 @@ -"""Bierman's verion of the Kalman Filter. +"""Bierman's version of the Kalman Filter. ===================================== Inference for Linear-Gaussian Systems diff --git a/sktime/libs/pykalman/tests/test_standard.py b/sktime/libs/pykalman/tests/test_standard.py index a36ab249124..69948d588b3 100644 --- a/sktime/libs/pykalman/tests/test_standard.py +++ b/sktime/libs/pykalman/tests/test_standard.py @@ -30,7 +30,7 @@ def data(): ) class TestKalmanFilter: """All of the actual tests to check against an implementation of the usual - Kalman Filter. Abstract so that sister implementations can re-use these + Kalman Filter. Abstract so that sister implementations can reuse these tests. """ diff --git a/sktime/performance_metrics/forecasting/probabilistic/_classes.py b/sktime/performance_metrics/forecasting/probabilistic/_classes.py index f9a26218c96..76405a316f9 100644 --- a/sktime/performance_metrics/forecasting/probabilistic/_classes.py +++ b/sktime/performance_metrics/forecasting/probabilistic/_classes.py @@ -989,7 +989,7 @@ class AUCalibration(_BaseDistrForecastingMetric): Computes the unsigned area between the calibration curve and the diagonal. 
The calibration curve is the cumulative curve of the sample of - predictive cumulative distibution functions evaluated at the true values. + predictive cumulative distribution functions evaluated at the true values. Mathematically, let :math:`d_1, \dots, d_N` be the predictive distributions, let :math:`y_1, \dots, y_N` be the true values, and let :math:`F_i` be the diff --git a/sktime/registry/_tags.py b/sktime/registry/_tags.py index 546a4760727..4885a1072c9 100644 --- a/sktime/registry/_tags.py +++ b/sktime/registry/_tags.py @@ -479,7 +479,7 @@ class capability__feature_importance(_BaseTag): If the tag is ``True``, the estimator can produce feature importances. - Feature importances are queriable by the fitted parameter interface + Feature importances are queryable by the fitted parameter interface via ``get_fitted_params``, after calling ``fit`` of the respective estimator. If the tag is ``False``, the estimator does not produce feature importances. @@ -544,7 +544,7 @@ class capability__train_estimate(_BaseTag): produce and store an estimate of their own statistical performance, e.g., via out-of-bag estimates, or cross-validation. - Training performance estimates are queriable by the fitted parameter interface + Training performance estimates are queryable by the fitted parameter interface via ``get_fitted_params``, after calling ``fit`` of the respective estimator. If the tag is ``False``, the estimator does not produce @@ -616,7 +616,7 @@ class capability__exogeneous(_BaseTag): that can be used to improve forecasting accuracy. If the forecaster uses exogeneous data (``ignore-exogeneous-X=False``), - the ``X`` parmameter in ``fit``, ``predict``, and other methods + the ``X`` parameter in ``fit``, ``predict``, and other methods can be used to pass exogeneous data to the forecaster. 
If the ``X-y-must-have-same-index`` tag is ``True``, diff --git a/sktime/regression/distance_based/_time_series_neighbors.py b/sktime/regression/distance_based/_time_series_neighbors.py index 81b809c9f39..0a76feb79e2 100644 --- a/sktime/regression/distance_based/_time_series_neighbors.py +++ b/sktime/regression/distance_based/_time_series_neighbors.py @@ -78,7 +78,7 @@ class KNeighborsTimeSeriesRegressor(_BaseKnnTimeSeriesEstimator, BaseRegressor): X, X2 which are pd_multiindex and numpy3D mtype can be pairwise panel transformer inheriting from BasePairwiseTransformerPanel distance_params : dict, optional. default = None. - dictionary for metric parameters , in case that distane is a str + dictionary for metric parameters, in case that distance is a str distance_mtype : str, or list of str optional. default = None. mtype that distance expects for X and X2, if a callable only set this if distance is not BasePairwiseTransformerPanel descendant diff --git a/sktime/split/tests/test_expandingcutoff.py b/sktime/split/tests/test_expandingcutoff.py index 4be8530c8ed..9d02612811c 100644 --- a/sktime/split/tests/test_expandingcutoff.py +++ b/sktime/split/tests/test_expandingcutoff.py @@ -117,7 +117,7 @@ def test_expandingcutoff_splitloc_004(): reason="run test only if softdeps are present and incrementally (if requested)", ) def test_expandingcutoff_hiearchical_splitloc_005(): - """Test hiearchical splitloc with datetime""" + """Test hierarchical splitloc with datetime""" y = _make_hierarchical( min_timepoints=6, max_timepoints=10, @@ -157,7 +157,7 @@ def test_expandingcutoff_hiearchical_splitloc_005(): reason="run test only if softdeps are present and incrementally (if requested)", ) def test_expandingcutoff_hiearchical_forecastbylevel_006(): - """Test hiearchical with forecast by level""" + """Test hierarchical with forecast by level""" from sktime.forecasting.compose import ForecastByLevel from sktime.forecasting.model_selection import ForecastingGridSearchCV from 
sktime.forecasting.naive import NaiveForecaster diff --git a/sktime/tests/test_all_estimators.py b/sktime/tests/test_all_estimators.py index a8a56a7eaa1..ec247c4d0ea 100644 --- a/sktime/tests/test_all_estimators.py +++ b/sktime/tests/test_all_estimators.py @@ -902,7 +902,7 @@ def test_inheritance(self, estimator_class): # one of them is a transformer base type or _BaseGlobalForecaster type # Global forecasters inherit from _BaseGlobalForecaster, # _BaseGlobalForecaster inherit from BaseForecaster - # therefor, global forecasters is subclass of + # therefore, global forecasters are subclasses of # _BaseGlobalForecaster and BaseForecaster if n_base_types > 1: assert issubclass(estimator_class, VALID_TRANSFORMER_TYPES) or issubclass( diff --git a/sktime/transformations/compose/_transformif.py b/sktime/transformations/compose/_transformif.py index f492b6c5f31..916d6558390 100644 --- a/sktime/transformations/compose/_transformif.py +++ b/sktime/transformations/compose/_transformif.py @@ -36,7 +36,7 @@ class TransformIf(_DelegatedTransformer): In other methods, behaves as ``then_est`` or ``else_est``, as above. - Note: ``then_trafo`` and ``else_trafo`` must hae the same input/output signature, + Note: ``then_trafo`` and ``else_trafo`` must have the same input/output signature, e.g., Series-to-Series, or Series-to-Primitives. Parameters diff --git a/sktime/transformations/hierarchical/tests/test_reconcile.py b/sktime/transformations/hierarchical/tests/test_reconcile.py index b3e298ca2a8..e3aaaa43670 100644 --- a/sktime/transformations/hierarchical/tests/test_reconcile.py +++ b/sktime/transformations/hierarchical/tests/test_reconcile.py @@ -41,7 +41,7 @@ def test_reconciler_fit_transform(method, flatten, no_levels): Raises ------ - This test asserts that the output of Reconciler is actually hierarhical + This test asserts that the output of Reconciler is actually hierarchical in that the predictions sum together appropriately. 
It also tests the index and columns of the fitted s and g matrix from each method and finally tests if the method works for both named and unnamed indexes diff --git a/sktime/transformations/panel/catch22.py b/sktime/transformations/panel/catch22.py index 29b6bf08e45..7b07c798132 100644 --- a/sktime/transformations/panel/catch22.py +++ b/sktime/transformations/panel/catch22.py @@ -463,7 +463,7 @@ def _prepare_output_col_names( Returns ------- Union[range, List[int], List[str]] - Column labels for ouput DataFrame. + Column labels for output DataFrame. """ if self.col_names == "range": return range(n_features) diff --git a/sktime/utils/deep_equals/_deep_equals.py b/sktime/utils/deep_equals/_deep_equals.py index 96ba3f6b36a..567f8ba8076 100644 --- a/sktime/utils/deep_equals/_deep_equals.py +++ b/sktime/utils/deep_equals/_deep_equals.py @@ -195,10 +195,10 @@ def _gluonts_PandasDataset_equals_plugin(x, y, return_msg=False, deep_equals=Non Parameters ---------- - x : gluonts.dataset.pandas.PandasDatset + x : gluonts.dataset.pandas.PandasDataset The first pandasDataset to compare - y : gluonts.dataset.pandas.PandasDatset + y : gluonts.dataset.pandas.PandasDataset The second pandasDataset to compare return_msg : bool, optional diff --git a/sktime/utils/dependencies/_dependencies.py b/sktime/utils/dependencies/_dependencies.py index 57dae400d40..73ac83d6052 100644 --- a/sktime/utils/dependencies/_dependencies.py +++ b/sktime/utils/dependencies/_dependencies.py @@ -30,7 +30,7 @@ def _check_soft_dependencies( ---------- packages : str or list/tuple of str, or length-1-tuple containing list/tuple of str str should be package names and/or package version specifications to check. - Each str must be a PEP 440 compatibe specifier string, for a single package. + Each str must be a PEP 440 compatible specifier string, for a single package. For instance, the PEP 440 compatible package name such as "pandas"; or a package requirement specifier string such as "pandas>1.2.3". 
arg can be str, kwargs tuple, or tuple/list of str, following calls are valid: @@ -442,7 +442,7 @@ def _check_env_marker(obj, package=None, msg=None, severity="error"): if not isinstance(msg, str): msg = ( f"{class_name} requires an environment to satisfy " - f"packaging marker spec {est_marker}, but enviroment does not satisfy it." + f"packaging marker spec {est_marker}, but environment does not satisfy it." ) if package is not None: diff --git a/sktime/utils/plotting.py b/sktime/utils/plotting.py index 9af6736d06d..6ec2d2d7223 100644 --- a/sktime/utils/plotting.py +++ b/sktime/utils/plotting.py @@ -532,7 +532,7 @@ def plot_calibration(y_true, y_pred, ax=None): e.g., via ``alpha`` in ``predict_quantiles``. Let :math:`y_1, \dots, y_N` be the actual values in ``y_true``, - and let :math:`\widehat{y}_{i,j}`, for `i = 1 \dots N, j = 1 \dota k` + and let :math:`\widehat{y}_{i,j}`, for `i = 1 \dots N, j = 1 \dots k` be quantile predictions at quantile point :math:`p_j`, of the conditional distribution of :math:`y_i`, as contained in ``y_pred``.