Commit

docs: Add and apply codespell as a pre-commit hook (#1645)
* Add and apply codespell pre-commit hook
   - c.f. https://github.com/codespell-project/codespell/
* Configure codespell to run only over *.py, *.md, *.rst files
* Ignore the HEP-specific terms "hist" and "gaus" (see the example invocation below)
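
With the hook in place, a minimal way to exercise it locally (assuming pre-commit is installed and the repository's hook environments have been built) is a console invocation along these lines:

    # Run only the codespell hook against all tracked files;
    # pre-commit applies the hook's files filter (^.*\.(py|md|rst)$) itself.
    pre-commit run codespell --all-files
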
alexander-held authored Oct 15, 2021
1 parent 48be630 commit 10e22fd
Showing 19 changed files with 48 additions and 41 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -116,7 +116,7 @@ body:
label: Actual Results
description: >-
Paste verbatim program or command output.
- Don't wrap it with tripple backticks — your whole input will be
+ Don't wrap it with triple backticks — your whole input will be
turned into a code snippet automatically.
render: console
validations:
@@ -127,7 +127,7 @@
label: pyhf Version
description: >-
Paste verbatim output from `pyhf --version` below, under the prompt line.
- Don't wrap it with tripple backticks — your whole input will be
+ Don't wrap it with triple backticks — your whole input will be
turned into a code snippet automatically.
render: console
placeholder: |
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/config.yml
@@ -1,7 +1,7 @@
# Ref: https://help.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
- - name: 🙋 Useage Questions
+ - name: 🙋 Usage Questions
url: https://github.com/scikit-hep/pyhf/discussions
about: |
Use pyhf's GitHub Discussions to ask "How do I do X with pyhf?".
7 changes: 7 additions & 0 deletions .pre-commit-config.yaml
@@ -57,3 +57,10 @@ repos:
hooks:
- id: nbqa-pyupgrade
additional_dependencies: [pyupgrade==2.29.0]

+ - repo: https://github.com/codespell-project/codespell
+   rev: v2.1.0
+   hooks:
+     - id: codespell
+       files: ^.*\.(py|md|rst)$
+       args: ["-w", "-L", "hist,gaus"]
2 changes: 1 addition & 1 deletion docs/examples/notebooks/Recast.ipynb
@@ -68,7 +68,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### The originial statistical Model\n"
"### The original statistical Model\n"
]
},
{
@@ -1184,7 +1184,7 @@
" all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]\n",
"\n",
" for nh, histo in enumerate(histoset):\n",
" # bases and exponents need to have an outer product, to esentially tile or repeat over rows/cols\n",
" # bases and exponents need to have an outer product, to essentially tile or repeat over rows/cols\n",
" bases_up = np.einsum(\n",
" 'a,b->ab', np.ones(alphaset.shape), all_histo_deltas_up[nh]\n",
" )\n",
4 changes: 2 additions & 2 deletions docs/faq.rst
@@ -110,10 +110,10 @@ Kyle Cranmer (co-author of :math:`\HiFa{}`) to study if the graph structure and
differentiation abilities of machine learning frameworks would allow them to be effective
tools for statistical fits.
Lukas would give helpful friendly advice on Matthew's project and one night [1]_ over dinner
- in CERN's R1 cafeteria the two were discussing the idea of implimenting :math:`\HiFa{}`
+ in CERN's R1 cafeteria the two were discussing the idea of implementing :math:`\HiFa{}`
in Python using machine learning libraries to drive the computation.
Continuing the discussion in Lukas's office, Lukas showed Matthew that the core statistical
- machinery could be implimented rather succinctly, and that night
+ machinery could be implemented rather succinctly, and that night
`proceeded to do so <https://github.com/scikit-hep/pyhf/commit/fd32503fb760f070a4047cb867757458b1687599>`_
and |dubbed the project pyhf|_.

4 changes: 2 additions & 2 deletions docs/governance/ROADMAP.rst
@@ -10,7 +10,7 @@ Overview and Goals
We will follow loosely Seibert’s `Heirarchy of
Needs <https://twitter.com/FRoscheck/status/1159158552298229763>`__

- |Seibert Heirarchy of Needs SciPy 2019| (`Stan
+ |Seibert Hierarchy of Needs SciPy 2019| (`Stan
Seibert <https://github.com/seibert>`__, SciPy 2019)

As a general overview that will include:
@@ -158,7 +158,7 @@ Presentations During Roadmap Timeline
2019 <https://indico.cern.ch/event/773049/contributions/3476180/>`__
(November 4-8th, 2019)

- .. |Seibert Heirarchy of Needs SciPy 2019| image:: https://pbs.twimg.com/media/EBYojw8XUAERJhZ?format=png
+ .. |Seibert Hierarchy of Needs SciPy 2019| image:: https://pbs.twimg.com/media/EBYojw8XUAERJhZ?format=png

.. |check| raw:: html

2 changes: 1 addition & 1 deletion src/pyhf/constraints.py
@@ -244,7 +244,7 @@ def make_pdf(self, pars):

# similar to expected_data() in constrained_by_poisson
# we multiply by the appropriate factor to achieve
- # the desired variance for poisson-type cosntraints
+ # the desired variance for poisson-type constraints
pois_rates = tensorlib.product(
tensorlib.stack([nuispars, self.batched_factors]), axis=0
)
2 changes: 1 addition & 1 deletion src/pyhf/infer/calculators.py
@@ -706,7 +706,7 @@ def __init__(
:math:`\tilde{q}_{\mu}`, as defined under the Wald approximation in Equation (62)
of :xref:`arXiv:1007.1727` (:func:`~pyhf.infer.test_statistics.qmu_tilde`), ``'q'``
performs the calculation using the test statistic :math:`q_{\mu}`
- (:func:`~pyhf.infer.test_statistics.qmu`), and ``'q0'`` perfoms the calculation using
+ (:func:`~pyhf.infer.test_statistics.qmu`), and ``'q0'`` performs the calculation using
the discovery test statistic :math:`q_{0}` (:func:`~pyhf.infer.test_statistics.q0`).
ntoys (:obj:`int`): Number of toys to use (how many times to sample the underlying distributions).
track_progress (:obj:`bool`): Whether to display the `tqdm` progress bar or not (outputs to `stderr`).
2 changes: 1 addition & 1 deletion src/pyhf/infer/mle.py
@@ -12,7 +12,7 @@ def __dir__():
def twice_nll(pars, data, pdf):
r"""
Two times the negative log-likelihood of the model parameters, :math:`\left(\mu, \boldsymbol{\theta}\right)`, given the observed data.
- It is used in the calculation of the test statistic, :math:`t_{\mu}`, as defiend in Equation (8) in :xref:`arXiv:1007.1727`
+ It is used in the calculation of the test statistic, :math:`t_{\mu}`, as defined in Equation (8) in :xref:`arXiv:1007.1727`
.. math::
12 changes: 6 additions & 6 deletions src/pyhf/infer/test_statistics.py
@@ -20,7 +20,7 @@ def _qmu_like(
Clipped version of _tmu_like where the returned test statistic
is 0 if muhat > 0 else tmu_like_stat.
- If the lower bound of the POI is 0 this automatically implments
+ If the lower bound of the POI is 0 this automatically implements
qmu_tilde. Otherwise this is qmu (no tilde).
"""
tensorlib, optimizer = get_backend()
@@ -41,7 +41,7 @@ def _tmu_like(
"""
Basic Profile Likelihood test statistic.
- If the lower bound of the POI is 0 this automatically implments
+ If the lower bound of the POI is 0 this automatically implements
tmu_tilde. Otherwise this is tmu (no tilde).
"""
tensorlib, optimizer = get_backend()
@@ -63,7 +63,7 @@ def _tmu_like(
def qmu(mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars=False):
r"""
The test statistic, :math:`q_{\mu}`, for establishing an upper
- limit on the strength parameter, :math:`\mu`, as defiend in
+ limit on the strength parameter, :math:`\mu`, as defined in
Equation (14) in :xref:`arXiv:1007.1727`
.. math::
@@ -152,7 +152,7 @@ def qmu_tilde(
r"""
The "alternative" test statistic, :math:`\tilde{q}_{\mu}`, for establishing
an upper limit on the strength parameter, :math:`\mu`, for models with
- bounded POI, as defiend in Equation (16) in :xref:`arXiv:1007.1727`
+ bounded POI, as defined in Equation (16) in :xref:`arXiv:1007.1727`
.. math::
:nowrap:
@@ -242,7 +242,7 @@ def qmu_tilde(
def tmu(mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars=False):
r"""
The test statistic, :math:`t_{\mu}`, for establishing a two-sided
- interval on the strength parameter, :math:`\mu`, as defiend in Equation (8)
+ interval on the strength parameter, :math:`\mu`, as defined in Equation (8)
in :xref:`arXiv:1007.1727`
.. math::
@@ -325,7 +325,7 @@ def tmu_tilde(
r"""
The test statistic, :math:`\tilde{t}_{\mu}`, for establishing a two-sided
interval on the strength parameter, :math:`\mu`, for models with
- bounded POI, as defiend in Equation (11) in :xref:`arXiv:1007.1727`
+ bounded POI, as defined in Equation (11) in :xref:`arXiv:1007.1727`
.. math::
2 changes: 1 addition & 1 deletion src/pyhf/modifiers/shapefactor.py
@@ -148,7 +148,7 @@ def __init__(self, modifiers, pdfconfig, builder_data, batch_size=None):
(len(shapefactor_mods), self.batch_size or 1, 1),
)
# access field is now
- # e.g. for a 3 channnel (3 bins, 2 bins, 5 bins) model
+ # e.g. for a 3 channel (3 bins, 2 bins, 5 bins) model
# [
# [0 1 2 0 1 0 1 2 3 4] (number of rows according to batch_size but at least 1)
# [0 1 2 0 1 0 1 2 3 4]
4 changes: 2 additions & 2 deletions src/pyhf/optimize/opt_jax.py
@@ -50,7 +50,7 @@ def wrap_objective(objective, data, pdf, stitch_pars, do_grad=False, jit_pieces=
if do_grad:

def func(pars):
- # need to conver to tuple to make args hashable
+ # need to convert to tuple to make args hashable
return _jitted_objective_and_grad(
pars,
data,
@@ -65,7 +65,7 @@ def func(pars):
else:

def func(pars):
- # need to conver to tuple to make args hashable
+ # need to convert to tuple to make args hashable
return _jitted_objective(
pars,
data,
2 changes: 1 addition & 1 deletion src/pyhf/pdf.py
@@ -514,7 +514,7 @@ def logpdf(self, maindata, pars):
Compute the logarithm of the value of the probability density.
Args:
- maindata (:obj:`tensor`): The main channnel data (a subset of the full data in a HistFactory model)
+ maindata (:obj:`tensor`): The main channel data (a subset of the full data in a HistFactory model)
pars (:obj:`tensor`): The model parameters
Returns:
8 changes: 4 additions & 4 deletions src/pyhf/tensor/jax_backend.py
@@ -155,7 +155,7 @@ def tile(self, tensor_in, repeats):

def conditional(self, predicate, true_callable, false_callable):
"""
- Runs a callable conditional on the boolean value of the evaulation of a predicate
+ Runs a callable conditional on the boolean value of the evaluation of a predicate
Example:
@@ -169,8 +169,8 @@ def conditional(self, predicate, true_callable, false_callable):
Args:
predicate (:obj:`scalar`): The logical condition that determines which callable to evaluate
- true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`true`
- false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`false`
+ true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`true`
+ false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`false`
Returns:
JAX ndarray: The output of the callable that was evaluated
@@ -216,7 +216,7 @@ def astensor(self, tensor_in, dtype="float"):
tensor_in (Number or Tensor): Tensor object
Returns:
- `jaxlib.xla_extension.DeviceArray`: A multi-dimensional, fixed-size homogenous array.
+ `jaxlib.xla_extension.DeviceArray`: A multi-dimensional, fixed-size homogeneous array.
"""
# TODO: Remove doctest:+ELLIPSIS when JAX API stabilized
try:
8 changes: 4 additions & 4 deletions src/pyhf/tensor/numpy_backend.py
@@ -140,7 +140,7 @@ def tile(self, tensor_in, repeats):

def conditional(self, predicate, true_callable, false_callable):
"""
- Runs a callable conditional on the boolean value of the evaulation of a predicate
+ Runs a callable conditional on the boolean value of the evaluation of a predicate
Example:
@@ -154,8 +154,8 @@ def conditional(self, predicate, true_callable, false_callable):
Args:
predicate (:obj:`scalar`): The logical condition that determines which callable to evaluate
- true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`true`
- false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`false`
+ true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`true`
+ false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`false`
Returns:
NumPy ndarray: The output of the callable that was evaluated
@@ -201,7 +201,7 @@ def astensor(self, tensor_in, dtype='float'):
tensor_in (Number or Tensor): Tensor object
Returns:
- `numpy.ndarray`: A multi-dimensional, fixed-size homogenous array.
+ `numpy.ndarray`: A multi-dimensional, fixed-size homogeneous array.
"""
try:
dtype = self.dtypemap[dtype]
6 changes: 3 additions & 3 deletions src/pyhf/tensor/pytorch_backend.py
@@ -93,7 +93,7 @@ def erfinv(self, tensor_in):

def conditional(self, predicate, true_callable, false_callable):
"""
- Runs a callable conditional on the boolean value of the evaulation of a predicate
+ Runs a callable conditional on the boolean value of the evaluation of a predicate
Example:
@@ -107,8 +107,8 @@ def conditional(self, predicate, true_callable, false_callable):
Args:
predicate (:obj:`scalar`): The logical condition that determines which callable to evaluate
- true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`true`
- false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`false`
+ true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`true`
+ false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`false`
Returns:
PyTorch Tensor: The output of the callable that was evaluated
6 changes: 3 additions & 3 deletions src/pyhf/tensor/tensorflow_backend.py
@@ -129,7 +129,7 @@ def tile(self, tensor_in, repeats):

def conditional(self, predicate, true_callable, false_callable):
"""
- Runs a callable conditional on the boolean value of the evaulation of a predicate
+ Runs a callable conditional on the boolean value of the evaluation of a predicate
Example:
>>> import pyhf
@@ -143,8 +143,8 @@ def conditional(self, predicate, true_callable, false_callable):
Args:
predicate (:obj:`scalar`): The logical condition that determines which callable to evaluate
- true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`true`
- false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evalutes to :code:`false`
+ true_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`true`
+ false_callable (:obj:`callable`): The callable that is evaluated when the :code:`predicate` evaluates to :code:`false`
Returns:
TensorFlow Tensor: The output of the callable that was evaluated
10 changes: 5 additions & 5 deletions tests/test_infer.py
@@ -130,7 +130,7 @@ def test_hypotest_return_tail_probs(tmpdir, hypotest_args, test_stat):
def test_hypotest_return_expected(tmpdir, hypotest_args, test_stat):
"""
Check that the return structure of pyhf.infer.hypotest with the
- additon of the return_expected keyword arg is as expected
+ addition of the return_expected keyword arg is as expected
"""
tb = pyhf.tensorlib

@@ -152,7 +152,7 @@ def test_hypotest_return_expected(tmpdir, hypotest_args, test_stat):
def test_hypotest_return_expected_set(tmpdir, hypotest_args, test_stat):
"""
Check that the return structure of pyhf.infer.hypotest with the
- additon of the return_expected_set keyword arg is as expected
+ addition of the return_expected_set keyword arg is as expected
"""
tb = pyhf.tensorlib

@@ -195,7 +195,7 @@ def test_hypotest_return_calculator(
):
"""
Check that the return structure of pyhf.infer.hypotest with the
- additon of the return_calculator keyword arg is as expected
+ addition of the return_calculator keyword arg is as expected
"""
*_, model = hypotest_args

@@ -423,7 +423,7 @@ def test_emperical_distribution(tmpdir, hypotest_args):

def test_toy_calculator(tmpdir, hypotest_args):
"""
- Check that the toy calculator is peforming as expected
+ Check that the toy calculator is performing as expected
"""
np.random.seed(0)
mu_test, data, model = hypotest_args
@@ -469,7 +469,7 @@ def test_fixed_poi(tmpdir, hypotest_args):
def test_fixed_poi(tmpdir, hypotest_args):
"""
Check that the return structure of pyhf.infer.hypotest with the
- additon of the return_expected keyword arg is as expected
+ addition of the return_expected keyword arg is as expected
"""

_, _, pdf = hypotest_args
