feat: Use iminuit v2.0 API #1208
Conversation
(force-pushed from 4944bdf to 2d95cac)
For reasons I'm not 100% sure of at the moment, these tests are failing (lines 229 to 239 at 924b86c):
_______________________________________________________________________________________ test_set_tolerance[pytorch] ________________________________________________________________________________________
backend = (<pyhf.tensor.pytorch_backend.pytorch_backend object at 0x7fd480eb8280>, None)
def test_set_tolerance(backend):
m = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10.0])
data = pyhf.tensorlib.astensor([125.0] + m.config.auxdata)
assert pyhf.infer.mle.fit(data, m, tolerance=0.01) is not None
pyhf.set_backend(pyhf.tensorlib, pyhf.optimize.scipy_optimizer(tolerance=0.01))
assert pyhf.infer.mle.fit(data, m) is not None
pyhf.set_backend(pyhf.tensorlib, pyhf.optimize.minuit_optimizer(tolerance=0.01))
> assert pyhf.infer.mle.fit(data, m) is not None
tests/test_optim.py:239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/mle.py:123: in fit
twice_nll, data, pdf, init_pars, par_bounds, fixed_vals, **kwargs
src/pyhf/optimize/mixins.py:136: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
src/pyhf/optimize/mixins.py:45: in _internal_minimize
options=options,
src/pyhf/optimize/opt_minuit.py:132: in _minimize
minimizer.migrad(ncall=maxiter)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = (Param(number=0, name='x0', value=1.0, error=0.01, merror=None, is_const=False, is_fixed=False, has_limits=True, has_l...lse, is_fixed=False, has_limits=True, has_lower_limit=True, has_upper_limit=True, lower_limit=1e-10, upper_limit=10.0))
ncall = 100000, iterate = 5
def migrad(self, ncall=None, iterate=5):
"""Run MnMigrad from the Minuit2 library.
MIGRAD is a robust minimisation algorithm which earned its reputation
in 40+ years of almost exclusive usage in high-energy physics. How
MIGRAD works is described in the `MINUIT paper`_.
**Arguments:**
* **ncall**: integer or None, optional; (approximate)
maximum number of calls before minimization will be aborted. Default: None
(indicates to use an internal heuristic). Note: The limit may be slightly
violated, because the condition is checked only after a full iteration of
the algorithm, which usually performs several function calls.
* **iterate**: automatically call Migrad up to N times if convergence
was not reached. Default: 5. This simple heuristic makes Migrad converge
more often even if the numerical precision of the cost function is low.
Setting this to 1 disables the feature.
**Return:**
self
"""
if ncall is None:
ncall = 0 # tells C++ Minuit to use its internal heuristic
if iterate < 1:
raise ValueError("iterate must be at least 1")
migrad = MnMigrad(self._fcn, self._last_state, self.strategy)
# Automatically call Migrad up to `iterate` times if minimum is not valid.
# This simple heuristic makes Migrad converge more often.
for _ in range(iterate):
# workaround: precision must be set again after each call to MnMigrad
if self._precision is not None:
migrad.precision = self._precision
> fm = migrad(ncall, self._tolerance)
E RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)
../../../.venvs/pyhf-CPU/lib/python3.7/site-packages/iminuit/_minuit.py:547: RuntimeError

also from

_______________ test_optimizer_stitching[minuit-pytorch_backend] _______________
backend = <class 'pyhf.tensor.pytorch_backend.pytorch_backend'>
optimizer = 'minuit'
@pytest.mark.parametrize(
'backend',
[
pyhf.tensor.numpy_backend,
pyhf.tensor.jax_backend,
pyhf.tensor.tensorflow_backend,
pyhf.tensor.pytorch_backend,
],
)
@pytest.mark.parametrize('optimizer', ['scipy', 'minuit'])
def test_optimizer_stitching(backend, optimizer):
pyhf.set_backend(backend(precision='64b'), optimizer)
pdf = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10])
data = [125.0] + pdf.config.auxdata
> result_nostitch = pyhf.infer.mle.fixed_poi_fit(2.0, data, pdf, do_stitch=False)
tests/test_validation.py:848:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/mle.py:190: in fixed_poi_fit
return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
src/pyhf/infer/mle.py:122: in fit
return opt.minimize(
src/pyhf/optimize/mixins.py:136: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
src/pyhf/optimize/mixins.py:38: in _internal_minimize
result = self._minimize(
src/pyhf/optimize/opt_minuit.py:129: in _minimize
minimizer.migrad(ncall=maxiter)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = (Param(number=0, name='x0', value=2.0, error=0.0, merror=None, is_const=False, is_fixed=True, has_limits=True, has_low...lse, is_fixed=False, has_limits=True, has_lower_limit=True, has_upper_limit=True, lower_limit=1e-10, upper_limit=10.0))
ncall = 100000, iterate = 5
def migrad(self, ncall=None, iterate=5):
"""Run MnMigrad from the Minuit2 library.
MIGRAD is a robust minimisation algorithm which earned its reputation
in 40+ years of almost exclusive usage in high-energy physics. How
MIGRAD works is described in the `MINUIT paper`_.
**Arguments:**
* **ncall**: integer or None, optional; (approximate)
maximum number of calls before minimization will be aborted. Default: None
(indicates to use an internal heuristic). Note: The limit may be slightly
violated, because the condition is checked only after a full iteration of
the algorithm, which usually performs several function calls.
* **iterate**: automatically call Migrad up to N times if convergence
was not reached. Default: 5. This simple heuristic makes Migrad converge
more often even if the numerical precision of the cost function is low.
Setting this to 1 disables the feature.
**Return:**
self
"""
if ncall is None:
ncall = 0 # tells C++ Minuit to use its internal heuristic
if iterate < 1:
raise ValueError("iterate must be at least 1")
migrad = MnMigrad(self._fcn, self._last_state, self.strategy)
# Automatically call Migrad up to `iterate` times if minimum is not valid.
# This simple heuristic makes Migrad converge more often.
for _ in range(iterate):
# workaround: precision must be set again after each call to MnMigrad
if self._precision is not None:
migrad.precision = self._precision
> fm = migrad(ncall, self._tolerance)
E RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)
/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/iminuit/_minuit.py:547: RuntimeError

@HDembinski Any thoughts on why this seems to be happening for the PyTorch backend?
@kratsg We're also going to need to revise these tests (lines 496 to 517 at 924b86c).
The good news is that we're down to just two test sets that are failing (tests/test_optim.py and tests/test_validation.py), and I think all the (known) errors are described above, so the set of failures is reduced to a small targeted list.
Does not look trivial, but I have very little time over the weekend to look into this. I can have a look next week.
You can try what happens when you call Minuit.fcn(parameter_values). If you get the error, then that would narrow it down. (Replace Minuit with your object instance.)
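A minimal, self-contained way to try that suggestion might look like the sketch below; the toy cost function, and the guess that a torch.Tensor of parameter values is the failing ingredient, are assumptions for illustration rather than anything taken from the PR.

# Sketch of the suggested diagnostic: call the Minuit-wrapped cost function
# directly with different containers of parameter values and see which one
# triggers the pybind11 cast error.
import iminuit
import torch

def cost(a, b):
    # cheap, well-defined toy paraboloid
    return (a - 1.0) ** 2 + (b - 2.0) ** 2

m = iminuit.Minuit(cost, a=0.5, b=0.5)
print(m.fcn([0.5, 0.5]))                # plain list of values: expected to work
print(m.fcn(torch.tensor([0.5, 0.5])))  # torch.Tensor: does this already raise the RuntimeError?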
Thanks @HDembinski. This shouldn't be a high priority for you, but when you do get time for it please let us know if we can help. 👍
I want this to work in pyhf, I am eager to help, but I am very short on time right now. Thanks for adopting so quickly!
We fully understand and appreciate this 🙂 — I think we're all rather impressed with how much development work you're getting done right now when everyone is so busy at the end of the year.
It is how I procrastinate from other work.
I checked out this branch locally.
Ok, got pytorch; now when trying to run tests/test_optim.py, it complains that tensorflow is not installed. If these are optional backends, shouldn't the tests just skip when they are not installed?
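For context, the kind of per-test skip being asked about might look like the sketch below, assuming pytest's importorskip helper; as the reply further down notes, pyhf instead treats all backends as required for development, so this is illustrative only.

# Hypothetical example (not how the pyhf test suite is actually set up):
# skip a backend-specific test when that backend's package is not importable.
import pytest

def test_needs_tensorflow():
    tf = pytest.importorskip("tensorflow")  # skips this test if TensorFlow is missing
    assert tf.__version__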
I let pybind11 do the conversion. I am not sure how exactly the cast works, but I guess it expects the type to support the PySequence protocol, which torch.Tensor perhaps does not. I can try to work around that.
Okay great. I'll rebase this against […].
Okay let us know if there is any information that we can provide.
Perhaps, but as we don't ship the tests with the code, we assume that the dev team is going to be running the full test suite locally and we want to know that all the backends agree. So I guess I would say that the backends are optional for runtime but not optional for the dev team.
(force-pushed from 1f7aacf to 605d10d)
Indeed, torch.Tensor does not register as a Sequence.
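A quick way to see this observation for yourself (assuming torch is installed):

# torch.Tensor is indexable and iterable but is not registered as a
# collections.abc.Sequence, unlike a plain Python list.
import collections.abc
import torch

print(isinstance(torch.tensor([1.0, 2.0]), collections.abc.Sequence))  # False
print(isinstance([1.0, 2.0], collections.abc.Sequence))                # True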
Thanks for this diagnosis @HDembinski. I think we should avoid you doing any extra work if possible, and I believe that we can just cast to NumPy here without any issue. I'll let you know if we end up running into additional issues, but I'll work on implementing the cast-to-NumPy option.
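A minimal sketch of that cast-to-NumPy idea, under the assumption that the change would live in pyhf's src/pyhf/optimize/opt_minuit.py; the helper name and call site here are hypothetical.

# Hypothetical helper: coerce CPU backend tensors (torch/TF/JAX) to a plain
# NumPy array before handing them to iminuit, so pybind11 only ever sees
# types it already knows how to convert.
import numpy as np

def _to_numpy(values):
    return np.asarray(values, dtype=float)

# e.g. (hypothetically) before constructing the minimizer:
# init_pars = _to_numpy(init_pars)
# minimizer = iminuit.Minuit(objective, init_pars, ...)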
I am working on the implementation for iminuit. Having a custom caster allows me to improve performance a bit.
@matthewfeickert I checked that my patch scikit-hep/iminuit#574 fixes the issue.
It is now fixed in the iminuit develop branch.
(force-pushed from f087df6 to 5bc5e5b)
All looks good generally. Note that we mock out minuit since we don't need to actually run it to check this behavior: we're checking that the functions we've implemented handle the fixed values/init values correspondingly.
Just realized we do want to mock out iminuit, since in these tests we need to check the functionality of our code rather than of minuit's code.
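A rough sketch of what that kind of mocking could look like, assuming the pytest-mock mocker fixture; the patch target and the shape of the test are illustrative, not the actual pyhf tests.

# Hypothetical test shape: patch out iminuit's migrad so no real minimization
# runs, then assert on how pyhf configured the fit (fixed/initial values).
def test_minuit_fixed_values(mocker):
    import pyhf

    mocked_migrad = mocker.patch("iminuit.Minuit.migrad")
    pyhf.set_backend("numpy", "minuit")
    # ... drive e.g. pyhf.infer.mle.fixed_poi_fit(...) here, then assert on
    # the mock rather than on a real fit result:
    # assert mocked_migrad.called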
* from CHANGELOG: Minuit.np_covariance is now obsolete and was removed; see Minuit.covariance
* Minuit.np_values was removed; use Minuit.values instead or np.array(m.values)
* Minuit.np_errors was removed; use Minuit.errors instead or np.array(m.errors)
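As an illustration of those CHANGELOG items, a small self-contained example against the iminuit v2.0 API (the toy cost function is just for demonstration):

import numpy as np
import iminuit

def cost(a, b):
    # toy least-squares-style objective
    return (a - 1.0) ** 2 + (b - 2.0) ** 2

m = iminuit.Minuit(cost, a=0.0, b=0.0)
m.errordef = 1.0
m.migrad()

bestfit = np.array(m.values)   # was m.np_values() in iminuit v1.x
errors = np.array(m.errors)    # was m.np_errors()
cov = np.array(m.covariance)   # was m.np_covariance()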
(force-pushed from 54fd990 to decd4ad)
Description
Resolves #1166
Adopt the new iminuit v2.0 API.

Checklist Before Requesting Reviewer
Before Merging
For the PR Assignees: