Participate in iminuit v2.0 beta? #1166
Hi @HDembinski. Thanks for thinking of us and giving us this heads up.
This is very cool to hear. Congratulations!
I know that I don't do a very good job of checking the warnings, but we can do some checks to make sure we're not hitting any.
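One lightweight way to do that, as a sketch: escalate `DeprecationWarning` to an error inside the test suite, so any use of interface slated for removal fails loudly (the toy cost function below is illustrative, not pyhf's actual tests):

```python
import warnings

import iminuit


def fcn(x):
    # toy quadratic cost with its minimum at x = 1
    return (x - 1.0) ** 2


def test_no_iminuit_deprecations():
    # Escalate DeprecationWarning to an error so that any use of interface
    # slated for removal in iminuit v2.0 fails the test suite loudly.
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        m = iminuit.Minuit(fcn, x=0.0)
        m.migrad()
```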
Very much so. We have a nightly workflow that runs the test suite using an install from the HEAD of the main branch (see `pyhf/.github/workflows/dependencies-head.yml`, lines 35 to 55 at `a3b34a5`).
If you already have a branch beyond the main …
We took care of them all, generally, with the major rewrite of our underlying optimizer API.
Thank you! Since you are testing develop, you should already get the latest, but you can also install the beta from test.pypi: …
Yup! We're already starting to see tests fail there (…).
@matthewfeickert Thank you for the feedback, but I am thoroughly confused. This looks like you are calling …
Ah, it looks like you either reimplemented …
We construct it here: https://github.com/scikit-hep/pyhf/blob/master/src/pyhf/optimize/opt_minuit.py#L67
Hm. I'll have to look at this more later, as running the "HEAD of dependencies" workflow with those changes to this block (`pyhf/src/pyhf/optimize/opt_minuit.py`, lines 122 to 132 at `28fdfe9`) on branch …
Oh. Looking at https://github.com/scikit-hep/iminuit/blob/2503a042a292657d573912bb64a3407df76ada89/src/iminuit/_minimize.py#L112-L121 it seems that we want …
Hm. Still getting …
I mostly ripped out some of the Python-facing code from iminuit to write up the optimizer wrapper here, so you're right. I liked the idea that you provided a … Mainly, we needed a bit more control over instantiating the object vs. actually running the fit (so it was split up this way).
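For context, the `_minimize.py` module linked above is iminuit's scipy-compatible interface. A minimal usage sketch, with a made-up toy cost function:

```python
import numpy as np

from iminuit import minimize  # scipy.optimize.minimize-style wrapper around Migrad


def fcn(x):
    # toy quadratic bowl with its minimum at (2, -1)
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2


result = minimize(fcn, np.array([0.0, 0.0]))
print(result.x)     # best-fit parameters, approximately [2, -1]
print(result.nfev)  # number of function evaluations used
```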
I think this shows that the counters are still messy and confusing. I am going to remove …
The develop branch had a bug, which I didn't detect before. I merged scikit-hep/iminuit#514, which fixes the bug, but you still need to change the counter names as described in my previous message. |
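For anyone following along, a minimal sketch of where the call counters live in the v2 interface (assuming the v2 names `nfcn` and `ngrad`; the toy cost function is illustrative):

```python
import iminuit


def fcn(x):
    return (x - 1.0) ** 2


m = iminuit.Minuit(fcn, x=0.0)
m.migrad()

# In v2 the accumulated call counters are exposed directly on the Minuit object:
print(m.nfcn)   # number of cost-function calls so far
print(m.ngrad)  # number of gradient calls so far (0 here, since no gradient is given)
```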
Okay, with that we're now passing that section but failing at line 47 (at `28fdfe9`) with:

```
>       result = pyhf.infer.mle.fit(data, m, do_grad=do_grad, do_stitch=do_stitch)
tests/test_optim.py:105:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/mle.py:122: in fit
return opt.minimize(
src/pyhf/optimize/mixins.py:136: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
src/pyhf/optimize/mixins.py:38: in _internal_minimize
result = self._minimize(
src/pyhf/optimize/opt_minuit.py:101: in _minimize
minimizer.migrad(ncall=maxiter)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <iminuit._minuit.Minuit object at 0x7f57644a9f40>, ncall = 100000
precision = None, iterate = 5
def migrad(self, ncall=None, precision=None, iterate=5):
"""Run MIGRAD.
MIGRAD is a robust minimisation algorithm which earned its reputation
in 40+ years of almost exclusive usage in high-energy physics. How
MIGRAD works is described in the `MINUIT paper`_.
**Arguments:**
* **ncall**: integer or None, optional; (approximate)
maximum number of call before MIGRAD will stop trying. Default: None
(indicates to use MIGRAD's internal heuristic). Note: MIGRAD may slightly
violate this limit, because it checks the condition only after a full
iteration of the algorithm, which usually performs several function calls.
* **precision**: override Minuit precision estimate for the cost function.
Default: None (= use epsilon of a C++ double). If the cost function has a
lower precision (e.g. of a C++ float), setting this to a lower value will
accelerate convergence and reduce the rate of unsuccessful convergence.
* **iterate**: automatically call Migrad up to N times if convergence
was not reached. Default: 5. This simple heuristic makes Migrad converge
more often even if the numerical precision of the cost function is low.
Setting this to 1 disables the feature.
**Return:**
:ref:`function-minimum-sruct`, list of :ref:`minuit-param-struct`
"""
if ncall is None:
ncall = 0 # tells C++ Minuit to use its internal heuristic
if iterate < 1:
raise ValueError("iterate must be at least 1")
migrad = MnMigrad(self._fcn, self._last_state, self.strategy)
migrad.set_print_level(self.print_level)
if precision is not None:
migrad.precision = precision
# Automatically call Migrad up to `iterate` times if minimum is not valid.
# This simple heuristic makes Migrad converge more often.
for _ in range(iterate):
> fm = migrad(ncall, self.tol)
E       RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)
```
I am not sure what is going on.
@HDembinski I'm now also seeing this in runs:

```
def test_hypotest_backends(backend, kwargs):
"""
Check that hypotest runs fully across all backends for all calculator types.
"""
pdf = pyhf.simplemodels.hepdata_like(
signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
)
data = [51, 48] + pdf.config.auxdata
> assert pyhf.infer.hypotest(1.0, data, pdf, **kwargs) is not None
tests/test_infer.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/__init__.py:146: in hypotest
teststat = calc.teststatistic(poi_test)
src/pyhf/infer/calculators.py:258: in teststatistic
qmu_v = teststat_func(
src/pyhf/infer/test_statistics.py:177: in qmu_tilde
return _qmu_like(mu, data, pdf, init_pars, par_bounds, fixed_params)
src/pyhf/infer/test_statistics.py:19: in _qmu_like
tmu_like_stat, (_, muhatbhat) = _tmu_like(
src/pyhf/infer/test_statistics.py:38: in _tmu_like
mubhathat, fixed_poi_fit_lhood_val = fixed_poi_fit(
src/pyhf/infer/mle.py:190: in fixed_poi_fit
return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
src/pyhf/infer/mle.py:122: in fit
return opt.minimize(
src/pyhf/optimize/mixins.py:136: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
src/pyhf/optimize/mixins.py:35: in _internal_minimize
minimizer = self._get_minimizer(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pyhf.optimize.minuit_optimizer object at 0x7f3fa6e957c0>
objective_and_grad = <function wrap_objective.<locals>.func at 0x7f3f84139b80>
init_pars = [1.0, 1.0, 1.0]
init_bounds = [(0, 10), (1e-10, 10.0), (1e-10, 10.0)], fixed_vals = [(0, 1.0)]
do_grad = False
def _get_minimizer(
self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False
):
step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]
fixed_vals = fixed_vals or []
# Minuit wants True/False for each parameter
fixed_bools = [False] * len(init_pars)
for index, val in fixed_vals:
fixed_bools[index] = True
init_pars[index] = val
step_sizes[index] = 0.0
# Minuit requires jac=callable
if do_grad:
wrapped_objective = lambda pars: objective_and_grad(pars)[0]
jac = lambda pars: objective_and_grad(pars)[1]
else:
wrapped_objective = objective_and_grad
jac = None
kwargs = dict(
fcn=wrapped_objective,
grad=jac,
start=init_pars,
error=step_sizes,
limit=init_bounds,
fix=fixed_bools,
print_level=self.verbose,
errordef=self.errordef,
)
> return iminuit.Minuit.from_array_func(**kwargs)
E       AttributeError: type object 'Minuit' has no attribute 'from_array_func'
```

It looks like scikit-hep/iminuit#526 removed …
I think the replacement for …
Yes, you can just use `__init__` now: `Minuit(fcn_np, array_of_starting_values)`.
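A sketch of what that v2-style construction can look like in place of `from_array_func`, using a toy array-valued cost function (variable names chosen to mirror the `_get_minimizer` snippet above; pyhf's actual wiring may differ):

```python
import iminuit
import numpy as np


def fcn_np(pars):
    # toy array-valued cost function with its minimum at (1, 1)
    return np.sum((pars - 1.0) ** 2)


init_pars = [0.5, 0.5]
step_sizes = [0.1, 0.1]
init_bounds = [(0.0, 10.0), (0.0, 10.0)]
fixed_bools = [False, False]

# v1: iminuit.Minuit.from_array_func(fcn_np, init_pars, error=..., limit=..., fix=..., ...)
# v2: pass the function and starting values straight to __init__ and set
#     everything else through properties.
minimizer = iminuit.Minuit(fcn_np, init_pars)
minimizer.errordef = 1  # a property in v2, no longer a constructor keyword
minimizer.errors = step_sizes
minimizer.limits = init_bounds
minimizer.fixed = fixed_bools
minimizer.migrad()
```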
The …
Right, at the time when @alexander-held asked the question, there was no changelog yet. |
iminuit v2.0 is released; the breaking changes in the interface are documented in the changelog, which can serve as an upgrade guide.
Cross-posting with the PR:

```
_______________ test_optimizer_stitching[minuit-pytorch_backend] _______________
backend = <class 'pyhf.tensor.pytorch_backend.pytorch_backend'>
optimizer = 'minuit'
@pytest.mark.parametrize(
'backend',
[
pyhf.tensor.numpy_backend,
pyhf.tensor.jax_backend,
pyhf.tensor.tensorflow_backend,
pyhf.tensor.pytorch_backend,
],
)
@pytest.mark.parametrize('optimizer', ['scipy', 'minuit'])
def test_optimizer_stitching(backend, optimizer):
pyhf.set_backend(backend(precision='64b'), optimizer)
pdf = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10])
data = [125.0] + pdf.config.auxdata
> result_nostitch = pyhf.infer.mle.fixed_poi_fit(2.0, data, pdf, do_stitch=False)
tests/test_validation.py:848:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/pyhf/infer/mle.py:190: in fixed_poi_fit
return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
src/pyhf/infer/mle.py:122: in fit
return opt.minimize(
src/pyhf/optimize/mixins.py:136: in minimize
result = self._internal_minimize(**minimizer_kwargs, options=kwargs)
src/pyhf/optimize/mixins.py:38: in _internal_minimize
result = self._minimize(
src/pyhf/optimize/opt_minuit.py:129: in _minimize
minimizer.migrad(ncall=maxiter)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = (Param(number=0, name='x0', value=2.0, error=0.0, merror=None, is_const=False, is_fixed=True, has_limits=True, has_low...lse, is_fixed=False, has_limits=True, has_lower_limit=True, has_upper_limit=True, lower_limit=1e-10, upper_limit=10.0))
ncall = 100000, iterate = 5
def migrad(self, ncall=None, iterate=5):
"""Run MnMigrad from the Minuit2 library.
MIGRAD is a robust minimisation algorithm which earned its reputation
in 40+ years of almost exclusive usage in high-energy physics. How
MIGRAD works is described in the `MINUIT paper`_.
**Arguments:**
* **ncall**: integer or None, optional; (approximate)
maximum number of calls before minimization will be aborted. Default: None
(indicates to use an internal heuristic). Note: The limit may be slightly
violated, because the condition is checked only after a full iteration of
the algorithm, which usually performs several function calls.
* **iterate**: automatically call Migrad up to N times if convergence
was not reached. Default: 5. This simple heuristic makes Migrad converge
more often even if the numerical precision of the cost function is low.
Setting this to 1 disables the feature.
**Return:**
self
"""
if ncall is None:
ncall = 0 # tells C++ Minuit to use its internal heuristic
if iterate < 1:
raise ValueError("iterate must be at least 1")
migrad = MnMigrad(self._fcn, self._last_state, self.strategy)
# Automatically call Migrad up to `iterate` times if minimum is not valid.
# This simple heuristic makes Migrad converge more often.
for _ in range(iterate):
# workaround: precision must be set again after each call to MnMigrad
if self._precision is not None:
migrad.precision = self._precision
> fm = migrad(ncall, self._tolerance)
E RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)
/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/iminuit/_minuit.py:547: RuntimeError
```
From PR #1208
@HDembinski, while we said that we'll work on casting to NumPy for the time being, I'll add that this might be nice down the road to have in …
Agreed.
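On that casting point: the `Unable to cast Python instance to C++ type` failures above are consistent with the objective handing Minuit a backend tensor rather than a plain Python float, since C++ Minuit2 only consumes doubles. A minimal sketch of the kind of cast meant here (the wrapper is hypothetical, not pyhf's actual fix):

```python
import numpy as np


def cast_objective_to_float(objective):
    """Wrap an objective so Minuit always receives a plain Python float."""

    def wrapped(pars):
        # np.asarray handles backend scalars (torch, tensorflow, jax) that
        # expose __array__; float() then yields the double C++ Minuit2 expects.
        return float(np.asarray(objective(pars)))

    return wrapped
```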
Dear pyhf team,
I am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython with pybind11 as the tool to wrap C++ Minuit2, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed at the top of this PR:
scikit-hep/iminuit#502
Switching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests from iminuit v1.x. However, I would like to use this opportunity to finally remove the interfaces that were successively marked as deprecated in versions 1.3 to 1.5.
Therefore, my two questions to you: …
Best regards,
Hans, iminuit maintainer