
ENH: add global optimizer Shuffled Complex Evolution (SCE) to SciPy.optimize. #18436

Open
mcuntz wants to merge 9 commits into main from shuffled_complex_evolution
Conversation

@mcuntz commented May 6, 2023

What does this implement/fix?

This adds the global optimizer Shuffled Complex Evolution (SCE) to scipy.optimize:

  • Duan, Sorooshian and Gupta (1992) Effective and efficient global optimization for conceptual rainfall-runoff models, Water Resour Res 28, 1015-1031, doi: 10.1029/91WR02985

SCE is very popular in the hydrologic community and has performed well in Andrea Gavana's Global Optimization Benchmarks (https://infinity77.net/global_optimization/).

Additional information

The current implementation has some nice features that are missing from other optimizers in scipy.optimize:

  1. It can write out restart files so that one can continue an optimisation if the original run was interrupted. This happens, for example, on compute clusters where a job has an allocated run time.
  2. Parameters can be sampled in different ways (open and closed intervals, logarithmic). Sampling a parameter with a normal distribution when it can vary over orders of magnitude gives suboptimal solutions; it is much better to sample the parameter in log-space (e.g. Mai, J Hydrol 2023, doi: 10.1016/j.jhydrol.2023.129414; see the sketch after this list).
  3. It is also practical that one can pass args and kwargs to the objective function.
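A small sketch of the log-sampling point in item 2 above (not the PR's implementation, just the idea): a parameter spanning several orders of magnitude is drawn uniformly in log-space instead of linearly.

import numpy as np

rng = np.random.default_rng(42)
lower, upper = 1e-6, 1e2   # parameter spans 8 orders of magnitude

# linear sampling: almost all draws land in the upper decades of the range
linear = rng.uniform(lower, upper, size=1000)

# log sampling: draws are spread evenly across the orders of magnitude
logspaced = 10.0 ** rng.uniform(np.log10(lower), np.log10(upper), size=1000)

print(np.median(linear), np.median(logspaced))   # ~5e1 vs ~1e-2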

The implementation (function, class, OptimizeResult) closely follows the differential evolution code in scipy.optimize.
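For orientation, a hypothetical call, by analogy with differential_evolution; the argument names here are taken from the test snippets and signature excerpts quoted later in this thread and may not match the final API:

from scipy.optimize import rosen
from scipy.optimize import shuffled_complex_evolution  # added by this PR

bounds = [(-5, 5), (-5, 5)]
x0 = [1.0, 1.0]
result = shuffled_complex_evolution(rosen, x0, bounds, seed=1)
print(result.x, result.fun)   # an OptimizeResult, as for differential_evolution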

@mcuntz mcuntz requested a review from andyfaff as a code owner May 6, 2023 22:58
@andyfaff (Contributor) commented May 7, 2023

Thank you for this contribution @mcuntz. When introducing functionality of this kind, we ask that you bring it up for discussion on the scipy-dev mailing list first; this ensures that it has greater visibility in the project. It also reduces work, because the maintainer team can advise on how to go about the PR - we've had misguided PRs submitted that were not going to be merged (for one reason or another), which is a waste of time for the author.

Before adding new global optimisers we ask that the functionality goes through the benchmark process first. I see from the changeset that you've modified the global optimiser benchmark. Can you run sce against the benchmarks and report back with the stats please? Please do 100 repeats.

The review process for adding a new minimiser can be drawn out. This is because the ongoing maintenance cost needs to be low, which means that code style, structure, etc., need to be really good. If you feel as if the review is never ending, please don't get disheartened - 1200 lines of code take a while to get through.

We'll probably have to come up with a name other than sce, purely because it's not obvious what it is.

Lastly, there is code here that has been committed by accident, namely the PROPACK and boost files

@tupui added the enhancement, scipy.optimize, and needs-decision labels on May 7, 2023
@mcuntz (Author) commented May 9, 2023

Thanks @andyfaff for the encouraging words.

I renamed it from sce to shuffled_complex_evolution.

I ran the global benchmark suite. That was not obvious, given that the docs and readme are out of date. I had to set export SCIPY_XSLOW=1 on the command line; otherwise it would not work. I finally ran python dev.py bench -t optimize.BenchGlobal. I (try to) attach the output to this comment.
BenchGlobal.txt
While investigating why shuffled_complex_evolution did not work with the function Deb03, I found that its bounds are wrong. They were set to self._bounds = list(zip([-1.0] * self.N, [1.0] * self.N)), but the function contains x**0.75, which does not work for x < 0. Setting self._bounds = list(zip([0.0] * self.N, [1.0] * self.N)) works.
I also found that the test suite stopped at Mishra10: the output was of type int64, which json cannot dump. Setting newres.max_obj = np.max(funs).astype(float) (and the same for min_obj) solved that.
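Two minimal snippets reproducing the benchmark issues described above (independent of the PR itself):

import json
import numpy as np

# Deb03: x**0.75 is undefined for negative floats and yields NaN
x = np.array([-0.5])
print(x ** 0.75)   # [nan], with an 'invalid value encountered in power' warning

# Mishra10: json cannot serialize numpy integer scalars directly
funs = np.array([1, 2, 3])
try:
    json.dumps({'max_obj': np.max(funs)})
except TypeError as err:
    print(err)   # Object of type int64 is not JSON serializable
print(json.dumps({'max_obj': float(np.max(funs))}))   # works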

The PROPACK and boost_math submodules must have been updated when I rebased onto scipy. I am sorry, but I do not know how to remove these again (I cannot simply delete them, can I?). git submodule update --init, as suggested in forums, did not work. I also tried git submodule deinit -f . ; git submodule update --init (which did a lot of things), but to no avail. It checks out the new version again, e.g. 109a814... for boost_math:

+++ b/scipy/_lib/boost_math
@@ -1 +1 @@
-Subproject commit 298a243ccd3639b6eaa59bcdab7ab9d5f008fb36
+Subproject commit 109a814e89f77ff8a3fc8f0391f6b35a12640669

Lastly, I will write an e-mail to the scipy-dev mailing list now.

@dschmitz89 (Contributor) commented May 9, 2023

First of all, thanks for this promising PR!

> I ran the global benchmark suite. That was not obvious, given that the docs and readme are out of date. I had to set export SCIPY_XSLOW=1 on the command line; otherwise it would not work. I finally ran python dev.py bench -t optimize.BenchGlobal. I (try to) attach the output to this comment: BenchGlobal.txt
> While investigating why shuffled_complex_evolution did not work with the function Deb03, I found that its bounds are wrong. They were set to self._bounds = list(zip([-1.0] * self.N, [1.0] * self.N)), but the function contains x**0.75, which does not work for x < 0. Setting self._bounds = list(zip([0.0] * self.N, [1.0] * self.N)) works.
> I also found that the test suite stopped at Mishra10: the output was of type int64, which json cannot dump. Setting newres.max_obj = np.max(funs).astype(float) (and the same for min_obj) solved that.

Thanks also for fixing those issues in the test suite. It's quite encouraging that the benchmarks run successfully.

To make the benchmark output easier to digest, could you reuse the script we used for benchmarking DIRECT here? With some plotting on top, we should get plots like the ones on top of the DIRECT PR. That said, the plots could also be created from the benchmark output text file.

> The PROPACK and boost_math submodules must have been updated when I rebased onto scipy. I am sorry, but I do not know how to remove these again (I cannot simply delete them, can I?). git submodule update --init, as suggested in forums, did not work. I also tried git submodule deinit -f . ; git submodule update --init (which did a lot of things), but to no avail. It checks out the new version again, e.g. 109a814... for boost_math:
>
> +++ b/scipy/_lib/boost_math
> @@ -1 +1 @@
> -Subproject commit 298a243ccd3639b6eaa59bcdab7ab9d5f008fb36
> +Subproject commit 109a814e89f77ff8a3fc8f0391f6b35a12640669

I ran into that multiple times as well. Merging main and then running git submodule update --init usually fixed it for me.
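For reference, the sequence that usually reverts an accidentally bumped submodule pointer (assuming the SciPy repository is the remote named upstream) is: git fetch upstream, then git checkout upstream/main -- scipy/_lib/boost_math (and the same for the PROPACK path), then git submodule update --init, and finally commit the result.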

@rkern (Member) commented May 9, 2023

I would probably drop the restart facility for the first PR (or release). Restarting is something that, in principle, could be added to many, if not all, of the optimizers. If we're going to add such a capability to one, I think it behooves us to at least think about an interface that can be reused for the other ones too, at least in terms of the call signature. We don't want to get into the position of having slightly different conventions and argument names for each one. We could definitely add the restart facility just to this one at first, but we should first think about a design that would be likely to work for all of them.

For example, I would probably enforce the use of only one file. Exposing two filenames in the signature is a quirk of this particular implementation that wouldn't be needed for other optimizers. For that matter, there isn't a strict need for two files even for this optimizer; you can add non-.npy content, like JSON-formatted data, to an .npz just fine.
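A minimal sketch of that point, with hypothetical restart contents: arbitrary metadata can be stored next to the arrays as a JSON string, so a single .npz file suffices.

import json
import numpy as np

population = np.random.default_rng(0).random((10, 3))   # hypothetical state array
meta = {'iteration': 42, 'seed': 1}                      # hypothetical metadata

# store the JSON blob as a 0-d string array next to the numeric state
np.savez('restart.npz', population=population, meta=np.array(json.dumps(meta)))

state = np.load('restart.npz')
restored = json.loads(state['meta'].item())
print(restored['iteration'], state['population'].shape)   # 42 (10, 3)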

@andyfaff (Contributor) commented May 9, 2023

@rkern, thanks for the comments. During the review process we'll definitely get to that. At the moment I'd like to know if you think we should add the minimiser to the global optimizer stable. There's a post on scipy-dev that is testing the waters for this.

@rkern (Member) commented May 9, 2023

Yeah, I mentioned it just because it was listed as a feature over the existing solvers. I'd propose setting those aside for evaluation purposes.

Looking at the benchmark results, I'm not seeing a strong standout role for it given SHGO. Maybe it shines against SHGO on the ND=50 problems, though.

@mcuntz mcuntz force-pushed the shuffled_complex_evolution branch from 6fca6c9 to cd0d70f Compare May 21, 2023 20:40
@mcuntz mcuntz requested a review from rgommers as a code owner May 21, 2023 20:40
@mcuntz (Author) commented May 21, 2023

@dschmitz89 I could not find the script that you used for DIRECT. Here is my script to plot mean_nfev and nsuccess (i.e. nsuccess/ntrials*100) from global-bench-results.json:

#!/usr/bin/env python
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# read output from
#     export SCIPY_XSLOW=1 ; python dev.py bench -t optimize.BenchGlobal
# in subdirectory benchmarks/
ifile = 'global-bench-results.json'
with open(ifile, 'r') as f:
    res = json.load(f)

# pandas DataFrame with global optimizer as column 'go'
# and function as column 'func'
res2 = {k: pd.DataFrame(v).T.reset_index(names=['go']) for k, v in res.items()}
df = pd.concat(res2, axis=0).reset_index(level=0, names='func')

# mean per optimizer (numeric columns only; 'func' is a string column)
dfm = df.groupby('go').mean(numeric_only=True)
dfm.reset_index(inplace=True, names='go')

# plot bar chart
ngo = dfm.shape[0]
xticks = np.arange(ngo)

fig = plt.figure()

metric1 = 'mean_nfev'
ax = fig.subplots()
b1 = ax.bar(xticks, height=dfm[metric1], width=-0.4, align='edge',
            tick_label=dfm['go'], label=metric1)
ax.set_ylabel(metric1)

metric2 = 'nsuccess'
ax2 = ax.twinx()
b2 = ax2.bar(xticks, height=dfm[metric2] / dfm['ntrials'] * 100., width=0.4,
             align='edge', color='red', label=metric2)
ax2.set_ylabel(metric2 + ' (%)')

plt.legend([b1, b2], [metric1, metric2], loc='upper center')

fig.savefig('global-bench-results.png')

giving the figure global-bench-results.png: a bar chart with mean_nfev per optimizer on the left axis and the success rate nsuccess/ntrials in % on the right axis.

One can see that SCE needs few function evaluations and has a rather high success rate. However, SHGO needs even fewer function evaluations.
@rkern So I looked for high-dimensional problems. There are only one 9- and one 10-dimensional problem in the benchmarks.
print(df[df['ndim'] == 9].groupby('go').mean()) gives

         mean_nfev  ndim  nfail  nsuccess  ntrials
go
DA         18382.5   9.0   67.0      33.0    100.0
DE          1514.5   9.0   91.0       9.0    100.0
DIRECT      1778.0   9.0    1.0       0.0      1.0
SCE         1225.1   9.0   84.0      16.0    100.0
SHGO        1434.0   9.0    1.0       0.0      1.0
basinh.     5452.3   9.0  100.0       0.0    100.0

and print(df[df['ndim'] == 10].groupby('go').mean()) gives

         mean_nfev  ndim  nfail  nsuccess  ntrials
go
DA        20284.92  10.0    0.0     100.0    100.0
DE         6194.80  10.0    0.0     100.0    100.0
DIRECT      534.00  10.0    1.0       0.0      1.0
SCE        1086.64  10.0    0.0     100.0    100.0
SHGO       1240.00  10.0    0.0       1.0      1.0
basinh.   14402.86  10.0    0.0     100.0    100.0

(Note that SHGO and DIRECT are not stochastic optimizers according to the benchmark and make only one run per benchmark function.) SCE needs fewer function evaluations than SHGO in this case. But SHGO also fails for ndim=9, so it might be stopping due to some criterion.

I also rebased onto scipy main and did git submodule update --init, but the PROPACK files are still there :-(

@rgommers rgommers force-pushed the shuffled_complex_evolution branch from cd0d70f to 8e2b7e2 Compare May 22, 2023 08:57
@rgommers (Member)

I pushed an update that removed the PROPACK and boost-math submodule updates.

@rgommers (Member)

My impression is that the benchmark results do make a case for inclusion of the SCE solver in scipy.optimize. It improves on DIRECT overall, and has a higher success rate than SHGO.

@dschmitz89 (Contributor) left a comment

A first quick review.

I cannot really review the math in detail but the API is usually the bigger discussion point.

Comment on lines 47 to 48
factor1 = 1.0 - (abs(num / den)) ** 5
factor2 = 2 + (x[0] - 7.0) ** 2 + 2 * (x[1] - 7.0) ** 2

Are these and other similar changes to the benchmark functions necessary? Often an explicit double is used in power operations like here to convert the output to doubles. That is especially important for scipy's optimizers that are implemented in C or Fortran; an objective returning integers has crashed them before.

Comment on lines 325 to 326
# self.global_optimum = [[-9.99378322, -9.99918927]]
# self.fglob = -0.19990562
Suggested change
# self.global_optimum = [[-9.99378322, -9.99918927]]
# self.fglob = -0.19990562

Good find - the new optimum.

Comment on lines 294 to 305
assert_raises(TypeError, shuffled_complex_evolution, func, x0, bounds)
bounds = [(-1, 1), (-1, 1)]
assert_raises(ValueError, shuffled_complex_evolution, func, x0,
              bounds, sampling='unknown')
# test correct bool string
assert_raises(ValueError, _strtobool, 'Ja')
# test no initial population found
func = deb03
x0 = [-0.5, -0.5]
bounds = [(-1, 0), (-1, 0.)]  # should be (0, 1) to work
assert_raises(ValueError, shuffled_complex_evolution, func, x0,
              bounds, printit=1)

Please test that the correct error messages are shown using

with pytest.raises(ValueError, match="error message")

Other necessary tests include objectives that (see the sketch after this list):

  • return NaN
  • return inf
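A sketch of the match-based variant of the existing assert_raises tests; the match pattern below is a placeholder and would need to be replaced by the actual error text raised by the PR:

import pytest
from scipy.optimize import shuffled_complex_evolution  # added by this PR


def test_invalid_sampling_message():
    bounds = [(-1, 1), (-1, 1)]
    with pytest.raises(ValueError, match="sampling"):   # placeholder pattern
        shuffled_complex_evolution(sum, [0.5, 0.5], bounds, sampling='unknown')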

Comment on lines 91 to 92
# ToDo:
# - write tmp/population files (of Fortran code)
Suggested change
# ToDo:
# - write tmp/population files (of Fortran code)

Leftovers?

Comment on lines 164 to 165
restart=False, restartfile1='',
restartfile2=''):
I think we should drop the restart functionality. As @rkern also mentioned, this is a generally useful feature for all optimizers, so it should not be available only for one. The API would have to be discussed further if this is generally desired across scipy.optimize.

ngs=2, npg=0, nps=0, nspl=0, mings=0,
seed=None, iniflg=True,
alpha=0.8, beta=0.45, maxit=False, printit=2,
polish=True,

I know that differential_evolution does this too, but in my opinion running a local optimizer by default after a global one is not a good API. It is a good choice in many situations, but not in all.
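For reference, differential_evolution exposes this behaviour as a keyword, so it is at least easy to opt out of:

from scipy.optimize import differential_evolution, rosen

bounds = [(-5, 5), (-5, 5)]
# skip the final local polish (L-BFGS-B) after the global search
res = differential_evolution(rosen, bounds, polish=False, seed=1)
print(res.x, res.nfev)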

Comment on lines 229 to 242
peps : float, optional
Value of normalised geometric range needed for convergence
(default: 0.001).
ngs : int, optional
Number of complexes (default: 2).
npg : int, optional
Number of points in each complex (default: `2*nopt+1`).
nps : int, optional
Number of points in each sub-complex (default: `nopt+1`).
mings : int, optional
Minimum number of complexes required if the number of complexes is
allowed to reduce as the optimization proceeds (default: `ngs`).
nspl : int, optional
Number of evolution steps allowed for each complex before complex
@dschmitz89 (Contributor) commented Jul 9, 2023

If possible, we should try to rename arguments to something simple to understand (for example n_complexes instead of ngs). I have been guilty of bad argument names myself in the past but let's try to do better in future.

Comment on lines +197 to +200
mask : array_like, optional
Include (1, True) or exclude (0, False) parameters in minimization
(default: include all parameters). The number of parameters ``nopt`` is
``sum(mask)``.

Suggested change
mask : array_like, optional
Include (1, True) or exclude (0, False) parameters in minimization
(default: include all parameters). The number of parameters ``nopt`` is
``sum(mask)``.

This is a big API change compared to scipy's other optimizers that do not expose such functionality. It is a useful feature in principle but needs to be discussed on a bigger scale first.
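If the mask keyword were dropped, a user could still fix parameters with a small wrapper around the objective, at the cost of re-indexing the bounds by hand. A sketch of that workaround (not part of the PR), shown here with differential_evolution:

import numpy as np
from scipy.optimize import differential_evolution, rosen


def fix_parameters(func, x_full, mask):
    # x_full holds the fixed values, mask selects the free parameters
    x_full = np.asarray(x_full, dtype=float)
    mask = np.asarray(mask, dtype=bool)

    def wrapped(x_free, *args):
        x = x_full.copy()
        x[mask] = x_free
        return func(x, *args)

    return wrapped


mask = [True, False, True]          # keep the middle parameter fixed at 1.0
x_full = [0.0, 1.0, 0.0]
bounds_free = [(-5, 5), (-5, 5)]    # bounds of the free parameters only
res = differential_evolution(fix_parameters(rosen, x_full, mask), bounds_free)
print(res.x, res.fun)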

@j-bowhay (Member) left a comment

Just a quick comment, new code should have keyword-only arguments where appropriate as per https://docs.scipy.org/doc/scipy/dev/missing-bits.html#required-keyword-names
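That is, everything after a bare * in the signature must be passed by keyword; a sketch of the convention (not the PR's final signature):

def shuffled_complex_evolution(func, x0, bounds, *, seed=None, polish=True):
    ...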

@mcuntz mcuntz force-pushed the shuffled_complex_evolution branch from 8e2b7e2 to 1eb8d2c Compare July 30, 2023 12:51
@mcuntz (Author) commented Aug 1, 2023

@dschmitz89 thanks for looking at the code. I addressed (almost) all concerns from you and the others:

  • I reinstated the floats in the benchmark suite.
  • I renamed the keywords to more descriptive names.
  • I removed the restart capability, sigh :-(.
  • I require keyword arguments after mask.
  • I also added a more sensible default for the maximum number of function evaluations based on my experience.
  • I test for the matching error message. Note, however, that np.seterr is now set to raise errors on all invalid numbers. I handle NaN and Inf in SCE, but I cannot test this because the test suite turns the RuntimeWarning into an error (see the sketch after this list).
  • I use assert_allclose instead of assert_almost_equal in the tests.
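If the strictness comes from np.seterr and/or from pytest turning RuntimeWarning into errors, the NaN/Inf paths could still be exercised by relaxing those settings locally. A sketch (the objective and the expected behaviour are placeholders, not the PR's test code):

import numpy as np
import pytest
from scipy.optimize import shuffled_complex_evolution  # added by this PR


def sometimes_invalid(x):
    x = np.asarray(x, dtype=float)
    # placeholder objective that returns NaN in part of the search space
    return np.nan if x[0] < 0 else float(np.sum(x ** 2))


@pytest.mark.filterwarnings('ignore::RuntimeWarning')
def test_nan_handling():
    # np.errstate overrides a global np.seterr(...) inside this block
    with np.errstate(invalid='ignore', over='ignore'):
        res = shuffled_complex_evolution(sometimes_invalid, [0.5, 0.5],
                                         [(-1, 1), (-1, 1)])
    assert np.isfinite(res.fun)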

I did not change two things:

  1. I left the mask keyword because it is just too useful to me (but you do not have to use it ;-). I almost always want to exclude a few parameters of my models rather than optimise all of them. For example, I do not want to optimise insensitive parameters, which wastes computing time and only gives equifinality. Or I want to fix some parameters after having found strong correlations with other parameters.
  2. I left polish=True as the default. I tested the benchmark suite with and without polish: the success rate dropped from 75% to 50% without polishing. I checked that the drop for differential evolution is even larger, from more than 85% to 55%. On the other hand, polishing did not add a lot of function evaluations in these cases. From my experience, most users work with the default values (that's what I do with differential evolution, for example). Because polishing is a good choice in most cases, I think setting it to True by default improves the user experience. You can always switch it off if it does not fit your problem.

Note that the benchmark suite has its quirks. SHGO simply hangs on the problem Cola (leading to a timeout), so badly that the benchmark suite cannot even write out the rest of the json file afterwards. I simply commented out the class for the benchmarking.
There are other timeouts from other optimizers, but they only lead to a 'failed' in the output. These are:

  • differential evolution (DE) times out in the functions DeVilliersGlasser02, Dolan, Mishra09, Powell, and PowerSum
  • basinhopping and SHGO time out in Thurber

This makes the benchmark runtime painfully slow.

I will merge all the commits at the end of the process. In the meantime, just ignore the individual sub-commits.

@lesshaste commented

Apologies, I haven't been able to look at the benchmark suite, but shgo is buggy when used with the default sampling method. If you are not already using halton or sobol instead of simplicial, they might work better.

@mcuntz (Author) commented Aug 3, 2023

@lesshaste I ran the test suite for SHGO with sobol and halton. Both sampling methods also fail on Cola. Halton gave similar success rates to simplicial, but Sobol was much better, 83% vs. 63%, although it needed many more function evaluations. You can try the Cola function with the following script:

#!/usr/bin/env python
import numpy as np
import scipy.optimize as opt


def cola(x):
    '''
        global_optimum = [[0.651906, 1.30194, 0.099242, -0.883791,
                           -0.8796, 0.204651, -3.28414, 0.851188,
                           -3.46245, 2.53245, -0.895246, 1.40992,
                           -3.07367, 1.96257, -2.97872, -0.807849,
                           -1.68978]]
        fglob = 11.7464

    '''
    d = np.asarray(
        [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1.27, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1.69, 1.43, 0, 0, 0, 0, 0, 0, 0, 0],
         [2.04, 2.35, 2.43, 0, 0, 0, 0, 0, 0, 0],
         [3.09, 3.18, 3.26, 2.85, 0, 0, 0, 0, 0, 0],
         [3.20, 3.22, 3.27, 2.88, 1.55, 0, 0, 0, 0, 0],
         [2.86, 2.56, 2.58, 2.59, 3.12, 3.06, 0, 0, 0, 0],
         [3.17, 3.18, 3.18, 3.12, 1.31, 1.64, 3.00, 0, 0, 0],
         [3.21, 3.18, 3.18, 3.17, 1.70, 1.36, 2.95, 1.32, 0, 0],
         [2.38, 2.31, 2.42, 1.94, 2.85, 2.81, 2.56, 2.91, 2.97, 0.]] )

    xi = np.atleast_2d(np.asarray([0.0, x[0]] + list(x[1::2])))
    xj = np.repeat(xi, np.size(xi, 1), axis=0)
    xi = xi.T

    yi = np.atleast_2d(np.asarray([0.0, 0.0] + list(x[2::2])))
    yj = np.repeat(yi, np.size(yi, 1), axis=0)
    yi = yi.T

    inner = (np.sqrt(((xi - xj) ** 2 + (yi - yj) ** 2)) - d) ** 2
    inner = np.tril(inner, -1)

    return np.sum(np.sum(inner, axis=1))


N = 17
bounds = [[0.0, 4.0]] + list(zip([-4.0] * (N - 1),
                                 [4.0] * (N - 1)))

# DE for reference
res = opt.differential_evolution(cola, bounds)
print('DE', res)

# SHGO
sampling = 'halton'  # 'sobol', 'halton', 'simplicial'
res = opt.shgo(cola, bounds, sampling_method=sampling)
print('SHGO', sampling, res)

It always stops in Qhull, eventually.

@dschmitz89 (Contributor) commented

> @dschmitz89 thanks for looking at the code. I addressed (almost) all concerns from you and the others:
>
>   • I reinstated the floats in the benchmark suite.
>   • I renamed the keywords to more descriptive names.
>   • I removed the restart capability, sigh :-(.
>   • I require keyword arguments after mask.
>   • I also added a more sensible default for the maximum number of function evaluations based on my experience.
>   • I test for the matching error message. Note, however, that np.seterr is now set to raise errors on all invalid numbers. I handle NaN and Inf in SCE, but I cannot test this because the test suite turns the RuntimeWarning into an error.
>   • I use assert_allclose instead of assert_almost_equal in the tests.

Thanks!

> I did not change two things:
>
>   1. I left the mask keyword because it is just too useful to me (but you do not have to use it ;-). I almost always want to exclude a few parameters of my models rather than optimise all of them. For example, I do not want to optimise insensitive parameters, which wastes computing time and only gives equifinality. Or I want to fix some parameters after having found strong correlations with other parameters.

I understand that this is a useful feature. For MILP, a similar approach is used to indicate whether a variable is an integer or not. My comments about dropping it also come from the experience that such API changes make PR reviews much harder, as they often cause widespread debates. That said, if there are no objections, we can leave this functionality in, but we should agree on the API details. Opinions @andyfaff @tupui @mdhaber?

>   1. I left polish=True as the default. I tested the benchmark suite with and without polish: the success rate dropped from 75% to 50% without polishing. I checked that the drop for differential evolution is even larger, from more than 85% to 55%. On the other hand, polishing did not add a lot of function evaluations in these cases. From my experience, most users work with the default values (that's what I do with differential evolution, for example). Because polishing is a good choice in most cases, I think setting it to True by default improves the user experience. You can always switch it off if it does not fit your problem.

This is a very interesting observation. I guess we should not expect global optimizers to come arbitrarily close to the optimum. I should test how much better DIRECT becomes with such a polish option, as all the other optimizers use local optimizers under the hood.
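A quick way to test that without changing direct itself is to polish its result manually with a bounded local minimizer (a sketch):

from scipy.optimize import direct, minimize, rosen

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
res = direct(rosen, bounds)
# polish the DIRECT solution with L-BFGS-B
polished = minimize(rosen, res.x, bounds=bounds, method='L-BFGS-B')
print(res.fun, polished.fun)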

> Note that the benchmark suite has its quirks. SHGO simply hangs on the problem Cola (leading to a timeout), so badly that the benchmark suite cannot even write out the rest of the json file afterwards. I simply commented out the class for the benchmarking. There are other timeouts from other optimizers, but they only lead to a 'failed' in the output. These are:
>
>   • differential evolution (DE) times out in the functions DeVilliersGlasser02, Dolan, Mishra09, Powell, and PowerSum
>   • basinhopping and SHGO time out in Thurber
>
> This makes the benchmark runtime painfully slow.

The benchmark suite is not in good shape. I worked on it once and had a similar experience. I will open another issue about these functions to discuss what we can do about them. Thanks a lot for letting us know!

> I will merge all the commits at the end of the process. In the meantime, just ignore the individual sub-commits.

Usually, PRs get squash-merged by a maintainer. Depending on the complexity, the git history is rewritten or not. You do not need to worry about that for now :).

@lucascolley added the needs-work label on Jan 16, 2024