GLIS is a package for finding the global (GL) minimum of a function that is expensive to evaluate, possibly under constraints, using inverse (I) distance weighting and surrogate (S) radial basis functions. Compared to Bayesian optimization, GLIS is very competitive and often computationally lighter.
The package implements two main algorithms, described below.
The GLIS algorithm solves the following constrained derivative-free global optimization problem:

$$\min_x \ f(x) \quad \text{s.t.} \quad \ell \le x \le u, \ Ax \le b, \ g(x) \le 0$$

The approach is particularly useful when $f(x)$ is time-consuming to evaluate, as it attempts to minimize the number of function evaluations by actively learning a surrogate of $f$. Finite bounds $\ell \le x \le u$ are required to limit the search within a bounded set; the linear constraints $Ax \le b$ and nonlinear constraints $g(x) \le 0$ are optional.
The algorithm is based on the following paper:
[1] A. Bemporad, "Global optimization via inverse weighting and radial basis functions," Computational Optimization and Applications, vol. 77, pp. 571–595, 2020. [bib entry]
GLISp solves global optimization problems in which the objective function $f$ cannot be evaluated directly but, given two samples $x_1$ and $x_2$, it is possible to query which of the two is preferred. The preferences are encoded by a function $\pi(x_1, x_2)$, with $\pi = -1$ if $x_1$ is "better" than $x_2$, $\pi = 0$ if $x_1$ is "as good as" $x_2$, and $\pi = 1$ if $x_1$ is "worse" than $x_2$, and we want to solve the following preference-based optimization problem:

find $x^\star$ such that $\pi(x^\star, x) \le 0 \quad \forall x$

with $\ell \le x \le u$ (and, possibly, $Ax \le b$ and $g(x) \le 0$). GLISp is particularly useful to solve optimization problems that involve human assessments. In fact, there is no need to explicitly quantify an objective function $f$, which can remain implicit in the judgments of the decision maker expressing the preferences.
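When testing GLISp on synthetic benchmarks, such a preference function can be simulated by comparing values of a latent objective that the solver never sees. A minimal MATLAB sketch follows, where the latent function f_latent and the tolerance tol are illustrative assumptions, not part of the package API:

% Synthetic preference query built on a latent objective (for testing only);
% in a real application, pref would instead query a human decision maker.
f_latent = @(x) sum(x.^2);  % hidden objective (assumed, for illustration)
tol = 1e-4;                 % comparison tolerance (assumed)
pref = @(x1,x2) -(f_latent(x1) < f_latent(x2) - tol) ...
                +(f_latent(x1) > f_latent(x2) + tol); % -1 better, 1 worse, 0 if within tol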
The algorithm is based on the following paper:
[2] A. Bemporad, D. Piga, "Active preference learning based on radial basis functions," Machine Learning, vol. 110, no. 2, pp. 417–448, 2021.
C-GLISp is an extension of GLIS and GLISp to handle unknown constraints on $x$, that is, constraints whose satisfaction can only be assessed a posteriori, after querying the objective (or a preference) at a sample.
The algorithm is based on the following paper:
[3] M. Zhu, D. Piga, A. Bemporad, "C-GLISp: Preference-based global optimization under unknown constraints with applications to controller calibration," IEEE Transactions on Control Systems Technology, vol. 30, no. 3, pp. 2176–2187, Sept. 2022.
A Python version of GLIS/GLISp is also available for download.
Minimize a function $f(x)$ within box bounds $\ell \le x \le u$. The user can choose to solve the problem by feeding the function handle fun directly into the GLIS solver, or to solve the optimization problem step by step, without explicitly passing fun to GLIS.
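For concreteness, a minimal problem setup might look as follows. The objective is the standard six-hump camel benchmark used in the package's examples; the field opts.maxevals is an assumption about the options struct, used here only to fix the evaluation budget.

% Six-hump camel objective: two global minima f = -1.0316 at
% approximately (0.0898, -0.7126) and (-0.0898, 0.7126).
fun = @(x) (4 - 2.1*x(1)^2 + x(1)^4/3)*x(1)^2 + x(1)*x(2) + (4*x(2)^2 - 4)*x(2)^2;
lb = [-2; -1];            % lower bounds on x
ub = [ 2;  1];            % upper bounds on x
maxevals = 60;            % budget of function evaluations (assumed value)
opts = struct();          % start from default options
opts.maxevals = maxevals; % assumed option field setting the budget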
fprintf("Solve the problem by feeding the simulator/fun directly into the GLIS solver \n")
[xopt1, fopt1,prob_setup1] = solve_glis(fun,lb,ub,opts);
fprintf("Solve the problem incrementally (i.e., provide the function evaluation at each iteration) \n")
x_= initialize_glis(lb,ub,opts); % x_ is unscaled
for k = 1: maxevals
f_val = fun(x_);
[x_, prob_setup2] = update_glis(f_val);
end
xopt2 = prob_setup2.xbest;
fopt2 = prob_setup2.fbest;
Examples of numerical benchmark testing using GLIS can be found in the examples folder.
GLISp solves a preference-based optimization problem with preference function pref within the box bounds lb, ub.
fprintf("Solve the problem by feeding the preference expression step directly into the GLISp solver \n")
[xbest, out] = solve_glisp(pref, lb,ub,opts);
fprintf("Solve the problem incrementally (i.e., provide the preference at each iteration) \n")
[xbest2, x2] = initialize_glisp(lb,ub,opts);
for k = 1:maxevals-1
pref_val = pref(x2,xbest2);
[x2, out] = update_glisp(pref_val);
xbest2 = out.xbest;
end
xbest2 = out.xbest;
out.X = out.X(1:end-1,:);
Examples of numerical benchmark testing using GLISp can be found in the examples folder. Examples of synthetic preference functions can also be found there (e.g., glisp_function1.m).
By default, GLIS/GLISp use the inverse quadratic RBF

$$\phi(\epsilon d) = \frac{1}{1 + (\epsilon d)^2}$$

with $d$ the Euclidean distance between samples and $\epsilon$ a scalar hyper-parameter. To use a different RBF (for example, the Gaussian $\phi(\epsilon d) = e^{-(\epsilon d)^2}$), use the following code:
opts.rbf="gaussian";
The following RBFs are available in rbf_fun.m:
"gaussian", "inverse_quadratic", "multiquadric", "thin_plate_spline", "linear", "inverse_multi_quadric"
As an alternative to RBFs, GLIS can use inverse distance weighting (IDW) surrogates:
opts.rbf="idw";
Although IDW functions are simpler to evaluate, RBF surrogates usually perform better.
GLIS acquires a new sample $x_k$ by solving the following global optimization problem:

$$\min_x \ a(x) = \hat f(x) - \alpha s(x) - \delta \Delta F \, z(x)$$

where $\hat f$ is the surrogate (RBF or IDW) function, $s(x)$ is the IDW standard deviation associated with the samples collected so far, $z(x)$ is the IDW distance function that promotes exploring unvisited areas, and $\Delta F$ is the range of the observed function values (see [1] for the precise definitions). GLIS uses Particle Swarm Optimization (PSO) to determine the minimizer $x_k$ of the acquisition problem. The hyper-parameters $\alpha$ and $\delta$ trade off exploiting the surrogate against exploring the feasible set. By default, GLIS takes $\alpha = 1$ and $\delta = 0.5$. To change these defaults, use the following code:
opts.alpha=0.5; opts.delta=0.1;
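The acquisition ingredients can be sketched compactly under the IDW definitions of [1] recalled above. In the snippet below, the data X, F and the surrogate fhat are placeholders; this is an illustration of the formulas, not the package's internal implementation:

% Illustrative IDW-based acquisition; X is N-by-n (samples), F is N-by-1
% (observed values), x is a 1-by-n row vector not coinciding with a row of X.
X = [0 0; 1 0.5; -1 0.2]; F = [3; 1; 2];        % placeholder data
fhat = @(x) 1 + sum(x.^2);                      % placeholder surrogate
alpha = 1; delta = 0.5;                         % exploration trade-off parameters
w = @(x) 1./sum((X - x).^2, 2);                 % IDW weights w_i(x) = 1/||x - x_i||^2
v = @(x) w(x)/sum(w(x));                        % normalized weights
s = @(x) sqrt(v(x)'*(F - fhat(x)).^2);          % IDW standard deviation
z = @(x) (2/pi)*atan(1/sum(w(x)));              % IDW distance (exploration) function
dF = max(F) - min(F);                           % range of observed values
a = @(x) fhat(x) - alpha*s(x) - delta*dF*z(x);  % acquisition minimized by PSO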
GLISp performs acquisition in a similar way to GLIS, except that the surrogate $\hat f$ is fitted to the collected preferences rather than to function evaluations (see [2] for details).
GLISp also supports, as an alternative, acquisition based on the maximization of the probability of improvement, as defined in [2]. This can be specified as follows:
opts.acquisition_method=2; % 1 = IDW acquisition function, 2 = probability of improvement
By default, opts.acquisition_method = 1.
The performance of GLISp can usually be improved by recalibrating the RBF parameter $\theta$. To enable recalibration, use the following code:
opts.RBFcalibrate = 1;
opts.thetas=thetas;
opts.RBFcalibrationSteps = steps;
where steps is an array of step indices at which recalibration must be performed, and thetas is the array of values of $\theta$ tested during recalibration.
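For example, one might test a logarithmic grid of RBF parameters and recalibrate a few times during the run; the specific values below are illustrative assumptions, not recommended settings:

% Illustrative recalibration schedule (values are assumptions).
thetas = logspace(-1, 1, 11); % candidate values of theta, from 0.1 to 10
steps  = [10 25 50];          % recalibrate after these query indices
opts.RBFcalibrate = 1;
opts.thetas = thetas;
opts.RBFcalibrationSteps = steps;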
As detailed in [3], GLIS/GLISp can handle unknown constraints on $x$, i.e., constraints whose satisfaction can only be assessed a posteriori when a sample is queried. See the examples folder for how to instruct the solver to collect such extra information during queries.
% use GLIS (eval_feas_ and eval_sat_ provide the feasibility and satisfaction
% information collected at each query; see the examples folder for their setup)
[xbest, fbest, out] = solve_glis(f, lb, ub, opts, eval_feas_, eval_sat_);
% use GLISp
[xbest, out] = solve_glisp(pref, lb, ub, opts, eval_feas_, eval_sat_);
In GLIS, when the objective function takes very large and very small values, it is possible to fit the surrogate to a nonlinear transformation of the objective rather than to the raw objective values. For example, in the camel-six-humps example we can build the surrogate of $\log(f(x) + 2)$ rather than of $f(x)$:
opts.obj_transform = @(f) log(f+2.);
Further options for executing GLIS/GLISp are detailed below:

svdtol: tolerance used in the SVD decomposition when fitting the RBF function.
shrink_range: flag; if true, the given bounds are shrunk to the bounding box of the feasible constrained set.
constraint_penalty: penalty used to penalize the violation of the constraints.
feasible_sampling: flag; if true, all the initial samples satisfy the constraints.
scale_delta: flag to scale the exploration parameter $\delta$ during the iterations.
expected_max_evals: expected maximum number of queries (defaulted to maxevals when using solve_glis.m).
display: verbosity level: 1 (default).
PSOiters, PSOswarmsize, PSOminfunc: parameters used by the PSO solver from the PSwarm.m package used by GLIS.
sepvalue (GLISp only): amount of separation $\hat f(x_1) - \hat f(x_2)$ imposed when fitting the surrogate to the expressed preferences.
epsDeltaF (GLISp only): lower bound on the range of the surrogate function.
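For instance, a few of these options could be set as follows; the values are illustrative assumptions, not recommended defaults:

% Illustrative option settings (values are assumptions).
opts.svdtol = 1e-6;            % SVD tolerance used when fitting the RBF
opts.shrink_range = 1;         % shrink the bounds to the feasible bounding box
opts.feasible_sampling = true; % force initial samples to satisfy the constraints
opts.scale_delta = false;      % keep delta fixed during the iterations
opts.display = 1;              % default verbosity level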
This package was coded by Alberto Bemporad and Mengjia Zhu. Marco Forgione and Dario Piga also contributed to the development of the package.
This software is distributed without any warranty. Please cite the above papers if you use this software.
@article{Bem20,
author={A. Bemporad},
title={Global optimization via inverse distance weighting and radial basis functions},
journal={Computational Optimization and Applications},
volume=77,
pages={571--595},
year=2020
}
@article{BP21,
title={Active preference learning based on radial basis functions},
author={A. Bemporad and D. Piga},
journal={Machine Learning},
volume=110,
number=2,
pages={417--448},
year=2021
}
@article{ZPB22,
author={M. Zhu and D. Piga and A. Bemporad},
title={{C-GLISp}: Preference-Based Global Optimization under Unknown Constraints with Applications to Controller Calibration},
journal={IEEE Transactions on Control Systems Technology},
month=sep,
volume=30,
number=3,
pages={2176--2187},
year=2022
}
Apache 2.0
(C) 2019-2023 A. Bemporad, M. Zhu