qLogEHVI (#2036)
Summary:

This commit adds `qLogEHVI`, a member of the LogEI family of acquisition functions, for multi-objective optimization problems.
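For context, a minimal usage sketch (not part of the diff): the class name `qLogExpectedHypervolumeImprovement`, its module path, and the toy data below are assumptions patterned on the existing qEHVI API, and may differ from the actual implementation.

# Hedged sketch: class name and module path assumed to mirror qEHVI.
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.multi_objective.logei import (
    qLogExpectedHypervolumeImprovement,  # assumed name of qLogEHVI
)
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    FastNondominatedPartitioning,
)

train_X = torch.rand(10, 3, dtype=torch.double)  # 10 points, 3 design dims
train_Y = torch.rand(10, 2, dtype=torch.double)  # 2 objectives
model = SingleTaskGP(train_X, train_Y)
ref_point = torch.zeros(2, dtype=torch.double)  # hypervolume reference point
partitioning = FastNondominatedPartitioning(ref_point=ref_point, Y=train_Y)
acqf = qLogExpectedHypervolumeImprovement(
    model=model, ref_point=ref_point, partitioning=partitioning
)
log_value = acqf(torch.rand(1, 4, 3, dtype=torch.double))  # q=4 candidate batch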

Reviewed By: Balandat

Differential Revision: D49967862
SebastianAment authored and facebook-github-bot committed Oct 16, 2023
1 parent 8413920 commit 5c8620f
Showing 11 changed files with 1,179 additions and 522 deletions.
15 changes: 15 additions & 0 deletions botorch/acquisition/analytic.py
@@ -7,6 +7,13 @@
r"""
Analytic Acquisition Functions that evaluate the posterior without performing
Monte-Carlo sampling.
References
.. [Ament2023logei]
S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy.
Unexpected Improvements to Expected Improvement for Bayesian Optimization. Advances
in Neural Information Processing Systems 36, 2023.
"""

from __future__ import annotations
@@ -362,6 +369,8 @@ class LogExpectedImprovement(AnalyticAcquisitionFunction):
to avoid numerical issues in the computation of the acquisition value and its
gradient in regions where improvement is predicted to be virtually impossible.
See [Ament2023logei]_ for details. Formally,
`LogEI(x) = log(E(max(f(x) - best_f, 0))),`
where the expectation is taken over the value of stochastic function `f` at `x`.
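A brief, hedged sketch of evaluating `LogExpectedImprovement`; the model and data below are illustrative, not from the diff.

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.analytic import LogExpectedImprovement

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)  # toy single-outcome objective
model = SingleTaskGP(train_X, train_Y)
log_ei = LogExpectedImprovement(model=model, best_f=train_Y.max())
value = log_ei(torch.rand(1, 1, 2, dtype=torch.double))  # analytic acqfs use q=1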
@@ -423,7 +432,10 @@ class LogConstrainedExpectedImprovement(AnalyticAcquisitionFunction):
multi-outcome, with the index of the objective and constraints passed to
the constructor.
See [Ament2023logei]_ for details. Formally,
`LogConstrainedEI(x) = log(EI(x)) + Sum_i log(P(y_i \in [lower_i, upper_i]))`,
where `y_i ~ constraint_i(x)` and `lower_i`, `upper_i` are the lower and
upper bounds for the i-th constraint, respectively.
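A hedged sketch of constructing this acquisition function for a two-output model, where output 0 is the objective and output 1 must lie in [0, 1]; the data is illustrative.

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.analytic import LogConstrainedExpectedImprovement

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.rand(8, 2, dtype=torch.double)  # outcome 0: objective, 1: constraint
model = SingleTaskGP(train_X, train_Y)
log_cei = LogConstrainedExpectedImprovement(
    model=model,
    best_f=train_Y[:, 0].max(),
    objective_index=0,
    constraints={1: [0.0, 1.0]},  # require outcome 1 in [0, 1]
)
value = log_cei(torch.rand(1, 1, 2, dtype=torch.double))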
@@ -569,7 +581,10 @@ class LogNoisyExpectedImprovement(AnalyticAcquisitionFunction):
`q=1`. Assumes that the posterior distribution of the model is Gaussian.
The model must be single-outcome.
See [Ament2023logei]_ for details. Formally,
`LogNEI(x) = log(E(max(y - max Y_base, 0)))`, `(y, Y_base) ~ f((x, X_base))`,
where `X_base` are previously observed points.
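A hedged usage sketch with the FixedNoiseGP that the Note below refers to; the data and noise level are illustrative.

import torch
from botorch.models import FixedNoiseGP
from botorch.acquisition.analytic import LogNoisyExpectedImprovement

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
train_Yvar = torch.full_like(train_Y, 1e-4)  # known observation noise
model = FixedNoiseGP(train_X, train_Y, train_Yvar)
log_nei = LogNoisyExpectedImprovement(model=model, X_observed=train_X)
value = log_nei(torch.rand(1, 1, 2, dtype=torch.double))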
Note: This acquisition function currently relies on using a FixedNoiseGP (required
23 changes: 18 additions & 5 deletions botorch/acquisition/logei.py
@@ -4,7 +4,15 @@
# LICENSE file in the root directory of this source tree.

r"""
Monte-Carlo variants of the LogEI family of improvement-based acquisition functions;
see [Ament2023logei]_ for details.
References
.. [Ament2023logei]
S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy.
Unexpected Improvements to Expected Improvement for Bayesian Optimization. Advances
in Neural Information Processing Systems 36, 2023.
"""

from __future__ import annotations
@@ -138,9 +146,11 @@ class qLogExpectedImprovement(LogImprovementMCAcquisitionFunction):
(3) smoothly maximizing over q, and
(4) averaging over the samples in log space.
See [Ament2023logei]_ for details. Formally,

`qLogEI(X) ~ log(qEI(X)) = log(E(max(max Y - best_f, 0)))`,

where `Y ~ f(X)` and `X = (x_1,...,x_q)`.
Example:
>>> model = SingleTaskGP(train_X, train_Y)
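The docstring example above is truncated by the diff view; a self-contained, hedged sketch on toy data (not the author's example):

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.logei import qLogExpectedImprovement

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
qlog_ei = qLogExpectedImprovement(model=model, best_f=train_Y.max())
value = qlog_ei(torch.rand(1, 4, 2, dtype=torch.double))  # q = 4 joint candidates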
@@ -237,8 +247,11 @@ class qLogNoisyExpectedImprovement(
to the canonical improvement over previously observed points is computed
for each sample and the logarithm of the average is returned.
See [Ament2023logei]_ for details. Formally,

`qLogNEI(X) ~ log(qNEI(X)) = log(E(max(max Y - max Y_baseline, 0)))`,

where `(Y, Y_baseline) ~ f((X, X_baseline))` and `X = (x_1,...,x_q)`.
Example:
>>> model = SingleTaskGP(train_X, train_Y)
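This docstring example is likewise truncated; a hedged, self-contained sketch with illustrative data:

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.logei import qLogNoisyExpectedImprovement

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
qlog_nei = qLogNoisyExpectedImprovement(model=model, X_baseline=train_X)
value = qlog_nei(torch.rand(1, 4, 2, dtype=torch.double))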
2 changes: 2 additions & 0 deletions botorch/acquisition/monte_carlo.py
@@ -9,6 +9,8 @@
with (quasi) Monte-Carlo sampling. See [Rezende2014reparam]_, [Wilson2017reparam]_ and
[Balandat2020botorch]_.
References
.. [Rezende2014reparam]
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and
approximate inference in deep generative models. ICML 2014.
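As a pointer to how this module is used, a hedged sketch of the (quasi) Monte-Carlo plus reparameterization pattern it implements; the toy data and sample count are illustrative.

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.sampling.normal import SobolQMCNormalSampler

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
# Quasi-MC base samples; the reparameterization trick keeps gradients flowing.
sampler = SobolQMCNormalSampler(sample_shape=torch.Size([128]))
qei = qExpectedImprovement(model=model, best_f=train_Y.max(), sampler=sampler)
value = qei(torch.rand(1, 4, 2, dtype=torch.double))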