SG++-Doxygen-Documentation
Classes
class AdaptiveGradientDescent
    Gradient descent with adaptive step size.
class AdaptiveNewton
    Newton method with adaptive step size.
class AugmentedLagrangian
    Augmented Lagrangian method for constrained optimization.
class BFGS
    BFGS method for unconstrained optimization.
class CMAES
    Gradient-free CMA-ES method.
class ConstrainedOptimizer
    Abstract class for solving constrained optimization problems.
class DifferentialEvolution
    Gradient-free Differential Evolution method.
class GradientDescent
    Gradient-based method of steepest descent.
class LeastSquaresOptimizer
    Abstract class for solving non-linear least squares problems.
class LevenbergMarquardt
    Levenberg-Marquardt algorithm for least squares optimization.
class LogBarrier
    Log Barrier method for constrained optimization.
class MultiStart
    Meta optimization algorithm calling a local algorithm multiple times.
class NelderMead
    Gradient-free Nelder-Mead method.
class Newton
    Gradient-based Newton method.
class NLCG
    Gradient-based nonlinear conjugate gradient method.
class Rprop
    Rprop method for unconstrained optimization.
class SquaredPenalty
    Squared Penalty method for constrained optimization.
class UnconstrainedOptimizer
    Abstract class for optimizing objective functions.
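The concrete optimizers above share the structure suggested by the abstract base classes in this list: the problem is wrapped in an objective function object, and each concrete method provides an optimize() routine (as referenced further below for GradientDescent, NLCG, and Newton). The following toy C++ sketch only illustrates that pattern; all names in it (ToyUnconstrainedOptimizer, ToyGradientDescent, stepSize, maxIt) are hypothetical and not part of the SG++ API.

// Illustrative sketch of the optimizer class pattern, not the SG++ API.
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

// Hypothetical stand-in for an abstract base class such as UnconstrainedOptimizer.
class ToyUnconstrainedOptimizer {
 public:
  explicit ToyUnconstrainedOptimizer(std::function<double(const std::vector<double>&)> f)
      : f_(std::move(f)) {}
  virtual ~ToyUnconstrainedOptimizer() = default;
  // Concrete optimizers (gradient descent, Nelder-Mead, ...) override this.
  virtual std::vector<double> optimize(std::vector<double> x0) = 0;

 protected:
  std::function<double(const std::vector<double>&)> f_;
};

// Hypothetical stand-in for a concrete gradient-based method.
class ToyGradientDescent : public ToyUnconstrainedOptimizer {
 public:
  ToyGradientDescent(std::function<double(const std::vector<double>&)> f,
                     std::function<std::vector<double>(const std::vector<double>&)> grad,
                     double stepSize, std::size_t maxIt)
      : ToyUnconstrainedOptimizer(std::move(f)),
        grad_(std::move(grad)), stepSize_(stepSize), maxIt_(maxIt) {}

  std::vector<double> optimize(std::vector<double> x) override {
    for (std::size_t it = 0; it < maxIt_; ++it) {
      const std::vector<double> g = grad_(x);
      // Fixed-step steepest descent update.
      for (std::size_t i = 0; i < x.size(); ++i) x[i] -= stepSize_ * g[i];
    }
    return x;
  }

 private:
  std::function<std::vector<double>(const std::vector<double>&)> grad_;
  double stepSize_;
  std::size_t maxIt_;
};

int main() {
  // Minimize f(x) = (x0 - 1)^2 + (x1 + 2)^2; the minimum is at (1, -2).
  auto f = [](const std::vector<double>& x) {
    return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
  };
  auto grad = [](const std::vector<double>& x) {
    return std::vector<double>{2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)};
  };
  ToyGradientDescent opt(f, grad, 0.1, 100);
  const std::vector<double> xOpt = opt.optimize({0.0, 0.0});
  std::cout << "approximate minimum: (" << xOpt[0] << ", " << xOpt[1] << ")\n";
}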
Functions
bool lineSearchArmijo (ScalarFunction &f, double beta, double gamma, double tol, double eps, const base::DataVector &x, double fx, base::DataVector &gradFx, const base::DataVector &s, base::DataVector &y, size_t &evalCounter)
    Line search (1D optimization on a line) with Armijo's rule used in gradient-based optimization.
lineSearchArmijo() [inline]
Line search (1D optimization on a line) with Armijo's rule used in gradient-based optimization.
Armijo's rule tries \(\sigma = \beta^k\) for \(k = 0, 1, \dotsc\) with a fixed \(\beta \in (0, 1)\), and for each candidate checks whether \(\vec{y} = \vec{x} + \sigma\vec{s}\) lies in \([0, 1]^d\) and whether the objective function decrease satisfies \(f(\vec{x}) - f(\vec{y}) \ge \gamma\sigma (-\nabla f(\vec{x}) \cdot \vec{s})\) for a fixed \(\gamma \in (0, 1)\).
The return value states whether the relative improvement (depending on the two tolerances tol and eps) is large enough to continue the optimization algorithm. A minimal standalone sketch of the rule follows the parameter and reference lists below.
Parameters
    f             objective function
    beta          \(\beta \in (0, 1)\)
    gamma         \(\gamma \in (0, 1)\)
    tol           tolerance 1 (positive)
    eps           tolerance 2 (positive)
    x             point to start the line search in
    fx            objective function value in x
    gradFx        objective function gradient in x
    s             search direction (should be normalized)
    [out] y       new point, must have the same size as x before calling this function
    [in,out] evalCounter   this variable will be increased by the number of evaluations of f while searching
References sgpp::base::DataVector::dotProduct(), sgpp::optimization::ScalarFunction::eval(), and sgpp::base::DataVector::getSize().
Referenced by sgpp::optimization::optimizer::GradientDescent::optimize(), sgpp::optimization::optimizer::NLCG::optimize(), and sgpp::optimization::optimizer::Newton::optimize().
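To make the rule above concrete, here is a minimal standalone sketch in plain C++ using std::vector and std::function instead of sgpp::base::DataVector and ScalarFunction. It is not the SG++ implementation: the tolerance tol and the evalCounter bookkeeping are omitted, and terminating once \(\sigma\) falls below eps is an assumption made for brevity.

// Standalone sketch of Armijo's rule as described above (not the SG++ code).
#include <cstddef>
#include <functional>
#include <vector>

bool armijoLineSearch(const std::function<double(const std::vector<double>&)>& f,
                      double beta, double gamma, double eps,
                      const std::vector<double>& x, double fx,
                      const std::vector<double>& gradFx,
                      const std::vector<double>& s,
                      std::vector<double>& y) {
  const std::size_t d = x.size();

  // Directional term -grad f(x) . s appearing in the Armijo condition.
  double descent = 0.0;
  for (std::size_t i = 0; i < d; ++i) descent -= gradFx[i] * s[i];

  // Try sigma = beta^k for k = 0, 1, 2, ... (assumed stopping rule: sigma < eps).
  for (double sigma = 1.0; sigma >= eps; sigma *= beta) {
    bool inUnitCube = true;
    y.resize(d);
    for (std::size_t i = 0; i < d; ++i) {
      y[i] = x[i] + sigma * s[i];
      if (y[i] < 0.0 || y[i] > 1.0) { inUnitCube = false; break; }
    }
    if (!inUnitCube) continue;  // candidate leaves [0, 1]^d, shrink sigma

    // Armijo condition: sufficient decrease relative to the directional term.
    const double fy = f(y);
    if (fx - fy >= gamma * sigma * descent) return true;
  }
  return false;  // no acceptable step found before sigma dropped below eps
}

A caller would pass the current iterate x, its objective value fx, the gradient gradFx, and a normalized descent direction s, then continue the outer optimization from y when the function returns true, mirroring how the documented parameters are used by the gradient-based optimizers listed above.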