Broyden–Fletcher–Goldfarb–Shanno algorithm


In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.
The BFGS method belongs to quasi-Newton methods, a class of hill-climbing optimization techniques that seek a stationary point of a function. For such problems, a necessary condition for optimality is that the gradient be zero. Newton's method and the BFGS method are not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.
In quasi-Newton methods, the Hessian matrix of second derivatives is not computed. Instead, the Hessian matrix is approximated using updates specified by gradient evaluations. Quasi-Newton methods are generalizations of the secant method to find the root of the first derivative for multidimensional problems. In multi-dimensional problems, the secant equation does not specify a unique solution, and quasi-Newton methods differ in how they constrain the solution. The BFGS method is one of the most popular members of this class. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables. The L-BFGS-B variant handles simple box constraints.
The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.
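As a concrete illustration of how the method is typically invoked in practice, the following is a minimal sketch using SciPy's optimizer interface; the Rosenbrock test function, its gradient, and the starting point are assumptions chosen only for demonstration.

    import numpy as np
    from scipy.optimize import minimize

    # Rosenbrock test function and its gradient (chosen only for illustration).
    def f(x):
        return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    def grad_f(x):
        return np.array([
            -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
            200.0 * (x[1] - x[0]**2),
        ])

    x0 = np.array([-1.2, 1.0])
    result = minimize(f, x0, jac=grad_f, method="BFGS")
    print(result.x)         # approximate minimizer, close to (1, 1)
    print(result.hess_inv)  # final approximation of the inverse Hessian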

Rationale

The optimization problem is to minimize $f(\mathbf{x})$, where $\mathbf{x}$ is a vector in $\mathbb{R}^n$, and $f$ is a differentiable scalar function. There are no constraints on the values that $\mathbf{x}$ can take.
The algorithm begins at an initial estimate for the optimal value and proceeds iteratively to get a better estimate at each stage.
The search direction $\mathbf{p}_k$ at stage $k$ is given by the solution of the analogue of the Newton equation:
$$B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),$$
where $B_k$ is an approximation to the Hessian matrix, which is updated iteratively at each stage, and $\nabla f(\mathbf{x}_k)$ is the gradient of the function evaluated at $\mathbf{x}_k$. A line search in the direction $\mathbf{p}_k$ is then used to find the next point $\mathbf{x}_{k+1}$ by minimizing $f(\mathbf{x}_k + \gamma \mathbf{p}_k)$ over the scalar $\gamma > 0$.
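As a sketch of these two steps in code (assuming NumPy and a user-supplied objective and gradient; the Armijo backtracking parameters are illustrative, and practical implementations usually enforce the Wolfe conditions instead):

    import numpy as np

    def search_direction(B_k, grad_k):
        # Solve the quasi-Newton analogue of the Newton equation: B_k p_k = -grad f(x_k).
        return np.linalg.solve(B_k, -grad_k)

    def backtracking_line_search(f, x_k, p_k, grad_k, alpha=1.0, rho=0.5, c=1e-4):
        # Shrink the trial step until the Armijo sufficient-decrease condition holds.
        while f(x_k + alpha * p_k) > f(x_k) + c * alpha * (grad_k @ p_k):
            alpha *= rho
        return alpha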
The quasi-Newton condition imposed on the update of $B_{k+1}$ is
$$B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).$$
Let $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$ and $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$; then $B_{k+1}$ satisfies
$$B_{k+1} \mathbf{s}_k = \mathbf{y}_k,$$
which is the secant equation. The curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ should be satisfied for $B_{k+1}$ to be positive definite, which can be verified by pre-multiplying the secant equation with $\mathbf{s}_k^\top$. If the function is not strongly convex, then the condition has to be enforced explicitly, for example by choosing the step with a line search that satisfies the Wolfe conditions.
Instead of requiring the full Hessian matrix at the point $\mathbf{x}_{k+1}$ to be computed as $B_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:
$$B_{k+1} = B_k + U_k + V_k.$$
Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessor by a rank-two matrix. Another, simpler rank-one method is the symmetric rank-one (SR1) method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of $B_{k+1}$, the update form can be chosen as
$$B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top.$$
Imposing the secant condition $B_{k+1} \mathbf{s}_k = \mathbf{y}_k$ and choosing $\mathbf{u} = \mathbf{y}_k$ and $\mathbf{v} = B_k \mathbf{s}_k$, we obtain
$$\alpha = \frac{1}{\mathbf{y}_k^\top \mathbf{s}_k}, \qquad \beta = -\frac{1}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$$
Finally, we substitute $\alpha$ and $\beta$ into $B_{k+1} = B_k + \alpha \mathbf{u} \mathbf{u}^\top + \beta \mathbf{v} \mathbf{v}^\top$ and get the update equation for $B_{k+1}$:
$$B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$$
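Written out as code, the rank-two update might look like the following minimal NumPy sketch, where s and y are one-dimensional arrays holding $\mathbf{s}_k$ and $\mathbf{y}_k$; the explicit check of the curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ is an assumption about how a practical implementation would guard positive definiteness.

    import numpy as np

    def bfgs_update(B, s, y):
        # Rank-two BFGS update: B + y y^T/(y^T s) - B s s^T B/(s^T B s), using symmetry of B.
        sy = y @ s
        if sy <= 0.0:
            # Curvature condition violated; skipping the update keeps B positive definite.
            return B
        Bs = B @ s
        return B + np.outer(y, y) / sy - np.outer(Bs, Bs) / (s @ Bs)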

Algorithm

From an initial guess $\mathbf{x}_0$ and an initial approximate Hessian matrix $B_0$, the following steps are repeated as $\mathbf{x}_k$ converges to the solution:
  1. Obtain a direction $\mathbf{p}_k$ by solving $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k)$.
  2. Perform a one-dimensional optimization (line search) to find an acceptable stepsize $\alpha_k$ in the direction found in the first step, so that $\alpha_k = \arg\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$.
  3. Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
  4. $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
  5. $B_{k+1} = B_k + \dfrac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \dfrac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}$.
$f(\mathbf{x})$ denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, $\|\nabla f(\mathbf{x}_k)\| \leq \epsilon$. If $B_0$ is initialized with $B_0 = I$, the first step will be equivalent to a gradient descent, but further steps are more and more refined by $B_k$, the approximation to the Hessian.
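Putting the steps together, a self-contained sketch of the whole iteration could look like the following; the tolerance, backtracking parameters, and quadratic test problem are assumptions chosen for illustration, not a robust implementation.

    import numpy as np

    def bfgs(f, grad_f, x0, tol=1e-6, max_iter=200):
        x = np.asarray(x0, dtype=float)
        B = np.eye(x.size)                        # B_0 = I: first step is a gradient step
        g = grad_f(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol:          # convergence test on the gradient norm
                break
            p = np.linalg.solve(B, -g)            # step 1: B_k p_k = -grad f(x_k)
            alpha = 1.0                           # step 2: Armijo backtracking line search
            while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
                alpha *= 0.5
            s = alpha * p                         # step 3: s_k = alpha_k p_k
            x_new = x + s                         #         x_{k+1} = x_k + s_k
            g_new = grad_f(x_new)
            y = g_new - g                         # step 4: y_k = grad f(x_{k+1}) - grad f(x_k)
            if y @ s > 1e-12:                     # step 5: rank-two update (skip if curvature fails)
                Bs = B @ s
                B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
            x, g = x_new, g_new
        return x

    # Illustrative use on a simple convex quadratic with Hessian diag(1, 10):
    H = np.diag([1.0, 10.0])
    print(bfgs(lambda x: 0.5 * x @ H @ x, lambda x: H @ x, np.array([3.0, -4.0])))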
The first step of the algorithm is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving
$$B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.$$
This can be computed efficiently without temporary matrices, recognizing that $B_k^{-1}$ is symmetric, and that $\mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k$ and $\mathbf{s}_k^\top \mathbf{y}_k$ are scalars, using an expansion such as
$$B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.$$
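A hedged NumPy sketch of this expanded inverse update (with H holding $B_k^{-1}$ as a symmetric array, and s, y as one-dimensional arrays) might read:

    import numpy as np

    def inverse_bfgs_update(H, s, y):
        # H_{k+1} = (I - s y^T/(y^T s)) H_k (I - y s^T/(y^T s)) + s s^T/(y^T s),
        # expanded so that only outer products and matrix-vector products appear.
        sy = s @ y
        Hy = H @ y
        return (H
                + ((sy + y @ Hy) / sy**2) * np.outer(s, s)
                - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)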
In statistical estimation problems, credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
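As a hedged illustration of this use (the model, data, and variable names below are hypothetical and chosen only for demonstration), the diagonal of the final inverse-Hessian approximation returned by a BFGS-based fit can be read off as approximate variances:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical example: negative log-likelihood for the mean of normal data
    # with known unit variance.
    data = np.array([1.1, 0.9, 1.3, 0.7, 1.0])
    neg_log_lik = lambda theta: 0.5 * np.sum((data - theta[0])**2)

    fit = minimize(neg_log_lik, x0=np.array([0.0]), method="BFGS")
    approx_se = np.sqrt(np.diag(fit.hess_inv))  # standard errors from the BFGS inverse Hessian;
    print(fit.x, approx_se)                     # may differ from those based on the true Hessian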

Notable implementations