Nonlinear programming


In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is the calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables, conditional on the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.

Applicability

A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size the cost functions may have discontinuities in addition to smooth changes.
In experimental science, some simple data analysis can be done with linear methods, but in general these problems are also nonlinear. Typically, one has a theoretical model of the system under study with variable parameters in it and a model of the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.
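A minimal sketch of such a fit, assuming SciPy is available (the exponential model, synthetic data, and parameter names below are illustrative choices, not from the article); the diagonal of the returned covariance matrix provides the precision estimate mentioned above.

```python
# A sketch of nonlinear least-squares fitting with SciPy.
# The exponential model and synthetic data are illustrative choices.
import numpy as np
from scipy.optimize import curve_fit

def model(t, amplitude, rate):
    # Theoretical model of the system, with variable parameters.
    return amplitude * np.exp(-rate * t)

t = np.linspace(0.0, 4.0, 50)
rng = np.random.default_rng(0)
data = model(t, 2.5, 1.3) + 0.05 * rng.standard_normal(t.size)  # noisy "experiment"

# popt holds the best-fit parameters; the diagonal of pcov estimates
# the variance (precision) of each fitted parameter.
popt, pcov = curve_fit(model, t, data, p0=[1.0, 1.0])
print(popt, np.sqrt(np.diag(pcov)))
```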

Definition

Let n, m, and p be positive integers. Let X be a subset of R^n, let f, g_i, and h_j be real-valued functions on X for each i in {1, …, m} and each j in {1, …, p}, with at least one of f, g_i, and h_j being nonlinear.
A nonlinear minimization problem is an optimization problem of the form

\[
\begin{aligned}
\text{minimize } \quad & f(x) \\
\text{subject to } \quad & g_i(x) \le 0 \ \text{ for each } i \in \{1, \dots, m\}, \\
& h_j(x) = 0 \ \text{ for each } j \in \{1, \dots, p\}, \\
& x \in X.
\end{aligned}
\]
A nonlinear maximization problem is defined in a similar way.
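As a minimal sketch of this general form in code, assuming SciPy's scipy.optimize is available (the particular functions f, g, and h below are made-up illustrations, not part of the definition):

```python
# A sketch of the general form above, using SciPy's general-purpose solver.
# The functions f, g, and h are made-up nonlinear examples.
import numpy as np
from scipy.optimize import minimize

def f(x):  # objective to minimize
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):  # inequality constraint, required to satisfy g(x) <= 0
    return x[0] ** 2 + x[1] ** 2 - 4.0

def h(x):  # equality constraint, required to satisfy h(x) = 0
    return x[0] + x[1] - 1.0

# SLSQP expects inequalities in the form fun(x) >= 0,
# so g(x) <= 0 is passed as -g(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: -g(x)},
               {"type": "eq", "fun": h}]
res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)
```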

Possible types of constraint set

There are several possibilities for the nature of the constraint set, also known as the feasible set or feasible region.
An infeasible problem is one for which no set of values for the choice variables satisfies all the constraints. That is, the constraints are mutually contradictory, and no solution exists; the feasible set is the empty set.
A feasible problem is one for which there exists at least one set of values for the choice variables satisfying all the constraints.
An unbounded problem is a feasible problem for which the objective function can be made to be better than any given finite value. Thus there is no optimal solution, because there is always a feasible solution that gives a better objective function value than does any given proposed solution.
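For instance, the one-variable problem

\[
\text{maximize } f(x) = x \quad \text{subject to } x \ge 0
\]

is feasible but unbounded: any candidate solution x is improved upon by x + 1.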

Methods for solving the problem

If the objective function is concave (maximization problem), or convex (minimization problem), and the constraint set is convex, then the program is called convex and general methods from convex optimization can be used in most cases.
If the objective function is quadratic and the constraints are linear, quadratic programming techniques are used.
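For a small convex instance, here is a sketch using SciPy's trust-constr method (an illustrative solver choice; the data Q, c, and the constraints are made up for the example):

```python
# A sketch of a small convex quadratic program with SciPy's trust-constr
# method (an illustrative solver choice; dedicated QP solvers also exist).
# The data Q, c, and the constraints are made up for the example.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # positive definite, so the QP is convex
c = np.array([-1.0, -1.0])

def objective(x):
    # Quadratic objective: (1/2) x^T Q x + c^T x
    return 0.5 * x @ Q @ x + c @ x

lin_con = LinearConstraint([[1.0, 1.0]], -np.inf, 1.0)   # x1 + x2 <= 1
bounds = Bounds([0.0, 0.0], [np.inf, np.inf])            # x >= 0
res = minimize(objective, x0=np.zeros(2), method="trust-constr",
               constraints=[lin_con], bounds=bounds)
print(res.x, res.fun)
```

Dedicated quadratic-programming solvers exploit the problem structure more directly; the general-purpose call above is only for illustration.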
If the objective function is a ratio of a concave and a convex function (in the maximization case) and the constraints are convex, then the problem can be transformed to a convex optimization problem using fractional programming techniques.
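In the linear-fractional special case, for instance, the Charnes–Cooper transformation y = t x, t = 1/(dᵀx + β) converts

\[
\text{maximize } \frac{c^{\mathsf T} x + \alpha}{d^{\mathsf T} x + \beta} \quad \text{subject to } A x \le b,
\]

assuming dᵀx + β > 0 on the feasible set, into the equivalent linear program

\[
\text{maximize } c^{\mathsf T} y + \alpha t \quad \text{subject to } A y \le b t, \quad d^{\mathsf T} y + \beta t = 1, \quad t \ge 0.
\]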
Several methods are available for solving nonconvex problems. One approach is to use special formulations of linear programming problems. Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance from the best point found; such points are called ε-optimal. Terminating to ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimation.
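A simplified one-dimensional sketch of such a branch-and-bound loop follows (the objective f, the interval, and the Lipschitz constant L are made-up choices; real implementations use the convex or linear relaxations described above rather than a Lipschitz bound):

```python
# A simplified one-dimensional branch-and-bound sketch. Each subinterval
# gets a Lipschitz-based lower bound; subdivisions that cannot improve on
# the incumbent by more than eps are pruned, and the result is eps-optimal.
# The objective f and Lipschitz constant L are made-up choices.
import heapq
import math

def f(x):
    return math.sin(3.0 * x) + 0.5 * x     # nonconvex objective

L = 3.5  # valid Lipschitz constant on [0, 4], since |f'(x)| <= 3 + 0.5

def lower_bound(a, b):
    # For x in [a, b]: f(x) >= f(m) - L * (b - a) / 2, m the midpoint.
    m = 0.5 * (a + b)
    return f(m) - L * (b - a) / 2.0

def branch_and_bound(a, b, eps=1e-4):
    best_x, best_val = a, f(a)             # incumbent solution
    heap = [(lower_bound(a, b), a, b)]     # subintervals keyed by lower bound
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_val - eps:           # cannot beat incumbent by > eps
            continue                       # prune this subdivision
        m = 0.5 * (a + b)
        if f(m) < best_val:
            best_x, best_val = m, f(m)     # better feasible point found
        for lo, hi in ((a, m), (m, b)):    # branch: split the interval
            sub_lb = lower_bound(lo, hi)
            if sub_lb < best_val - eps:
                heapq.heappush(heap, (sub_lb, lo, hi))
    return best_x, best_val                # an eps-optimal point

print(branch_and_bound(0.0, 4.0))
```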
Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. Under convexity, these conditions are also sufficient. If some of the functions are non-differentiable, subdifferential versions of the Karush–Kuhn–Tucker conditions are available.
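As a small worked illustration (added here for concreteness, not part of the original article), consider minimizing f(x) = x² subject to x ≥ 1, i.e. g(x) = 1 − x ≤ 0. The KKT conditions read

\[
2x - \mu = 0, \qquad \mu (1 - x) = 0, \qquad \mu \ge 0, \qquad 1 - x \le 0.
\]

Taking μ = 0 forces x = 0, which is infeasible, so μ > 0; complementary slackness then gives x = 1 and stationarity gives μ = 2. Since the problem is convex, x* = 1 is optimal.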

Examples

2-dimensional example

A simple problem can be defined by the constraints

\[
x_1 \ge 0, \qquad x_2 \ge 0, \qquad x_1^2 + x_2^2 \ge 1, \qquad x_1^2 + x_2^2 \le 2,
\]

with an objective function to be maximized

\[
f(x) = x_1 + x_2,
\]

where x = (x_1, x_2).
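A short analytic check of this example: the maximum must lie on the outer circle, and by the Cauchy–Schwarz inequality

\[
x_1 + x_2 \le \sqrt{2}\,\sqrt{x_1^2 + x_2^2} \le \sqrt{2} \cdot \sqrt{2} = 2,
\]

with equality at x = (1, 1), which satisfies all four constraints; the maximum value is therefore 2.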

3-dimensional example

Another simple problem can be defined by the constraints

\[
x_1^2 - x_2^2 + x_3^2 \le 2, \qquad x_1^2 + x_2^2 + x_3^2 \le 10,
\]

with an objective function to be maximized

\[
f(x) = x_1 x_2 + x_2 x_3,
\]

where x = (x_1, x_2, x_3).
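A numerical check of this example, assuming SciPy is available (the solver choice and starting point are illustrative):

```python
# A numerical check of the 3-dimensional example with SciPy's SLSQP solver
# (an illustrative choice; any NLP solver would do).
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    # Maximize f(x) = x1*x2 + x2*x3 by minimizing its negative.
    return -(x[0] * x[1] + x[1] * x[2])

# SLSQP expects inequalities as fun(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x: 2.0 - (x[0]**2 - x[1]**2 + x[2]**2)},
    {"type": "ineq", "fun": lambda x: 10.0 - (x[0]**2 + x[1]**2 + x[2]**2)},
]
res = minimize(neg_f, x0=np.array([1.0, 1.0, 1.0]),
               method="SLSQP", constraints=constraints)
print(res.x, -res.fun)   # maximizer and maximum value
```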