
Lagrange-Type Functions in Constrained Non-Convex Optimization 2003 Edition
Contributor(s): Rubinov, Alexander M. (Author), Yang, Xiao-Qi (Author)
ISBN: 1402076274     ISBN-13: 9781402076275
Publisher: Springer
OUR PRICE:   $104.49  
Product Type: Hardcover - Other Formats
Published: November 2003
Annotation: This volume provides a systematic examination of Lagrange-type functions and augmented Lagrangians. Weak duality, the zero duality gap property, and the existence of an exact penalty parameter are examined. Weak duality allows one to estimate the global optimal value from below. The zero duality gap property allows one to reduce the constrained optimization problem to a sequence of unconstrained problems, and the existence of an exact penalty parameter allows one to solve only one unconstrained problem.
By applying Lagrange-type functions, a zero duality gap property for nonconvex constrained optimization problems is established under a coercive condition. It is shown that the zero duality gap property is equivalent to the lower semi-continuity of a perturbation function.
In particular, for a class of k-th power penalty functions, this book obtains an analytic expression for the least exact penalty parameter and establishes that a fairly small exact penalty parameter can be achieved. As numerical experiments show, this property is important for some global methods of Lipschitz programming; otherwise, ill-conditioning may occur.
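
To make the role of the penalty parameter concrete, here is a minimal numerical sketch on a toy one-dimensional problem. It is not the book's k-th power construction: the penalized objective f(x) + d * max(0, g(x))**k, the grid search, and the function names are assumptions introduced purely for illustration. The sketch compares a quadratic penalty (k = 2), whose minimizers only approach the constrained solution as d grows, with a first-power penalty (k = 1), which is exact once d exceeds a finite threshold.

    import numpy as np

    # Toy problem: minimize f(x) = x subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
    # The constrained minimum is x* = 1 with f(x*) = 1.
    f = lambda x: x
    g = lambda x: 1.0 - x

    def penalty_minimizer(d, k, xs=np.linspace(-2.0, 3.0, 200001)):
        """Grid-minimize the illustrative penalized objective f(x) + d * max(0, g(x))**k."""
        vals = f(xs) + d * np.maximum(0.0, g(xs)) ** k
        return xs[np.argmin(vals)]

    for d in (2.0, 10.0, 100.0):
        x_quad = penalty_minimizer(d, k=2)  # quadratic penalty: minimizer 1 - 1/(2d), approaches x* only as d grows
        x_lin = penalty_minimizer(d, k=1)   # first-power penalty: exact at x* = 1 for every d > 1
        print(f"d = {d:6.1f}  quadratic -> x = {x_quad:.4f}   first-power -> x = {x_lin:.4f}")

For this toy problem the quadratic-penalty minimizer is 1 - 1/(2d), so a good approximation requires a large d, whereas the first-power penalty already recovers x = 1 for any d > 1. This is the kind of behaviour the annotation refers to when it stresses that a fairly small exact penalty parameter can be achieved.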
Audience: The book is suitable for researchers in mathematical programming and optimization and postgraduate students in applied mathematics.
Additional Information
BISAC Categories:
- Mathematics | Probability & Statistics - General
- Mathematics | Applied
- Business & Economics | Operations Research
Dewey: 519.6
LCCN: 2003061926
Series: Applied Optimization
Physical Information: 0.92" H x 6.48" W x 9.5" L (1.43 lbs); 286 pages
 
Descriptions, Reviews, Etc.
Publisher Description:
Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and as a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, since a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that the minima of the penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with excessively large parameters pose an obstacle to numerical implementation. Thus the question arises of how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that is suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints.
Some approaches to such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed whose objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to the classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
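
For orientation, the convolution idea can be written out for the problem of minimizing f(x) subject to g_i(x) <= 0, i = 1, ..., m. The display below is a hedged sketch rather than the book's exact definitions: the linear convolution is the classical Lagrange function, while the power-type combination is only one illustrative example of a nonlinear convolution (the exponent k, the multiplier d, and the [.]_+ notation are assumptions introduced here for illustration).

    % Classical Lagrange function: a linear convolution of f and the g_i
    L(x,\lambda) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x), \qquad \lambda_i \ge 0.

    % An illustrative nonlinear (power-type) convolution, one possible Lagrange-type function
    L_k(x,d) = f(x) + d \Bigl( \sum_{i=1}^{m} [g_i(x)]_+^{\,k} \Bigr)^{1/k},
    \qquad d > 0, \ k > 0, \quad [t]_+ := \max\{t, 0\}.

Choosing different convolutions of this kind is what produces the family of Lagrange-type functions and penalty-type schemes discussed above.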