Addis Ababa University Libraries Electronic Thesis and Dissertations: AAU-ETD
Faculty of Business and Economics
Thesis - Business Administration
Title:      Penalty and Barrier Function Methods for Constrained Optimization Problems
Authors:    HAILAY, WELDEGIORGIS
Advisors:   Prof. Dr. rer. nat. habil. R. Deumlich
Copyright:  Jun-2010
Date Added: 11-May-2012
The purpose of the study was to explore how effectively the penalty and barrier function methods are able to solve constrained optimization problems.
The approach in these methods is to transform the constrained problem into an equivalent unconstrained problem, or into a problem with simple constraints, which is then solved using one of the algorithms (or some variant) for unconstrained optimization problems. Lemmas and theorems are proved which ensure convergence of the sequence of solutions of the unconstrained problems to the solution of the original constrained optimization problem as the penalty parameter goes to infinity in the case of penalty methods and as the barrier parameter approaches zero in barrier methods. Algorithms and MATLAB codes are developed, using Powell's method for the unconstrained subproblems, for both penalty and barrier function methods; problems that appear frequently in the optimization literature and have been solved using different techniques are then solved, and the results are compared amongst themselves and with other algorithms.
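The transformation described above can be sketched in a few lines. This is a minimal quadratic-penalty illustration on a hypothetical toy problem (min x1² + x2² subject to x1 + x2 = 1, whose solution is (0.5, 0.5)); it is not the thesis's MATLAB code, and a simple backtracking gradient descent stands in for Powell's method as the unconstrained solver.

```python
def num_grad(F, x, h=1e-6):
    """Central finite-difference gradient of F at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((F(xp) - F(xm)) / (2 * h))
    return g

def minimize_unconstrained(F, x0, tol=1e-8, max_iter=5000):
    """Gradient descent with Armijo backtracking (stand-in for Powell's method)."""
    x = list(x0)
    for _ in range(max_iter):
        g = num_grad(F, x)
        g2 = sum(gi * gi for gi in g)
        if g2 < tol:
            break
        t, fx = 1.0, F(x)
        while t > 1e-16:
            xn = [xi - t * gi for xi, gi in zip(x, g)]
            if F(xn) <= fx - 0.5 * t * g2:  # sufficient decrease
                break
            t *= 0.5
        x = xn
    return x

def penalty_method(f, eq_constraints, x0, mu=1.0, growth=10.0, outer=5):
    """Minimize f(x) + mu * sum c_i(x)^2 for an increasing sequence of
    penalty parameters mu, warm-starting each subproblem at the
    previous solution."""
    x = list(x0)
    for _ in range(outer):
        def F(z, mu=mu):
            return f(z) + mu * sum(c(z) ** 2 for c in eq_constraints)
        x = minimize_unconstrained(F, x)
        mu *= growth
    return x

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  (solution (0.5, 0.5))
x = penalty_method(lambda z: z[0] ** 2 + z[1] ** 2,
                   [lambda z: z[0] + z[1] - 1],
                   x0=[0.0, 0.0])
print(x)  # close to [0.5, 0.5]
```

As the text states, the iterates only approach the constrained solution as mu grows; each subproblem minimizer is slightly infeasible, here x1 = x2 = mu/(1 + 2·mu).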
The research found that the sequential transformation methods converge at least to a local minimum in most cases, without the need for convexity assumptions and with no requirement of differentiability of the objective and constraint functions. It also revealed that the Hessian matrix of the sequential transformation methods becomes ill-conditioned in the limit as the penalty parameter goes to infinity or the barrier parameter goes to zero. For problems with non-convex functions (with different local minimum points) it is recommended to solve the problem from different starting points, with different penalty/barrier parameters and penalty/barrier multipliers, and to take the best solution.
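The ill-conditioning claim can be made concrete on a small assumed example. For the quadratic-penalty function of min x1² + x2² subject to x1 + x2 = 1, the Hessian is available in closed form, and its condition number grows linearly in the penalty parameter:

```python
# Quadratic-penalty function for the toy problem (illustration only):
#   F(x) = x1^2 + x2^2 + mu * (x1 + x2 - 1)^2
# Its Hessian is constant: 2*I + 2*mu*[[1, 1], [1, 1]], with
# eigenvalues 2 (along (1, -1)) and 2 + 4*mu (along (1, 1)),
# so cond(H) = 1 + 2*mu, which blows up as mu -> infinity.

def penalty_hessian_cond(mu):
    lam_min, lam_max = 2.0, 2.0 + 4.0 * mu
    return lam_max / lam_min

for mu in (1.0, 1e2, 1e4, 1e6):
    print(f"mu = {mu:>9g}   cond(H) = {penalty_hessian_cond(mu):g}")
```

This is exactly why the unconstrained subproblems become progressively harder for a numerical minimizer as the penalty parameter increases, and why multi-start strategies with several parameter choices are a sensible safeguard.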
For the exact penalty methods, on the other hand, convexity assumptions and second-order sufficiency conditions for a local minimum are needed for the solution of the unconstrained optimization problem to converge to the solution of the original problem with a finite penalty parameter. In these methods a single application of an unconstrained minimization technique, as opposed to the sequential methods, is used to solve the constrained optimization problem. These methods alleviate the computational difficulties associated with having to take the penalty parameter to infinity.
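A brief sketch of the finite-parameter property, again on an assumed toy problem (min x1² + x2² subject to x1 + x2 = 1): the exact L1 penalty f(x) + ρ·|c(x)| recovers the constrained solution exactly for any finite ρ larger than the Lagrange multiplier (here λ* = 1). By symmetry the minimizer lies on the line x1 = x2 = t, so a one-dimensional scan is enough to illustrate it:

```python
# Exact (L1) penalty, restricted to the symmetric line x1 = x2 = t:
#   F(t) = 2*t^2 + rho * |2*t - 1|
# For any rho > |lambda*| = 1 the minimizer is t = 0.5 exactly,
# i.e. the constrained solution, with a *finite* penalty parameter.
# (The absolute value makes F nonsmooth at the solution, which is why
# exact penalty methods need care in the choice of minimizer.)

rho = 2.0  # any rho > 1 works
F = lambda t: 2 * t * t + rho * abs(2 * t - 1)

ts = [i / 10000 for i in range(10001)]  # grid scan on [0, 1]
t_best = min(ts, key=F)
print(t_best)  # 0.5: the grid minimizer sits exactly at the constrained solution
```

Contrast this with the quadratic penalty, whose minimizer t = mu/(1 + 2·mu) is feasible only in the limit mu → ∞.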
Appears in: Thesis - Business Administration
Items in the AAUL Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.