Optimality Condition for Smooth Constrained Optimization Problems
Date
2020-08-30
Publisher
Addis Ababa University
Abstract
Mathematical optimization is the process of maximizing or minimizing an objective
function by finding the best available values from a set of inputs within a restricted
domain. This project focuses on optimality conditions for optimization problems
with differentiable functions. For unconstrained optimization problems with a
differentiable objective function, checking the positive definiteness of the Hessian
matrix at stationary points allows one to conclude whether those stationary points
are optimum points or not.
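As a concrete illustration of this second-order test (our own sketch, not taken from the project), the following Python snippet checks the definiteness of the Hessian at a stationary point of an assumed example function:

```python
import numpy as np

# Hypothetical example: f(x, y) = x^2 + 3y^2 - 2x.
# Its gradient (2x - 2, 6y) vanishes at the stationary point (1, 0).
def hessian_f(point):
    # The Hessian of this quadratic f is constant: [[2, 0], [0, 6]].
    return np.array([[2.0, 0.0], [0.0, 6.0]])

stationary_point = np.array([1.0, 0.0])
H = hessian_f(stationary_point)

# A symmetric matrix is positive definite iff all its eigenvalues are > 0;
# at a stationary point this certifies a strict local minimum.
eigenvalues = np.linalg.eigvalsh(H)
if np.all(eigenvalues > 0):
    print("Positive definite Hessian: strict local minimum at", stationary_point)
elif np.all(eigenvalues < 0):
    print("Negative definite Hessian: strict local maximum at", stationary_point)
else:
    print("Indefinite or semidefinite Hessian: second-order test is inconclusive")
```

Here the eigenvalues 2 and 6 are both positive, so (1, 0) is a strict local minimizer of the example function.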
For constrained optimization problems in which the objective function and the
constraint functions are differentiable, the well-known optimality conditions called
the Karush-Kuhn-Tucker (KKT) conditions lead to the optimum point(s) of the
given optimization problem. However, the conventional Lagrangian approach to
solving constrained optimization problems yields optimality conditions that are
either necessary or sufficient, but not both, unless the underlying objective function
and constraint functions are also convex.
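For reference, the standard textbook form of the KKT conditions for a problem min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0 (a generic statement, not quoted from the project) reads:

```latex
\begin{aligned}
&\nabla f(x^{\ast}) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^{\ast})
  + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^{\ast}) = 0
  && \text{(stationarity)}\\
&g_i(x^{\ast}) \le 0, \qquad h_j(x^{\ast}) = 0
  && \text{(primal feasibility)}\\
&\mu_i \ge 0
  && \text{(dual feasibility)}\\
&\mu_i\, g_i(x^{\ast}) = 0
  && \text{(complementary slackness)}
\end{aligned}
```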
The Tchebyshev norm, in contrast, leads to an optimality condition that is both
necessary and sufficient without any convexity assumption. This optimality
condition can be used to devise a conceptually simple method for solving non-convex
inequality-constrained optimization problems.
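The abstract does not reproduce the condition itself; one common max-function formulation of the Chebyshev-norm idea, stated here as an illustrative sketch (the symbols Φ, x̄, and g_i are our notation, not necessarily the project's), is the following. For min f(x) subject to g_i(x) ≤ 0, i = 1, ..., m, and a feasible candidate x̄, define

```latex
% Max-function (Chebyshev-norm type) merit function for a feasible point \bar{x}:
\Phi_{\bar{x}}(x) \;=\; \max\bigl\{\, f(x) - f(\bar{x}),\; g_1(x),\; \dots,\; g_m(x) \,\bigr\}
```

Then x̄ is a global solution of the constrained problem if and only if Φ_x̄(x) ≥ 0 for all x, i.e. if and only if x̄ globally minimizes Φ_x̄ (note Φ_x̄(x̄) = 0 by feasibility): feasible points cannot improve on f(x̄), and infeasible points make some g_i positive. No convexity of f or the g_i is needed for this equivalence.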
Keywords
Convex Set, Convex Function, Constrained Optimization, Inequality Constraints, Equality Constraints, Smooth Optimization, Positive Definiteness, Hessian Matrix, Non-Convex Optimization, Equivalent Optimality Conditions, Unconstrained Optimization