Advisor: Guta, Berhanu (PhD)
Author: Tafa, Meta
Dates: 2021-03-25; 2023-11-04; 2020-08-30
URI: http://etd.aau.edu.et/handle/123456789/25666

Abstract: Mathematical optimization is the process of maximizing or minimizing an objective function by finding the best available values over a set of inputs within a restricted domain. This project focuses on finding optimality conditions for optimization problems with differentiable functions. For unconstrained optimization problems with a differentiable objective, checking the positive definiteness of the Hessian matrix at stationary points determines whether those stationary points are optimum points or not. For constrained optimization problems in which the objective function and the constraint functions are differentiable, the well-known optimality condition called the Karush-Kuhn-Tucker (KKT) condition leads to the optimum point(s) of the given problem; however, the conventional Lagrangian approach to solving constrained optimization problems yields optimality conditions that are either necessary or sufficient, but not both, unless the objective and constraint functions are also convex. The Tchebyshev norm leads to an optimality condition that is both necessary and sufficient without any convexity assumption. This optimality condition can be used to devise a conceptually simple method for solving non-convex inequality-constrained optimization problems.

Language: en
Subjects: Convex Set; Convex Function; Constrained Optimization; Inequality Constraints; Equality Constraints; Smooth Optimization; Positive Definiteness; Hessian Matrix; Non-Convex Optimization; Equivalent Optimality Conditions; Unconstrained Optimization
Title: Optimality Condition for Smooth Constrained Optimization Problems
Type: Thesis
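The second-order test mentioned in the abstract (positive definiteness of the Hessian at a stationary point implies a strict local minimum) can be sketched numerically. This is an illustrative example, not material from the thesis itself; the function f and the helper name are assumptions chosen for demonstration.

```python
import numpy as np

def is_positive_definite(H, tol=1e-10):
    """A symmetric matrix is positive definite iff all its eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(H) > tol))

# Hypothetical example: f(x, y) = x^2 + 2y^2 has a stationary point at the
# origin, where the gradient (2x, 4y) vanishes. Its (constant) Hessian is:
H = np.array([[2.0, 0.0],
              [0.0, 4.0]])

print(is_positive_definite(H))  # True -> the origin is a strict local minimum

# A saddle point fails the test: g(x, y) = x^2 - y^2 at the origin.
H_saddle = np.array([[2.0, 0.0],
                     [0.0, -2.0]])
print(is_positive_definite(H_saddle))  # False -> not a local minimum
```

Using `eigvalsh` (for symmetric matrices) rather than a generic eigenvalue routine keeps the check numerically stable and guarantees real eigenvalues.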