'''Karush-Kuhn-Tucker Theorem'''

== Introduction ==

The Karush-Kuhn-Tucker (KKT) Theorem is a model of nonlinear programming (NLP). The model is based on Lagrangian optimization, but it additionally incorporates inequality constraints as part of the KKT conditions. The approach proves the optimality of a (given) point with respect to a nonlinear objective function. The satisfaction of the KKT conditions is a necessary condition for a solution to be optimal in NLP.

== KKT Conditions ==
 
The six KKT conditions are based on the Lagrangian function of an arbitrary maximization (or minimization) problem:

<math>L(x,\lambda) = f(x_1,\ldots,x_n) - \sum_{i=1}^{k} \lambda_i * g_i(x_1,\ldots,x_n)</math>, where <math>f(x_1,\ldots,x_n)</math> is the objective function and <math>g_i(x_1,\ldots,x_n) = 0</math> are the constraints.
 
The algebraic sign of <math>\sum_{i=1}^{k} \lambda_i * g_i(x_1,\ldots,x_n)</math> depends on the type of the problem: for a maximization problem it is "-", for a minimization problem it is "+".
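
As an illustration (a minimal sketch, not part of the original article), the Lagrangian of a small maximization problem can be built symbolically, e.g. with Python's sympy; the objective and constraint below are arbitrary assumptions:

<syntaxhighlight lang="python">
import sympy as sp

# Decision variables and one multiplier per constraint
x1, x2, lam1 = sp.symbols('x1 x2 lambda1')

# Illustrative data (assumptions, not from the article):
# maximize f(x1, x2) subject to g1(x1, x2) = 0
f  = 4*x1 - x1**2 + 2*x2 - x2**2
g1 = x1 + x2 - 2

# Maximization problem => "-" sign in front of the multiplier sum
L = f - lam1 * g1
print(L)  # the symbolic Lagrangian L(x, lambda)
</syntaxhighlight>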

The first KKT condition consists of the partial derivatives of <math>L(x,\lambda)</math> with respect to all <math>x_i</math>, which together form the gradient <math>\nabla L(x,\lambda)</math>, and it requires the inequality:

<math>1) \quad \frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i} - \sum_{j=1}^{k} \lambda_j * \frac{\partial g_j}{\partial x_i} \leq 0</math>
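
Continuing the sketch above, condition 1) can be checked symbolically by differentiating <math>L</math> with respect to each <math>x_i</math> (same illustrative data as before):

<syntaxhighlight lang="python">
import sympy as sp

x1, x2, lam1 = sp.symbols('x1 x2 lambda1')
f  = 4*x1 - x1**2 + 2*x2 - x2**2   # illustrative objective (assumption)
g1 = x1 + x2 - 2                   # illustrative constraint (assumption)
L  = f - lam1 * g1

# Condition 1): dL/dx_i = df/dx_i - lambda1 * dg1/dx_i <= 0
grad_L = [sp.diff(L, v) for v in (x1, x2)]
print(grad_L)  # partials: 4 - 2*x1 - lambda1 and 2 - 2*x2 - lambda1
</syntaxhighlight>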

If the objective function has to be minimized instead, only the sign of <math>\frac{\partial f}{\partial x_i}</math> changes, because maximizing <math>f(x)</math> is equivalent to minimizing <math>-f(x)</math>. The reason is easy to see when reflecting the objective function across the abscissa, which is clarified by the graphic below. The restrictions (in this case <math>x \le 1</math>) do not change.

[[Datei:Graph_constraint.jpg]]
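
The equivalence of maximizing <math>f</math> and minimizing <math>-f</math> can also be illustrated numerically; a small sketch using scipy (the concave <math>f</math> and the bound <math>x \le 1</math> are chosen arbitrarily):

<syntaxhighlight lang="python">
from scipy.optimize import minimize_scalar

def f(x):
    return -(x - 0.5)**2 + 1.0   # arbitrary concave objective

# Maximize f on 0 <= x <= 1 by minimizing -f on the same interval
res = minimize_scalar(lambda x: -f(x), bounds=(0.0, 1.0), method='bounded')
print(res.x, -res.fun)  # maximizer ~0.5, maximum value ~1.0
</syntaxhighlight>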

The second KKT condition is a complementary slackness condition on the decision variables: at a candidate point, either <math>x_i = 0</math> or the corresponding partial derivative of <math>L</math> is zero.

<math>2) \quad x_i * \frac{\partial L}{\partial x_i} = 0</math>
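
A one-variable sketch of condition 2) (problem data again invented): at a candidate point, each product <math>x_i * \frac{\partial L}{\partial x_i}</math> must vanish.

<syntaxhighlight lang="python">
import sympy as sp

x1, lam1 = sp.symbols('x1 lambda1')
L = (4*x1 - x1**2) - lam1 * (x1 - 2)   # illustrative Lagrangian (assumption)

dL_dx1 = sp.diff(L, x1)                # 4 - 2*x1 - lambda1
# Candidate point: x1 = 2 with lambda1 = 0 (constraint active, multiplier zero)
cand = {x1: 2, lam1: 0}
print((cand[x1] * dL_dx1.subs(cand)) == 0)  # True: condition 2) holds
</syntaxhighlight>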

The third and fourth conditions are analogous to the first and the second one, but this time for <math>\lambda</math>.

<math>3) \quad \frac{\partial L}{\partial \lambda_j} \leq 0</math>

Analogous to the previous changes, when one faces a minimization problem the sign of <math>g_i</math> changes.

<math>4) \quad \lambda_j * \frac{\partial L}{\partial \lambda_j} = 0</math>

And finally, the non-negativity constraints:

<math>5) \quad x_i \geq 0, \quad i = 1,\ldots,n</math>

<math>6) \quad \lambda_j \geq 0, \quad j = 1,\ldots,k</math>
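
Putting conditions 1) to 6) together, a candidate pair <math>(x, \lambda)</math> can be tested numerically. The following is a sketch under the maximization sign convention used above; the function name and interface are my own, not from the article:

<syntaxhighlight lang="python">
import numpy as np

def kkt_satisfied(grad_f, jac_g, g, x, lam, tol=1e-8):
    """Check the six KKT conditions (maximization form) at (x, lam).

    grad_f(x) -> gradient of f, shape (n,)
    jac_g(x)  -> Jacobian of the constraints, shape (k, n)
    g(x)      -> constraint values g_i(x), shape (k,)
    """
    x, lam = np.asarray(x, float), np.asarray(lam, float)
    dL_dx   = grad_f(x) - lam @ jac_g(x)   # partials of L w.r.t. x
    dL_dlam = -g(x)                        # partials of L w.r.t. lambda
    return (np.all(dL_dx <= tol)                        # 1)
            and np.allclose(x * dL_dx, 0, atol=tol)     # 2)
            and np.all(dL_dlam <= tol)                  # 3)
            and np.allclose(lam * dL_dlam, 0, atol=tol) # 4)
            and np.all(x >= -tol)                       # 5)
            and np.all(lam >= -tol))                    # 6)
</syntaxhighlight>
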
== Examples ==
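
The original article leaves this section empty. As a hedged, self-contained illustration (all problem data invented for this sketch): maximize <math>f(x_1,x_2) = 4x_1 - x_1^2 + 2x_2 - x_2^2</math> subject to <math>x_1 + x_2 \le 2</math> and <math>x_1, x_2 \ge 0</math>. Solving conditions 1) to 4) with equality (both <math>x_i > 0</math>, constraint active) gives <math>x = (1.5,\, 0.5)</math> and <math>\lambda_1 = 1</math>, which can be verified numerically:

<syntaxhighlight lang="python">
import numpy as np

x    = np.array([1.5, 0.5])  # candidate maximizer
lam1 = 1.0                   # candidate multiplier

# Partials of L = f - lam1*g1 with f = 4*x1 - x1**2 + 2*x2 - x2**2 and
# g1 = x1 + x2 - 2 (the inequality rewritten with g1, active at x)
dL_dx = np.array([4 - 2*x[0] - lam1, 2 - 2*x[1] - lam1])
g1    = x[0] + x[1] - 2

print(np.all(dL_dx <= 0),            # 1) True
      np.allclose(x * dL_dx, 0),     # 2) True
      -g1 <= 0,                      # 3) True
      np.isclose(lam1 * -g1, 0),     # 4) True
      np.all(x >= 0) and lam1 >= 0)  # 5) and 6) True
</syntaxhighlight>

Since all six conditions hold and <math>f</math> is concave while the constraint is linear (hence convex), the point is a global maximum (see the next section).
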
== Further Information ==

If <math>f(x_1,\ldots,x_n)</math> is concave and all <math>h_i(x_1,\ldots,x_n)</math> are convex, the satisfaction of all KKT conditions is necessary and sufficient for a global optimum, provided the Slater condition is satisfied too. This means there exists an <math>x \in \R_{+}^n</math> such that <math>g(x) < 0</math>, i.e. there is a point in the interior of the solution space. Note that the <math>h_i(x_1,\ldots,x_n)</math> are the original constraints; to make things easy, these restrictions are converted from inequalities into equality constraints (<math>g_i(x_1,\ldots,x_n)</math>).
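
For the invented example above, the Slater condition is easy to verify: <math>x = (0.5,\, 0.5)</math> is strictly positive and satisfies <math>g_1(x) = x_1 + x_2 - 2 = -1 < 0</math>, so a strictly feasible interior point exists. A trivial check:

<syntaxhighlight lang="python">
# Slater point for the example above: strictly positive and g1(x) < 0
x = (0.5, 0.5)
print(x[0] > 0 and x[1] > 0 and (x[0] + x[1] - 2) < 0)  # True
</syntaxhighlight>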
