Version of 30 June 2013, 16:28
Introduction
The Karush-Kuhn-Tucker (KKT) theorem is a result in nonlinear programming (NLP). It builds on Lagrangian optimization, but additionally admits inequality constraints. The approach proves the optimality of a (given) point with respect to a nonlinear objective function. Satisfying the KKT conditions is a necessary condition for a solution to be optimal in NLP.
KKT Conditions
The six KKT conditions are based on the Lagrangian function of a given maximization (or minimization) problem.
<math>L(x,\lambda) = f(x_1,\ldots,x_n) - \sum_{i=0}^{k} \lambda_i \, g_i(x_1,\ldots,x_n)</math>, where <math>f(x_1,\ldots,x_n)</math> is the objective function and <math>g_i(x_1,\ldots,x_n) = 0</math> are the constraints.

The algebraic sign of <math>f(x_1,\ldots,x_n)</math> depends on the type of problem: for a maximization problem it is "+", for a minimization problem "-". The reason is easy to see by reflecting the objective function across the x-axis. The graphic below illustrates this relationship: only the algebraic sign of <math>\frac{\partial f}{\partial x}</math> changes, because a maximizer of <math>f(x)</math> is a minimizer of <math>-f(x)</math>. The restrictions (in this case, <math>x</math> has to be lower than 1) do not change.
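As a sketch of how the Lagrangian above is used in practice, the following Python snippet solves the first-order (stationarity) conditions for a small equality-constrained example. The problem, its numbers, and the helper names (`lagrangian`, `num_grad`) are illustrative assumptions, not taken from the article: maximize <math>f(x_1,x_2) = x_1 x_2</math> subject to <math>g(x_1,x_2) = x_1 + x_2 - 4 = 0</math>.

```python
# Illustrative example (assumed, not from the article):
#   maximize f(x1, x2) = x1 * x2   subject to   g(x1, x2) = x1 + x2 - 4 = 0
#
# Lagrangian (maximization form, "-" in front of the multiplier term):
#   L(x, lam) = f(x) - lam * g(x)
# Setting dL/dx1 = dL/dx2 = 0 together with the constraint gives
#   x2 - lam = 0,   x1 - lam = 0,   x1 + x2 - 4 = 0
# and hence x1 = x2 = lam = 2.

def f(x1, x2):
    return x1 * x2

def g(x1, x2):
    return x1 + x2 - 4.0

def lagrangian(x1, x2, lam):
    return f(x1, x2) - lam * g(x1, x2)

def num_grad(func, args, i, h=1e-6):
    """Central finite difference of func with respect to argument i."""
    a_plus = list(args)
    a_plus[i] += h
    a_minus = list(args)
    a_minus[i] -= h
    return (func(*a_plus) - func(*a_minus)) / (2.0 * h)

# Candidate point obtained by solving the conditions by hand.
x1, x2, lam = 2.0, 2.0, 2.0

# All partial derivatives of L vanish and the constraint holds, so
# (2, 2) with multiplier 2 satisfies the first-order conditions.
for i in range(3):
    assert abs(num_grad(lagrangian, (x1, x2, lam), i)) < 1e-4
assert abs(g(x1, x2)) < 1e-12

# Sign-flip remark from the text: the maximizer of f is a minimizer of -f;
# only the sign of the derivative of the objective changes, so the same
# point remains stationary.
assert abs(num_grad(lambda a, b: -f(a, b), (x1, x2), 0)
           + num_grad(f, (x1, x2), 0)) < 1e-4

print("stationary point:", (x1, x2), "multiplier:", lam)
```

For inequality constraints, the full KKT conditions would additionally require the multipliers of active inequality constraints to be nonnegative and complementary slackness to hold; this sketch covers only the equality-constrained (Lagrange) case introduced above.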