Nonlinear Opt.: KKT-theorem 1


== Introduction ==

The Karush-Kuhn-Tucker theorem (short: KKT theorem) is a method of nonlinear optimization. It is based on Lagrange optimization. In addition to the conditions of the Lagrange approach, there are further conditions, the so-called KKT conditions. The aim of this theory is to solve optimization problems whose additional constraints are given as inequalities.

The KKT conditions were first mentioned in the master's thesis of William Karush in 1939, but they became widely known only in 1951, when Harold W. Kuhn and Albert W. Tucker presented them in a conference paper.



== KKT-Conditions ==

The conditions refer to a problem of the form: minimize <math>f\left ( x \right )</math> subject to <math>g_{i}\left ( x \right )\leq 0</math> for <math>i=1,\ldots ,m</math> and <math>x\geq 0</math>. In the following, the conditions are stated in mathematical form, where <math>\widehat{x}</math> denotes a candidate solution, <math>\widehat{\lambda }</math> the corresponding vector of Lagrange multipliers, and <math>L</math> the Lagrangian function defined below:


(1) <math>\frac{\partial L}{\partial x_{j}}= \frac{\partial f}{\partial x_{j}}\left ( \widehat{x} \right )+ \sum_{i=1}^{m} \widehat{\lambda }_{i}\cdot \frac{\partial g_{i}}{\partial x_{j}}\left ( \widehat{x} \right )\geq 0</math>


(2) <math>\widehat{x}_{j}\cdot \frac{\partial L}{\partial x_{j}}= \widehat{x}_{j}\cdot \left ( \frac{\partial f}{\partial x_{j}}\left ( \widehat{x} \right )+ \sum_{i=1}^{m}\widehat{\lambda }_{i}\cdot \frac{\partial g_{i}}{\partial x_{j}}\left ( \widehat{x} \right ) \right )=0</math>


(3) <math>\frac{\partial L}{\partial \lambda _{i}}=g_{i}\left ( \widehat{x} \right )\leq 0</math>


(4) <math>\widehat{\lambda }_{i}\cdot \frac{\partial L}{\partial \lambda _{i}}=\widehat{\lambda }_{i}\cdot g_{i}\left ( \widehat{x} \right )=0</math>


(5) <math>\widehat{x}\geq 0</math>


(6) <math>\widehat{\lambda }\geq 0</math>
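
As a small worked illustration (a hypothetical example, not taken from the original text), consider the problem of minimizing <math>f\left ( x \right )=\left ( x-3 \right )^{2}</math> subject to <math>g_{1}\left ( x \right )=x-1\leq 0</math> and <math>x\geq 0</math>. The candidate point <math>\widehat{x}=1</math>, <math>\widehat{\lambda }_{1}=4</math> satisfies all six conditions: <math>\frac{\partial L}{\partial x}=2\left ( 1-3 \right )+4\cdot 1=0\geq 0</math> for (1), <math>1\cdot 0=0</math> for (2), <math>g_{1}\left ( 1 \right )=0\leq 0</math> for (3), <math>4\cdot 0=0</math> for (4), and <math>\widehat{x}=1\geq 0</math>, <math>\widehat{\lambda }_{1}=4\geq 0</math> for (5) and (6).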



Lagrangian function: <math>L\left ( x,\lambda  \right )=f\left ( x \right )+\sum_{i=1}^{m}\lambda _{i}\cdot g_{i}\left ( x \right )</math>, with partial derivatives <math>\frac{\partial L}{\partial x_{j}}= \frac{\partial f}{\partial x_{j}}\left ( x \right )+\sum_{i=1}^{m}\lambda _{i}\cdot \frac{\partial g_{i}}{\partial x_{j}}\left ( x \right )</math> and <math>\frac{\partial L}{\partial \lambda _{i}}=g_{i}\left ( x \right )</math>, which are the expressions used in conditions (1)–(4).
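
For the hypothetical example introduced above, the Lagrangian reads <math>L\left ( x,\lambda  \right )=\left ( x-3 \right )^{2}+\lambda \left ( x-1 \right )</math>; its partial derivatives <math>\frac{\partial L}{\partial x}=2\left ( x-3 \right )+\lambda</math> and <math>\frac{\partial L}{\partial \lambda }=x-1</math> are exactly the expressions checked in conditions (1)–(4), evaluated at <math>\left ( \widehat{x},\widehat{\lambda } \right )</math>.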


The Lagrangian has a saddle point at <math>\left ( \widehat{x},\widehat{\lambda } \right )</math> if <math>L\left ( \widehat{x},\lambda  \right )\leq L\left ( \widehat{x},\widehat{\lambda } \right )\leq L\left ( x,\widehat{\lambda } \right )</math> holds for all <math>x\geq 0</math> and all <math>\lambda \geq 0</math>.

In general it holds that <math>\widehat{x}</math> minimizes <math>L\left ( x,\widehat{\lambda } \right )</math> and <math>\widehat{\lambda }</math> maximizes <math>L\left ( \widehat{x},\lambda  \right )</math>.
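
The six conditions can also be checked numerically. The following Python sketch (an illustration under the stated assumptions, not part of the original article) evaluates conditions (1)–(6) for given gradients of the objective, the constraint functions, and a candidate point; the example data correspond to the hypothetical problem used above.

<pre>
# Minimal sketch (not from the original article): numerically checking the six
# KKT conditions for a problem of the form  min f(x)  s.t.  g_i(x) <= 0,  x >= 0,
# with L(x, lambda) = f(x) + sum_i lambda_i * g_i(x).
# Example data: min (x - 3)^2  s.t.  x - 1 <= 0,  x >= 0;  candidate x_hat = 1, lambda_hat = 4.

def grad_f(x):              # gradient of the objective f(x) = (x1 - 3)^2
    return [2.0 * (x[0] - 3.0)]

def g(x):                   # constraint values g_i(x), required to be <= 0
    return [x[0] - 1.0]

def grad_g(x):              # one gradient (row) per constraint g_i
    return [[1.0]]

def check_kkt(x_hat, lam_hat, tol=1e-8):
    # dL/dx_j = df/dx_j + sum_i lambda_i * dg_i/dx_j, evaluated at x_hat
    dL_dx = [grad_f(x_hat)[j]
             + sum(lam_hat[i] * grad_g(x_hat)[i][j] for i in range(len(lam_hat)))
             for j in range(len(x_hat))]
    g_val = g(x_hat)
    checks = {
        "(1) dL/dx_j >= 0":          all(d >= -tol for d in dL_dx),
        "(2) x_j * dL/dx_j = 0":     all(abs(x_hat[j] * dL_dx[j]) <= tol
                                         for j in range(len(x_hat))),
        "(3) g_i(x_hat) <= 0":       all(gi <= tol for gi in g_val),
        "(4) lambda_i * g_i = 0":    all(abs(lam_hat[i] * g_val[i]) <= tol
                                         for i in range(len(lam_hat))),
        "(5) x_hat >= 0":            all(xj >= -tol for xj in x_hat),
        "(6) lambda_hat >= 0":       all(li >= -tol for li in lam_hat),
    }
    for name, ok in checks.items():
        print(name, "satisfied" if ok else "violated")
    return all(checks.values())

print(check_kkt(x_hat=[1.0], lam_hat=[4.0]))    # prints True for this example
</pre>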


