Nonlinear Opt.: KKT-theorem 1

From Operations-Research-Wiki
Version of 11 June 2013, 15:15

Introduction

The Karush-Kuhn-Tucker theorem (short: KKT theorem) is an approach to nonlinear optimization. It is based on Lagrangian optimization. Besides the conditions of the Lagrangian approach, there are some extra conditions, called the KKT conditions. The aim of this theory is to solve a problem whose additional constraints take the form of inequalities.

The KKT conditions were first mentioned in the master's thesis of William Karush in 1939. They became more widely known in 1951, when Harold W. Kuhn and Albert W. Tucker presented them in a conference paper.


KKT-Conditions

In the following, the conditions are stated in mathematical form:


(1) <math>\frac{\delta L}{\delta x_{j}}= \frac{\delta f}{\delta x_{j}}\left ( \widehat{x} \right )+ \sum_{i=1}^{m} \widehat{\lambda _{i}}\cdot \frac{\delta g_{i}}{\delta x_{j}}\left ( \widehat{x} \right )\geq 0</math>


(2) <math>\widehat{x}_{j}\cdot \frac{\delta L}{\delta x_{j}}= \widehat{x}_{j}\cdot \left ( \frac{\delta f}{\delta x_{j}}\left ( \widehat{x} \right )+ \sum_{i=1}^{m}\widehat{\lambda _{i}}\cdot \frac{\delta g_{i}}{\delta x_{j}}\left ( \widehat{x} \right ) \right )=0</math>


(3) <math>\frac{\delta L}{\delta \lambda _{i}}=g_{i}\left ( \widehat{x} \right )\leq 0</math>


(4) <math>\widehat{\lambda _{i}}\cdot \frac{\delta L}{\delta \lambda _{i}}=\widehat{\lambda _{i}}\cdot g_{i}\left ( \widehat{x} \right )=0</math>


(5) <math>\widehat{x}\geq 0</math>


(6) <math>\widehat{\lambda }\geq 0</math>
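Conditions (1)–(6) can be checked numerically for a candidate point. The following sketch uses a hypothetical toy problem, not from the article: minimize <math>f(x)=(x_{1}-2)^{2}+(x_{2}-1)^{2}</math> subject to <math>g(x)=x_{1}+x_{2}-2\leq 0</math> and <math>x\geq 0</math>, with the sign convention <math>L(x,\lambda )=f(x)+\lambda \cdot g(x)</math> used above.

```python
import numpy as np

# Hypothetical toy problem (illustration only):
# minimize f(x) = (x1-2)^2 + (x2-1)^2
# subject to g(x) = x1 + x2 - 2 <= 0 and x >= 0.

def grad_f(x):
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

def g(x):
    return x[0] + x[1] - 2

def grad_g(x):
    return np.array([1.0, 1.0])

def kkt_satisfied(x, lam, tol=1e-8):
    """Check conditions (1)-(6) for a candidate (x, lambda)."""
    dL = grad_f(x) + lam * grad_g(x)           # gradient of L in x
    return (np.all(dL >= -tol)                 # (1) dL/dx_j >= 0
            and abs(np.dot(x, dL)) <= tol      # (2) complementary slackness in x
            and g(x) <= tol                    # (3) feasibility
            and abs(lam * g(x)) <= tol         # (4) complementary slackness in lambda
            and np.all(x >= -tol)              # (5) x >= 0
            and lam >= -tol)                   # (6) lambda >= 0

x_hat, lam_hat = np.array([1.5, 0.5]), 1.0     # candidate for this toy problem
print(kkt_satisfied(x_hat, lam_hat))           # True: all six conditions hold
```

At the candidate, the unconstrained minimizer (2, 1) is infeasible, so the constraint is active and the multiplier is positive; the same function returns False for, e.g., the infeasible point (2, 1) with <math>\lambda =0</math>.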



Lagrangian function: <math>L\left ( x,\lambda \right )= f\left ( x \right )+\sum_{i=1}^{m}\lambda _{i}\cdot g_{i}\left ( x \right )</math>, with gradient <math>\frac{\delta L}{\delta x_{j}}= \frac{\delta f}{\delta x_{j}}\left ( x \right )+\sum_{i=1}^{m}\lambda _{i}\cdot \frac{\delta g_{i}}{\delta x_{j}}\left ( x \right )</math>


A saddle point exists at <math>\left ( \widehat{x},\widehat{\lambda } \right )</math> if <math>L\left ( \widehat{x},\lambda \right )\leq L\left ( \widehat{x},\widehat{\lambda } \right )\leq L\left ( x,\widehat{\lambda } \right )</math>.
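The two saddle-point inequalities can be illustrated numerically. The sketch below uses a hypothetical toy problem (not from the article) with <math>f(x)=(x_{1}-2)^{2}+(x_{2}-1)^{2}</math>, <math>g(x)=x_{1}+x_{2}-2</math>, and candidate <math>\widehat{x}=(1.5,\,0.5)</math>, <math>\widehat{\lambda }=1</math>, and checks both inequalities at randomly sampled <math>\lambda \geq 0</math> and <math>x\geq 0</math>.

```python
import numpy as np

# Hypothetical toy problem (illustration only):
# f(x) = (x1-2)^2 + (x2-1)^2,  g(x) = x1 + x2 - 2,
# Lagrangian L(x, lam) = f(x) + lam * g(x).
def L(x, lam):
    return (x[0] - 2)**2 + (x[1] - 1)**2 + lam * (x[0] + x[1] - 2)

x_hat, lam_hat = np.array([1.5, 0.5]), 1.0   # saddle-point candidate
mid = L(x_hat, lam_hat)                      # middle term of the inequality

rng = np.random.default_rng(0)
lams = rng.uniform(0, 10, 1000)              # random multipliers lam >= 0
xs = rng.uniform(0, 5, (1000, 2))            # random points x >= 0

left_ok = all(L(x_hat, lam) <= mid + 1e-12 for lam in lams)   # L(x_hat, lam) <= L(x_hat, lam_hat)
right_ok = all(mid <= L(x, lam_hat) + 1e-12 for x in xs)      # L(x_hat, lam_hat) <= L(x, lam_hat)
print(left_ok and right_ok)
```

Here <math>g(\widehat{x})=0</math>, so the left inequality holds with equality for every <math>\lambda </math>, and <math>L(x,\widehat{\lambda })</math> is convex with its minimum at <math>\widehat{x}</math>, so the right inequality holds as well.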

In general it holds that <math>\widehat{x}</math> minimizes <math>L\left ( x,\widehat{\lambda } \right )</math> and <math>\widehat{\lambda }</math> maximizes <math>L\left ( \widehat{x},\lambda \right )</math>.

If the Lagrangian function is convex in <math>x</math> and concave in <math>\lambda </math>, then the KKT conditions are necessary and sufficient for a saddle point of <math>L\left ( x,\lambda \right )</math>.

This does not mean that every computed candidate is an optimal solution to the problem.


