Nonlinear Opt.: KKT-theorem 3

== Introduction ==

The Karush–Kuhn–Tucker (KKT) conditions are first-order necessary conditions for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. By allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations.

== Optimization problem ==
 
Given is the following optimization problem:

<math>\min z= f (x_{1},...,x_{n})</math>

or

<math>\max z= f (x_{1},...,x_{n})</math>
  
  
With these constraints:

<math>h_{i}(x_{1},...,x_{n})\leq 0</math>, i=1,...,m

<math>x_{j}\geq 0</math>, j=1,...,n
  
 
<math>f (x_{1},...,x_{n})</math> and <math>h_{i}(x_{1},...,x_{n})</math> are continuously differentiable.

----
  
 
1. Start with the Lagrangian function:

<math>L(x,\lambda)= f(x_{1},...,x_{n})+ \sum _{i=1}^{m}\lambda _{i}*h_{i}(x_{1},...,x_{n})</math> (Minimum-problem)

<math>L(x,\lambda)= f(x_{1},...,x_{n})- \sum _{i=1}^{m}\lambda _{i}*h_{i}(x_{1},...,x_{n})</math> (Maximum-problem)
2. Continue with the Karush-Kuhn-Tucker conditions:
  
  
I. <math>\frac{\partial L}{\partial x_{j}}= \frac{\partial f}{\partial x_{j}}(\hat{\underline{x}})+ \sum_{i=1}^{m}\hat{\lambda_{i} }* \frac{\partial h_{i}}{\partial x_{j}}(\hat{\underline{x}})\geq 0</math>, j=1,...,n (Minimum-problem)

<math>\frac{\partial L}{\partial x_{j}}= \frac{\partial f}{\partial x_{j}}(\hat{\underline{x}})- \sum_{i=1}^{m}\hat{\lambda_{i} }* \frac{\partial h_{i}}{\partial x_{j}}(\hat{\underline{x}})\leq 0</math>, j=1,...,n (Maximum-problem)
II. <math>\hat{x}_{j}*\frac{\partial L}{\partial x_{j}}= \hat{x}_{j} *[\frac{\partial f}{\partial x_{j}}(\hat{\underline{x}})+\sum_{i=1}^{m}\hat{\lambda_{i} }* \frac{\partial h_{i}}{\partial x_{j}}(\hat{\underline{x}})]=0</math>, j=1,...,n (Minimum-problem)

<math>\hat{x}_{j}*\frac{\partial L}{\partial x_{j}}= \hat{x}_{j} *[\frac{\partial f}{\partial x_{j}}(\hat{\underline{x}})-\sum_{i=1}^{m}\hat{\lambda_{i} }* \frac{\partial h_{i}}{\partial x_{j}}(\hat{\underline{x}})]=0</math>, j=1,...,n (Maximum-problem)
III. <math>\frac{\partial L}{\partial \lambda_{i}}= h_{i}(\hat{x})\leq 0</math>, i=1,...,m

IV. <math>\hat{\lambda_{i} }*\frac{\partial L}{\partial \lambda_{i}}=\hat{\lambda_{i} }*h_{i}(\hat{x})=0</math>, i=1,...,m

V. <math>\hat{\underline{x}}\geq 0</math>

VI. <math>\hat{\underline{\lambda}}\geq 0</math>
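These six conditions can be assembled mechanically from the Lagrangian. The following minimal sketch does this for the minimum problem, assuming sympy is available; the function name kkt_conditions and the small problem at the end are illustrative placeholders, not part of the theorem:

<pre>
# Build conditions I.-IV. of a minimum problem from its Lagrangian;
# V. and VI. are the sign constraints on x and lambda.
import sympy as sp

def kkt_conditions(f, hs, xs):
    lams = sp.symbols(f'lambda1:{len(hs) + 1}', nonnegative=True)
    L = f + sum(l * h for l, h in zip(lams, hs))        # Lagrangian (minimum problem)
    I   = [sp.Ge(sp.diff(L, xj), 0) for xj in xs]       # I.   dL/dx_j >= 0
    II  = [sp.Eq(xj * sp.diff(L, xj), 0) for xj in xs]  # II.  x_j * dL/dx_j = 0
    III = [sp.Le(h, 0) for h in hs]                     # III. h_i <= 0
    IV  = [sp.Eq(l * h, 0) for l, h in zip(lams, hs)]   # IV.  lambda_i * h_i = 0
    return I + II + III + IV

# Hypothetical example: min x1^2 + x2^2 subject to 1 - x1 - x2 <= 0
x1, x2 = sp.symbols('x1 x2', nonnegative=True)
for cond in kkt_conditions(x1**2 + x2**2, [1 - x1 - x2], [x1, x2]):
    print(cond)
</pre>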
  
  
<math>\hat{\underline{x}}</math> is a global minimum/maximum of <math>L(x,\hat{\underline{\lambda}})</math> for a fixed <math>\hat{\underline{\lambda}}</math>.

<math>\hat{\underline{\lambda}}</math> is a global maximum/minimum of <math>L(\hat{\underline{x}},\lambda)</math> for a fixed <math>\hat{\underline{x}}</math>.

A vector <math>(\hat{\underline{x}},\hat{\underline{\lambda}})</math> is called a saddle point of the Lagrangian function <math>L(x,\lambda)</math> if it holds for all <math>x\geq 0</math> and <math>\lambda\geq 0</math>:

<math>L(\hat{x},\lambda )\leq L(\hat{x},\hat{\lambda} )\leq L(x,\hat{\lambda} )</math>
  
 
If <math>(\hat{\underline{x}},\hat{\underline{\lambda}})</math> is a saddle point of <math>L(x,\lambda)</math>, then <math>\hat{x}</math> is a minimizer/maximizer of the NLP.

If the function <math>f (x_{1},...,x_{n})</math> is concave, all <math>h_{i}(x_{1},...,x_{n})</math> are convex, and the Slater condition (there exists an <math>x\in \mathbb{R}_{+}^{n}</math> with <math>h_{i}(x)< 0</math> for all i) is satisfied, then the KKT conditions are necessary and sufficient for an optimal solution.
  
 
== Example ==
  
 
Given is the following optimization problem:

<math>\max f(x_{1},x_{2})=8*x_{1}-x_{1}^2+4*x_{2}-x_{2}^2</math>

under the restrictions <math> g(x_{1}, x_{2})=x_{1}+x_{2}\leq 2 </math> and <math> x_{1},x_{2}\geq 0</math>.

----
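Before working through the conditions by hand, the result can be previewed numerically. A minimal sketch, assuming scipy is available (the starting point x0 is an arbitrary interior guess):

<pre>
# Maximize f by minimizing -f; 'ineq' constraints in scipy mean fun(x) >= 0,
# so x1 + x2 <= 2 is written as 2 - x1 - x2 >= 0.
from scipy.optimize import minimize

f = lambda x: -(8*x[0] - x[0]**2 + 4*x[1] - x[1]**2)
cons = [{'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]}]
res = minimize(f, x0=[1.0, 1.0], bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, -res.fun)   # approximately [2. 0.] and 12.0
</pre>

This already points to the solution that the KKT analysis below derives by hand.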
 
 
Start with the Lagrangian function:

<math>L(x,\lambda)=8*x_{1}-x_{1}^2+4*x_{2}-x_{2}^2 - \lambda*(x_{1}+x_{2}-2)</math>
  
 
Continue with the KKT conditions:

I.(a) <math>\frac{\partial L}{\partial x_{1}}= 8-2*x_{1}-\lambda\leq 0</math>

I.(b) <math>\frac{\partial L}{\partial x_{2}}= 4-2*x_{2}-\lambda\leq 0</math>

II.(a) <math>x_{1}*\frac{\partial L}{\partial x_{1}}=x_{1}*(8-2*x_{1}-\lambda)=0</math>

II.(b) <math>x_{2}*\frac{\partial L}{\partial x_{2}}=x_{2}*(4-2*x_{2}-\lambda)=0</math>
 
III. <math>\frac{\partial L}{\partial \lambda }= -(x_{1}+x_{2}-2) \geq 0</math>, i.e. <math>x_{1}+x_{2}-2 \leq 0</math>

IV. <math>\lambda *\frac{\partial L}{\partial \lambda }= -\lambda *(x_{1}+x_{2}-2) = 0</math>, i.e. <math>\lambda *(x_{1}+x_{2}-2) = 0</math>

V. <math>x_{1}, x_{2}\geq 0</math>

VI. <math>\lambda \geq 0</math>
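The case analysis below can also be automated: II.(a), II.(b) and IV. each force one factor of a product to zero, so one can enumerate the resulting combinations and keep only the solutions that satisfy the remaining conditions. An illustrative sketch, assuming sympy:

<pre>
# Enumerate the complementarity cases of II.(a), II.(b) (cases (1)-(4) below)
# combined with IV. (lambda = 0 or x1 + x2 = 2), then filter by I., III., V., VI.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)
dL1, dL2, g = 8 - 2*x1 - lam, 4 - 2*x2 - lam, x1 + x2 - 2

candidates = []
for case in ([x1, x2], [x1, dL2], [dL1, x2], [dL1, dL2]):
    for slack in (lam, g):
        candidates += sp.solve(case + [slack], [x1, x2, lam], dict=True)

feasible = {(s[x1], s[x2], s[lam]) for s in candidates
            if s[x1] >= 0 and s[x2] >= 0 and s[lam] >= 0     # V., VI.
            and dL1.subs(s) <= 0 and dL2.subs(s) <= 0        # I.(a), I.(b)
            and g.subs(s) <= 0}                              # III.
print(feasible)   # {(2, 0, 4)}
</pre>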
  
 
There are four possible cases for the solution:

(1) <math>x_{1}=0</math> and <math>x_{2}=0</math>

(2) <math>x_{1}>0</math> and <math>x_{2}>0</math>

(3) <math>x_{1}=0</math> and <math>x_{2}>0</math>

(4) <math>x_{1}>0</math> and <math>x_{2}=0</math>


- case (1): <math>x_{1}=0</math> and <math>x_{2}=0</math>

If <math>x_{1}=0</math> and <math>x_{2}=0</math>, then <math>\lambda=0</math> because of IV.

This contradicts I.(a): <math>8-2*0-0\leq 0</math> is false.
  
- case (2): <math>x_{1}>0</math> and <math>x_{2}>0</math>

Because of II: <math> x_{1}=4-0.5*\lambda </math> and <math> x_{2}=2-0.5*\lambda</math>

a) Because of IV, first suppose <math> \lambda =0 </math>; then <math> x_{1}=4</math> and <math> x_{2}=2</math>.

This contradicts III: <math> 4+2\leq 2</math> is false.

b) Otherwise IV requires <math> x_{1}+ x_{2}=2</math>, so <math> 4-0.5*\lambda+2-0.5*\lambda=2</math> and so <math> \lambda=4 </math>.

This contradicts the assumption <math> x_{2}>0</math>, because <math> x_{2}=2-0.5*4=0</math>.
  
- case (3): <math> x_{1}=0</math> and <math>x_{2}>0</math>

Then <math>\lambda \geq 8</math> because of I.(a), and <math>x_{2}=2</math> because of IV.

With II.(b) you get <math>\lambda=4-2*2=0</math>. This contradicts I.(a).
  
- case (4): <math>x_{1}>0</math> and <math>x_{2}=0</math>

Then <math>\lambda \geq 4</math> because of I.(b), and <math>x_{1}=2</math> because of IV.

With II.(a) you get <math>\lambda=8-2*2=4</math>.
  
If you now check all KKT conditions, you can see that <math>x_{1}=2</math>, <math>x_{2}=0</math> and <math>\lambda=4</math> fulfill all the conditions.

The function <math>f(x_{1},x_{2})=8*x_{1}-x_{1}^2+4*x_{2}-x_{2}^2</math> is concave (I*) and the Slater condition is satisfied (II*). So the KKT conditions are necessary and sufficient for an optimal solution.

(I*) A function f(x) is strictly concave if its Hessian matrix is negative definite.
 
Here: <math> \bigtriangledown f(x)=\begin{bmatrix}8-2*x_{1}\\4-2*x_{2}\end{bmatrix}</math> and <math> H(x)=\begin{bmatrix}-2 &0 \\ 0&-2 \end{bmatrix}</math>, so

<math>det(H(x)-\mu*I)=\begin{vmatrix}-2-\mu &0 \\0&-2-\mu\end{vmatrix}=(2+\mu )^2=0</math>

So the eigenvalue <math>\mu = -2< 0</math>, i.e. H(x) is negative definite and consequently f(x) is concave.
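The definiteness check can also be done numerically; a small sketch assuming numpy:

<pre>
# The Hessian of f is constant, so one eigenvalue check suffices;
# both eigenvalues are -2 < 0, hence H is negative definite.
import numpy as np

H = np.array([[-2.0, 0.0], [0.0, -2.0]])
print(np.linalg.eigvalsh(H))                      # [-2. -2.]
print(bool(np.all(np.linalg.eigvalsh(H) < 0)))    # True
</pre>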
(II*) The Slater condition is satisfied if an <math>x=(x_{1},x_{2})\geq 0</math> exists such that <math> x_{1}+x_{2}< 2</math>; this is given with <math>(0.5, 0.5)</math>.


The optimal solution is <math>x^*=(2,0)</math> with <math>f(2,0)=12</math>.
  
  

== Source ==

S. Boyd, L. Vandenberghe: Convex Optimization. Cambridge University Press, Cambridge, 2004.

W. Domschke, A. Drexl: Einführung in Operations Research. 6th edition, 2005.

W. Domschke et al.: Übungen und Fallbeispiele zum Operations Research. 5th edition, 2005.