Nonlinear Opt.: Quadratic Problems 3
Current version as of 30 June 2013, 20:00

Introduction

Quadratic problems are a subclass of nonlinear optimisation problems, and each problem type has its own specialised solution technique. The difference between linear and quadratic optimisation problems lies in the objective function F(x), which also contains quadratic terms (<math>c_{ii}x_{i}^{2}</math> or <math>c_{ij}x_{i}x_{j}</math>). There is no universal solution method that can be used to solve quadratic problems, as the simplex method is for linear optimisation problems. The three most common solution methods are Wolfe's algorithm, active-set methods, and the dual algorithm.

General Form

min <math>F=\sum_{i=1}^{n}\sum_{j=1}^{n}c_{ij}x_{i}x_{j}+\sum_{k=1}^{n}c_{k}x_{k}</math>


s.t. linear constraints:

<math>\sum_{j=1}^{n}a_{lj}x_{j}\leq b_{l}, \quad l=1,...,m</math>


NNC: <math>x_{j}\geq 0, \quad j=1,...,n</math>


Quadratic Form

To transform the general form of the quadratic problem into the quadratic form, it is necessary to collect


  • the coefficients <math>c_{ij}</math> in a symmetric n x n matrix C,
  • the coefficients <math>a_{lj}</math> in an m x n matrix A,
  • the coefficients <math>c_{k}</math> in a column vector <math>c \in \mathbb{R}^{n}</math>,
  • the right-hand sides <math>b_{l}</math> in a column vector <math>b \in \mathbb{R}^{m}</math>, and
  • the variables <math>x_{j}</math> in a vector <math>x \in \mathbb{R}^{n}</math>.


This provides the Quadratic Problem in quadratic form:

min <math>f(x)=c^{T}x+\frac{1}{2}x^{T}Cx</math>

or

max <math>f(x)=c^{T}x-\frac{1}{2}x^{T}Cx</math>


linear constraints:

<math>Ax \leq b</math>


NNC: <math>x\geq 0</math>
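The matrix notation can be checked against the scalar general form numerically. A minimal sketch in numpy (the values of C, c and x below are made-up, purely illustrative):

```python
import numpy as np

# Illustrative data: a symmetric 2x2 matrix C and a vector c (made-up values)
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])
c = np.array([7.0, 5.0])
x = np.array([1.5, 0.5])   # an arbitrary point

# Matrix notation: f(x) = c^T x - 1/2 x^T C x
f_matrix = c @ x - 0.5 * x @ C @ x

# The same function written out as a double sum over the coefficients
f_scalar = sum(c[k] * x[k] for k in range(2)) \
    - 0.5 * sum(C[i, j] * x[i] * x[j] for i in range(2) for j in range(2))

assert abs(f_matrix - f_scalar) < 1e-12   # both notations agree
```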


By using the coefficient <math>\frac{1}{2}</math>, the KKT-conditions are easier to handle.
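The factor <math>\frac{1}{2}</math> pays off because the gradient of <math>c^{T}x-\frac{1}{2}x^{T}Cx</math> is simply <math>c-Cx</math> for symmetric C, so no stray factor of 2 appears in the stationarity part of the KKT conditions. A quick finite-difference check in numpy (the data values are illustrative):

```python
import numpy as np

C = np.array([[4.0, 1.0],
              [1.0, 6.0]])      # any symmetric matrix
c = np.array([1.0, 2.0])

def f(x):
    # f(x) = c^T x - 1/2 x^T C x
    return c @ x - 0.5 * x @ C @ x

x0 = np.array([1.0, 1.0])
analytic = c - C @ x0            # gradient c - Cx, no factor 2

# central finite differences as an independent check
h = 1e-6
numeric = np.array([(f(x0 + h*e) - f(x0 - h*e)) / (2*h) for e in np.eye(2)])

assert np.allclose(analytic, numeric, atol=1e-5)
```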

For maximization (minimization) problems the objective function has to be concave (convex).

For <math>x \in \mathbb{R}</math> the function is:

  • concave, if <math>F''\leq 0</math>, and
  • convex, if <math>F''\geq 0</math>.


For <math>x \in \mathbb{R}^{n}</math> the function is:

  • concave, if the Hessian matrix is negative semidefinite, and
  • convex, if the Hessian matrix is positive semidefinite.

A symmetric square matrix C is positive (negative) semidefinite if all its eigenvalues are non-negative (non-positive); it is positive (negative) definite if all eigenvalues are strictly positive (negative).

Example For Convex And Concave Functions

1)

min <math>F(x_{1},x_{2})=2x_{1}^{2}+3x_{2}^{2}+x_{1}x_{2}</math>

Since this is a minimization problem, we have to verify that the function is a convex one.

First differentiations:


<math>\frac{\partial F}{\partial x_{1}}=4x_{1}+x_{2}</math>



<math>\frac{\partial F}{\partial x_{2}}=6x_{2}+x_{1}</math>


Second differentiations:

<math>\frac{\partial^2 F}{\partial x_{1}^{2}}=4, \quad \frac{\partial^2 F}{\partial x_{2}\partial x_{1}}=1</math>



<math>\frac{\partial^2 F}{\partial x_{1}\partial x_{2}}=1, \quad \frac{\partial^2 F}{\partial x_{2}^{2}}=6</math>


The corresponding Hessian matrix therefore is: <math>H=\begin{pmatrix} 4 & 1\\ 1 & 6 \end{pmatrix}</math>

Eigenvalues of the Hessian matrix:

<math>det(H-\lambda E)= det\begin{pmatrix} 4-\lambda & 1\\ 1 & 6-\lambda \end{pmatrix} =(4-\lambda)(6-\lambda)-1=\lambda^{2}-10\lambda+23</math>


<math>\lambda_{1,2}=5\pm\sqrt{25-23}=5\pm\sqrt{2}</math>


Since <math>\lambda_{1,2}>0</math>, the Hessian is positive definite and F is therefore convex.


2)

min <math>F(x_{1},x_{2})=-2x_{1}^{2}-3x_{2}^{2}+x_{1}x_{2}</math>


Since this is a minimization problem, we have to check whether the function is convex.

First differentiations:


<math>\frac{\partial F}{\partial x_{1}}=-4x_{1}+x_{2}</math>


<math>\frac{\partial F}{\partial x_{2}}=-6x_{2}+x_{1}</math>


Second differentiations: <math>\frac{\partial^2 F}{\partial x_{1}\partial x_{2}}=1, \quad \frac{\partial^2 F}{\partial x_{2}\partial x_{1}}=1</math>


<math>\frac{\partial^2 F}{\partial x_{1}^{2}}=-4, \quad \frac{\partial^2 F}{\partial x_{2}^{2}}=-6</math>


The corresponding Hessian matrix is: <math>H=\begin{pmatrix} -4 & 1\\ 1 & -6 \end{pmatrix}</math>


Eigenvalues of the Hessian matrix:

<math>det(H-\lambda E)= det\begin{pmatrix} -4-\lambda & 1\\ 1 & -6-\lambda \end{pmatrix} =(-4-\lambda)(-6-\lambda)-1=\lambda^{2}+10\lambda+23</math>



<math>\lambda_{1,2}=-5\pm\sqrt{25-23}=-5\pm\sqrt{2}</math>


Since <math>\lambda_{1,2}<0</math>, the Hessian is negative definite and F is therefore concave.
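Both eigenvalue computations can be reproduced with numpy; numpy.linalg.eigvalsh returns the real eigenvalues of a symmetric matrix in ascending order:

```python
import numpy as np

H1 = np.array([[4.0, 1.0],
               [1.0, 6.0]])     # Hessian of example 1
H2 = np.array([[-4.0, 1.0],
               [1.0, -6.0]])    # Hessian of example 2

ev1 = np.linalg.eigvalsh(H1)    # 5 -/+ sqrt(2): all positive
ev2 = np.linalg.eigvalsh(H2)    # -5 -/+ sqrt(2): all negative

assert np.all(ev1 > 0)          # positive definite -> example 1 convex
assert np.all(ev2 < 0)          # negative definite -> example 2 concave
```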

Example For A Quadratic Problem

Production program scheduling:

A monopolist produces T-shirts <math>(x_{1})</math> and tank tops <math>(x_{2})</math> and wants to maximize its marginal revenue under observance of linear constraints.

The price-sales function of the two products is given as:

<math>p_{1}(x_{1})=10-x_{1}</math>


<math>p_{2}(x_{2})=10-x_{1}-x_{2}</math>


(Remember: Since this is a monopolist, he determines the price and the consumers react with their demand)


The constant variable costs are <math>k_{1}=3</math> and <math>k_{2}=5</math>.


The total marginal revenue is:

<math>R(x_{1},x_{2})=p_{1}(x_{1})x_{1}+p_{2}(x_{2})x_{2}-k_{1}x_{1}-k_{2}x_{2}=10x_{1}-x_{1}^{2}+10x_{2}-x_{1}x_{2}-x_{2}^{2}-3x_{1}-5x_{2}=-x_{1}^{2}+7x_{1}-x_{1}x_{2}-x_{2}^{2}+5x_{2}</math>


With integration of linear constraints we get the following Nonlinear Optimization Problem:

Objective function:

Max <math>F(x_{1},x_{2})=-x_{1}^{2}+7x_{1}-x_{1}x_{2}-x_{2}^{2}+5x_{2}</math>


linear constraints:

<math>x_{1}+2x_{2}-11\leq 0</math>


<math>3x_{1}+x_{2}-7\leq 0</math>


<math>x_{1},x_{2}\geq 0</math>


Transformation into quadratic form:

<math>F(x_{1},x_{2})=x_{1}(-x_{1}+7- \frac{1}{2} x_{2})+x_{2}(-x_{2}+5- \frac{1}{2} x_{1})=7x_{1}- \frac{1}{2} ((2x_{1}+x_{2})x_{1}+(2x_{2}+x_{1})x_{2})+5x_{2}</math>


In matrix notation:

Max <math>F(x_{1},x_{2})= \begin{pmatrix}7 & 5 \end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} - \frac{1}{2} \begin{pmatrix}x_1 & x_2 \end{pmatrix} \begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \end{pmatrix}</math>


Linear constraints:

<math>\begin{pmatrix} 1&2 \\ 3&1 \end{pmatrix} \binom{x_1}{x_2}\leq \binom{11}{7}</math>


NNC: <math>\binom{x_1}{x_2}\geq \binom{0}{0}</math>
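The complete example can also be solved numerically. A sketch using scipy.optimize.minimize with the SLSQP method (assuming scipy is available; the starting point is an arbitrary choice), where the maximization is carried out by minimizing -F:

```python
import numpy as np
from scipy.optimize import minimize

def neg_F(x):
    # F(x1, x2) = -x1^2 + 7x1 - x1*x2 - x2^2 + 5x2, negated for minimization
    x1, x2 = x
    return -(-x1**2 + 7*x1 - x1*x2 - x2**2 + 5*x2)

constraints = [
    {"type": "ineq", "fun": lambda x: 11 - x[0] - 2*x[1]},  # x1 + 2x2 <= 11
    {"type": "ineq", "fun": lambda x: 7 - 3*x[0] - x[1]},   # 3x1 + x2 <= 7
]
bounds = [(0, None), (0, None)]                             # NNC: x >= 0

res = minimize(neg_F, x0=np.zeros(2), method="SLSQP",
               bounds=bounds, constraints=constraints)

x1_opt, x2_opt = res.x          # optimum lies on the line 3x1 + x2 = 7
F_opt = -res.fun
```

Only the second constraint is active at the optimum: the solver finds x1 = 27/14 ≈ 1.93 and x2 = 17/14 ≈ 1.21 with F ≈ 12.04, which can be verified against the KKT conditions by hand.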


References

Prof. Dr. Oliver Wendt: Skriptum Operations Research, Summer Term 2013

Domschke, Drexl: "Einführung in Operations Research" 7. Auflage, Darmstadt/Kiel 2006, Springer Verlag

Domschke: "Übungen und Fallbeispiele zum Operations-Research" 6. Auflage