Nonlinear Opt.: Basic concepts 2

In a '''nonlinear problem''' there is either a ''nonlinear objective function and no restrictions'', a ''nonlinear objective function with linear/nonlinear restrictions'', or a ''linear objective function with nonlinear restrictions''.
  
 
== Theory ==
In contrast to '''linear programming''', where the [http://en.wikipedia.org/wiki/Simplex_algorithm simplex algorithm] provides a universal solution method, there is no universal algorithm for solving a '''nonlinear problem'''.
  
  
<math> Restrictions: ~g_i(x) \begin{Bmatrix} \le \\ = \\ \ge \end{Bmatrix}~0 ~~~~~~(for~ i ~= ~1,...,m)</math>

<math> x \in W_1 \times W_2 \times \dots \times W_n , ~W_j \in \{ \mathbb R_+, \mathbb Z_+, \mathbb B \}  ~~~~~~~~~~(for~j = 1,...,n)</math>
In contrast to linear problems, nonlinear optimization problems have a nonlinear objective function and/or at least one nonlinear restriction.
As distinct from integer and combinatorial problems, where integer or binary variables occur, we assume that only (nonnegative) real variables occur in nonlinear optimization.
An objective function that has to be minimized can be replaced with one that has to be maximized, and vice versa.
Furthermore, every <math>\ge</math> restriction can be transformed into a <math>\le</math> restriction by multiplication with -1. Beyond that, every equation can be replaced by two <math>\le</math> restrictions.
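These transformations can be sketched in a few lines of Python. Representing a restriction as a function, a sense, and a bound is purely an illustrative assumption, not notation from this article:

```python
def to_leq_form(f, sense, b):
    """Normalize a restriction f(x) {<=, =, >=} b into a list of
    functions g with g(x) <= 0, as described above."""
    if sense == "<=":   # f(x) <= b  ->  f(x) - b <= 0
        return [lambda x, f=f, b=b: f(x) - b]
    if sense == ">=":   # multiply by -1:  b - f(x) <= 0
        return [lambda x, f=f, b=b: b - f(x)]
    if sense == "==":   # one equation -> two <= restrictions
        return [lambda x, f=f, b=b: f(x) - b,
                lambda x, f=f, b=b: b - f(x)]
    raise ValueError("unknown sense: " + sense)

# x1 + x2 >= 3 becomes g(x) = 3 - x1 - x2 <= 0
g = to_leq_form(lambda x: x[0] + x[1], ">=", 3)[0]
print(g([2, 2]))   # -1, i.e. the restriction is satisfied
```

After this normalization, every problem has only <math>g_i(x) \le 0</math> restrictions, which is the standard form used in the rest of the article.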
 
To simplify the notation, the functions <math>g_i</math> absorb the constants <math>b_i</math> known from linear optimization:

<math> f_i(x) \le b_i </math>

is transformed into

<math> g_i(x) := f_i (x)-b_i \le 0 </math>
A nonlinear optimization problem is then assumed to be given in the following notation:

Maximize F(x)

under the restrictions

<math>g_i (x) \le 0 ~~for~~  i = 1,...,m</math>
and

<math>x_j \ge 0 ~~for~~ j = 1,...,n</math>
== Examples ==

The first two examples are very easy because there are no restrictions: Example 1 is a simple minimization problem and Example 2 a maximization problem.

=== Example 1 (Minimization)===
 
<math>f(x) = x^2 - 8x </math>

[[Datei:NLP_Basic_Concepts1.jpg]]
  
 
'''Step 1:''' The necessary condition is that the first derivative equals zero, so take the derivative of the function and set it to zero.

<math>f'(x) = 2x - 8 ~~=> ~2x - 8 = 0 ~<=>~ x = 4</math>
  
'''Step 2:''' Now check the sufficient condition: for a minimum the second derivative has to be greater than zero, for a maximum it has to be less than zero.

<math>f''(x) = 2 ~> 0 ~~=> minimum </math>
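Both steps can be checked numerically. The finite-difference helper and the step size h below are illustrative choices, not part of the original example:

```python
def derivative(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    return x**2 - 8*x            # Example 1

x_star = 4.0                     # Step 1: root of f'(x) = 2x - 8

# Necessary condition: f'(x*) is (numerically) zero
print(abs(derivative(f, x_star)) < 1e-4)      # True

# Step 2, sufficient condition: f''(x*) > 0  ->  minimum
f2 = derivative(lambda t: derivative(f, t), x_star)
print(f2 > 0)                                 # True (f'' is about 2)
```

Applying the same check to Example 2's <math>f(x) = -x^2 - 8x</math> at <math>x = -4</math> gives a second derivative of about -2 < 0, which confirms the maximum.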
=== Example 2 (Maximization)===

<math>f(x) = -x^2 - 8x </math>

[[Datei:NLP_Basic_Concepts2.jpg]]
  
 
'''Step 1:''' The necessary condition is that the first derivative equals zero, so take the derivative of the function and set it to zero.

<math>f'(x) = -2x - 8 ~~=> ~-2x - 8 = 0 ~<=>~ x = -4</math>
  
'''Step 2:''' Now check the sufficient condition: for a minimum the second derivative has to be greater than zero, for a maximum it has to be less than zero.

<math>f''(x) = -2 ~< 0 ~~=> maximum </math>
=== Example 3 (Classical gradient method)===

We want to maximize the sum of the contribution margins of two products:

<math> C_m = p_i(x_i) x_i - c_i x_i  ~~~~~i={1,2}</math>

<math> p_i(x_i) = 7 - x_i ; ~c_i = 2 </math>

<math>Objective~function: ~~max~F(x_1,x_2) = 5x_1 - x_1^2 + 5x_2 - x_2^2</math>
[[Datei:NLP_Basic_Concepts3.jpg]]

<math> \nabla F(x_i) = \begin{bmatrix} 5-2x_1\\ 5-2x_2 \end{bmatrix} ~=> Hessian~matrix~H(x)= \begin{bmatrix} -2&~0\\ ~0&-2 \end{bmatrix}</math>

<math> (-2-\lambda)*(-2-\lambda)=0 ~=>~ \lambda=-2 </math>

Both eigenvalues are negative, so <math>H(x)</math> is negative definite and <math>-H(x)</math> is positive definite; <math>F</math> is therefore concave and its stationary point is the global maximum.

<math> \nabla F(x_1)=0=5-2x_1 ~=>x_1=2.5</math>

<math> \nabla F(x_2)=0=5-2x_2 ~=>x_2=2.5</math>
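The gradient computed in this example can be used directly in a fixed-step gradient ascent, which is one simple way to realize the classical gradient method here. The step size 0.1, the tolerance, and the starting point are arbitrary choices for this sketch:

```python
def grad_F(x1, x2):
    """Gradient of F(x1, x2) = 5*x1 - x1**2 + 5*x2 - x2**2."""
    return (5 - 2*x1, 5 - 2*x2)

def gradient_ascent(x, step=0.1, tol=1e-8, max_iter=10_000):
    """Follow the gradient uphill until its squared norm falls below tol."""
    for _ in range(max_iter):
        g = grad_F(*x)
        if g[0]**2 + g[1]**2 < tol:
            break
        x = (x[0] + step * g[0], x[1] + step * g[1])
    return x

x1, x2 = gradient_ascent((0.0, 0.0))
print(round(x1, 3), round(x2, 3))   # 2.5 2.5
```

Because the Hessian is the constant matrix diag(-2, -2) and hence negative definite, F is concave and the iteration converges to the unique global maximum at (2.5, 2.5).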
  
  
If <math>F(x)</math> is twice differentiable, then <math>\nabla F(x)=0</math> is a necessary and sufficient condition for a global maximum (or minimum), provided that <math>-H(x)</math> (or, respectively, <math>H(x)</math>) is [http://en.wikipedia.org/wiki/Positive_definiteness positive definite] everywhere.
  
== How to solve different types of nonlinear problems ==

The solution methods are not basic concepts themselves and are therefore explained in other wiki entries.

=== Class 1 (nonlinear objective function, no restrictions)===
 
:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_Classical_gradient_method Classical gradient method]
:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_Gold_section_search Golden section search]

=== Class 2 (nonlinear objective function, linear restrictions)===

:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_Lagrangian_condition Lagrangian condition]
:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_KKT-_theorem Karush-Kuhn-Tucker]

=== Class 3 (nonlinear objective function, nonlinear restrictions)===

:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_Lagrangian_condition Lagrangian condition]
:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_KKT-_theorem Karush-Kuhn-Tucker]

=== Class 4 (linear objective function, nonlinear restrictions)===

:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_Lagrangian_condition Lagrangian condition]
:* [http://bisor.wiwi.uni-kl.de/orwiki/Nonlinear_Opt.:_KKT-_theorem Karush-Kuhn-Tucker]

''Current version as of 8 July 2013, 08:42''
