
Nonlinear Optimization: Basic Concepts



Theory


A nonlinear program, also called an NLP, is similar to a linear program: it is made up of an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function.


First we define what an optimal solution is. The function f(x) that is to be minimized or maximized is called the objective function. x* is a local minimum if:

<math>x^* \in \mathbb{R}^n,\ \varepsilon > 0:\quad f(x^*) \leq f(x)\ \text{for all}\ x \in A(x^*; \varepsilon),</math>

where A(x*; &epsilon;) denotes the &epsilon;-neighbourhood of x*. And x* is a global minimum if:

<math>f(x^*) \leq f(x)\ \text{for all}\ x \in \mathbb{R}^n.</math>

The problem can be stated simply as:

<math>\max_{x \in X} f(x)</math> to maximize some variable such as product throughput, or

<math>\min_{x \in X} f(x)</math> to minimize a cost function, where

<math>f: \mathbb{R}^n \rightarrow \mathbb{R}, \quad x \in \mathbb{R}^n</math>

subject to:

<math>h_i(x) = 0, \quad i \in I = \{1, \ldots, p\}</math>

<math>g_j(x) \leq 0, \quad j \in J = \{1, \ldots, m\}</math>
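
As a concrete illustration of this general form, here is a minimal sketch, assuming SciPy is available; the objective f, the equality constraint h, and the inequality constraint g are made-up examples, not taken from this article:

<syntaxhighlight lang="python">
# A minimal sketch of the general NLP form above, assuming SciPy.
# The objective and both constraints are made-up illustrations.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # nonlinear objective
    return (x[0] - 1.0) ** 2 + x[0] * x[1] + x[1] ** 2

# h(x) = 0 :  x0 + x1 - 1 = 0
# g(x) <= 0:  x0^2 - 2 <= 0   (SciPy expects "ineq" constraints as
#             fun(x) >= 0, so the sign of g is flipped below)
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.0},
    {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2},
]

result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x, result.fun)
</syntaxhighlight>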

Example:


The following general-purpose NLP subroutines are available (a sketch of comparable SciPy calls follows the lists below):

NLPCG Conjugate Gradient Method

NLPDD Double Dogleg Method

NLPNMS Nelder-Mead Simplex Method

NLPNRA Newton-Raphson Method

NLPNRR Newton-Raphson Ridge Method

NLPQN (Dual) Quasi-Newton Method

NLPQUA Quadratic Optimization Method

NLPTR Trust-Region Method


The following subroutines are provided for solving nonlinear least-squares problems:

NLPLM Levenberg-Marquardt Least-Squares Method

NLPHQN Hybrid Quasi-Newton Least-Squares Methods
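
The routines above are library subroutines; for comparison, a minimal sketch assuming SciPy, whose minimize and least_squares functions expose algorithms from the same families (conjugate gradient, Nelder-Mead simplex, quasi-Newton, Levenberg-Marquardt):

<syntaxhighlight lang="python">
# A minimal sketch, assuming SciPy; its methods roughly correspond to the
# families listed above (CG ~ NLPCG, Nelder-Mead ~ NLPNMS, BFGS ~ NLPQN,
# Levenberg-Marquardt ~ NLPLM).
import numpy as np
from scipy.optimize import minimize, least_squares

def rosenbrock(x):
    # classic nonlinear test objective
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

x0 = np.array([-1.0, 2.0])
for method in ("CG", "Nelder-Mead", "BFGS"):
    print(method, minimize(rosenbrock, x0, method=method).x)

# Nonlinear least squares with Levenberg-Marquardt:
residuals = lambda x: np.array([1 - x[0], 10 * (x[1] - x[0] ** 2)])
print("LM", least_squares(residuals, x0, method="lm").x)
</syntaxhighlight>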


Example 1:

Simple NLP maximization


<math>f(x) = 3x - x^3 \Rightarrow \max</math>


First derivative:


<math>f'(x) = 3 - 3x^2 \Rightarrow 3 - 3x^2 = 0 \Rightarrow x_1 = +1, \ x_2 = -1</math>


Second derivative:

<math>f''(x) = -6x \ \Rightarrow \ f''(x_1) = -6 < 0 \ \Rightarrow \ \text{local maximum at} \ x_1 = +1</math>

(For <math>x_2 = -1</math>, <math>f''(x_2) = +6 > 0</math>, so that point is a local minimum, not a maximum.)
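
As a quick check of this example, a minimal symbolic sketch, assuming SymPy is available:

<syntaxhighlight lang="python">
# A minimal sketch, assuming SymPy, that reproduces Example 1 symbolically.
import sympy as sp

x = sp.symbols("x")
f = 3 * x - x**3

# Stationary points: roots of f'(x) = 3 - 3x^2
stationary = sp.solve(sp.diff(f, x), x)

for point in stationary:
    # Second-derivative test: f''(x) = -6x evaluated at the point
    curvature = sp.diff(f, x, 2).subs(x, point)
    kind = "local maximum" if curvature < 0 else "local minimum"
    print(point, kind)  # -1 -> local minimum, 1 -> local maximum
</syntaxhighlight>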



Example 2:

NLP Gradient Method


First we need to find the gradient:


<math>f(x,y,z) = 6x^2 + 2y^2 + 2z^2 \ \rightarrow \ \nabla f(x,y,z) = (12x, \ 4y, \ 4z)</math>


The gradient has to equal 0:

<math>\nabla f(x,y,z) = 0 \ \Rightarrow \ (12x, \ 4y, \ 4z) = (0, 0, 0) \ \Rightarrow \ (x, y, z) = (0, 0, 0)</math>


Differentiating once again yields the Hessian matrix:


<math>\nabla^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>


Check definiteness via the eigenvalues, for example after expanding the determinant with the Rule of Sarrus (for this diagonal matrix they can be read off directly):

<math>a_1 = 12, \ a_2 = 4, \ a_3 = 4; \quad a_1,\, a_2,\, a_3 > 0</math>

<math>\Rightarrow \ \text{the Hessian is positive definite} \ \Rightarrow \ \text{global minimum at} \ (0, 0, 0)</math>

Since the Hessian is positive definite everywhere, f is convex, so the stationary point is not only a local but the global minimum.
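
In keeping with the heading "NLP Gradient Method", here is a minimal gradient-descent sketch in Python/NumPy; the starting point and the fixed step size are arbitrary illustrative choices:

<syntaxhighlight lang="python">
# A minimal gradient-descent sketch for f(x,y,z) = 6x^2 + 2y^2 + 2z^2.
import numpy as np

def grad(p):
    x, y, z = p
    return np.array([12 * x, 4 * y, 4 * z])  # the gradient derived above

p = np.array([1.0, -2.0, 3.0])  # arbitrary starting point
step = 0.05                     # fixed step size (illustrative choice)

for _ in range(200):
    p = p - step * grad(p)

print(p)  # approaches the global minimum at (0, 0, 0)
</syntaxhighlight>

Because f is strictly convex, any sufficiently small fixed step size drives the iterates to (0, 0, 0); practical gradient methods choose the step by a line search instead.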







Sources:



http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf

http://www.wikipedia.org/nonlinear_programming

http://www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf