Nonlinear Optimization: Basic Concepts



Theory


A nonlinear program, also called an NLP, is similar to a linear program in that it is composed of an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP includes at least one nonlinear function.


First we define what an optimal solution is. The function that is to be minimized or maximized is called the objective function. A point x* is a local minimum if it is optimal on some neighborhood of itself:

<math>x^* \in \mathbb{R}^n,\ \exists\, \varepsilon > 0:\ f(x^*) \leq f(x)\ \text{for all}\ x \in A(x^*; \varepsilon)</math>

where A(x*; ε) denotes the ε-neighborhood of x*. It is a global minimum if:

<math>f(x^*) \leq f(x)\ \text{for all}\ x \in \mathbb{R}^n</math>

The problem can be stated simply as:

<math>\max_{x \in X} f(x)</math> to maximize some variable such as product throughput, or

<math>\min_{x \in X} f(x)</math> to minimize a cost function, where

<math>f: \mathbb{R}^n \rightarrow \mathbb{R},\quad x \in \mathbb{R}^n</math>

subject to:

<math>h_i(x) = 0,\quad i \in I = \{1, \dots, p\}</math>

<math>g_j(x) \leq 0,\quad j \in J = \{1, \dots, m\}</math>
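To make the formulation concrete, here is a minimal sketch in Python using scipy.optimize.minimize (assuming SciPy is available); the objective f, the equality constraint h, and the inequality constraint g are illustrative choices, not taken from this article:

<pre>
import numpy as np
from scipy.optimize import minimize

def f(x):                        # objective: f(x) = x1^2 + x2^2
    return x[0]**2 + x[1]**2

def h(x):                        # equality constraint h(x) = 0
    return x[0] + x[1] - 1.0     # enforces x1 + x2 = 1

def g(x):                        # inequality constraint g(x) <= 0
    return x[0] - 0.8            # enforces x1 <= 0.8

# SciPy expects inequality constraints in the form fun(x) >= 0,
# so g(x) <= 0 is passed as -g(x) >= 0.
constraints = [{"type": "eq", "fun": h},
               {"type": "ineq", "fun": lambda x: -g(x)}]

result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x)                  # approximately [0.5, 0.5]
</pre>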

Example:


The following NLP subroutines are general optimization methods:

NLPCG: Conjugate Gradient Method
NLPDD: Double Dogleg Method
NLPNMS: Nelder-Mead Simplex Method
NLPNRA: Newton-Raphson Method
NLPNRR: Newton-Raphson Ridge Method
NLPQN: (Dual) Quasi-Newton Method
NLPQUA: Quadratic Optimization Method
NLPTR: Trust-Region Method

The following subroutines are provided for solving nonlinear least-squares problems:

NLPLM: Levenberg-Marquardt Least-Squares Method
NLPHQN: Hybrid Quasi-Newton Least-Squares Methods
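Several of these methods have counterparts in other numerical libraries. As a rough illustration (an assumption on my part; the subroutines listed above come from SAS/IML, as documented in the first source at the end of this article), a few analogous calls in Python with SciPy:

<pre>
import numpy as np
from scipy.optimize import minimize, least_squares

def rosen(x):  # Rosenbrock test function, a standard NLP benchmark
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])
print(minimize(rosen, x0, method="CG").x)           # conjugate gradient (cf. NLPCG)
print(minimize(rosen, x0, method="Nelder-Mead").x)  # simplex method (cf. NLPNMS)
print(minimize(rosen, x0, method="BFGS").x)         # quasi-Newton (cf. NLPQN)

# Nonlinear least squares via Levenberg-Marquardt (cf. NLPLM):
def residuals(x):
    return np.array([1 - x[0], 10 * (x[1] - x[0]**2)])
print(least_squares(residuals, x0, method="lm").x)  # converges to [1, 1]
</pre>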


Example 1: Simple NLP maximization

<math>f(x) = 3x - x^3 \Rightarrow \max</math>

First derivative:

<math>f'(x) = 3 - 3x^2 \Rightarrow 3 - 3x^2 = 0 \Rightarrow x_1 = +1,\ x_2 = -1</math>

Second derivative:

<math>f''(x) = -6x \Rightarrow f''(x_1) = -6 < 0 \Rightarrow \text{local maximum at } x_1 = +1</math>

(At the other stationary point, <math>f''(x_2) = +6 > 0</math>, so <math>x_2 = -1</math> is a local minimum.)
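The stationary-point analysis above can be verified symbolically; a minimal sketch with SymPy (assumed available):

<pre>
import sympy as sp

x = sp.symbols("x")
f = 3*x - x**3
stationary = sp.solve(sp.diff(f, x), x)      # solves 3 - 3x^2 = 0 -> [-1, 1]
for s in stationary:
    curvature = sp.diff(f, x, 2).subs(x, s)  # f''(s) = -6s
    print(s, curvature)  # -1 -> 6 (local minimum), 1 -> -6 (local maximum)
</pre>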


Example 2: NLP Gradient Method

First we need to find the gradient:

<math>f(x,y,z) = 6x^2 + 2y^2 + 2z^2 \;\rightarrow\; \nabla f(x,y,z) = \left\{ 12x,\ 4y,\ 4z \right\}</math>

The gradient has to equal 0:

<math>\nabla f(x,y,z) = 0 \;\Rightarrow\; \left\{ 12x,\ 4y,\ 4z \right\} = (0,0,0) \;\Rightarrow\; (x,y,z) = (0,0,0)</math>

Now we obtain the Hessian matrix (Hesse matrix) by differentiating once again:

<math>\nabla^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>

Its determinant can be computed, for example, with the Rule of Sarrus; since the Hessian is diagonal, its eigenvalues are simply the diagonal entries. All of them are positive, so the Hessian is positive definite everywhere, f is convex, and the stationary point is a global minimum:

<math>a_1 = 12,\ a_{2,3} = 4;\quad a_{1,2,3} > 0 \;\Rightarrow\; \text{global minimum at } (0,0,0)</math>
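Since this example is named after the gradient method, a minimal gradient-descent sketch for this f follows; the starting point, step size, and iteration count are illustrative assumptions:

<pre>
import numpy as np

def grad(p):                    # gradient of f(x,y,z) = 6x^2 + 2y^2 + 2z^2
    x, y, z = p
    return np.array([12*x, 4*y, 4*z])

p = np.array([1.0, -2.0, 3.0])  # arbitrary starting point
for _ in range(100):
    p = p - 0.05 * grad(p)      # step against the gradient direction
print(p)                        # approaches the global minimum (0, 0, 0)
</pre>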


Sources:

http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf
www.wikipedia.org/nonlinear_programming
www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf