
Nonlinear Optimization: Basic Concepts



Theory


A nonlinear program, also called an NLP, is similar to a linear program: it is made up of an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function.


First we define what an optimal solution is. The function that is to be minimized or maximized is called the objective function. A point x* is a local minimum if, for some ε-neighborhood A(x*; ε) of x*:

<math>x^*\in\mathbb{R}^n,\ \varepsilon > 0:\quad f(x^*)\leq f(x)\ \text{for all}\ x\in A(x^*;\varepsilon)</math>

And x* is a global minimum if:

<math>f(x^*)\leq f(x)\ \text{for all}\ x\in\mathbb{R}^n</math>

For a maximum, the inequalities hold in the opposite direction. For instance, f(x) = 3x − x³ from Example 1 below has a local maximum at x = 1 that is not a global maximum, because f is unbounded as x → −∞.

The problem can be stated simply as:

<math>\min\ f(x),\qquad f:\mathbb{R}^n \rightarrow \mathbb{R},\quad x\in\mathbb{R}^n</math>

subject to:

<math>g_j(x)\,\leq\,0,\quad j\in J = \{1,\dots,m\}</math>
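
For illustration, a small problem of this form can be solved numerically, for example with the SLSQP solver from scipy.optimize. The objective f and the constraint g_1 below are invented examples, chosen only to show the setup:

<syntaxhighlight lang="python">
# Illustration: a small NLP in the form "minimize f(x) subject to g_j(x) <= 0",
# solved with scipy.optimize. Objective and constraint are invented examples.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # nonlinear objective f: R^2 -> R
    return (x[0] - 1)**2 + x[0] * x[1] + x[1]**2

def g1(x):
    # constraint in the form g_1(x) <= 0 (feasible points lie inside a circle of radius 2)
    return x[0]**2 + x[1]**2 - 4

# scipy expects inequality constraints as c(x) >= 0, so pass -g_1(x)
constraints = [{"type": "ineq", "fun": lambda x: -g1(x)}]

result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x, result.fun)
</syntaxhighlight>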

Example



The following general NLP subroutines are available (see the SAS/IML documentation in the sources); comparable open-source routines are sketched after the lists below:

NLPCG Conjugate Gradient Method

NLPDD Double Dogleg Method

NLPNMS Nelder-Mead Simplex Method

NLPNRA Newton-Raphson Method

NLPNRR Newton-Raphson Ridge Method

NLPQN (Dual) Quasi-Newton Method

NLPQUA Quadratic Optimization Method

NLPTR Trust-Region Method


The following subroutines are provided for solving nonlinear least-squares problems:

NLPLM Levenberg-Marquardt Least-Squares Method

NLPHQN Hybrid Quasi-Newton Least-Squares Methods
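
Several of these methods have counterparts in open-source software. A minimal sketch, assuming scipy.optimize and its standard method names (the toy objective below is invented for illustration):

<syntaxhighlight lang="python">
# Illustration only: rough scipy.optimize counterparts of some methods listed above,
# applied to an invented toy objective.
import numpy as np
from scipy.optimize import minimize, least_squares

def f(x):
    return (x[0] - 1)**2 + (x[1] + 2)**2        # smooth test function with minimum at (1, -2)

x0 = np.zeros(2)
print(minimize(f, x0, method="CG").x)           # conjugate gradient   (cf. NLPCG)
print(minimize(f, x0, method="Nelder-Mead").x)  # simplex method       (cf. NLPNMS)
print(minimize(f, x0, method="BFGS").x)         # quasi-Newton         (cf. NLPQN)
print(minimize(f, x0, method="trust-constr").x) # trust-region         (cf. NLPTR)

# Nonlinear least squares via the Levenberg-Marquardt algorithm (cf. NLPLM)
residuals = lambda x: np.array([x[0] - 1.0, x[1] + 2.0])
print(least_squares(residuals, x0, method="lm").x)
</syntaxhighlight>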


Example 1:


Simple NLP maximization


<math>f(x)= 3x-x^3 \Rightarrow \max</math>



First derivative:


<math>f'(x)= 3-3x^2 \Rightarrow 3-3x^2=0 \Rightarrow x_1 = +1, \, x_2= -1</math>



Second derivative:


<math>f''(x)= -6x \Rightarrow f''(x_1)=f''(+1)=-6< 0 \Rightarrow x_1=+1 \text{ is a local maximum}</math>

(At x_2 = −1 we have f''(−1) = +6 > 0, so x_2 is a local minimum.)


[[Datei:Screen Shot 2013-07-01 at 2.02.47 PM.png]]
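
The calculation above can be checked symbolically, for example with SymPy (a minimal sketch; any computer-algebra system would do):

<syntaxhighlight lang="python">
# Illustration: verify the stationary points of f(x) = 3x - x^3 symbolically.
import sympy as sp

x = sp.symbols("x")
f = 3*x - x**3

stationary = sp.solve(sp.diff(f, x), x)          # roots of f'(x) = 3 - 3x^2  ->  [-1, 1]
for p in stationary:
    curvature = sp.diff(f, x, 2).subs(x, p)      # f''(x) = -6x evaluated at the point
    kind = "local maximum" if curvature < 0 else "local minimum"
    print(p, f.subs(x, p), kind)
</syntaxhighlight>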


Example 2:


NLP Gradient Method


First we need to find the gradient:


<math>f(x,y,z)= 6x^2 +2y^2+2z^2 \rightarrow \triangledown f(x,y,z)=\left( 12x,\, 4y,\, 4z \right)</math>



The gradient has to equal 0:


<math>\triangledown f(x,y,z)=0 \Rightarrow \left( 12x,\, 4y,\, 4z \right)= (0,\,0,\,0) \Rightarrow (x,y,z)=(0,0,0)</math>



Now we obtain the Hessian matrix by differentiating once again:


<math>\triangledown^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>



Check the definiteness of the Hessian, for example by computing its determinant with the Rule of Sarrus: the determinant is 12 · 4 · 4 = 192 > 0, and all leading principal minors (12, 48, 192) are positive, so the Hessian is positive definite.



<math>\Rightarrow\ \text{global minimum at}\ (0,0,0)</math>
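
For completeness, this example can also be checked symbolically, e.g. with SymPy (again only an illustrative sketch):

<syntaxhighlight lang="python">
# Illustration: gradient, Hessian and stationary point of f(x,y,z) = 6x^2 + 2y^2 + 2z^2.
import sympy as sp

x, y, z = sp.symbols("x y z")
f = 6*x**2 + 2*y**2 + 2*z**2

grad = [sp.diff(f, v) for v in (x, y, z)]   # [12x, 4y, 4z]
stationary = sp.solve(grad, (x, y, z))      # {x: 0, y: 0, z: 0}
H = sp.hessian(f, (x, y, z))                # diag(12, 4, 4)

print(grad, stationary)
print(H.det())                              # 192 > 0
print(H.is_positive_definite)               # True -> global minimum at (0, 0, 0)
</syntaxhighlight>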







Sources:



http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf

http://www.wikipedia.org/nonlinear_programming

http://www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf

http://www.wolframalpha.com