Nonlinear Optimization: Basic Concepts


== Theory ==


A nonlinear problem, also called an NLP (nonlinear program), is similar to a linear program: it consists of an objective function, general constraints and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function.


First we define what an optimal solution is.


The function which is supposed to be minimized or maximized,

<math>f(x), \quad x \in \mathbb{R}^n,</math>

is called the objective function.


x* is a local minimum if:

<math>x^* \in \mathbb{R}^n, \quad \varepsilon > 0:</math>

<math>f(x^*) \leq f(x) \ \text{for all} \ x \in A(x^*; \varepsilon),</math>

where <math>A(x^*; \varepsilon)</math> denotes the ε-neighborhood of <math>x^*</math>.


And a global minimum if:

<math>f(x^*) \leq f(x) \ \text{for all} \ x \in \mathbb{R}^n</math>



For a maximum, the inequalities hold in the reverse direction.


The problem can be stated simply as:

<math>\max_{x \in X} f(x)</math> to maximize some variable such as product throughput, or

<math>\min_{x \in X} f(x)</math> to minimize a cost function, where

<math>f: \mathbb{R}^n \rightarrow \mathbb{R}, \quad x \in \mathbb{R}^n,</math>


subject to:

<math>h_i(x) = 0, \quad i \in I = \{1, \ldots, p\}</math>

<math>g_j(x) \leq 0, \quad j \in J = \{1, \ldots, m\}</math>
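
To make the general form concrete, here is a minimal sketch (not part of the original article; it assumes the SciPy library and a made-up objective and constraints) of how an NLP with one equality and one inequality constraint could be solved numerically:

<pre>
# Minimal sketch, assuming SciPy: min f(x) subject to h(x) = 0 and g(x) <= 0.
# The objective and constraints below are made-up illustrations, not from the article.
import numpy as np
from scipy.optimize import minimize

def f(x):                          # nonlinear objective function
    return (x[0] - 1)**2 + (x[1] - 2)**2

def h(x):                          # equality constraint: h(x) = 0
    return x[0] + x[1] - 2

def g(x):                          # inequality constraint: g(x) <= 0
    return x[0]**2 - 1.5

constraints = [
    {"type": "eq", "fun": h},
    # SciPy's "ineq" convention is fun(x) >= 0, so g(x) <= 0 is passed as -g(x) >= 0.
    {"type": "ineq", "fun": lambda x: -g(x)},
]

result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(result.x, result.fun)        # optimal point and objective value
</pre>

A maximization problem such as <math>\max_{x \in X} f(x)</math> is handled by minimizing <math>-f(x)</math> instead.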


== Example ==



The following NLP subroutines are general-purpose optimization subroutines (from the SAS/IML library, see the sources below):

NLPCG Conjugate Gradient Method

NLPDD Double Dogleg Method

NLPNMS Nelder-Mead Simplex Method

NLPNRA Newton-Raphson Method

NLPNRR Newton-Raphson Ridge Method

NLPQN (Dual) Quasi-Newton Method

NLPQUA Quadratic Optimization Method

NLPTR Trust-Region Method
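
The article does not show how these subroutines are called; purely as an illustration, a few of them have rough counterparts in SciPy (conjugate gradient, Nelder-Mead simplex, quasi-Newton, trust region), sketched here on a made-up unconstrained test function:

<pre>
# Illustration only (assumes SciPy; the test function is made up): counterparts of
# several of the listed methods applied to the same unconstrained problem.
import numpy as np
from scipy.optimize import minimize

def f(x):                                  # Rosenbrock-type test function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])
for method in ["CG", "Nelder-Mead", "BFGS", "trust-constr"]:
    res = minimize(f, x0, method=method)   # CG ~ NLPCG, Nelder-Mead ~ NLPNMS,
    print(method, res.x)                   # BFGS ~ NLPQN, trust-constr ~ NLPTR
</pre>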


The following subroutines are provided for solving nonlinear least-squares problems:

NLPLM Levenberg-Marquardt Least-Squares Method

NLPHQN Hybrid Quasi-Newton Least-Squares Methods
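
Again only as an illustration (assuming SciPy and made-up data), a nonlinear least-squares fit with a Levenberg-Marquardt-type solver looks like this:

<pre>
# Illustration only (assumes SciPy; data and model are made up): nonlinear
# least-squares with the Levenberg-Marquardt method.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)                 # synthetic data from a known model

def residuals(p):                          # residual vector r(p) = model(p) - data
    a, b = p
    return a * np.exp(b * t) - y

fit = least_squares(residuals, x0=[1.0, -1.0], method="lm")
print(fit.x)                               # recovers approximately (2.0, -1.5)
</pre>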


Example 1:


Simple NLP maximization


<math>f(x) = 3x - x^3 \Rightarrow \max</math>



First derivative:


<math>f'(x) = 3 - 3x^2 \Rightarrow 3 - 3x^2 = 0 \Rightarrow x_1 = +1, \ x_2 = -1</math>



Second derivative:


<math>f''(x) = -6x \Rightarrow f''(x_1) = f''(+1) = -6 < 0 \Rightarrow \text{local maximum at } x_1 = +1</math>

At <math>x_2 = -1</math>, <math>f''(-1) = +6 > 0</math>, so this stationary point is a local minimum.
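
As a small cross-check (not part of the original example, and assuming the SymPy library), the same derivative test can be reproduced symbolically:

<pre>
# Minimal sketch, assuming SymPy: reproduce the derivative test for f(x) = 3x - x^3.
import sympy as sp

x = sp.symbols("x", real=True)
f = 3 * x - x**3

stationary_points = sp.solve(sp.diff(f, x), x)     # solve f'(x) = 3 - 3x^2 = 0
for x_star in stationary_points:
    curvature = sp.diff(f, x, 2).subs(x, x_star)   # f''(x*) = -6 x*
    kind = "local maximum" if curvature < 0 else "local minimum"
    print(x_star, kind)                            # x = -1: minimum, x = +1: maximum
</pre>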




Example 2:


NLP Gradient Method


First we need to find the gradient:


<math>f(x,y,z) = 6x^2 + 2y^2 + 2z^2 \rightarrow \triangledown f(x,y,z) = \left\{ 12x, \ 4y, \ 4z \right\}</math>



The gradient has to equal 0:


<math>\triangledown f(x,y,z) = 0 \rightarrow \left\{ 12x, \ 4y, \ 4z \right\} = 0 \Rightarrow (x,y,z) = (0,0,0)</math>



Now we obtain the Hessian matrix by differentiating once again:

<math>\triangledown^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>



Its determinant can be computed, for example, with the rule of Sarrus:

<math>\det(\triangledown^2 f) = 12 \cdot 4 \cdot 4 = 192 > 0</math>

All leading principal minors (12, 48, 192) are positive, so the Hessian is positive definite and the stationary point is a minimum:

<math>\Rightarrow \ \text{global minimum at} \ (0,0,0)</math>
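
A plain gradient-descent iteration (a sketch that is not part of the original example; it assumes NumPy and a hand-picked step size) makes the same point numerically: the iterates are driven toward the stationary point (0, 0, 0).

<pre>
# Minimal gradient-descent sketch, assuming NumPy: minimize f(x,y,z) = 6x^2 + 2y^2 + 2z^2.
import numpy as np

def grad(v):                         # gradient of f: (12x, 4y, 4z)
    x, y, z = v
    return np.array([12 * x, 4 * y, 4 * z])

v = np.array([1.0, -2.0, 3.0])       # arbitrary starting point
step = 0.05                          # fixed step size, small enough to converge here
for _ in range(200):
    v = v - step * grad(v)           # step against the gradient

print(v)                             # approaches the global minimum (0, 0, 0)
</pre>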







Sources:



http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf

http://www.wikipedia.org/nonlinear_programming

http://www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf

http://www.wolframalpha.com