Nonlinear Optimization: Basic Concepts
Theory
A nonlinear program, also called an NLP, is similar to a linear program: it is built from an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function.
First we define what an optimal solution is. The function that is to be minimized or maximized, <math>f(x)</math>, is called the objective function.
x* is a local minimum, if:

<math>x^* \in \mathbb{R}^n,\ \exists\, \varepsilon > 0:\quad f(x^*) \leq f(x)\ \text{for all}\ x \in A(x^*; \varepsilon),</math>

where <math>A(x^*; \varepsilon)</math> denotes the <math>\varepsilon</math>-neighborhood of <math>x^*</math>.
And x* is the global minimum, if:

<math>f(x^*) \leq f(x)\ \text{for all}\ x \in \mathbb{R}^n</math>

Every global minimum is also a local one, but not conversely; Example 1 below has a local maximum at <math>x = 1</math> even though <math>f(x) = 3x - x^3</math> is unbounded above.
The problem can be stated simply as:

<math>\max_{x \in X} f(x)</math> to maximize some variable such as product throughput, or

<math>\min_{x \in X} f(x)</math> to minimize a cost function, where

<math>f: \mathbb{R}^n \rightarrow \mathbb{R},\quad x \in \mathbb{R}^n</math>

subject to:

<math>h_i(x) = 0,\quad i \in I = \{1, \ldots, p\}</math>

<math>g_j(x) \leq 0,\quad j \in J = \{1, \ldots, m\}</math>
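To make this general form concrete, here is a minimal sketch in Python using scipy.optimize.minimize. The objective, the constraints, and the starting point are illustrative assumptions, not part of the original text.

<syntaxhighlight lang="python">
# Minimal NLP sketch: min f(x) subject to h(x) = 0 and g(x) <= 0,
# solved with SciPy's SLSQP method. All concrete functions are
# illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Nonlinear objective: (x - 1)^2 + (y - 2)^2
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

# Equality constraint h(x) = 0:  x + y - 2 = 0
h = {"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0}

# Inequality constraint g(x) <= 0:  x - 1.5 <= 0.
# SciPy's "ineq" convention is fun(x) >= 0, so we pass 1.5 - x.
g = {"type": "ineq", "fun": lambda x: 1.5 - x[0]}

x0 = np.array([0.0, 0.0])   # starting point
res = minimize(f, x0, method="SLSQP", constraints=[h, g])
print(res.x, res.fun)       # optimal point and objective value
</syntaxhighlight>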
Examples
The following general NLP subroutines are available (see the sketch after this list for rough open-source counterparts):
NLPCG Conjugate Gradient Method
NLPDD Double Dogleg Method
NLPNMS Nelder-Mead Simplex Method
NLPNRA Newton-Raphson Method
NLPNRR Newton-Raphson Ridge Method
NLPQN (Dual) Quasi-Newton Method
NLPQUA Quadratic Optimization Method
NLPTR Trust-Region Method
The following subroutines are provided for solving nonlinear least-squares problems:
NLPLM Levenberg-Marquardt Least-Squares Method
NLPHQN Hybrid Quasi-Newton Least-Squares Methods
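The subroutine names above are specific to one library (the SAS/IML documentation cited in the sources). As a hedged illustration, and purely as my own mapping, several of the same algorithms are available in SciPy:

<syntaxhighlight lang="python">
# Rough SciPy counterparts of some methods listed above; the mapping
# is an assumption for illustration, and algorithmic details differ.
import numpy as np
from scipy.optimize import minimize, least_squares

def f(x):
    return 3 * x[0]**2 + x[1]**2 + x[2]**2   # smooth test objective

x0 = np.array([1.0, 1.0, 1.0])

print(minimize(f, x0, method="CG").x)            # cf. NLPCG
print(minimize(f, x0, method="Nelder-Mead").x)   # cf. NLPNMS
print(minimize(f, x0, method="BFGS").x)          # cf. NLPQN
print(minimize(f, x0, method="trust-constr").x)  # cf. NLPTR

# Levenberg-Marquardt for nonlinear least squares (cf. NLPLM)
def residuals(x):
    return np.array([x[0] - 1.0, x[1] - 2.0, x[2] + 0.5])

print(least_squares(residuals, x0, method="lm").x)
</syntaxhighlight>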
Example 1: Simple NLP maximization
<math>f(x)= 3x-x^3 \Rightarrow \max</math>
First derivative:
<math>f'(x)= 3-3x^2 \Rightarrow 3-3x^2=0 \Rightarrow x_1 = +1,\ x_2 = -1</math>
Second derivative:
<math>f''(x)= -6x \Rightarrow f''(x_1) = f''(1) = -6 < 0 \Rightarrow</math> local maximum at <math>x_1 = +1</math> (and <math>f''(x_2) = f''(-1) = 6 > 0</math>, so <math>x_2 = -1</math> is a local minimum).
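This hand calculation can be verified symbolically; a short sketch with SymPy (the variable names are mine):

<syntaxhighlight lang="python">
# Symbolic verification of Example 1 with SymPy.
import sympy as sp

x = sp.symbols("x", real=True)
f = 3 * x - x**3

critical = sp.solve(sp.diff(f, x), x)   # f'(x) = 3 - 3x^2 = 0
print(critical)                         # [-1, 1]

f2 = sp.diff(f, x, 2)                   # f''(x) = -6x
for c in critical:
    # f''(c) < 0 -> local maximum, f''(c) > 0 -> local minimum
    print(c, f2.subs(x, c))
</syntaxhighlight>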
Example 2: NLP Gradient Method
First we need to find the gradient:
<math>f(x,y,z)= 6x^2 + 2y^2 + 2z^2 \rightarrow \nabla f(x,y,z) = \left( 12x,\ 4y,\ 4z \right)</math>
The gradient has to equal 0:

<math>\nabla f(x,y,z) = 0 \Rightarrow \left( 12x,\ 4y,\ 4z \right) = (0,\ 0,\ 0) \Rightarrow (x,y,z) = (0,0,0)</math>
Differentiating once more gives the Hessian matrix:

<math>\nabla^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>
To classify the stationary point, check that the Hessian is positive definite; since it is diagonal, its eigenvalues are just the diagonal entries (the 3×3 determinant could also be evaluated with the rule of Sarrus):

<math>a_1 = 12,\ a_{2,3} = 4 \Rightarrow a_{1,2,3} > 0 \Rightarrow</math> global minimum at <math>(0,0,0)</math>
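The same conclusion can be checked numerically; a minimal sketch, assuming NumPy (the eigenvalue test replaces the hand check of the minors):

<syntaxhighlight lang="python">
# Numeric check of Example 2: stationary point and definiteness.
import numpy as np

def grad(p):
    # Gradient of f(x, y, z) = 6x^2 + 2y^2 + 2z^2
    x, y, z = p
    return np.array([12 * x, 4 * y, 4 * z])

H = np.diag([12.0, 4.0, 4.0])   # Hessian of f (constant: f is quadratic)

p_star = np.zeros(3)            # stationary point (0, 0, 0)
print(grad(p_star))             # -> [0. 0. 0.]

eigs = np.linalg.eigvalsh(H)    # eigenvalues of the Hessian
print(eigs, np.all(eigs > 0))   # all positive -> global minimum
</syntaxhighlight>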
Sources:

http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf

www.wikipedia.org/nonlinear_programming

www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf