Nonlinear Optimization: Basic Concepts
Theory
A nonlinear program, also called an NLP, is similar to a linear program: it consists of an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function.
First we define what an optimal solution is.
The function that is to be minimized or maximized,

<math>f(x),\ x \in \mathbb{R}^n,</math>

is called the objective function.
x* is a local minimum if:

<math>x^* \in \mathbb{R}^n,\ \varepsilon > 0</math>

<math>f(x^*) \leq f(x) \text{ for all } x \in A(x^*; \varepsilon),</math>

where A(x*; ε) denotes the ε-neighborhood of x*.
And x* is the global minimum if:

<math>f(x^*) \leq f(x) \text{ for all } x \in \mathbb{R}^n</math>
For a maximum, the same definitions apply with the inequalities reversed.
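As a simple illustration: for <math>f(x) = x^2</math>, the point <math>x^* = 0</math> is a global, and hence also local, minimum, since <math>f(0) = 0 \leq x^2 \text{ for all } x \in \mathbb{R}</math>.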
The problem can be stated simply as:

<math>\max_{x \in X} f(x)</math> to maximize some variable such as product throughput, or

<math>\min_{x \in X} f(x)</math> to minimize a cost function, where

<math>f: \mathbb{R}^n \rightarrow \mathbb{R},\ x \in \mathbb{R}^n</math>
subject to:

<math>h_i(x) = 0,\ i \in I = \{1, \ldots, p\}</math>

<math>g_j(x) \leq 0,\ j \in J = \{1, \ldots, m\}</math>
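To make this formulation concrete, here is a minimal sketch in Python using SciPy's minimize; SciPy and the particular functions f, h, and g are illustrative assumptions (the subroutines listed in the next section belong to the SAS/IML library, not SciPy):

<syntaxhighlight lang="python">
# Minimal sketch of the general NLP form: min f(x) s.t. h(x) = 0, g(x) <= 0.
# SciPy and the example functions are assumptions for illustration only.
from scipy.optimize import minimize

def f(x):
    # nonlinear objective to minimize
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def h(x):
    # equality constraint h(x) = 0
    return x[0] + x[1] - 1.0

def g(x):
    # inequality constraint g(x) <= 0
    return x[0] ** 2 - 0.5

constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},  # SciPy expects fun(x) >= 0
]

result = minimize(f, x0=[0.0, 0.0], constraints=constraints)
print(result.x, result.fun)
</syntaxhighlight>

With no method specified and constraints present, SciPy falls back to its SLSQP solver.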
Example
The following general NLP subroutines are available (these are SAS/IML routines; see Sources); analogous calls in another library are sketched after the list:
NLPCG Conjugate Gradient Method
NLPDD Double Dogleg Method
NLPNMS Nelder-Mead Simplex Method
NLPNRA Newton-Raphson Method
NLPNRR Newton-Raphson Ridge Method
NLPQN (Dual) Quasi-Newton Method
NLPQUA Quadratic Optimization Method
NLPTR Trust-Region Method
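As a hedged sketch of those analogues (SciPy stands in here purely for illustration; it is not the library the subroutine names above come from), the conjugate gradient, Nelder-Mead simplex, and trust-region methods map directly onto SciPy's method argument:

<syntaxhighlight lang="python">
# Sketch: SciPy counterparts of some subroutines above
# (CG ~ NLPCG, Nelder-Mead ~ NLPNMS, trust-constr ~ NLPTR).
from scipy.optimize import minimize, rosen, rosen_der

x0 = [1.3, 0.7, 0.8]  # arbitrary start for the Rosenbrock test function
for method in ("CG", "Nelder-Mead", "trust-constr"):
    jac = rosen_der if method != "Nelder-Mead" else None  # simplex is derivative-free
    res = minimize(rosen, x0, method=method, jac=jac)
    print(f"{method:12s} x* = {res.x}")
</syntaxhighlight>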
The following subroutines are provided for solving nonlinear least-squares problems (see the sketch after the list):
NLPLM Levenberg-Marquardt Least-Squares Method
NLPHQN Hybrid Quasi-Newton Least-Squares Methods
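Levenberg-Marquardt in particular is widely implemented elsewhere; here is a minimal sketch with SciPy's least_squares standing in for NLPLM (SciPy, the exponential model, and the synthetic data are all assumptions):

<syntaxhighlight lang="python">
# Sketch: Levenberg-Marquardt nonlinear least squares (stand-in for NLPLM).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 30)
y = 2.5 * np.exp(-1.3 * t)          # synthetic, noise-free data

def residuals(p):
    a, b = p
    return a * np.exp(b * t) - y    # residual vector, one entry per data point

fit = least_squares(residuals, x0=[1.0, -1.0], method="lm")
print(fit.x)                        # approximately [2.5, -1.3]
</syntaxhighlight>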
Example 1:
Simple NLP maximization
<math>f(x) = 3x - x^3 \Rightarrow \max</math>

First derivative:

<math>f'(x) = 3 - 3x^2 \Rightarrow 3 - 3x^2 = 0 \Rightarrow x_1 = +1,\ x_2 = -1</math>

Second derivative:

<math>f''(x) = -6x \Rightarrow f''(1) = -6 < 0,</math> so <math>x_1 = +1</math> is a local maximum (and <math>f''(-1) = +6 > 0</math>, so <math>x_2 = -1</math> is a local minimum).
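This analysis can be double-checked symbolically; a minimal sketch, assuming SymPy is available:

<syntaxhighlight lang="python">
# Sketch: verify Example 1 with SymPy.
import sympy as sp

x = sp.symbols("x")
f = 3 * x - x**3

for c in sp.solve(sp.diff(f, x), x):         # roots of f'(x) = 3 - 3x^2
    curvature = sp.diff(f, x, 2).subs(x, c)  # f''(x) = -6x at each root
    kind = "local maximum" if curvature < 0 else "local minimum"
    print(f"x = {c}: f'' = {curvature} -> {kind}")
</syntaxhighlight>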
Example 2:
NLP Gradient Method
First we need to find the gradient:

<math>f(x,y,z) = 6x^2 + 2y^2 + 2z^2 \rightarrow \nabla f(x,y,z) = (12x,\ 4y,\ 4z)</math>
The gradient has to equal 0:

<math>\nabla f(x,y,z) = 0 \rightarrow (12x,\ 4y,\ 4z) = (0,\ 0,\ 0) \rightarrow (x,y,z) = (0,\ 0,\ 0)</math>
Differentiating once again gives the Hessian matrix:

<math>\nabla^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}</math>
Check definiteness via the eigenvalues; for a diagonal matrix these are just the diagonal entries (its determinant could also be evaluated with the Rule of Sarrus):

<math>a_1 = 4,\ a_2 = 4,\ a_3 = 12;\quad a_1, a_2, a_3 > 0</math>

<math>\Rightarrow</math> the Hessian is positive definite, so f is convex and has its global minimum at (0, 0, 0).
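A quick numerical cross-check of the definiteness argument, assuming NumPy:

<syntaxhighlight lang="python">
# Sketch: eigenvalues of the Hessian from Example 2.
import numpy as np

hessian = np.diag([12.0, 4.0, 4.0])        # Hessian of 6x^2 + 2y^2 + 2z^2
eigenvalues = np.linalg.eigvalsh(hessian)  # ascending: [4, 4, 12]
print("eigenvalues:", eigenvalues)
print("positive definite:", bool(np.all(eigenvalues > 0)))
</syntaxhighlight>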
Sources:
http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf
http://www.wikipedia.org/nonlinear_programming
http://www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf
http://www.wolframalpha.com