Nonlinear Optimization: Basic Concepts





Theory


A nonlinear program, also called an NLP, is similar to a linear program: it consists of an objective function, general constraints, and variable bounds. The significant difference between an NLP and an LP is that an NLP contains at least one nonlinear function. First we define what an optimal solution is. The function that is to be minimized or maximized, \[\inline f(x)\], is called the objective function. A point \[\inline x^* \in \mathbb{R}^n\] is a local minimum if there exists \[\inline \varepsilon > 0\] such that \[\inline f(x^*) \leq f(x) \;\text{for all}\; x \in A(x^*; \varepsilon)\], where \[\inline A(x^*; \varepsilon)\] denotes the \[\inline \varepsilon\]-neighborhood of \[\inline x^*\]. It is a global minimum if \[\inline f(x^*) \leq f(x) \;\text{for all}\; x \in \mathbb{R}^n\].
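As a minimal numeric illustration of these definitions (my own sketch, not from the article): for a sample function \[\inline f(x) = (x^2 - 1)^2\], which has global minima at \[\inline x = \pm 1\], we can check the local-minimum condition by scanning an \[\inline \varepsilon\]-neighborhood.

```python
# Hypothetical illustration of the local-minimum definition:
# f(x*) <= f(x) for all x in the eps-neighborhood of x*.
def f(x):
    return (x**2 - 1)**2  # sample function with global minima at x = -1 and x = +1

def is_local_min(x_star, eps=1e-3, samples=100):
    """Sample the eps-neighborhood and test f(x*) <= f(x) at each point."""
    fx_star = f(x_star)
    for k in range(samples + 1):
        x = x_star - eps + 2 * eps * k / samples
        if f(x) < fx_star - 1e-12:  # tolerance for floating-point noise
            return False
    return True

print(is_local_min(1.0))   # True: x* = 1 satisfies the condition
print(is_local_min(0.5))   # False: nearby points have smaller f
```

This only verifies the condition on a finite sample of the neighborhood, so it is a heuristic check, not a proof.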

The problem can be stated simply as:

\[\inline \max_{x \in X} f(x)\] to maximize some variable such as product throughput, or

\[\inline \min_{x \in X} f(x)\] to minimize a cost function, where

\[\inline f: \mathbb{R}^n \rightarrow \mathbb{R}, \quad x \in \mathbb{R}^n\]

subject to:

\[\inline h_i(x) = 0, \quad i \in I = \{1, \ldots, p\}\]

\[\inline g_j(x) \leq 0, \quad j \in J = \{1, \ldots, m\}\]
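To make this general form concrete, here is a small sketch (my own example, not from the article) of minimizing \[\inline f(x,y) = x^2 + y^2\] subject to the single equality constraint \[\inline h(x,y) = x + y - 1 = 0\], using a simple quadratic-penalty plus gradient-descent loop. The exact solution is \[\inline (0.5,\, 0.5)\].

```python
# Quadratic-penalty sketch (hypothetical example):
# minimize f(x, y) = x^2 + y^2  subject to  h(x, y) = x + y - 1 = 0.
# The constraint is folded into the objective as (mu/2) * h(x, y)^2.

def penalized_grad(x, y, mu):
    """Gradient of f(x, y) + (mu/2) * h(x, y)^2."""
    h = x + y - 1.0
    return (2*x + mu*h, 2*y + mu*h)

def solve(mu=1e4, lr=1e-5, steps=1000):
    x = y = 0.0
    for _ in range(steps):
        gx, gy = penalized_grad(x, y, mu)
        x -= lr * gx
        y -= lr * gy
    return x, y

x, y = solve()
print(round(x, 3), round(y, 3))  # -> 0.5 0.5 (approximately the exact solution)
```

The penalty weight `mu` and step size `lr` are ad-hoc choices; production NLP solvers use far more sophisticated schemes, some of which are listed below.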

Example:


The following are general NLP subroutines (from the SAS/IML library):

NLPCG - Conjugate Gradient Method
NLPDD - Double Dogleg Method
NLPNMS - Nelder-Mead Simplex Method
NLPNRA - Newton-Raphson Method
NLPNRR - Newton-Raphson Ridge Method
NLPQN - (Dual) Quasi-Newton Method
NLPQUA - Quadratic Optimization Method
NLPTR - Trust-Region Method

The following subroutines are provided for solving nonlinear least-squares problems:

NLPLM - Levenberg-Marquardt Least-Squares Method
NLPHQN - Hybrid Quasi-Newton Least-Squares Methods
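As a rough sketch of the idea behind one of these methods (not the actual SAS/IML NLPNRA routine), a one-dimensional Newton-Raphson iteration finds a stationary point by solving \[\inline f'(x) = 0\]:

```python
# Minimal 1-D Newton-Raphson sketch (illustrative only):
# iterate x <- x - f'(x) / f''(x) to find a stationary point of f.

def newton_stationary(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the first-order condition f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Applied to f(x) = 3x - x^3 from Example 1: f'(x) = 3 - 3x^2, f''(x) = -6x.
x = newton_stationary(lambda t: 3 - 3*t**2, lambda t: -6*t, x0=2.0)
print(round(x, 6))  # -> 1.0, the stationary point x = 1
```

Starting points matter: from \[\inline x_0 = 2\] the iteration converges to \[\inline x = 1\]; a negative start would converge to the other stationary point \[\inline x = -1\].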


Example 1: Simple NLP maximization

\[\inline f(x) = 3x - x^3 \Rightarrow max\]

First derivative: \[\inline f'(x) = 3 - 3x^2 \Rightarrow 3 - 3x^2 = 0 \Rightarrow x_1 = +1, \; x_2 = -1\]

Second derivative: \[\inline f''(x) = -6x \Rightarrow f''(+1) = -6 < 0 \Rightarrow local\; maximum\; at\; x_1 = +1\] (at \[\inline x_2 = -1\] we have \[\inline f''(-1) = +6 > 0\], so \[\inline x_2\] is a local minimum)

Example 2: NLP Gradient Method

First we need to find the gradient: \[\inline f(x,y,z) = 6x^2 + 2y^2 + 2z^2 \;\rightarrow\; \triangledown f(x,y,z) = \left \{ 12x, \; 4y, \; 4z \right \}\]
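The gradient can be verified numerically with central finite differences (my own sketch, not part of the original example):

```python
# Finite-difference check of the gradient of f(x, y, z) = 6x^2 + 2y^2 + 2z^2.
def f(p):
    x, y, z = p
    return 6*x**2 + 2*y**2 + 2*z**2

def grad_analytic(p):
    x, y, z = p
    return [12*x, 4*y, 4*z]

def grad_numeric(p, h=1e-6):
    """Central difference (f(p + h*e_i) - f(p - h*e_i)) / (2h) per coordinate."""
    g = []
    for i in range(3):
        q_plus = list(p);  q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        g.append((f(q_plus) - f(q_minus)) / (2*h))
    return g

p = [1.0, 1.0, 1.0]
print(grad_analytic(p))  # [12.0, 4.0, 4.0]
print([round(g, 4) for g in grad_numeric(p)])  # matches up to float noise
```

For a quadratic function the central difference is exact up to rounding error, so both gradients agree closely.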

The gradient has to equal 0: \[\inline \triangledown f(x) = 0 \Rightarrow \left \{ 12x, \; 4y, \; 4z \right \} = 0 \Rightarrow x = y = z = 0\]

Now we get the Hessian matrix by differentiating once again: \[\inline \triangledown^2 f = \begin{pmatrix} 12 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}\]

To classify the stationary point, check whether the Hessian is positive definite, e.g. via its leading principal minors (the 3x3 determinant can be evaluated with the rule of Sarrus):

\[\inline D_1 = 12 > 0, \; D_2 = 48 > 0, \; D_3 = 192 > 0 \Rightarrow positive\; definite \Rightarrow global\; minimum\; at\; (0,0,0)\]
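The positive-definiteness check for Example 2 can be sketched in code (my own example): compute the leading principal minors of the diagonal Hessian, using the rule of Sarrus for the 3x3 determinant.

```python
# Leading principal minors of the Hessian of f(x, y, z) = 6x^2 + 2y^2 + 2z^2.
H = [[12, 0, 0],
     [0,  4, 0],
     [0,  0, 4]]

def det3_sarrus(m):
    """3x3 determinant via the rule of Sarrus: sum of the three 'down'
    diagonal products minus the three 'up' diagonal products."""
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[0][2]*m[1][1]*m[2][0]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

minors = [H[0][0],                              # D1: top-left 1x1 minor
          H[0][0]*H[1][1] - H[0][1]*H[1][0],    # D2: top-left 2x2 minor
          det3_sarrus(H)]                       # D3: full 3x3 determinant
print(minors)  # [12, 48, 192] -> all positive, so H is positive definite
```

All minors positive means the Hessian is positive definite, confirming that the stationary point \[\inline (0,0,0)\] is a minimum.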

Sources:

http://ciser.cornell.edu/sasdoc/saspdf/iml/chap11.pdf
www.wikipedia.org/nonlinear_programming
www.sce.carleton.ca/faculty/chinneck/po/Chapter%2016.pdf