In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms.[1] It is also known as Rosenbrock's valley or Rosenbrock's banana function.
The global minimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; converging to the global minimum, however, is difficult.
The function is defined by
f(x, y) = (a - x)^2 + b(y - x^2)^2.

It has a global minimum at (x, y) = (a, a^2), where f(x, y) = 0. Usually the parameters are set such that a = 1 and b = 100. Only in the trivial case where a = 0 is the function symmetric and the minimum at the origin.
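A minimal Python sketch of the two-dimensional function, assuming the usual parameter values a = 1 and b = 100 (the function name is ours):

    def rosenbrock(x, y, a=1.0, b=100.0):
        # f(x, y) = (a - x)^2 + b (y - x^2)^2
        return (a - x) ** 2 + b * (y - x ** 2) ** 2

    print(rosenbrock(1.0, 1.0))   # 0.0: the global minimum at (a, a^2) = (1, 1)
    print(rosenbrock(0.5, 0.25))  # 0.25: on the valley floor (y = x^2), yet far from the minimum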
Two variants are commonly encountered.
One is the sum of N/2 uncoupled 2D Rosenbrock problems, and is defined only for even N:

f(\mathbf{x}) = f(x_1, x_2, \ldots, x_N) = \sum_{i=1}^{N/2} \left[ 100 (x_{2i-1}^2 - x_{2i})^2 + (x_{2i-1} - 1)^2 \right].
This variant has predictably simple solutions.
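A direct translation into Python, sketched with NumPy (the function name is ours):

    import numpy as np

    def rosenbrock_uncoupled(x):
        # Sum of N/2 independent 2D Rosenbrock problems; defined only for even N.
        x = np.asarray(x, dtype=float)
        assert x.size % 2 == 0, "N must be even"
        x_odd, x_even = x[0::2], x[1::2]  # x_{2i-1} and x_{2i} in the 1-based notation above
        return np.sum(100.0 * (x_odd ** 2 - x_even) ** 2 + (x_odd - 1) ** 2)

    print(rosenbrock_uncoupled([1.0, 1.0, 1.0, 1.0]))  # 0.0 at the global minimum

Because the N/2 pairs do not interact, the minimum of the sum is just the two-dimensional minimum repeated in every pair, which is why the solutions are predictably simple.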
A second, more involved variant is
f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right],

where x = (x_1, \ldots, x_N) \in \mathbb{R}^N.
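A sketch of this variant in Python (the function name is ours; SciPy ships the same coupled variant as scipy.optimize.rosen):

    import numpy as np

    def rosenbrock_coupled(x):
        # sum_{i=1}^{N-1} [ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
        x = np.asarray(x, dtype=float)
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

    print(rosenbrock_coupled(np.ones(7)))  # 0.0 at the global minimum (1, ..., 1)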
This variant has exactly one minimum for N = 3 (at (1, 1, 1)) and exactly two minima for 4 ≤ N ≤ 7: the global minimum at (1, 1, ..., 1) and a local minimum near \hat{x} = (-1, 1, ..., 1).[2][3] This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of x. For small N the polynomials can be determined exactly, and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region |x_i| < 2.4.[3] For larger N this method breaks down due to the size of the coefficients involved.
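The exact argument above is symbolic, but the same idea can be probed numerically: set up the gradient of the coupled variant, hand it to a root finder, and restart from random points inside the bounding region |x_i| < 2.4. A hedged sketch using scipy.optimize.fsolve (the helper names are ours, and the crude rounding used to deduplicate converged roots is only for illustration):

    import numpy as np
    from scipy.optimize import fsolve

    def rosen_grad(x):
        # Gradient of the coupled variant; stationary points satisfy rosen_grad(x) = 0.
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
        g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
        return g

    rng = np.random.default_rng(0)
    roots = set()
    for _ in range(200):  # restart the root finder from random points in |x_i| < 2.4
        x0 = rng.uniform(-2.4, 2.4, size=7)
        sol, info, ok, msg = fsolve(rosen_grad, x0, full_output=True)
        if ok == 1 and np.max(np.abs(rosen_grad(sol))) < 1e-8:
            roots.add(tuple(np.round(sol, 5)))  # crude deduplication of converged roots
    print(roots)  # for N = 7, the minima at (1,...,1) and near (-1, 1, ..., 1) should appear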
Many of the stationary points of the function exhibit a regular pattern when plotted.[4] This structure can be exploited to locate them.
The Rosenbrock function can be efficiently optimized by adapting an appropriate coordinate system, without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). For example, optimizing the 2-dimensional Rosenbrock function by adaptive coordinate descent from the starting point x_0 = (-3, -4) finds a solution with function value 10^{-10} after 325 function evaluations.
Using the Nelder–Mead method from the starting point x_0 = (-1, 1) with a regular initial simplex, a minimum with function value 1.36 ⋅ 10^{-10} is found after 185 function evaluations.
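A run like this is easy to approximate with SciPy, which provides the coupled Rosenbrock function as scipy.optimize.rosen and a Nelder–Mead implementation in scipy.optimize.minimize. This is a sketch; SciPy constructs its own initial simplex, so the evaluation count will not exactly match the figure quoted above:

    import numpy as np
    from scipy.optimize import minimize, rosen

    res = minimize(rosen, x0=np.array([-1.0, 1.0]), method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-12})
    print(res.x)     # close to the global minimum (1, 1)
    print(res.fun)   # function value near 0
    print(res.nfev)  # number of function evaluations used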