diff --git a/gsl-1.9/doc/multimin.texi b/gsl-1.9/doc/multimin.texi
new file mode 100644
index 0000000..4eb1ae8
--- /dev/null
+++ b/gsl-1.9/doc/multimin.texi
@@ -0,0 +1,752 @@
+@cindex minimization, multidimensional
+
+This chapter describes routines for finding minima of arbitrary
+multidimensional functions. The library provides low level components
+for a variety of iterative minimizers and convergence tests. These can
+be combined by the user to achieve the desired solution, while providing
+full access to the intermediate steps of the algorithms. Each class of
+methods uses the same framework, so that you can switch between
+minimizers at runtime without needing to recompile your program. Each
+instance of a minimizer keeps track of its own state, allowing the
+minimizers to be used in multi-threaded programs. The minimization
+algorithms can be used to maximize a function by inverting its sign.
+
+The header file @file{gsl_multimin.h} contains prototypes for the
+minimization functions and related declarations.
+
+@menu
+* Multimin Overview::
+* Multimin Caveats::
+* Initializing the Multidimensional Minimizer::
+* Providing a function to minimize::
+* Multimin Iteration::
+* Multimin Stopping Criteria::
+* Multimin Algorithms::
+* Multimin Examples::
+* Multimin References and Further Reading::
+@end menu
+
+@node Multimin Overview
+@section Overview
+
+The problem of multidimensional minimization requires finding a point
+@math{x} such that the scalar function,
+@tex
+\beforedisplay
+$$
+f(x_1, \dots, x_n)
+$$
+\afterdisplay
+@end tex
+@ifinfo
+
+@example
+f(x_1, @dots{}, x_n)
+@end example
+
+@end ifinfo
+@noindent
+takes a value which is lower than at any neighboring point. For smooth
+functions the gradient @math{g = \nabla f} vanishes at the minimum. In
+general there are no bracketing methods available for the
+minimization of @math{n}-dimensional functions. The algorithms
+proceed from an initial guess using a search algorithm which attempts
+to move in a downhill direction.
+
+Algorithms making use of the gradient of the function perform a
+one-dimensional line minimization along this direction until the lowest
+point is found to a suitable tolerance.  The search direction is then
+updated with local information from the function and its derivatives,
+and the whole process is repeated until the true @math{n}-dimensional
+minimum is found.
+
+The Nelder-Mead Simplex algorithm applies a different strategy. It
+maintains @math{n+1} trial parameter vectors as the vertices of an
+@math{n}-dimensional simplex.  In each iteration step it tries to
+improve the worst vertex by a simple geometrical transformation until
+the size of the simplex falls below a given tolerance.
+
+Both types of algorithms use a standard framework. The user provides a
+high-level driver for the algorithms, and the library provides the
+individual functions necessary for each of the steps. There are three
+main phases of the iteration. The steps are,
+
+@itemize @bullet
+@item
+initialize minimizer state, @var{s}, for algorithm @var{T}
+
+@item
+update @var{s} using the iteration @var{T}
+
+@item
+test @var{s} for convergence, and repeat iteration if necessary
+@end itemize
+
+@noindent
+Each iteration step consists either of an improvement to the
+line-minimization in the current direction or an update to the search
+direction itself. The state for the minimizers is held in a
+@code{gsl_multimin_fdfminimizer} struct or a
+@code{gsl_multimin_fminimizer} struct.
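+
+For example, a driver based on the @code{fdf} interface has the
+following overall shape (a minimal sketch only: the variables
+@code{n}, @code{x}, @code{my_func}, @code{iter}, @code{status} and
+@code{maxiter} are assumed to be declared elsewhere, and complete
+programs are given in the Examples section of this chapter),
+
+@example
+const gsl_multimin_fdfminimizer_type *T
+  = gsl_multimin_fdfminimizer_conjugate_fr;
+gsl_multimin_fdfminimizer *s
+  = gsl_multimin_fdfminimizer_alloc (T, n);
+
+gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 0.1);
+
+do
+  @{
+    iter++;
+    status = gsl_multimin_fdfminimizer_iterate (s);
+
+    if (status)   /* the iteration encountered a problem */
+      break;
+
+    status = gsl_multimin_test_gradient (s->gradient, 1e-3);
+  @}
+while (status == GSL_CONTINUE && iter < maxiter);
+
+gsl_multimin_fdfminimizer_free (s);
+@end example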
+
+@node Multimin Caveats
+@section Caveats
+@cindex Multimin, caveats
+
+Note that the minimization algorithms can only search for one local
+minimum at a time. When there are several local minima in the search
+area, the first minimum to be found will be returned; however, it is
+difficult to predict which of the minima this will be. In most cases,
+no error will be reported if you try to find a local minimum in an area
+where there is more than one.
+
+It is also important to note that the minimization algorithms find local
+minima; there is no way to determine whether a minimum is a global
+minimum of the function in question.
+
+@node Initializing the Multidimensional Minimizer
+@section Initializing the Multidimensional Minimizer
+
+The following function initializes a multidimensional minimizer. The
+minimizer itself depends only on the dimension of the problem and the
+algorithm and can be reused for different problems.
+
+@deftypefun {gsl_multimin_fdfminimizer *} gsl_multimin_fdfminimizer_alloc (const gsl_multimin_fdfminimizer_type * @var{T}, size_t @var{n})
+@deftypefunx {gsl_multimin_fminimizer *} gsl_multimin_fminimizer_alloc (const gsl_multimin_fminimizer_type * @var{T}, size_t @var{n})
+This function returns a pointer to a newly allocated instance of a
+minimizer of type @var{T} for an @var{n}-dimensional function.  If there
+is insufficient memory to create the minimizer then the function returns
+a null pointer and the error handler is invoked with an error code of
+@code{GSL_ENOMEM}.
+@end deftypefun
+
+@deftypefun int gsl_multimin_fdfminimizer_set (gsl_multimin_fdfminimizer * @var{s}, gsl_multimin_function_fdf * @var{fdf}, const gsl_vector * @var{x}, double @var{step_size}, double @var{tol})
+This function initializes the minimizer @var{s} to minimize the function
+@var{fdf} starting from the initial point @var{x}. The size of the
+first trial step is given by @var{step_size}. The accuracy of the line
+minimization is specified by @var{tol}. The precise meaning of this
+parameter depends on the method used. Typically the line minimization
+is considered successful if the gradient of the function @math{g} is
+orthogonal to the current search direction @math{p} to a relative
+accuracy of @var{tol}, where @c{$p\cdot g < tol |p| |g|$}
+@math{dot(p,g) < tol |p| |g|}. A @var{tol} value of 0.1 is
+suitable for most purposes, since line minimization only needs to
+be carried out approximately. Note that setting @var{tol} to zero will
+force the use of ``exact'' line-searches, which are extremely expensive.
+
+@deftypefunx int gsl_multimin_fminimizer_set (gsl_multimin_fminimizer * @var{s}, gsl_multimin_function * @var{f}, const gsl_vector * @var{x}, const gsl_vector * @var{step_size})
+This function initializes the minimizer @var{s} to minimize the function
+@var{f}, starting from the initial point
+@var{x}.  The size of the initial trial steps is given in the vector
+@var{step_size}. The precise meaning of this parameter depends on the
+method used.
+@end deftypefun
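+
+For example, the simplex interface could be initialized as follows (a
+sketch only; the minimizer @code{s}, the @code{gsl_multimin_function}
+struct @code{func}, the starting point @code{x} and the dimension
+@code{n} are assumed to have been set up already),
+
+@example
+/* initial trial steps of 1.0 in each coordinate direction */
+gsl_vector *ss = gsl_vector_alloc (n);
+gsl_vector_set_all (ss, 1.0);
+
+gsl_multimin_fminimizer_set (s, &func, x, ss);
+@end example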
+
+@deftypefun void gsl_multimin_fdfminimizer_free (gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx void gsl_multimin_fminimizer_free (gsl_multimin_fminimizer * @var{s})
+This function frees all the memory associated with the minimizer
+@var{s}.
+@end deftypefun
+
+@deftypefun {const char *} gsl_multimin_fdfminimizer_name (const gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx {const char *} gsl_multimin_fminimizer_name (const gsl_multimin_fminimizer * @var{s})
+This function returns a pointer to the name of the minimizer. For example,
+
+@example
+printf ("s is a '%s' minimizer\n",
+ gsl_multimin_fdfminimizer_name (s));
+@end example
+
+@noindent
+would print something like @code{s is a 'conjugate_pr' minimizer}.
+@end deftypefun
+
+@node Providing a function to minimize
+@section Providing a function to minimize
+
+You must provide a parametric function of @math{n} variables for the
+minimizers to operate on. You may also need to provide a routine which
+calculates the gradient of the function and a third routine which
+calculates both the function value and the gradient together. In order
+to allow for general parameters the functions are defined by the
+following data types:
+
+@deftp {Data Type} gsl_multimin_function_fdf
+This data type defines a general function of @math{n} variables with
+parameters and the corresponding gradient vector of derivatives,
+
+@table @code
+@item double (* f) (const gsl_vector * @var{x}, void * @var{params})
+this function should return the result
+@c{$f(x,\hbox{\it params})$}
+@math{f(x,params)} for argument @var{x} and parameters @var{params}.
+
+@item void (* df) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{g})
+this function should store the @var{n}-dimensional gradient
+@c{$g_i = \partial f(x,\hbox{\it params}) / \partial x_i$}
+@math{g_i = d f(x,params) / d x_i} in the vector @var{g} for argument @var{x}
+and parameters @var{params}.
+
+@item void (* fdf) (const gsl_vector * @var{x}, void * @var{params}, double * f, gsl_vector * @var{g})
+This function should set the values of @var{f} and @var{g} as above,
+for arguments @var{x} and parameters @var{params}. This function
+provides an optimization of the separate functions for @math{f(x)} and
+@math{g(x)}---it is always faster to compute the function and its
+derivative at the same time.
+
+@item size_t n
+the dimension of the system, i.e. the number of components of the
+vector @var{x}.
+
+@item void * params
+a pointer to the parameters of the function.
+@end table
+@end deftp
+@deftp {Data Type} gsl_multimin_function
+This data type defines a general function of @math{n} variables with
+parameters,
+
+@table @code
+@item double (* f) (const gsl_vector * @var{x}, void * @var{params})
+this function should return the result
+@c{$f(x,\hbox{\it params})$}
+@math{f(x,params)} for argument @var{x} and parameters @var{params}.
+
+@item size_t n
+the dimension of the system, i.e. the number of components of the
+vector @var{x}.
+
+@item void * params
+a pointer to the parameters of the function.
+@end table
+@end deftp
+
+@noindent
+The following example function defines a simple paraboloid with two
+parameters,
+
+@example
+/* Paraboloid centered on (dp[0],dp[1]) */
+
+double
+my_f (const gsl_vector *v, void *params)
+@{
+ double x, y;
+ double *dp = (double *)params;
+
+ x = gsl_vector_get(v, 0);
+ y = gsl_vector_get(v, 1);
+
+ return 10.0 * (x - dp[0]) * (x - dp[0]) +
+ 20.0 * (y - dp[1]) * (y - dp[1]) + 30.0;
+@}
+
+/* The gradient of f, df = (df/dx, df/dy). */
+void
+my_df (const gsl_vector *v, void *params,
+ gsl_vector *df)
+@{
+ double x, y;
+ double *dp = (double *)params;
+
+ x = gsl_vector_get(v, 0);
+ y = gsl_vector_get(v, 1);
+
+ gsl_vector_set(df, 0, 20.0 * (x - dp[0]));
+ gsl_vector_set(df, 1, 40.0 * (y - dp[1]));
+@}
+
+/* Compute both f and df together. */
+void
+my_fdf (const gsl_vector *x, void *params,
+ double *f, gsl_vector *df)
+@{
+ *f = my_f(x, params);
+ my_df(x, params, df);
+@}
+@end example
+
+@noindent
+The function can be initialized using the following code,
+
+@example
+gsl_multimin_function_fdf my_func;
+
+double p[2] = @{ 1.0, 2.0 @}; /* center at (1,2) */
+
+my_func.f = &my_f;
+my_func.df = &my_df;
+my_func.fdf = &my_fdf;
+my_func.n = 2;
+my_func.params = (void *)p;
+@end example
+
+@node Multimin Iteration
+@section Iteration
+
+The following function drives the iteration of each algorithm. The
+function performs one iteration to update the state of the minimizer.
+The same function works for all minimizers so that different methods can
+be substituted at runtime without modifications to the code.
+
+@deftypefun int gsl_multimin_fdfminimizer_iterate (gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx int gsl_multimin_fminimizer_iterate (gsl_multimin_fminimizer * @var{s})
+These functions perform a single iteration of the minimizer @var{s}. If
+the iteration encounters an unexpected problem then an error code will
+be returned.
+@end deftypefun
+
+@noindent
+The minimizer maintains a current best estimate of the minimum at all
+times. This information can be accessed with the following auxiliary
+functions,
+
+@deftypefun {gsl_vector *} gsl_multimin_fdfminimizer_x (const gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx {gsl_vector *} gsl_multimin_fminimizer_x (const gsl_multimin_fminimizer * @var{s})
+@deftypefunx double gsl_multimin_fdfminimizer_minimum (const gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx double gsl_multimin_fminimizer_minimum (const gsl_multimin_fminimizer * @var{s})
+@deftypefunx {gsl_vector *} gsl_multimin_fdfminimizer_gradient (const gsl_multimin_fdfminimizer * @var{s})
+@deftypefunx double gsl_multimin_fminimizer_size (const gsl_multimin_fminimizer * @var{s})
+These functions return the current best estimate of the location of the
+minimum, the value of the function at that point, its gradient, and the
+minimizer-specific characteristic size for the minimizer @var{s}.
+@end deftypefun
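+
+For example, the current state of a two-dimensional search could be
+printed as follows (a sketch using the accessor functions above),
+
+@example
+gsl_vector *xmin = gsl_multimin_fdfminimizer_x (s);
+
+printf ("best estimate: f(%g,%g) = %g\n",
+        gsl_vector_get (xmin, 0),
+        gsl_vector_get (xmin, 1),
+        gsl_multimin_fdfminimizer_minimum (s));
+@end example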
+
+@deftypefun int gsl_multimin_fdfminimizer_restart (gsl_multimin_fdfminimizer * @var{s})
+This function resets the minimizer @var{s} to use the current point as a
+new starting point.
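+This can be useful, for example, with the conjugate gradient
+algorithms, which are commonly restarted after a number of iterations
+to discard the accumulated search-direction information.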
+@end deftypefun
+
+@node Multimin Stopping Criteria
+@section Stopping Criteria
+
+A minimization procedure should stop when one of the following
+conditions is true:
+
+@itemize @bullet
+@item
+A minimum has been found to within the user-specified precision.
+
+@item
+A user-specified maximum number of iterations has been reached.
+
+@item
+An error has occurred.
+@end itemize
+
+@noindent
+The handling of these conditions is under user control. The functions
+below allow the user to test the precision of the current result.
+
+@deftypefun int gsl_multimin_test_gradient (const gsl_vector * @var{g}, double @var{epsabs})
+This function tests the norm of the gradient @var{g} against the
+absolute tolerance @var{epsabs}. The gradient of a multidimensional
+function goes to zero at a minimum. The test returns @code{GSL_SUCCESS}
+if the following condition is achieved,
+@tex
+\beforedisplay
+$$
+|g| < \hbox{\it epsabs}
+$$
+\afterdisplay
+@end tex
+@ifinfo
+
+@example
+|g| < epsabs
+@end example
+
+@end ifinfo
+@noindent
+and returns @code{GSL_CONTINUE} otherwise. A suitable choice of
+@var{epsabs} can be made from the desired accuracy in the function for
+small variations in @math{x}. The relationship between these quantities
+is given by @c{$\delta{f} = g\,\delta{x}$}
+@math{\delta f = g \delta x}.
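+
+For example, if the desired accuracy in the function value is
+@math{\delta f = 0.0001} for variations of order @math{\delta x = 0.01}
+in @math{x}, this relation suggests a gradient tolerance of
+@var{epsabs} = @math{0.0001 / 0.01 = 0.01}.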
+@end deftypefun
+
+@deftypefun int gsl_multimin_test_size (const double @var{size}, double @var{epsabs})
+This function tests the minimizer-specific characteristic size (if
+applicable to the minimizer in use) against the absolute tolerance
+@var{epsabs}.  The test returns @code{GSL_SUCCESS} if the size is
+smaller than the tolerance, and @code{GSL_CONTINUE} otherwise.
+@end deftypefun
+
+@node Multimin Algorithms
+@section Algorithms
+
+There are several minimization methods available. The best choice of
+algorithm depends on the problem. All of the algorithms use the value
+of the function and its gradient at each evaluation point, except for
+the Simplex algorithm, which uses function values only.
+
+@deffn {Minimizer} gsl_multimin_fdfminimizer_conjugate_fr
+@cindex Fletcher-Reeves conjugate gradient algorithm, minimization
+@cindex Conjugate gradient algorithm, minimization
+@cindex minimization, conjugate gradient algorithm
+This is the Fletcher-Reeves conjugate gradient algorithm. The conjugate
+gradient algorithm proceeds as a succession of line minimizations. The
+sequence of search directions is used to build up an approximation to the
+curvature of the function in the neighborhood of the minimum.
+
+An initial search direction @var{p} is chosen using the gradient, and line
+minimization is carried out in that direction. The accuracy of the line
+minimization is specified by the parameter @var{tol}. The minimum
+along this line occurs when the function gradient @var{g} and the search direction
+@var{p} are orthogonal. The line minimization terminates when
+@c{$p\cdot g < tol |p| |g|$}
+@math{dot(p,g) < tol |p| |g|}. The
+search direction is updated using the Fletcher-Reeves formula
+@math{p' = g' - \beta g} where @math{\beta=-|g'|^2/|g|^2}, and
+the line minimization is then repeated for the new search
+direction.
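+
+As an illustration only (this is not GSL's internal code), the update
+formula above could be written using the BLAS support functions, where
+@code{g0}, @code{g1} and @code{p1} are assumed to be @code{gsl_vector}s
+holding the previous gradient, the new gradient and the new search
+direction,
+
+@example
+double ng0 = gsl_blas_dnrm2 (g0);
+double ng1 = gsl_blas_dnrm2 (g1);
+double beta = -(ng1 * ng1) / (ng0 * ng0);
+
+gsl_vector_memcpy (p1, g1);       /* p' = g'          */
+gsl_blas_daxpy (-beta, g0, p1);   /* p' = g' - beta g */
+@end example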
+@end deffn
+
+@deffn {Minimizer} gsl_multimin_fdfminimizer_conjugate_pr
+@cindex Polak-Ribiere algorithm, minimization
+@cindex minimization, Polak-Ribiere algorithm
+This is the Polak-Ribiere conjugate gradient algorithm. It is similar
+to the Fletcher-Reeves method, differing only in the choice of the
+coefficient @math{\beta}. Both methods work well when the evaluation
+point is close enough to the minimum of the objective function that it
+is well approximated by a quadratic hypersurface.
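+
+In the conventional Polak-Ribiere formula the ratio of gradient norms
+used above is replaced by @c{$g' \cdot (g' - g) / |g|^2$}
+@math{dot(g',(g'-g)) / |g|^2}, so that information about the change in
+the gradient between iterations enters the update.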
+@end deffn
+
+@deffn {Minimizer} gsl_multimin_fdfminimizer_vector_bfgs2
+@deffnx {Minimizer} gsl_multimin_fdfminimizer_vector_bfgs
+@cindex BFGS algorithm, minimization
+@cindex minimization, BFGS algorithm
+These methods use the vector Broyden-Fletcher-Goldfarb-Shanno (BFGS)
+algorithm. This is a quasi-Newton method which builds up an approximation
+to the second derivatives of the function @math{f} using the difference
+between successive gradient vectors. By combining the first and second
+derivatives the algorithm is able to take Newton-type steps towards the
+function minimum, assuming quadratic behavior in that region.
+
+The @code{bfgs2} version of this minimizer is the most efficient
+version available, and is a faithful implementation of the line
+minimization scheme described in Fletcher's @cite{Practical Methods of
+Optimization}, Algorithms 2.6.2 and 2.6.4.  It supersedes the original
+@code{bfgs} routine and requires substantially fewer function and
+gradient evaluations. The user-supplied tolerance @var{tol}
+corresponds to the parameter @math{\sigma} used by Fletcher. A value
+of 0.1 is recommended for typical use (larger values correspond to
+less accurate line searches).
+
+@end deffn
+
+@deffn {Minimizer} gsl_multimin_fdfminimizer_steepest_descent
+@cindex steepest descent algorithm, minimization
+@cindex minimization, steepest descent algorithm
+The steepest descent algorithm follows the downhill gradient of the
+function at each step. When a downhill step is successful the step-size
+is increased by a factor of two. If the downhill step leads to a higher
+function value then the algorithm backtracks and the step size is
+decreased using the parameter @var{tol}. A suitable value of @var{tol}
+for most applications is 0.1. The steepest descent method is
+inefficient and is included only for demonstration purposes.
+@end deffn
+
+@deffn {Minimizer} gsl_multimin_fminimizer_nmsimplex
+@cindex Nelder-Mead simplex algorithm for minimization
+@cindex simplex algorithm, minimization
+@cindex minimization, simplex algorithm
+This is the Simplex algorithm of Nelder and Mead. It constructs
+@math{n+1} vectors @math{p_i} from the
+starting vector @var{x} and the vector @var{step_size} as follows:
+@tex
+\beforedisplay
+$$
+\eqalign{
+p_0 & = (x_0, x_1, \cdots , x_n) \cr
+p_1 & = (x_0 + step\_size_0, x_1, \cdots , x_n) \cr
+p_2 & = (x_0, x_1 + step\_size_1, \cdots , x_n) \cr
+\dots &= \dots \cr
+p_n & = (x_0, x_1, \cdots , x_n+step\_size_n) \cr
+}
+$$
+\afterdisplay
+@end tex
+@ifinfo
+
+@example
+p_0 = (x_0, x_1, ... , x_n)
+p_1 = (x_0 + step_size_0, x_1, ... , x_n)
+p_2 = (x_0, x_1 + step_size_1, ... , x_n)
+... = ...
+p_n = (x_0, x_1, ... , x_n+step_size_n)
+@end example
+
+@end ifinfo
+@noindent
+These vectors form the @math{n+1} vertices of a simplex in @math{n}
+dimensions. On each iteration the algorithm tries to improve
+the parameter vector @math{p_i} corresponding to the highest
+function value by simple geometrical transformations. These
+are reflection, reflection followed by expansion, contraction and multiple
+contraction. Using these transformations the simplex moves through
+the parameter space towards the minimum, where it contracts itself.
+
+After each iteration, the best vertex is returned.  Note that, due to
+the nature of the algorithm, not every step improves the current
+best parameter vector. Usually several iterations are required.
+
+The routine calculates the minimizer-specific characteristic size as the
+average distance from the geometrical center of the simplex to all its
+vertices.  This size can be used as a stopping criterion, as the simplex
+contracts itself near the minimum. The size is returned by the function
+@code{gsl_multimin_fminimizer_size}.
+@end deffn
+
+@node Multimin Examples
+@section Examples
+
+This example program finds the minimum of the paraboloid function
+defined earlier. The location of the minimum is offset from the origin
+in @math{x} and @math{y}, and the function value at the minimum is
+non-zero.  The main program is given below; it requires the example
+function given earlier in this chapter.
+
+@smallexample
+int
+main (void)
+@{
+ size_t iter = 0;
+ int status;
+
+ const gsl_multimin_fdfminimizer_type *T;
+ gsl_multimin_fdfminimizer *s;
+
+ /* Position of the minimum (1,2). */
+ double par[2] = @{ 1.0, 2.0 @};
+
+ gsl_vector *x;
+ gsl_multimin_function_fdf my_func;
+
+ my_func.f = &my_f;
+ my_func.df = &my_df;
+ my_func.fdf = &my_fdf;
+ my_func.n = 2;
+ my_func.params = &par;
+
+ /* Starting point, x = (5,7) */
+ x = gsl_vector_alloc (2);
+ gsl_vector_set (x, 0, 5.0);
+ gsl_vector_set (x, 1, 7.0);
+
+ T = gsl_multimin_fdfminimizer_conjugate_fr;
+ s = gsl_multimin_fdfminimizer_alloc (T, 2);
+
+ gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4);
+
+ do
+ @{
+ iter++;
+ status = gsl_multimin_fdfminimizer_iterate (s);
+
+ if (status)
+ break;
+
+ status = gsl_multimin_test_gradient (s->gradient, 1e-3);
+
+ if (status == GSL_SUCCESS)
+ printf ("Minimum found at:\n");
+
+ printf ("%5d %.5f %.5f %10.5f\n", iter,
+ gsl_vector_get (s->x, 0),
+ gsl_vector_get (s->x, 1),
+ s->f);
+
+ @}
+ while (status == GSL_CONTINUE && iter < 100);
+
+ gsl_multimin_fdfminimizer_free (s);
+ gsl_vector_free (x);
+
+ return 0;
+@}
+@end smallexample
+
+@noindent
+The initial step-size is chosen as 0.01, a conservative estimate in this
+case, and the line minimization parameter is set at 0.0001. The program
+terminates when the norm of the gradient has been reduced below
+0.001. The output of the program is shown below,
+
+@example
+ x y f
+ 1 4.99629 6.99072 687.84780
+ 2 4.98886 6.97215 683.55456
+ 3 4.97400 6.93501 675.01278
+ 4 4.94429 6.86073 658.10798
+ 5 4.88487 6.71217 625.01340
+ 6 4.76602 6.41506 561.68440
+ 7 4.52833 5.82083 446.46694
+ 8 4.05295 4.63238 261.79422
+ 9 3.10219 2.25548 75.49762
+ 10 2.85185 1.62963 67.03704
+ 11 2.19088 1.76182 45.31640
+ 12 0.86892 2.02622 30.18555
+Minimum found at:
+ 13 1.00000 2.00000 30.00000
+@end example
+
+@noindent
+Note that the algorithm gradually increases the step size as it
+successfully moves downhill, as can be seen by plotting the successive
+points.
+
+@iftex
+@sp 1
+@center @image{multimin,3.4in}
+@end iftex
+
+@noindent
+The conjugate gradient algorithm finds the minimum along its second
+search direction because the function is purely quadratic.  Additional
+iterations would be needed for a more complicated function.
+
+Here is another example, using the Nelder-Mead Simplex algorithm to
+minimize the same example objective function as above.
+
+@smallexample
+int
+main(void)
+@{
+ size_t np = 2;
+ double par[2] = @{1.0, 2.0@};
+
+ const gsl_multimin_fminimizer_type *T =
+ gsl_multimin_fminimizer_nmsimplex;
+ gsl_multimin_fminimizer *s = NULL;
+ gsl_vector *ss, *x;
+ gsl_multimin_function minex_func;
+
+ size_t iter = 0, i;
+ int status;
+ double size;
+
+ /* Initial vertex size vector */
+ ss = gsl_vector_alloc (np);
+
+ /* Set all step sizes to 1 */
+ gsl_vector_set_all (ss, 1.0);
+
+ /* Starting point */
+ x = gsl_vector_alloc (np);
+
+ gsl_vector_set (x, 0, 5.0);
+ gsl_vector_set (x, 1, 7.0);
+
+ /* Initialize method and iterate */
+ minex_func.f = &my_f;
+ minex_func.n = np;
+ minex_func.params = (void *)&par;
+
+ s = gsl_multimin_fminimizer_alloc (T, np);
+ gsl_multimin_fminimizer_set (s, &minex_func, x, ss);
+
+ do
+ @{
+ iter++;
+ status = gsl_multimin_fminimizer_iterate(s);
+
+ if (status)
+ break;
+
+ size = gsl_multimin_fminimizer_size (s);
+ status = gsl_multimin_test_size (size, 1e-2);
+
+ if (status == GSL_SUCCESS)
+ @{
+ printf ("converged to minimum at\n");
+ @}
+
+ printf ("%5d ", iter);
+ for (i = 0; i < np; i++)
+ @{
+ printf ("%10.3e ", gsl_vector_get (s->x, i));
+ @}
+ printf ("f() = %7.3f size = %.3f\n", s->fval, size);
+ @}
+ while (status == GSL_CONTINUE && iter < 100);
+
+ gsl_vector_free(x);
+ gsl_vector_free(ss);
+ gsl_multimin_fminimizer_free (s);
+
+ return status;
+@}
+@end smallexample
+
+@noindent
+The minimum search stops when the Simplex size drops to 0.01. The output is
+shown below.
+
+@example
+ 1 6.500e+00 5.000e+00 f() = 512.500 size = 1.082
+ 2 5.250e+00 4.000e+00 f() = 290.625 size = 1.372
+ 3 5.250e+00 4.000e+00 f() = 290.625 size = 1.372
+ 4 5.500e+00 1.000e+00 f() = 252.500 size = 1.372
+ 5 2.625e+00 3.500e+00 f() = 101.406 size = 1.823
+ 6 3.469e+00 1.375e+00 f() = 98.760 size = 1.526
+ 7 1.820e+00 3.156e+00 f() = 63.467 size = 1.105
+ 8 1.820e+00 3.156e+00 f() = 63.467 size = 1.105
+ 9 1.016e+00 2.812e+00 f() = 43.206 size = 1.105
+ 10 2.041e+00 2.008e+00 f() = 40.838 size = 0.645
+ 11 1.236e+00 1.664e+00 f() = 32.816 size = 0.645
+ 12 1.236e+00 1.664e+00 f() = 32.816 size = 0.447
+ 13 5.225e-01 1.980e+00 f() = 32.288 size = 0.447
+ 14 1.103e+00 2.073e+00 f() = 30.214 size = 0.345
+ 15 1.103e+00 2.073e+00 f() = 30.214 size = 0.264
+ 16 1.103e+00 2.073e+00 f() = 30.214 size = 0.160
+ 17 9.864e-01 1.934e+00 f() = 30.090 size = 0.132
+ 18 9.190e-01 1.987e+00 f() = 30.069 size = 0.092
+ 19 1.028e+00 2.017e+00 f() = 30.013 size = 0.056
+ 20 1.028e+00 2.017e+00 f() = 30.013 size = 0.046
+ 21 1.028e+00 2.017e+00 f() = 30.013 size = 0.033
+ 22 9.874e-01 1.985e+00 f() = 30.006 size = 0.028
+ 23 9.846e-01 1.995e+00 f() = 30.003 size = 0.023
+ 24 1.007e+00 2.003e+00 f() = 30.001 size = 0.012
+converged to minimum at
+ 25 1.007e+00 2.003e+00 f() = 30.001 size = 0.010
+@end example
+
+@noindent
+The simplex size first increases, while the simplex moves towards the
+minimum. After a while the size begins to decrease as the simplex
+contracts around the minimum.
+
+@node Multimin References and Further Reading
+@section References and Further Reading
+
+The conjugate gradient and BFGS methods are described in detail in the
+following book,
+
+@itemize @asis
+@item R. Fletcher,
+@cite{Practical Methods of Optimization (Second Edition)} Wiley
+(1987), ISBN 0471915475.
+@end itemize
+
+A brief description of multidimensional minimization algorithms and
+more recent references can be found in,
+
+@itemize @asis
+@item C.W. Ueberhuber,
+@cite{Numerical Computation (Volume 2)}, Chapter 14, Section 4.4
+``Minimization Methods'', p.@: 325--335, Springer (1997), ISBN
+3-540-62057-5.
+@end itemize
+
+@noindent
+The simplex algorithm is described in the following paper,
+
+@itemize @asis
+@item J.A. Nelder and R. Mead,
+@cite{A simplex method for function minimization}, Computer Journal
+vol.@: 7 (1965), 308--315.
+@end itemize
+
+@noindent