diff --git a/gsl-1.9/doc/gsl-ref.info-3 b/gsl-1.9/doc/gsl-ref.info-3
new file mode 100644
index 0000000..d30fefa
--- /dev/null
+++ b/gsl-1.9/doc/gsl-ref.info-3
@@ -0,0 +1,7520 @@
+This is gsl-ref.info, produced by makeinfo version 4.8 from
+gsl-ref.texi.
+
+INFO-DIR-SECTION Scientific software
+START-INFO-DIR-ENTRY
+* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
+END-INFO-DIR-ENTRY
+
+ Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
+2005, 2006, 2007 The GSL Team.
+
+ Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with the
+Invariant Sections being "GNU General Public License" and "Free Software
+Needs Free Documentation", the Front-Cover text being "A GNU Manual",
+and with the Back-Cover Text being (a) (see below). A copy of the
+license is included in the section entitled "GNU Free Documentation
+License".
+
+ (a) The Back-Cover Text is: "You have freedom to copy and modify this
+GNU Manual, like GNU software."
+
+
+File: gsl-ref.info, Node: Maximum and Minimum values, Next: Median and Percentiles, Prev: Weighted Samples, Up: Statistics
+
+20.7 Maximum and Minimum values
+===============================
+
+The following functions find the maximum and minimum values of a
+dataset (or their indices). If the data contains `NaN's then a `NaN'
+will be returned, since the maximum or minimum value is undefined. For
+functions which return an index, the location of the first `NaN' in the
+array is returned.
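The stride and NaN behaviour described above can be written as a short plain-C sketch (the name `sketch_max` and its body are illustrative assumptions, not the GSL source):

```c
#include <stddef.h>
#include <math.h>

/* Illustrative sketch, not the GSL source: a stride-aware maximum
   that propagates NaN, as described above. */
static double
sketch_max (const double data[], size_t stride, size_t n)
{
  double max = data[0];
  size_t i;
  for (i = 0; i < n; i++)
    {
      double x = data[i * stride];
      if (isnan (x))
        return x;           /* the maximum of data containing NaN is NaN */
      if (x > max)
        max = x;
    }
  return max;
}
```

With stride 2 only every other element is examined, e.g. `sketch_max (v, 2, 3)` scans `v[0]`, `v[2]` and `v[4]`.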
+
+ -- Function: double gsl_stats_max (const double DATA[], size_t STRIDE,
+ size_t N)
+ This function returns the maximum value in DATA, a dataset of
+ length N with stride STRIDE. The maximum value is defined as the
+ value of the element x_i which satisfies x_i >= x_j for all j.
+
+ If you want instead to find the element with the largest absolute
+ magnitude you will need to apply `fabs' or `abs' to your data
+ before calling this function.
+
+ -- Function: double gsl_stats_min (const double DATA[], size_t STRIDE,
+ size_t N)
+ This function returns the minimum value in DATA, a dataset of
+ length N with stride STRIDE. The minimum value is defined as the
+ value of the element x_i which satisfies x_i <= x_j for all j.
+
+ If you want instead to find the element with the smallest absolute
+ magnitude you will need to apply `fabs' or `abs' to your data
+ before calling this function.
+
+ -- Function: void gsl_stats_minmax (double * MIN, double * MAX, const
+ double DATA[], size_t STRIDE, size_t N)
+ This function finds both the minimum and maximum values MIN, MAX
+ in DATA in a single pass.
+
+ -- Function: size_t gsl_stats_max_index (const double DATA[], size_t
+ STRIDE, size_t N)
+ This function returns the index of the maximum value in DATA, a
+ dataset of length N with stride STRIDE. The maximum value is
+ defined as the value of the element x_i which satisfies x_i >= x_j
+ for all j. When there are several equal maximum elements then the
+ first one is chosen.
+
+ -- Function: size_t gsl_stats_min_index (const double DATA[], size_t
+ STRIDE, size_t N)
+ This function returns the index of the minimum value in DATA, a
+ dataset of length N with stride STRIDE. The minimum value is
+     defined as the value of the element x_i which satisfies x_i <= x_j
+ for all j. When there are several equal minimum elements then the
+ first one is chosen.
+
+ -- Function: void gsl_stats_minmax_index (size_t * MIN_INDEX, size_t *
+ MAX_INDEX, const double DATA[], size_t STRIDE, size_t N)
+     This function finds the indices of the minimum and maximum values
+     in DATA in a single pass, storing them in MIN_INDEX and MAX_INDEX.
+
+
+File: gsl-ref.info, Node: Median and Percentiles, Next: Example statistical programs, Prev: Maximum and Minimum values, Up: Statistics
+
+20.8 Median and Percentiles
+===========================
+
+The median and percentile functions described in this section operate on
+sorted data. For convenience we use "quantiles", measured on a scale
+of 0 to 1, instead of percentiles (which use a scale of 0 to 100).
+
+ -- Function: double gsl_stats_median_from_sorted_data (const double
+ SORTED_DATA[], size_t STRIDE, size_t N)
+ This function returns the median value of SORTED_DATA, a dataset
+ of length N with stride STRIDE. The elements of the array must be
+ in ascending numerical order. There are no checks to see whether
+ the data are sorted, so the function `gsl_sort' should always be
+ used first.
+
+ When the dataset has an odd number of elements the median is the
+ value of element (n-1)/2. When the dataset has an even number of
+ elements the median is the mean of the two nearest middle values,
+ elements (n-1)/2 and n/2. Since the algorithm for computing the
+ median involves interpolation this function always returns a
+ floating-point number, even for integer data types.
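The odd/even rule above can be sketched in a few lines of C (the helper name is hypothetical and the stride argument is omitted for brevity):

```c
#include <stddef.h>

/* Illustrative sketch of the median rule above (hypothetical name,
   stride omitted): for odd n the indices lhs and rhs coincide at
   (n-1)/2, so the average reduces to the single middle element; for
   even n it averages the two middle elements (n-1)/2 and n/2. */
static double
sketch_median (const double sorted[], size_t n)
{
  size_t lhs, rhs;
  if (n == 0)
    return 0.0;             /* no data: define as zero for this sketch */
  lhs = (n - 1) / 2;
  rhs = n / 2;
  return (sorted[lhs] + sorted[rhs]) / 2.0;
}
```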
+
+ -- Function: double gsl_stats_quantile_from_sorted_data (const double
+ SORTED_DATA[], size_t STRIDE, size_t N, double F)
+ This function returns a quantile value of SORTED_DATA, a
+ double-precision array of length N with stride STRIDE. The
+ elements of the array must be in ascending numerical order. The
+     quantile is determined by F, a fraction between 0 and 1. For
+ example, to compute the value of the 75th percentile F should have
+ the value 0.75.
+
+ There are no checks to see whether the data are sorted, so the
+ function `gsl_sort' should always be used first.
+
+ The quantile is found by interpolation, using the formula
+
+ quantile = (1 - \delta) x_i + \delta x_{i+1}
+
+ where i is `floor'((n - 1)f) and \delta is (n-1)f - i.
+
+ Thus the minimum value of the array (`data[0*stride]') is given by
+ F equal to zero, the maximum value (`data[(n-1)*stride]') is given
+ by F equal to one and the median value is given by F equal to 0.5.
+ Since the algorithm for computing quantiles involves
+ interpolation this function always returns a floating-point
+ number, even for integer data types.
+
+
+File: gsl-ref.info, Node: Example statistical programs, Next: Statistics References and Further Reading, Prev: Median and Percentiles, Up: Statistics
+
+20.9 Examples
+=============
+
+Here is a basic example of how to use the statistical functions:
+
+ #include <stdio.h>
+ #include <gsl/gsl_statistics.h>
+
+ int
+ main(void)
+ {
+ double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
+ double mean, variance, largest, smallest;
+
+ mean = gsl_stats_mean(data, 1, 5);
+ variance = gsl_stats_variance(data, 1, 5);
+ largest = gsl_stats_max(data, 1, 5);
+ smallest = gsl_stats_min(data, 1, 5);
+
+ printf ("The dataset is %g, %g, %g, %g, %g\n",
+ data[0], data[1], data[2], data[3], data[4]);
+
+ printf ("The sample mean is %g\n", mean);
+ printf ("The estimated variance is %g\n", variance);
+ printf ("The largest value is %g\n", largest);
+ printf ("The smallest value is %g\n", smallest);
+ return 0;
+ }
+
+ The program should produce the following output,
+
+ The dataset is 17.2, 18.1, 16.5, 18.3, 12.6
+ The sample mean is 16.54
+ The estimated variance is 4.2984
+ The largest value is 18.3
+ The smallest value is 12.6
+
+ Here is an example using sorted data,
+
+ #include <stdio.h>
+ #include <gsl/gsl_sort.h>
+ #include <gsl/gsl_statistics.h>
+
+ int
+ main(void)
+ {
+ double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
+ double median, upperq, lowerq;
+
+ printf ("Original dataset: %g, %g, %g, %g, %g\n",
+ data[0], data[1], data[2], data[3], data[4]);
+
+ gsl_sort (data, 1, 5);
+
+ printf ("Sorted dataset: %g, %g, %g, %g, %g\n",
+ data[0], data[1], data[2], data[3], data[4]);
+
+ median
+ = gsl_stats_median_from_sorted_data (data,
+ 1, 5);
+
+ upperq
+ = gsl_stats_quantile_from_sorted_data (data,
+ 1, 5,
+ 0.75);
+ lowerq
+ = gsl_stats_quantile_from_sorted_data (data,
+ 1, 5,
+ 0.25);
+
+ printf ("The median is %g\n", median);
+ printf ("The upper quartile is %g\n", upperq);
+ printf ("The lower quartile is %g\n", lowerq);
+ return 0;
+ }
+
+ This program should produce the following output,
+
+ Original dataset: 17.2, 18.1, 16.5, 18.3, 12.6
+ Sorted dataset: 12.6, 16.5, 17.2, 18.1, 18.3
+ The median is 17.2
+ The upper quartile is 18.1
+ The lower quartile is 16.5
+
+
+File: gsl-ref.info, Node: Statistics References and Further Reading, Prev: Example statistical programs, Up: Statistics
+
+20.10 References and Further Reading
+====================================
+
+The standard reference for almost any topic in statistics is the
+multi-volume `Advanced Theory of Statistics' by Kendall and Stuart.
+
+ Maurice Kendall, Alan Stuart, and J. Keith Ord. `The Advanced
+ Theory of Statistics' (multiple volumes) reprinted as `Kendall's
+ Advanced Theory of Statistics'. Wiley, ISBN 047023380X.
+
+Many statistical concepts can be more easily understood by a Bayesian
+approach. The following book by Gelman, Carlin, Stern and Rubin gives a
+comprehensive coverage of the subject.
+
+ Andrew Gelman, John B. Carlin, Hal S. Stern, Donald B. Rubin.
+ `Bayesian Data Analysis'. Chapman & Hall, ISBN 0412039915.
+
+For physicists the Particle Data Group provides useful reviews of
+Probability and Statistics in the "Mathematical Tools" section of its
+Annual Review of Particle Physics.
+
+ `Review of Particle Properties' R.M. Barnett et al., Physical
+ Review D54, 1 (1996)
+
+The Review of Particle Physics is available online at the website
+`http://pdg.lbl.gov/'.
+
+
+File: gsl-ref.info, Node: Histograms, Next: N-tuples, Prev: Statistics, Up: Top
+
+21 Histograms
+*************
+
+This chapter describes functions for creating histograms. Histograms
+provide a convenient way of summarizing the distribution of a set of
+data. A histogram consists of a set of "bins" which count the number of
+events falling into a given range of a continuous variable x. In GSL
+the bins of a histogram contain floating-point numbers, so they can be
+used to record both integer and non-integer distributions. The bins
+can use arbitrary sets of ranges (uniformly spaced bins are the
+default). Both one- and two-dimensional histograms are supported.
+
+ Once a histogram has been created it can also be converted into a
+probability distribution function. The library provides efficient
+routines for selecting random samples from probability distributions.
+This can be useful for generating simulations based on real data.
+
+ The functions are declared in the header files `gsl_histogram.h' and
+`gsl_histogram2d.h'.
+
+* Menu:
+
+* The histogram struct::
+* Histogram allocation::
+* Copying Histograms::
+* Updating and accessing histogram elements::
+* Searching histogram ranges::
+* Histogram Statistics::
+* Histogram Operations::
+* Reading and writing histograms::
+* Resampling from histograms::
+* The histogram probability distribution struct::
+* Example programs for histograms::
+* Two dimensional histograms::
+* The 2D histogram struct::
+* 2D Histogram allocation::
+* Copying 2D Histograms::
+* Updating and accessing 2D histogram elements::
+* Searching 2D histogram ranges::
+* 2D Histogram Statistics::
+* 2D Histogram Operations::
+* Reading and writing 2D histograms::
+* Resampling from 2D histograms::
+* Example programs for 2D histograms::
+
+
+File: gsl-ref.info, Node: The histogram struct, Next: Histogram allocation, Up: Histograms
+
+21.1 The histogram struct
+=========================
+
+A histogram is defined by the following struct,
+
+ -- Data Type: gsl_histogram
+ `size_t n'
+          This is the number of histogram bins.
+
+ `double * range'
+ The ranges of the bins are stored in an array of N+1 elements
+ pointed to by RANGE.
+
+ `double * bin'
+ The counts for each bin are stored in an array of N elements
+ pointed to by BIN. The bins are floating-point numbers, so
+ you can increment them by non-integer values if necessary.
+
+The range for BIN[i] is given by RANGE[i] to RANGE[i+1]. For n bins
+there are n+1 entries in the array RANGE. Each bin is inclusive at the
+lower end and exclusive at the upper end. Mathematically this means
+that the bins are defined by the following inequality,
+ bin[i] corresponds to range[i] <= x < range[i+1]
+
+Here is a diagram of the correspondence between ranges and bins on the
+number-line for x,
+
+
+ [ bin[0] )[ bin[1] )[ bin[2] )[ bin[3] )[ bin[4] )
+ ---|---------|---------|---------|---------|---------|--- x
+ r[0] r[1] r[2] r[3] r[4] r[5]
+
+In this picture the values of the RANGE array are denoted by r. On the
+left-hand side of each bin the square bracket `[' denotes an inclusive
+lower bound (r <= x), and the round parentheses `)' on the right-hand
+side denote an exclusive upper bound (x < r). Thus any samples which
+fall on the upper end of the histogram are excluded. If you want to
+include this value for the last bin you will need to add an extra bin
+to your histogram.
+
+ The `gsl_histogram' struct and its associated functions are defined
+in the header file `gsl_histogram.h'.
+
+
+File: gsl-ref.info, Node: Histogram allocation, Next: Copying Histograms, Prev: The histogram struct, Up: Histograms
+
+21.2 Histogram allocation
+=========================
+
+The functions for allocating memory to a histogram follow the style of
+`malloc' and `free'. In addition they also perform their own error
+checking. If there is insufficient memory available to allocate a
+histogram then the functions call the error handler (with an error
+number of `GSL_ENOMEM') in addition to returning a null pointer. Thus
+if you use the library error handler to abort your program then it
+isn't necessary to check every histogram `alloc'.
+
+ -- Function: gsl_histogram * gsl_histogram_alloc (size_t N)
+ This function allocates memory for a histogram with N bins, and
+ returns a pointer to a newly created `gsl_histogram' struct. If
+ insufficient memory is available a null pointer is returned and the
+ error handler is invoked with an error code of `GSL_ENOMEM'. The
+ bins and ranges are not initialized, and should be prepared using
+ one of the range-setting functions below in order to make the
+ histogram ready for use.
+
+ -- Function: int gsl_histogram_set_ranges (gsl_histogram * H, const
+ double RANGE[], size_t SIZE)
+ This function sets the ranges of the existing histogram H using
+ the array RANGE of size SIZE. The values of the histogram bins
+ are reset to zero. The `range' array should contain the desired
+ bin limits. The ranges can be arbitrary, subject to the
+ restriction that they are monotonically increasing.
+
+ The following example shows how to create a histogram with
+ logarithmic bins with ranges [1,10), [10,100) and [100,1000).
+
+ gsl_histogram * h = gsl_histogram_alloc (3);
+
+ /* bin[0] covers the range 1 <= x < 10 */
+ /* bin[1] covers the range 10 <= x < 100 */
+ /* bin[2] covers the range 100 <= x < 1000 */
+
+ double range[4] = { 1.0, 10.0, 100.0, 1000.0 };
+
+ gsl_histogram_set_ranges (h, range, 4);
+
+ Note that the size of the RANGE array should be defined to be one
+ element bigger than the number of bins. The additional element is
+ required for the upper value of the final bin.
+
+ -- Function: int gsl_histogram_set_ranges_uniform (gsl_histogram * H,
+ double XMIN, double XMAX)
+ This function sets the ranges of the existing histogram H to cover
+ the range XMIN to XMAX uniformly. The values of the histogram
+ bins are reset to zero. The bin ranges are shown in the table
+ below,
+ bin[0] corresponds to xmin <= x < xmin + d
+ bin[1] corresponds to xmin + d <= x < xmin + 2 d
+ ......
+ bin[n-1] corresponds to xmin + (n-1)d <= x < xmax
+
+ where d is the bin spacing, d = (xmax-xmin)/n.
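The table above corresponds to the following sketch (hypothetical name; GSL computes the ranges internally from XMIN, XMAX and n):

```c
#include <stddef.h>

/* Sketch of the uniform bin layout above: range[i] = xmin + i d
   with d = (xmax - xmin)/n, and range[n] pinned to xmax. */
static void
sketch_set_ranges_uniform (double range[], size_t n,
                           double xmin, double xmax)
{
  size_t i;
  double d = (xmax - xmin) / n;
  for (i = 0; i <= n; i++)
    range[i] = xmin + i * d;
  range[n] = xmax;          /* guard against rounding in the last entry */
}
```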
+
+ -- Function: void gsl_histogram_free (gsl_histogram * H)
+ This function frees the histogram H and all of the memory
+ associated with it.
+
+
+File: gsl-ref.info, Node: Copying Histograms, Next: Updating and accessing histogram elements, Prev: Histogram allocation, Up: Histograms
+
+21.3 Copying Histograms
+=======================
+
+ -- Function: int gsl_histogram_memcpy (gsl_histogram * DEST, const
+ gsl_histogram * SRC)
+ This function copies the histogram SRC into the pre-existing
+ histogram DEST, making DEST into an exact copy of SRC. The two
+ histograms must be of the same size.
+
+ -- Function: gsl_histogram * gsl_histogram_clone (const gsl_histogram
+ * SRC)
+ This function returns a pointer to a newly created histogram which
+ is an exact copy of the histogram SRC.
+
+
+File: gsl-ref.info, Node: Updating and accessing histogram elements, Next: Searching histogram ranges, Prev: Copying Histograms, Up: Histograms
+
+21.4 Updating and accessing histogram elements
+==============================================
+
+There are two ways to access histogram bins, either by specifying an x
+coordinate or by using the bin-index directly. The functions for
+accessing the histogram through x coordinates use a binary search to
+identify the bin which covers the appropriate range.
+
+ -- Function: int gsl_histogram_increment (gsl_histogram * H, double X)
+ This function updates the histogram H by adding one (1.0) to the
+ bin whose range contains the coordinate X.
+
+ If X lies in the valid range of the histogram then the function
+ returns zero to indicate success. If X is less than the lower
+ limit of the histogram then the function returns `GSL_EDOM', and
+     none of the bins are modified. Similarly, if the value of X is greater
+ than or equal to the upper limit of the histogram then the function
+ returns `GSL_EDOM', and none of the bins are modified. The error
+ handler is not called, however, since it is often necessary to
+ compute histograms for a small range of a larger dataset, ignoring
+ the values outside the range of interest.
+
+ -- Function: int gsl_histogram_accumulate (gsl_histogram * H, double
+ X, double WEIGHT)
+ This function is similar to `gsl_histogram_increment' but increases
+ the value of the appropriate bin in the histogram H by the
+ floating-point number WEIGHT.
+
+ -- Function: double gsl_histogram_get (const gsl_histogram * H, size_t
+ I)
+ This function returns the contents of the I-th bin of the histogram
+ H. If I lies outside the valid range of indices for the histogram
+ then the error handler is called with an error code of `GSL_EDOM'
+ and the function returns 0.
+
+ -- Function: int gsl_histogram_get_range (const gsl_histogram * H,
+ size_t I, double * LOWER, double * UPPER)
+ This function finds the upper and lower range limits of the I-th
+ bin of the histogram H. If the index I is valid then the
+ corresponding range limits are stored in LOWER and UPPER. The
+ lower limit is inclusive (i.e. events with this coordinate are
+ included in the bin) and the upper limit is exclusive (i.e. events
+ with the coordinate of the upper limit are excluded and fall in the
+ neighboring higher bin, if it exists). The function returns 0 to
+ indicate success. If I lies outside the valid range of indices for
+ the histogram then the error handler is called and the function
+ returns an error code of `GSL_EDOM'.
+
+ -- Function: double gsl_histogram_max (const gsl_histogram * H)
+ -- Function: double gsl_histogram_min (const gsl_histogram * H)
+ -- Function: size_t gsl_histogram_bins (const gsl_histogram * H)
+ These functions return the maximum upper and minimum lower range
+ limits and the number of bins of the histogram H. They provide a
+ way of determining these values without accessing the
+ `gsl_histogram' struct directly.
+
+ -- Function: void gsl_histogram_reset (gsl_histogram * H)
+ This function resets all the bins in the histogram H to zero.
+
+
+File: gsl-ref.info, Node: Searching histogram ranges, Next: Histogram Statistics, Prev: Updating and accessing histogram elements, Up: Histograms
+
+21.5 Searching histogram ranges
+===============================
+
+The following functions are used by the access and update routines to
+locate the bin which corresponds to a given x coordinate.
+
+ -- Function: int gsl_histogram_find (const gsl_histogram * H, double
+ X, size_t * I)
+ This function finds and sets the index I to the bin number which
+ covers the coordinate X in the histogram H. The bin is located
+ using a binary search. The search includes an optimization for
+ histograms with uniform range, and will return the correct bin
+ immediately in this case. If X is found in the range of the
+ histogram then the function sets the index I and returns
+ `GSL_SUCCESS'. If X lies outside the valid range of the histogram
+ then the function returns `GSL_EDOM' and the error handler is
+ invoked.
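The search can be sketched as a plain binary search over the RANGE array (hypothetical name; GSL's routine additionally short-cuts uniformly spaced ranges, which this sketch omits):

```c
#include <stddef.h>

/* Sketch of the bin lookup above: binary search for the index i
   with range[i] <= x < range[i+1].  The range array has n+1 entries
   for n bins.  Returns 0 on success, -1 when x is out of range. */
static int
sketch_find (const double range[], size_t n, double x, size_t *i)
{
  size_t lo = 0, hi = n;
  if (x < range[0] || x >= range[n])
    return -1;
  while (hi - lo > 1)
    {
      size_t mid = (lo + hi) / 2;
      if (x >= range[mid])
        lo = mid;           /* invariant: range[lo] <= x < range[hi] */
      else
        hi = mid;
    }
  *i = lo;
  return 0;
}
```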
+
+
+File: gsl-ref.info, Node: Histogram Statistics, Next: Histogram Operations, Prev: Searching histogram ranges, Up: Histograms
+
+21.6 Histogram Statistics
+=========================
+
+ -- Function: double gsl_histogram_max_val (const gsl_histogram * H)
+ This function returns the maximum value contained in the histogram
+ bins.
+
+ -- Function: size_t gsl_histogram_max_bin (const gsl_histogram * H)
+ This function returns the index of the bin containing the maximum
+ value. In the case where several bins contain the same maximum
+ value the smallest index is returned.
+
+ -- Function: double gsl_histogram_min_val (const gsl_histogram * H)
+ This function returns the minimum value contained in the histogram
+ bins.
+
+ -- Function: size_t gsl_histogram_min_bin (const gsl_histogram * H)
+ This function returns the index of the bin containing the minimum
+     value. In the case where several bins contain the same minimum
+ value the smallest index is returned.
+
+ -- Function: double gsl_histogram_mean (const gsl_histogram * H)
+ This function returns the mean of the histogrammed variable, where
+ the histogram is regarded as a probability distribution. Negative
+ bin values are ignored for the purposes of this calculation. The
+ accuracy of the result is limited by the bin width.
+
+ -- Function: double gsl_histogram_sigma (const gsl_histogram * H)
+ This function returns the standard deviation of the histogrammed
+ variable, where the histogram is regarded as a probability
+ distribution. Negative bin values are ignored for the purposes of
+ this calculation. The accuracy of the result is limited by the bin
+ width.
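A sketch of the mean makes the bin-width limitation concrete (hypothetical name; it assumes each bin's mass sits at the bin midpoint, which is where the accuracy limit noted above comes from):

```c
#include <stddef.h>

/* Sketch of the histogram mean, treating the histogram as a
   probability distribution: weight each bin midpoint by its count,
   skipping negative bins as described above. */
static double
sketch_hist_mean (const double range[], const double bin[], size_t n)
{
  double wsum = 0.0, msum = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    {
      double w = bin[i];
      if (w < 0.0)
        continue;                             /* ignore negative bins */
      msum += w * 0.5 * (range[i] + range[i + 1]);  /* bin midpoint */
      wsum += w;
    }
  return msum / wsum;
}
```

The standard deviation can be computed the same way, accumulating the squared deviation of each midpoint from the mean.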
+
+ -- Function: double gsl_histogram_sum (const gsl_histogram * H)
+ This function returns the sum of all bin values. Negative bin
+ values are included in the sum.
+
+
+File: gsl-ref.info, Node: Histogram Operations, Next: Reading and writing histograms, Prev: Histogram Statistics, Up: Histograms
+
+21.7 Histogram Operations
+=========================
+
+ -- Function: int gsl_histogram_equal_bins_p (const gsl_histogram * H1,
+ const gsl_histogram * H2)
+     This function returns 1 if all of the individual bin ranges of
+ the two histograms are identical, and 0 otherwise.
+
+ -- Function: int gsl_histogram_add (gsl_histogram * H1, const
+ gsl_histogram * H2)
+ This function adds the contents of the bins in histogram H2 to the
+ corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i) +
+ h_2(i). The two histograms must have identical bin ranges.
+
+ -- Function: int gsl_histogram_sub (gsl_histogram * H1, const
+ gsl_histogram * H2)
+ This function subtracts the contents of the bins in histogram H2
+ from the corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i)
+ - h_2(i). The two histograms must have identical bin ranges.
+
+ -- Function: int gsl_histogram_mul (gsl_histogram * H1, const
+ gsl_histogram * H2)
+ This function multiplies the contents of the bins of histogram H1
+ by the contents of the corresponding bins in histogram H2, i.e.
+ h'_1(i) = h_1(i) * h_2(i). The two histograms must have identical
+ bin ranges.
+
+ -- Function: int gsl_histogram_div (gsl_histogram * H1, const
+ gsl_histogram * H2)
+ This function divides the contents of the bins of histogram H1 by
+ the contents of the corresponding bins in histogram H2, i.e.
+ h'_1(i) = h_1(i) / h_2(i). The two histograms must have identical
+ bin ranges.
+
+ -- Function: int gsl_histogram_scale (gsl_histogram * H, double SCALE)
+ This function multiplies the contents of the bins of histogram H
+ by the constant SCALE, i.e. h'_1(i) = h_1(i) * scale.
+
+ -- Function: int gsl_histogram_shift (gsl_histogram * H, double OFFSET)
+ This function shifts the contents of the bins of histogram H by
+ the constant OFFSET, i.e. h'_1(i) = h_1(i) + offset.
+
+
+File: gsl-ref.info, Node: Reading and writing histograms, Next: Resampling from histograms, Prev: Histogram Operations, Up: Histograms
+
+21.8 Reading and writing histograms
+===================================
+
+The library provides functions for reading and writing histograms to a
+file as binary data or formatted text.
+
+ -- Function: int gsl_histogram_fwrite (FILE * STREAM, const
+ gsl_histogram * H)
+ This function writes the ranges and bins of the histogram H to the
+ stream STREAM in binary format. The return value is 0 for success
+ and `GSL_EFAILED' if there was a problem writing to the file.
+ Since the data is written in the native binary format it may not
+ be portable between different architectures.
+
+ -- Function: int gsl_histogram_fread (FILE * STREAM, gsl_histogram * H)
+ This function reads into the histogram H from the open stream
+ STREAM in binary format. The histogram H must be preallocated
+ with the correct size since the function uses the number of bins
+ in H to determine how many bytes to read. The return value is 0
+ for success and `GSL_EFAILED' if there was a problem reading from
+ the file. The data is assumed to have been written in the native
+ binary format on the same architecture.
+
+ -- Function: int gsl_histogram_fprintf (FILE * STREAM, const
+ gsl_histogram * H, const char * RANGE_FORMAT, const char *
+ BIN_FORMAT)
+ This function writes the ranges and bins of the histogram H
+ line-by-line to the stream STREAM using the format specifiers
+ RANGE_FORMAT and BIN_FORMAT. These should be one of the `%g',
+ `%e' or `%f' formats for floating point numbers. The function
+ returns 0 for success and `GSL_EFAILED' if there was a problem
+ writing to the file. The histogram output is formatted in three
+ columns, and the columns are separated by spaces, like this,
+
+ range[0] range[1] bin[0]
+ range[1] range[2] bin[1]
+ range[2] range[3] bin[2]
+ ....
+ range[n-1] range[n] bin[n-1]
+
+ The values of the ranges are formatted using RANGE_FORMAT and the
+     values of the bins are formatted using BIN_FORMAT. Each line
+ contains the lower and upper limit of the range of the bins and the
+ value of the bin itself. Since the upper limit of one bin is the
+ lower limit of the next there is duplication of these values
+ between lines but this allows the histogram to be manipulated with
+ line-oriented tools.
+
+ -- Function: int gsl_histogram_fscanf (FILE * STREAM, gsl_histogram *
+ H)
+ This function reads formatted data from the stream STREAM into the
+ histogram H. The data is assumed to be in the three-column format
+ used by `gsl_histogram_fprintf'. The histogram H must be
+ preallocated with the correct length since the function uses the
+ size of H to determine how many numbers to read. The function
+ returns 0 for success and `GSL_EFAILED' if there was a problem
+ reading from the file.
+
+
+File: gsl-ref.info, Node: Resampling from histograms, Next: The histogram probability distribution struct, Prev: Reading and writing histograms, Up: Histograms
+
+21.9 Resampling from histograms
+===============================
+
+A histogram made by counting events can be regarded as a measurement of
+a probability distribution. Allowing for statistical error, the height
+of each bin represents the probability of an event where the value of x
+falls in the range of that bin. The probability distribution function
+has the one-dimensional form p(x)dx where,
+
+     p(x) = n_i / (N w_i)
+
+In this equation n_i is the number of events in the bin which contains
+x, w_i is the width of the bin and N is the total number of events.
+The distribution of events within each bin is assumed to be uniform.
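The normalization above can be sketched as follows (hypothetical names; note that dividing by the bin width w_i makes the result a density, so that sum_i p_i w_i = 1):

```c
#include <stddef.h>

/* Sketch of p_i = n_i / (N w_i): convert raw bin counts into a
   normalized probability density over the bin ranges. */
static void
sketch_counts_to_pdf (const double range[], const double bin[],
                      double p[], size_t n)
{
  double N = 0.0;           /* total number of events */
  size_t i;
  for (i = 0; i < n; i++)
    N += bin[i];
  for (i = 0; i < n; i++)
    {
      double w = range[i + 1] - range[i];   /* bin width */
      p[i] = bin[i] / (N * w);
    }
}
```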
+
+
+File: gsl-ref.info, Node: The histogram probability distribution struct, Next: Example programs for histograms, Prev: Resampling from histograms, Up: Histograms
+
+21.10 The histogram probability distribution struct
+===================================================
+
+The probability distribution function for a histogram consists of a set
+of "bins" which measure the probability of an event falling into a
+given range of a continuous variable x. A probability distribution
+function is defined by the following struct, which actually stores the
+cumulative probability distribution function. This is the natural
+quantity for generating samples via the inverse transform method,
+because there is a one-to-one mapping between the cumulative
+probability distribution and the range [0,1]. It can be shown that by
+taking a uniform random number in this range and finding its
+corresponding coordinate in the cumulative probability distribution we
+obtain samples with the desired probability distribution.
+
+ -- Data Type: gsl_histogram_pdf
+ `size_t n'
+ This is the number of bins used to approximate the probability
+ distribution function.
+
+ `double * range'
+ The ranges of the bins are stored in an array of N+1 elements
+ pointed to by RANGE.
+
+ `double * sum'
+ The cumulative probability for the bins is stored in an array
+          of N+1 elements pointed to by SUM.
+
+The following functions allow you to create a `gsl_histogram_pdf'
+struct which represents this probability distribution and generate
+random samples from it.
+
+ -- Function: gsl_histogram_pdf * gsl_histogram_pdf_alloc (size_t N)
+ This function allocates memory for a probability distribution with
+ N bins and returns a pointer to a newly initialized
+ `gsl_histogram_pdf' struct. If insufficient memory is available a
+ null pointer is returned and the error handler is invoked with an
+ error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_histogram_pdf_init (gsl_histogram_pdf * P, const
+ gsl_histogram * H)
+ This function initializes the probability distribution P with the
+ contents of the histogram H. If any of the bins of H are negative
+ then the error handler is invoked with an error code of `GSL_EDOM'
+ because a probability distribution cannot contain negative values.
+
+ -- Function: void gsl_histogram_pdf_free (gsl_histogram_pdf * P)
+ This function frees the probability distribution function P and
+ all of the memory associated with it.
+
+ -- Function: double gsl_histogram_pdf_sample (const gsl_histogram_pdf
+ * P, double R)
+ This function uses R, a uniform random number between zero and
+ one, to compute a single random sample from the probability
+ distribution P. The algorithm used to compute the sample s is
+ given by the following formula,
+
+ s = range[i] + delta * (range[i+1] - range[i])
+
+ where i is the index which satisfies sum[i] <= r < sum[i+1] and
+ delta is (r - sum[i])/(sum[i+1] - sum[i]).
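The formula can be sketched in plain C (hypothetical name; the sketch assumes a cumulative array of n+1 values with sum[0] = 0 and sum[n] = 1, and uses a linear scan for clarity where a binary search would normally be used):

```c
#include <stddef.h>

/* Sketch of inverse transform sampling from the formula above:
   find i with sum[i] <= r < sum[i+1], then interpolate linearly
   within bin i, s = range[i] + delta (range[i+1] - range[i]). */
static double
sketch_pdf_sample (const double range[], const double sum[],
                   size_t n, double r)
{
  size_t i = 0;
  double delta;
  while (i + 1 < n && r >= sum[i + 1])   /* linear scan for clarity */
    i++;
  delta = (r - sum[i]) / (sum[i + 1] - sum[i]);
  return range[i] + delta * (range[i + 1] - range[i]);
}
```

Feeding in uniform random numbers r produces samples distributed according to the histogram, as described at the start of this section.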
+
+
+File: gsl-ref.info, Node: Example programs for histograms, Next: Two dimensional histograms, Prev: The histogram probability distribution struct, Up: Histograms
+
+21.11 Example programs for histograms
+=====================================
+
+The following program shows how to make a simple histogram of a column
+of numerical data supplied on `stdin'. The program takes three
+arguments, specifying the upper and lower bounds of the histogram and
+the number of bins. It then reads numbers from `stdin', one line at a
+time, and adds them to the histogram. When there is no more data to
+read it prints out the accumulated histogram using
+`gsl_histogram_fprintf'.
+
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <gsl/gsl_histogram.h>
+
+ int
+ main (int argc, char **argv)
+ {
+ double a, b;
+ size_t n;
+
+ if (argc != 4)
+ {
+ printf ("Usage: gsl-histogram xmin xmax n\n"
+ "Computes a histogram of the data "
+ "on stdin using n bins from xmin "
+ "to xmax\n");
+ exit (0);
+ }
+
+ a = atof (argv[1]);
+ b = atof (argv[2]);
+ n = atoi (argv[3]);
+
+ {
+ double x;
+ gsl_histogram * h = gsl_histogram_alloc (n);
+ gsl_histogram_set_ranges_uniform (h, a, b);
+
+ while (fscanf (stdin, "%lg", &x) == 1)
+ {
+ gsl_histogram_increment (h, x);
+ }
+ gsl_histogram_fprintf (stdout, h, "%g", "%g");
+ gsl_histogram_free (h);
+ }
+ exit (0);
+ }
+
+Here is an example of the program in use. We generate 10000 random
+samples from a Cauchy distribution with a width of 30 and histogram
+them over the range -100 to 100, using 200 bins.
+
+ $ gsl-randist 0 10000 cauchy 30
+ | gsl-histogram -100 100 200 > histogram.dat
+
+A plot of the resulting histogram shows the familiar shape of the
+Cauchy distribution and the fluctuations caused by the finite sample
+size.
+
+ $ awk '{print $1, $3 ; print $2, $3}' histogram.dat
+ | graph -T X
+
+
+File: gsl-ref.info, Node: Two dimensional histograms, Next: The 2D histogram struct, Prev: Example programs for histograms, Up: Histograms
+
+21.12 Two dimensional histograms
+================================
+
+A two dimensional histogram consists of a set of "bins" which count the
+number of events falling in a given area of the (x,y) plane. The
+simplest way to use a two dimensional histogram is to record
+two-dimensional position information, n(x,y). Another possibility is
+to form a "joint distribution" by recording related variables. For
+example a detector might record both the position of an event (x) and
+the amount of energy it deposited E. These could be histogrammed as
+the joint distribution n(x,E).
+
+
+File: gsl-ref.info, Node: The 2D histogram struct, Next: 2D Histogram allocation, Prev: Two dimensional histograms, Up: Histograms
+
+21.13 The 2D histogram struct
+=============================
+
+Two dimensional histograms are defined by the following struct,
+
+ -- Data Type: gsl_histogram2d
+ `size_t nx, ny'
+ This is the number of histogram bins in the x and y
+ directions.
+
+ `double * xrange'
+ The ranges of the bins in the x-direction are stored in an
+ array of NX + 1 elements pointed to by XRANGE.
+
+ `double * yrange'
+ The ranges of the bins in the y-direction are stored in an
+ array of NY + 1 elements pointed to by YRANGE.
+
+ `double * bin'
+ The counts for each bin are stored in an array pointed to by
+ BIN. The bins are floating-point numbers, so you can
+ increment them by non-integer values if necessary. The array
+ BIN stores the two dimensional array of bins in a single
+ block of memory according to the mapping `bin(i,j)' = `bin[i
+ * ny + j]'.
+
+The range for `bin(i,j)' is given by `xrange[i]' to `xrange[i+1]' in
+the x-direction and `yrange[j]' to `yrange[j+1]' in the y-direction.
+Each bin is inclusive at the lower end and exclusive at the upper end.
+Mathematically this means that the bins are defined by the following
+inequality,
+ bin(i,j) corresponds to xrange[i] <= x < xrange[i+1]
+ and yrange[j] <= y < yrange[j+1]
+
+Note that any samples which fall on the upper sides of the histogram are
+excluded. If you want to include these values for the side bins you
+will need to add an extra row or column to your histogram.
+
+ The `gsl_histogram2d' struct and its associated functions are
+defined in the header file `gsl_histogram2d.h'.
+
+
+File: gsl-ref.info, Node: 2D Histogram allocation, Next: Copying 2D Histograms, Prev: The 2D histogram struct, Up: Histograms
+
+21.14 2D Histogram allocation
+=============================
+
+The functions for allocating memory to a 2D histogram follow the style
+of `malloc' and `free'. In addition they also perform their own error
+checking. If there is insufficient memory available to allocate a
+histogram then the functions call the error handler (with an error
+number of `GSL_ENOMEM') in addition to returning a null pointer. Thus
+if you use the library error handler to abort your program then it
+isn't necessary to check every 2D histogram `alloc'.
+
+ -- Function: gsl_histogram2d * gsl_histogram2d_alloc (size_t NX,
+ size_t NY)
+ This function allocates memory for a two-dimensional histogram with
+ NX bins in the x direction and NY bins in the y direction. The
+ function returns a pointer to a newly created `gsl_histogram2d'
+ struct. If insufficient memory is available a null pointer is
+ returned and the error handler is invoked with an error code of
+ `GSL_ENOMEM'. The bins and ranges must be initialized with one of
+ the functions below before the histogram is ready for use.
+
+ -- Function: int gsl_histogram2d_set_ranges (gsl_histogram2d * H,
+ const double XRANGE[], size_t XSIZE, const double YRANGE[],
+ size_t YSIZE)
+ This function sets the ranges of the existing histogram H using
+ the arrays XRANGE and YRANGE of size XSIZE and YSIZE respectively.
+ The values of the histogram bins are reset to zero.
+
+ -- Function: int gsl_histogram2d_set_ranges_uniform (gsl_histogram2d *
+ H, double XMIN, double XMAX, double YMIN, double YMAX)
+ This function sets the ranges of the existing histogram H to cover
+ the ranges XMIN to XMAX and YMIN to YMAX uniformly. The values of
+ the histogram bins are reset to zero.
+
+ -- Function: void gsl_histogram2d_free (gsl_histogram2d * H)
+ This function frees the 2D histogram H and all of the memory
+ associated with it.
+
+
+File: gsl-ref.info, Node: Copying 2D Histograms, Next: Updating and accessing 2D histogram elements, Prev: 2D Histogram allocation, Up: Histograms
+
+21.15 Copying 2D Histograms
+===========================
+
+ -- Function: int gsl_histogram2d_memcpy (gsl_histogram2d * DEST, const
+ gsl_histogram2d * SRC)
+ This function copies the histogram SRC into the pre-existing
+ histogram DEST, making DEST into an exact copy of SRC. The two
+ histograms must be of the same size.
+
+ -- Function: gsl_histogram2d * gsl_histogram2d_clone (const
+ gsl_histogram2d * SRC)
+ This function returns a pointer to a newly created histogram which
+ is an exact copy of the histogram SRC.
+
+
+File: gsl-ref.info, Node: Updating and accessing 2D histogram elements, Next: Searching 2D histogram ranges, Prev: Copying 2D Histograms, Up: Histograms
+
+21.16 Updating and accessing 2D histogram elements
+==================================================
+
+You can access the bins of a two-dimensional histogram either by
+specifying a pair of (x,y) coordinates or by using the bin indices
+(i,j) directly. The functions for accessing the histogram through
+(x,y) coordinates use binary searches in the x and y directions to
+identify the bin which covers the appropriate range.
+
+ -- Function: int gsl_histogram2d_increment (gsl_histogram2d * H,
+ double X, double Y)
+ This function updates the histogram H by adding one (1.0) to the
+ bin whose x and y ranges contain the coordinates (X,Y).
+
+ If the point (x,y) lies inside the valid ranges of the histogram
+ then the function returns zero to indicate success. If (x,y) lies
+ outside the limits of the histogram then the function returns
+ `GSL_EDOM', and none of the bins are modified. The error handler
+ is not called, since it is often necessary to compute histograms
+ for a small range of a larger dataset, ignoring any coordinates
+ outside the range of interest.
+
+ -- Function: int gsl_histogram2d_accumulate (gsl_histogram2d * H,
+ double X, double Y, double WEIGHT)
+ This function is similar to `gsl_histogram2d_increment' but
+ increases the value of the appropriate bin in the histogram H by
+ the floating-point number WEIGHT.
+
+ -- Function: double gsl_histogram2d_get (const gsl_histogram2d * H,
+ size_t I, size_t J)
+ This function returns the contents of the (I,J)-th bin of the
+ histogram H. If (I,J) lies outside the valid range of indices for
+ the histogram then the error handler is called with an error code
+ of `GSL_EDOM' and the function returns 0.
+
+ -- Function: int gsl_histogram2d_get_xrange (const gsl_histogram2d *
+ H, size_t I, double * XLOWER, double * XUPPER)
+ -- Function: int gsl_histogram2d_get_yrange (const gsl_histogram2d *
+ H, size_t J, double * YLOWER, double * YUPPER)
+ These functions find the upper and lower range limits of the I-th
+ and J-th bins in the x and y directions of the histogram H. The
+ range limits are stored in XLOWER and XUPPER or YLOWER and YUPPER.
+ The lower limits are inclusive (i.e. events with these
+ coordinates are included in the bin) and the upper limits are
+ exclusive (i.e. events with the value of the upper limit are not
+ included and fall in the neighboring higher bin, if it exists).
+ The functions return 0 to indicate success. If I or J lies
+ outside the valid range of indices for the histogram then the
+ error handler is called with an error code of `GSL_EDOM'.
+
+ -- Function: double gsl_histogram2d_xmax (const gsl_histogram2d * H)
+ -- Function: double gsl_histogram2d_xmin (const gsl_histogram2d * H)
+ -- Function: size_t gsl_histogram2d_nx (const gsl_histogram2d * H)
+ -- Function: double gsl_histogram2d_ymax (const gsl_histogram2d * H)
+ -- Function: double gsl_histogram2d_ymin (const gsl_histogram2d * H)
+ -- Function: size_t gsl_histogram2d_ny (const gsl_histogram2d * H)
+ These functions return the maximum upper and minimum lower range
+ limits and the number of bins for the x and y directions of the
+ histogram H. They provide a way of determining these values
+ without accessing the `gsl_histogram2d' struct directly.
+
+ -- Function: void gsl_histogram2d_reset (gsl_histogram2d * H)
+ This function resets all the bins of the histogram H to zero.
+
+
+File: gsl-ref.info, Node: Searching 2D histogram ranges, Next: 2D Histogram Statistics, Prev: Updating and accessing 2D histogram elements, Up: Histograms
+
+21.17 Searching 2D histogram ranges
+===================================
+
+The following functions are used by the access and update routines to
+locate the bin which corresponds to a given (x,y) coordinate.
+
+ -- Function: int gsl_histogram2d_find (const gsl_histogram2d * H,
+ double X, double Y, size_t * I, size_t * J)
+     This function finds and sets the indices I and J to the bin
+ which covers the coordinates (X,Y). The bin is located using a
+ binary search. The search includes an optimization for histograms
+ with uniform ranges, and will return the correct bin immediately
+ in this case. If (x,y) is found then the function sets the indices
+ (I,J) and returns `GSL_SUCCESS'. If (x,y) lies outside the valid
+ range of the histogram then the function returns `GSL_EDOM' and
+ the error handler is invoked.
+
+
+File: gsl-ref.info, Node: 2D Histogram Statistics, Next: 2D Histogram Operations, Prev: Searching 2D histogram ranges, Up: Histograms
+
+21.18 2D Histogram Statistics
+=============================
+
+ -- Function: double gsl_histogram2d_max_val (const gsl_histogram2d * H)
+ This function returns the maximum value contained in the histogram
+ bins.
+
+ -- Function: void gsl_histogram2d_max_bin (const gsl_histogram2d * H,
+ size_t * I, size_t * J)
+ This function finds the indices of the bin containing the maximum
+ value in the histogram H and stores the result in (I,J). In the
+ case where several bins contain the same maximum value the first
+ bin found is returned.
+
+ -- Function: double gsl_histogram2d_min_val (const gsl_histogram2d * H)
+ This function returns the minimum value contained in the histogram
+ bins.
+
+ -- Function: void gsl_histogram2d_min_bin (const gsl_histogram2d * H,
+ size_t * I, size_t * J)
+ This function finds the indices of the bin containing the minimum
+ value in the histogram H and stores the result in (I,J). In the
+     case where several bins contain the same minimum value the first
+ bin found is returned.
+
+ -- Function: double gsl_histogram2d_xmean (const gsl_histogram2d * H)
+ This function returns the mean of the histogrammed x variable,
+ where the histogram is regarded as a probability distribution.
+ Negative bin values are ignored for the purposes of this
+ calculation.
+
+ -- Function: double gsl_histogram2d_ymean (const gsl_histogram2d * H)
+ This function returns the mean of the histogrammed y variable,
+ where the histogram is regarded as a probability distribution.
+ Negative bin values are ignored for the purposes of this
+ calculation.
+
+ -- Function: double gsl_histogram2d_xsigma (const gsl_histogram2d * H)
+ This function returns the standard deviation of the histogrammed x
+ variable, where the histogram is regarded as a probability
+ distribution. Negative bin values are ignored for the purposes of
+ this calculation.
+
+ -- Function: double gsl_histogram2d_ysigma (const gsl_histogram2d * H)
+ This function returns the standard deviation of the histogrammed y
+ variable, where the histogram is regarded as a probability
+ distribution. Negative bin values are ignored for the purposes of
+ this calculation.
+
+ -- Function: double gsl_histogram2d_cov (const gsl_histogram2d * H)
+ This function returns the covariance of the histogrammed x and y
+ variables, where the histogram is regarded as a probability
+ distribution. Negative bin values are ignored for the purposes of
+ this calculation.
+
+ -- Function: double gsl_histogram2d_sum (const gsl_histogram2d * H)
+ This function returns the sum of all bin values. Negative bin
+ values are included in the sum.
+
+
+File: gsl-ref.info, Node: 2D Histogram Operations, Next: Reading and writing 2D histograms, Prev: 2D Histogram Statistics, Up: Histograms
+
+21.19 2D Histogram Operations
+=============================
+
+ -- Function: int gsl_histogram2d_equal_bins_p (const gsl_histogram2d *
+ H1, const gsl_histogram2d * H2)
+ This function returns 1 if all the individual bin ranges of the two
+ histograms are identical, and 0 otherwise.
+
+ -- Function: int gsl_histogram2d_add (gsl_histogram2d * H1, const
+ gsl_histogram2d * H2)
+ This function adds the contents of the bins in histogram H2 to the
+ corresponding bins of histogram H1, i.e. h'_1(i,j) = h_1(i,j) +
+ h_2(i,j). The two histograms must have identical bin ranges.
+
+ -- Function: int gsl_histogram2d_sub (gsl_histogram2d * H1, const
+ gsl_histogram2d * H2)
+ This function subtracts the contents of the bins in histogram H2
+ from the corresponding bins of histogram H1, i.e. h'_1(i,j) =
+ h_1(i,j) - h_2(i,j). The two histograms must have identical bin
+ ranges.
+
+ -- Function: int gsl_histogram2d_mul (gsl_histogram2d * H1, const
+ gsl_histogram2d * H2)
+ This function multiplies the contents of the bins of histogram H1
+ by the contents of the corresponding bins in histogram H2, i.e.
+ h'_1(i,j) = h_1(i,j) * h_2(i,j). The two histograms must have
+ identical bin ranges.
+
+ -- Function: int gsl_histogram2d_div (gsl_histogram2d * H1, const
+ gsl_histogram2d * H2)
+ This function divides the contents of the bins of histogram H1 by
+ the contents of the corresponding bins in histogram H2, i.e.
+ h'_1(i,j) = h_1(i,j) / h_2(i,j). The two histograms must have
+ identical bin ranges.
+
+ -- Function: int gsl_histogram2d_scale (gsl_histogram2d * H, double
+ SCALE)
+ This function multiplies the contents of the bins of histogram H
+     by the constant SCALE, i.e. h'_1(i,j) = h_1(i,j) * scale.
+
+ -- Function: int gsl_histogram2d_shift (gsl_histogram2d * H, double
+ OFFSET)
+ This function shifts the contents of the bins of histogram H by
+ the constant OFFSET, i.e. h'_1(i,j) = h_1(i,j) + offset.
+
+
+File: gsl-ref.info, Node: Reading and writing 2D histograms, Next: Resampling from 2D histograms, Prev: 2D Histogram Operations, Up: Histograms
+
+21.20 Reading and writing 2D histograms
+=======================================
+
+The library provides functions for reading and writing two dimensional
+histograms to a file as binary data or formatted text.
+
+ -- Function: int gsl_histogram2d_fwrite (FILE * STREAM, const
+ gsl_histogram2d * H)
+ This function writes the ranges and bins of the histogram H to the
+ stream STREAM in binary format. The return value is 0 for success
+ and `GSL_EFAILED' if there was a problem writing to the file.
+ Since the data is written in the native binary format it may not
+ be portable between different architectures.
+
+ -- Function: int gsl_histogram2d_fread (FILE * STREAM, gsl_histogram2d
+ * H)
+ This function reads into the histogram H from the stream STREAM in
+ binary format. The histogram H must be preallocated with the
+ correct size since the function uses the number of x and y bins in
+ H to determine how many bytes to read. The return value is 0 for
+ success and `GSL_EFAILED' if there was a problem reading from the
+ file. The data is assumed to have been written in the native
+ binary format on the same architecture.
+
+ -- Function: int gsl_histogram2d_fprintf (FILE * STREAM, const
+ gsl_histogram2d * H, const char * RANGE_FORMAT, const char *
+ BIN_FORMAT)
+ This function writes the ranges and bins of the histogram H
+ line-by-line to the stream STREAM using the format specifiers
+ RANGE_FORMAT and BIN_FORMAT. These should be one of the `%g',
+ `%e' or `%f' formats for floating point numbers. The function
+ returns 0 for success and `GSL_EFAILED' if there was a problem
+ writing to the file. The histogram output is formatted in five
+ columns, and the columns are separated by spaces, like this,
+
+ xrange[0] xrange[1] yrange[0] yrange[1] bin(0,0)
+ xrange[0] xrange[1] yrange[1] yrange[2] bin(0,1)
+ xrange[0] xrange[1] yrange[2] yrange[3] bin(0,2)
+ ....
+ xrange[0] xrange[1] yrange[ny-1] yrange[ny] bin(0,ny-1)
+
+ xrange[1] xrange[2] yrange[0] yrange[1] bin(1,0)
+ xrange[1] xrange[2] yrange[1] yrange[2] bin(1,1)
+          xrange[1] xrange[2] yrange[2] yrange[3] bin(1,2)
+ ....
+ xrange[1] xrange[2] yrange[ny-1] yrange[ny] bin(1,ny-1)
+
+ ....
+
+ xrange[nx-1] xrange[nx] yrange[0] yrange[1] bin(nx-1,0)
+ xrange[nx-1] xrange[nx] yrange[1] yrange[2] bin(nx-1,1)
+          xrange[nx-1] xrange[nx] yrange[2] yrange[3] bin(nx-1,2)
+ ....
+ xrange[nx-1] xrange[nx] yrange[ny-1] yrange[ny] bin(nx-1,ny-1)
+
+ Each line contains the lower and upper limits of the bin and the
+     contents of the bin.  Since the upper limits of each bin are
+     the lower limits of the neighboring bins there is duplication of
+     these values, but this allows the histogram to be manipulated with
+ line-oriented tools.
+
+ -- Function: int gsl_histogram2d_fscanf (FILE * STREAM,
+ gsl_histogram2d * H)
+ This function reads formatted data from the stream STREAM into the
+ histogram H. The data is assumed to be in the five-column format
+     used by `gsl_histogram2d_fprintf'. The histogram H must be
+ preallocated with the correct lengths since the function uses the
+ sizes of H to determine how many numbers to read. The function
+ returns 0 for success and `GSL_EFAILED' if there was a problem
+ reading from the file.
+
+
+File: gsl-ref.info, Node: Resampling from 2D histograms, Next: Example programs for 2D histograms, Prev: Reading and writing 2D histograms, Up: Histograms
+
+21.21 Resampling from 2D histograms
+===================================
+
+As in the one-dimensional case, a two-dimensional histogram made by
+counting events can be regarded as a measurement of a probability
+distribution. Allowing for statistical error, the height of each bin
+represents the probability of an event where (x,y) falls in the range
+of that bin. For a two-dimensional histogram the probability
+distribution takes the form p(x,y) dx dy where,
+
+ p(x,y) = n_{ij}/ (N A_{ij})
+
+In this equation n_{ij} is the number of events in the bin which
+contains (x,y), A_{ij} is the area of the bin and N is the total number
+of events. The distribution of events within each bin is assumed to be
+uniform.
+
+ -- Data Type: gsl_histogram2d_pdf
+ `size_t nx, ny'
+ This is the number of histogram bins used to approximate the
+ probability distribution function in the x and y directions.
+
+ `double * xrange'
+ The ranges of the bins in the x-direction are stored in an
+ array of NX + 1 elements pointed to by XRANGE.
+
+ `double * yrange'
+ The ranges of the bins in the y-direction are stored in an
+          array of NY + 1 elements pointed to by YRANGE.
+
+ `double * sum'
+ The cumulative probability for the bins is stored in an array
+ of NX*NY elements pointed to by SUM.
+
+The following functions allow you to create a `gsl_histogram2d_pdf'
+struct which represents a two dimensional probability distribution and
+generate random samples from it.
+
+ -- Function: gsl_histogram2d_pdf * gsl_histogram2d_pdf_alloc (size_t
+ NX, size_t NY)
+ This function allocates memory for a two-dimensional probability
+ distribution of size NX-by-NY and returns a pointer to a newly
+ initialized `gsl_histogram2d_pdf' struct. If insufficient memory
+ is available a null pointer is returned and the error handler is
+ invoked with an error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_histogram2d_pdf_init (gsl_histogram2d_pdf * P,
+ const gsl_histogram2d * H)
+ This function initializes the two-dimensional probability
+     distribution P with the contents of the histogram H.  If any of the
+ bins of H are negative then the error handler is invoked with an
+ error code of `GSL_EDOM' because a probability distribution cannot
+ contain negative values.
+
+ -- Function: void gsl_histogram2d_pdf_free (gsl_histogram2d_pdf * P)
+ This function frees the two-dimensional probability distribution
+ function P and all of the memory associated with it.
+
+ -- Function: int gsl_histogram2d_pdf_sample (const gsl_histogram2d_pdf
+ * P, double R1, double R2, double * X, double * Y)
+ This function uses two uniform random numbers between zero and one,
+ R1 and R2, to compute a single random sample from the
+ two-dimensional probability distribution P.
+
+
+File: gsl-ref.info, Node: Example programs for 2D histograms, Prev: Resampling from 2D histograms, Up: Histograms
+
+21.22 Example programs for 2D histograms
+========================================
+
+This program demonstrates two features of two-dimensional histograms.
+First a 10-by-10 two-dimensional histogram is created with x and y
+running from 0 to 1. Then a few sample points are added to the
+histogram, at (0.3,0.3) with a height of 1, at (0.8,0.1) with a height
+of 5 and at (0.7,0.9) with a height of 0.5. This histogram with three
+events is used to generate a random sample of 1000 simulated events,
+which are printed out.
+
+ #include <stdio.h>
+ #include <gsl/gsl_rng.h>
+ #include <gsl/gsl_histogram2d.h>
+
+ int
+ main (void)
+ {
+ const gsl_rng_type * T;
+ gsl_rng * r;
+
+ gsl_histogram2d * h = gsl_histogram2d_alloc (10, 10);
+
+ gsl_histogram2d_set_ranges_uniform (h,
+ 0.0, 1.0,
+ 0.0, 1.0);
+
+ gsl_histogram2d_accumulate (h, 0.3, 0.3, 1);
+ gsl_histogram2d_accumulate (h, 0.8, 0.1, 5);
+ gsl_histogram2d_accumulate (h, 0.7, 0.9, 0.5);
+
+ gsl_rng_env_setup ();
+
+ T = gsl_rng_default;
+ r = gsl_rng_alloc (T);
+
+ {
+ int i;
+ gsl_histogram2d_pdf * p
+ = gsl_histogram2d_pdf_alloc (h->nx, h->ny);
+
+ gsl_histogram2d_pdf_init (p, h);
+
+ for (i = 0; i < 1000; i++) {
+ double x, y;
+ double u = gsl_rng_uniform (r);
+ double v = gsl_rng_uniform (r);
+
+ gsl_histogram2d_pdf_sample (p, u, v, &x, &y);
+
+ printf ("%g %g\n", x, y);
+ }
+
+ gsl_histogram2d_pdf_free (p);
+ }
+
+ gsl_histogram2d_free (h);
+ gsl_rng_free (r);
+
+ return 0;
+ }
+
+
+
+File: gsl-ref.info, Node: N-tuples, Next: Monte Carlo Integration, Prev: Histograms, Up: Top
+
+22 N-tuples
+***********
+
+This chapter describes functions for creating and manipulating
+"ntuples", sets of values associated with events. The ntuples are
+stored in files. Their values can be extracted in any combination and
+"booked" in a histogram using a selection function.
+
+ The values to be stored are held in a user-defined data structure,
+and an ntuple is created associating this data structure with a file.
+The values are then written to the file (normally inside a loop) using
+the ntuple functions described below.
+
+ A histogram can be created from ntuple data by providing a selection
+function and a value function. The selection function specifies whether
+an event should be included in the subset to be analyzed or not. The
+value function computes the entry to be added to the histogram for each
+event.
+
+ All the ntuple functions are defined in the header file
+`gsl_ntuple.h'.
+
+* Menu:
+
+* The ntuple struct::
+* Creating ntuples::
+* Opening an existing ntuple file::
+* Writing ntuples::
+* Reading ntuples::
+* Closing an ntuple file::
+* Histogramming ntuple values::
+* Example ntuple programs::
+* Ntuple References and Further Reading::
+
+
+File: gsl-ref.info, Node: The ntuple struct, Next: Creating ntuples, Up: N-tuples
+
+22.1 The ntuple struct
+======================
+
+Ntuples are manipulated using the `gsl_ntuple' struct. This struct
+contains information on the file where the ntuple data is stored, a
+pointer to the current ntuple data row and the size of the user-defined
+ntuple data struct.
+
+ typedef struct {
+ FILE * file;
+ void * ntuple_data;
+ size_t size;
+ } gsl_ntuple;
+
+
+File: gsl-ref.info, Node: Creating ntuples, Next: Opening an existing ntuple file, Prev: The ntuple struct, Up: N-tuples
+
+22.2 Creating ntuples
+=====================
+
+ -- Function: gsl_ntuple * gsl_ntuple_create (char * FILENAME, void *
+ NTUPLE_DATA, size_t SIZE)
+ This function creates a new write-only ntuple file FILENAME for
+ ntuples of size SIZE and returns a pointer to the newly created
+ ntuple struct. Any existing file with the same name is truncated
+ to zero length and overwritten. A pointer to memory for the
+ current ntuple row NTUPLE_DATA must be supplied--this is used to
+ copy ntuples in and out of the file.
+
+
+File: gsl-ref.info, Node: Opening an existing ntuple file, Next: Writing ntuples, Prev: Creating ntuples, Up: N-tuples
+
+22.3 Opening an existing ntuple file
+====================================
+
+ -- Function: gsl_ntuple * gsl_ntuple_open (char * FILENAME, void *
+ NTUPLE_DATA, size_t SIZE)
+ This function opens an existing ntuple file FILENAME for reading
+ and returns a pointer to a corresponding ntuple struct. The
+ ntuples in the file must have size SIZE. A pointer to memory for
+ the current ntuple row NTUPLE_DATA must be supplied--this is used
+ to copy ntuples in and out of the file.
+
+
+File: gsl-ref.info, Node: Writing ntuples, Next: Reading ntuples, Prev: Opening an existing ntuple file, Up: N-tuples
+
+22.4 Writing ntuples
+====================
+
+ -- Function: int gsl_ntuple_write (gsl_ntuple * NTUPLE)
+ This function writes the current ntuple NTUPLE->NTUPLE_DATA of
+ size NTUPLE->SIZE to the corresponding file.
+
+ -- Function: int gsl_ntuple_bookdata (gsl_ntuple * NTUPLE)
+ This function is a synonym for `gsl_ntuple_write'.
+
+
+File: gsl-ref.info, Node: Reading ntuples, Next: Closing an ntuple file, Prev: Writing ntuples, Up: N-tuples
+
+22.5 Reading ntuples
+====================
+
+ -- Function: int gsl_ntuple_read (gsl_ntuple * NTUPLE)
+ This function reads the current row of the ntuple file for NTUPLE
+     and stores the values in NTUPLE->NTUPLE_DATA.
+
+
+File: gsl-ref.info, Node: Closing an ntuple file, Next: Histogramming ntuple values, Prev: Reading ntuples, Up: N-tuples
+
+22.6 Closing an ntuple file
+===========================
+
+ -- Function: int gsl_ntuple_close (gsl_ntuple * NTUPLE)
+ This function closes the ntuple file NTUPLE and frees its
+ associated allocated memory.
+
+
+File: gsl-ref.info, Node: Histogramming ntuple values, Next: Example ntuple programs, Prev: Closing an ntuple file, Up: N-tuples
+
+22.7 Histogramming ntuple values
+================================
+
+Once an ntuple has been created its contents can be histogrammed in
+various ways using the function `gsl_ntuple_project'. Two user-defined
+functions must be provided, a function to select events and a function
+to compute scalar values. The selection function and the value function
+both accept the ntuple row as a first argument and other parameters as
+a second argument.
+
+ The "selection function" determines which ntuple rows are selected
+for histogramming. It is defined by the following struct,
+ typedef struct {
+ int (* function) (void * ntuple_data, void * params);
+ void * params;
+ } gsl_ntuple_select_fn;
+
+The struct component FUNCTION should return a non-zero value for each
+ntuple row that is to be included in the histogram.
+
+ The "value function" computes scalar values for those ntuple rows
+selected by the selection function,
+ typedef struct {
+ double (* function) (void * ntuple_data, void * params);
+ void * params;
+ } gsl_ntuple_value_fn;
+
+In this case the struct component FUNCTION should return the value to
+be added to the histogram for the ntuple row.
+
+ -- Function: int gsl_ntuple_project (gsl_histogram * H, gsl_ntuple *
+ NTUPLE, gsl_ntuple_value_fn * VALUE_FUNC,
+ gsl_ntuple_select_fn * SELECT_FUNC)
+ This function updates the histogram H from the ntuple NTUPLE using
+ the functions VALUE_FUNC and SELECT_FUNC. For each ntuple row
+ where the selection function SELECT_FUNC is non-zero the
+ corresponding value of that row is computed using the function
+ VALUE_FUNC and added to the histogram. Those ntuple rows where
+ SELECT_FUNC returns zero are ignored. New entries are added to
+ the histogram, so subsequent calls can be used to accumulate
+ further data in the same histogram.
+
+
+File: gsl-ref.info, Node: Example ntuple programs, Next: Ntuple References and Further Reading, Prev: Histogramming ntuple values, Up: N-tuples
+
+22.8 Examples
+=============
+
+The following example programs demonstrate the use of ntuples in
+managing a large dataset. The first program creates a set of 10,000
+simulated "events", each with 3 associated values (x,y,z). These are
+generated from a gaussian distribution with unit variance, for
+demonstration purposes, and written to the ntuple file `test.dat'.
+
+ #include <gsl/gsl_ntuple.h>
+ #include <gsl/gsl_rng.h>
+ #include <gsl/gsl_randist.h>
+
+ struct data
+ {
+ double x;
+ double y;
+ double z;
+ };
+
+ int
+ main (void)
+ {
+ const gsl_rng_type * T;
+ gsl_rng * r;
+
+ struct data ntuple_row;
+ int i;
+
+ gsl_ntuple *ntuple
+ = gsl_ntuple_create ("test.dat", &ntuple_row,
+ sizeof (ntuple_row));
+
+ gsl_rng_env_setup ();
+
+ T = gsl_rng_default;
+ r = gsl_rng_alloc (T);
+
+ for (i = 0; i < 10000; i++)
+ {
+ ntuple_row.x = gsl_ran_ugaussian (r);
+ ntuple_row.y = gsl_ran_ugaussian (r);
+ ntuple_row.z = gsl_ran_ugaussian (r);
+
+ gsl_ntuple_write (ntuple);
+ }
+
+ gsl_ntuple_close (ntuple);
+ gsl_rng_free (r);
+
+ return 0;
+ }
+
+The next program analyses the ntuple data in the file `test.dat'. The
+analysis procedure is to compute the squared-magnitude of each event,
+E^2=x^2+y^2+z^2, and select only those which exceed a lower limit of
+1.5. The selected events are then histogrammed using their E^2 values.
+
+ #include <math.h>
+ #include <gsl/gsl_ntuple.h>
+ #include <gsl/gsl_histogram.h>
+
+ struct data
+ {
+ double x;
+ double y;
+ double z;
+ };
+
+ int sel_func (void *ntuple_data, void *params);
+ double val_func (void *ntuple_data, void *params);
+
+ int
+ main (void)
+ {
+ struct data ntuple_row;
+
+ gsl_ntuple *ntuple
+ = gsl_ntuple_open ("test.dat", &ntuple_row,
+ sizeof (ntuple_row));
+ double lower = 1.5;
+
+ gsl_ntuple_select_fn S;
+ gsl_ntuple_value_fn V;
+
+ gsl_histogram *h = gsl_histogram_alloc (100);
+ gsl_histogram_set_ranges_uniform(h, 0.0, 10.0);
+
+ S.function = &sel_func;
+ S.params = &lower;
+
+ V.function = &val_func;
+ V.params = 0;
+
+ gsl_ntuple_project (h, ntuple, &V, &S);
+ gsl_histogram_fprintf (stdout, h, "%f", "%f");
+ gsl_histogram_free (h);
+ gsl_ntuple_close (ntuple);
+
+ return 0;
+ }
+
+ int
+ sel_func (void *ntuple_data, void *params)
+ {
+ struct data * data = (struct data *) ntuple_data;
+ double x, y, z, E2, scale;
+ scale = *(double *) params;
+
+ x = data->x;
+ y = data->y;
+ z = data->z;
+
+ E2 = x * x + y * y + z * z;
+
+ return E2 > scale;
+ }
+
+ double
+ val_func (void *ntuple_data, void *params)
+ {
+ struct data * data = (struct data *) ntuple_data;
+ double x, y, z;
+
+ x = data->x;
+ y = data->y;
+ z = data->z;
+
+ return x * x + y * y + z * z;
+ }
+
+ The following plot shows the distribution of the selected events.
+Note the cut-off at the lower bound.
+
+
+File: gsl-ref.info, Node: Ntuple References and Further Reading, Prev: Example ntuple programs, Up: N-tuples
+
+22.9 References and Further Reading
+===================================
+
+Further information on the use of ntuples can be found in the
+documentation for the CERN packages PAW and HBOOK (available online).
+
+
+File: gsl-ref.info, Node: Monte Carlo Integration, Next: Simulated Annealing, Prev: N-tuples, Up: Top
+
+23 Monte Carlo Integration
+**************************
+
+This chapter describes routines for multidimensional Monte Carlo
+integration. These include the traditional Monte Carlo method and
+adaptive algorithms such as VEGAS and MISER which use importance
+sampling and stratified sampling techniques. Each algorithm computes an
+estimate of a multidimensional definite integral of the form,
+
+ I = \int_xl^xu dx \int_yl^yu dy ... f(x, y, ...)
+
+over a hypercubic region ((x_l,x_u), (y_l,y_u), ...) using a fixed
+number of function calls. The routines also provide a statistical
+estimate of the error on the result. This error estimate should be
+taken as a guide rather than as a strict error bound--random sampling
+of the region may not uncover all the important features of the
+function, resulting in an underestimate of the error.
+
+ The functions are defined in separate header files for each routine,
+`gsl_monte_plain.h', `gsl_monte_miser.h' and `gsl_monte_vegas.h'.
+
+* Menu:
+
+* Monte Carlo Interface::
+* PLAIN Monte Carlo::
+* MISER::
+* VEGAS::
+* Monte Carlo Examples::
+* Monte Carlo Integration References and Further Reading::
+
+
+File: gsl-ref.info, Node: Monte Carlo Interface, Next: PLAIN Monte Carlo, Up: Monte Carlo Integration
+
+23.1 Interface
+==============
+
+All of the Monte Carlo integration routines use the same general form of
+interface. There is an allocator to allocate memory for control
+variables and workspace, a routine to initialize those control
+variables, the integrator itself, and a function to free the space when
+done.
+
+ Each integration function requires a random number generator to be
+supplied, and returns an estimate of the integral and its standard
+deviation. The accuracy of the result is determined by the number of
+function calls specified by the user. If a known level of accuracy is
+required this can be achieved by calling the integrator several times
+and averaging the individual results until the desired accuracy is
+obtained.
+
+ Random sample points used within the Monte Carlo routines are always
+chosen strictly within the integration region, so that endpoint
+singularities are automatically avoided.
+
+ The function to be integrated has its own datatype, defined in the
+header file `gsl_monte.h'.
+
+ -- Data Type: gsl_monte_function
+ This data type defines a general function with parameters for Monte
+ Carlo integration.
+
+ `double (* f) (double * X, size_t DIM, void * PARAMS)'
+ this function should return the value f(x,params) for the
+ argument X and parameters PARAMS, where X is an array of size
+ DIM giving the coordinates of the point where the function is
+ to be evaluated.
+
+ `size_t dim'
+ the number of dimensions for X.
+
+ `void * params'
+ a pointer to the parameters of the function.
+
+Here is an example for a quadratic function in two dimensions,
+
+ f(x,y) = a x^2 + b x y + c y^2
+
+with a = 3, b = 2, c = 1. The following code defines a
+`gsl_monte_function' `F' which you could pass to an integrator:
+
+ struct my_f_params { double a; double b; double c; };
+
+ double
+ my_f (double x[], size_t dim, void * p) {
+ struct my_f_params * fp = (struct my_f_params *)p;
+
+ if (dim != 2)
+ {
+ fprintf (stderr, "error: dim != 2");
+ abort ();
+ }
+
+ return fp->a * x[0] * x[0]
+ + fp->b * x[0] * x[1]
+ + fp->c * x[1] * x[1];
+ }
+
+ gsl_monte_function F;
+ struct my_f_params params = { 3.0, 2.0, 1.0 };
+
+ F.f = &my_f;
+ F.dim = 2;
+ F.params = &params;
+
+The function f(x) can be evaluated using the following macro,
+
+ #define GSL_MONTE_FN_EVAL(F,x) \
+ (*((F)->f))(x,(F)->dim,(F)->params)
+
+
+File: gsl-ref.info, Node: PLAIN Monte Carlo, Next: MISER, Prev: Monte Carlo Interface, Up: Monte Carlo Integration
+
+23.2 PLAIN Monte Carlo
+======================
+
+The plain Monte Carlo algorithm samples points randomly from the
+integration region to estimate the integral and its error. Using this
+algorithm the estimate of the integral E(f; N) for N randomly
+distributed points x_i is given by,
+
+ E(f; N) = V <f> = (V / N) \sum_i^N f(x_i)
+
+where V is the volume of the integration region. The error on this
+estimate \sigma(E;N) is calculated from the estimated variance of the
+mean,
+
+ \sigma^2 (E; N) = (V^2 / N^2) \sum_i^N (f(x_i) - <f>)^2.
+
+For large N this variance decreases asymptotically as \Var(f)/N, where
+\Var(f) is the true variance of the function over the integration
+region. The error estimate itself should decrease as
+\sigma(f)/\sqrt{N}. The familiar law of errors decreasing as
+1/\sqrt{N} applies--to reduce the error by a factor of 10 requires a
+100-fold increase in the number of sample points.
+
+ The functions described in this section are declared in the header
+file `gsl_monte_plain.h'.
+
+ -- Function: gsl_monte_plain_state * gsl_monte_plain_alloc (size_t DIM)
+ This function allocates and initializes a workspace for Monte Carlo
+ integration in DIM dimensions.
+
+ -- Function: int gsl_monte_plain_init (gsl_monte_plain_state* S)
+ This function initializes a previously allocated integration state.
+ This allows an existing workspace to be reused for different
+ integrations.
+
+ -- Function: int gsl_monte_plain_integrate (gsl_monte_function * F,
+ double * XL, double * XU, size_t DIM, size_t CALLS, gsl_rng *
+ R, gsl_monte_plain_state * S, double * RESULT, double *
+ ABSERR)
+ This routine uses the plain Monte Carlo algorithm to integrate the
+ function F over the DIM-dimensional hypercubic region defined by
+ the lower and upper limits in the arrays XL and XU, each of size
+ DIM. The integration uses a fixed number of function calls CALLS,
+ and obtains random sampling points using the random number
+ generator R. A previously allocated workspace S must be supplied.
+ The result of the integration is returned in RESULT, with an
+ estimated absolute error ABSERR.
+
+ -- Function: void gsl_monte_plain_free (gsl_monte_plain_state * S)
+ This function frees the memory associated with the integrator state
+ S.
+
+
+File: gsl-ref.info, Node: MISER, Next: VEGAS, Prev: PLAIN Monte Carlo, Up: Monte Carlo Integration
+
+23.3 MISER
+==========
+
+The MISER algorithm of Press and Farrar is based on recursive
+stratified sampling. This technique aims to reduce the overall
+integration error by concentrating integration points in the regions of
+highest variance.
+
+ The idea of stratified sampling begins with the observation that for
+two disjoint regions a and b with Monte Carlo estimates of the integral
+E_a(f) and E_b(f) and variances \sigma_a^2(f) and \sigma_b^2(f), the
+variance \Var(f) of the combined estimate E(f) = (1/2) (E_a(f) + E_b(f))
+is given by,
+
+ \Var(f) = (\sigma_a^2(f) / 4 N_a) + (\sigma_b^2(f) / 4 N_b).
+
+It can be shown that this variance is minimized by distributing the
+points such that,
+
+ N_a / (N_a + N_b) = \sigma_a / (\sigma_a + \sigma_b).
+
+Hence the smallest error estimate is obtained by allocating sample
+points in proportion to the standard deviation of the function in each
+sub-region.
+
+ The MISER algorithm proceeds by bisecting the integration region
+along one coordinate axis to give two sub-regions at each step. The
+direction is chosen by examining all d possible bisections and
+selecting the one which will minimize the combined variance of the two
+sub-regions. The variance in the sub-regions is estimated by sampling
+with a fraction of the total number of points available to the current
+step. The same procedure is then repeated recursively for each of the
+two half-spaces from the best bisection. The remaining sample points are
+allocated to the sub-regions using the formula for N_a and N_b. This
+recursive allocation of integration points continues down to a
+user-specified depth where each sub-region is integrated using a plain
+Monte Carlo estimate. These individual values and their error
+estimates are then combined upwards to give an overall result and an
+estimate of its error.
+
+ The functions described in this section are declared in the header
+file `gsl_monte_miser.h'.
+
+ -- Function: gsl_monte_miser_state * gsl_monte_miser_alloc (size_t DIM)
+ This function allocates and initializes a workspace for Monte Carlo
+ integration in DIM dimensions. The workspace is used to maintain
+ the state of the integration.
+
+ -- Function: int gsl_monte_miser_init (gsl_monte_miser_state* S)
+ This function initializes a previously allocated integration state.
+ This allows an existing workspace to be reused for different
+ integrations.
+
+ -- Function: int gsl_monte_miser_integrate (gsl_monte_function * F,
+ double * XL, double * XU, size_t DIM, size_t CALLS, gsl_rng *
+ R, gsl_monte_miser_state * S, double * RESULT, double *
+ ABSERR)
+ This routine uses the MISER Monte Carlo algorithm to integrate the
+ function F over the DIM-dimensional hypercubic region defined by
+ the lower and upper limits in the arrays XL and XU, each of size
+ DIM. The integration uses a fixed number of function calls CALLS,
+ and obtains random sampling points using the random number
+ generator R. A previously allocated workspace S must be supplied.
+ The result of the integration is returned in RESULT, with an
+ estimated absolute error ABSERR.
+
+ -- Function: void gsl_monte_miser_free (gsl_monte_miser_state * S)
+ This function frees the memory associated with the integrator state
+ S.
+
+ The MISER algorithm has several configurable parameters. The
+following variables can be accessed through the `gsl_monte_miser_state'
+struct,
+
+ -- Variable: double estimate_frac
+ This parameter specifies the fraction of the currently available
+ number of function calls which are allocated to estimating the
+ variance at each recursive step. The default value is 0.1.
+
+ -- Variable: size_t min_calls
+ This parameter specifies the minimum number of function calls
+ required for each estimate of the variance. If the number of
+ function calls allocated to the estimate using ESTIMATE_FRAC falls
+ below MIN_CALLS then MIN_CALLS are used instead. This ensures
+ that each estimate maintains a reasonable level of accuracy. The
+ default value of MIN_CALLS is `16 * dim'.
+
+ -- Variable: size_t min_calls_per_bisection
+ This parameter specifies the minimum number of function calls
+ required to proceed with a bisection step. When a recursive step
+ has fewer calls available than MIN_CALLS_PER_BISECTION it performs
+ a plain Monte Carlo estimate of the current sub-region and
+ terminates its branch of the recursion. The default value of this
+ parameter is `32 * min_calls'.
+
+ -- Variable: double alpha
+ This parameter controls how the estimated variances for the two
+ sub-regions of a bisection are combined when allocating points.
+ With recursive sampling the overall variance should scale better
+ than 1/N, since the values from the sub-regions will be obtained
+ using a procedure which explicitly minimizes their variance. To
+ accommodate this behavior the MISER algorithm allows the total
+ variance to depend on a scaling parameter \alpha,
+
+ \Var(f) = {\sigma_a^2 \over N_a^\alpha} + {\sigma_b^2 \over N_b^\alpha}.
+
+ The authors of the original paper describing MISER recommend the
+ value \alpha = 2 as a good choice, obtained from numerical
+ experiments, and this is used as the default value in this
+ implementation.
+
+ -- Variable: double dither
+ This parameter introduces a random fractional variation of size
+ DITHER into each bisection, which can be used to break the
+ symmetry of integrands which are concentrated near the exact
+ center of the hypercubic integration region. The default value of
+ dither is zero, so no variation is introduced. If needed, a
+ typical value of DITHER is 0.1.
+
+
+File: gsl-ref.info, Node: VEGAS, Next: Monte Carlo Examples, Prev: MISER, Up: Monte Carlo Integration
+
+23.4 VEGAS
+==========
+
+The VEGAS algorithm of Lepage is based on importance sampling. It
+samples points from the probability distribution described by the
+function |f|, so that the points are concentrated in the regions that
+make the largest contribution to the integral.
+
+ In general, if the Monte Carlo integral of f is sampled with points
+distributed according to a probability distribution described by the
+function g, we obtain an estimate E_g(f; N),
+
+ E_g(f; N) = E(f/g; N)
+
+with a corresponding variance,
+
+ \Var_g(f; N) = \Var(f/g; N).
+
+If the probability distribution is chosen as g = |f|/I(|f|) then it can
+be shown that the variance V_g(f; N) vanishes, and the error in the
+estimate will be zero. In practice it is not possible to sample from
+the exact distribution g for an arbitrary function, so importance
+sampling algorithms aim to produce efficient approximations to the
+desired distribution.
+
+ The VEGAS algorithm approximates the exact distribution by making a
+number of passes over the integration region while histogramming the
+function f. Each histogram is used to define a sampling distribution
+for the next pass. Asymptotically this procedure converges to the
+desired distribution. In order to avoid the number of histogram bins
+growing like K^d the probability distribution is approximated by a
+separable function: g(x_1, x_2, ...) = g_1(x_1) g_2(x_2) ... so that
+the number of bins required is only Kd. This is equivalent to locating
+the peaks of the function from the projections of the integrand onto
+the coordinate axes. The efficiency of VEGAS depends on the validity
+of this assumption. It is most efficient when the peaks of the
+integrand are well-localized. If an integrand can be rewritten in a
+form which is approximately separable this will increase the efficiency
+of integration with VEGAS.
+
+ VEGAS incorporates a number of additional features, and combines both
+stratified sampling and importance sampling. The integration region is
+divided into a number of "boxes", with each box getting a fixed number
+of points (the goal is 2). Each box can then have a fractional number
+ of bins, but if the ratio of bins-per-box is less than two, VEGAS
+ switches to a kind of variance reduction (rather than importance
+ sampling).
+
+ -- Function: gsl_monte_vegas_state * gsl_monte_vegas_alloc (size_t DIM)
+ This function allocates and initializes a workspace for Monte Carlo
+ integration in DIM dimensions. The workspace is used to maintain
+ the state of the integration.
+
+ -- Function: int gsl_monte_vegas_init (gsl_monte_vegas_state* S)
+ This function initializes a previously allocated integration state.
+ This allows an existing workspace to be reused for different
+ integrations.
+
+ -- Function: int gsl_monte_vegas_integrate (gsl_monte_function * F,
+ double * XL, double * XU, size_t DIM, size_t CALLS, gsl_rng *
+ R, gsl_monte_vegas_state * S, double * RESULT, double *
+ ABSERR)
+ This routine uses the VEGAS Monte Carlo algorithm to integrate the
+ function F over the DIM-dimensional hypercubic region defined by
+ the lower and upper limits in the arrays XL and XU, each of size
+ DIM. The integration uses a fixed number of function calls CALLS,
+ and obtains random sampling points using the random number
+ generator R. A previously allocated workspace S must be supplied.
+ The result of the integration is returned in RESULT, with an
+ estimated absolute error ABSERR. The result and its error
+ estimate are based on a weighted average of independent samples.
+ The chi-squared per degree of freedom for the weighted average is
+ returned via the state struct component, S->CHISQ, and must be
+ consistent with 1 for the weighted average to be reliable.
+
+ -- Function: void gsl_monte_vegas_free (gsl_monte_vegas_state * S)
+ This function frees the memory associated with the integrator state
+ S.
+
+ The VEGAS algorithm computes a number of independent estimates of the
+integral internally, according to the `iterations' parameter described
+below, and returns their weighted average. Random sampling of the
+integrand can occasionally produce an estimate where the error is zero,
+particularly if the function is constant in some regions. An estimate
+with zero error causes the weighted average to break down and must be
+handled separately. In the original Fortran implementations of VEGAS
+the error estimate is made non-zero by substituting a small value
+(typically `1e-30'). The implementation in GSL differs from this and
+avoids the use of an arbitrary constant--it either assigns the value a
+weight which is the average weight of the preceding estimates or
+discards it according to the following procedure,
+
+current estimate has zero error, weighted average has finite error
+ The current estimate is assigned a weight which is the average
+ weight of the preceding estimates.
+
+current estimate has finite error, previous estimates had zero error
+ The previous estimates are discarded and the weighted averaging
+ procedure begins with the current estimate.
+
+current estimate has zero error, previous estimates had zero error
+ The estimates are averaged using the arithmetic mean, but no error
+ is computed.
+
+ The VEGAS algorithm is highly configurable. The following variables
+can be accessed through the `gsl_monte_vegas_state' struct,
+
+ -- Variable: double result
+ -- Variable: double sigma
+ These parameters contain the raw value of the integral RESULT and
+ its error SIGMA from the last iteration of the algorithm.
+
+ -- Variable: double chisq
+ This parameter gives the chi-squared per degree of freedom for the
+ weighted estimate of the integral. The value of CHISQ should be
+ close to 1. A value of CHISQ which differs significantly from 1
+ indicates that the values from different iterations are
+ inconsistent. In this case the weighted error will be
+ under-estimated, and further iterations of the algorithm are
+ needed to obtain reliable results.
+
+ -- Variable: double alpha
+ The parameter `alpha' controls the stiffness of the rebinning
+ algorithm. It is typically set between one and two. A value of
+ zero prevents rebinning of the grid. The default value is 1.5.
+
+ -- Variable: size_t iterations
+ The number of iterations to perform for each call to the routine.
+ The default value is 5 iterations.
+
+ -- Variable: int stage
+ Setting this determines the "stage" of the calculation. Normally,
+ `stage = 0' which begins with a new uniform grid and empty weighted
+ average. Calling vegas with `stage = 1' retains the grid from the
+ previous run but discards the weighted average, so that one can
+ "tune" the grid using a relatively small number of points and then
+ do a large run with `stage = 1' on the optimized grid. Setting
+ `stage = 2' keeps the grid and the weighted average from the
+ previous run, but may increase (or decrease) the number of
+ histogram bins in the grid depending on the number of calls
+ available. Choosing `stage = 3' enters at the main loop, so that
+ nothing is changed, and is equivalent to performing additional
+ iterations in a previous call.
+
+ -- Variable: int mode
+ The possible choices are `GSL_VEGAS_MODE_IMPORTANCE',
+ `GSL_VEGAS_MODE_STRATIFIED', `GSL_VEGAS_MODE_IMPORTANCE_ONLY'.
+ This determines whether VEGAS will use importance sampling or
+ stratified sampling, or whether it can pick on its own. In low
+ dimensions VEGAS uses strict stratified sampling (more precisely,
+ stratified sampling is chosen if there are fewer than 2 bins per
+ box).
+
+ -- Variable: int verbose
+ -- Variable: FILE * ostream
+ These parameters set the level of information printed by VEGAS. All
+ information is written to the stream OSTREAM. The default setting
+ of VERBOSE is `-1', which turns off all output. A VERBOSE value
+ of `0' prints summary information about the weighted average and
+ final result, while a value of `1' also displays the grid
+ coordinates. A value of `2' prints information from the rebinning
+ procedure for each iteration.
+
+
+File: gsl-ref.info, Node: Monte Carlo Examples, Next: Monte Carlo Integration References and Further Reading, Prev: VEGAS, Up: Monte Carlo Integration
+
+23.5 Examples
+=============
+
+The example program below uses the Monte Carlo routines to estimate the
+value of the following 3-dimensional integral from the theory of random
+walks,
+
+ I = \int_{-pi}^{+pi} {dk_x/(2 pi)}
+ \int_{-pi}^{+pi} {dk_y/(2 pi)}
+ \int_{-pi}^{+pi} {dk_z/(2 pi)}
+ 1 / (1 - cos(k_x)cos(k_y)cos(k_z)).
+
+The analytic value of this integral can be shown to be I =
+\Gamma(1/4)^4/(4 \pi^3) = 1.393203929685676859.... The integral gives
+the mean time spent at the origin by a random walk on a body-centered
+cubic lattice in three dimensions.
+
+ For simplicity we will compute the integral over the region (0,0,0)
+to (\pi,\pi,\pi) and multiply by 8 to obtain the full result. The
+integral is slowly varying in the middle of the region but has
+integrable singularities at the corners (0,0,0), (0,\pi,\pi),
+(\pi,0,\pi) and (\pi,\pi,0). The Monte Carlo routines only select
+points which are strictly within the integration region and so no
+special measures are needed to avoid these singularities.
+
+ #include <stdlib.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_monte.h>
+ #include <gsl/gsl_monte_plain.h>
+ #include <gsl/gsl_monte_miser.h>
+ #include <gsl/gsl_monte_vegas.h>
+
+ /* Computation of the integral,
+
+ I = int (dx dy dz)/(2pi)^3 1/(1-cos(x)cos(y)cos(z))
+
+ over (-pi,-pi,-pi) to (+pi, +pi, +pi). The exact answer
+ is Gamma(1/4)^4/(4 pi^3). This example is taken from
+ C.Itzykson, J.M.Drouffe, "Statistical Field Theory -
+ Volume 1", Section 1.1, p21, which cites the original
+ paper M.L.Glasser, I.J.Zucker, Proc.Natl.Acad.Sci.USA 74
+ 1800 (1977) */
+
+ /* For simplicity we compute the integral over the region
+ (0,0,0) -> (pi,pi,pi) and multiply by 8 */
+
+ double exact = 1.3932039296856768591842462603255;
+
+ double
+ g (double *k, size_t dim, void *params)
+ {
+ double A = 1.0 / (M_PI * M_PI * M_PI);
+ return A / (1.0 - cos (k[0]) * cos (k[1]) * cos (k[2]));
+ }
+
+ void
+ display_results (char *title, double result, double error)
+ {
+ printf ("%s ==================\n", title);
+ printf ("result = % .6f\n", result);
+ printf ("sigma = % .6f\n", error);
+ printf ("exact = % .6f\n", exact);
+ printf ("error = % .6f = %.1g sigma\n", result - exact,
+ fabs (result - exact) / error);
+ }
+
+ int
+ main (void)
+ {
+ double res, err;
+
+ double xl[3] = { 0, 0, 0 };
+ double xu[3] = { M_PI, M_PI, M_PI };
+
+ const gsl_rng_type *T;
+ gsl_rng *r;
+
+ gsl_monte_function G = { &g, 3, 0 };
+
+ size_t calls = 500000;
+
+ gsl_rng_env_setup ();
+
+ T = gsl_rng_default;
+ r = gsl_rng_alloc (T);
+
+ {
+ gsl_monte_plain_state *s = gsl_monte_plain_alloc (3);
+ gsl_monte_plain_integrate (&G, xl, xu, 3, calls, r, s,
+ &res, &err);
+ gsl_monte_plain_free (s);
+
+ display_results ("plain", res, err);
+ }
+
+ {
+ gsl_monte_miser_state *s = gsl_monte_miser_alloc (3);
+ gsl_monte_miser_integrate (&G, xl, xu, 3, calls, r, s,
+ &res, &err);
+ gsl_monte_miser_free (s);
+
+ display_results ("miser", res, err);
+ }
+
+ {
+ gsl_monte_vegas_state *s = gsl_monte_vegas_alloc (3);
+
+ gsl_monte_vegas_integrate (&G, xl, xu, 3, 10000, r, s,
+ &res, &err);
+ display_results ("vegas warm-up", res, err);
+
+ printf ("converging...\n");
+
+ do
+ {
+ gsl_monte_vegas_integrate (&G, xl, xu, 3, calls/5, r, s,
+ &res, &err);
+ printf ("result = % .6f sigma = % .6f "
+ "chisq/dof = %.1f\n", res, err, s->chisq);
+ }
+ while (fabs (s->chisq - 1.0) > 0.5);
+
+ display_results ("vegas final", res, err);
+
+ gsl_monte_vegas_free (s);
+ }
+
+ gsl_rng_free (r);
+
+ return 0;
+ }
+
+With 500,000 function calls the plain Monte Carlo algorithm achieves a
+fractional error of 0.6%. The estimated error `sigma' is consistent
+with the actual error, and the computed result differs from the true
+result by about one standard deviation,
+
+ plain ==================
+ result = 1.385867
+ sigma = 0.007938
+ exact = 1.393204
+ error = -0.007337 = 0.9 sigma
+
+The MISER algorithm reduces the error by a factor of two, and also
+correctly estimates the error,
+
+ miser ==================
+ result = 1.390656
+ sigma = 0.003743
+ exact = 1.393204
+ error = -0.002548 = 0.7 sigma
+
+In the case of the VEGAS algorithm the program uses an initial warm-up
+run of 10,000 function calls to prepare, or "warm up", the grid. This
+is followed by a main run with five iterations of 100,000 function
+ calls. The chi-squared per degree of freedom for the five iterations
+ is checked for consistency with 1, and the run is repeated if the results
+have not converged. In this case the estimates are consistent on the
+first pass.
+
+ vegas warm-up ==================
+ result = 1.386925
+ sigma = 0.002651
+ exact = 1.393204
+ error = -0.006278 = 2 sigma
+ converging...
+ result = 1.392957 sigma = 0.000452 chisq/dof = 1.1
+ vegas final ==================
+ result = 1.392957
+ sigma = 0.000452
+ exact = 1.393204
+ error = -0.000247 = 0.5 sigma
+
+If the value of `chisq' had differed significantly from 1 it would
+indicate inconsistent results, with a correspondingly underestimated
+error. The final estimate from VEGAS (using a similar number of
+function calls) is significantly more accurate than the other two
+algorithms.
+
+
+File: gsl-ref.info, Node: Monte Carlo Integration References and Further Reading, Prev: Monte Carlo Examples, Up: Monte Carlo Integration
+
+23.6 References and Further Reading
+===================================
+
+The MISER algorithm is described in the following article by Press and
+Farrar,
+
+ W.H. Press, G.R. Farrar, `Recursive Stratified Sampling for
+ Multidimensional Monte Carlo Integration', Computers in Physics,
+ v4 (1990), pp190-195.
+
+The VEGAS algorithm is described in the following papers,
+
+ G.P. Lepage, `A New Algorithm for Adaptive Multidimensional
+ Integration', Journal of Computational Physics 27, 192-203, (1978)
+
+ G.P. Lepage, `VEGAS: An Adaptive Multi-dimensional Integration
+ Program', Cornell preprint CLNS 80-447, March 1980
+
+
+File: gsl-ref.info, Node: Simulated Annealing, Next: Ordinary Differential Equations, Prev: Monte Carlo Integration, Up: Top
+
+24 Simulated Annealing
+**********************
+
+Stochastic search techniques are used when the structure of a space is
+not well understood or is not smooth, so that techniques like Newton's
+method (which requires calculating Jacobian derivative matrices) cannot
+be used. In particular, these techniques are frequently used to solve
+combinatorial optimization problems, such as the traveling salesman
+problem.
+
+ The goal is to find a point in the space at which a real valued
+"energy function" (or "cost function") is minimized. Simulated
+annealing is a minimization technique which has given good results in
+avoiding local minima; it is based on the idea of taking a random walk
+through the space at successively lower temperatures, where the
+probability of taking a step is given by a Boltzmann distribution.
+
+ The functions described in this chapter are declared in the header
+file `gsl_siman.h'.
+
+* Menu:
+
+* Simulated Annealing algorithm::
+* Simulated Annealing functions::
+* Examples with Simulated Annealing::
+* Simulated Annealing References and Further Reading::
+
+
+File: gsl-ref.info, Node: Simulated Annealing algorithm, Next: Simulated Annealing functions, Up: Simulated Annealing
+
+24.1 Simulated Annealing algorithm
+==================================
+
+The simulated annealing algorithm takes random walks through the problem
+space, looking for points with low energies; in these random walks, the
+probability of taking a step is determined by the Boltzmann
+distribution,
+
+ p = e^{-(E_{i+1} - E_i)/(kT)}
+
+if E_{i+1} > E_i, and p = 1 when E_{i+1} <= E_i.
+
+ In other words, a step will occur if the new energy is lower. If
+ the new energy is higher, the transition can still occur, and its
+ likelihood increases with the temperature T and decreases with the
+ energy difference E_{i+1} - E_i.
+
+ The temperature T is initially set to a high value, and a random
+walk is carried out at that temperature. Then the temperature is
+ lowered very slightly according to a "cooling schedule", for example: T
+ -> T/\mu_T where \mu_T is slightly greater than 1.
+
+ The slight probability of taking a step that gives higher energy is
+what allows simulated annealing to frequently get out of local minima.
+
+
+File: gsl-ref.info, Node: Simulated Annealing functions, Next: Examples with Simulated Annealing, Prev: Simulated Annealing algorithm, Up: Simulated Annealing
+
+24.2 Simulated Annealing functions
+==================================
+
+ -- Function: void gsl_siman_solve (const gsl_rng * R, void * X0_P,
+ gsl_siman_Efunc_t EF, gsl_siman_step_t TAKE_STEP,
+ gsl_siman_metric_t DISTANCE, gsl_siman_print_t
+ PRINT_POSITION, gsl_siman_copy_t COPYFUNC,
+ gsl_siman_copy_construct_t COPY_CONSTRUCTOR,
+ gsl_siman_destroy_t DESTRUCTOR, size_t ELEMENT_SIZE,
+ gsl_siman_params_t PARAMS)
+ This function performs a simulated annealing search through a given
+ space. The space is specified by providing the functions EF and
+ DISTANCE. The simulated annealing steps are generated using the
+ random number generator R and the function TAKE_STEP.
+
+ The starting configuration of the system should be given by X0_P.
+ The routine offers two modes for updating configurations, a
+ fixed-size mode and a variable-size mode. In the fixed-size mode
+ the configuration is stored as a single block of memory of size
+ ELEMENT_SIZE. Copies of this configuration are created, copied
+ and destroyed internally using the standard library functions
+ `malloc', `memcpy' and `free'. The function pointers COPYFUNC,
+ COPY_CONSTRUCTOR and DESTRUCTOR should be null pointers in
+ fixed-size mode. In the variable-size mode the functions
+ COPYFUNC, COPY_CONSTRUCTOR and DESTRUCTOR are used to create, copy
+ and destroy configurations internally. The variable ELEMENT_SIZE
+ should be zero in the variable-size mode.
+
+ The PARAMS structure (described below) controls the run by
+ providing the temperature schedule and other tunable parameters to
+ the algorithm.
+
+ On exit the best result achieved during the search is placed in
+ `*X0_P'. If the annealing process has been successful this should
+ be a good approximation to the optimal point in the space.
+
+ If the function pointer PRINT_POSITION is not null, a debugging
+ log will be printed to `stdout' with the following columns:
+
+ number_of_iterations temperature x x-(*x0_p) Ef(x)
+
+ and the output of the function PRINT_POSITION itself. If
+ PRINT_POSITION is null then no information is printed.
+
+The simulated annealing routines require several user-specified
+functions to define the configuration space and energy function. The
+prototypes for these functions are given below.
+
+ -- Data Type: gsl_siman_Efunc_t
+ This function type should return the energy of a configuration XP.
+
+ double (*gsl_siman_Efunc_t) (void *xp)
+
+ -- Data Type: gsl_siman_step_t
+ This function type should modify the configuration XP using a
+ random step taken from the generator R, up to a maximum distance of
+ STEP_SIZE.
+
+ void (*gsl_siman_step_t) (const gsl_rng *r, void *xp,
+ double step_size)
+
+ -- Data Type: gsl_siman_metric_t
+ This function type should return the distance between two
+ configurations XP and YP.
+
+ double (*gsl_siman_metric_t) (void *xp, void *yp)
+
+ -- Data Type: gsl_siman_print_t
+ This function type should print the contents of the configuration
+ XP.
+
+ void (*gsl_siman_print_t) (void *xp)
+
+ -- Data Type: gsl_siman_copy_t
+ This function type should copy the configuration SOURCE into DEST.
+
+ void (*gsl_siman_copy_t) (void *source, void *dest)
+
+ -- Data Type: gsl_siman_copy_construct_t
+ This function type should create a new copy of the configuration
+ XP.
+
+ void * (*gsl_siman_copy_construct_t) (void *xp)
+
+ -- Data Type: gsl_siman_destroy_t
+ This function type should destroy the configuration XP, freeing its
+ memory.
+
+ void (*gsl_siman_destroy_t) (void *xp)
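For variable-size mode the three function types above must manage the configuration's memory themselves. The following sketch shows functions with the required shapes for a hypothetical length-prefixed configuration type (the `config_t' struct is an assumption for illustration, not part of GSL; note that the copy function assumes the destination was already created with capacity for the source data):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical variable-size configuration: a length-prefixed
   coordinate array.  The three helpers below match the shapes of
   gsl_siman_copy_t, gsl_siman_copy_construct_t and
   gsl_siman_destroy_t. */
typedef struct
{
  size_t n;
  double *coords;
} config_t;

/* copy source into an already-constructed dest (assumes dest->coords
   has room for source->n elements) */
void
config_copy (void *source, void *dest)
{
  config_t *s = source, *d = dest;
  d->n = s->n;
  memcpy (d->coords, s->coords, s->n * sizeof (double));
}

/* allocate and return a new copy of the configuration xp */
void *
config_copy_construct (void *xp)
{
  config_t *s = xp;
  config_t *d = malloc (sizeof (config_t));
  d->n = s->n;
  d->coords = malloc (s->n * sizeof (double));
  memcpy (d->coords, s->coords, s->n * sizeof (double));
  return d;
}

/* free the configuration xp and its data */
void
config_destroy (void *xp)
{
  config_t *s = xp;
  free (s->coords);
  free (s);
}
```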
+
+ -- Data Type: gsl_siman_params_t
+ These are the parameters that control a run of `gsl_siman_solve'.
+ This structure contains all the information needed to control the
+ search, beyond the energy function, the step function and the
+ initial guess.
+
+ `int n_tries'
+ The number of points to try for each step.
+
+ `int iters_fixed_T'
+ The number of iterations at each temperature.
+
+ `double step_size'
+ The maximum step size in the random walk.
+
+ `double k, t_initial, mu_t, t_min'
+ The parameters of the Boltzmann distribution and cooling
+ schedule.
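The effect of T_INITIAL, MU_T and T_MIN on the length of a run can be sketched as follows. This assumes a geometric cooling schedule in which the temperature is divided by MU_T after each fixed-temperature stage until it falls below T_MIN; the function below is an illustration of that assumption, not GSL source code:

```c
/* Sketch (not GSL source) of a geometric cooling schedule: count the
   temperature stages between t_initial and t_min when the temperature
   is divided by mu_t after each stage. */
int
cooling_stages (double t_initial, double mu_t, double t_min)
{
  double t = t_initial;
  int stages = 0;

  while (t > t_min)
    {
      t /= mu_t;
      stages++;
    }

  return stages;
}
```

With the values used in the trivial example of the next section (t_initial = 0.008, mu_t = 1.003, t_min = 2.0e-6) this gives a few thousand temperature stages, each of ITERS_FIXED_T iterations.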
+
+
+File: gsl-ref.info, Node: Examples with Simulated Annealing, Next: Simulated Annealing References and Further Reading, Prev: Simulated Annealing functions, Up: Simulated Annealing
+
+24.3 Examples
+=============
+
+The simulated annealing package is clumsy, and it has to be because it
+is written in C, for C callers, and tries to be polymorphic at the same
+time. But here we provide some examples which can be pasted into your
+application with little change and should make things easier.
+
+* Menu:
+
+* Trivial example::
+* Traveling Salesman Problem::
+
+
+File: gsl-ref.info, Node: Trivial example, Next: Traveling Salesman Problem, Up: Examples with Simulated Annealing
+
+24.3.1 Trivial example
+----------------------
+
+The first example, in one-dimensional Cartesian space, sets up an energy
+function which is a damped sine wave; this has many local minima, but
+only one global minimum, somewhere between 1.0 and 1.5. The initial
+guess given is 15.5, which is several local minima away from the global
+minimum.
+
+ #include <math.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <gsl/gsl_siman.h>
+
+ /* set up parameters for this simulated annealing run */
+
+ /* how many points do we try before stepping */
+ #define N_TRIES 200
+
+ /* how many iterations for each T? */
+ #define ITERS_FIXED_T 1000
+
+ /* max step size in random walk */
+ #define STEP_SIZE 1.0
+
+ /* Boltzmann constant */
+ #define K 1.0
+
+ /* initial temperature */
+ #define T_INITIAL 0.008
+
+ /* damping factor for temperature */
+ #define MU_T 1.003
+
+ /* minimum temperature */
+ #define T_MIN 2.0e-6
+
+ gsl_siman_params_t params
+ = {N_TRIES, ITERS_FIXED_T, STEP_SIZE,
+ K, T_INITIAL, MU_T, T_MIN};
+
+ /* now some functions to test in one dimension */
+ double E1(void *xp)
+ {
+ double x = * ((double *) xp);
+
+ return exp(-pow((x-1.0),2.0))*sin(8*x);
+ }
+
+ double M1(void *xp, void *yp)
+ {
+ double x = *((double *) xp);
+ double y = *((double *) yp);
+
+ return fabs(x - y);
+ }
+
+ void S1(const gsl_rng * r, void *xp, double step_size)
+ {
+ double old_x = *((double *) xp);
+ double new_x;
+
+ double u = gsl_rng_uniform(r);
+ new_x = u * 2 * step_size - step_size + old_x;
+
+ memcpy(xp, &new_x, sizeof(new_x));
+ }
+
+ void P1(void *xp)
+ {
+ printf ("%12g", *((double *) xp));
+ }
+
+ int
+ main(int argc, char *argv[])
+ {
+ const gsl_rng_type * T;
+ gsl_rng * r;
+
+ double x_initial = 15.5;
+
+ gsl_rng_env_setup();
+
+ T = gsl_rng_default;
+ r = gsl_rng_alloc(T);
+
+ gsl_siman_solve(r, &x_initial, E1, S1, M1, P1,
+ NULL, NULL, NULL,
+ sizeof(double), params);
+
+ gsl_rng_free (r);
+ return 0;
+ }
+
+ Here are a couple of plots that are generated by running
+`siman_test' in the following way:
+
+ $ ./siman_test | awk '!/^#/ {print $1, $4}'
+ | graph -y 1.34 1.4 -W0 -X generation -Y position
+ | plot -Tps > siman-test.eps
+ $ ./siman_test | awk '!/^#/ {print $1, $4}'
+ | graph -y -0.88 -0.83 -W0 -X generation -Y energy
+ | plot -Tps > siman-energy.eps
+
+
+File: gsl-ref.info, Node: Traveling Salesman Problem, Prev: Trivial example, Up: Examples with Simulated Annealing
+
+24.3.2 Traveling Salesman Problem
+---------------------------------
+
+The TSP ("Traveling Salesman Problem") is the classic combinatorial
+optimization problem. I have provided a very simple version of it,
+based on the coordinates of twelve cities in the southwestern United
+States. This should maybe be called the "Flying Salesman Problem",
+since I am using the great-circle distance between cities, rather than
+the driving distance. Also: I assume the earth is a sphere, so I don't
+use geoid distances.
+
+ The `gsl_siman_solve' routine finds a route which is 3490.62
+kilometers long; this is confirmed by an exhaustive search of all
+possible routes with the same initial city.
+
+ The full code can be found in `siman/siman_tsp.c', but I include
+here some plots generated in the following way:
+
+ $ ./siman_tsp > tsp.output
+ $ grep -v "^#" tsp.output
+ | awk '{print $1, $NF}'
+ | graph -y 3300 6500 -W0 -X generation -Y distance
+ -L "TSP - 12 southwest cities"
+ | plot -Tps > 12-cities.eps
+ $ grep initial_city_coord tsp.output
+ | awk '{print $2, $3}'
+ | graph -X "longitude (- means west)" -Y "latitude"
+ -L "TSP - initial-order" -f 0.03 -S 1 0.1
+ | plot -Tps > initial-route.eps
+ $ grep final_city_coord tsp.output
+ | awk '{print $2, $3}'
+ | graph -X "longitude (- means west)" -Y "latitude"
+ -L "TSP - final-order" -f 0.03 -S 1 0.1
+ | plot -Tps > final-route.eps
+
+This is the output showing the initial order of the cities; longitude is
+negative, since it is west and I want the plot to look like a map.
+
+ # initial coordinates of cities (longitude and latitude)
+ ###initial_city_coord: -105.95 35.68 Santa Fe
+ ###initial_city_coord: -112.07 33.54 Phoenix
+ ###initial_city_coord: -106.62 35.12 Albuquerque
+ ###initial_city_coord: -103.2 34.41 Clovis
+ ###initial_city_coord: -107.87 37.29 Durango
+ ###initial_city_coord: -96.77 32.79 Dallas
+ ###initial_city_coord: -105.92 35.77 Tesuque
+ ###initial_city_coord: -107.84 35.15 Grants
+ ###initial_city_coord: -106.28 35.89 Los Alamos
+ ###initial_city_coord: -106.76 32.34 Las Cruces
+ ###initial_city_coord: -108.58 37.35 Cortez
+ ###initial_city_coord: -108.74 35.52 Gallup
+ ###initial_city_coord: -105.95 35.68 Santa Fe
+
+ The optimal route turns out to be:
+
+ # final coordinates of cities (longitude and latitude)
+ ###final_city_coord: -105.95 35.68 Santa Fe
+ ###final_city_coord: -106.28 35.89 Los Alamos
+ ###final_city_coord: -106.62 35.12 Albuquerque
+ ###final_city_coord: -107.84 35.15 Grants
+ ###final_city_coord: -107.87 37.29 Durango
+ ###final_city_coord: -108.58 37.35 Cortez
+ ###final_city_coord: -108.74 35.52 Gallup
+ ###final_city_coord: -112.07 33.54 Phoenix
+ ###final_city_coord: -106.76 32.34 Las Cruces
+ ###final_city_coord: -96.77 32.79 Dallas
+ ###final_city_coord: -103.2 34.41 Clovis
+ ###final_city_coord: -105.92 35.77 Tesuque
+ ###final_city_coord: -105.95 35.68 Santa Fe
+
+Here's a plot of the cost function (energy) versus generation (point in
+the calculation at which a new temperature is set) for this problem:
+
+
+File: gsl-ref.info, Node: Simulated Annealing References and Further Reading, Prev: Examples with Simulated Annealing, Up: Simulated Annealing
+
+24.4 References and Further Reading
+===================================
+
+Further information is available in the following book,
+
+ `Modern Heuristic Techniques for Combinatorial Problems', Colin R.
+ Reeves (ed.), McGraw-Hill, 1995 (ISBN 0-07-709239-2).
+
+
+File: gsl-ref.info, Node: Ordinary Differential Equations, Next: Interpolation, Prev: Simulated Annealing, Up: Top
+
+25 Ordinary Differential Equations
+**********************************
+
+This chapter describes functions for solving ordinary differential
+equation (ODE) initial value problems. The library provides a variety
+of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines,
+and higher-level components for adaptive step-size control. The
+components can be combined by the user to achieve the desired solution,
+with full access to any intermediate steps.
+
+ These functions are declared in the header file `gsl_odeiv.h'.
+
+* Menu:
+
+* Defining the ODE System::
+* Stepping Functions::
+* Adaptive Step-size Control::
+* Evolution::
+* ODE Example programs::
+* ODE References and Further Reading::
+
+
+File: gsl-ref.info, Node: Defining the ODE System, Next: Stepping Functions, Up: Ordinary Differential Equations
+
+25.1 Defining the ODE System
+============================
+
+The routines solve the general n-dimensional first-order system,
+
+ dy_i(t)/dt = f_i(t, y_1(t), ..., y_n(t))
+
+for i = 1, \dots, n. The stepping functions rely on the vector of
+derivatives f_i and the Jacobian matrix, J_{ij} = df_i(t,y(t)) / dy_j.
+A system of equations is defined using the `gsl_odeiv_system' datatype.
+
+ -- Data Type: gsl_odeiv_system
+ This data type defines a general ODE system with arbitrary
+ parameters.
+
+ `int (* function) (double t, const double y[], double dydt[], void * params)'
+ This function should store the vector elements
+ f_i(t,y,params) in the array DYDT, for arguments (T,Y) and
+ parameters PARAMS. The function should return `GSL_SUCCESS'
+ if the calculation was completed successfully. Any other
+ return value indicates an error.
+
+ `int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params);'
+ This function should store the vector of derivative elements
+ df_i(t,y,params)/dt in the array DFDT and the Jacobian matrix
+ J_{ij} in the array DFDY, regarded as a row-ordered matrix
+ `J(i,j) = dfdy[i * dimension + j]' where `dimension' is the
+ dimension of the system. The function should return
+ `GSL_SUCCESS' if the calculation was completed successfully.
+ Any other return value indicates an error.
+
+ Some of the simpler solver algorithms do not make use of the
+ Jacobian matrix, so it is not always strictly necessary to
+ provide it (the `jacobian' element of the struct can be
+ replaced by a null pointer for those algorithms). However,
+ it is useful to provide the Jacobian to allow the solver
+ algorithms to be interchanged--the best algorithms make use
+ of the Jacobian.
+
+ `size_t dimension;'
+ This is the dimension of the system of equations.
+
+ `void * params'
+ This is a pointer to the arbitrary parameters of the system.
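The row-ordered Jacobian storage described above can be illustrated with a hypothetical two-dimensional system y1' = y2, y2' = -y1 (this system and the helper function are assumptions for illustration only; the element J(i,j) lives at `dfdy[i * dimension + j]'):

```c
#include <stddef.h>

/* Illustration of row-ordered Jacobian storage for the hypothetical
   system y1' = y2, y2' = -y1, whose Jacobian is [ 0 1 ; -1 0 ]. */
void
fill_jacobian (double *dfdy, size_t dim)
{
  dfdy[0 * dim + 0] =  0.0;   /* df1/dy1 */
  dfdy[0 * dim + 1] =  1.0;   /* df1/dy2 */
  dfdy[1 * dim + 0] = -1.0;   /* df2/dy1 */
  dfdy[1 * dim + 1] =  0.0;   /* df2/dy2 */
}
```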
+
+
+File: gsl-ref.info, Node: Stepping Functions, Next: Adaptive Step-size Control, Prev: Defining the ODE System, Up: Ordinary Differential Equations
+
+25.2 Stepping Functions
+=======================
+
+The lowest level components are the "stepping functions" which advance
+a solution from time t to t+h for a fixed step-size h and estimate the
+resulting local error.
+
+ -- Function: gsl_odeiv_step * gsl_odeiv_step_alloc (const
+ gsl_odeiv_step_type * T, size_t DIM)
+ This function returns a pointer to a newly allocated instance of a
+ stepping function of type T for a system of DIM dimensions.
+
+ -- Function: int gsl_odeiv_step_reset (gsl_odeiv_step * S)
+ This function resets the stepping function S. It should be used
+ whenever the next use of S will not be a continuation of a
+ previous step.
+
+ -- Function: void gsl_odeiv_step_free (gsl_odeiv_step * S)
+ This function frees all the memory associated with the stepping
+ function S.
+
+ -- Function: const char * gsl_odeiv_step_name (const gsl_odeiv_step *
+ S)
+ This function returns a pointer to the name of the stepping
+ function. For example,
+
+ printf ("step method is '%s'\n",
+ gsl_odeiv_step_name (s));
+
+ would print something like `step method is 'rk4''.
+
+ -- Function: unsigned int gsl_odeiv_step_order (const gsl_odeiv_step *
+ S)
+ This function returns the order of the stepping function on the
+ previous step. This order can vary if the stepping function
+ itself is adaptive.
+
+ -- Function: int gsl_odeiv_step_apply (gsl_odeiv_step * S, double T,
+ double H, double Y[], double YERR[], const double DYDT_IN[],
+ double DYDT_OUT[], const gsl_odeiv_system * DYDT)
+ This function applies the stepping function S to the system of
+ equations defined by DYDT, using the step size H to advance the
+ system from time T and state Y to time T+H. The new state of the
+ system is stored in Y on output, with an estimate of the absolute
+ error in each component stored in YERR. If the argument DYDT_IN
+ is not null it should point to an array containing the derivatives
+ for the system at time T on input. This is optional as the
+ derivatives will be computed internally if they are not provided,
+ but allows the reuse of existing derivative information. On
+ output the new derivatives of the system at time T+H will be
+ stored in DYDT_OUT if it is not null.
+
+ If the user-supplied functions defined in the system DYDT return a
+ status other than `GSL_SUCCESS' the step will be aborted. In this
+ case, the elements of Y will be restored to their pre-step values
+ and the error code from the user-supplied function will be
+ returned. The step-size H will be set to the step-size which
+ caused the error. If the function is called again with a smaller
+ step-size, e.g. H/10, it should be possible to get closer to any
+ singularity. To distinguish between error codes from the
+ user-supplied functions and those from `gsl_odeiv_step_apply'
+ itself, any user-defined return values should be distinct from the
+ standard GSL error codes.
+
+ The following algorithms are available,
+
+ -- Step Type: gsl_odeiv_step_rk2
+ Embedded Runge-Kutta (2, 3) method.
+
+ -- Step Type: gsl_odeiv_step_rk4
+ 4th order (classical) Runge-Kutta.
+
+ -- Step Type: gsl_odeiv_step_rkf45
+ Embedded Runge-Kutta-Fehlberg (4, 5) method. This method is a good
+ general-purpose integrator.
+
+ -- Step Type: gsl_odeiv_step_rkck
+ Embedded Runge-Kutta Cash-Karp (4, 5) method.
+
+ -- Step Type: gsl_odeiv_step_rk8pd
+ Embedded Runge-Kutta Prince-Dormand (8,9) method.
+
+ -- Step Type: gsl_odeiv_step_rk2imp
+ Implicit 2nd order Runge-Kutta at Gaussian points.
+
+ -- Step Type: gsl_odeiv_step_rk4imp
+ Implicit 4th order Runge-Kutta at Gaussian points.
+
+ -- Step Type: gsl_odeiv_step_bsimp
+ Implicit Bulirsch-Stoer method of Bader and Deuflhard. This
+ algorithm requires the Jacobian.
+
+ -- Step Type: gsl_odeiv_step_gear1
+ M=1 implicit Gear method.
+
+ -- Step Type: gsl_odeiv_step_gear2
+ M=2 implicit Gear method.
+
+
+File: gsl-ref.info, Node: Adaptive Step-size Control, Next: Evolution, Prev: Stepping Functions, Up: Ordinary Differential Equations
+
+25.3 Adaptive Step-size Control
+===============================
+
+The control function examines the proposed change to the solution
+produced by a stepping function and attempts to determine the optimal
+step-size for a user-specified level of error.
+
+ -- Function: gsl_odeiv_control * gsl_odeiv_control_standard_new
+ (double EPS_ABS, double EPS_REL, double A_Y, double A_DYDT)
+ The standard control object is a four parameter heuristic based on
+ absolute and relative errors EPS_ABS and EPS_REL, and scaling
+ factors A_Y and A_DYDT for the system state y(t) and derivatives
+ y'(t) respectively.
+
+ The step-size adjustment procedure for this method begins by
+ computing the desired error level D_i for each component,
+
+ D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)
+
+ and comparing it with the observed error E_i = |yerr_i|. If the
+ observed error E exceeds the desired error level D by more than
+ 10% for any component then the method reduces the step-size by an
+ appropriate factor,
+
+ h_new = h_old * S * (E/D)^(-1/q)
+
+ where q is the consistency order of the method (e.g. q=4 for 4(5)
+ embedded RK), and S is a safety factor of 0.9. The ratio E/D is
+ taken to be the maximum of the ratios E_i/D_i.
+
+ If the observed error E is less than 50% of the desired error
+ level D for the maximum ratio E_i/D_i then the algorithm takes the
+ opportunity to increase the step-size to bring the error in line
+ with the desired level,
+
+ h_new = h_old * S * (E/D)^(-1/(q+1))
+
+ This encompasses all the standard error scaling methods. To avoid
+ uncontrolled changes in the stepsize, the overall scaling factor is
+ limited to the range 1/5 to 5.
+
+ -- Function: gsl_odeiv_control * gsl_odeiv_control_y_new (double
+ EPS_ABS, double EPS_REL)
+ This function creates a new control object which will keep the
+ local error on each step within an absolute error of EPS_ABS and
+ relative error of EPS_REL with respect to the solution y_i(t).
+ This is equivalent to the standard control object with A_Y=1 and
+ A_DYDT=0.
+
+ -- Function: gsl_odeiv_control * gsl_odeiv_control_yp_new (double
+ EPS_ABS, double EPS_REL)
+ This function creates a new control object which will keep the
+ local error on each step within an absolute error of EPS_ABS and
+ relative error of EPS_REL with respect to the derivatives of the
+ solution y'_i(t). This is equivalent to the standard control
+ object with A_Y=0 and A_DYDT=1.
+
+ -- Function: gsl_odeiv_control * gsl_odeiv_control_scaled_new (double
+ EPS_ABS, double EPS_REL, double A_Y, double A_DYDT, const
+ double SCALE_ABS[], size_t DIM)
+ This function creates a new control object which uses the same
+ algorithm as `gsl_odeiv_control_standard_new' but with an absolute
+ error which is scaled for each component by the array SCALE_ABS.
+ The formula for D_i for this control object is,
+
+ D_i = eps_abs * s_i + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)
+
+ where s_i is the i-th component of the array SCALE_ABS. The same
+ error control heuristic is used by the Matlab ODE suite.
+
+ -- Function: gsl_odeiv_control * gsl_odeiv_control_alloc (const
+ gsl_odeiv_control_type * T)
+ This function returns a pointer to a newly allocated instance of a
+ control function of type T. This function is only needed for
+ defining new types of control functions. For most purposes the
+ standard control functions described above should be sufficient.
+
+ -- Function: int gsl_odeiv_control_init (gsl_odeiv_control * C, double
+ EPS_ABS, double EPS_REL, double A_Y, double A_DYDT)
+ This function initializes the control function C with the
+ parameters EPS_ABS (absolute error), EPS_REL (relative error), A_Y
+ (scaling factor for y) and A_DYDT (scaling factor for derivatives).
+
+ -- Function: void gsl_odeiv_control_free (gsl_odeiv_control * C)
+ This function frees all the memory associated with the control
+ function C.
+
+ -- Function: int gsl_odeiv_control_hadjust (gsl_odeiv_control * C,
+ gsl_odeiv_step * S, const double Y[], const double YERR[],
+ const double DYDT[], double * H)
+ This function adjusts the step-size H using the control function
+ C, and the current values of Y, YERR and DYDT. The stepping
+ function S is also needed to determine the order of the method.
+ If the error in the y-values YERR is found to be too large then
+ the step-size H is reduced and the function returns
+ `GSL_ODEIV_HADJ_DEC'. If the error is sufficiently small then H
+ may be increased and `GSL_ODEIV_HADJ_INC' is returned. The
+ function returns `GSL_ODEIV_HADJ_NIL' if the step-size is
+ unchanged. The goal of the function is to estimate the largest
+ step-size which satisfies the user-specified accuracy requirements
+ for the current point.
+
+ -- Function: const char * gsl_odeiv_control_name (const
+ gsl_odeiv_control * C)
+ This function returns a pointer to the name of the control
+ function. For example,
+
+ printf ("control method is '%s'\n",
+ gsl_odeiv_control_name (c));
+
+ would print something like `control method is 'standard''.
+
+
+File: gsl-ref.info, Node: Evolution, Next: ODE Example programs, Prev: Adaptive Step-size Control, Up: Ordinary Differential Equations
+
+25.4 Evolution
+==============
+
+The highest level of the system is the evolution function which combines
+the results of a stepping function and control function to reliably
+advance the solution forward over an interval (t_0, t_1). If the
+control function signals that the step-size should be decreased the
+evolution function backs out of the current step and tries the proposed
+smaller step-size. This process is continued until an acceptable
+step-size is found.
+
+ -- Function: gsl_odeiv_evolve * gsl_odeiv_evolve_alloc (size_t DIM)
+ This function returns a pointer to a newly allocated instance of an
+ evolution function for a system of DIM dimensions.
+
+ -- Function: int gsl_odeiv_evolve_apply (gsl_odeiv_evolve * E,
+ gsl_odeiv_control * CON, gsl_odeiv_step * STEP, const
+ gsl_odeiv_system * DYDT, double * T, double T1, double * H,
+ double Y[])
+ This function advances the system (E, DYDT) from time T and
+ position Y using the stepping function STEP. The new time and
+ position are stored in T and Y on output. The initial step-size
+ is taken as H, but this will be modified using the control
+ function CON to achieve the appropriate error bound if necessary.
+ The routine may make several calls to STEP in order to determine
+ the optimum step-size. An estimate of the local error for the step
+ can be obtained from the components of the array `E->yerr[]'. If
+ the step-size has been changed the value of H will be modified on
+ output. The maximum time T1 is guaranteed not to be exceeded by
+ the time-step. On the final time-step the value of T will be set
+ to T1 exactly.
+
+ If the user-supplied functions defined in the system DYDT return a
+ status other than `GSL_SUCCESS' the step will be aborted. In this
+ case, T and Y will be restored to their pre-step values and the
+ error code from the user-supplied function will be returned. To
+ distinguish between error codes from the user-supplied functions
+ and those from `gsl_odeiv_evolve_apply' itself, any user-defined
+ return values should be distinct from the standard GSL error codes.
+
+ -- Function: int gsl_odeiv_evolve_reset (gsl_odeiv_evolve * E)
+ This function resets the evolution function E. It should be used
+ whenever the next use of E will not be a continuation of a
+ previous step.
+
+ -- Function: void gsl_odeiv_evolve_free (gsl_odeiv_evolve * E)
+ This function frees all the memory associated with the evolution
+ function E.
+
+
+File: gsl-ref.info, Node: ODE Example programs, Next: ODE References and Further Reading, Prev: Evolution, Up: Ordinary Differential Equations
+
+25.5 Examples
+=============
+
+The following program solves the second-order nonlinear Van der Pol
+oscillator equation,
+
+ x''(t) + \mu x'(t) (x(t)^2 - 1) + x(t) = 0
+
+This can be converted into a first order system suitable for use with
+the routines described in this chapter by introducing a separate
+variable for the velocity, y = x'(t),
+
+ x' = y
+ y' = -x + \mu y (1-x^2)
+
+The program begins by defining functions for these derivatives and
+their Jacobian,
+
+ #include <stdio.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_matrix.h>
+ #include <gsl/gsl_odeiv.h>
+
+ int
+ func (double t, const double y[], double f[],
+ void *params)
+ {
+ double mu = *(double *)params;
+ f[0] = y[1];
+ f[1] = -y[0] - mu*y[1]*(y[0]*y[0] - 1);
+ return GSL_SUCCESS;
+ }
+
+ int
+ jac (double t, const double y[], double *dfdy,
+ double dfdt[], void *params)
+ {
+ double mu = *(double *)params;
+ gsl_matrix_view dfdy_mat
+ = gsl_matrix_view_array (dfdy, 2, 2);
+ gsl_matrix * m = &dfdy_mat.matrix;
+ gsl_matrix_set (m, 0, 0, 0.0);
+ gsl_matrix_set (m, 0, 1, 1.0);
+ gsl_matrix_set (m, 1, 0, -2.0*mu*y[0]*y[1] - 1.0);
+ gsl_matrix_set (m, 1, 1, -mu*(y[0]*y[0] - 1.0));
+ dfdt[0] = 0.0;
+ dfdt[1] = 0.0;
+ return GSL_SUCCESS;
+ }
+
+ int
+ main (void)
+ {
+ const gsl_odeiv_step_type * T
+ = gsl_odeiv_step_rk8pd;
+
+ gsl_odeiv_step * s
+ = gsl_odeiv_step_alloc (T, 2);
+ gsl_odeiv_control * c
+ = gsl_odeiv_control_y_new (1e-6, 0.0);
+ gsl_odeiv_evolve * e
+ = gsl_odeiv_evolve_alloc (2);
+
+ double mu = 10;
+ gsl_odeiv_system sys = {func, jac, 2, &mu};
+
+ double t = 0.0, t1 = 100.0;
+ double h = 1e-6;
+ double y[2] = { 1.0, 0.0 };
+
+ while (t < t1)
+ {
+ int status = gsl_odeiv_evolve_apply (e, c, s,
+ &sys,
+ &t, t1,
+ &h, y);
+
+ if (status != GSL_SUCCESS)
+ break;
+
+ printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
+ }
+
+ gsl_odeiv_evolve_free (e);
+ gsl_odeiv_control_free (c);
+ gsl_odeiv_step_free (s);
+ return 0;
+ }
+
+For functions with multiple parameters, the appropriate information can
+be passed in through the PARAMS argument using a pointer to a struct.
+
+ The main loop of the program evolves the solution from (y, y') = (1,
+0) at t=0 to t=100. The step-size h is automatically adjusted by the
+controller to maintain an absolute accuracy of 10^{-6} in the function
+values Y.
+
+To obtain the values at regular intervals, rather than the variable
+spacings chosen by the control function, the main loop can be modified
+to advance the solution from one point to the next. For example, the
+following main loop prints the solution at the fixed points t = 0, 1,
+2, \dots, 100,
+
+ for (i = 1; i <= 100; i++)
+ {
+ double ti = i * t1 / 100.0;
+
+ while (t < ti)
+ {
+ gsl_odeiv_evolve_apply (e, c, s,
+ &sys,
+ &t, ti, &h,
+ y);
+ }
+
+ printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
+ }
+
+It is also possible to work with a non-adaptive integrator, using only
+the stepping function itself. The following program uses the `rk4'
+fourth-order Runge-Kutta stepping function with a fixed stepsize of
+0.01,
+
+ int
+ main (void)
+ {
+ const gsl_odeiv_step_type * T
+ = gsl_odeiv_step_rk4;
+
+ gsl_odeiv_step * s
+ = gsl_odeiv_step_alloc (T, 2);
+
+ double mu = 10;
+ gsl_odeiv_system sys = {func, jac, 2, &mu};
+
+ double t = 0.0, t1 = 100.0;
+ double h = 1e-2;
+ double y[2] = { 1.0, 0.0 }, y_err[2];
+ double dydt_in[2], dydt_out[2];
+
+ /* initialise dydt_in from system parameters */
+ GSL_ODEIV_FN_EVAL(&sys, t, y, dydt_in);
+
+ while (t < t1)
+ {
+ int status = gsl_odeiv_step_apply (s, t, h,
+ y, y_err,
+ dydt_in,
+ dydt_out,
+ &sys);
+
+ if (status != GSL_SUCCESS)
+ break;
+
+ dydt_in[0] = dydt_out[0];
+ dydt_in[1] = dydt_out[1];
+
+ t += h;
+
+ printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
+ }
+
+ gsl_odeiv_step_free (s);
+ return 0;
+ }
+
+The derivatives must be initialized for the starting point t=0 before
+the first step is taken. Subsequent steps use the output derivatives
+DYDT_OUT as inputs to the next step by copying their values into
+DYDT_IN.
+
+
+File: gsl-ref.info, Node: ODE References and Further Reading, Prev: ODE Example programs, Up: Ordinary Differential Equations
+
+25.6 References and Further Reading
+===================================
+
+Many of the basic Runge-Kutta formulas can be found in the Handbook of
+Mathematical Functions,
+
+ Abramowitz & Stegun (eds.), `Handbook of Mathematical Functions',
+ Section 25.5.
+
+The implicit Bulirsch-Stoer algorithm `bsimp' is described in the
+following paper,
+
+ G. Bader and P. Deuflhard, "A Semi-Implicit Mid-Point Rule for
+ Stiff Systems of Ordinary Differential Equations.", Numer. Math.
+ 41, 373-398, 1983.
+
+
+File: gsl-ref.info, Node: Interpolation, Next: Numerical Differentiation, Prev: Ordinary Differential Equations, Up: Top
+
+26 Interpolation
+****************
+
+This chapter describes functions for performing interpolation. The
+library provides a variety of interpolation methods, including Cubic
+splines and Akima splines. The interpolation types are interchangeable,
+allowing different methods to be used without recompiling.
+Interpolations can be defined for both normal and periodic boundary
+conditions. Additional functions are available for computing
+derivatives and integrals of interpolating functions.
+
+ The functions described in this section are declared in the header
+files `gsl_interp.h' and `gsl_spline.h'.
+
+* Menu:
+
+* Introduction to Interpolation::
+* Interpolation Functions::
+* Interpolation Types::
+* Index Look-up and Acceleration::
+* Evaluation of Interpolating Functions::
+* Higher-level Interface::
+* Interpolation Example programs::
+* Interpolation References and Further Reading::
+
+
+File: gsl-ref.info, Node: Introduction to Interpolation, Next: Interpolation Functions, Up: Interpolation
+
+26.1 Introduction
+=================
+
+Given a set of data points (x_1, y_1) \dots (x_n, y_n) the routines
+described in this section compute a continuous interpolating function
+y(x) such that y(x_i) = y_i. The interpolation is piecewise smooth,
+and its behavior at the end-points is determined by the type of
+interpolation used.
+
+
+File: gsl-ref.info, Node: Interpolation Functions, Next: Interpolation Types, Prev: Introduction to Interpolation, Up: Interpolation
+
+26.2 Interpolation Functions
+============================
+
+The interpolation function for a given dataset is stored in a
+`gsl_interp' object. These are created by the following functions.
+
+ -- Function: gsl_interp * gsl_interp_alloc (const gsl_interp_type * T,
+ size_t SIZE)
+ This function returns a pointer to a newly allocated interpolation
+ object of type T for SIZE data-points.
+
+ -- Function: int gsl_interp_init (gsl_interp * INTERP, const double
+ XA[], const double YA[], size_t SIZE)
+ This function initializes the interpolation object INTERP for the
+ data (XA,YA) where XA and YA are arrays of size SIZE. The
+ interpolation object (`gsl_interp') does not save the data arrays
+ XA and YA and only stores the static state computed from the data.
+ The XA data array is always assumed to be strictly ordered; the
+ behavior for other arrangements is not defined.
+
+ -- Function: void gsl_interp_free (gsl_interp * INTERP)
+ This function frees the interpolation object INTERP.
+
+
+File: gsl-ref.info, Node: Interpolation Types, Next: Index Look-up and Acceleration, Prev: Interpolation Functions, Up: Interpolation
+
+26.3 Interpolation Types
+========================
+
+The interpolation library provides six interpolation types:
+
+ -- Interpolation Type: gsl_interp_linear
+ Linear interpolation. This interpolation method does not require
+ any additional memory.
+
+ -- Interpolation Type: gsl_interp_polynomial
+ Polynomial interpolation. This method should only be used for
+ interpolating small numbers of points because polynomial
+ interpolation introduces large oscillations, even for well-behaved
+ datasets. The number of terms in the interpolating polynomial is
+ equal to the number of points.
+
+ -- Interpolation Type: gsl_interp_cspline
+ Cubic spline with natural boundary conditions. The resulting
+ curve is piecewise cubic on each interval, with matching first and
+ second derivatives at the supplied data-points. The second
+ derivative is chosen to be zero at the first point and last point.
+
+ -- Interpolation Type: gsl_interp_cspline_periodic
+ Cubic spline with periodic boundary conditions. The resulting
+ curve is piecewise cubic on each interval, with matching first and
+ second derivatives at the supplied data-points. The derivatives
+ at the first and last points are also matched. Note that the last
+ point in the data must have the same y-value as the first point,
+ otherwise the resulting periodic interpolation will have a
+ discontinuity at the boundary.
+
+ -- Interpolation Type: gsl_interp_akima
+ Non-rounded Akima spline with natural boundary conditions. This
+ method uses the non-rounded corner algorithm of Wodicka.
+
+ -- Interpolation Type: gsl_interp_akima_periodic
+ Non-rounded Akima spline with periodic boundary conditions. This
+ method uses the non-rounded corner algorithm of Wodicka.
+
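As an illustration of the simplest of these types, the value `gsl_interp_linear' produces on the interval [x_i, x_{i+1}] containing x can be sketched as follows (this function is an illustration of the method, not the library source):

```c
#include <stddef.h>

/* Linear interpolation on the interval [xa[i], xa[i+1]] containing x:
   y = ya[i] + (x - xa[i]) * (ya[i+1] - ya[i]) / (xa[i+1] - xa[i]). */
double
linear_eval (const double xa[], const double ya[], size_t i, double x)
{
  double dx = xa[i + 1] - xa[i];
  double dy = ya[i + 1] - ya[i];
  return ya[i] + (x - xa[i]) * dy / dx;
}
```
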
+ The following related functions are available:
+
+ -- Function: const char * gsl_interp_name (const gsl_interp * INTERP)
+ This function returns the name of the interpolation type used by
+ INTERP. For example,
+
+ printf ("interp uses '%s' interpolation.\n",
+ gsl_interp_name (interp));
+
+ would print something like,
+
+ interp uses 'cspline' interpolation.
+
+ -- Function: unsigned int gsl_interp_min_size (const gsl_interp *
+ INTERP)
+ This function returns the minimum number of points required by the
+ interpolation type of INTERP. For example, Akima spline
+ interpolation requires a minimum of 5 points.
+
+
+File: gsl-ref.info, Node: Index Look-up and Acceleration, Next: Evaluation of Interpolating Functions, Prev: Interpolation Types, Up: Interpolation
+
+26.4 Index Look-up and Acceleration
+===================================
+
+The state of searches can be stored in a `gsl_interp_accel' object,
+which is a kind of iterator for interpolation lookups. It caches the
+previous value of an index lookup. When the subsequent interpolation
+point falls in the same interval its index value can be returned
+immediately.
+
+ -- Function: size_t gsl_interp_bsearch (const double X_ARRAY[], double
+ X, size_t INDEX_LO, size_t INDEX_HI)
+ This function returns the index i of the array X_ARRAY such that
+ `x_array[i] <= x < x_array[i+1]'. The index is searched for in
+ the range [INDEX_LO,INDEX_HI].
+
+ -- Function: gsl_interp_accel * gsl_interp_accel_alloc (void)
+ This function returns a pointer to an accelerator object, which is
+ a kind of iterator for interpolation lookups. It tracks the state
+ of lookups, thus allowing for application of various acceleration
+ strategies.
+
+ -- Function: size_t gsl_interp_accel_find (gsl_interp_accel * A, const
+ double X_ARRAY[], size_t SIZE, double X)
+ This function performs a lookup action on the data array X_ARRAY
+ of size SIZE, using the given accelerator A. This is how lookups
+ are performed during evaluation of an interpolation. The function
+ returns an index i such that `x_array[i] <= x < x_array[i+1]'.
+
+ -- Function: void gsl_interp_accel_free (gsl_interp_accel* ACC)
+ This function frees the accelerator object ACC.
+
+
+File: gsl-ref.info, Node: Evaluation of Interpolating Functions, Next: Higher-level Interface, Prev: Index Look-up and Acceleration, Up: Interpolation
+
+26.5 Evaluation of Interpolating Functions
+==========================================
+
+ -- Function: double gsl_interp_eval (const gsl_interp * INTERP, const
+ double XA[], const double YA[], double X, gsl_interp_accel *
+ ACC)
+ -- Function: int gsl_interp_eval_e (const gsl_interp * INTERP, const
+ double XA[], const double YA[], double X, gsl_interp_accel *
+ ACC, double * Y)
+ These functions return the interpolated value of Y for a given
+ point X, using the interpolation object INTERP, data arrays XA and
+ YA and the accelerator ACC.
+
+ -- Function: double gsl_interp_eval_deriv (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double X,
+ gsl_interp_accel * ACC)
+ -- Function: int gsl_interp_eval_deriv_e (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double X,
+ gsl_interp_accel * ACC, double * D)
+ These functions return the derivative D of an interpolated
+ function for a given point X, using the interpolation object
+ INTERP, data arrays XA and YA and the accelerator ACC.
+
+ -- Function: double gsl_interp_eval_deriv2 (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double X,
+ gsl_interp_accel * ACC)
+ -- Function: int gsl_interp_eval_deriv2_e (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double X,
+ gsl_interp_accel * ACC, double * D2)
+ These functions return the second derivative D2 of an interpolated
+ function for a given point X, using the interpolation object
+ INTERP, data arrays XA and YA and the accelerator ACC.
+
+ -- Function: double gsl_interp_eval_integ (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double A, double B,
+ gsl_interp_accel * ACC)
+ -- Function: int gsl_interp_eval_integ_e (const gsl_interp * INTERP,
+ const double XA[], const double YA[], double A, double B,
+ gsl_interp_accel * ACC, double * RESULT)
+ These functions return the numerical integral RESULT of an
+ interpolated function over the range [A, B], using the
+ interpolation object INTERP, data arrays XA and YA and the
+ accelerator ACC.
+
+
+File: gsl-ref.info, Node: Higher-level Interface, Next: Interpolation Example programs, Prev: Evaluation of Interpolating Functions, Up: Interpolation
+
+26.6 Higher-level Interface
+===========================
+
+The functions described in the previous sections required the user to
+supply pointers to the x and y arrays on each call. The following
+functions are equivalent to the corresponding `gsl_interp' functions
+but maintain a copy of this data in the `gsl_spline' object. This
+removes the need to pass both XA and YA as arguments on each
+evaluation. These functions are defined in the header file
+`gsl_spline.h'.
+
+ -- Function: gsl_spline * gsl_spline_alloc (const gsl_interp_type * T,
+ size_t SIZE)
+
+ -- Function: int gsl_spline_init (gsl_spline * SPLINE, const double
+ XA[], const double YA[], size_t SIZE)
+
+ -- Function: void gsl_spline_free (gsl_spline * SPLINE)
+
+ -- Function: const char * gsl_spline_name (const gsl_spline * SPLINE)
+
+ -- Function: unsigned int gsl_spline_min_size (const gsl_spline *
+ SPLINE)
+
+ -- Function: double gsl_spline_eval (const gsl_spline * SPLINE, double
+ X, gsl_interp_accel * ACC)
+ -- Function: int gsl_spline_eval_e (const gsl_spline * SPLINE, double
+ X, gsl_interp_accel * ACC, double * Y)
+
+ -- Function: double gsl_spline_eval_deriv (const gsl_spline * SPLINE,
+ double X, gsl_interp_accel * ACC)
+ -- Function: int gsl_spline_eval_deriv_e (const gsl_spline * SPLINE,
+ double X, gsl_interp_accel * ACC, double * D)
+
+ -- Function: double gsl_spline_eval_deriv2 (const gsl_spline * SPLINE,
+ double X, gsl_interp_accel * ACC)
+ -- Function: int gsl_spline_eval_deriv2_e (const gsl_spline * SPLINE,
+ double X, gsl_interp_accel * ACC, double * D2)
+
+ -- Function: double gsl_spline_eval_integ (const gsl_spline * SPLINE,
+ double A, double B, gsl_interp_accel * ACC)
+ -- Function: int gsl_spline_eval_integ_e (const gsl_spline * SPLINE,
+ double A, double B, gsl_interp_accel * ACC, double * RESULT)
+
+
+File: gsl-ref.info, Node: Interpolation Example programs, Next: Interpolation References and Further Reading, Prev: Higher-level Interface, Up: Interpolation
+
+26.7 Examples
+=============
+
+The following program demonstrates the use of the interpolation and
+spline functions. It computes a cubic spline interpolation of the
+10-point dataset (x_i, y_i) where x_i = i + \sin(i)/2 and y_i = i +
+\cos(i^2) for i = 0 \dots 9.
+
+ #include <stdlib.h>
+ #include <stdio.h>
+ #include <math.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_spline.h>
+
+ int
+ main (void)
+ {
+ int i;
+ double xi, yi, x[10], y[10];
+
+ printf ("#m=0,S=2\n");
+
+ for (i = 0; i < 10; i++)
+ {
+ x[i] = i + 0.5 * sin (i);
+ y[i] = i + cos (i * i);
+ printf ("%g %g\n", x[i], y[i]);
+ }
+
+ printf ("#m=1,S=0\n");
+
+ {
+ gsl_interp_accel *acc
+ = gsl_interp_accel_alloc ();
+ gsl_spline *spline
+ = gsl_spline_alloc (gsl_interp_cspline, 10);
+
+ gsl_spline_init (spline, x, y, 10);
+
+ for (xi = x[0]; xi < x[9]; xi += 0.01)
+ {
+ yi = gsl_spline_eval (spline, xi, acc);
+ printf ("%g %g\n", xi, yi);
+ }
+ gsl_spline_free (spline);
+ gsl_interp_accel_free (acc);
+ }
+ return 0;
+ }
+
+The output is designed to be used with the GNU plotutils `graph'
+program,
+
+ $ ./a.out > interp.dat
+ $ graph -T ps < interp.dat > interp.ps
+
+The result shows a smooth interpolation of the original points. The
+interpolation method can be changed simply by varying the first
+argument of `gsl_spline_alloc'.
+
+ The next program demonstrates a periodic cubic spline with 4 data
+points. Note that the first and last points must be supplied with the
+same y-value for a periodic spline.
+
+ #include <stdlib.h>
+ #include <stdio.h>
+ #include <math.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_spline.h>
+
+ int
+ main (void)
+ {
+ int N = 4;
+ double x[4] = {0.00, 0.10, 0.27, 0.30};
+ double y[4] = {0.15, 0.70, -0.10, 0.15}; /* Note: first = last
+ for periodic data */
+
+ gsl_interp_accel *acc = gsl_interp_accel_alloc ();
+ const gsl_interp_type *t = gsl_interp_cspline_periodic;
+ gsl_spline *spline = gsl_spline_alloc (t, N);
+
+ int i; double xi, yi;
+
+ printf ("#m=0,S=5\n");
+ for (i = 0; i < N; i++)
+ {
+ printf ("%g %g\n", x[i], y[i]);
+ }
+
+ printf ("#m=1,S=0\n");
+ gsl_spline_init (spline, x, y, N);
+
+ for (i = 0; i <= 100; i++)
+ {
+ xi = (1 - i / 100.0) * x[0] + (i / 100.0) * x[N-1];
+ yi = gsl_spline_eval (spline, xi, acc);
+ printf ("%g %g\n", xi, yi);
+ }
+
+ gsl_spline_free (spline);
+ gsl_interp_accel_free (acc);
+ return 0;
+ }
+
+The output can be plotted with GNU `graph'.
+
+ $ ./a.out > interp.dat
+ $ graph -T ps < interp.dat > interp.ps
+
+The result shows a periodic interpolation of the original points. The
+slope of the fitted curve is the same at the beginning and end of the
+data, and so is the second derivative.
+
+
+File: gsl-ref.info, Node: Interpolation References and Further Reading, Prev: Interpolation Example programs, Up: Interpolation
+
+26.8 References and Further Reading
+===================================
+
+Descriptions of the interpolation algorithms and further references can
+be found in the following books:
+
+ C.W. Ueberhuber, `Numerical Computation (Volume 1), Chapter 9
+ "Interpolation"', Springer (1997), ISBN 3-540-62058-3.
+
+     D.M. Young, R.T. Gregory, `A Survey of Numerical Mathematics
+     (Volume 1), Chapter 6.8', Dover (1988), ISBN 0-486-65691-8.
+
+
+
+File: gsl-ref.info, Node: Numerical Differentiation, Next: Chebyshev Approximations, Prev: Interpolation, Up: Top
+
+27 Numerical Differentiation
+****************************
+
+The functions described in this chapter compute numerical derivatives by
+finite differencing. An adaptive algorithm is used to find the best
+choice of finite difference and to estimate the error in the derivative.
+These functions are declared in the header file `gsl_deriv.h'.
+
+* Menu:
+
+* Numerical Differentiation functions::
+* Numerical Differentiation Examples::
+* Numerical Differentiation References::
+
+
+File: gsl-ref.info, Node: Numerical Differentiation functions, Next: Numerical Differentiation Examples, Up: Numerical Differentiation
+
+27.1 Functions
+==============
+
+ -- Function: int gsl_deriv_central (const gsl_function * F, double X,
+ double H, double * RESULT, double * ABSERR)
+ This function computes the numerical derivative of the function F
+ at the point X using an adaptive central difference algorithm with
+ a step-size of H. The derivative is returned in RESULT and an
+ estimate of its absolute error is returned in ABSERR.
+
+ The initial value of H is used to estimate an optimal step-size,
+ based on the scaling of the truncation error and round-off error
+ in the derivative calculation. The derivative is computed using a
+ 5-point rule for equally spaced abscissae at x-h, x-h/2, x, x+h/2,
+ x+h, with an error estimate taken from the difference between the
+ 5-point rule and the corresponding 3-point rule x-h, x, x+h. Note
+ that the value of the function at x does not contribute to the
+     derivative calculation, so only 4 points are actually used.
+
+ -- Function: int gsl_deriv_forward (const gsl_function * F, double X,
+ double H, double * RESULT, double * ABSERR)
+ This function computes the numerical derivative of the function F
+ at the point X using an adaptive forward difference algorithm with
+ a step-size of H. The function is evaluated only at points greater
+ than X, and never at X itself. The derivative is returned in
+ RESULT and an estimate of its absolute error is returned in
+ ABSERR. This function should be used if f(x) has a discontinuity
+ at X, or is undefined for values less than X.
+
+ The initial value of H is used to estimate an optimal step-size,
+ based on the scaling of the truncation error and round-off error
+ in the derivative calculation. The derivative at x is computed
+ using an "open" 4-point rule for equally spaced abscissae at x+h/4,
+ x+h/2, x+3h/4, x+h, with an error estimate taken from the
+ difference between the 4-point rule and the corresponding 2-point
+ rule x+h/2, x+h.
+
+ -- Function: int gsl_deriv_backward (const gsl_function * F, double X,
+ double H, double * RESULT, double * ABSERR)
+ This function computes the numerical derivative of the function F
+ at the point X using an adaptive backward difference algorithm
+ with a step-size of H. The function is evaluated only at points
+ less than X, and never at X itself. The derivative is returned in
+ RESULT and an estimate of its absolute error is returned in
+ ABSERR. This function should be used if f(x) has a discontinuity
+ at X, or is undefined for values greater than X.
+
+ This function is equivalent to calling `gsl_deriv_forward' with a
+ negative step-size.
+
+
+File: gsl-ref.info, Node: Numerical Differentiation Examples, Next: Numerical Differentiation References, Prev: Numerical Differentiation functions, Up: Numerical Differentiation
+
+27.2 Examples
+=============
+
+The following code estimates the derivative of the function f(x) =
+x^{3/2} at x=2 and at x=0. The function f(x) is undefined for x<0 so
+the derivative at x=0 is computed using `gsl_deriv_forward'.
+
+ #include <stdio.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_deriv.h>
+
+ double f (double x, void * params)
+ {
+ return pow (x, 1.5);
+ }
+
+ int
+ main (void)
+ {
+ gsl_function F;
+ double result, abserr;
+
+ F.function = &f;
+ F.params = 0;
+
+ printf ("f(x) = x^(3/2)\n");
+
+ gsl_deriv_central (&F, 2.0, 1e-8, &result, &abserr);
+ printf ("x = 2.0\n");
+ printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
+ printf ("exact = %.10f\n\n", 1.5 * sqrt(2.0));
+
+ gsl_deriv_forward (&F, 0.0, 1e-8, &result, &abserr);
+ printf ("x = 0.0\n");
+ printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
+ printf ("exact = %.10f\n", 0.0);
+
+ return 0;
+ }
+
+Here is the output of the program,
+
+ $ ./a.out
+ f(x) = x^(3/2)
+ x = 2.0
+ f'(x) = 2.1213203120 +/- 0.0000004064
+ exact = 2.1213203436
+
+ x = 0.0
+ f'(x) = 0.0000000160 +/- 0.0000000339
+ exact = 0.0000000000
+
+
+File: gsl-ref.info, Node: Numerical Differentiation References, Prev: Numerical Differentiation Examples, Up: Numerical Differentiation
+
+27.3 References and Further Reading
+===================================
+
+The algorithms used by these functions are described in the following
+sources:
+
+ Abramowitz and Stegun, `Handbook of Mathematical Functions',
+ Section 25.3.4, and Table 25.5 (Coefficients for Differentiation).
+
+ S.D. Conte and Carl de Boor, `Elementary Numerical Analysis: An
+ Algorithmic Approach', McGraw-Hill, 1972.
+
+
+File: gsl-ref.info, Node: Chebyshev Approximations, Next: Series Acceleration, Prev: Numerical Differentiation, Up: Top
+
+28 Chebyshev Approximations
+***************************
+
+This chapter describes routines for computing Chebyshev approximations
+to univariate functions. A Chebyshev approximation is a truncation of
+the series f(x) = \sum c_n T_n(x), where the Chebyshev polynomials
+T_n(x) = \cos(n \arccos x) provide an orthogonal basis of polynomials
+on the interval [-1,1] with the weight function 1 / \sqrt{1-x^2}. The
+first few Chebyshev polynomials are, T_0(x) = 1, T_1(x) = x, T_2(x) = 2
+x^2 - 1. For further information see Abramowitz & Stegun, Chapter 22.
+
+ The functions described in this chapter are declared in the header
+file `gsl_chebyshev.h'.
+
+* Menu:
+
+* Chebyshev Definitions::
+* Creation and Calculation of Chebyshev Series::
+* Chebyshev Series Evaluation::
+* Derivatives and Integrals::
+* Chebyshev Approximation examples::
+* Chebyshev Approximation References and Further Reading::
+
+
+File: gsl-ref.info, Node: Chebyshev Definitions, Next: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations
+
+28.1 Definitions
+================
+
+A Chebyshev series is stored using the following structure,
+
+ typedef struct
+ {
+ double * c; /* coefficients c[0] .. c[order] */
+ int order; /* order of expansion */
+ double a; /* lower interval point */
+ double b; /* upper interval point */
+ ...
+     } gsl_cheb_series;
+
+The approximation is made over the range [a,b] using ORDER+1 terms,
+including the coefficient c[0]. The series is computed using the
+following convention,
+
+ f(x) = (c_0 / 2) + \sum_{n=1} c_n T_n(x)
+
+which is needed when accessing the coefficients directly.
+
+
+File: gsl-ref.info, Node: Creation and Calculation of Chebyshev Series, Next: Chebyshev Series Evaluation, Prev: Chebyshev Definitions, Up: Chebyshev Approximations
+
+28.2 Creation and Calculation of Chebyshev Series
+=================================================
+
+ -- Function: gsl_cheb_series * gsl_cheb_alloc (const size_t N)
+ This function allocates space for a Chebyshev series of order N
+ and returns a pointer to a new `gsl_cheb_series' struct.
+
+ -- Function: void gsl_cheb_free (gsl_cheb_series * CS)
+ This function frees a previously allocated Chebyshev series CS.
+
+ -- Function: int gsl_cheb_init (gsl_cheb_series * CS, const
+ gsl_function * F, const double A, const double B)
+ This function computes the Chebyshev approximation CS for the
+ function F over the range (a,b) to the previously specified order.
+ The computation of the Chebyshev approximation is an O(n^2)
+ process, and requires n function evaluations.
+
+
+File: gsl-ref.info, Node: Chebyshev Series Evaluation, Next: Derivatives and Integrals, Prev: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations
+
+28.3 Chebyshev Series Evaluation
+================================
+
+ -- Function: double gsl_cheb_eval (const gsl_cheb_series * CS, double
+ X)
+ This function evaluates the Chebyshev series CS at a given point X.
+
+ -- Function: int gsl_cheb_eval_err (const gsl_cheb_series * CS, const
+ double X, double * RESULT, double * ABSERR)
+ This function computes the Chebyshev series CS at a given point X,
+ estimating both the series RESULT and its absolute error ABSERR.
+ The error estimate is made from the first neglected term in the
+ series.
+
+ -- Function: double gsl_cheb_eval_n (const gsl_cheb_series * CS,
+ size_t ORDER, double X)
+     This function evaluates the Chebyshev series CS at a given point
+     X, to (at most) the given order ORDER.
+
+ -- Function: int gsl_cheb_eval_n_err (const gsl_cheb_series * CS,
+ const size_t ORDER, const double X, double * RESULT, double *
+ ABSERR)
+ This function evaluates a Chebyshev series CS at a given point X,
+ estimating both the series RESULT and its absolute error ABSERR,
+ to (at most) the given order ORDER. The error estimate is made
+ from the first neglected term in the series.
+
+
+File: gsl-ref.info, Node: Derivatives and Integrals, Next: Chebyshev Approximation examples, Prev: Chebyshev Series Evaluation, Up: Chebyshev Approximations
+
+28.4 Derivatives and Integrals
+==============================
+
+The following functions allow a Chebyshev series to be differentiated or
+integrated, producing a new Chebyshev series. Note that the error
+estimate produced by evaluating the derivative series will be
+underestimated due to the contribution of higher order terms being
+neglected.
+
+ -- Function: int gsl_cheb_calc_deriv (gsl_cheb_series * DERIV, const
+ gsl_cheb_series * CS)
+ This function computes the derivative of the series CS, storing
+ the derivative coefficients in the previously allocated DERIV.
+ The two series CS and DERIV must have been allocated with the same
+ order.
+
+ -- Function: int gsl_cheb_calc_integ (gsl_cheb_series * INTEG, const
+ gsl_cheb_series * CS)
+ This function computes the integral of the series CS, storing the
+ integral coefficients in the previously allocated INTEG. The two
+ series CS and INTEG must have been allocated with the same order.
+ The lower limit of the integration is taken to be the left hand
+ end of the range A.
+
+
+File: gsl-ref.info, Node: Chebyshev Approximation examples, Next: Chebyshev Approximation References and Further Reading, Prev: Derivatives and Integrals, Up: Chebyshev Approximations
+
+28.5 Examples
+=============
+
+The following example program computes Chebyshev approximations to a
+step function. This is an extremely difficult approximation to make,
+due to the discontinuity, and was chosen as an example where
+approximation error is visible. For smooth functions the Chebyshev
+approximation converges extremely rapidly and errors would not be
+visible.
+
+ #include <stdio.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_chebyshev.h>
+
+ double
+ f (double x, void *p)
+ {
+ if (x < 0.5)
+ return 0.25;
+ else
+ return 0.75;
+ }
+
+ int
+ main (void)
+ {
+ int i, n = 10000;
+
+ gsl_cheb_series *cs = gsl_cheb_alloc (40);
+
+ gsl_function F;
+
+ F.function = f;
+ F.params = 0;
+
+ gsl_cheb_init (cs, &F, 0.0, 1.0);
+
+ for (i = 0; i < n; i++)
+ {
+ double x = i / (double)n;
+ double r10 = gsl_cheb_eval_n (cs, 10, x);
+ double r40 = gsl_cheb_eval (cs, x);
+ printf ("%g %g %g %g\n",
+ x, GSL_FN_EVAL (&F, x), r10, r40);
+ }
+
+ gsl_cheb_free (cs);
+
+ return 0;
+ }
+
+The output from the program gives the original function, 10-th order
+approximation and 40-th order approximation, all sampled at intervals of
+0.001 in x.
+
+
+File: gsl-ref.info, Node: Chebyshev Approximation References and Further Reading, Prev: Chebyshev Approximation examples, Up: Chebyshev Approximations
+
+28.6 References and Further Reading
+===================================
+
+The following paper describes the use of Chebyshev series,
+
+ R. Broucke, "Ten Subroutines for the Manipulation of Chebyshev
+ Series [C1] (Algorithm 446)". `Communications of the ACM' 16(4),
+ 254-256 (1973)
+
+
+File: gsl-ref.info, Node: Series Acceleration, Next: Wavelet Transforms, Prev: Chebyshev Approximations, Up: Top
+
+29 Series Acceleration
+**********************
+
+The functions described in this chapter accelerate the convergence of a
+series using the Levin u-transform. This method takes a small number of
+terms from the start of a series and uses a systematic approximation to
+compute an extrapolated value and an estimate of its error. The
+u-transform works for both convergent and divergent series, including
+asymptotic series.
+
+ These functions are declared in the header file `gsl_sum.h'.
+
+* Menu:
+
+* Acceleration functions::
+* Acceleration functions without error estimation::
+* Example of accelerating a series::
+* Series Acceleration References::
+
+
+File: gsl-ref.info, Node: Acceleration functions, Next: Acceleration functions without error estimation, Up: Series Acceleration
+
+29.1 Acceleration functions
+===========================
+
+The following functions compute the full Levin u-transform of a series
+with its error estimate. The error estimate is computed by propagating
+rounding errors from each term through to the final extrapolation.
+
+ These functions are intended for summing analytic series where each
+term is known to high accuracy, and the rounding errors are assumed to
+originate from finite precision. They are taken to be relative errors of
+order `GSL_DBL_EPSILON' for each term.
+
+ The calculation of the error in the extrapolated value is an O(N^2)
+process, which is expensive in time and memory. A faster but less
+reliable method which estimates the error from the convergence of the
+extrapolated value is described in the next section. For the method
+described here a full table of intermediate values and derivatives
+through to O(N) must be computed and stored, but this does give a
+reliable error estimate.
+
+ -- Function: gsl_sum_levin_u_workspace * gsl_sum_levin_u_alloc (size_t
+ N)
+ This function allocates a workspace for a Levin u-transform of N
+ terms. The size of the workspace is O(2n^2 + 3n).
+
+ -- Function: void gsl_sum_levin_u_free (gsl_sum_levin_u_workspace * W)
+ This function frees the memory associated with the workspace W.
+
+ -- Function: int gsl_sum_levin_u_accel (const double * ARRAY, size_t
+ ARRAY_SIZE, gsl_sum_levin_u_workspace * W, double *
+ SUM_ACCEL, double * ABSERR)
+ This function takes the terms of a series in ARRAY of size
+ ARRAY_SIZE and computes the extrapolated limit of the series using
+ a Levin u-transform. Additional working space must be provided in
+ W. The extrapolated sum is stored in SUM_ACCEL, with an estimate
+ of the absolute error stored in ABSERR. The actual term-by-term
+ sum is returned in `w->sum_plain'. The algorithm calculates the
+ truncation error (the difference between two successive
+ extrapolations) and round-off error (propagated from the individual
+ terms) to choose an optimal number of terms for the extrapolation.
+ All the terms of the series passed in through ARRAY should be
+ non-zero.
+
+
+File: gsl-ref.info, Node: Acceleration functions without error estimation, Next: Example of accelerating a series, Prev: Acceleration functions, Up: Series Acceleration
+
+29.2 Acceleration functions without error estimation
+====================================================
+
+The functions described in this section compute the Levin u-transform of
+series and attempt to estimate the error from the "truncation error" in
+the extrapolation, the difference between the final two approximations.
+Using this method avoids the need to compute an intermediate table of
+derivatives because the error is estimated from the behavior of the
+extrapolated value itself. Consequently this algorithm is an O(N)
+process and only requires O(N) terms of storage. If the series
+converges sufficiently fast then this procedure can be acceptable. It
+is appropriate to use this method when there is a need to compute many
+extrapolations of series with similar convergence properties at
+high-speed. For example, when numerically integrating a function
+defined by a parameterized series where the parameter varies only
+slightly. A reliable error estimate should be computed first using the
+full algorithm described above in order to verify the consistency of the
+results.
+
+ -- Function: gsl_sum_levin_utrunc_workspace *
+gsl_sum_levin_utrunc_alloc (size_t N)
+ This function allocates a workspace for a Levin u-transform of N
+ terms, without error estimation. The size of the workspace is
+ O(3n).
+
+ -- Function: void gsl_sum_levin_utrunc_free
+ (gsl_sum_levin_utrunc_workspace * W)
+ This function frees the memory associated with the workspace W.
+
+ -- Function: int gsl_sum_levin_utrunc_accel (const double * ARRAY,
+ size_t ARRAY_SIZE, gsl_sum_levin_utrunc_workspace * W, double
+ * SUM_ACCEL, double * ABSERR_TRUNC)
+ This function takes the terms of a series in ARRAY of size
+ ARRAY_SIZE and computes the extrapolated limit of the series using
+ a Levin u-transform. Additional working space must be provided in
+ W. The extrapolated sum is stored in SUM_ACCEL. The actual
+ term-by-term sum is returned in `w->sum_plain'. The algorithm
+ terminates when the difference between two successive
+ extrapolations reaches a minimum or is sufficiently small. The
+     difference between these two values is used as an estimate of the
+ error and is stored in ABSERR_TRUNC. To improve the reliability
+ of the algorithm the extrapolated values are replaced by moving
+ averages when calculating the truncation error, smoothing out any
+ fluctuations.
+
+
+File: gsl-ref.info, Node: Example of accelerating a series, Next: Series Acceleration References, Prev: Acceleration functions without error estimation, Up: Series Acceleration
+
+29.3 Examples
+=============
+
+The following code calculates an estimate of \zeta(2) = \pi^2 / 6 using
+the series,
+
+ \zeta(2) = 1 + 1/2^2 + 1/3^2 + 1/4^2 + ...
+
+After N terms the error in the sum is O(1/N), making direct summation
+of the series converge slowly.
+
+ #include <stdio.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_sum.h>
+
+ #define N 20
+
+ int
+ main (void)
+ {
+ double t[N];
+ double sum_accel, err;
+ double sum = 0;
+ int n;
+
+ gsl_sum_levin_u_workspace * w
+ = gsl_sum_levin_u_alloc (N);
+
+ const double zeta_2 = M_PI * M_PI / 6.0;
+
+ /* terms for zeta(2) = \sum_{n=1}^{\infty} 1/n^2 */
+
+ for (n = 0; n < N; n++)
+ {
+ double np1 = n + 1.0;
+ t[n] = 1.0 / (np1 * np1);
+ sum += t[n];
+ }
+
+ gsl_sum_levin_u_accel (t, N, w, &sum_accel, &err);
+
+ printf ("term-by-term sum = % .16f using %d terms\n",
+ sum, N);
+
+ printf ("term-by-term sum = % .16f using %d terms\n",
+ w->sum_plain, w->terms_used);
+
+ printf ("exact value = % .16f\n", zeta_2);
+ printf ("accelerated sum = % .16f using %d terms\n",
+ sum_accel, w->terms_used);
+
+ printf ("estimated error = % .16f\n", err);
+ printf ("actual error = % .16f\n",
+ sum_accel - zeta_2);
+
+ gsl_sum_levin_u_free (w);
+ return 0;
+ }
+
+The output below shows that the Levin u-transform is able to obtain an
+estimate of the sum to 1 part in 10^10 using the first eleven terms of
+the series. The error estimate returned by the function is also
+accurate, giving the correct number of significant digits.
+
+ $ ./a.out
+ term-by-term sum = 1.5961632439130233 using 20 terms
+ term-by-term sum = 1.5759958390005426 using 13 terms
+ exact value = 1.6449340668482264
+ accelerated sum = 1.6449340668166479 using 13 terms
+ estimated error = 0.0000000000508580
+ actual error = -0.0000000000315785
+
+Note that a direct summation of this series would require 10^10 terms
+to achieve the same precision as the accelerated sum does in 13 terms.
+
+
+File: gsl-ref.info, Node: Series Acceleration References, Prev: Example of accelerating a series, Up: Series Acceleration
+
+29.4 References and Further Reading
+===================================
+
+The algorithms used by these functions are described in the following
+papers,
+
+     T. Fessler, W.F. Ford, D.A. Smith, HURRY: An acceleration
+     algorithm for scalar sequences and series, `ACM Transactions on
+     Mathematical Software', 9(3):346-354, 1983, and Algorithm 602,
+     9(3):355-357, 1983.
+
+The theory of the u-transform was presented by Levin,
+
+ D. Levin, Development of Non-Linear Transformations for Improving
+ Convergence of Sequences, `Intern. J. Computer Math.' B3:371-388,
+ 1973.
+
+A review paper on the Levin Transform is available online,
+ Herbert H. H. Homeier, Scalar Levin-Type Sequence Transformations,
+ `http://arxiv.org/abs/math/0005209'.
+
+
+File: gsl-ref.info, Node: Wavelet Transforms, Next: Discrete Hankel Transforms, Prev: Series Acceleration, Up: Top
+
+30 Wavelet Transforms
+*********************
+
+This chapter describes functions for performing Discrete Wavelet
+Transforms (DWTs). The library includes wavelets for real data in both
+one and two dimensions. The wavelet functions are declared in the
+header files `gsl_wavelet.h' and `gsl_wavelet2d.h'.
+
+* Menu:
+
+* DWT Definitions::
+* DWT Initialization::
+* DWT Transform Functions::
+* DWT Examples::
+* DWT References::
+
+
+File: gsl-ref.info, Node: DWT Definitions, Next: DWT Initialization, Up: Wavelet Transforms
+
+30.1 Definitions
+================
+
+The continuous wavelet transform and its inverse are defined by the
+relations,
+
+ w(s,\tau) = \int f(t) * \psi^*_{s,\tau}(t) dt
+
+and,
+
+ f(t) = \int \int_{-\infty}^\infty w(s, \tau) * \psi_{s,\tau}(t) d\tau ds
+
+where the basis functions \psi_{s,\tau} are obtained by scaling and
+translation from a single function, referred to as the "mother wavelet".
+
+ The discrete version of the wavelet transform acts on equally-spaced
+samples, with fixed scaling and translation steps (s, \tau). The
+frequency and time axes are sampled "dyadically" on scales of 2^j
+through a level parameter j. The resulting family of functions
+{\psi_{j,n}} constitutes an orthonormal basis for square-integrable
+signals.
+
+ The discrete wavelet transform is an O(N) algorithm, and is also
+referred to as the "fast wavelet transform".
+
+
+File: gsl-ref.info, Node: DWT Initialization, Next: DWT Transform Functions, Prev: DWT Definitions, Up: Wavelet Transforms
+
+30.2 Initialization
+===================
+
+The `gsl_wavelet' structure contains the filter coefficients defining
+the wavelet and any associated offset parameters.
+
+ -- Function: gsl_wavelet * gsl_wavelet_alloc (const gsl_wavelet_type *
+ T, size_t K)
+ This function allocates and initializes a wavelet object of type
+ T. The parameter K selects the specific member of the wavelet
+ family. A null pointer is returned if insufficient memory is
+     available or if an unsupported member is selected.
+
+ The following wavelet types are implemented:
+
+ -- Wavelet: gsl_wavelet_daubechies
+ -- Wavelet: gsl_wavelet_daubechies_centered
+     This is the Daubechies wavelet family of maximum phase with k/2
+     vanishing moments.  The implemented wavelets are k=4, 6, ..., 20,
+     with k even.
+
+ -- Wavelet: gsl_wavelet_haar
+ -- Wavelet: gsl_wavelet_haar_centered
+ This is the Haar wavelet. The only valid choice of k for the Haar
+ wavelet is k=2.
+
+ -- Wavelet: gsl_wavelet_bspline
+ -- Wavelet: gsl_wavelet_bspline_centered
+ This is the biorthogonal B-spline wavelet family of order (i,j).
+ The implemented values of k = 100*i + j are 103, 105, 202, 204,
+     206, 208, 301, 303, 305, 307, 309.
+
+The centered forms of the wavelets align the coefficients of the various
+sub-bands on edges. Thus the resulting visualization of the
+coefficients of the wavelet transform in the phase plane is easier to
+understand.
+
+ -- Function: const char * gsl_wavelet_name (const gsl_wavelet * W)
+ This function returns a pointer to the name of the wavelet family
+ for W.
+
+ -- Function: void gsl_wavelet_free (gsl_wavelet * W)
+ This function frees the wavelet object W.
+
+ The `gsl_wavelet_workspace' structure contains scratch space of the
+same size as the input data and is used to hold intermediate results
+during the transform.
+
+ -- Function: gsl_wavelet_workspace * gsl_wavelet_workspace_alloc
+ (size_t N)
+ This function allocates a workspace for the discrete wavelet
+ transform. To perform a one-dimensional transform on N elements,
+ a workspace of size N must be provided. For two-dimensional
+ transforms of N-by-N matrices it is sufficient to allocate a
+ workspace of size N, since the transform operates on individual
+ rows and columns.
+
+ -- Function: void gsl_wavelet_workspace_free (gsl_wavelet_workspace *
+ WORK)
+ This function frees the allocated workspace WORK.
+
+
+File: gsl-ref.info, Node: DWT Transform Functions, Next: DWT Examples, Prev: DWT Initialization, Up: Wavelet Transforms
+
+30.3 Transform Functions
+========================
+
+This section describes the actual functions performing the discrete
+wavelet transform. Note that the transforms use periodic boundary
+conditions. If the signal is not periodic in the sample length then
+spurious coefficients will appear at the beginning and end of each level
+of the transform.
+
+* Menu:
+
+* DWT in one dimension::
+* DWT in two dimension::
+
+
+File: gsl-ref.info, Node: DWT in one dimension, Next: DWT in two dimension, Up: DWT Transform Functions
+
+30.3.1 Wavelet transforms in one dimension
+------------------------------------------
+
+ -- Function: int gsl_wavelet_transform (const gsl_wavelet * W, double
+ * DATA, size_t STRIDE, size_t N, gsl_wavelet_direction DIR,
+ gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet_transform_forward (const gsl_wavelet * W,
+ double * DATA, size_t STRIDE, size_t N, gsl_wavelet_workspace
+ * WORK)
+ -- Function: int gsl_wavelet_transform_inverse (const gsl_wavelet * W,
+ double * DATA, size_t STRIDE, size_t N, gsl_wavelet_workspace
+ * WORK)
+ These functions compute in-place forward and inverse discrete
+ wavelet transforms of length N with stride STRIDE on the array
+ DATA. The length of the transform N is restricted to powers of
+ two. For the `transform' version of the function the argument DIR
+ can be either `forward' (+1) or `backward' (-1). A workspace WORK
+ of length N must be provided.
+
+ For the forward transform, the elements of the original array are
+ replaced by the discrete wavelet transform f_i -> w_{j,k} in a
+     packed triangular storage layout, where j is the index of the
+     level, j = 0 ... J-1, and k is the index of the coefficient
+     within each level, k = 0 ... (2^j)-1.  The total number of
+     levels is J = \log_2(n).  The output data has the following form,
+
+ (s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, d_{2,0}, ...,
+ d_{j,k}, ..., d_{J-1,2^{J-1}-1})
+
+ where the first element is the smoothing coefficient s_{-1,0},
+ followed by the detail coefficients d_{j,k} for each level j. The
+ backward transform inverts these coefficients to obtain the
+ original data.
+
+ These functions return a status of `GSL_SUCCESS' upon successful
+ completion. `GSL_EINVAL' is returned if N is not an integer power
+ of 2 or if insufficient workspace is provided.
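+     As an aside, the packed layout admits a closed-form index: the
+     smoothing coefficient s_{-1,0} occupies element 0 and the detail
+     coefficient d_{j,k} occupies element 2^j + k.  The helper below
+     (not part of GSL) encodes that rule,

```c
#include <stddef.h>

/* Position of d_{j,k} in the packed output array: element 0 holds
   s_{-1,0}, then each level j contributes 2^j detail coefficients
   starting at offset 2^j. */
static size_t
dwt_index (size_t j, size_t k)
{
  return ((size_t) 1 << j) + k;
}
```

+     Summing the level sizes, 1 + 2^0 + 2^1 + ... + 2^{J-1} = 2^J = n,
+     confirms that the transform fits in place.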
+
+
+File: gsl-ref.info, Node: DWT in two dimension, Prev: DWT in one dimension, Up: DWT Transform Functions
+
+30.3.2 Wavelet transforms in two dimensions
+-------------------------------------------
+
+The library provides functions to perform two-dimensional discrete
+wavelet transforms on square matrices. The matrix dimensions must be an
+integer power of two. There are two possible orderings of the rows and
+columns in the two-dimensional wavelet transform, referred to as the
+"standard" and "non-standard" forms.
+
+ The "standard" transform performs a complete discrete wavelet
+transform on the rows of the matrix, followed by a separate complete
+discrete wavelet transform on the columns of the resulting
+row-transformed matrix. This procedure uses the same ordering as a
+two-dimensional Fourier transform.
+
+ The "non-standard" transform is performed in interleaved passes on
+the rows and columns of the matrix for each level of the transform. The
+first level of the transform is applied to the matrix rows, and then to
+the matrix columns. This procedure is then repeated across the rows and
+columns of the data for the subsequent levels of the transform, until
+the full discrete wavelet transform is complete. The non-standard form
+of the discrete wavelet transform is typically used in image analysis.
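+ The two orderings can be made concrete with a small Haar-based
+sketch.  This is illustrative code, not the library implementation:
+`haar_step' stands in for one level of the wavelet filter, and the
+fixed dimension N is an assumption of the example.

```c
#include <math.h>
#include <stddef.h>

#define N 4                     /* matrix dimension, a power of two */

/* One wavelet step on len strided elements: pairwise sums go to the
   first half (smooth), pairwise differences to the second (detail). */
static void
haar_step (double *v, size_t stride, size_t len)
{
  double tmp[N];
  size_t i;
  for (i = 0; i < len / 2; i++)
    {
      double a = v[(2 * i) * stride], b = v[(2 * i + 1) * stride];
      tmp[i] = (a + b) / sqrt (2.0);
      tmp[len / 2 + i] = (a - b) / sqrt (2.0);
    }
  for (i = 0; i < len; i++)
    v[i * stride] = tmp[i];
}

/* Complete 1-D transform: repeat the step on the smooth half. */
static void
haar_full (double *v, size_t stride, size_t len)
{
  size_t l;
  for (l = len; l >= 2; l /= 2)
    haar_step (v, stride, l);
}

/* Standard form: full transform of every row, then of every column. */
static void
standard2d (double m[N][N])
{
  size_t i, j;
  for (i = 0; i < N; i++) haar_full (m[i], 1, N);
  for (j = 0; j < N; j++) haar_full (&m[0][j], N, N);
}

/* Non-standard form: one step on the rows and one on the columns of
   the current top-left block at each level, then recurse into it. */
static void
nonstandard2d (double m[N][N])
{
  size_t len, i, j;
  for (len = N; len >= 2; len /= 2)
    {
      for (i = 0; i < len; i++) haar_step (m[i], 1, len);
      for (j = 0; j < len; j++) haar_step (&m[0][j], N, len);
    }
}
```

+Both orderings are orthogonal, so each preserves the sum of squares of
+the matrix entries; they differ only in how the mixed row/column
+detail bands are arranged.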
+
+ The functions described in this section are declared in the header
+file `gsl_wavelet2d.h'.
+
+ -- Function: int gsl_wavelet2d_transform (const gsl_wavelet * W,
+ double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_direction DIR, gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_transform_forward (const gsl_wavelet *
+ W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_transform_inverse (const gsl_wavelet *
+ W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_workspace * WORK)
+ These functions compute two-dimensional in-place forward and
+ inverse discrete wavelet transforms in standard and non-standard
+ forms on the array DATA stored in row-major form with dimensions
+ SIZE1 and SIZE2 and physical row length TDA. The dimensions must
+ be equal (square matrix) and are restricted to powers of two. For
+ the `transform' version of the function the argument DIR can be
+ either `forward' (+1) or `backward' (-1). A workspace WORK of the
+ appropriate size must be provided. On exit, the appropriate
+ elements of the array DATA are replaced by their two-dimensional
+ wavelet transform.
+
+ The functions return a status of `GSL_SUCCESS' upon successful
+ completion. `GSL_EINVAL' is returned if SIZE1 and SIZE2 are not
+ equal and integer powers of 2, or if insufficient workspace is
+ provided.
+
+ -- Function: int gsl_wavelet2d_transform_matrix (const gsl_wavelet *
+ W, gsl_matrix * M, gsl_wavelet_direction DIR,
+ gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_transform_matrix_forward (const
+ gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_transform_matrix_inverse (const
+ gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
+ These functions compute the two-dimensional in-place wavelet
+     transform on the matrix M.
+
+ -- Function: int gsl_wavelet2d_nstransform (const gsl_wavelet * W,
+ double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_direction DIR, gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_nstransform_forward (const gsl_wavelet
+ * W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_nstransform_inverse (const gsl_wavelet
+ * W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
+ gsl_wavelet_workspace * WORK)
+ These functions compute the two-dimensional wavelet transform in
+ non-standard form.
+
+ -- Function: int gsl_wavelet2d_nstransform_matrix (const gsl_wavelet *
+ W, gsl_matrix * M, gsl_wavelet_direction DIR,
+ gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_nstransform_matrix_forward (const
+ gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
+ -- Function: int gsl_wavelet2d_nstransform_matrix_inverse (const
+ gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
+ These functions compute the non-standard form of the
+     two-dimensional in-place wavelet transform on the matrix M.
+
+
+File: gsl-ref.info, Node: DWT Examples, Next: DWT References, Prev: DWT Transform Functions, Up: Wavelet Transforms
+
+30.4 Examples
+=============
+
+The following program demonstrates the use of the one-dimensional
+wavelet transform functions. It computes an approximation to an input
+signal (of length 256) using the 20 largest components of the wavelet
+transform, while setting the others to zero.
+
+     #include <stdio.h>
+     #include <stdlib.h>
+     #include <math.h>
+     #include <gsl/gsl_sort.h>
+     #include <gsl/gsl_wavelet.h>
+
+ int
+ main (int argc, char **argv)
+ {
+ int i, n = 256, nc = 20;
+ double *data = malloc (n * sizeof (double));
+ double *abscoeff = malloc (n * sizeof (double));
+ size_t *p = malloc (n * sizeof (size_t));
+
+ FILE * f;
+ gsl_wavelet *w;
+ gsl_wavelet_workspace *work;
+
+ w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);
+ work = gsl_wavelet_workspace_alloc (n);
+
+ f = fopen (argv[1], "r");
+ for (i = 0; i < n; i++)
+ {
+ fscanf (f, "%lg", &data[i]);
+ }
+ fclose (f);
+
+ gsl_wavelet_transform_forward (w, data, 1, n, work);
+
+ for (i = 0; i < n; i++)
+ {
+ abscoeff[i] = fabs (data[i]);
+ }
+
+ gsl_sort_index (p, abscoeff, 1, n);
+
+ for (i = 0; (i + nc) < n; i++)
+ data[p[i]] = 0;
+
+ gsl_wavelet_transform_inverse (w, data, 1, n, work);
+
+ for (i = 0; i < n; i++)
+ {
+ printf ("%g\n", data[i]);
+ }
+
+ gsl_wavelet_free (w);
+ gsl_wavelet_workspace_free (work);
+
+ free (data);
+ free (abscoeff);
+ free (p);
+ return 0;
+ }
+
+The output can be used with the GNU plotutils `graph' program,
+
+ $ ./a.out ecg.dat > dwt.dat
+ $ graph -T ps -x 0 256 32 -h 0.3 -a dwt.dat > dwt.ps
+
+
+File: gsl-ref.info, Node: DWT References, Prev: DWT Examples, Up: Wavelet Transforms
+
+30.5 References and Further Reading
+===================================
+
+The mathematical background to wavelet transforms is covered in the
+original lectures by Daubechies,
+
+ Ingrid Daubechies. Ten Lectures on Wavelets. `CBMS-NSF Regional
+ Conference Series in Applied Mathematics' (1992), SIAM, ISBN
+ 0898712742.
+
+An easy to read introduction to the subject with an emphasis on the
+application of the wavelet transform in various branches of science is,
+
+ Paul S. Addison. `The Illustrated Wavelet Transform Handbook'.
+ Institute of Physics Publishing (2002), ISBN 0750306920.
+
+For extensive coverage of signal analysis by wavelets, wavelet packets
+and local cosine bases see,
+
+ S. G. Mallat. `A wavelet tour of signal processing' (Second
+ edition). Academic Press (1999), ISBN 012466606X.
+
+The concept of multiresolution analysis underlying the wavelet transform
+is described in,
+
+ S. G. Mallat. Multiresolution Approximations and Wavelet
+ Orthonormal Bases of L^2(R). `Transactions of the American
+ Mathematical Society', 315(1), 1989, 69-87.
+
+ S. G. Mallat. A Theory for Multiresolution Signal
+ Decomposition--The Wavelet Representation. `IEEE Transactions on
+ Pattern Analysis and Machine Intelligence', 11, 1989, 674-693.
+
+The coefficients for the individual wavelet families implemented by the
+library can be found in the following papers,
+
+ I. Daubechies. Orthonormal Bases of Compactly Supported Wavelets.
+ `Communications on Pure and Applied Mathematics', 41 (1988)
+ 909-996.
+
+ A. Cohen, I. Daubechies, and J.-C. Feauveau. Biorthogonal Bases
+ of Compactly Supported Wavelets. `Communications on Pure and
+ Applied Mathematics', 45 (1992) 485-560.
+
+The PhysioNet archive of physiological datasets can be found online at
+`http://www.physionet.org/' and is described in the following paper,
+
+ Goldberger et al. PhysioBank, PhysioToolkit, and PhysioNet:
+ Components of a New Research Resource for Complex Physiologic
+ Signals. `Circulation' 101(23):e215-e220 2000.
+
+
+File: gsl-ref.info, Node: Discrete Hankel Transforms, Next: One dimensional Root-Finding, Prev: Wavelet Transforms, Up: Top
+
+31 Discrete Hankel Transforms
+*****************************
+
+This chapter describes functions for performing Discrete Hankel
+Transforms (DHTs). The functions are declared in the header file
+`gsl_dht.h'.
+
+* Menu:
+
+* Discrete Hankel Transform Definition::
+* Discrete Hankel Transform Functions::
+* Discrete Hankel Transform References::
+
+
+File: gsl-ref.info, Node: Discrete Hankel Transform Definition, Next: Discrete Hankel Transform Functions, Up: Discrete Hankel Transforms
+
+31.1 Definitions
+================
+
+The discrete Hankel transform acts on a vector of sampled data, where
+the samples are assumed to have been taken at points related to the
+zeroes of a Bessel function of fixed order; compare this to the case of
+the discrete Fourier transform, where samples are taken at points
+related to the zeroes of the sine or cosine function.
+
+ Specifically, let f(t) be a function on the unit interval. Then the
+finite \nu-Hankel transform of f(t) is defined to be the set of numbers
+g_m given by,
+ g_m = \int_0^1 t dt J_\nu(j_(\nu,m)t) f(t),
+
+so that,
+     f(t) = \sum_{m=1}^\infty (2 J_\nu(j_(\nu,m)t) / J_(\nu+1)(j_(\nu,m))^2) g_m.
+
+Suppose that f is band-limited in the sense that g_m=0 for m > M. Then
+we have the following fundamental sampling theorem.
+ g_m = (2 / j_(\nu,M)^2)
+ \sum_{k=1}^{M-1} f(j_(\nu,k)/j_(\nu,M))
+ (J_\nu(j_(\nu,m) j_(\nu,k) / j_(\nu,M)) / J_(\nu+1)(j_(\nu,k))^2).
+
+It is this discrete expression which defines the discrete Hankel
+transform. The kernel in the summation above defines the matrix of the
+\nu-Hankel transform of size M-1. The coefficients of this matrix,
+being dependent on \nu and M, must be precomputed and stored; the
+`gsl_dht' object encapsulates this data. The allocation function
+`gsl_dht_alloc' returns a `gsl_dht' object which must be properly
+initialized with `gsl_dht_init' before it can be used to perform
+transforms on data sample vectors, for fixed \nu and M, using the
+`gsl_dht_apply' function. The implementation allows a scaling of the
+fundamental interval, for convenience, so that one can assume the
+function is defined on the interval [0,X], rather than the unit
+interval.
+
+ Notice that by assumption f(t) vanishes at the endpoints of the
+interval, consistent with the inversion formula and the sampling
+formula given above. Therefore, this transform corresponds to an
+orthogonal expansion in eigenfunctions of the Dirichlet problem for the
+Bessel differential equation.
+
+
+File: gsl-ref.info, Node: Discrete Hankel Transform Functions, Next: Discrete Hankel Transform References, Prev: Discrete Hankel Transform Definition, Up: Discrete Hankel Transforms
+
+31.2 Functions
+==============
+
+ -- Function: gsl_dht * gsl_dht_alloc (size_t SIZE)
+ This function allocates a Discrete Hankel transform object of size
+ SIZE.
+
+ -- Function: int gsl_dht_init (gsl_dht * T, double NU, double XMAX)
+ This function initializes the transform T for the given values of
+     NU and XMAX.
+
+ -- Function: gsl_dht * gsl_dht_new (size_t SIZE, double NU, double
+ XMAX)
+ This function allocates a Discrete Hankel transform object of size
+     SIZE and initializes it for the given values of NU and XMAX.
+
+ -- Function: void gsl_dht_free (gsl_dht * T)
+ This function frees the transform T.
+
+ -- Function: int gsl_dht_apply (const gsl_dht * T, double * F_IN,
+ double * F_OUT)
+ This function applies the transform T to the array F_IN whose size
+ is equal to the size of the transform. The result is stored in
+ the array F_OUT which must be of the same length.
+
+ -- Function: double gsl_dht_x_sample (const gsl_dht * T, int N)
+ This function returns the value of the N-th sample point in the
+ unit interval, (j_{\nu,n+1}/j_{\nu,M}) X. These are the points
+ where the function f(t) is assumed to be sampled.
+
+ -- Function: double gsl_dht_k_sample (const gsl_dht * T, int N)
+ This function returns the value of the N-th sample point in
+ "k-space", j_{\nu,n+1}/X.
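+ For concreteness, the sample points for \nu = 0 can be reproduced
+from a few hard-coded zeros of J_0.  This sketch follows the formula
+quoted above for `gsl_dht_x_sample'; the function name and the fixed
+table are assumptions of the example, since the library computes the
+Bessel zeros internally.

```c
/* First zeros j_{0,m} of the Bessel function J_0 (standard values). */
static const double j0_zero[] = {
  2.404825557695773,            /* j_{0,1} */
  5.520078110286311,            /* j_{0,2} */
  8.653727912911013,            /* j_{0,3} */
  11.791534439014281            /* j_{0,4} */
};

/* n-th sample point x_n = (j_{0,n+1} / j_{0,M}) X on [0,X],
   mirroring the formula quoted for gsl_dht_x_sample with nu = 0. */
static double
dht_x_sample0 (int n, int M, double X)
{
  return (j0_zero[n] / j0_zero[M - 1]) * X;
}
```

+The sample points increase monotonically and the last one coincides
+with the endpoint X, consistent with f vanishing there.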
+
+
+File: gsl-ref.info, Node: Discrete Hankel Transform References, Prev: Discrete Hankel Transform Functions, Up: Discrete Hankel Transforms
+
+31.3 References and Further Reading
+===================================
+
+The algorithms used by these functions are described in the following
+papers,
+
+ H. Fisk Johnson, Comp. Phys. Comm. 43, 181 (1987).
+
+ D. Lemoine, J. Chem. Phys. 101, 3936 (1994).
+
+
+File: gsl-ref.info, Node: One dimensional Root-Finding, Next: One dimensional Minimization, Prev: Discrete Hankel Transforms, Up: Top
+
+32 One dimensional Root-Finding
+*******************************
+
+This chapter describes routines for finding roots of arbitrary
+one-dimensional functions. The library provides low level components
+for a variety of iterative solvers and convergence tests. These can be
+combined by the user to achieve the desired solution, with full access
+to the intermediate steps of the iteration. Each class of methods uses
+the same framework, so that you can switch between solvers at runtime
+without needing to recompile your program. Each instance of a solver
+keeps track of its own state, allowing the solvers to be used in
+multi-threaded programs.
+
+ The header file `gsl_roots.h' contains prototypes for the root
+finding functions and related declarations.
+
+* Menu:
+
+* Root Finding Overview::
+* Root Finding Caveats::
+* Initializing the Solver::
+* Providing the function to solve::
+* Search Bounds and Guesses::
+* Root Finding Iteration::
+* Search Stopping Parameters::
+* Root Bracketing Algorithms::
+* Root Finding Algorithms using Derivatives::
+* Root Finding Examples::
+* Root Finding References and Further Reading::
+
+
+File: gsl-ref.info, Node: Root Finding Overview, Next: Root Finding Caveats, Up: One dimensional Root-Finding
+
+32.1 Overview
+=============
+
+One-dimensional root finding algorithms can be divided into two classes,
+"root bracketing" and "root polishing". Algorithms which proceed by
+bracketing a root are guaranteed to converge. Bracketing algorithms
+begin with a bounded region known to contain a root. The size of this
+bounded region is reduced, iteratively, until it encloses the root to a
+desired tolerance. This provides a rigorous error estimate for the
+location of the root.
+
+ The technique of "root polishing" attempts to improve an initial
+guess to the root. These algorithms converge only if started "close
+enough" to a root, and sacrifice a rigorous error bound for speed. By
+approximating the behavior of a function in the vicinity of a root they
+attempt to find a higher order improvement of an initial guess. When
+the behavior of the function is compatible with the algorithm and a good
+initial guess is available a polishing algorithm can provide rapid
+convergence.
+
+ In GSL both types of algorithm are available in similar frameworks.
+The user provides a high-level driver for the algorithms, and the
+library provides the individual functions necessary for each of the
+steps. There are three main phases of the iteration. The steps are,
+
+ * initialize solver state, S, for algorithm T
+
+ * update S using the iteration T
+
+ * test S for convergence, and repeat iteration if necessary
+
+The state for bracketing solvers is held in a `gsl_root_fsolver'
+struct. The updating procedure uses only function evaluations (not
+derivatives). The state for root polishing solvers is held in a
+`gsl_root_fdfsolver' struct. The updates require both the function and
+its derivative (hence the name `fdf') to be supplied by the user.
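+ The three phases can be sketched with a hand-written bisection
+solver standing in for the library objects.  This is illustrative
+code only; the struct and function names are inventions of the
+example, but the GSL calls (`gsl_root_fsolver_set',
+`gsl_root_fsolver_iterate' and the `gsl_root_test_*' functions)
+follow the same shape.

```c
#include <math.h>

/* Minimal solver state, playing the role of gsl_root_fsolver. */
struct bisect_state
{
  double (*f) (double);
  double a, b;                  /* current bracketing interval */
  double root;                  /* current best estimate */
};

/* One update of the state, cf. gsl_root_fsolver_iterate. */
static void
bisect_iterate (struct bisect_state *s)
{
  double m = 0.5 * (s->a + s->b);
  if (s->f (s->a) * s->f (m) <= 0.0)
    s->b = m;                   /* root lies in the left half */
  else
    s->a = m;                   /* root lies in the right half */
  s->root = 0.5 * (s->a + s->b);
}

static double fn (double x) { return x * x - 5.0; }

/* Driver: initialize state, iterate, test for convergence. */
static double
solve (double a, double b, double epsabs)
{
  struct bisect_state s = { fn, a, b, 0.5 * (a + b) };
  while (fabs (s.b - s.a) > epsabs)   /* cf. gsl_root_test_interval */
    bisect_iterate (&s);
  return s.root;
}
```

+Because the interval halves at every step, the loop is guaranteed to
+terminate whenever the initial interval brackets a sign change.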
+
+
+File: gsl-ref.info, Node: Root Finding Caveats, Next: Initializing the Solver, Prev: Root Finding Overview, Up: One dimensional Root-Finding
+
+32.2 Caveats
+============
+
+Note that root finding functions can only search for one root at a time.
+When there are several roots in the search area, the first root to be
+found will be returned; however it is difficult to predict which of the
+roots this will be. _In most cases, no error will be reported if you
+try to find a root in an area where there is more than one._
+
+ Care must be taken when a function may have a multiple root (such as
+f(x) = (x-x_0)^2 or f(x) = (x-x_0)^3). It is not possible to use
+root-bracketing algorithms on even-multiplicity roots. For these
+algorithms the initial interval must contain a zero-crossing, where the
+function is negative at one end of the interval and positive at the
+other end. Roots with even-multiplicity do not cross zero, but only
+touch it instantaneously. Algorithms based on root bracketing will
+still work for odd-multiplicity roots (e.g. cubic, quintic, ...). Root
+polishing algorithms generally work with higher multiplicity roots, but
+at a reduced rate of convergence. In these cases the "Steffenson
+algorithm" can be used to accelerate the convergence of multiple roots.
+
+ While it is not absolutely required that f have a root within the
+search region, numerical root finding functions should not be used
+haphazardly to check for the _existence_ of roots. There are better
+ways to do this. Because it is easy to create situations where
+numerical root finders can fail, it is a bad idea to throw a root
+finder at a function you do not know much about. In general it is best
+to examine the function visually by plotting before searching for a
+root.
+
+
+File: gsl-ref.info, Node: Initializing the Solver, Next: Providing the function to solve, Prev: Root Finding Caveats, Up: One dimensional Root-Finding
+
+32.3 Initializing the Solver
+============================
+
+ -- Function: gsl_root_fsolver * gsl_root_fsolver_alloc (const
+ gsl_root_fsolver_type * T)
+ This function returns a pointer to a newly allocated instance of a
+ solver of type T. For example, the following code creates an
+ instance of a bisection solver,
+
+ const gsl_root_fsolver_type * T
+ = gsl_root_fsolver_bisection;
+ gsl_root_fsolver * s
+ = gsl_root_fsolver_alloc (T);
+
+ If there is insufficient memory to create the solver then the
+ function returns a null pointer and the error handler is invoked
+ with an error code of `GSL_ENOMEM'.
+
+ -- Function: gsl_root_fdfsolver * gsl_root_fdfsolver_alloc (const
+ gsl_root_fdfsolver_type * T)
+ This function returns a pointer to a newly allocated instance of a
+ derivative-based solver of type T. For example, the following
+ code creates an instance of a Newton-Raphson solver,
+
+ const gsl_root_fdfsolver_type * T
+ = gsl_root_fdfsolver_newton;
+ gsl_root_fdfsolver * s
+ = gsl_root_fdfsolver_alloc (T);
+
+ If there is insufficient memory to create the solver then the
+ function returns a null pointer and the error handler is invoked
+ with an error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_root_fsolver_set (gsl_root_fsolver * S,
+ gsl_function * F, double X_LOWER, double X_UPPER)
+ This function initializes, or reinitializes, an existing solver S
+ to use the function F and the initial search interval [X_LOWER,
+ X_UPPER].
+
+ -- Function: int gsl_root_fdfsolver_set (gsl_root_fdfsolver * S,
+ gsl_function_fdf * FDF, double ROOT)
+ This function initializes, or reinitializes, an existing solver S
+ to use the function and derivative FDF and the initial guess ROOT.
+
+ -- Function: void gsl_root_fsolver_free (gsl_root_fsolver * S)
+ -- Function: void gsl_root_fdfsolver_free (gsl_root_fdfsolver * S)
+ These functions free all the memory associated with the solver S.
+
+ -- Function: const char * gsl_root_fsolver_name (const
+ gsl_root_fsolver * S)
+ -- Function: const char * gsl_root_fdfsolver_name (const
+ gsl_root_fdfsolver * S)
+ These functions return a pointer to the name of the solver. For
+ example,
+
+ printf ("s is a '%s' solver\n",
+ gsl_root_fsolver_name (s));
+
+ would print something like `s is a 'bisection' solver'.
+
+
+File: gsl-ref.info, Node: Providing the function to solve, Next: Search Bounds and Guesses, Prev: Initializing the Solver, Up: One dimensional Root-Finding
+
+32.4 Providing the function to solve
+====================================
+
+You must provide a continuous function of one variable for the root
+finders to operate on, and, sometimes, its first derivative. In order
+to allow for general parameters the functions are defined by the
+following data types:
+
+ -- Data Type: gsl_function
+ This data type defines a general function with parameters.
+
+ `double (* function) (double X, void * PARAMS)'
+ this function should return the value f(x,params) for
+ argument X and parameters PARAMS
+
+ `void * params'
+ a pointer to the parameters of the function
+
+ Here is an example for the general quadratic function,
+
+ f(x) = a x^2 + b x + c
+
+with a = 3, b = 2, c = 1. The following code defines a `gsl_function'
+`F' which you could pass to a root finder:
+
+ struct my_f_params { double a; double b; double c; };
+
+ double
+ my_f (double x, void * p) {
+ struct my_f_params * params
+ = (struct my_f_params *)p;
+ double a = (params->a);
+ double b = (params->b);
+ double c = (params->c);
+
+ return (a * x + b) * x + c;
+ }
+
+ gsl_function F;
+ struct my_f_params params = { 3.0, 2.0, 1.0 };
+
+ F.function = &my_f;
+ F.params = &params;
+
+The function f(x) can be evaluated using the following macro,
+
+ #define GSL_FN_EVAL(F,x)
+ (*((F)->function))(x,(F)->params)
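+The same pattern can be exercised without the GSL headers by declaring
+a struct with the identical two fields; `fn_with_params' and `FN_EVAL'
+below are local stand-ins for `gsl_function' and `GSL_FN_EVAL', used
+only to keep the example self-contained.

```c
/* Local stand-in with the same fields as gsl_function. */
struct fn_with_params
{
  double (*function) (double x, void *params);
  void *params;
};

#define FN_EVAL(F,x) ((*((F)->function)) ((x), (F)->params))

struct my_f_params { double a; double b; double c; };

/* The quadratic f(x) = a x^2 + b x + c in Horner form. */
static double
my_f (double x, void *p)
{
  struct my_f_params *params = (struct my_f_params *) p;
  return (params->a * x + params->b) * x + params->c;
}
```

+Evaluating `FN_EVAL (&F, 2.0)' with a = 3, b = 2, c = 1 gives
+3(2)^2 + 2(2) + 1 = 17.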
+
+ -- Data Type: gsl_function_fdf
+ This data type defines a general function with parameters and its
+ first derivative.
+
+ `double (* f) (double X, void * PARAMS)'
+ this function should return the value of f(x,params) for
+ argument X and parameters PARAMS
+
+ `double (* df) (double X, void * PARAMS)'
+ this function should return the value of the derivative of F
+ with respect to X, f'(x,params), for argument X and
+ parameters PARAMS
+
+ `void (* fdf) (double X, void * PARAMS, double * F, double * Df)'
+ this function should set the values of the function F to
+ f(x,params) and its derivative DF to f'(x,params) for
+ argument X and parameters PARAMS. This function provides an
+ optimization of the separate functions for f(x) and f'(x)--it
+ is always faster to compute the function and its derivative
+ at the same time.
+
+ `void * params'
+ a pointer to the parameters of the function
+
+ Here is an example where f(x) = 2\exp(2x):
+
+ double
+ my_f (double x, void * params)
+ {
+ return exp (2 * x);
+ }
+
+ double
+ my_df (double x, void * params)
+ {
+ return 2 * exp (2 * x);
+ }
+
+ void
+ my_fdf (double x, void * params,
+ double * f, double * df)
+ {
+ double t = exp (2 * x);
+
+ *f = t;
+ *df = 2 * t; /* uses existing value */
+ }
+
+ gsl_function_fdf FDF;
+
+ FDF.f = &my_f;
+ FDF.df = &my_df;
+ FDF.fdf = &my_fdf;
+ FDF.params = 0;
+
+The function f(x) can be evaluated using the following macro,
+
+ #define GSL_FN_FDF_EVAL_F(FDF,x)
+ (*((FDF)->f))(x,(FDF)->params)
+
+The derivative f'(x) can be evaluated using the following macro,
+
+ #define GSL_FN_FDF_EVAL_DF(FDF,x)
+ (*((FDF)->df))(x,(FDF)->params)
+
+and both the function y = f(x) and its derivative dy = f'(x) can be
+evaluated at the same time using the following macro,
+
+ #define GSL_FN_FDF_EVAL_F_DF(FDF,x,y,dy)
+ (*((FDF)->fdf))(x,(FDF)->params,(y),(dy))
+
+The macro stores f(x) in its Y argument and f'(x) in its DY
+argument--both of these should be pointers to `double'.
+
+
+File: gsl-ref.info, Node: Search Bounds and Guesses, Next: Root Finding Iteration, Prev: Providing the function to solve, Up: One dimensional Root-Finding
+
+32.5 Search Bounds and Guesses
+==============================
+
+You provide either search bounds or an initial guess; this section
+explains how search bounds and guesses work and how function arguments
+control them.
+
+ A guess is simply an x value which is iterated until it is within
+the desired precision of a root. It takes the form of a `double'.
+
+Search bounds are the endpoints of an interval which is iterated until
+the length of the interval is smaller than the requested precision. The
+interval is defined by two values, the lower limit and the upper limit.
+Whether the endpoints are intended to be included in the interval or not
+depends on the context in which the interval is used.
+
+
+File: gsl-ref.info, Node: Root Finding Iteration, Next: Search Stopping Parameters, Prev: Search Bounds and Guesses, Up: One dimensional Root-Finding
+
+32.6 Iteration
+==============
+
+The following functions drive the iteration of each algorithm. Each
+function performs one iteration to update the state of any solver of the
+corresponding type. The same functions work for all solvers so that
+different methods can be substituted at runtime without modifications to
+the code.
+
+ -- Function: int gsl_root_fsolver_iterate (gsl_root_fsolver * S)
+ -- Function: int gsl_root_fdfsolver_iterate (gsl_root_fdfsolver * S)
+ These functions perform a single iteration of the solver S. If the
+ iteration encounters an unexpected problem then an error code will
+ be returned,
+
+ `GSL_EBADFUNC'
+ the iteration encountered a singular point where the function
+ or its derivative evaluated to `Inf' or `NaN'.
+
+ `GSL_EZERODIV'
+ the derivative of the function vanished at the iteration
+ point, preventing the algorithm from continuing without a
+ division by zero.
+
+ The solver maintains a current best estimate of the root at all
+times. The bracketing solvers also keep track of the current best
+interval bounding the root. This information can be accessed with the
+following auxiliary functions,
+
+ -- Function: double gsl_root_fsolver_root (const gsl_root_fsolver * S)
+ -- Function: double gsl_root_fdfsolver_root (const gsl_root_fdfsolver
+ * S)
+ These functions return the current estimate of the root for the
+ solver S.
+
+ -- Function: double gsl_root_fsolver_x_lower (const gsl_root_fsolver *
+ S)
+ -- Function: double gsl_root_fsolver_x_upper (const gsl_root_fsolver *
+ S)
+ These functions return the current bracketing interval for the
+ solver S.
+
+
+File: gsl-ref.info, Node: Search Stopping Parameters, Next: Root Bracketing Algorithms, Prev: Root Finding Iteration, Up: One dimensional Root-Finding
+
+32.7 Search Stopping Parameters
+===============================
+
+A root finding procedure should stop when one of the following
+conditions is true:
+
+ * A root has been found to within the user-specified precision.
+
+ * A user-specified maximum number of iterations has been reached.
+
+ * An error has occurred.
+
+The handling of these conditions is under user control. The functions
+below allow the user to test the precision of the current result in
+several standard ways.
+
+ -- Function: int gsl_root_test_interval (double X_LOWER, double
+ X_UPPER, double EPSABS, double EPSREL)
+ This function tests for the convergence of the interval [X_LOWER,
+ X_UPPER] with absolute error EPSABS and relative error EPSREL.
+ The test returns `GSL_SUCCESS' if the following condition is
+ achieved,
+
+ |a - b| < epsabs + epsrel min(|a|,|b|)
+
+ when the interval x = [a,b] does not include the origin. If the
+ interval includes the origin then \min(|a|,|b|) is replaced by
+ zero (which is the minimum value of |x| over the interval). This
+ ensures that the relative error is accurately estimated for roots
+ close to the origin.
+
+ This condition on the interval also implies that any estimate of
+ the root r in the interval satisfies the same condition with
+ respect to the true root r^*,
+
+ |r - r^*| < epsabs + epsrel r^*
+
+ assuming that the true root r^* is contained within the interval.
+
+ -- Function: int gsl_root_test_delta (double X1, double X0, double
+ EPSABS, double EPSREL)
+ This function tests for the convergence of the sequence ..., X0,
+ X1 with absolute error EPSABS and relative error EPSREL. The test
+ returns `GSL_SUCCESS' if the following condition is achieved,
+
+ |x_1 - x_0| < epsabs + epsrel |x_1|
+
+ and returns `GSL_CONTINUE' otherwise.
+
+ -- Function: int gsl_root_test_residual (double F, double EPSABS)
+ This function tests the residual value F against the absolute
+ error bound EPSABS. The test returns `GSL_SUCCESS' if the
+ following condition is achieved,
+
+ |f| < epsabs
+
+ and returns `GSL_CONTINUE' otherwise. This criterion is suitable
+ for situations where the precise location of the root, x, is
+ unimportant provided a value can be found where the residual,
+ |f(x)|, is small enough.
+
+
+File: gsl-ref.info, Node: Root Bracketing Algorithms, Next: Root Finding Algorithms using Derivatives, Prev: Search Stopping Parameters, Up: One dimensional Root-Finding
+
+32.8 Root Bracketing Algorithms
+===============================
+
+The root bracketing algorithms described in this section require an
+initial interval which is guaranteed to contain a root--if a and b are
+the endpoints of the interval then f(a) must differ in sign from f(b).
+This ensures that the function crosses zero at least once in the
+interval. If a valid initial interval is used then these algorithms
+cannot fail, provided the function is well-behaved.
+
+ Note that a bracketing algorithm cannot find roots of even
+multiplicity, since at these the function touches but does not cross
+the x-axis.
+
+ -- Solver: gsl_root_fsolver_bisection
+ The "bisection algorithm" is the simplest method of bracketing the
+ roots of a function. It is the slowest algorithm provided by the
+ library, with linear convergence.
+
+ On each iteration, the interval is bisected and the value of the
+ function at the midpoint is calculated. The sign of this value is
+ used to determine which half of the interval does not contain a
+ root. That half is discarded to give a new, smaller interval
+ containing the root. This procedure can be continued indefinitely
+ until the interval is sufficiently small.
+
+ At any time the current estimate of the root is taken as the
+ midpoint of the interval.
+
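The bisection step can be sketched in plain C. This is an illustrative
re-implementation rather than the library code; `bisect' and the example
function are local names, and f(a) and f(b) are assumed to differ in
sign,

```c
#include <math.h>

/* Minimal bisection sketch: halve the bracket [a,b] until its width
   drops below tol, then return the final midpoint. */
double
bisect (double (*f) (double), double a, double b, double tol)
{
  double fa = f (a);

  while (b - a > tol)
    {
      double m = 0.5 * (a + b);
      double fm = f (m);

      if ((fa < 0.0) == (fm < 0.0))
        { a = m; fa = fm; }     /* root is in [m, b] */
      else
        { b = m; }              /* root is in [a, m] */
    }

  return 0.5 * (a + b);
}

/* Example: root of x^2 - 5 at sqrt(5). */
double bi_quadratic (double x) { return x * x - 5.0; }
```

Each pass halves the interval, which is the linear convergence referred
to above.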
+
+ -- Solver: gsl_root_fsolver_falsepos
+ The "false position algorithm" is a method of finding roots based
+ on linear interpolation. Its convergence is linear, but it is
+ usually faster than bisection.
+
+ On each iteration a line is drawn between the endpoints (a,f(a))
+ and (b,f(b)) and the point where this line crosses the x-axis is
+ taken as a "midpoint". The value of the function at this point is
+ calculated and its sign is used to determine which side of the
+ interval does not contain a root. That side is discarded to give a
+ new, smaller interval containing the root. This procedure can be
+ continued indefinitely until the interval is sufficiently small.
+
+ The best estimate of the root is taken from the linear
+ interpolation of the interval on the current iteration.
+
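The false position step can be sketched similarly (an illustrative
re-implementation, not the library code). Note that in the worst case
one endpoint may never move, so the sketch runs a fixed number of
iterations rather than testing the interval width,

```c
#include <math.h>

/* False position sketch: the new point is where the chord through
   (a,f(a)) and (b,f(b)) crosses the x-axis. */
double
falsepos (double (*f) (double), double a, double b, int iters)
{
  double fa = f (a), fb = f (b);
  double x = a;
  int i;

  for (i = 0; i < iters; i++)
    {
      if (fb == fa)
        break;                  /* degenerate chord */
      x = b - fb * (b - a) / (fb - fa);
      double fx = f (x);
      if ((fa < 0.0) == (fx < 0.0))
        { a = x; fa = fx; }     /* root is in [x, b] */
      else
        { b = x; fb = fx; }     /* root is in [a, x] */
    }

  return x;
}

/* Example: root of x^2 - 5 at sqrt(5). */
double fp_quadratic (double x) { return x * x - 5.0; }
```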
+
+ -- Solver: gsl_root_fsolver_brent
+ The "Brent-Dekker method" (referred to here as "Brent's method")
+ combines an interpolation strategy with the bisection algorithm.
+ This produces a fast algorithm which is still robust.
+
+ On each iteration Brent's method approximates the function using an
+ interpolating curve. On the first iteration this is a linear
+ interpolation of the two endpoints. For subsequent iterations the
+ algorithm uses an inverse quadratic fit to the last three points,
+ for higher accuracy. The intercept of the interpolating curve
+ with the x-axis is taken as a guess for the root. If it lies
+ within the bounds of the current interval then the interpolating
+ point is accepted, and used to generate a smaller interval. If
+ the interpolating point is not accepted then the algorithm falls
+ back to an ordinary bisection step.
+
+ The best estimate of the root is taken from the most recent
+ interpolation or bisection.
+
+
+File: gsl-ref.info, Node: Root Finding Algorithms using Derivatives, Next: Root Finding Examples, Prev: Root Bracketing Algorithms, Up: One dimensional Root-Finding
+
+32.9 Root Finding Algorithms using Derivatives
+==============================================
+
+The root polishing algorithms described in this section require an
+initial guess for the location of the root. There is no absolute
+guarantee of convergence--the function must be suitable for this
+technique and the initial guess must be sufficiently close to the root
+for it to work. When these conditions are satisfied then convergence is
+quadratic.
+
+ These algorithms make use of both the function and its derivative.
+
+ -- Derivative Solver: gsl_root_fdfsolver_newton
+ Newton's Method is the standard root-polishing algorithm. The
+ algorithm begins with an initial guess for the location of the
+ root. On each iteration, a line tangent to the function f is
+ drawn at that position. The point where this line crosses the
+ x-axis becomes the new guess. The iteration is defined by the
+ following sequence,
+
+ x_{i+1} = x_i - f(x_i)/f'(x_i)
+
+ Newton's method converges quadratically for single roots, and
+ linearly for multiple roots.
+
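The iteration above can be written out directly (illustrative only;
`newton_root' and the example functions are local names, not part of the
library),

```c
#include <math.h>

/* Newton iteration sketch: x_{i+1} = x_i - f(x_i)/f'(x_i),
   repeated for a fixed number of steps. */
double
newton_root (double (*f) (double), double (*df) (double),
             double x, int iters)
{
  int i;

  for (i = 0; i < iters; i++)
    x = x - f (x) / df (x);

  return x;
}

/* Example: f(x) = x^2 - 5, f'(x) = 2x, single root at sqrt(5). */
double nr_f (double x)  { return x * x - 5.0; }
double nr_df (double x) { return 2.0 * x; }
```

For this example the iteration reduces to x' = (x + 5/x)/2, the
classical square-root recurrence, which doubles the number of correct
digits on each step.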
+
+ -- Derivative Solver: gsl_root_fdfsolver_secant
+ The "secant method" is a simplified version of Newton's method
+ which does not require the computation of the derivative on every
+ step.
+
+ On its first iteration the algorithm begins with Newton's method,
+ using the derivative to compute a first step,
+
+ x_1 = x_0 - f(x_0)/f'(x_0)
+
+ Subsequent iterations avoid the evaluation of the derivative by
+ replacing it with a numerical estimate, the slope of the line
+ through the previous two points,
+
+ x_{i+1} = x_i - f(x_i) / f'_{est} where
+ f'_{est} = (f(x_i) - f(x_{i-1}))/(x_i - x_{i-1})
+
+ When the derivative does not change significantly in the vicinity
+ of the root the secant method gives a useful saving.
+ Asymptotically the secant method is faster than Newton's method
+ whenever the cost of evaluating the derivative is more than 0.44
+ times the cost of evaluating the function itself. As with all
+ methods of computing a numerical derivative the estimate can
+ suffer from cancellation errors if the separation of the points
+ becomes too small.
+
+ On single roots, the method has a convergence of order (1 + \sqrt
+ 5)/2 (approximately 1.62). It converges linearly for multiple
+ roots.
+
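A sketch of the iteration in C (illustrative; `secant_root' is a local
name, not the library routine). The loop stops early if the secant
degenerates, the situation where the real solvers would report an
error,

```c
#include <math.h>

/* Secant iteration sketch: the derivative is replaced by the slope of
   the line through the two previous points. */
double
secant_root (double (*f) (double), double x0, double x1, int iters)
{
  double f0 = f (x0), f1 = f (x1);
  int i;

  for (i = 0; i < iters; i++)
    {
      if (x1 == x0 || f1 == f0)
        break;                  /* flat or zero-length secant */
      double slope = (f1 - f0) / (x1 - x0);   /* f'_{est} */
      double x2 = x1 - f1 / slope;
      x0 = x1;  f0 = f1;
      x1 = x2;  f1 = f (x1);
    }

  return x1;
}

/* Example: root of x^2 - 5 at sqrt(5). */
double sc_quadratic (double x) { return x * x - 5.0; }
```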
+
+ -- Derivative Solver: gsl_root_fdfsolver_steffenson
+ The "Steffensen Method" (spelled `steffenson' in the routine name)
+ provides the fastest convergence of all the
+ routines. It combines the basic Newton algorithm with an Aitken
+ "delta-squared" acceleration. If the Newton iterates are x_i then
+ the acceleration procedure generates a new sequence R_i,
+
+ R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_{i})
+
+ which converges faster than the original sequence under reasonable
+ conditions. The new sequence requires three terms before it can
+ produce its first value so the method returns accelerated values
+ on the second and subsequent iterations. On the first iteration
+ it returns the ordinary Newton estimate. The Newton iterate is
+ also returned if the denominator of the acceleration term ever
+ becomes zero.
+
+ As with all acceleration procedures this method can become
+ unstable if the function is not well-behaved.
+
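The acceleration formula can be sketched as a small helper
(illustrative; `aitken' is a local name, not a library routine),

```c
#include <math.h>

/* Aitken delta-squared sketch: given three successive iterates
   x0, x1, x2 of a linearly convergent sequence, return the
   accelerated value R = x0 - (x1 - x0)^2 / (x2 - 2 x1 + x0).
   Falls back to the plain iterate when the denominator vanishes,
   as described above for the solver. */
double
aitken (double x0, double x1, double x2)
{
  double d = x2 - 2.0 * x1 + x0;

  if (d == 0.0)
    return x2;

  return x0 - (x1 - x0) * (x1 - x0) / d;
}
```

Applied to the linearly convergent fixed-point iteration
x_{n+1} = \cos(x_n) started from 1.0, the accelerated value is an order
of magnitude closer to the limit 0.7390851... than the plain third
iterate.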
+
+File: gsl-ref.info, Node: Root Finding Examples, Next: Root Finding References and Further Reading, Prev: Root Finding Algorithms using Derivatives, Up: One dimensional Root-Finding
+
+32.10 Examples
+==============
+
+For any root finding algorithm we need to prepare the function to be
+solved. For this example we will use the general quadratic equation
+described earlier. We first need a header file (`demo_fn.h') to define
+the function parameters,
+
+ struct quadratic_params
+ {
+ double a, b, c;
+ };
+
+ double quadratic (double x, void *params);
+ double quadratic_deriv (double x, void *params);
+ void quadratic_fdf (double x, void *params,
+ double *y, double *dy);
+
+We place the function definitions in a separate file (`demo_fn.c'),
+
+ double
+ quadratic (double x, void *params)
+ {
+ struct quadratic_params *p
+ = (struct quadratic_params *) params;
+
+ double a = p->a;
+ double b = p->b;
+ double c = p->c;
+
+ return (a * x + b) * x + c;
+ }
+
+ double
+ quadratic_deriv (double x, void *params)
+ {
+ struct quadratic_params *p
+ = (struct quadratic_params *) params;
+
+ double a = p->a;
+ double b = p->b;
+ double c = p->c;
+
+ return 2.0 * a * x + b;
+ }
+
+ void
+ quadratic_fdf (double x, void *params,
+ double *y, double *dy)
+ {
+ struct quadratic_params *p
+ = (struct quadratic_params *) params;
+
+ double a = p->a;
+ double b = p->b;
+ double c = p->c;
+
+ *y = (a * x + b) * x + c;
+ *dy = 2.0 * a * x + b;
+ }
+
+The first program uses the function solver `gsl_root_fsolver_brent' for
+Brent's method and the general quadratic defined above to solve the
+following equation,
+
+ x^2 - 5 = 0
+
+with solution x = \sqrt 5 = 2.236068...
+
+ #include <stdio.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_roots.h>
+
+ #include "demo_fn.h"
+ #include "demo_fn.c"
+
+ int
+ main (void)
+ {
+ int status;
+ int iter = 0, max_iter = 100;
+ const gsl_root_fsolver_type *T;
+ gsl_root_fsolver *s;
+ double r = 0, r_expected = sqrt (5.0);
+ double x_lo = 0.0, x_hi = 5.0;
+ gsl_function F;
+ struct quadratic_params params = {1.0, 0.0, -5.0};
+
+ F.function = &quadratic;
+ F.params = &params;
+
+ T = gsl_root_fsolver_brent;
+ s = gsl_root_fsolver_alloc (T);
+ gsl_root_fsolver_set (s, &F, x_lo, x_hi);
+
+ printf ("using %s method\n",
+ gsl_root_fsolver_name (s));
+
+ printf ("%5s [%9s, %9s] %9s %10s %9s\n",
+ "iter", "lower", "upper", "root",
+ "err", "err(est)");
+
+ do
+ {
+ iter++;
+ status = gsl_root_fsolver_iterate (s);
+ r = gsl_root_fsolver_root (s);
+ x_lo = gsl_root_fsolver_x_lower (s);
+ x_hi = gsl_root_fsolver_x_upper (s);
+ status = gsl_root_test_interval (x_lo, x_hi,
+ 0, 0.001);
+
+ if (status == GSL_SUCCESS)
+ printf ("Converged:\n");
+
+ printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
+ iter, x_lo, x_hi,
+ r, r - r_expected,
+ x_hi - x_lo);
+ }
+ while (status == GSL_CONTINUE && iter < max_iter);
+
+ gsl_root_fsolver_free (s);
+
+ return status;
+ }
+
+Here are the results of the iterations,
+
+ $ ./a.out
+ using brent method
+ iter [ lower, upper] root err err(est)
+ 1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000
+ 2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000
+ 3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000
+ 4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000
+ 5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300
+ Converged:
+ 6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666
+
+If the program is modified to use the bisection solver instead of
+Brent's method, by changing `gsl_root_fsolver_brent' to
+`gsl_root_fsolver_bisection', the slower convergence of the bisection
+method can be observed,
+
+ $ ./a.out
+ using bisection method
+ iter [ lower, upper] root err err(est)
+ 1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000
+ 2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000
+ 3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000
+ 4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000
+ 5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500
+ 6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250
+ 7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625
+ 8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312
+ 9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656
+ 10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828
+ 11 [2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414
+ Converged:
+ 12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207
+
+ The next program solves the same function using a derivative solver
+instead.
+
+ #include <stdio.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_roots.h>
+
+ #include "demo_fn.h"
+ #include "demo_fn.c"
+
+ int
+ main (void)
+ {
+ int status;
+ int iter = 0, max_iter = 100;
+ const gsl_root_fdfsolver_type *T;
+ gsl_root_fdfsolver *s;
+ double x0, x = 5.0, r_expected = sqrt (5.0);
+ gsl_function_fdf FDF;
+ struct quadratic_params params = {1.0, 0.0, -5.0};
+
+ FDF.f = &quadratic;
+ FDF.df = &quadratic_deriv;
+ FDF.fdf = &quadratic_fdf;
+ FDF.params = &params;
+
+ T = gsl_root_fdfsolver_newton;
+ s = gsl_root_fdfsolver_alloc (T);
+ gsl_root_fdfsolver_set (s, &FDF, x);
+
+ printf ("using %s method\n",
+ gsl_root_fdfsolver_name (s));
+
+ printf ("%-5s %10s %10s %10s\n",
+ "iter", "root", "err", "err(est)");
+ do
+ {
+ iter++;
+ status = gsl_root_fdfsolver_iterate (s);
+ x0 = x;
+ x = gsl_root_fdfsolver_root (s);
+ status = gsl_root_test_delta (x, x0, 0, 1e-3);
+
+ if (status == GSL_SUCCESS)
+ printf ("Converged:\n");
+
+ printf ("%5d %10.7f %+10.7f %10.7f\n",
+ iter, x, x - r_expected, x - x0);
+ }
+ while (status == GSL_CONTINUE && iter < max_iter);
+
+ gsl_root_fdfsolver_free (s);
+ return status;
+ }
+
+Here are the results for Newton's method,
+
+ $ ./a.out
+ using newton method
+ iter root err err(est)
+ 1 3.0000000 +0.7639320 -2.0000000
+ 2 2.3333333 +0.0972654 -0.6666667
+ 3 2.2380952 +0.0020273 -0.0952381
+ Converged:
+ 4 2.2360689 +0.0000009 -0.0020263
+
+Note that the error can be estimated more accurately by taking the
+difference between the current iterate and next iterate rather than the
+previous iterate. The other derivative solvers can be investigated by
+changing `gsl_root_fdfsolver_newton' to `gsl_root_fdfsolver_secant' or
+`gsl_root_fdfsolver_steffenson'.
+
+
+File: gsl-ref.info, Node: Root Finding References and Further Reading, Prev: Root Finding Examples, Up: One dimensional Root-Finding
+
+32.11 References and Further Reading
+====================================
+
+For information on the Brent-Dekker algorithm see the following two
+papers,
+
+ R. P. Brent, "An algorithm with guaranteed convergence for finding
+ a zero of a function", `Computer Journal', 14 (1971) 422-425
+
+ J. C. P. Bus and T. J. Dekker, "Two Efficient Algorithms with
+ Guaranteed Convergence for Finding a Zero of a Function", `ACM
+ Transactions on Mathematical Software', Vol. 1 No. 4 (1975) 330-345
+
+
+File: gsl-ref.info, Node: One dimensional Minimization, Next: Multidimensional Root-Finding, Prev: One dimensional Root-Finding, Up: Top
+
+33 One dimensional Minimization
+*******************************
+
+This chapter describes routines for finding minima of arbitrary
+one-dimensional functions. The library provides low level components
+for a variety of iterative minimizers and convergence tests. These can
+be combined by the user to achieve the desired solution, with full
+access to the intermediate steps of the algorithms. Each class of
+methods uses the same framework, so that you can switch between
+minimizers at runtime without needing to recompile your program. Each
+instance of a minimizer keeps track of its own state, allowing the
+minimizers to be used in multi-threaded programs.
+
+ The header file `gsl_min.h' contains prototypes for the minimization
+functions and related declarations. To use the minimization algorithms
+to find the maximum of a function simply invert its sign.
+
+* Menu:
+
+* Minimization Overview::
+* Minimization Caveats::
+* Initializing the Minimizer::
+* Providing the function to minimize::
+* Minimization Iteration::
+* Minimization Stopping Parameters::
+* Minimization Algorithms::
+* Minimization Examples::
+* Minimization References and Further Reading::
+
+
+File: gsl-ref.info, Node: Minimization Overview, Next: Minimization Caveats, Up: One dimensional Minimization
+
+33.1 Overview
+=============
+
+The minimization algorithms begin with a bounded region known to contain
+a minimum. The region is described by a lower bound a and an upper
+bound b, with an estimate of the location of the minimum x.
+
+The value of the function at x must be less than the value of the
+function at the ends of the interval,
+
+ f(a) > f(x) < f(b)
+
+This condition guarantees that a minimum is contained somewhere within
+the interval. On each iteration a new point x' is selected using one
+of the available algorithms. If the new point is a better estimate of
+the minimum, i.e. where f(x') < f(x), then the current estimate of the
+minimum x is updated. The new point also allows the size of the
+bounded interval to be reduced, by choosing the most compact set of
+points which satisfies the constraint f(a) > f(x) < f(b). The interval
+is reduced until it encloses the true minimum to a desired tolerance.
+This provides a best estimate of the location of the minimum and a
+rigorous error estimate.
+
+ Several bracketing algorithms are available within a single
+framework. The user provides a high-level driver for the algorithm,
+and the library provides the individual functions necessary for each of
+the steps. There are three main phases of the iteration. The steps
+are,
+
+ * initialize minimizer state, S, for algorithm T
+
+ * update S using the iteration T
+
+ * test S for convergence, and repeat iteration if necessary
+
+The state for the minimizers is held in a `gsl_min_fminimizer' struct.
+The updating procedure uses only function evaluations (not derivatives).
+
+
+File: gsl-ref.info, Node: Minimization Caveats, Next: Initializing the Minimizer, Prev: Minimization Overview, Up: One dimensional Minimization
+
+33.2 Caveats
+============
+
+Note that minimization functions can only search for one minimum at a
+time. When there are several minima in the search area, the first
+minimum to be found will be returned; however, it is difficult to predict
+which of the minima this will be. _In most cases, no error will be
+reported if you try to find a minimum in an area where there is more
+than one._
+
+ With all minimization algorithms it can be difficult to determine the
+location of the minimum to full numerical precision. The behavior of
+the function in the region of the minimum x^* can be approximated by a
+Taylor expansion,
+
+ y = f(x^*) + (1/2) f''(x^*) (x - x^*)^2
+
+and the second term of this expansion can be lost when added to the
+first term at finite precision. This magnifies the error in locating
+x^*, making it proportional to \sqrt \epsilon (where \epsilon is the
+relative accuracy of the floating point numbers). For functions with
+higher order minima, such as x^4, the magnification of the error is
+correspondingly worse. The best that can be achieved is to converge to
+the limit of numerical accuracy in the function values, rather than the
+location of the minimum itself.
+
+
+File: gsl-ref.info, Node: Initializing the Minimizer, Next: Providing the function to minimize, Prev: Minimization Caveats, Up: One dimensional Minimization
+
+33.3 Initializing the Minimizer
+===============================
+
+ -- Function: gsl_min_fminimizer * gsl_min_fminimizer_alloc (const
+ gsl_min_fminimizer_type * T)
+ This function returns a pointer to a newly allocated instance of a
+ minimizer of type T. For example, the following code creates an
+ instance of a golden section minimizer,
+
+ const gsl_min_fminimizer_type * T
+ = gsl_min_fminimizer_goldensection;
+ gsl_min_fminimizer * s
+ = gsl_min_fminimizer_alloc (T);
+
+ If there is insufficient memory to create the minimizer then the
+ function returns a null pointer and the error handler is invoked
+ with an error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_min_fminimizer_set (gsl_min_fminimizer * S,
+ gsl_function * F, double X_MINIMUM, double X_LOWER, double
+ X_UPPER)
+ This function sets, or resets, an existing minimizer S to use the
+ function F and the initial search interval [X_LOWER, X_UPPER],
+ with a guess for the location of the minimum X_MINIMUM.
+
+ If the interval given does not contain a minimum, then the function
+ returns an error code of `GSL_EINVAL'.
+
+ -- Function: int gsl_min_fminimizer_set_with_values
+ (gsl_min_fminimizer * S, gsl_function * F, double X_MINIMUM,
+ double F_MINIMUM, double X_LOWER, double F_LOWER, double
+ X_UPPER, double F_UPPER)
+ This function is equivalent to `gsl_min_fminimizer_set' but uses
+ the values F_MINIMUM, F_LOWER and F_UPPER instead of computing
+ `f(x_minimum)', `f(x_lower)' and `f(x_upper)'.
+
+ -- Function: void gsl_min_fminimizer_free (gsl_min_fminimizer * S)
+ This function frees all the memory associated with the minimizer S.
+
+ -- Function: const char * gsl_min_fminimizer_name (const
+ gsl_min_fminimizer * S)
+ This function returns a pointer to the name of the minimizer. For
+ example,
+
+ printf ("s is a '%s' minimizer\n",
+ gsl_min_fminimizer_name (s));
+
+ would print something like `s is a 'brent' minimizer'.
+
+
+File: gsl-ref.info, Node: Providing the function to minimize, Next: Minimization Iteration, Prev: Initializing the Minimizer, Up: One dimensional Minimization
+
+33.4 Providing the function to minimize
+=======================================
+
+You must provide a continuous function of one variable for the
+minimizers to operate on. In order to allow for general parameters the
+functions are defined by a `gsl_function' data type (*note Providing
+the function to solve::).
+
+
+File: gsl-ref.info, Node: Minimization Iteration, Next: Minimization Stopping Parameters, Prev: Providing the function to minimize, Up: One dimensional Minimization
+
+33.5 Iteration
+==============
+
+The following functions drive the iteration of each algorithm. Each
+function performs one iteration to update the state of any minimizer of
+the corresponding type. The same functions work for all minimizers so
+that different methods can be substituted at runtime without
+modifications to the code.
+
+ -- Function: int gsl_min_fminimizer_iterate (gsl_min_fminimizer * S)
+ This function performs a single iteration of the minimizer S. If
+ the iteration encounters an unexpected problem then an error code
+ will be returned,
+
+ `GSL_EBADFUNC'
+ the iteration encountered a singular point where the function
+ evaluated to `Inf' or `NaN'.
+
+ `GSL_FAILURE'
+ the algorithm could not improve the current best
+ approximation or bounding interval.
+
+ The minimizer maintains a current best estimate of the position of
+the minimum at all times, and the current interval bounding the minimum.
+This information can be accessed with the following auxiliary functions,
+
+ -- Function: double gsl_min_fminimizer_x_minimum (const
+ gsl_min_fminimizer * S)
+ This function returns the current estimate of the position of the
+ minimum for the minimizer S.
+
+ -- Function: double gsl_min_fminimizer_x_upper (const
+ gsl_min_fminimizer * S)
+ -- Function: double gsl_min_fminimizer_x_lower (const
+ gsl_min_fminimizer * S)
+ These functions return the current upper and lower bound of the
+ interval for the minimizer S.
+
+ -- Function: double gsl_min_fminimizer_f_minimum (const
+ gsl_min_fminimizer * S)
+ -- Function: double gsl_min_fminimizer_f_upper (const
+ gsl_min_fminimizer * S)
+ -- Function: double gsl_min_fminimizer_f_lower (const
+ gsl_min_fminimizer * S)
+ These functions return the value of the function at the current
+ estimate of the minimum and at the upper and lower bounds of the
+ interval for the minimizer S.
+
+
+File: gsl-ref.info, Node: Minimization Stopping Parameters, Next: Minimization Algorithms, Prev: Minimization Iteration, Up: One dimensional Minimization
+
+33.6 Stopping Parameters
+========================
+
+A minimization procedure should stop when one of the following
+conditions is true:
+
+ * A minimum has been found to within the user-specified precision.
+
+ * A user-specified maximum number of iterations has been reached.
+
+ * An error has occurred.
+
+The handling of these conditions is under user control. The function
+below allows the user to test the precision of the current result.
+
+ -- Function: int gsl_min_test_interval (double X_LOWER, double
+ X_UPPER, double EPSABS, double EPSREL)
+ This function tests for the convergence of the interval [X_LOWER,
+ X_UPPER] with absolute error EPSABS and relative error EPSREL.
+ The test returns `GSL_SUCCESS' if the following condition is
+ achieved,
+
+ |a - b| < epsabs + epsrel min(|a|,|b|)
+
+ when the interval x = [a,b] does not include the origin. If the
+ interval includes the origin then \min(|a|,|b|) is replaced by
+ zero (which is the minimum value of |x| over the interval). This
+ ensures that the relative error is accurately estimated for minima
+ close to the origin.
+
+ This condition on the interval also implies that any estimate of
+ the minimum x_m in the interval satisfies the same condition with
+ respect to the true minimum x_m^*,
+
+ |x_m - x_m^*| < epsabs + epsrel x_m^*
+
+ assuming that the true minimum x_m^* is contained within the
+ interval.
+
+
+File: gsl-ref.info, Node: Minimization Algorithms, Next: Minimization Examples, Prev: Minimization Stopping Parameters, Up: One dimensional Minimization
+
+33.7 Minimization Algorithms
+============================
+
+The minimization algorithms described in this section require an initial
+interval which is guaranteed to contain a minimum--if a and b are the
+endpoints of the interval and x is an estimate of the minimum then f(a)
+> f(x) < f(b). This ensures that the function has at least one minimum
+somewhere in the interval. If a valid initial interval is used then
+these algorithms cannot fail, provided the function is well-behaved.
+
+ -- Minimizer: gsl_min_fminimizer_goldensection
+ The "golden section algorithm" is the simplest method of bracketing
+ the minimum of a function. It is the slowest algorithm provided
+ by the library, with linear convergence.
+
+ On each iteration, the algorithm first compares the subintervals
+ from the endpoints to the current minimum. The larger subinterval
+ is divided in a golden section (using the famous ratio (3-\sqrt
+ 5)/2 = 0.3819660...) and the value of the function at this new
+ point is calculated. The new value is used with the constraint
+ f(a') > f(x') < f(b') to select a new interval containing the
+ minimum, by discarding the least useful point. This procedure can
+ be continued indefinitely until the interval is sufficiently
+ small. Choosing the golden section as the bisection ratio can be
+ shown to provide the fastest convergence for this type of
+ algorithm.
+
+
+ -- Minimizer: gsl_min_fminimizer_brent
+ The "Brent minimization algorithm" combines a parabolic
+ interpolation with the golden section algorithm. This produces a
+ fast algorithm which is still robust.
+
+ The outline of the algorithm can be summarized as follows: on each
+ iteration Brent's method approximates the function using an
+ interpolating parabola through three existing points. The minimum
+ of the parabola is taken as a guess for the minimum. If it lies
+ within the bounds of the current interval then the interpolating
+ point is accepted, and used to generate a smaller interval. If
+ the interpolating point is not accepted then the algorithm falls
+ back to an ordinary golden section step. The full details of
+ Brent's method include some additional checks to improve
+ convergence.
+
+
+File: gsl-ref.info, Node: Minimization Examples, Next: Minimization References and Further Reading, Prev: Minimization Algorithms, Up: One dimensional Minimization
+
+33.8 Examples
+=============
+
+The following program uses the Brent algorithm to find the minimum of
+the function f(x) = \cos(x) + 1, which occurs at x = \pi. The starting
+interval is (0,6), with an initial guess for the minimum of 2.
+
+ #include <stdio.h>
+ #include <gsl/gsl_errno.h>
+ #include <gsl/gsl_math.h>
+ #include <gsl/gsl_min.h>
+
+ double fn1 (double x, void * params)
+ {
+ return cos(x) + 1.0;
+ }
+
+ int
+ main (void)
+ {
+ int status;
+ int iter = 0, max_iter = 100;
+ const gsl_min_fminimizer_type *T;
+ gsl_min_fminimizer *s;
+ double m = 2.0, m_expected = M_PI;
+ double a = 0.0, b = 6.0;
+ gsl_function F;
+
+ F.function = &fn1;
+ F.params = 0;
+
+ T = gsl_min_fminimizer_brent;
+ s = gsl_min_fminimizer_alloc (T);
+ gsl_min_fminimizer_set (s, &F, m, a, b);
+
+ printf ("using %s method\n",
+ gsl_min_fminimizer_name (s));
+
+ printf ("%5s [%9s, %9s] %9s %10s %9s\n",
+ "iter", "lower", "upper", "min",
+ "err", "err(est)");
+
+ printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
+ iter, a, b,
+ m, m - m_expected, b - a);
+
+ do
+ {
+ iter++;
+ status = gsl_min_fminimizer_iterate (s);
+
+ m = gsl_min_fminimizer_x_minimum (s);
+ a = gsl_min_fminimizer_x_lower (s);
+ b = gsl_min_fminimizer_x_upper (s);
+
+ status
+ = gsl_min_test_interval (a, b, 0.001, 0.0);
+
+ if (status == GSL_SUCCESS)
+ printf ("Converged:\n");
+
+ printf ("%5d [%.7f, %.7f] "
+ "%.7f %+.7f %.7f\n",
+ iter, a, b,
+ m, m - m_expected, b - a);
+ }
+ while (status == GSL_CONTINUE && iter < max_iter);
+
+ gsl_min_fminimizer_free (s);
+
+ return status;
+ }
+
+Here are the results of the minimization procedure.
+
+ $ ./a.out
+ 0 [0.0000000, 6.0000000] 2.0000000 -1.1415927 6.0000000
+ 1 [2.0000000, 6.0000000] 3.2758640 +0.1342713 4.0000000
+ 2 [2.0000000, 3.2831929] 3.2758640 +0.1342713 1.2831929
+ 3 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
+ 4 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
+ 5 [2.8689068, 3.2758640] 3.1460585 +0.0044658 0.4069572
+ 6 [3.1346075, 3.2758640] 3.1460585 +0.0044658 0.1412565
+ 7 [3.1346075, 3.1874620] 3.1460585 +0.0044658 0.0528545
+ 8 [3.1346075, 3.1460585] 3.1460585 +0.0044658 0.0114510
+ 9 [3.1346075, 3.1460585] 3.1424060 +0.0008133 0.0114510
+ 10 [3.1346075, 3.1424060] 3.1415885 -0.0000041 0.0077985
+ Converged:
+ 11 [3.1415885, 3.1424060] 3.1415927 -0.0000000 0.0008175
+
+
+File: gsl-ref.info, Node: Minimization References and Further Reading, Prev: Minimization Examples, Up: One dimensional Minimization
+
+33.9 References and Further Reading
+===================================
+
+Further information on Brent's algorithm is available in the following
+book,
+
+ Richard Brent, `Algorithms for minimization without derivatives',
+ Prentice-Hall (1973), republished by Dover in paperback (2002),
+ ISBN 0-486-41998-3.
+
+
+File: gsl-ref.info, Node: Multidimensional Root-Finding, Next: Multidimensional Minimization, Prev: One dimensional Minimization, Up: Top
+
+34 Multidimensional Root-Finding
+********************************
+
+This chapter describes functions for multidimensional root-finding
+(solving nonlinear systems with n equations in n unknowns). The
+library provides low level components for a variety of iterative
+solvers and convergence tests. These can be combined by the user to
+achieve the desired solution, with full access to the intermediate
+steps of the iteration. Each class of methods uses the same framework,
+so that you can switch between solvers at runtime without needing to
+recompile your program. Each instance of a solver keeps track of its
+own state, allowing the solvers to be used in multi-threaded programs.
+The solvers are based on the original Fortran library MINPACK.
+
+ The header file `gsl_multiroots.h' contains prototypes for the
+multidimensional root finding functions and related declarations.
+
+* Menu:
+
+* Overview of Multidimensional Root Finding::
+* Initializing the Multidimensional Solver::
+* Providing the multidimensional system of equations to solve::
+* Iteration of the multidimensional solver::
+* Search Stopping Parameters for the multidimensional solver::
+* Algorithms using Derivatives::
+* Algorithms without Derivatives::
+* Example programs for Multidimensional Root finding::
+* References and Further Reading for Multidimensional Root Finding::
+
+
+File: gsl-ref.info, Node: Overview of Multidimensional Root Finding, Next: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
+
+34.1 Overview
+=============
+
+The problem of multidimensional root finding requires the simultaneous
+solution of n equations, f_i, in n variables, x_i,
+
+ f_i (x_1, ..., x_n) = 0 for i = 1 ... n.
+
+In general there are no bracketing methods available for n dimensional
+systems, and no way of knowing whether any solutions exist. All
+algorithms proceed from an initial guess using a variant of the Newton
+iteration,
+
+ x -> x' = x - J^{-1} f(x)
+
+where x, f are vector quantities and J is the Jacobian matrix J_{ij} =
+d f_i / d x_j. Additional strategies can be used to enlarge the region
+of convergence. These include requiring a decrease in the norm |f| on
+each step proposed by Newton's method, or taking steepest-descent steps
+in the direction of the negative gradient of |f|.
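+The Newton update above can be written out directly for a small
+system.  The following sketch (plain C, not GSL code) applies the
+iteration x -> x - J^{-1} f(x) to the 2x2 system f_1 = x^2 + y^2 - 1,
+f_2 = x - y, inverting the Jacobian in closed form:

```c
#include <math.h>

/* One Newton step x -> x - J^{-1} f(x) for a 2x2 system (sketch,
   not GSL code): intersection of the unit circle with the line x = y. */
static void newton_step (double *x, double *y)
{
  double f1 = (*x) * (*x) + (*y) * (*y) - 1.0;  /* circle */
  double f2 = (*x) - (*y);                       /* line   */
  double a = 2.0 * (*x), b = 2.0 * (*y);         /* J = [[a, b], [c, d]] */
  double c = 1.0, d = -1.0;
  double det = a * d - b * c;
  /* step = -J^{-1} f, with J^{-1} = [[d, -b], [-c, a]] / det */
  *x += -( d * f1 - b * f2) / det;
  *y += -(-c * f1 + a * f2) / det;
}
```

+Starting from (1, 0.5), a handful of these steps drives (x, y) to
+(1/sqrt(2), 1/sqrt(2)); once close to the root the convergence is
+quadratic, as described below for the `newton' solver.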
+
+ Several root-finding algorithms are available within a single
+framework. The user provides a high-level driver for the algorithms,
+and the library provides the individual functions necessary for each of
+the steps. There are three main phases of the iteration. The steps
+are,
+
+ * initialize solver state, S, for algorithm T
+
+ * update S using the iteration T
+
+ * test S for convergence, and repeat iteration if necessary
+
+The evaluation of the Jacobian matrix can be problematic, either because
+programming the derivatives is intractable or because computation of the
+n^2 terms of the matrix becomes too expensive. For these reasons the
+algorithms provided by the library are divided into two classes
+according to whether the derivatives are available or not.
+
+ The state for solvers with an analytic Jacobian matrix is held in a
+`gsl_multiroot_fdfsolver' struct. The updating procedure requires both
+the function and its derivatives to be supplied by the user.
+
+ The state for solvers which do not use an analytic Jacobian matrix is
+held in a `gsl_multiroot_fsolver' struct. The updating procedure uses
+only function evaluations (not derivatives). The algorithms estimate
+the matrix J or J^{-1} by approximate methods.
+
+
+File: gsl-ref.info, Node: Initializing the Multidimensional Solver, Next: Providing the multidimensional system of equations to solve, Prev: Overview of Multidimensional Root Finding, Up: Multidimensional Root-Finding
+
+34.2 Initializing the Solver
+============================
+
+The following functions initialize a multidimensional solver, either
+with or without derivatives. The solver itself depends only on the
+dimension of the problem and the algorithm and can be reused for
+different problems.
+
+ -- Function: gsl_multiroot_fsolver * gsl_multiroot_fsolver_alloc
+ (const gsl_multiroot_fsolver_type * T, size_t N)
+ This function returns a pointer to a newly allocated instance of a
+ solver of type T for a system of N dimensions. For example, the
+ following code creates an instance of a hybrid solver, to solve a
+ 3-dimensional system of equations.
+
+ const gsl_multiroot_fsolver_type * T
+ = gsl_multiroot_fsolver_hybrid;
+ gsl_multiroot_fsolver * s
+ = gsl_multiroot_fsolver_alloc (T, 3);
+
+ If there is insufficient memory to create the solver then the
+ function returns a null pointer and the error handler is invoked
+ with an error code of `GSL_ENOMEM'.
+
+ -- Function: gsl_multiroot_fdfsolver * gsl_multiroot_fdfsolver_alloc
+ (const gsl_multiroot_fdfsolver_type * T, size_t N)
+ This function returns a pointer to a newly allocated instance of a
+ derivative solver of type T for a system of N dimensions. For
+ example, the following code creates an instance of a
+ Newton-Raphson solver, for a 2-dimensional system of equations.
+
+ const gsl_multiroot_fdfsolver_type * T
+ = gsl_multiroot_fdfsolver_newton;
+ gsl_multiroot_fdfsolver * s =
+ gsl_multiroot_fdfsolver_alloc (T, 2);
+
+ If there is insufficient memory to create the solver then the
+ function returns a null pointer and the error handler is invoked
+ with an error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_multiroot_fsolver_set (gsl_multiroot_fsolver * S,
+ gsl_multiroot_function * F, gsl_vector * X)
+ This function sets, or resets, an existing solver S to use the
+ function F and the initial guess X.
+
+ -- Function: int gsl_multiroot_fdfsolver_set (gsl_multiroot_fdfsolver
+ * S, gsl_multiroot_function_fdf * FDF, gsl_vector * X)
+ This function sets, or resets, an existing solver S to use the
+ function and derivative FDF and the initial guess X.
+
+ -- Function: void gsl_multiroot_fsolver_free (gsl_multiroot_fsolver *
+ S)
+ -- Function: void gsl_multiroot_fdfsolver_free
+ (gsl_multiroot_fdfsolver * S)
+ These functions free all the memory associated with the solver S.
+
+ -- Function: const char * gsl_multiroot_fsolver_name (const
+ gsl_multiroot_fsolver * S)
+ -- Function: const char * gsl_multiroot_fdfsolver_name (const
+ gsl_multiroot_fdfsolver * S)
+ These functions return a pointer to the name of the solver. For
+ example,
+
+ printf ("s is a '%s' solver\n",
+ gsl_multiroot_fdfsolver_name (s));
+
+ would print something like `s is a 'newton' solver'.
+
+
+File: gsl-ref.info, Node: Providing the multidimensional system of equations to solve, Next: Iteration of the multidimensional solver, Prev: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
+
+34.3 Providing the function to solve
+====================================
+
+You must provide n functions of n variables for the root finders to
+operate on. In order to allow for general parameters the functions are
+defined by the following data types:
+
+ -- Data Type: gsl_multiroot_function
+ This data type defines a general system of functions with
+ parameters.
+
+ `int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
+ this function should store the vector result f(x,params) in F
+ for argument X and parameters PARAMS, returning an
+ appropriate error code if the function cannot be computed.
+
+ `size_t n'
+ the dimension of the system, i.e. the number of components of
+ the vectors X and F.
+
+ `void * params'
+ a pointer to the parameters of the function.
+
+Here is an example using Powell's test function,
+
+ f_1(x) = A x_0 x_1 - 1,
+ f_2(x) = exp(-x_0) + exp(-x_1) - (1 + 1/A)
+
+with A = 10^4. The following code defines a `gsl_multiroot_function'
+system `F' which you could pass to a solver:
+
+ struct powell_params { double A; };
+
+ int
+   powell (const gsl_vector * x, void * p, gsl_vector * f) {
+      struct powell_params * params
+        = (struct powell_params *)p;
+      const double A = (params->A);
+      const double x0 = gsl_vector_get(x,0);
+      const double x1 = gsl_vector_get(x,1);
+
+      gsl_vector_set (f, 0, A * x0 * x1 - 1);
+      gsl_vector_set (f, 1, (exp(-x0) + exp(-x1)
+                             - (1.0 + 1.0/A)));
+      return GSL_SUCCESS;
+   }
+
+ gsl_multiroot_function F;
+ struct powell_params params = { 10000.0 };
+
+ F.f = &powell;
+ F.n = 2;
+ F.params = &params;
+
+ -- Data Type: gsl_multiroot_function_fdf
+ This data type defines a general system of functions with
+ parameters and the corresponding Jacobian matrix of derivatives,
+
+ `int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
+ this function should store the vector result f(x,params) in F
+ for argument X and parameters PARAMS, returning an
+ appropriate error code if the function cannot be computed.
+
+ `int (* df) (const gsl_vector * X, void * PARAMS, gsl_matrix * J)'
+ this function should store the N-by-N matrix result J_ij = d
+ f_i(x,params) / d x_j in J for argument X and parameters
+ PARAMS, returning an appropriate error code if the function
+ cannot be computed.
+
+ `int (* fdf) (const gsl_vector * X, void * PARAMS, gsl_vector * F, gsl_matrix * J)'
+ This function should set the values of the F and J as above,
+ for arguments X and parameters PARAMS. This function
+ provides an optimization of the separate functions for f(x)
+ and J(x)--it is always faster to compute the function and its
+ derivative at the same time.
+
+ `size_t n'
+ the dimension of the system, i.e. the number of components of
+ the vectors X and F.
+
+ `void * params'
+ a pointer to the parameters of the function.
+
+The example of Powell's test function defined above can be extended to
+include analytic derivatives using the following code,
+
+ int
+   powell_df (const gsl_vector * x, void * p, gsl_matrix * J)
+   {
+      struct powell_params * params
+        = (struct powell_params *)p;
+      const double A = (params->A);
+      const double x0 = gsl_vector_get(x,0);
+      const double x1 = gsl_vector_get(x,1);
+      gsl_matrix_set (J, 0, 0, A * x1);
+      gsl_matrix_set (J, 0, 1, A * x0);
+      gsl_matrix_set (J, 1, 0, -exp(-x0));
+      gsl_matrix_set (J, 1, 1, -exp(-x1));
+      return GSL_SUCCESS;
+   }
+
+ int
+   powell_fdf (const gsl_vector * x, void * p,
+               gsl_vector * f, gsl_matrix * J) {
+      struct powell_params * params
+        = (struct powell_params *)p;
+      const double A = (params->A);
+      const double x0 = gsl_vector_get(x,0);
+      const double x1 = gsl_vector_get(x,1);
+
+      const double u0 = exp(-x0);
+      const double u1 = exp(-x1);
+
+      gsl_vector_set (f, 0, A * x0 * x1 - 1);
+      gsl_vector_set (f, 1, u0 + u1 - (1 + 1/A));
+
+      gsl_matrix_set (J, 0, 0, A * x1);
+      gsl_matrix_set (J, 0, 1, A * x0);
+      gsl_matrix_set (J, 1, 0, -u0);
+      gsl_matrix_set (J, 1, 1, -u1);
+      return GSL_SUCCESS;
+   }
+
+     gsl_multiroot_function_fdf FDF;
+     struct powell_params params = { 10000.0 };
+
+     FDF.f = &powell;
+     FDF.df = &powell_df;
+     FDF.fdf = &powell_fdf;
+     FDF.n = 2;
+     FDF.params = &params;
+
+Note that the function `powell_fdf' is able to reuse existing terms
+from the function when calculating the Jacobian, thus saving time.
+
+
+File: gsl-ref.info, Node: Iteration of the multidimensional solver, Next: Search Stopping Parameters for the multidimensional solver, Prev: Providing the multidimensional system of equations to solve, Up: Multidimensional Root-Finding
+
+34.4 Iteration
+==============
+
+The following functions drive the iteration of each algorithm. Each
+function performs one iteration to update the state of any solver of the
+corresponding type. The same functions work for all solvers so that
+different methods can be substituted at runtime without modifications to
+the code.
+
+ -- Function: int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver
+ * S)
+ -- Function: int gsl_multiroot_fdfsolver_iterate
+ (gsl_multiroot_fdfsolver * S)
+ These functions perform a single iteration of the solver S. If the
+ iteration encounters an unexpected problem then an error code will
+ be returned,
+
+ `GSL_EBADFUNC'
+ the iteration encountered a singular point where the function
+ or its derivative evaluated to `Inf' or `NaN'.
+
+ `GSL_ENOPROG'
+ the iteration is not making any progress, preventing the
+ algorithm from continuing.
+
+ The solver maintains a current best estimate of the root at all
+times. This information can be accessed with the following auxiliary
+functions,
+
+ -- Function: gsl_vector * gsl_multiroot_fsolver_root (const
+ gsl_multiroot_fsolver * S)
+ -- Function: gsl_vector * gsl_multiroot_fdfsolver_root (const
+ gsl_multiroot_fdfsolver * S)
+ These functions return the current estimate of the root for the
+ solver S.
+
+ -- Function: gsl_vector * gsl_multiroot_fsolver_f (const
+ gsl_multiroot_fsolver * S)
+ -- Function: gsl_vector * gsl_multiroot_fdfsolver_f (const
+ gsl_multiroot_fdfsolver * S)
+ These functions return the function value f(x) at the current
+ estimate of the root for the solver S.
+
+ -- Function: gsl_vector * gsl_multiroot_fsolver_dx (const
+ gsl_multiroot_fsolver * S)
+ -- Function: gsl_vector * gsl_multiroot_fdfsolver_dx (const
+ gsl_multiroot_fdfsolver * S)
+ These functions return the last step dx taken by the solver S.
+
+
+File: gsl-ref.info, Node: Search Stopping Parameters for the multidimensional solver, Next: Algorithms using Derivatives, Prev: Iteration of the multidimensional solver, Up: Multidimensional Root-Finding
+
+34.5 Search Stopping Parameters
+===============================
+
+A root finding procedure should stop when one of the following
+conditions is true:
+
+ * A multidimensional root has been found to within the
+ user-specified precision.
+
+ * A user-specified maximum number of iterations has been reached.
+
+ * An error has occurred.
+
+The handling of these conditions is under user control. The functions
+below allow the user to test the precision of the current result in
+several standard ways.
+
+ -- Function: int gsl_multiroot_test_delta (const gsl_vector * DX,
+ const gsl_vector * X, double EPSABS, double EPSREL)
+ This function tests for the convergence of the sequence by
+ comparing the last step DX with the absolute error EPSABS and
+ relative error EPSREL to the current position X. The test returns
+ `GSL_SUCCESS' if the following condition is achieved,
+
+ |dx_i| < epsabs + epsrel |x_i|
+
+ for each component of X and returns `GSL_CONTINUE' otherwise.
+
+ -- Function: int gsl_multiroot_test_residual (const gsl_vector * F,
+ double EPSABS)
+ This function tests the residual value F against the absolute
+ error bound EPSABS. The test returns `GSL_SUCCESS' if the
+ following condition is achieved,
+
+ \sum_i |f_i| < epsabs
+
+ and returns `GSL_CONTINUE' otherwise. This criterion is suitable
+ for situations where the precise location of the root, x, is
+ unimportant provided a value can be found where the residual is
+ small enough.
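+The two criteria can also be written out directly.  The following
+stand-alone sketch (hypothetical helpers operating on plain C arrays,
+not the GSL functions themselves) shows the tests that the functions
+above perform; each returns 1 for "converged" and 0 for "continue":

```c
#include <math.h>
#include <stddef.h>

/* |dx_i| < epsabs + epsrel |x_i| must hold for every component. */
static int test_delta (const double dx[], const double x[], size_t n,
                       double epsabs, double epsrel)
{
  for (size_t i = 0; i < n; i++)
    if (fabs (dx[i]) >= epsabs + epsrel * fabs (x[i]))
      return 0;
  return 1;
}

/* \sum_i |f_i| < epsabs for the residual vector f. */
static int test_residual (const double f[], size_t n, double epsabs)
{
  double sum = 0.0;
  for (size_t i = 0; i < n; i++)
    sum += fabs (f[i]);
  return sum < epsabs;
}
```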
+
+
+File: gsl-ref.info, Node: Algorithms using Derivatives, Next: Algorithms without Derivatives, Prev: Search Stopping Parameters for the multidimensional solver, Up: Multidimensional Root-Finding
+
+34.6 Algorithms using Derivatives
+=================================
+
+The root finding algorithms described in this section make use of both
+the function and its derivative. They require an initial guess for the
+location of the root, but there is no absolute guarantee of
+convergence--the function must be suitable for this technique and the
+initial guess must be sufficiently close to the root for it to work.
+When the conditions are satisfied then convergence is quadratic.
+
+ -- Derivative Solver: gsl_multiroot_fdfsolver_hybridsj
+ This is a modified version of Powell's Hybrid method as
+ implemented in the HYBRJ algorithm in MINPACK. Minpack was
+ written by Jorge J. More', Burton S. Garbow and Kenneth E.
+ Hillstrom. The Hybrid algorithm retains the fast convergence of
+ Newton's method but will also reduce the residual when Newton's
+ method is unreliable.
+
+ The algorithm uses a generalized trust region to keep each step
+ under control. In order to be accepted a proposed new position x'
+ must satisfy the condition |D (x' - x)| < \delta, where D is a
+ diagonal scaling matrix and \delta is the size of the trust
+ region. The components of D are computed internally, using the
+ column norms of the Jacobian to estimate the sensitivity of the
+ residual to each component of x. This improves the behavior of the
+ algorithm for badly scaled functions.
+
+ On each iteration the algorithm first determines the standard
+ Newton step by solving the system J dx = - f. If this step falls
+ inside the trust region it is used as a trial step in the next
+ stage. If not, the algorithm uses the linear combination of the
+ Newton and gradient directions which is predicted to minimize the
+ norm of the function while staying inside the trust region,
+
+ dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2.
+
+ This combination of Newton and gradient directions is referred to
+ as a "dogleg step".
+
+ The proposed step is now tested by evaluating the function at the
+ resulting point, x'. If the step reduces the norm of the function
+     sufficiently then it is accepted and the size of the trust region
+     is increased.  If the proposed step fails to improve the solution
+ then the size of the trust region is decreased and another trial
+ step is computed.
+
+ The speed of the algorithm is increased by computing the changes
+ to the Jacobian approximately, using a rank-1 update. If two
+ successive attempts fail to reduce the residual then the full
+ Jacobian is recomputed. The algorithm also monitors the progress
+ of the solution and returns an error if several steps fail to make
+ any improvement,
+
+ `GSL_ENOPROG'
+ the iteration is not making any progress, preventing the
+ algorithm from continuing.
+
+ `GSL_ENOPROGJ'
+ re-evaluations of the Jacobian indicate that the iteration is
+ not making any progress, preventing the algorithm from
+ continuing.
+
+
+ -- Derivative Solver: gsl_multiroot_fdfsolver_hybridj
+ This algorithm is an unscaled version of `hybridsj'. The steps are
+ controlled by a spherical trust region |x' - x| < \delta, instead
+ of a generalized region. This can be useful if the generalized
+ region estimated by `hybridsj' is inappropriate.
+
+ -- Derivative Solver: gsl_multiroot_fdfsolver_newton
+ Newton's Method is the standard root-polishing algorithm. The
+ algorithm begins with an initial guess for the location of the
+ solution. On each iteration a linear approximation to the
+ function F is used to estimate the step which will zero all the
+ components of the residual. The iteration is defined by the
+ following sequence,
+
+ x -> x' = x - J^{-1} f(x)
+
+ where the Jacobian matrix J is computed from the derivative
+ functions provided by F. The step dx is obtained by solving the
+ linear system,
+
+ J dx = - f(x)
+
+ using LU decomposition.
+
+ -- Derivative Solver: gsl_multiroot_fdfsolver_gnewton
+ This is a modified version of Newton's method which attempts to
+ improve global convergence by requiring every step to reduce the
+ Euclidean norm of the residual, |f(x)|. If the Newton step leads
+ to an increase in the norm then a reduced step of relative size,
+
+ t = (\sqrt(1 + 6 r) - 1) / (3 r)
+
+ is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2.
+ This procedure is repeated until a suitable step size is found.
+
+
+File: gsl-ref.info, Node: Algorithms without Derivatives, Next: Example programs for Multidimensional Root finding, Prev: Algorithms using Derivatives, Up: Multidimensional Root-Finding
+
+34.7 Algorithms without Derivatives
+===================================
+
+The algorithms described in this section do not require any derivative
+information to be supplied by the user. Any derivatives needed are
+approximated by finite differences. Note that if the
+finite-differencing step size chosen by these routines is inappropriate,
+an explicit user-supplied numerical derivative can always be used with
+the algorithms described in the previous section.
+
+ -- Solver: gsl_multiroot_fsolver_hybrids
+ This is a version of the Hybrid algorithm which replaces calls to
+ the Jacobian function by its finite difference approximation. The
+ finite difference approximation is computed using
+ `gsl_multiroots_fdjac' with a relative step size of
+ `GSL_SQRT_DBL_EPSILON'. Note that this step size will not be
+ suitable for all problems.
+
+ -- Solver: gsl_multiroot_fsolver_hybrid
+ This is a finite difference version of the Hybrid algorithm without
+ internal scaling.
+
+ -- Solver: gsl_multiroot_fsolver_dnewton
+ The "discrete Newton algorithm" is the simplest method of solving a
+ multidimensional system. It uses the Newton iteration
+
+ x -> x - J^{-1} f(x)
+
+ where the Jacobian matrix J is approximated by taking finite
+ differences of the function F. The approximation scheme used by
+ this implementation is,
+
+ J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
+
+ where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon
+ being the machine precision (\epsilon \approx 2.22 \times 10^-16).
+ The order of convergence of Newton's algorithm is quadratic, but
+ the finite differences require n^2 function evaluations on each
+ iteration. The algorithm may become unstable if the finite
+ differences are not a good approximation to the true derivatives.
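+The finite-difference scheme above can be sketched in plain C for a
+2x2 system.  The helper names here are hypothetical and this is not
+the GSL internal routine; a small guard is added for the case x_j = 0,
+where the scaled step would vanish:

```c
#include <math.h>
#include <float.h>

typedef void (*system_fn) (const double x[2], double f[2]);

/* Forward-difference Jacobian approximation,
   J_ij = (f_i(x + delta_j e_j) - f_i(x)) / delta_j,
   with delta_j = sqrt(DBL_EPSILON) * |x_j|. */
static void fd_jacobian (system_fn fn, const double x[2], double J[2][2])
{
  double f0[2], f1[2];
  fn (x, f0);
  for (int j = 0; j < 2; j++)
    {
      double delta = sqrt (DBL_EPSILON) * fabs (x[j]);
      if (delta == 0.0)
        delta = sqrt (DBL_EPSILON);   /* guard against x_j = 0 */
      double xp[2] = { x[0], x[1] };
      xp[j] += delta;
      fn (xp, f1);
      for (int i = 0; i < 2; i++)
        J[i][j] = (f1[i] - f0[i]) / delta;
    }
}

/* Example system used for checking: f_0 = x_0 x_1, f_1 = x_0 + x_1,
   whose exact Jacobian at (x_0, x_1) is [[x_1, x_0], [1, 1]]. */
static void example_fn (const double x[2], double f[2])
{
  f[0] = x[0] * x[1];
  f[1] = x[0] + x[1];
}
```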
+
+ -- Solver: gsl_multiroot_fsolver_broyden
+ The "Broyden algorithm" is a version of the discrete Newton
+     algorithm which attempts to avoid the expensive update of the
+ Jacobian matrix on each iteration. The changes to the Jacobian
+ are also approximated, using a rank-1 update,
+
+          J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / (dx^T J^{-1} df)
+
+ where the vectors dx and df are the changes in x and f. On the
+ first iteration the inverse Jacobian is estimated using finite
+ differences, as in the discrete Newton algorithm.
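+Written out for a 2x2 system, this rank-1 correction has the
+Sherman-Morrison form.  The following is an illustrative sketch, not
+the GSL source; note that after the update the new inverse satisfies
+the secant condition J^{-1} df = dx:

```c
/* Rank-1 update of the inverse Jacobian used by the Broyden solver,
   Jinv <- Jinv - (Jinv df - dx) dx^T Jinv / (dx^T Jinv df),
   written out for a 2x2 system (sketch only). */
static void broyden_update (double Jinv[2][2],
                            const double dx[2], const double df[2])
{
  double u[2], v[2], denom;

  for (int i = 0; i < 2; i++)            /* u = Jinv df - dx */
    u[i] = Jinv[i][0] * df[0] + Jinv[i][1] * df[1] - dx[i];
  for (int j = 0; j < 2; j++)            /* v^T = dx^T Jinv */
    v[j] = dx[0] * Jinv[0][j] + dx[1] * Jinv[1][j];
  denom = v[0] * df[0] + v[1] * df[1];   /* dx^T Jinv df */

  for (int i = 0; i < 2; i++)
    for (int j = 0; j < 2; j++)
      Jinv[i][j] -= u[i] * v[j] / denom;
}
```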
+
+ This approximation gives a fast update but is unreliable if the
+ changes are not small, and the estimate of the inverse Jacobian
+ becomes worse as time passes. The algorithm has a tendency to
+ become unstable unless it starts close to the root. The Jacobian
+ is refreshed if this instability is detected (consult the source
+ for details).
+
+ This algorithm is included only for demonstration purposes, and is
+ not recommended for serious use.
+
+
+File: gsl-ref.info, Node: Example programs for Multidimensional Root finding, Next: References and Further Reading for Multidimensional Root Finding, Prev: Algorithms without Derivatives, Up: Multidimensional Root-Finding
+
+34.8 Examples
+=============
+
+The multidimensional solvers are used in a similar way to the
+one-dimensional root finding algorithms. This first example
+demonstrates the `hybrids' scaled-hybrid algorithm, which does not
+require derivatives. The program solves the Rosenbrock system of
+equations,
+
+ f_1 (x, y) = a (1 - x)
+ f_2 (x, y) = b (y - x^2)
+
+with a = 1, b = 10. The solution of this system lies at (x,y) = (1,1)
+in a narrow valley.
+
+ The first stage of the program is to define the system of equations,
+
+ #include <stdlib.h>
+ #include <stdio.h>
+ #include <gsl/gsl_vector.h>
+ #include <gsl/gsl_multiroots.h>
+
+ struct rparams
+ {
+ double a;
+ double b;
+ };
+
+ int
+ rosenbrock_f (const gsl_vector * x, void *params,
+ gsl_vector * f)
+ {
+ double a = ((struct rparams *) params)->a;
+ double b = ((struct rparams *) params)->b;
+
+ const double x0 = gsl_vector_get (x, 0);
+ const double x1 = gsl_vector_get (x, 1);
+
+ const double y0 = a * (1 - x0);
+ const double y1 = b * (x1 - x0 * x0);
+
+ gsl_vector_set (f, 0, y0);
+ gsl_vector_set (f, 1, y1);
+
+ return GSL_SUCCESS;
+ }
+
+The main program begins by creating the function object `f', with the
+arguments `(x,y)' and parameters `(a,b)'. The solver `s' is initialized
+to use this function, with the `hybrids' method.
+
+ int
+ main (void)
+ {
+ const gsl_multiroot_fsolver_type *T;
+ gsl_multiroot_fsolver *s;
+
+ int status;
+     size_t iter = 0;
+
+ const size_t n = 2;
+ struct rparams p = {1.0, 10.0};
+ gsl_multiroot_function f = {&rosenbrock_f, n, &p};
+
+ double x_init[2] = {-10.0, -5.0};
+ gsl_vector *x = gsl_vector_alloc (n);
+
+ gsl_vector_set (x, 0, x_init[0]);
+ gsl_vector_set (x, 1, x_init[1]);
+
+ T = gsl_multiroot_fsolver_hybrids;
+ s = gsl_multiroot_fsolver_alloc (T, 2);
+ gsl_multiroot_fsolver_set (s, &f, x);
+
+ print_state (iter, s);
+
+ do
+ {
+ iter++;
+ status = gsl_multiroot_fsolver_iterate (s);
+
+ print_state (iter, s);
+
+ if (status) /* check if solver is stuck */
+ break;
+
+ status =
+ gsl_multiroot_test_residual (s->f, 1e-7);
+ }
+ while (status == GSL_CONTINUE && iter < 1000);
+
+ printf ("status = %s\n", gsl_strerror (status));
+
+ gsl_multiroot_fsolver_free (s);
+ gsl_vector_free (x);
+ return 0;
+ }
+
+Note that it is important to check the return status of each solver
+step, in case the algorithm becomes stuck. If an error condition is
+detected, indicating that the algorithm cannot proceed, then the error
+can be reported to the user, a new starting point chosen or a different
+algorithm used.
+
+ The intermediate state of the solution is displayed by the following
+function. The solver state contains the vector `s->x' which is the
+current position, and the vector `s->f' with corresponding function
+values.
+
+   int
+   print_state (size_t iter, gsl_multiroot_fsolver * s)
+   {
+     printf ("iter = %3u x = % .3f % .3f "
+             "f(x) = % .3e % .3e\n",
+             (unsigned int) iter,
+             gsl_vector_get (s->x, 0),
+             gsl_vector_get (s->x, 1),
+             gsl_vector_get (s->f, 0),
+             gsl_vector_get (s->f, 1));
+     return 0;
+   }
+
+Here are the results of running the program.  The algorithm is started
+at (-10,-5), far from the solution.  Since the solution is hidden in a
+narrow valley the earliest steps follow the gradient of the function
+downhill, in an attempt to reduce the large value of the residual. Once
+the root has been approximately located, on iteration 8, the Newton
+behavior takes over and convergence is very rapid.
+
+ iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
+ iter = 1 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
+ iter = 2 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
+ iter = 3 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
+ iter = 4 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
+ iter = 5 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
+ iter = 6 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
+ iter = 7 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
+ iter = 8 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
+ iter = 9 x = 1.000 0.878 f(x) = 1.268e-10 -1.218e+00
+ iter = 10 x = 1.000 0.989 f(x) = 1.124e-11 -1.080e-01
+ iter = 11 x = 1.000 1.000 f(x) = 0.000e+00 0.000e+00
+ status = success
+
+Note that the algorithm does not update the location on every
+iteration. Some iterations are used to adjust the trust-region
+parameter, after trying a step which was found to be divergent, or to
+recompute the Jacobian, when poor convergence behavior is detected.
+
+ The next example program adds derivative information, in order to
+accelerate the solution. There are two derivative functions
+`rosenbrock_df' and `rosenbrock_fdf'. The latter computes both the
+function and its derivative simultaneously. This allows the
+optimization of any common terms. For simplicity we substitute calls to
+the separate `f' and `df' functions at this point in the code below.
+
+ int
+ rosenbrock_df (const gsl_vector * x, void *params,
+ gsl_matrix * J)
+ {
+ const double a = ((struct rparams *) params)->a;
+ const double b = ((struct rparams *) params)->b;
+
+ const double x0 = gsl_vector_get (x, 0);
+
+ const double df00 = -a;
+ const double df01 = 0;
+ const double df10 = -2 * b * x0;
+ const double df11 = b;
+
+ gsl_matrix_set (J, 0, 0, df00);
+ gsl_matrix_set (J, 0, 1, df01);
+ gsl_matrix_set (J, 1, 0, df10);
+ gsl_matrix_set (J, 1, 1, df11);
+
+ return GSL_SUCCESS;
+ }
+
+ int
+ rosenbrock_fdf (const gsl_vector * x, void *params,
+ gsl_vector * f, gsl_matrix * J)
+ {
+ rosenbrock_f (x, params, f);
+ rosenbrock_df (x, params, J);
+
+ return GSL_SUCCESS;
+ }
+
+The main program now makes calls to the corresponding `fdfsolver'
+versions of the functions,
+
+ int
+ main (void)
+ {
+ const gsl_multiroot_fdfsolver_type *T;
+ gsl_multiroot_fdfsolver *s;
+
+ int status;
+     size_t iter = 0;
+
+ const size_t n = 2;
+ struct rparams p = {1.0, 10.0};
+ gsl_multiroot_function_fdf f = {&rosenbrock_f,
+ &rosenbrock_df,
+ &rosenbrock_fdf,
+ n, &p};
+
+ double x_init[2] = {-10.0, -5.0};
+ gsl_vector *x = gsl_vector_alloc (n);
+
+ gsl_vector_set (x, 0, x_init[0]);
+ gsl_vector_set (x, 1, x_init[1]);
+
+ T = gsl_multiroot_fdfsolver_gnewton;
+ s = gsl_multiroot_fdfsolver_alloc (T, n);
+ gsl_multiroot_fdfsolver_set (s, &f, x);
+
+ print_state (iter, s);
+
+ do
+ {
+ iter++;
+
+ status = gsl_multiroot_fdfsolver_iterate (s);
+
+ print_state (iter, s);
+
+ if (status)
+ break;
+
+ status = gsl_multiroot_test_residual (s->f, 1e-7);
+ }
+ while (status == GSL_CONTINUE && iter < 1000);
+
+ printf ("status = %s\n", gsl_strerror (status));
+
+ gsl_multiroot_fdfsolver_free (s);
+ gsl_vector_free (x);
+ return 0;
+ }
+
+The addition of derivative information to the `hybrids' solver does not
+make any significant difference to its behavior, since it is able to
+approximate the Jacobian numerically with sufficient accuracy. To
+illustrate the behavior of a different derivative solver we switch to
+`gnewton'. This is a traditional Newton solver with the constraint that
+it scales back its step if the full step would lead "uphill". Here is
+the output for the `gnewton' algorithm,
+
+ iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
+ iter = 1 x = -4.231 -65.317 f(x) = 5.231e+00 -8.321e+02
+ iter = 2 x = 1.000 -26.358 f(x) = -8.882e-16 -2.736e+02
+ iter = 3 x = 1.000 1.000 f(x) = -2.220e-16 -4.441e-15
+ status = success
+
+The convergence is much more rapid, but the iteration takes a wide
+excursion out to the point (-4.23,-65.3).  This could cause the
+algorithm to go astray in a realistic application.  The hybrid
+algorithm follows the downhill path to the solution more reliably.
+
+
+File: gsl-ref.info, Node: References and Further Reading for Multidimensional Root Finding, Prev: Example programs for Multidimensional Root finding, Up: Multidimensional Root-Finding
+
+34.9 References and Further Reading
+===================================
+
+The original version of the Hybrid method is described in the following
+articles by Powell,
+
+ M.J.D. Powell, "A Hybrid Method for Nonlinear Equations" (Chap 6, p
+ 87-114) and "A Fortran Subroutine for Solving systems of Nonlinear
+ Algebraic Equations" (Chap 7, p 115-161), in `Numerical Methods for
+ Nonlinear Algebraic Equations', P. Rabinowitz, editor. Gordon and
+ Breach, 1970.
+
+The following papers are also relevant to the algorithms described in
+this section,
+
+ J.J. More', M.Y. Cosnard, "Numerical Solution of Nonlinear
+ Equations", `ACM Transactions on Mathematical Software', Vol 5, No
+ 1, (1979), p 64-85
+
+ C.G. Broyden, "A Class of Methods for Solving Nonlinear
+ Simultaneous Equations", `Mathematics of Computation', Vol 19
+ (1965), p 577-593
+
+ J.J. More', B.S. Garbow, K.E. Hillstrom, "Testing Unconstrained
+ Optimization Software", ACM Transactions on Mathematical Software,
+ Vol 7, No 1 (1981), p 17-41
+
+
+File: gsl-ref.info, Node: Multidimensional Minimization, Next: Least-Squares Fitting, Prev: Multidimensional Root-Finding, Up: Top
+
+35 Multidimensional Minimization
+********************************
+
+This chapter describes routines for finding minima of arbitrary
+multidimensional functions. The library provides low level components
+for a variety of iterative minimizers and convergence tests. These can
+be combined by the user to achieve the desired solution, while providing
+full access to the intermediate steps of the algorithms. Each class of
+methods uses the same framework, so that you can switch between
+minimizers at runtime without needing to recompile your program. Each
+instance of a minimizer keeps track of its own state, allowing the
+minimizers to be used in multi-threaded programs. The minimization
+algorithms can be used to maximize a function by inverting its sign.
+
+ The header file `gsl_multimin.h' contains prototypes for the
+minimization functions and related declarations.
+
+* Menu:
+
+* Multimin Overview::
+* Multimin Caveats::
+* Initializing the Multidimensional Minimizer::
+* Providing a function to minimize::
+* Multimin Iteration::
+* Multimin Stopping Criteria::
+* Multimin Algorithms::
+* Multimin Examples::
+* Multimin References and Further Reading::
+
+
+File: gsl-ref.info, Node: Multimin Overview, Next: Multimin Caveats, Up: Multidimensional Minimization
+
+35.1 Overview
+=============
+
+The problem of multidimensional minimization requires finding a point x
+such that the scalar function,
+
+ f(x_1, ..., x_n)
+
+takes a value which is lower than at any neighboring point. For smooth
+functions the gradient g = \nabla f vanishes at the minimum. In general
+there are no bracketing methods available for the minimization of
+n-dimensional functions. The algorithms proceed from an initial guess
+using a search algorithm which attempts to move in a downhill direction.
+
+ Algorithms making use of the gradient of the function perform a
+one-dimensional line minimisation along this direction until the lowest
+point is found to a suitable tolerance. The search direction is then
+updated with local information from the function and its derivatives,
+and the whole process repeated until the true n-dimensional minimum is
+found.
+
+ The Nelder-Mead Simplex algorithm applies a different strategy. It
+maintains n+1 trial parameter vectors as the vertices of an
+n-dimensional simplex.  In each iteration step it tries to improve the
+worst vertex by a simple geometrical transformation until the size of
+the simplex falls below a given tolerance.
+
+ Both types of algorithms use a standard framework. The user provides
+a high-level driver for the algorithms, and the library provides the
+individual functions necessary for each of the steps. There are three
+main phases of the iteration. The steps are,
+
+ * initialize minimizer state, S, for algorithm T
+
+ * update S using the iteration T
+
+ * test S for convergence, and repeat iteration if necessary
+
+Each iteration step consists either of an improvement to the
+line-minimization in the current direction or an update to the search
+direction itself. The state for the minimizers is held in a
+`gsl_multimin_fdfminimizer' struct or a `gsl_multimin_fminimizer'
+struct.
+
+
+File: gsl-ref.info, Node: Multimin Caveats, Next: Initializing the Multidimensional Minimizer, Prev: Multimin Overview, Up: Multidimensional Minimization
+
+35.2 Caveats
+============
+
+Note that the minimization algorithms can only search for one local
+minimum at a time. When there are several local minima in the search
+area, the first minimum to be found will be returned; however, it is
+difficult to predict which of the minima this will be. In most cases,
+no error will be reported if you try to find a local minimum in an area
+where there is more than one.
+
+ It is also important to note that the minimization algorithms find
+local minima; there is no way to determine whether a minimum is a global
+minimum of the function in question.
+
+
+File: gsl-ref.info, Node: Initializing the Multidimensional Minimizer, Next: Providing a function to minimize, Prev: Multimin Caveats, Up: Multidimensional Minimization
+
+35.3 Initializing the Multidimensional Minimizer
+================================================
+
+The following function initializes a multidimensional minimizer. The
+minimizer itself depends only on the dimension of the problem and the
+algorithm and can be reused for different problems.
+
+ -- Function: gsl_multimin_fdfminimizer *
+          gsl_multimin_fdfminimizer_alloc (const
+          gsl_multimin_fdfminimizer_type * T, size_t N)
+ -- Function: gsl_multimin_fminimizer * gsl_multimin_fminimizer_alloc
+ (const gsl_multimin_fminimizer_type * T, size_t N)
+     These functions return a pointer to a newly allocated instance of
+     a minimizer of type T for an N-dimensional function.  If there is
+ insufficient memory to create the minimizer then the function
+ returns a null pointer and the error handler is invoked with an
+ error code of `GSL_ENOMEM'.
+
+ -- Function: int gsl_multimin_fdfminimizer_set
+ (gsl_multimin_fdfminimizer * S, gsl_multimin_function_fdf *
+ FDF, const gsl_vector * X, double STEP_SIZE, double TOL)
+ This function initializes the minimizer S to minimize the function
+ FDF starting from the initial point X. The size of the first
+ trial step is given by STEP_SIZE. The accuracy of the line
+ minimization is specified by TOL. The precise meaning of this
+ parameter depends on the method used. Typically the line
+ minimization is considered successful if the gradient of the
+ function g is orthogonal to the current search direction p to a
+ relative accuracy of TOL, where dot(p,g) < tol |p| |g|. A TOL
+ value of 0.1 is suitable for most purposes, since line
+ minimization only needs to be carried out approximately. Note
+ that setting TOL to zero will force the use of "exact"
+ line-searches, which are extremely expensive.
+
+ -- Function: int gsl_multimin_fminimizer_set (gsl_multimin_fminimizer
+ * S, gsl_multimin_function * F, const gsl_vector * X, const
+ gsl_vector * STEP_SIZE)
+ This function initializes the minimizer S to minimize the function
+ F, starting from the initial point X. The size of the initial
+ trial steps is given in vector STEP_SIZE. The precise meaning of
+ this parameter depends on the method used.
+
+ -- Function: void gsl_multimin_fdfminimizer_free
+ (gsl_multimin_fdfminimizer * S)
+ -- Function: void gsl_multimin_fminimizer_free
+ (gsl_multimin_fminimizer * S)
+     These functions free all the memory associated with the minimizer
+     S.
+
+ -- Function: const char * gsl_multimin_fdfminimizer_name (const
+ gsl_multimin_fdfminimizer * S)
+ -- Function: const char * gsl_multimin_fminimizer_name (const
+ gsl_multimin_fminimizer * S)
+     These functions return a pointer to the name of the minimizer.
+     For example,
+
+ printf ("s is a '%s' minimizer\n",
+ gsl_multimin_fdfminimizer_name (s));
+
+ would print something like `s is a 'conjugate_pr' minimizer'.
+
+
+File: gsl-ref.info, Node: Providing a function to minimize, Next: Multimin Iteration, Prev: Initializing the Multidimensional Minimizer, Up: Multidimensional Minimization
+
+35.4 Providing a function to minimize
+=====================================
+
+You must provide a parametric function of n variables for the
+minimizers to operate on. You may also need to provide a routine which
+calculates the gradient of the function and a third routine which
+calculates both the function value and the gradient together. In order
+to allow for general parameters the functions are defined by the
+following data types:
+
+ -- Data Type: gsl_multimin_function_fdf
+ This data type defines a general function of n variables with
+ parameters and the corresponding gradient vector of derivatives,
+
+ `double (* f) (const gsl_vector * X, void * PARAMS)'
+ this function should return the result f(x,params) for
+ argument X and parameters PARAMS.
+
+ `void (* df) (const gsl_vector * X, void * PARAMS, gsl_vector * G)'
+          this function should store the N-dimensional gradient g_i = d
+          f(x,params) / d x_i in the vector G for argument X and
+          parameters PARAMS.
+
+ `void (* fdf) (const gsl_vector * X, void * PARAMS, double * f, gsl_vector * G)'
+          This function should set the values of F and G as above,
+ for arguments X and parameters PARAMS. This function
+ provides an optimization of the separate functions for f(x)
+ and g(x)--it is always faster to compute the function and its
+ derivative at the same time.
+
+ `size_t n'
+ the dimension of the system, i.e. the number of components of
+ the vectors X.
+
+ `void * params'
+ a pointer to the parameters of the function.
+
+ -- Data Type: gsl_multimin_function
+ This data type defines a general function of n variables with
+ parameters,
+
+ `double (* f) (const gsl_vector * X, void * PARAMS)'
+ this function should return the result f(x,params) for
+ argument X and parameters PARAMS.
+
+ `size_t n'
+ the dimension of the system, i.e. the number of components of
+ the vectors X.
+
+ `void * params'
+ a pointer to the parameters of the function.
+
+The following example function defines a simple paraboloid with two
+parameters,
+
+ /* Paraboloid centered on (dp[0],dp[1]) */
+
+ double
+ my_f (const gsl_vector *v, void *params)
+ {
+ double x, y;
+ double *dp = (double *)params;
+
+ x = gsl_vector_get(v, 0);
+ y = gsl_vector_get(v, 1);
+
+ return 10.0 * (x - dp[0]) * (x - dp[0]) +
+ 20.0 * (y - dp[1]) * (y - dp[1]) + 30.0;
+ }
+
+ /* The gradient of f, df = (df/dx, df/dy). */
+ void
+ my_df (const gsl_vector *v, void *params,
+ gsl_vector *df)
+ {
+ double x, y;
+ double *dp = (double *)params;
+
+ x = gsl_vector_get(v, 0);
+ y = gsl_vector_get(v, 1);
+
+ gsl_vector_set(df, 0, 20.0 * (x - dp[0]));
+ gsl_vector_set(df, 1, 40.0 * (y - dp[1]));
+ }
+
+ /* Compute both f and df together. */
+ void
+ my_fdf (const gsl_vector *x, void *params,
+ double *f, gsl_vector *df)
+ {
+ *f = my_f(x, params);
+ my_df(x, params, df);
+ }
+
+The function can be initialized using the following code,
+
+ gsl_multimin_function_fdf my_func;
+
+ double p[2] = { 1.0, 2.0 }; /* center at (1,2) */
+
+ my_func.f = &my_f;
+ my_func.df = &my_df;
+ my_func.fdf = &my_fdf;
+ my_func.n = 2;
+ my_func.params = (void *)p;
+
+
+File: gsl-ref.info, Node: Multimin Iteration, Next: Multimin Stopping Criteria, Prev: Providing a function to minimize, Up: Multidimensional Minimization
+
+35.5 Iteration
+==============
+
+The following function drives the iteration of each algorithm. The
+function performs one iteration to update the state of the minimizer.
+The same function works for all minimizers so that different methods can
+be substituted at runtime without modifications to the code.
+
+ -- Function: int gsl_multimin_fdfminimizer_iterate
+ (gsl_multimin_fdfminimizer * S)
+ -- Function: int gsl_multimin_fminimizer_iterate
+ (gsl_multimin_fminimizer * S)
+ These functions perform a single iteration of the minimizer S. If
+ the iteration encounters an unexpected problem then an error code
+ will be returned.
+
+The minimizer maintains a current best estimate of the minimum at all
+times. This information can be accessed with the following auxiliary
+functions,
+
+ -- Function: gsl_vector * gsl_multimin_fdfminimizer_x (const
+ gsl_multimin_fdfminimizer * S)
+ -- Function: gsl_vector * gsl_multimin_fminimizer_x (const
+ gsl_multimin_fminimizer * S)
+ -- Function: double gsl_multimin_fdfminimizer_minimum (const
+ gsl_multimin_fdfminimizer * S)
+ -- Function: double gsl_multimin_fminimizer_minimum (const
+ gsl_multimin_fminimizer * S)
+ -- Function: gsl_vector * gsl_multimin_fdfminimizer_gradient (const
+ gsl_multimin_fdfminimizer * S)
+ -- Function: double gsl_multimin_fminimizer_size (const
+ gsl_multimin_fminimizer * S)
+ These functions return the current best estimate of the location
+ of the minimum, the value of the function at that point, its
+     gradient, and the minimizer-specific characteristic size for the
+ minimizer S.
+
+ -- Function: int gsl_multimin_fdfminimizer_restart
+ (gsl_multimin_fdfminimizer * S)
+ This function resets the minimizer S to use the current point as a
+ new starting point.
+
+
+File: gsl-ref.info, Node: Multimin Stopping Criteria, Next: Multimin Algorithms, Prev: Multimin Iteration, Up: Multidimensional Minimization
+
+35.6 Stopping Criteria
+======================
+
+A minimization procedure should stop when one of the following
+conditions is true:
+
+ * A minimum has been found to within the user-specified precision.
+
+ * A user-specified maximum number of iterations has been reached.
+
+ * An error has occurred.
+
+The handling of these conditions is under user control. The functions
+below allow the user to test the precision of the current result.
+
+ -- Function: int gsl_multimin_test_gradient (const gsl_vector * G,
+ double EPSABS)
+ This function tests the norm of the gradient G against the
+ absolute tolerance EPSABS. The gradient of a multidimensional
+ function goes to zero at a minimum. The test returns `GSL_SUCCESS'
+ if the following condition is achieved,
+
+ |g| < epsabs
+
+ and returns `GSL_CONTINUE' otherwise. A suitable choice of EPSABS
+ can be made from the desired accuracy in the function for small
+ variations in x. The relationship between these quantities is
+ given by \delta f = g \delta x.
+
+ -- Function: int gsl_multimin_test_size (const double SIZE, double
+ EPSABS)
+     This function tests the minimizer-specific characteristic size (if
+     applicable to the minimizer in use) against the absolute tolerance
+     EPSABS.  The test returns `GSL_SUCCESS' if the size is smaller than
+     the tolerance, and `GSL_CONTINUE' otherwise.
+
+
+File: gsl-ref.info, Node: Multimin Algorithms, Next: Multimin Examples, Prev: Multimin Stopping Criteria, Up: Multidimensional Minimization
+
+35.7 Algorithms
+===============
+
+There are several minimization methods available. The best choice of
+algorithm depends on the problem. All of the algorithms use the value
+of the function and its gradient at each evaluation point, except for
+the Simplex algorithm which uses function values only.
+
+ -- Minimizer: gsl_multimin_fdfminimizer_conjugate_fr
+ This is the Fletcher-Reeves conjugate gradient algorithm. The
+ conjugate gradient algorithm proceeds as a succession of line
+ minimizations. The sequence of search directions is used to build
+ up an approximation to the curvature of the function in the
+ neighborhood of the minimum.
+
+ An initial search direction P is chosen using the gradient, and
+ line minimization is carried out in that direction. The accuracy
+ of the line minimization is specified by the parameter TOL. The
+ minimum along this line occurs when the function gradient G and
+ the search direction P are orthogonal. The line minimization
+ terminates when dot(p,g) < tol |p| |g|. The search direction is
+ updated using the Fletcher-Reeves formula p' = g' - \beta g where
+ \beta=-|g'|^2/|g|^2, and the line minimization is then repeated
+ for the new search direction.
+
+ -- Minimizer: gsl_multimin_fdfminimizer_conjugate_pr
+ This is the Polak-Ribiere conjugate gradient algorithm. It is
+ similar to the Fletcher-Reeves method, differing only in the
+ choice of the coefficient \beta. Both methods work well when the
+ evaluation point is close enough to the minimum of the objective
+ function that it is well approximated by a quadratic hypersurface.
+
+ -- Minimizer: gsl_multimin_fdfminimizer_vector_bfgs2
+ -- Minimizer: gsl_multimin_fdfminimizer_vector_bfgs
+ These methods use the vector Broyden-Fletcher-Goldfarb-Shanno
+ (BFGS) algorithm. This is a quasi-Newton method which builds up
+ an approximation to the second derivatives of the function f using
+ the difference between successive gradient vectors. By combining
+ the first and second derivatives the algorithm is able to take
+ Newton-type steps towards the function minimum, assuming quadratic
+ behavior in that region.
+
+ The `bfgs2' version of this minimizer is the most efficient
+ version available, and is a faithful implementation of the line
+ minimization scheme described in Fletcher's `Practical Methods of
+     Optimization', Algorithms 2.6.2 and 2.6.4.  It supersedes the
+ original `bfgs' routine and requires substantially fewer function
+ and gradient evaluations. The user-supplied tolerance TOL
+ corresponds to the parameter \sigma used by Fletcher. A value of
+ 0.1 is recommended for typical use (larger values correspond to
+ less accurate line searches).
+
+
+ -- Minimizer: gsl_multimin_fdfminimizer_steepest_descent
+ The steepest descent algorithm follows the downhill gradient of the
+ function at each step. When a downhill step is successful the
+ step-size is increased by a factor of two. If the downhill step
+ leads to a higher function value then the algorithm backtracks and
+ the step size is decreased using the parameter TOL. A suitable
+ value of TOL for most applications is 0.1. The steepest descent
+ method is inefficient and is included only for demonstration
+ purposes.
+
+ -- Minimizer: gsl_multimin_fminimizer_nmsimplex
+     This is the Simplex algorithm of Nelder and Mead.  It constructs
+     n+1 vectors p_i from the starting vector X and the vector STEP_SIZE
+     as follows:
+
+ p_0 = (x_0, x_1, ... , x_n)
+ p_1 = (x_0 + step_size_0, x_1, ... , x_n)
+ p_2 = (x_0, x_1 + step_size_1, ... , x_n)
+ ... = ...
+ p_n = (x_0, x_1, ... , x_n+step_size_n)
+
+ These vectors form the n+1 vertices of a simplex in n dimensions.
+ On each iteration the algorithm tries to improve the parameter
+ vector p_i corresponding to the highest function value by simple
+ geometrical transformations. These are reflection, reflection
+ followed by expansion, contraction and multiple contraction. Using
+ these transformations the simplex moves through the parameter
+ space towards the minimum, where it contracts itself.
+
+     After each iteration, the best vertex is returned.  Note that, due
+     to the nature of the algorithm, not every step improves the current
+     best parameter vector.  Usually several iterations are required.
+
+ The routine calculates the minimizer specific characteristic size
+ as the average distance from the geometrical center of the simplex
+     to all its vertices.  This size can be used as a stopping
+     criterion, as the simplex contracts itself near the minimum.  The
+ size is returned by the function `gsl_multimin_fminimizer_size'.
+
+
+File: gsl-ref.info, Node: Multimin Examples, Next: Multimin References and Further Reading, Prev: Multimin Algorithms, Up: Multidimensional Minimization
+
+35.8 Examples
+=============
+
+This example program finds the minimum of the paraboloid function
+defined earlier. The location of the minimum is offset from the origin
+in x and y, and the function value at the minimum is non-zero.  The
+main program is given below; it requires the example function given
+earlier in this chapter.
+
+ int
+ main (void)
+ {
+ size_t iter = 0;
+ int status;
+
+ const gsl_multimin_fdfminimizer_type *T;
+ gsl_multimin_fdfminimizer *s;
+
+ /* Position of the minimum (1,2). */
+ double par[2] = { 1.0, 2.0 };
+
+ gsl_vector *x;
+ gsl_multimin_function_fdf my_func;
+
+ my_func.f = &my_f;
+ my_func.df = &my_df;
+ my_func.fdf = &my_fdf;
+ my_func.n = 2;
+ my_func.params = &par;
+
+ /* Starting point, x = (5,7) */
+ x = gsl_vector_alloc (2);
+ gsl_vector_set (x, 0, 5.0);
+ gsl_vector_set (x, 1, 7.0);
+
+ T = gsl_multimin_fdfminimizer_conjugate_fr;
+ s = gsl_multimin_fdfminimizer_alloc (T, 2);
+
+ gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4);
+
+ do
+ {
+ iter++;
+ status = gsl_multimin_fdfminimizer_iterate (s);
+
+ if (status)
+ break;
+
+ status = gsl_multimin_test_gradient (s->gradient, 1e-3);
+
+ if (status == GSL_SUCCESS)
+ printf ("Minimum found at:\n");
+
+      printf ("%5d %.5f %.5f %10.5f\n", (int) iter,
+ gsl_vector_get (s->x, 0),
+ gsl_vector_get (s->x, 1),
+ s->f);
+
+ }
+ while (status == GSL_CONTINUE && iter < 100);
+
+ gsl_multimin_fdfminimizer_free (s);
+ gsl_vector_free (x);
+
+ return 0;
+ }
+
+The initial step-size is chosen as 0.01, a conservative estimate in this
+case, and the line minimization parameter is set at 0.0001. The program
+terminates when the norm of the gradient has been reduced below 0.001.
+The output of the program is shown below,
+
+ x y f
+ 1 4.99629 6.99072 687.84780
+ 2 4.98886 6.97215 683.55456
+ 3 4.97400 6.93501 675.01278
+ 4 4.94429 6.86073 658.10798
+ 5 4.88487 6.71217 625.01340
+ 6 4.76602 6.41506 561.68440
+ 7 4.52833 5.82083 446.46694
+ 8 4.05295 4.63238 261.79422
+ 9 3.10219 2.25548 75.49762
+ 10 2.85185 1.62963 67.03704
+ 11 2.19088 1.76182 45.31640
+ 12 0.86892 2.02622 30.18555
+ Minimum found at:
+ 13 1.00000 2.00000 30.00000
+
+Note that the algorithm gradually increases the step size as it
+successfully moves downhill, as can be seen by plotting the successive
+points.
+
+The conjugate gradient algorithm finds the minimum on its second
+direction because the function is purely quadratic. Additional
+iterations would be needed for a more complicated function.
+
+   Here is another example using the Nelder-Mead Simplex algorithm to
+minimize the same example objective function as above.
+
+ int
+ main(void)
+ {
+ size_t np = 2;
+ double par[2] = {1.0, 2.0};
+
+ const gsl_multimin_fminimizer_type *T =
+ gsl_multimin_fminimizer_nmsimplex;
+ gsl_multimin_fminimizer *s = NULL;
+ gsl_vector *ss, *x;
+ gsl_multimin_function minex_func;
+
+ size_t iter = 0, i;
+ int status;
+ double size;
+
+ /* Initial vertex size vector */
+ ss = gsl_vector_alloc (np);
+
+ /* Set all step sizes to 1 */
+ gsl_vector_set_all (ss, 1.0);
+
+ /* Starting point */
+ x = gsl_vector_alloc (np);
+
+ gsl_vector_set (x, 0, 5.0);
+ gsl_vector_set (x, 1, 7.0);
+
+ /* Initialize method and iterate */
+ minex_func.f = &my_f;
+ minex_func.n = np;
+ minex_func.params = (void *)&par;
+
+ s = gsl_multimin_fminimizer_alloc (T, np);
+ gsl_multimin_fminimizer_set (s, &minex_func, x, ss);
+
+ do
+ {
+ iter++;
+ status = gsl_multimin_fminimizer_iterate(s);
+
+ if (status)
+ break;
+
+ size = gsl_multimin_fminimizer_size (s);
+ status = gsl_multimin_test_size (size, 1e-2);
+
+ if (status == GSL_SUCCESS)
+ {
+ printf ("converged to minimum at\n");
+ }
+
+      printf ("%5d ", (int) iter);
+ for (i = 0; i < np; i++)
+ {
+ printf ("%10.3e ", gsl_vector_get (s->x, i));
+ }
+ printf ("f() = %7.3f size = %.3f\n", s->fval, size);
+ }
+ while (status == GSL_CONTINUE && iter < 100);
+
+ gsl_vector_free(x);
+ gsl_vector_free(ss);
+ gsl_multimin_fminimizer_free (s);
+
+ return status;
+ }
+
+The minimum search stops when the Simplex size drops to 0.01. The
+output is shown below.
+
+ 1 6.500e+00 5.000e+00 f() = 512.500 size = 1.082
+ 2 5.250e+00 4.000e+00 f() = 290.625 size = 1.372
+ 3 5.250e+00 4.000e+00 f() = 290.625 size = 1.372
+ 4 5.500e+00 1.000e+00 f() = 252.500 size = 1.372
+ 5 2.625e+00 3.500e+00 f() = 101.406 size = 1.823
+ 6 3.469e+00 1.375e+00 f() = 98.760 size = 1.526
+ 7 1.820e+00 3.156e+00 f() = 63.467 size = 1.105
+ 8 1.820e+00 3.156e+00 f() = 63.467 size = 1.105
+ 9 1.016e+00 2.812e+00 f() = 43.206 size = 1.105
+ 10 2.041e+00 2.008e+00 f() = 40.838 size = 0.645
+ 11 1.236e+00 1.664e+00 f() = 32.816 size = 0.645
+ 12 1.236e+00 1.664e+00 f() = 32.816 size = 0.447
+ 13 5.225e-01 1.980e+00 f() = 32.288 size = 0.447
+ 14 1.103e+00 2.073e+00 f() = 30.214 size = 0.345
+ 15 1.103e+00 2.073e+00 f() = 30.214 size = 0.264
+ 16 1.103e+00 2.073e+00 f() = 30.214 size = 0.160
+ 17 9.864e-01 1.934e+00 f() = 30.090 size = 0.132
+ 18 9.190e-01 1.987e+00 f() = 30.069 size = 0.092
+ 19 1.028e+00 2.017e+00 f() = 30.013 size = 0.056
+ 20 1.028e+00 2.017e+00 f() = 30.013 size = 0.046
+ 21 1.028e+00 2.017e+00 f() = 30.013 size = 0.033
+ 22 9.874e-01 1.985e+00 f() = 30.006 size = 0.028
+ 23 9.846e-01 1.995e+00 f() = 30.003 size = 0.023
+ 24 1.007e+00 2.003e+00 f() = 30.001 size = 0.012
+ converged to minimum at
+ 25 1.007e+00 2.003e+00 f() = 30.001 size = 0.010
+
+The simplex size first increases, while the simplex moves towards the
+minimum. After a while the size begins to decrease as the simplex
+contracts around the minimum.
+
+
+File: gsl-ref.info, Node: Multimin References and Further Reading, Prev: Multimin Examples, Up: Multidimensional Minimization
+
+35.9 References and Further Reading
+===================================
+
+The conjugate gradient and BFGS methods are described in detail in the
+following book,
+
+ R. Fletcher, `Practical Methods of Optimization (Second Edition)'
+ Wiley (1987), ISBN 0471915475.
+
+ A brief description of multidimensional minimization algorithms and
+more recent references can be found in,
+
+ C.W. Ueberhuber, `Numerical Computation (Volume 2)', Chapter 14,
+ Section 4.4 "Minimization Methods", p. 325-335, Springer (1997),
+ ISBN 3-540-62057-5.
+
+The simplex algorithm is described in the following paper,
+
+ J.A. Nelder and R. Mead, `A simplex method for function
+ minimization', Computer Journal vol. 7 (1965), 308-315.
+
+
+
+File: gsl-ref.info, Node: Least-Squares Fitting, Next: Nonlinear Least-Squares Fitting, Prev: Multidimensional Minimization, Up: Top
+
+36 Least-Squares Fitting
+************************
+
+This chapter describes routines for performing least squares fits to
+experimental data using linear combinations of functions. The data may
+be weighted or unweighted, i.e. with known or unknown errors. For
+weighted data the functions compute the best fit parameters and their
+associated covariance matrix. For unweighted data the covariance
+matrix is estimated from the scatter of the points, giving a
+variance-covariance matrix.
+
+ The functions are divided into separate versions for simple one- or
+two-parameter regression and multiple-parameter fits. The functions
+are declared in the header file `gsl_fit.h'.
+
+* Menu:
+
+* Fitting Overview::
+* Linear regression::
+* Linear fitting without a constant term::
+* Multi-parameter fitting::
+* Fitting Examples::
+* Fitting References and Further Reading::
+
+
+File: gsl-ref.info, Node: Fitting Overview, Next: Linear regression, Up: Least-Squares Fitting
+
+36.1 Overview
+=============
+
+Least-squares fits are found by minimizing \chi^2 (chi-squared), the
+weighted sum of squared residuals over n experimental datapoints (x_i,
+y_i) for the model Y(c,x),
+
+ \chi^2 = \sum_i w_i (y_i - Y(c, x_i))^2
+
+The p parameters of the model are c = {c_0, c_1, ...}. The weight
+factors w_i are given by w_i = 1/\sigma_i^2, where \sigma_i is the
+experimental error on the data-point y_i. The errors are assumed to be
+gaussian and uncorrelated. For unweighted data the chi-squared sum is
+computed without any weight factors.
+
+ The fitting routines return the best-fit parameters c and their p
+\times p covariance matrix. The covariance matrix measures the
+statistical errors on the best-fit parameters resulting from the errors
+on the data, \sigma_i, and is defined as C_{ab} = <\delta c_a \delta
+c_b> where < > denotes an average over the gaussian error distributions
+of the underlying datapoints.
+
+ The covariance matrix is calculated by error propagation from the
+data errors \sigma_i. The change in a fitted parameter \delta c_a
+caused by a small change in the data \delta y_i is given by
+
+ \delta c_a = \sum_i (dc_a/dy_i) \delta y_i
+
+allowing the covariance matrix to be written in terms of the errors on
+the data,
+
+ C_{ab} = \sum_{i,j} (dc_a/dy_i) (dc_b/dy_j) <\delta y_i \delta y_j>
+
+For uncorrelated data the fluctuations of the underlying datapoints
+satisfy <\delta y_i \delta y_j> = \sigma_i^2 \delta_{ij}, giving a
+corresponding parameter covariance matrix of
+
+ C_{ab} = \sum_i (1/w_i) (dc_a/dy_i) (dc_b/dy_i)
+
+When computing the covariance matrix for unweighted data, i.e. data
+with unknown errors, the weight factors w_i in this sum are replaced by
+the single estimate w = 1/\sigma^2, where \sigma^2 is the computed
+variance of the residuals about the best-fit model, \sigma^2 = \sum
+(y_i - Y(c,x_i))^2 / (n-p). This is referred to as the
+"variance-covariance matrix".
+
+ The standard deviations of the best-fit parameters are given by the
+square root of the corresponding diagonal elements of the covariance
+matrix, \sigma_{c_a} = \sqrt{C_{aa}}.
+
+
+File: gsl-ref.info, Node: Linear regression, Next: Linear fitting without a constant term, Prev: Fitting Overview, Up: Least-Squares Fitting
+
+36.2 Linear regression
+======================
+
+The functions described in this section can be used to perform
+least-squares fits to a straight line model, Y(c,x) = c_0 + c_1 x.
+
+ -- Function: int gsl_fit_linear (const double * X, const size_t
+ XSTRIDE, const double * Y, const size_t YSTRIDE, size_t N,
+ double * C0, double * C1, double * COV00, double * COV01,
+ double * COV11, double * SUMSQ)
+ This function computes the best-fit linear regression coefficients
+ (C0,C1) of the model Y = c_0 + c_1 X for the dataset (X, Y), two
+ vectors of length N with strides XSTRIDE and YSTRIDE. The errors
+ on Y are assumed unknown so the variance-covariance matrix for the
+ parameters (C0, C1) is estimated from the scatter of the points
+ around the best-fit line and returned via the parameters (COV00,
+ COV01, COV11). The sum of squares of the residuals from the
+ best-fit line is returned in SUMSQ.
+
+ -- Function: int gsl_fit_wlinear (const double * X, const size_t
+ XSTRIDE, const double * W, const size_t WSTRIDE, const double
+ * Y, const size_t YSTRIDE, size_t N, double * C0, double *
+ C1, double * COV00, double * COV01, double * COV11, double *
+ CHISQ)
+ This function computes the best-fit linear regression coefficients
+ (C0,C1) of the model Y = c_0 + c_1 X for the weighted dataset (X,
+ Y), two vectors of length N with strides XSTRIDE and YSTRIDE. The
+ vector W, of length N and stride WSTRIDE, specifies the weight of
+ each datapoint. The weight is the reciprocal of the variance for
+ each datapoint in Y.
+
+ The covariance matrix for the parameters (C0, C1) is computed
+ using the weights and returned via the parameters (COV00, COV01,
+ COV11). The weighted sum of squares of the residuals from the
+ best-fit line, \chi^2, is returned in CHISQ.
+
+ -- Function: int gsl_fit_linear_est (double X, double C0, double C1,
+          double COV00, double COV01, double COV11, double * Y,
+          double * Y_ERR)
+ This function uses the best-fit linear regression coefficients
+ C0,C1 and their covariance COV00,COV01,COV11 to compute the fitted
+ function Y and its standard deviation Y_ERR for the model Y = c_0
+ + c_1 X at the point X.
+