In mathematics, a series acceleration method is any one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. For example, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
Given an infinite series with a sequence of partial sums

(S_n)_{n \in \N}

having a limit

\lim_{n \to \infty} S_n = S,

an accelerated series is an infinite series with a second sequence of partial sums

(S'_n)_{n \in \N}

which asymptotically converges faster to S than the original sequence of partial sums:

\lim_{n \to \infty} \frac{S'_n - S}{S_n - S} = 0.
A series acceleration method is a sequence transformation that transforms the convergent sequences of partial sums of a series into more quickly convergent sequences of partial sums of an accelerated series with the same limit. If a series acceleration method is applied to a divergent series then the proper limit of the series is undefined, but the sequence transformation can still act usefully as an extrapolation method to an antilimit of the series.
The mappings from the original to the transformed series may be linear sequence transformations or non-linear sequence transformations. In general, the non-linear sequence transformations tend to be more powerful.
Two classical techniques for series acceleration are Euler's transformation of series and Kummer's transformation of series. A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.
For alternating series, several powerful techniques, offering convergence rates from 5.828^{-n} all the way to 17.93^{-n} for a summation of n terms, are described by Cohen, Rodriguez Villegas, and Zagier.
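The fastest of these alternating-series schemes admits a remarkably short implementation. The following is a sketch (the function name is ours) of the Cohen–Rodriguez Villegas–Zagier algorithm based on Chebyshev polynomials, which achieves roughly the 5.828^{-n} rate:

```python
from math import sqrt

def cvz_alternating_sum(a, n):
    """Accelerate sum_{k>=0} (-1)^k a(k) with the Cohen-Rodriguez Villegas-
    Zagier Chebyshev scheme; the error decays roughly like (3+sqrt(8))^-n."""
    d = (3 + sqrt(8)) ** n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c                 # c_k: running Chebyshev coefficient
        s += c * a(k)             # weighted partial sum
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1))
    return s / d

# Example: 1 - 1/2 + 1/3 - ... = ln 2, from only n = 10 terms
approx = cvz_alternating_sum(lambda k: 1 / (k + 1), 10)
```

Note that only n evaluations of the terms a(k) are needed, and the weights are generated by a simple rational recurrence rather than computed from an explicit polynomial.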
A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by
\sum_{n=0}^{\infty} (-1)^n a_n = \sum_{n=0}^{\infty} (-1)^n \frac{(\Delta^n a)_0}{2^{n+1}},
where \Delta is the forward difference operator, for which one has the formula

(\Delta^n a)_0 = \sum_{k=0}^{n} (-1)^k \binom{n}{k} a_{n-k}.
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
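As a sketch of how this works in practice (the helper name is ours), the transformed series can be summed directly, computing each forward difference from the binomial formula:

```python
from math import comb

def euler_transform_sum(a, terms):
    """Sum the Euler-transformed series sum_n (-1)^n (Delta^n a)_0 / 2^(n+1),
    computing each forward difference (Delta^n a)_0 by the binomial formula."""
    total = 0.0
    for n in range(terms):
        delta = sum((-1) ** k * comb(n, k) * a(n - k) for k in range(n + 1))
        total += (-1) ** n * delta / 2 ** (n + 1)
    return total

# For a_n = 1/(n+1), the original series 1 - 1/2 + 1/3 - ... = ln 2 converges
# like 1/n, while the transformed series converges like 2^-n.
```

For the alternating harmonic series the forward differences satisfy (\Delta^n a)_0 = (-1)^n/(n+1), so the transformed terms are 1/((n+1) 2^{n+1}), making the geometric speed-up explicit.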
A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.[2]
A series

S = \sum_{n=0}^{\infty} a_n

can be written as f(1), where the function f is defined as

f(z) = \sum_{n=0}^{\infty} a_n z^n.
The function f(z) can have singularities in the complex plane (branch points, poles, or essential singularities) which limit the radius of convergence of the series. If the point z = 1 is close to or on the boundary of the domain of convergence, the series for S converges very slowly. One can then often improve the convergence by means of a conformal mapping that moves the singularities so that the point mapped to z = 1 lies deeper inside the new domain of convergence.
The conformal transform z = \Phi(w) must be chosen such that \Phi(0) = 0, and one usually chooses a function with a finite derivative at w = 0. One can assume \Phi(1) = 1 without loss of generality, since w can be rescaled to redefine \Phi. One then considers the function

g(w) = f(\Phi(w)).

Since \Phi(1) = 1, one has f(1) = g(1). The series expansion of g(w) is obtained by substituting z = \Phi(w) in the series expansion of f(z); because \Phi(0) = 0, the first n terms of the series expansion of f(z) yield the first n terms of the series expansion of g(w), provided \Phi'(0) \neq 0. Putting w = 1 in that series expansion will thus yield a series such that, if it converges, it converges to the same value as the original series.
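As a concrete illustration (our own example, not taken from the text): for f(z) = \ln(1+z), the only singularity is the branch point at z = -1, and the map \Phi(w) = w/(2-w) satisfies \Phi(0) = 0 and \Phi(1) = 1 while sending that singularity to infinity (the branch cut maps to real w \geq 2). The composed series g(w) = f(\Phi(w)) = -\ln(1 - w/2) therefore converges like 2^{-n} at w = 1, versus like 1/n for the original series:

```python
N = 30  # truncation order for all power series

# Taylor coefficients of f(z) = ln(1+z): a_0 = 0, a_n = (-1)^(n-1)/n
a = [0.0] + [(-1) ** (n - 1) / n for n in range(1, N + 1)]

# Coefficients of the map Phi(w) = w/(2-w) = sum_{n>=1} w^n / 2^n
phi = [0.0] + [0.5 ** n for n in range(1, N + 1)]

def series_mul(p, q):
    """Multiply two truncated power series, keeping orders 0..N."""
    r = [0.0] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j in range(min(len(q), N + 1 - i)):
                r[i + j] += pi * q[j]
    return r

# Horner-style composition g = (...(a_N*phi + a_{N-1})*phi ...) + a_0;
# truncation is exact through order N because phi has no constant term.
g = [a[N]] + [0.0] * N
for n in range(N - 1, -1, -1):
    g = series_mul(g, phi)
    g[0] += a[n]

direct = sum(a)   # partial sum of f at z = 1: error ~ 1/(2N)
mapped = sum(g)   # partial sum of g at w = 1: error ~ 2^-N
```

With N = 30 the mapped series is accurate to about ten digits, while the direct partial sum still carries an error of order 10^{-2}.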
Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.
Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those arising in perturbation theory, and may therefore be used as effective extrapolation methods.
See main article: Aitken's delta-squared process. A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,
A : S \to S' = A(S) = (s'_n)_{n \in \N}

defined by

s'_n = s_{n+2} - \frac{(s_{n+2} - s_{n+1})^2}{s_{n+2} - 2 s_{n+1} + s_n}.
This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
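A minimal sketch (the helper name is ours) applying the formula above to the partial sums of the alternating harmonic series:

```python
def aitken(s):
    """Apply Aitken's delta-squared formula to a list of partial sums."""
    return [s[n + 2] - (s[n + 2] - s[n + 1]) ** 2
            / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of 1 - 1/2 + 1/3 - ... = ln 2
partials, total = [], 0.0
for k in range(12):
    total += (-1) ** k / (k + 1)
    partials.append(total)

accelerated = aitken(partials)
# The last accelerated value is several orders of magnitude closer to
# ln 2 than the last raw partial sum.
```

Because the transform only needs three consecutive partial sums, it can also be iterated, feeding the accelerated sequence back into aitken() for further gains.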
The Aitken method can be regarded as a special case of the Shanks transformation, which in turn is implemented efficiently by Wynn's \epsilon-algorithm.