Series (mathematics) explained

In mathematics, a series is, roughly speaking, an addition of infinitely many terms, one after the other.[1] The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures in combinatorics through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.

Among the Ancient Greeks, the idea that a potentially infinite summation could produce a finite result was considered paradoxical, most famously in Zeno's paradoxes. Nonetheless, infinite series were applied practically by Ancient Greek mathematicians including Archimedes, for instance in the quadrature of the parabola.[2] [3] The mathematical side of Zeno's paradoxes was resolved using the concept of a limit during the 17th century, especially through the early calculus of Isaac Newton. The resolution was made more rigorous and further improved in the 19th century through the work of Carl Friedrich Gauss and Augustin-Louis Cauchy, among others, answering questions about which of these sums exist via the completeness of the real numbers and whether series terms can be rearranged or not without changing their sums using absolute convergence and conditional convergence of series.

An infinite sequence (a_1, a_2, a_3, \ldots) of terms, whether those terms are numbers, functions, matrices, or anything else that can be added, defines a series, which is the addition of the a_i one after the other. To emphasize that there are an infinite number of terms, series are often also called infinite series. Series are represented by an expression like a_1+a_2+a_3+\cdots, or, using capital-sigma summation notation, \sum_{i=1}^\infty a_i.

The infinite sequence of additions expressed by a series cannot be explicitly performed in sequence in a finite amount of time. However, if the terms and their finite sums belong to a set that has limits, it may be possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity of the finite sums of the n first terms of the series if the limit exists.[4] These finite sums are called the partial sums of the series. Using summation notation, \sum_{i=1}^\infty a_i = \lim_{n\to\infty}\, \sum_{i=1}^n a_i, if it exists. When the limit exists, the series is convergent or summable and also the sequence

(a_1,a_2,a_3,\ldots)

is summable, and otherwise, when the limit does not exist, the series is divergent.

The expression \sum_{i=1}^\infty a_i denotes both the series—the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the explicit limit of the process. This is a generalization of the similar convention of denoting by

a+b

both the addition—the process of adding—and its result—the sum of a and b.

Commonly, the terms of a series come from a commutative ring, often the field \R of the real numbers or the field \Complex of the complex numbers. If so, the set of all series is also itself a ring, one in which the addition consists of adding series terms together term by term and the multiplication is the Cauchy product.[5] [6]

Definition

Series

A series or, redundantly, an infinite series, is an infinite sum. It is often represented as a_0 + a_1 + a_2 + \cdots, where the terms a_k are the members of a sequence of numbers, functions, or anything else that can be added. A series may also be represented with capital-sigma notation: \sum_{k=0}^{\infty} a_k \qquad \text{or} \qquad \sum_{k=1}^{\infty} a_k.

It is also common to express series using a few first terms, an ellipsis, a general term, and then a final ellipsis, the general term being an expression of the nth term as a function of n: a_0 + a_1 + a_2 + \cdots + a_n + \cdots \quad \text{or} \quad f(0) + f(1) + f(2) + \cdots + f(n) + \cdots. For example, Euler's number can be defined with the series \sum_{n=0}^\infty \frac{1}{n!} = 1 + 1 + \frac12 + \frac16 + \cdots + \frac{1}{n!} + \cdots, where n! denotes the product of the n first positive integers, and 0! is conventionally equal to 1.
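The definition of e as a series can be checked numerically. Below is a minimal Python sketch (an illustration, not part of the original article) summing the first terms 1/n!:

```python
import math

# Partial sums of the series sum_{n>=0} 1/n!, which defines Euler's number e.
def e_partial_sum(n):
    """Sum 1/k! for k = 0, 1, ..., n."""
    return sum(1 / math.factorial(k) for k in range(n + 1))

approx = e_partial_sum(15)   # already accurate to about 13 decimal places
```

Because n! grows so quickly, a handful of terms already suffices for double precision.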

Partial sum of a series

Given a series s=\sum_{k=0}^\infty a_k, its nth partial sum is s_n = \sum_{k=0}^{n} a_k = a_0 + a_1 + \cdots + a_n.

Some authors directly identify a series with its sequence of partial sums. Either the sequence of partial sums or the sequence of terms completely characterizes the series, and the sequence of terms can be recovered from the sequence of partial sums by taking the differences between consecutive elements, a_n = s_n - s_{n-1}.

Partial summation of a sequence is an example of a linear sequence transformation, and it is also known as the prefix sum in computer science. The inverse transformation for recovering a sequence from its partial sums is the finite difference, another linear sequence transformation.
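These two transformations are short to state in code. The following Python sketch (an illustration, with hypothetical helper names) computes prefix sums and recovers the terms by differencing:

```python
# Partial summation (prefix sum) and its inverse (finite difference).
def partial_sums(terms):
    sums, total = [], 0
    for t in terms:
        total += t
        sums.append(total)
    return sums

def differences(sums):
    # a_0 = s_0 and a_n = s_n - s_{n-1} recover the original terms.
    return [sums[0]] + [sums[i] - sums[i - 1] for i in range(1, len(sums))]

terms = [3, 1, 4, 1, 5]
sums = partial_sums(terms)       # [3, 4, 8, 9, 14]
recovered = differences(sums)    # equals terms again
```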

Partial sums of series sometimes have simpler closed form expressions, for instance an arithmetic series has partial sums s_n = \sum_{k=0}^{n} \left(a + kd\right) = a + (a + d) + (a + 2d) + \cdots + (a + nd) = (n+1) a + \tfrac12 n (n+1) d, and a geometric series has partial sums s_n = \sum_{k=0}^{n} ar^k = a + ar + ar^2 + \cdots + ar^n = a\frac{1-r^{n+1}}{1-r} for r \neq 1.
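Both closed forms are easy to verify numerically against direct summation; a small Python sketch with arbitrarily chosen illustrative values:

```python
# Compare direct summation against the closed forms for arithmetic
# and geometric partial sums.
a, d, r, n = 2.0, 3.0, 0.5, 10

arith_direct = sum(a + k * d for k in range(n + 1))
arith_closed = (n + 1) * a + 0.5 * n * (n + 1) * d

geom_direct = sum(a * r**k for k in range(n + 1))
geom_closed = a * (1 - r ** (n + 1)) / (1 - r)
```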

Sum of a series

Strictly speaking, a series is said to converge, to be convergent, or to be summable when the sequence of its partial sums has a limit. When the limit of the sequence of partial sums does not exist, the series diverges or is divergent. When the limit of the partial sums exists, it is called the sum of the series or value of the series: \sum_{k=0}^\infty a_k = \lim_{n\to\infty} \sum_{k=0}^n a_k = \lim_{n\to\infty} s_n. A series with only a finite number of nonzero terms is always convergent. Such series are useful for considering finite sums without keeping track of the number of terms.[8] When the sum exists, the difference between the sum of a series and its nth partial sum, s - s_n = \sum_{k=n+1}^\infty a_k, is known as the nth truncation error of the infinite series.[9] [10]

An example of a convergent series is the geometric series 1 + \frac12 + \frac14 + \frac18 + \cdots + \frac{1}{2^n} + \cdots. It can be shown by algebraic computation that each partial sum s_n is \sum_{k=0}^n \frac{1}{2^k} = 2-\frac{1}{2^n}. As one has \lim_{n\to\infty} \left(2-\frac{1}{2^n}\right) = 2, the series is convergent and converges to 2 with truncation errors 1/2^n.

By contrast, the geometric series \sum_{k=0}^\infty 2^k is divergent in the real numbers. However, it is convergent in the extended real number line, with +\infty as its limit and +\infty as its truncation error at every step.[11]

When a series's sequence of partial sums is not easily calculated and evaluated for convergence directly, convergence tests can be used to prove that the series converges or diverges.

Grouping and rearranging terms

Grouping

In ordinary finite summations, terms of the summation can be grouped and ungrouped freely without changing the result of the summation as a consequence of the associativity of addition.

a_0+a_1+a_2 = a_0+(a_1+a_2) = (a_0+a_1)+a_2.

Similarly, in a series, any finite groupings of terms of the series will not change the limit of the partial sums of the series and thus will not change the sum of the series. However, if an infinite number of groupings is performed in an infinite series, then the partial sums of the grouped series may have a different limit than the original series and different groupings may have different limits from one another; the sum of a_0+a_1+a_2+\cdots may not equal the sum of a_0+(a_1+a_2)+(a_3+a_4)+\cdots.

For example, Grandi's series 1-1+1-1+\cdots has a sequence of partial sums that alternates back and forth between 1 and 0 and does not converge. Grouping its elements in pairs creates the series

(1-1)+(1-1)+(1-1)+\cdots = 0+0+0+\cdots,

which has partial sums equal to zero at every term and thus sums to zero. Grouping its elements in pairs starting after the first creates the series

1+(-1+1)+(-1+1)+\cdots = 1+0+0+\cdots,

which has partial sums equal to one for every term and thus sums to one, a different result.
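The two groupings can be compared directly by tabulating partial sums; a Python sketch (an illustration, not from the original text):

```python
from itertools import accumulate

# Grandi's series 1 - 1 + 1 - 1 + ... and two infinite groupings of it.
grandi = [(-1) ** k for k in range(10)]
pairs_from_start = [(1 - 1)] * 5           # (1-1) + (1-1) + ... = 0 + 0 + ...
pairs_after_first = [1] + [(-1 + 1)] * 4   # 1 + (-1+1) + ...  = 1 + 0 + ...

grandi_sums = list(accumulate(grandi))                # alternates 1, 0, 1, 0, ...
grouped_sums_a = list(accumulate(pairs_from_start))   # all 0
grouped_sums_b = list(accumulate(pairs_after_first))  # all 1
```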

In general, grouping the terms of a series creates a new series with a sequence of partial sums that is a subsequence of the partial sums of the original series. This means that if the original series converges, so does the new series after grouping: all infinite subsequences of a convergent sequence also converge to the same limit. However, if the original series diverges, then the grouped series do not necessarily diverge, as in the example of Grandi's series above. On the other hand, divergence of a grouped series does imply the original series must be divergent, since it proves there is a subsequence of the partial sums of the original series which is not convergent, which would be impossible if the original series were convergent. This reasoning was applied in Oresme's proof of the divergence of the harmonic series,[12] and it is the basis for the general Cauchy condensation test.

Rearrangement

In ordinary finite summations, terms of the summation can be rearranged freely without changing the result of the summation as a consequence of the commutativity of addition.

a_0+a_1+a_2 = a_0+a_2+a_1 = a_2+a_1+a_0.

Similarly, in a series, any finite rearrangement of terms of a series does not change the limit of the partial sums of the series and thus does not change the sum of the series: for any finite rearrangement, there will be some term after which the rearrangement did not affect any further terms: any effects of rearrangement can be isolated to the finite summation up to that term, and finite summations do not change under rearrangement.

However, as for grouping, an infinitary rearrangement of terms of a series can sometimes lead to a change in the limit of the partial sums of the series. Series with sequences of partial sums that converge to a value but whose terms could be rearranged to form a series with partial sums that converge to some other value are called conditionally convergent series. Those that converge to the same value regardless of rearrangement are called unconditionally convergent series.

For series of real numbers and complex numbers, a series a_0+a_1+a_2+\cdots is unconditionally convergent if and only if the series summing the absolute values of its terms, |a_0|+|a_1|+|a_2|+\cdots, is also convergent, a property called absolute convergence. Otherwise, any series of real numbers or complex numbers that converges but does not converge absolutely is conditionally convergent. Any conditionally convergent sum of real numbers can be rearranged to yield any other real number as a limit, or to diverge. These claims are the content of the Riemann series theorem.

A historically important example of conditional convergence is the alternating harmonic series,

\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac12 + \frac13 - \frac14 + \frac15 - \cdots, which has a sum of the natural logarithm of 2, while the sum of the absolute values of the terms is the harmonic series, \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac12 + \frac13 + \frac14 + \frac15 + \cdots, which diverges per the divergence of the harmonic series, so the alternating harmonic series is conditionally convergent. For instance, rearranging the terms of the alternating harmonic series so that each positive term of the original series is followed by two negative terms of the original series rather than just one yields \begin{aligned}&1 - \frac12 - \frac14 + \frac13 - \frac16 - \frac18 + \frac15 - \frac1{10} - \frac1{12} + \cdots \\[3mu]&\quad = \left(1 - \frac12\right) - \frac14 + \left(\frac13 - \frac16\right) - \frac18 + \left(\frac15 - \frac1{10}\right) - \frac1{12} + \cdots \\[3mu]&\quad = \frac12 - \frac14 + \frac16 - \frac18 + \frac1{10} - \frac1{12} + \cdots \\[3mu]&\quad = \frac12 \left(1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \cdots \right),\end{aligned} which is \tfrac12 times the original series, so it would have a sum of half of the natural logarithm of 2. By the Riemann series theorem, rearrangements of the alternating harmonic series to yield any other real number are also possible.
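This rearrangement can be observed numerically. The Python sketch below (an illustration, not from the source) sums blocks of one positive and two negative terms of the alternating harmonic series and compares against half of ln 2:

```python
import math

# Rearranged alternating harmonic series: each positive term 1/(2i-1)
# is followed by two negative terms -1/(4i-2) and -1/(4i).
def rearranged_sum(blocks):
    total = 0.0
    for i in range(1, blocks + 1):
        total += 1 / (2 * i - 1) - 1 / (4 * i - 2) - 1 / (4 * i)
    return total

approx = rearranged_sum(100_000)
target = math.log(2) / 2   # half the sum of the original series
```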

Operations

Series addition

The addition of two series a_0 + a_1 + a_2 + \cdots and b_0 + b_1 + b_2 + \cdots is given by the termwise sum[13] (a_0 + b_0) + (a_1 + b_1) + (a_2 + b_2) + \cdots, or, in summation notation, \sum_{k=0}^\infty a_k + \sum_{k=0}^\infty b_k = \sum_{k=0}^\infty (a_k + b_k).

Using the symbols s_{a,n} and s_{b,n} for the partial sums of the added series and s_{a+b,n} for the partial sums of the resulting series, this definition implies the partial sums of the resulting series follow s_{a+b,n} = s_{a,n} + s_{b,n}. Then the sum of the resulting series, i.e., the limit of the sequence of partial sums of the resulting series, satisfies \lim_{n\to\infty} s_{a+b,n} = \lim_{n\to\infty} (s_{a,n} + s_{b,n}) = \lim_{n\to\infty} s_{a,n} + \lim_{n\to\infty} s_{b,n}, when the limits exist. Therefore, first, the series resulting from addition is summable if the series added were summable, and, second, the sum of the resulting series is the addition of the sums of the added series. The addition of two divergent series may yield a convergent series: for instance, the addition of a divergent series with a series of its terms times -1 will yield a series of all zeros that converges to zero. However, for any two series where one converges and the other diverges, the result of their addition diverges.

For series of real numbers or complex numbers, series addition is associative, commutative, and invertible. Therefore series addition gives the sets of convergent series of real numbers or complex numbers the structure of an abelian group and also gives the sets of all series of real numbers or complex numbers (regardless of convergence properties) the structure of an abelian group.

Scalar multiplication

The product of a series a_0 + a_1 + a_2 + \cdots with a constant number c, called a scalar in this context, is given by the termwise product ca_0 + ca_1 + ca_2 + \cdots, or, in summation notation, c\sum_{k=0}^\infty a_k = \sum_{k=0}^\infty ca_k.

Using the symbols s_{a,n} for the partial sums of the original series and s_{ca,n} for the partial sums of the series after multiplication by c, this definition implies that s_{ca,n} = c s_{a,n} for all n, and therefore also \lim_{n\to\infty} s_{ca,n} = c \lim_{n\to\infty} s_{a,n}, when the limits exist. Therefore if a series is summable, any nonzero scalar multiple of the series is also summable and vice versa: if a series is divergent, then any nonzero scalar multiple of it is also divergent.

Scalar multiplication of real numbers and complex numbers is associative, commutative, invertible, and it distributes over series addition.

In summary, series addition and scalar multiplication give the set of convergent series and the set of series of real numbers the structure of a real vector space. Similarly, one gets complex vector spaces for series and convergent series of complex numbers. All these vector spaces are infinite dimensional.

Series multiplication

The multiplication of two series a_0+a_1+a_2+\cdots and b_0+b_1+b_2+\cdots to generate a third series c_0+c_1+c_2+\cdots, called the Cauchy product, can be written in summation notation \biggl(\sum_{k=0}^\infty a_k \biggr) \cdot \biggl(\sum_{k=0}^\infty b_k \biggr) = \sum_{k=0}^\infty c_k = \sum_{k=0}^\infty \sum_{j=0}^{k} a_j b_{k-j}, with each c_k = \sum_{j=0}^{k} a_j b_{k-j} = a_0 b_k + a_1 b_{k-1} + \cdots + a_{k-1} b_1 + a_k b_0.

Here, the convergence of the partial sums of the series c_0+c_1+c_2+\cdots is not as simple to establish as for addition. However, if both series a_0+a_1+a_2+\cdots and b_0+b_1+b_2+\cdots are absolutely convergent series, then the series resulting from multiplying them also converges absolutely with a sum equal to the product of the two sums of the multiplied series, \lim_{n\to\infty} s_{c,n} = \left(\, \lim_{n\to\infty} s_{a,n} \right) \cdot \left(\, \lim_{n\to\infty} s_{b,n} \right).
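As a numerical sanity check of this product rule, the sketch below forms the Cauchy product of two truncated copies of the geometric series \sum 1/2^k (each summing to 2) and confirms the product series sums to approximately 4. The truncation length is an arbitrary choice:

```python
# Cauchy product of two truncated series given as coefficient lists.
def cauchy_product(a, b):
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

a = [1 / 2**k for k in range(60)]
b = [1 / 2**k for k in range(60)]
c = cauchy_product(a, b)   # c_k = (k + 1) / 2^k
product_sum = sum(c)       # approaches 2 * 2 = 4
```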

Series multiplication of absolutely convergent series of real numbers and complex numbers is associative, commutative, and distributes over series addition. Together with series addition, series multiplication gives the sets of absolutely convergent series of real numbers or complex numbers the structure of a commutative ring, and together with scalar multiplication as well, the structure of a commutative algebra; these operations also give the sets of all series of real numbers or complex numbers the structure of an associative algebra.

Examples of numerical series

1 + \frac12 + \frac14 + \frac18 + \frac1{16} + \cdots = \sum_{n=0}^\infty \frac{1}{2^n} = 2. In general, a geometric series with initial term a and common ratio r, \sum_{n=0}^\infty a r^n, converges if and only if |r| < 1, in which case it converges to \frac{a}{1-r}.

1 - \frac12 + \frac13 - \frac14 + \frac15 - \cdots = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \ln(2), the alternating harmonic series, and -1+\frac13 - \frac15 + \frac17 - \frac19 + \cdots = \sum_{n=1}^\infty \frac{(-1)^n}{2n-1} = -\frac{\pi}{4}, the Leibniz formula for \pi.

A telescoping series \sum_{n=1}^\infty \left(b_n - b_{n+1}\right) converges if the sequence b_n converges to a limit L as n goes to infinity. The value of the series is then b_1 - L.

The series \sum_{n=1}^\infty \frac{1}{n^p} converges for p > 1 and diverges for p ≤ 1, which can be shown with the integral test for convergence described below in convergence tests. As a function of p, the sum of this series is Riemann's zeta function.

Hypergeometric series _rF_s \left[\begin{matrix}a_1, a_2, \dotsc, a_r \\ b_1, b_2, \dotsc, b_s \end{matrix}; z \right] = \sum_{n=0}^{\infty} \frac{(a_1)_n (a_2)_n \dotsm (a_r)_n}{(b_1)_n (b_2)_n \dotsm (b_s)_n} \frac{z^n}{n!}, where (a)_n denotes the rising factorial, and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.[14]

It is unknown whether the series \sum_{n=1}^\infty \frac{1}{n^3 \sin^2 n} converges or not. The convergence depends on how well \pi can be approximated with rational numbers (which is unknown as of yet). More specifically, the values of n with large numerical contributions to the sum are the numerators of the continued fraction convergents of \pi, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... . These are integers n that are close to m\pi for some integer m, so that \sin n is close to \sin m\pi = 0 and its reciprocal 1/\sin^2 n is large.

Pi

See main article: Basel problem and Leibniz formula for π.

\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6}

\sum_{n=1}^\infty \frac{(-1)^{n+1} \cdot 4}{2n-1} = \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \frac{4}{9} - \frac{4}{11} + \frac{4}{13} - \cdots = \pi

Natural logarithm of 2

\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \ln 2

\sum_{n=1}^\infty \frac{1}{n 2^n} = \ln 2

Natural logarithm base e

See main article: e (mathematical constant).

\sum_{n=0}^\infty \frac{(-1)^n}{n!} = 1-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots = \frac{1}{e}

\sum_{n=0}^\infty \frac{1}{n!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots = e

Convergence testing

See main article: Convergence tests.

One of the simplest tests for convergence of a series, applicable to all series, is the vanishing condition or nth-term test: If \lim_{n\to\infty} a_n \neq 0, then the series diverges; if \lim_{n\to\infty} a_n = 0, then the test is inconclusive.

Absolute convergence tests

See main article: Absolute convergence.

When every term of a series is a non-negative real number, for instance when the terms are the absolute values of another series of real numbers or complex numbers, the sequence of partial sums is non-decreasing. Therefore a series with non-negative terms converges if and only if the sequence of partial sums is bounded, and so finding a bound for a series or for the absolute values of its terms is an effective way to prove convergence or absolute convergence of a series.

For example, the series 1 + \frac14 + \frac19 + \cdots + \frac{1}{n^2} + \cdots is convergent and absolutely convergent because \frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n} for all n \geq 2 and a telescoping sum argument implies that the partial sums of the series of those non-negative bounding terms are themselves bounded above by 2. The exact value of this series is \frac{\pi^2}{6}; see Basel problem.
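The telescoping bound is easy to verify numerically; a short Python sketch:

```python
import math

# Check 1/n^2 <= 1/(n-1) - 1/n for n >= 2, and that the partial sums of
# sum 1/n^2 stay below the telescoping bound of 2.
N = 10_000
bound_holds = all(1 / n**2 <= 1 / (n - 1) - 1 / n for n in range(2, N + 1))
partial = sum(1 / n**2 for n in range(1, N + 1))   # close to pi^2 / 6
```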

This type of bounding strategy is the basis for general series comparison tests. First is the general direct comparison test: For any series \sum a_n, if \sum b_n is an absolutely convergent series such that \left\vert a_n \right\vert \leq C \left\vert b_n \right\vert for some positive real number C and for sufficiently large n, then \sum a_n converges absolutely as well. If \sum \left\vert b_n \right\vert diverges, and \left\vert a_n \right\vert \geq \left\vert b_n \right\vert for all sufficiently large n, then \sum a_n also fails to converge absolutely, although it could still be conditionally convergent, for example, if the a_n alternate in sign. Second is the general limit comparison test: If \sum b_n is an absolutely convergent series such that \left\vert \tfrac{a_{n+1}}{a_n} \right\vert \leq \left\vert \tfrac{b_{n+1}}{b_n} \right\vert for sufficiently large n, then \sum a_n converges absolutely as well. If \sum \left| b_n \right| diverges, and \left\vert \tfrac{a_{n+1}}{a_n} \right\vert \geq \left\vert \tfrac{b_{n+1}}{b_n} \right\vert for all sufficiently large n, then \sum a_n also fails to converge absolutely, though it could still be conditionally convergent if the a_n vary in sign.

Using comparisons to geometric series specifically, those two general comparison tests imply two further common and generally useful tests for convergence of series with non-negative terms or for absolute convergence of series with general terms. First is the ratio test: if there exists a constant C < 1 such that \left\vert \tfrac{a_{n+1}}{a_n} \right\vert < C for all sufficiently large n, then \sum a_n converges absolutely. When the ratio is less than 1, but not less than a constant less than 1, convergence is possible but this test does not establish it. Second is the root test: if there exists a constant C < 1 such that \left\vert a_n \right\vert^{1/n} \leq C for all sufficiently large n, then \sum a_n converges absolutely.
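The ratio test can be illustrated with a_n = n/3^n, whose term ratios (n+1)/(3n) approach 1/3; a Python sketch (the cutoff C = 1/2 and the truncation at 39 terms are arbitrary choices for the illustration):

```python
# Ratio test for a_n = n / 3^n: ratios (n+1)/(3n) tend to 1/3 < 1.
terms = [n / 3**n for n in range(1, 40)]
ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
eventually_bounded = all(r <= 0.5 for r in ratios[2:])  # C = 1/2 < 1
total = sum(terms)   # the full series sums to 3/4
```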

Alternatively, using comparisons to series representations of integrals specifically, one derives the integral test: if f(x) is a positive monotone decreasing function defined on the interval [1,\infty), then for a series with terms a_n = f(n) for all n, \sum a_n converges if and only if the integral \int_1^\infty f(x) \, dx is finite. Using comparisons to flattened-out versions of a series leads to Cauchy's condensation test: if the sequence of terms a_n is non-negative and non-increasing, then the two series \sum a_n and \sum 2^k a_{2^k} are either both convergent or both divergent.
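Cauchy's condensation test is concrete enough to tabulate. Below, condensing the harmonic series gives constant terms (matching divergence), while condensing \sum 1/n^2 gives a convergent geometric series; a Python sketch:

```python
# Condensed series have terms 2^k * a_{2^k}.
# For a_n = 1/n: 2^k * (1/2^k) = 1, so partial sums grow without bound.
condensed_harmonic = [2**k * (1 / 2**k) for k in range(20)]

# For a_n = 1/n^2: 2^k / (2^k)^2 = 2^-k, a convergent geometric series.
condensed_basel = [2**k / (2**k) ** 2 for k in range(60)]

harmonic_partial = sum(condensed_harmonic)   # equals the number of terms
basel_sum = sum(condensed_basel)             # approaches 2
```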

Conditional convergence tests

See main article: Conditional convergence. A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. Conditional convergence is tested for differently than absolute convergence.

One important example of a test for conditional convergence is the alternating series test or Leibniz test: A series of the form \sum (-1)^n a_n with all a_n > 0 is called alternating. Such a series converges if the non-negative sequence a_n is monotone decreasing and converges to 0. The converse is in general not true. A famous example of an application of this test is the alternating harmonic series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac12 + \frac13 - \frac14 + \frac15 - \cdots, which is convergent per the alternating series test (and its sum is equal to \ln 2), though the series formed by taking the absolute value of each term is the ordinary harmonic series, which is divergent.

The alternating series test can be viewed as a special case of the more general Dirichlet's test: if (a_n) is a sequence of decreasing nonnegative real numbers that converges to zero, and (\lambda_n) is a sequence of terms with bounded partial sums, then the series \sum \lambda_n a_n converges. Taking \lambda_n = (-1)^n recovers the alternating series test.

Abel's test is another important technique for handling semi-convergent series. If a series has the form \sum a_n = \sum \lambda_n b_n where the partial sums of the series with terms b_n, s_{b,n} = b_0 + \cdots + b_n, are bounded, \lambda_n has bounded variation, and \lim_{n\to\infty} \lambda_n s_{b,n} exists: if \sup_n |s_{b,n}| < \infty, \sum_n \left|\lambda_{n+1} - \lambda_n\right| < \infty, and \lambda_n s_{b,n} converges, then the series \sum a_n is convergent.

Other specialized convergence tests for specific types of series include the Dini test for Fourier series.

Evaluation of truncation errors

The evaluation of truncation errors of series is important in numerical analysis (especially validated numerics and computer-assisted proof). It can be used to prove convergence and to analyze rates of convergence.

Alternating series

See main article: Alternating series. When conditions of the alternating series test are satisfied by S:=\sum_{m=0}^\infty (-1)^m u_m, there is an exact error evaluation.[15] Set s_n to be the partial sum s_n := \sum_{m=0}^n (-1)^m u_m of the given alternating series S. Then the next inequality holds: |S-s_n| \leq u_{n+1}.
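The bound can be checked numerically for a concrete alternating series, here with u_m = 1/m! so that S = 1/e; a Python sketch:

```python
import math

# Alternating series S = sum (-1)^m / m! = 1/e; check |S - s_n| <= u_{n+1}.
def s(n):
    return sum((-1) ** m / math.factorial(m) for m in range(n + 1))

S = 1 / math.e
bound_ok = all(
    abs(S - s(n)) <= 1 / math.factorial(n + 1) for n in range(12)
)
```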

Hypergeometric series

By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated.[16]

Matrix exponential

See main article: Matrix exponential. For the matrix exponential:

\exp(X) := \sum_{k=0}^\infty \frac{1}{k!} X^k, \quad X \in \mathbb{R}^{n \times n},

the following error evaluation holds (scaling and squaring method):[17] [18] [19]

T_{r,s}(X) := \biggl(\sum_{j=0}^r \frac{1}{j!}(X/s)^j\biggr)^s, \quad \bigl\|\exp(X)-T_{r,s}(X)\bigr\| \leq \frac{\|X\|^{r+1}}{s^r (r+1)!} \exp(\|X\|).
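The scaling-and-squaring idea can be sketched without any matrix library. Below, a 2×2 matrix exponential is approximated by a degree-r Taylor polynomial of X/s followed by repeated squaring, and compared with the exact exponential of a rotation generator (the choices r = 10 and s = 8 are arbitrary; this is an illustration, not the reference algorithm):

```python
import math

def mat_mul(A, B):
    # Plain 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_2x2(X, r=10, s=8):
    # Degree-r Taylor polynomial of exp(X/s) ...
    Y = [[x / s for x in row] for row in X]
    term = [[1.0, 0.0], [0.0, 1.0]]   # identity = (X/s)^0 / 0!
    total = [row[:] for row in term]
    for j in range(1, r + 1):
        term = [[v / j for v in row] for row in mat_mul(term, Y)]
        total = [[total[i][k] + term[i][k] for k in range(2)]
                 for i in range(2)]
    for _ in range(3):                # ... raised to the power s = 2^3
        total = mat_mul(total, total)
    return total

t = 1.0
X = [[0.0, t], [-t, 0.0]]             # generator of a plane rotation
E = expm_2x2(X)
exact = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
```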

Sums of divergent series

See main article: Divergent series. Under many circumstances, it is desirable to assign generalized sums to series which fail to converge in the strict sense that their sequences of partial sums do not converge. A summation method is any method for assigning sums to divergent series in a way that systematically extends the classical notion of the sum of a series. Summation methods include Cesàro summation, generalized Cesàro (C,α) summation, Abel summation, and Borel summation, in order of applicability to increasingly divergent series. These methods are all based on sequence transformations of the original series of terms or of its sequence of partial sums. An alternative family of summation methods are based on analytic continuation rather than sequence transformation.

A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summation methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general methods for summing a divergent series are non-constructive and concern Banach limits.

Series of functions

See main article: Function series. A series of real- or complex-valued functions \sum_{n=1}^\infty f_n(x) is pointwise convergent to a limit ƒ(x) on a set E if the series converges for each x in E as a series of real or complex numbers. Equivalently, the partial sums s_N(x) = \sum_{n=1}^N f_n(x) converge to ƒ(x) as N → ∞ for each x ∈ E.

A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly in a set E if it converges pointwise to the function ƒ(x) at every point of E and the supremum of these pointwise errors in approximating the limit by the Nth partial sum, \sup_{x \in E} \bigl|s_N(x) - f(x)\bigr|, converges to zero with increasing N, independently of x.

Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the ƒn are integrable on a closed and bounded interval I and converge uniformly, then the series is also integrable on I and can be integrated term-by-term. Tests for uniform convergence include Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.

More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean to a limit function ƒ on a set E if

\lim_{N\to\infty} \int_E \bigl|s_N(x)-f(x)\bigr|^2\,dx = 0.

Power series

See main article: Power series.

A power series is a series of the form

\sum_{n=0}^\infty a_n(x-c)^n.

The Taylor series at a point c of a function is a power series that, in many cases, converges to the function in a neighborhood of c. For example, the series \sum_{n=0}^{\infty} \frac{x^n}{n!} is the Taylor series of e^x at the origin and converges to it for every x.

Unless it converges only at x=c, such a series converges on a certain open disc of convergence centered at the point c in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients a_n. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence.
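In practice, the radius of convergence can be estimated from the coefficients via the root test, R = 1/\limsup_n |a_n|^{1/n}. A Python sketch with the hypothetical coefficient choice a_n = 2^n, for which the radius is 1/2:

```python
# Root-test estimate of the radius of convergence: R = 1 / |a_n|^(1/n)
# in the limit. Here a_n = 2^n, so R should approach 1/2.
def radius_estimate(coeff, n):
    return 1 / abs(coeff(n)) ** (1 / n)

R = radius_estimate(lambda n: 2.0**n, 200)
```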

Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.

Formal power series

See main article: Formal power series.

While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.

Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring.[20] If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.

Laurent series

See main article: Laurent series. Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form

\sum_{n=-\infty}^\infty a_n x^n.

If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.

Dirichlet series

See main article: Dirichlet series.

A Dirichlet series is one of the form

\sum_{n=1}^\infty \frac{a_n}{n^s},

where s is a complex number. For example, if all a_n are equal to 1, then the Dirichlet series is the Riemann zeta function

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.

Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(s) > 1, but the zeta function can be extended to a holomorphic function defined on

\Complex\setminus\{1\}

with a simple pole at 1.

This series can be directly generalized to general Dirichlet series.

Trigonometric series

See main article: Trigonometric series. A series of functions in which the terms are trigonometric functions is called a trigonometric series:

A_0 + \sum_{n=1}^\infty \left(A_n\cos nx + B_n \sin nx\right).

The most important example of a trigonometric series is the Fourier series of a function.

Asymptotic series

See main article: Asymptotic expansion. Asymptotic series, typically called asymptotic expansions, are infinite series whose terms are functions of a sequence of different asymptotic orders and whose partial sums are approximations of some other function in an asymptotic limit. In general they do not converge, but they are still useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. They are crucial tools in perturbation theory and in the analysis of algorithms.

An asymptotic series cannot necessarily be made to produce an answer as exactly as desired away from the asymptotic limit, the way that an ordinary convergent series of functions can. In fact, a typical asymptotic series reaches its best practical approximation away from the asymptotic limit after a finite number of terms; if more terms are included, the series will produce less accurate approximations.
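
This best-truncation behavior can be sketched with the term magnitudes of the classical divergent asymptotic series \sum_n (-1)^n \, n! \, x^n (the expansion of \int_0^\infty e^{-t}/(1 + xt)\,dt as x \to 0^+); the value x = 0.12 below is an arbitrary illustrative choice:

```python
import math

# Magnitudes n! * x^n of the terms of the divergent asymptotic series
# sum (-1)^n n! x^n at x = 0.12. Successive ratios are (n + 1) * x, so the
# magnitudes shrink while (n + 1) * x < 1 and then grow without bound;
# truncating near the smallest term gives the best practical approximation.
x = 0.12
magnitudes = [math.factorial(n) * x**n for n in range(25)]
best_n = min(range(25), key=lambda n: magnitudes[n])  # smallest-term index
```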

History of the theory of infinite series

Development of infinite series

Infinite series play an important role in modern analysis of Ancient Greek philosophy of motion, particularly in Zeno's paradoxes. The paradox of Achilles and the tortoise demonstrates that continuous motion would require an actual infinity of temporal instants, which was arguably an absurdity: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno is said to have argued that therefore Achilles could never reach the tortoise, and thus that continuous movement must be an illusion. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the purely mathematical and imaginative side of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise. However, in modern philosophy of motion the physical side of the problem remains open, with both philosophers and physicists doubting, like Zeno, that spatial motions are infinitely divisible: hypothetical reconciliations of quantum mechanics and general relativity in theories of quantum gravity often introduce quantizations of spacetime at the Planck scale.[21] [22]
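
The resolution can be made concrete with a small sketch (the head start and speed ratio below are invented for illustration): the times of the successive catch-up stages form a geometric series with a finite sum.

```python
# Hypothetical numbers: Achilles runs r = 10 times as fast as the tortoise
# and takes t0 = 1 time unit to reach the tortoise's starting position.
# Each later stage takes 1/r of the previous time, so the total time is the
# geometric series t0 * (1 + 1/r + 1/r^2 + ...) = t0 * r / (r - 1).
t0 = 1.0
r = 10.0
partial = sum(t0 / r**k for k in range(60))  # finite partial sum
closed_form = t0 * r / (r - 1)
```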

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.[23] [24]

Mathematicians from the Kerala school were studying infinite series around 1350 CE.[25]

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, Brook Taylor provided a general method for constructing the Taylor series for all functions for which they exist. In the 18th century, Leonhard Euler developed the theory of hypergeometric series and q-series.

Convergence criteria

The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series

1 + \frac{\alpha\beta}{1\cdot\gamma}x + \frac{\alpha(\alpha+1)\beta(\beta+1)}{1\cdot 2\cdot\gamma(\gamma+1)}x^2 + \cdots

on which Gauss published a memoir in 1812. It established simpler criteria of convergence and addressed the questions of remainders and the range of convergence.

Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.

Abel (1826) in his memoir on the binomial series

1 + \frac{m}{1}x + \frac{m(m-1)}{1\cdot 2}x^2 + \cdots

corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of m and x. He showed the necessity of considering the subject of continuity in questions of convergence.

Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject; of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853).

General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.

Uniform convergence

The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions.

Semi-convergence

A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.

Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function

F(x) = 1^n + 2^n + \cdots + (x - 1)^n.

Genocchi (1852) has further contributed to the theory.

Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence.

Fourier series

Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still earlier by Vieta. Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer.

Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell.

Summations over general index sets

Definitions may be given for infinitary sums over an arbitrary index set I. This generalization introduces two main differences from the usual notion of series: first, there may be no specific order given on the set I; second, the set I may be uncountable. The notions of convergence then need to be reconsidered, because, for instance, the concept of conditional convergence depends on the ordering of the index set.
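
The dependence of conditional convergence on ordering can be sketched with the alternating harmonic series, which sums to \ln 2 in its usual order but to (\ln 2)/2 when rearranged so that each positive term is followed by two negative terms (the block count below is an arbitrary choice):

```python
import math

def rearranged_sum(blocks):
    """Partial sum of the rearrangement 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
    of the alternating harmonic series: block k takes the next unused positive
    term 1/(2k - 1) and the next two unused negative terms 1/(4k - 2), 1/(4k)."""
    total = 0.0
    for k in range(1, blocks + 1):
        total += 1.0 / (2 * k - 1)
        total -= 1.0 / (4 * k - 2) + 1.0 / (4 * k)
    return total

# Same terms as the alternating harmonic series (sum ln 2), different sum.
approx_half_ln2 = rearranged_sum(10_000)
```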

If a : I \to G is a function from an index set I to a set G, then the "series" associated to a is the formal sum of the elements a(x) \in G over the index elements x \in I, denoted by

\sum_{x \in I} a(x).

When the index set is the natural numbers I = \N, the function a : \N \to G is a sequence denoted by a(n) = a_n. A series indexed on the natural numbers is an ordered formal sum and so we rewrite \sum_{n \in \N} as \sum_{n=0}^{\infty} in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers,

\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots.

Families of non-negative numbers

When summing a family \left\{a_i : i \in I\right\} of non-negative real numbers over the index set I, define

\sum_{i \in I} a_i = \sup \biggl\{ \sum_{i \in A} a_i : A \subseteq I, A \text{ finite} \biggr\} \in [0, +\infty].

When the supremum is finite then the set of i \in I such that a_i > 0 is countable. Indeed, for every n \geq 1, the cardinality \left|A_n\right| of the set A_n = \left\{i \in I : a_i > 1/n\right\} is finite because

\frac{1}{n} \left|A_n\right| = \sum_{i \in A_n} \frac{1}{n} \leq \sum_{i \in A_n} a_i \leq \sum_{i \in I} a_i < \infty.

If I is countably infinite and enumerated as I = \left\{i_0, i_1, \ldots\right\} then the above defined sum satisfies

\sum_{i \in I} a_i = \sum_{k=0}^{+\infty} a_{i_k},

provided the value +\infty is allowed for the sum of the series.

Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
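
A quick Python check of this order-independence for non-negative terms (the particular family 2^{-n} and the shuffle seed are arbitrary choices):

```python
import random

# For the non-negative family a_n = 2^{-n}, n = 0..49, every enumeration of
# the index set yields the same sum: the supremum of the finite partial sums,
# here essentially 2.
terms = [2.0**-n for n in range(50)]
shuffled = list(terms)
random.Random(0).shuffle(shuffled)
total_ordered = sum(terms)
total_shuffled = sum(shuffled)
```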

Abelian topological groups

Let a : I \to X be a map, also denoted by \left(a_i\right)_{i \in I}, from some non-empty set I into a Hausdorff abelian topological group X. Let \operatorname{Finite}(I) be the collection of all finite subsets of I, with \operatorname{Finite}(I) viewed as a directed set, ordered under inclusion \subseteq with union as join. The family

\left(a_i\right)_{i \in I} is said to be unconditionally summable if the following limit, which is denoted by \sum_{i \in I} a_i and is called the sum of \left(a_i\right)_{i \in I}, exists in X:

\sum_{i \in I} a_i := \lim_{A \in \operatorname{Finite}(I)} \, \sum_{i \in A} a_i = \lim \biggl\{ \sum_{i \in A} a_i : A \subseteq I, A \text{ finite} \biggr\}.

Saying that the sum S := \sum_{i \in I} a_i is the limit of finite partial sums means that for every neighborhood V of the origin in X, there exists a finite subset A_0 of I such that

S - \sum_{i \in A} a_i \in V \qquad \text{for every finite } A \supseteq A_0.

Because \operatorname{Finite}(I) is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.[26] [27]

For every neighborhood W of the origin in X, there is a smaller neighborhood V such that V - V \subseteq W. It follows that the finite partial sums of an unconditionally summable family \left(a_i\right)_{i \in I} form a Cauchy net, that is, for every neighborhood W of the origin in X, there exists a finite subset A_0 of I such that

\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W \qquad \text{for all finite } A_1, A_2 \supseteq A_0,

which implies that a_i \in W for every i \in I \setminus A_0 (by taking A_1 := A_0 \cup \{i\} and A_2 := A_0).

When X is complete, a family \left(a_i\right)_{i \in I} is unconditionally summable in X if and only if the finite sums satisfy the latter Cauchy net condition. When X is complete and \left(a_i\right)_{i \in I} is unconditionally summable in X, then for every subset J \subseteq I, the corresponding subfamily \left(a_j\right)_{j \in J} is also unconditionally summable in X.

When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group X = \R.

If a family \left(a_i\right)_{i \in I} in X is unconditionally summable then for every neighborhood W of the origin in X, there is a finite subset A_0 \subseteq I such that a_i \in W for every index i not in A_0.

If X is a first-countable space then it follows that the set of i \in I such that a_i \neq 0 is countable. This need not be true in a general abelian topological group (see examples below).

Unconditionally convergent series

Suppose that I = \N. If a family a_n, n \in \N, is unconditionally summable in a Hausdorff abelian topological group X, then the series in the usual sense converges and has the same sum,

\sum_{n=0}^\infty a_n = \sum_{n \in \N} a_n.

By nature, the definition of unconditional summability is insensitive to the order of the summation. When \sum a_n is unconditionally summable, then the series remains convergent after any permutation \sigma : \N \to \N of the set \N of indices, with the same sum,

\sum_{n=0}^\infty a_{\sigma(n)} = \sum_{n=0}^\infty a_n.

Conversely, if every permutation of a series \sum a_n converges, then the series is unconditionally convergent. When X is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if X is a Banach space, this is equivalent to saying that for every sequence of signs \varepsilon_n = \pm 1, the series

\sum_{n=1}^\infty \varepsilon_n a_n

converges in X.

Series in topological vector spaces

If X is a topological vector space (TVS) and \left(x_i\right)_{i \in I} is a (possibly uncountable) family in X then this family is summable[28] if the limit \lim_{A \in \operatorname{Finite}(I)} x_A of the net \left(x_A\right)_{A \in \operatorname{Finite}(I)} exists in X, where \operatorname{Finite}(I) is the directed set of all finite subsets of I directed by inclusion \subseteq and

x_A := \sum_{i \in A} x_i.

It is called absolutely summable if, in addition, for every continuous seminorm p on X, the family \left(p\left(x_i\right)\right)_{i \in I} is summable. If X is a normable space and if \left(x_i\right)_{i \in I} is an absolutely summable family in X, then necessarily all but a countable collection of the x_i are zero. Hence, in normed spaces, it is usually only necessary to consider series with countably many terms.

Summable families play an important role in the theory of nuclear spaces.

Series in Banach and seminormed spaces

The notion of series can be easily extended to the case of a seminormed space. If \left(x_n\right) is a sequence of elements of a normed space X and if x \in X, then the series \sum x_n converges to x in X if the sequence of partial sums \left(\sum_{n=1}^N x_n\right)_{N=1}^{\infty} converges to x in X; to wit,

\Biggl\|x - \sum_{n=1}^N x_n\Biggr\| \to 0 \quad \text{as } N \to \infty.

More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, \sum x_n converges to x if the sequence of partial sums converges to x.

If (X, |\cdot|) is a seminormed space, then the notion of absolute convergence becomes: a series \sum_{i \in I} x_i of vectors in X converges absolutely if

\sum_{i \in I} \left|x_i\right| < +\infty,

in which case all but at most countably many of the values \left|x_i\right| are necessarily zero.

If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse holds only in finite-dimensional Banach spaces (a theorem of Dvoretzky and Rogers).

Well-ordered sums

Conditionally convergent series can be considered if I is a well-ordered set, for example, an ordinal number \alpha_0. In this case, define by transfinite recursion:

\sum_{\beta < \alpha + 1} a_\beta = a_{\alpha} + \sum_{\beta < \alpha} a_\beta

and for a limit ordinal \alpha,

\sum_{\beta < \alpha} a_\beta = \lim_{\gamma \to \alpha} \sum_{\beta < \gamma} a_\beta

if this limit exists. If all limits exist up to \alpha_0, then the series converges.
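
As a small worked example (the particular terms are chosen here for illustration), index terms by the ordinal \omega \cdot 2, taking a_n = 2^{-n-1} for n < \omega and a_{\omega+n} = 3^{-n-1}; the transfinite recursion passes through the limit ordinal \omega by taking a limit of partial sums:

```latex
% a_n = 2^{-n-1} for n < \omega, and a_{\omega+n} = 3^{-n-1}.
% At the limit ordinal \omega, the recursion takes a limit of partial sums:
\sum_{\beta < \omega} a_\beta
  = \lim_{n \to \infty} \sum_{\beta < n} a_\beta
  = \lim_{n \to \infty} \bigl(1 - 2^{-n}\bigr) = 1,
% and then continues through the successors \omega + n, giving
\sum_{\beta < \omega \cdot 2} a_\beta
  = 1 + \lim_{n \to \infty} \sum_{k < n} 3^{-k-1}
  = 1 + \tfrac{1}{2} = \tfrac{3}{2}.
```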

Examples

Given a function f : X \to Y from a set X into an abelian topological group Y, define for every a \in X the function

f_a(x) = \begin{cases} 0 & x \neq a, \\ f(a) & x = a, \end{cases}

whose support is a singleton \{a\}. Then

f = \sum_{a \in X} f_a

in the topology of pointwise convergence (that is, the sum is taken in the infinite product group Y^X).

In the definition of partitions of unity, one constructs sums of functions over an arbitrary index set I,

\sum_{i \in I} \varphi_i(x) = 1.

While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given x, only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. Actually, one usually assumes more: the family of functions is locally finite, that is, for every x there is a neighborhood of x in which all but a finite number of functions vanish. Any regularity property of the \varphi_i, such as continuity or differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.

On the first uncountable ordinal \omega_1 viewed as a topological space in the order topology, the constant function f : \left[0, \omega_1\right) \to \left[0, \omega_1\right] given by f(\alpha) = 1 satisfies

\sum_{\alpha \in \left[0, \omega_1\right)} f(\alpha) = \omega_1

only if one takes a limit over all countable partial sums, rather than finite partial sums.

Notes and References

  1. Book: Calculus Made Easy . Thompson . Silvanus . Silvanus P. Thompson . Gardner . Martin . Martin Gardner . 1998 . Macmillan . 978-0-312-18548-0 .
  2. Swain . Gordon . Dence . Thomas . 1998 . Archimedes' Quadrature of the Parabola Revisited . Mathematics Magazine . 71 . 2 . 123–130 . 10.2307/2691014 . 0025-570X . 2691014.
  3. Book: Russo, Lucio . The Forgotten Revolution . 2004 . Springer-Verlag . 978-3-540-20396-4 . Germany . 49-52 . Levy . Silvio.
  4. Book: Ablowitz, Mark J. . Complex Variables: Introduction and Applications . Fokas . Athanassios S. . Cambridge University Press . 2003 . 978-0-521-53429-1 . 2nd . 110.
  5. Book: Dummit, David S. . Abstract Algebra . Foote . Richard M. . John Wiley and Sons . 2004 . 978-0-471-43334-7 . 3rd . Hoboken, NJ . 238.
  6. Book: Wilf, Herbert S. . Generatingfunctionology . Academic Press . 1990 . 978-1-48-324857-8 . San Diego . 27-28.
  7. Book: Swokoski, Earl W. . Calculus with Analytic Geometry . Prindle, Weber & Schmidt . 1983 . 978-0-87150-341-1 . Alternate . Boston . 501.
  8. Knuth . Donald E. . 1992 . Two Notes on Notation . American Mathematical Monthly . 99 . 5 . 403–422 . 10.2307/2325085 . 2325085 .
  9. Book: Atkinson, Kendall E. . An Introduction to Numerical Analysis . 1989 . Wiley . 978-0-471-62489-9 . 2nd . New York . 20 . English . 803318878.
  10. Book: Stoer . Josef . Introduction to Numerical Analysis . 2002 . 3rd . Princeton, N.J. . Recording for the Blind & Dyslexic . English . 50556273 . Bulirsch . Roland.
  11. Web site: Wilkins . David . 2007 . Section 6: The Extended Real Number System . 2019-12-03 . maths.tcd.ie.
  12. Kifowit . Steven J. . Stamps . Terra A. . 2006 . The harmonic series diverges again and again . American Mathematical Association of Two-Year Colleges Review . 27 . 2 . 31–43.
  13. Book: Saff, E. B. . Fundamentals of Complex Analysis . Snider . Arthur D. . Pearson Education . 2003 . 0-13-907874-6 . 3rd . 247-249.
  14. Gasper, G., Rahman, M. (2004). Basic hypergeometric series. Cambridge University Press.
  15. https://www.ck12.org/book/CK-12-Calculus-Concepts/section/9.9/ Positive and Negative Terms: Alternating Series
  16. Johansson, F. (2016). Computing hypergeometric functions rigorously. arXiv preprint arXiv:1606.06977.
  17. Higham, N. J. (2008). Functions of matrices: theory and computation. Society for Industrial and Applied Mathematics.
  18. Higham, N. J. (2009). The scaling and squaring method for the matrix exponential revisited. SIAM review, 51(4), 747-764.
  19. http://www.maths.manchester.ac.uk/~higham/talks/exp10.pdf How and How Not to Compute the Exponential of a Matrix
  20. §III.2.11.

  21. .
  22. Web site: 2024-09-25 . The Unraveling of Space-Time . 2024-10-11 . Quanta Magazine . en.
  23. Web site: A history of calculus . O'Connor, J.J. . Robertson, E.F. . amp . University of St Andrews. 1996 . 2007-08-07.
  24. Archimedes and Pi-Revisited. . Bidwell . James K. . 30 November 1993 . School Science and Mathematics . 94 . 3 . 10.1111/j.1949-8594.1994.tb15638.x .
  25. Web site: Indians predated Newton 'discovery' by 250 years. manchester.ac.uk.
  26. Book: Bourbaki, Nicolas. General Topology: Chapters 1–4. Nicolas Bourbaki. 1998. Springer. 978-3-540-64241-1. 261–270.
  27. Book: Choquet, Gustave. Topology. Gustave Choquet. 1966. Academic Press. 978-0-12-173450-3. 216–231.
  28. Book: Schaefer, Helmut H. . Helmut H. Schaefer . Topological Vector Spaces . Wolff . Manfred P. . 1999 . Springer . 978-1-4612-7155-0 . 2nd . Graduate Texts in Mathematics . 8 . New York, NY . 179–180.