In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin (1924).[1] Another statement was given by A. N. Kolmogorov in 1929.[2]
Let $\{Y_n\}$ be independent, identically distributed random variables with zero means and unit variances. Let $S_n = Y_1 + \cdots + Y_n$. Then
$$\limsup_{n\to\infty} \frac{|S_n|}{\sqrt{2n\log\log n}} = 1 \quad \text{almost surely.}$$
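To make the statement concrete, here is a minimal simulation sketch (assuming NumPy; the seed and horizon are arbitrary choices of this illustration): for a simple $\pm 1$ random walk, the running supremum of $|S_k|/\sqrt{2k\log\log k}$ should be of order 1, and the theorem says it tends to 1 almost surely, although the convergence is very slow.

```python
import numpy as np

# Illustrative only: simulate a +/-1 random walk and track the running
# supremum of |S_k| / sqrt(2 k log log k). The LIL says this tends to 1
# almost surely, but convergence is extremely slow, so values moderately
# above or below 1 at finite horizons are expected.
rng = np.random.default_rng(0)
n = 10**7
s = np.cumsum(rng.choice([-1, 1], size=n))

k = np.arange(16, n + 1)            # start where log log k is positive
ratio = np.abs(s[15:]) / np.sqrt(2 * k * np.log(np.log(k)))
print("sup over 16 <= k <= 1e7:", ratio.max())
```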
Another statement given by A. N. Kolmogorov in 1929[5] is as follows.
Let $\{Y_n\}$ be independent random variables with zero means and finite variances. Let $S_n = Y_1 + \cdots + Y_n$ and $B_n = \operatorname{Var}(Y_1) + \cdots + \operatorname{Var}(Y_n)$. Suppose that $B_n \to \infty$, and that there exists a sequence of constants $\{M_n\}$ such that $|Y_n| \le M_n$ almost surely and

$$M_n = o\!\left(\sqrt{\frac{B_n}{\log\log B_n}}\right).$$

Then

$$\limsup_{n\to\infty} \frac{|S_n|}{\sqrt{2 B_n \log\log B_n}} = 1 \quad \text{almost surely.}$$
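As a hedged illustration of Kolmogorov's version (a sketch assuming NumPy; the choice $Y_k = \sqrt{k}\,\xi_k$ with $\xi_k = \pm 1$ is an assumption of this example, not from the article): the summands are not identically distributed, $\operatorname{Var}(Y_k) = k$ gives $B_n = n(n+1)/2 \to \infty$, and $|Y_k| \le M_k = \sqrt{k}$ satisfies $M_n = o\big(\sqrt{B_n/\log\log B_n}\big)$, so the hypotheses hold.

```python
import numpy as np

# Sketch of Kolmogorov's LIL with non-identically distributed summands:
# Y_k = sqrt(k) * (+/-1), so Var(Y_k) = k, B_n = n(n+1)/2 -> infinity,
# and |Y_k| <= M_k = sqrt(k) = o( sqrt(B_k / log log B_k) ).
rng = np.random.default_rng(1)
n = 10**6
idx = np.arange(1, n + 1)
y = np.sqrt(idx) * rng.choice([-1.0, 1.0], size=n)
s = np.cumsum(y)
b = np.cumsum(idx.astype(float))            # B_k = k(k+1)/2

mask = b > np.e                             # keep k with log log B_k > 0
ratio = np.abs(s[mask]) / np.sqrt(2 * b[mask] * np.log(np.log(b[mask])))
print("running sup of |S_k| / sqrt(2 B_k log log B_k):", ratio.max())
```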
Note that the first statement covers the case of the standard normal distribution, but the second does not: normal variables are unbounded, so no sequence of constants $\{M_n\}$ with $|Y_n| \le M_n$ almost surely exists.
The law of the iterated logarithm operates "in between" the law of large numbers and the central limit theorem. There are two versions of the law of large numbers, the weak and the strong, and both state that the sums $S_n$, scaled by $n^{-1}$, converge to zero, respectively in probability and almost surely:
$$\frac{S_n}{n} \xrightarrow{p} 0, \qquad \frac{S_n}{n} \xrightarrow{a.s.} 0, \qquad \text{as } n\to\infty.$$
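A quick numerical sketch of the contrast (assuming NumPy; seed and horizons are arbitrary): $S_n/n$ shrinks toward zero, while $S_n/\sqrt{n}$ keeps fluctuating on a unit scale, as the central limit theorem below predicts.

```python
import numpy as np

# S_n / n tends to 0 (law of large numbers), while S_n / sqrt(n) stays
# on a unit scale (central limit theorem). Illustrative values only.
rng = np.random.default_rng(2)
s = np.cumsum(rng.standard_normal(10**6))
for k in [10**2, 10**4, 10**6]:
    print(k, "S_n/n =", s[k - 1] / k, "  S_n/sqrt(n) =", s[k - 1] / np.sqrt(k))
```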
On the other hand, the central limit theorem states that the sums $S_n$ scaled by the factor $n^{-1/2}$ converge in distribution to a standard normal distribution. By Kolmogorov's zero–one law, for any fixed $M$, the probability that the event
$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{n}} \geq M$$

occurs is 0 or 1. Since

$$\Pr\!\left(\limsup_{n\to\infty} \frac{S_n}{\sqrt{n}} \geq M\right) \geq \limsup_{n\to\infty} \Pr\!\left(\frac{S_n}{\sqrt{n}} \geq M\right) = \Pr\big(\mathcal{N}(0,1) \geq M\big) > 0,$$

that probability must equal 1 for every $M$, so

$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{n}} = \infty \quad \text{almost surely.}$$
An identical argument shows that
$$\liminf_{n\to\infty} \frac{S_n}{\sqrt{n}} = -\infty \quad \text{almost surely.}$$
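The divergence is observable, if slowly, in simulation (a sketch assuming NumPy; horizons are arbitrary): the running maximum of $S_k/\sqrt{k}$ grows on the order $\sqrt{2\log\log n}$, so it creeps upward very gradually.

```python
import numpy as np

# The running maximum of S_k / sqrt(k) keeps drifting upward as the
# horizon grows, consistent with limsup S_n / sqrt(n) = +infinity a.s.
# Growth is on the order sqrt(2 log log n), so the drift is very slow.
rng = np.random.default_rng(3)
n = 10**7
r = np.cumsum(rng.standard_normal(n)) / np.sqrt(np.arange(1, n + 1))
for h in [10**3, 10**5, 10**7]:
    print(h, "max of S_k/sqrt(k), k <= h:", r[:h].max())
```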
This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality
$$\frac{S_{2n}}{\sqrt{2n}} - \frac{S_n}{\sqrt{n}} = \frac{1}{\sqrt{2}}\,\frac{S_{2n}-S_n}{\sqrt{n}} - \left(1 - \frac{1}{\sqrt{2}}\right)\frac{S_n}{\sqrt{n}}$$
and the fact that the random variables
$$\frac{S_n}{\sqrt{n}} \qquad\text{and}\qquad \frac{S_{2n}-S_n}{\sqrt{n}}$$
are independent and both converge in distribution to
$\mathcal{N}(0,1)$.
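A Monte Carlo sanity check of this independence claim (a sketch assuming NumPy; sample sizes are arbitrary): across many independent paths, $S_n/\sqrt{n}$ and $(S_{2n}-S_n)/\sqrt{n}$ are uncorrelated and each has standard deviation close to 1.

```python
import numpy as np

# The two pieces are built from disjoint blocks of increments, so they
# are independent; each is approximately N(0, 1) for large n.
rng = np.random.default_rng(4)
n, paths = 2_000, 5_000
y = rng.standard_normal((paths, 2 * n))
a = y[:, :n].sum(axis=1) / np.sqrt(n)       # S_n / sqrt(n)
b = y[:, n:].sum(axis=1) / np.sqrt(n)       # (S_2n - S_n) / sqrt(n)
print("correlation:", np.corrcoef(a, b)[0, 1])   # close to 0
print("standard deviations:", a.std(), b.std())  # each close to 1
```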
The law of the iterated logarithm provides the scaling factor where the two limits become different:
$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2n\log\log n}} = 1, \qquad \liminf_{n\to\infty} \frac{S_n}{\sqrt{2n\log\log n}} = -1, \qquad \text{almost surely.}$$
Thus, although the quantity $S_n/\sqrt{2n\log\log n}$ converges to zero in probability, so that its absolute value is smaller than any predefined $\varepsilon > 0$ with probability approaching one, it almost surely exceeds $\varepsilon$ in absolute value infinitely often.
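The two behaviours can be seen side by side in a sketch (assuming NumPy; seed and horizon are arbitrary): at the final time the ratio is typically close to zero, while its running extremes over the whole path are of order $+1$ and $-1$.

```python
import numpy as np

# At a fixed large time the ratio S_n / sqrt(2 n log log n) is usually
# small, yet its running extremes along the path are of order +1 and -1,
# matching limsup = 1 and liminf = -1 almost surely.
rng = np.random.default_rng(5)
n = 10**7
k = np.arange(16, n + 1)
s = np.cumsum(rng.choice([-1.0, 1.0], size=n))
r = s[15:] / np.sqrt(2 * k * np.log(np.log(k)))
print("value at n = 1e7:", r[-1])
print("extremes over the path:", r.max(), r.min())
```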
The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increments dates back to Khinchin and Kolmogorov in the 1920s. Since then, there has been a tremendous amount of work on the LIL for various kinds of dependent structures and for stochastic processes. The following is a small sample of notable developments.
Hartman–Wintner (1940) generalized the LIL to random walks whose increments have zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL.[6]
Chung (1948) proved another version of the law of the iterated logarithm for the absolute value of a Brownian motion.[7]
Strassen (1964) studied the LIL from the point of view of invariance principles.[8]
Stout (1970) generalized the LIL to stationary ergodic martingales.[9]
Wittmann (1985) generalized the Hartman–Wintner version of the LIL to random walks satisfying milder conditions.[10]
Vovk (1987) derived a version of LIL valid for a single chaotic sequence (Kolmogorov random sequence).[11] This is notable, as it is outside the realm of classical probability theory.
Yongge Wang (1996) showed that the law of the iterated logarithm also holds for polynomial-time pseudorandom sequences.[12][13] A Java-based software testing tool tests whether a pseudorandom generator outputs sequences that satisfy the LIL; a minimal sketch of such a check appears after this list.
Balsubramani (2014) proved a non-asymptotic LIL that holds over finite-time martingale sample paths.[14] This subsumes the martingale LIL as it provides matching finite-sample concentration and anti-concentration bounds, and enables sequential testing[15] and other applications.[16]
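As a rough indication of how LIL-based testing can work (a minimal Python sketch under this illustration's own assumptions; the article's Java tool is not named and its interface is unknown, so the helper lil_statistic below is purely hypothetical): map output bits to $\pm 1$ and check that the normalized sum stays within, and comes near, the $[-1, 1]$ band.

```python
import math
import random

def lil_statistic(bits):
    """Return S_n / sqrt(2 n log log n) for bits mapped to +/-1.

    Hypothetical helper, not the interface of any existing tool."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)
    return s / math.sqrt(2 * n * math.log(math.log(n)))

# For a generator worth calling random, the statistic should lie well
# inside [-1, 1] at large n, while approaching both endpoints along the
# sequence if the LIL holds for its output.
bits = [random.getrandbits(1) for _ in range(10**6)]
print(lil_statistic(bits))
```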