In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x_1, \ldots, x_N has a low discrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement for uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom; such sequences share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage.
Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly.
Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order moments to be calculated to high accuracy very quickly.
Applications that don't involve sorting include finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and finding the integral and global maxima and minima of difficult deterministic functions. Quasirandom numbers can also be used to provide starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration.
Quasirandom numbers can also be combined with search algorithms. With a search algorithm, quasirandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.
Various methods of numerical integration can be phrased as approximating the integral of a function f over the unit interval (or unit cube) by the average of the function evaluated at a set of points x_1, \ldots, x_N:

\int_0^1 f(u)\,du \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i).
If the points are chosen as x_i = i/N, this is the rectangle rule. If the points are chosen to be randomly (or pseudorandomly) distributed, this is the Monte Carlo method. If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method. A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on f and the other of which is the discrepancy of the set x_1, \ldots, x_N.
It is convenient to construct the set x_1, \ldots, x_N in such a way that if a set with N+1 elements is constructed, the previous N elements need not be recomputed. The rectangle rule uses a point set that has low discrepancy, but in general its elements must be recomputed if N is increased. Elements need not be recomputed in the random Monte Carlo method if N is increased, but the point sets do not have minimal discrepancy. By using a low-discrepancy sequence, the quasi-Monte Carlo method combines the desirable features of the other two methods.
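As an illustration, the following sketch (the helper names are illustrative, not from this article) compares the three choices of points for a one-dimensional integrand; the low-discrepancy points are taken from the additive golden-ratio recurrence described later in this article.

```python
import math
import random

def integrate(f, points):
    """Approximate the integral of f over [0, 1) by averaging f over the given points."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: math.exp(x)            # exact integral over [0, 1) is e - 1
N = 1000

rectangle   = [i / N for i in range(1, N + 1)]              # rectangle rule, x_i = i/N
monte_carlo = [random.random() for _ in range(N)]           # pseudorandom points
phi = (math.sqrt(5) - 1) / 2
quasi       = [(n * phi) % 1.0 for n in range(1, N + 1)]    # additive recurrence (low discrepancy)

for name, pts in [("rectangle", rectangle), ("Monte Carlo", monte_carlo), ("quasi-Monte Carlo", quasi)]:
    print(name, abs(integrate(f, pts) - (math.e - 1)))
```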
The discrepancy of a set P = \{x_1, \ldots, x_N\} is defined as

D_N(P) = \sup_{B \in J} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right|,

where \lambda_s is the s-dimensional Lebesgue measure, A(B;P) is the number of points in P that fall into B, and J is the set of s-dimensional intervals or boxes of the form

\prod_{i=1}^{s} [a_i, b_i) = \{ x \in \mathbf{R}^s : a_i \le x_i < b_i \},

where 0 \le a_i < b_i \le 1.
The star-discrepancy D^*_N(P) is defined similarly, except that the supremum is taken over the set J^* of boxes of the form

\prod_{i=1}^{s} [0, u_i),

where u_i is in the half-open interval [0, 1).
The two are related by

D^*_N \le D_N \le 2^s D^*_N.
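Computing the discrepancy exactly is expensive in general, but for small point sets it can be estimated directly from the definition. The sketch below (an illustrative helper, not part of the original text) estimates D^*_N for a two-dimensional point set by evaluating the local discrepancy |A([0,u);P)/N - u_1 u_2| on a finite grid of corners u, which gives a lower bound on the true supremum.

```python
import itertools
import random

def star_discrepancy_estimate(points, grid=64):
    """Lower-bound estimate of the star discrepancy D*_N of 2-D points in [0,1)^2.

    Evaluates the local discrepancy |A([0,u); P)/N - u1*u2| over a finite
    grid of box corners u and returns the largest value found.
    """
    n = len(points)
    best = 0.0
    corners = [k / grid for k in range(1, grid + 1)]
    for u1, u2 in itertools.product(corners, repeat=2):
        count = sum(1 for (x, y) in points if x < u1 and y < u2)
        best = max(best, abs(count / n - u1 * u2))
    return best

pseudorandom = [(random.random(), random.random()) for _ in range(256)]
print(star_discrepancy_estimate(pseudorandom))
```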
Note: With these definitions, discrepancy represents the worst-case or maximum point-density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, the L2 discrepancy and the modified centered L2 discrepancy are also used intensively to compare the quality of uniform point sets; both can be computed in O(N^2 s) operations for N points in dimension s.
The Koksma–Hlawka inequality bounds the integration error of a point set in terms of its discrepancy. Let \overline{I}^s be the s-dimensional closed unit cube, \overline{I}^s = [0,1] \times \cdots \times [0,1], and let f have bounded variation V(f) on \overline{I}^s in the sense of Hardy and Krause. Then for any x_1, \ldots, x_N in I^s = [0,1)^s = [0,1) \times \cdots \times [0,1),

\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\overline{I}^s} f(u)\,du \right| \le V(f)\, D^*_N(x_1, \ldots, x_N).
The Koksma–Hlawka inequality is sharp in the following sense: for any point set \{x_1, \ldots, x_N\} in I^s and any \varepsilon > 0, there is a function f with bounded variation V(f) = 1 such that

\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\overline{I}^s} f(u)\,du \right| > D^*_N(x_1, \ldots, x_N) - \varepsilon.
Therefore, the quality of a numerical integration rule depends only on the discrepancy D^*_N(x_1, \ldots, x_N).
Let D = \{1, 2, \ldots, d\}. For \emptyset \neq u \subseteq D, write dx_u := \prod_{j \in u} dx_j and denote by (x_u, 1) the point obtained from x by replacing the coordinates not in u by 1. Then the Hlawka–Zaremba identity states that

\frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\overline{I}^s} f(u)\,du = \sum_{\emptyset \neq u \subseteq D} (-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)\, \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1)\, dx_u,

where the discrepancy function is

\operatorname{disc}(z) = \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{d} 1_{[0,z_j)}(x_{i,j}) - \prod_{j=1}^{d} z_j.
Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an L2 version of the Koksma–Hlawka inequality:

\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\overline{I}^s} f(u)\,du \right| \le \|f\|_{d}\, \operatorname{disc}_{d}(\{t_i\}),

where

\operatorname{disc}_{d}(\{t_i\}) = \left( \sum_{\emptyset \neq u \subseteq D} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)^2 \, dx_u \right)^{1/2}

and

\|f\|_{d} = \left( \sum_{u \subseteq D} \int_{[0,1]^{|u|}} \left| \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1) \right|^2 dx_u \right)^{1/2}.
The L2 discrepancy is of high practical importance because fast explicit calculation is possible for a given point set; this makes it easy to build point-set optimizers that use the L2 discrepancy as a criterion.
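As a sketch of such an explicit calculation (the closed form below is the standard expansion of the squared L2 star discrepancy, often attributed to Warnock; it is not stated in this article), the squared L2 star discrepancy of N points in [0,1)^s can be evaluated in O(N^2 s) operations:

```python
def l2_star_discrepancy_squared(points):
    """Squared L2 star discrepancy of a list of s-dimensional points in [0,1)^s.

    Expands the integral of (A([0,u); P)/N - vol[0,u))^2 over u in [0,1]^s
    into three explicit sums, costing O(N^2 s) operations.
    """
    n = len(points)
    s = len(points[0])

    term1 = 3.0 ** (-s)

    term2 = 0.0
    for x in points:
        prod = 1.0
        for xk in x:
            prod *= (1.0 - xk * xk) / 2.0
        term2 += prod
    term2 *= 2.0 / n

    term3 = 0.0
    for x in points:
        for y in points:
            prod = 1.0
            for xk, yk in zip(x, y):
                prod *= 1.0 - max(xk, yk)
            term3 += prod
    term3 /= n * n

    return term1 - term2 + term3
```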
It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős–Turán–Koksma inequality provides an upper bound.
Let x_1, \ldots, x_N be points in I^s and H be an arbitrary positive integer. Then

D^*_N(x_1, \ldots, x_N) \le \left( \frac{3}{2} \right)^{s} \left( \frac{2}{H+1} + \sum_{0 < \|h\|_{\infty} \le H} \frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i \langle h, x_n \rangle} \right| \right),

where

r(h) = \prod_{i=1}^{s} \max\{1, |h_i|\} \qquad \text{for} \qquad h = (h_1, \ldots, h_s) \in \Z^{s}.
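The right-hand side of the Erdős–Turán–Koksma inequality can be evaluated numerically; the sketch below (an editorial illustration, not from the original text) does so for a given point set and cutoff H.

```python
import cmath
import itertools

def erdos_turan_koksma_bound(points, H):
    """Right-hand side of the Erdos-Turan-Koksma inequality for points in [0,1)^s."""
    n = len(points)
    s = len(points[0])
    total = 2.0 / (H + 1)
    for h in itertools.product(range(-H, H + 1), repeat=s):
        if all(hi == 0 for hi in h):
            continue                       # skip h = 0
        r = 1.0
        for hi in h:
            r *= max(1, abs(hi))           # r(h) = prod max(1, |h_i|)
        exp_sum = sum(cmath.exp(2j * cmath.pi * sum(hi * xi for hi, xi in zip(h, x)))
                      for x in points) / n
        total += abs(exp_sum) / r
    return (1.5 ** s) * total
```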
Two long-standing conjectures concern the smallest possible order of the star discrepancy.

Conjecture 1. There is a constant c_s depending only on the dimension s such that

D^*_N(x_1, \ldots, x_N) \ge c_s \frac{(\ln N)^{s-1}}{N}

for any finite point set \{x_1, \ldots, x_N\}.

Conjecture 2. There is a constant c'_s depending only on s such that

D^*_N(x_1, \ldots, x_N) \ge c'_s \frac{(\ln N)^{s}}{N}

holds for infinitely many N for any infinite sequence x_1, x_2, x_3, \ldots.

These conjectures are equivalent. They have been proved for s \le 2 by W. M. Schmidt; in higher dimensions, the corresponding problem is still open.
The best lower bounds that have actually been proved are the following. Let s = 1. Then

D^*_N(x_1, \ldots, x_N) \ge \frac{1}{2N}

for any finite point set \{x_1, \ldots, x_N\}.
Let s = 2. W. M. Schmidt proved that for any finite point set \{x_1, \ldots, x_N\},

D^*_N(x_1, \ldots, x_N) \ge C \frac{\log N}{N},

where

C = \max_{a \ge 3} \frac{1}{16} \cdot \frac{a-2}{a \log a} = 0.023335\ldots
For arbitrary dimensions s > 1, K. F. Roth proved that

D^*_N(x_1, \ldots, x_N) \ge \frac{1}{2^{4s}} \frac{1}{\left((s-1)\log 2\right)^{\frac{s-1}{2}}} \frac{(\log N)^{\frac{s-1}{2}}}{N}

for any finite point set \{x_1, \ldots, x_N\}.
For s > 2 this bound has since been improved: there exists a constant t > 0, depending on s, such that

D^*_N(x_1, \ldots, x_N) \ge t\, \frac{(\log N)^{\frac{s-1}{2} + t}}{N}

for any finite point set \{x_1, \ldots, x_N\}.
Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution.
Constructions of sequences are known such that

D^*_N(x_1, \ldots, x_N) \le C \frac{(\ln N)^{s}}{N},

where C is a certain constant depending on the sequence. By Conjecture 2, these sequences are believed to have the best possible order of convergence. One general limitation is that construction methods can usually guarantee only this asymptotic order: in practice, low discrepancy is achieved only once N is large enough, and for a large dimension s this minimum N can be very large. For example, with s = 20 variables, N = 1000 points may still be too few for a low-discrepancy sequence to show a clear advantage over pseudorandom sampling.
Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers r_i on [0, 0.5) and construct quasirandom numbers s_i that are uniform on [0, 1) by taking

s_i = r_i for i odd and s_i = 0.5 + r_i for i even.
A second way to do it with the starting random numbers is to construct a random walk with offset 0.5 as in:
si=si-1+0.5+ri\pmod1.
That is, take the previous quasirandom number, add 0.5 and the random number, and take the result modulo 1.
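A minimal sketch of both constructions in one dimension (assuming a standard pseudorandom source; the function names are illustrative, not from the text):

```python
import random

def antithetic_offset(n):
    """Method 1: r_i in [0, 0.5); s_i = r_i for odd i and s_i = 0.5 + r_i for even i."""
    out = []
    for i in range(1, n + 1):
        r = random.uniform(0.0, 0.5)
        out.append(r if i % 2 == 1 else 0.5 + r)
    return out

def offset_random_walk(n):
    """Method 2: random walk with offset 0.5, s_i = (s_{i-1} + 0.5 + r_i) mod 1."""
    out = []
    s = 0.0
    for _ in range(n):
        s = (s + 0.5 + random.uniform(0.0, 0.5)) % 1.0
        out.append(s)
    return out
```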
For more than one dimension, Latin squares of the appropriate dimension can be used to provide offsets to ensure that the whole domain is covered evenly.
A simple construction is the additive recurrence: for any irrational \alpha, the sequence

s_n = \{s_0 + n\alpha\}

(where the braces denote the fractional part) is equidistributed, and for well-chosen \alpha its discrepancy decays nearly as fast as 1/N, up to a logarithmic factor. The sequence can be defined recursively by

s_{n+1} = (s_n + \alpha) \bmod 1.
A good value of \alpha gives lower discrepancy than a sequence of independent uniform random numbers.

The discrepancy can be bounded by the approximation exponent of \alpha. If the approximation exponent is \mu, then for any \varepsilon > 0 the following bound holds:

D_N((s_n)) = O_{\varepsilon}(N^{-1/(\mu-1)+\varepsilon}).

By the Thue–Siegel–Roth theorem, the approximation exponent of any irrational algebraic number is 2, giving a bound of N^{-1+\varepsilon} above.
The recurrence relation above is similar to the recurrence relation used by a linear congruential generator, a poor-quality pseudorandom number generator:[3]
r_i = (a r_{i-1} + c) \bmod m
For the low discrepancy additive recurrence above, a and m are chosen to be 1. Note, however, that this will not generate independent random numbers, so should not be used for purposes requiring independence.
The value of c with the lowest discrepancy is the fractional part of the golden ratio:

c = \frac{\sqrt{5} - 1}{2} = \varphi - 1 \approx 0.618034.
Another value that is nearly as good is the fractional part of the silver ratio, which is the fractional part of the square root of 2:
c=\sqrt{2}-1 ≈ 0.414214.
In more than one dimension, separate quasirandom numbers are needed for each dimension. A convenient set of values to use is the square roots of the primes from two up, all taken modulo 1:

c = \sqrt{2}, \sqrt{3}, \sqrt{5}, \sqrt{7}, \sqrt{11}, \ldots
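A sketch of the additive recurrence in several dimensions, using the fractional parts of square roots of primes as the per-dimension constants (the seed value and function name are illustrative assumptions):

```python
import math

def additive_recurrence(n_points, alphas, seed=0.5):
    """Coordinate-wise additive recurrence s_n = (s_{n-1} + alpha) mod 1."""
    s = [seed] * len(alphas)
    points = []
    for _ in range(n_points):
        s = [(sj + aj) % 1.0 for sj, aj in zip(s, alphas)]
        points.append(tuple(s))
    return points

# Per-dimension constants: fractional parts of square roots of the first primes.
alphas = [math.sqrt(p) % 1.0 for p in (2, 3, 5, 7, 11)]
points_5d = additive_recurrence(1000, alphas)
```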
However, a set of values based on the generalised golden ratio has been shown to produce more evenly distributed points.[5]
The list of pseudorandom number generators lists methods for generating independent pseudorandom numbers. Note: in low dimensions, such recursive recurrences lead to point sets of good quality, but for larger s (e.g. s > 8) other point-set generators can offer much lower discrepancies.
See main article: van der Corput sequence.
Let

n = \sum_{k=0}^{L-1} d_k(n)\, b^{k}

be the b-ary representation of the positive integer n \ge 1, i.e. 0 \le d_k(n) < b. Set

g_b(n) = \sum_{k=0}^{L-1} d_k(n)\, b^{-k-1}.

Then there is a constant C depending only on b such that (g_b(n))_{n \ge 1} satisfies

D^*_N(g_b(1), \ldots, g_b(N)) \le C \frac{\log N}{N},

where D^*_N is the star discrepancy.
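A minimal sketch of the radical-inverse function g_b, which reflects the base-b digits of n about the radix point:

```python
def van_der_corput(n, base=2):
    """Radical inverse g_b(n): reverse the base-b digits of n behind the radix point."""
    g, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        g += digit / denom
    return g

# First elements of the base-2 van der Corput sequence: 0.5, 0.25, 0.75, 0.125, ...
print([van_der_corput(n, 2) for n in range(1, 9)])
```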
See main article: Halton sequence.
The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let s be an arbitrary dimension and b1, ..., bs be arbitrary coprime integers greater than 1. Define
x(n) = \left( g_{b_1}(n), \ldots, g_{b_s}(n) \right).
Then there is a constant C' depending only on b_1, \ldots, b_s such that (x(n))_{n \ge 1} is an s-dimensional low-discrepancy sequence with

D^*_N(x(1), \ldots, x(N)) \le C' \frac{(\log N)^{s}}{N}.
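A sketch of the Halton construction built on the van_der_corput helper above, using the first s primes as bases (a common, though not mandatory, choice):

```python
def first_primes(s):
    """Return the first s prime numbers (simple trial division)."""
    primes, candidate = [], 2
    while len(primes) < s:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def halton(n_points, s):
    """First n_points elements of the s-dimensional Halton sequence."""
    bases = first_primes(s)
    return [tuple(van_der_corput(n, b) for b in bases) for n in range(1, n_points + 1)]
```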
Let b_1, \ldots, b_{s-1} be coprime positive integers greater than 1. For given s and N, the s-dimensional Hammersley set of size N is defined by

x(n) = \left( g_{b_1}(n), \ldots, g_{b_{s-1}}(n), \frac{n}{N} \right)

for n = 1, \ldots, N. Then

D^*_N(x(1), \ldots, x(N)) \le C \frac{(\log N)^{s-1}}{N},

where C is a constant depending only on b_1, \ldots, b_{s-1}.
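A sketch of the Hammersley set, reusing the van_der_corput and first_primes helpers from the sketches above:

```python
def hammersley(N, s):
    """s-dimensional Hammersley set of size N: s-1 radical-inverse coordinates
    plus the linear sweep n/N in the last coordinate."""
    bases = first_primes(s - 1)
    return [tuple(van_der_corput(n, b) for b in bases) + (n / N,)
            for n in range(1, N + 1)]
```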
Note: The formulas show that the Hammersley set is essentially a Halton sequence with one extra dimension obtained for free by adding a linear sweep. This is only possible if N is known upfront. A linear set is also the set with the lowest possible one-dimensional discrepancy in general; unfortunately, for higher dimensions no such "discrepancy-optimal" sets are known. For s = 2, most low-discrepancy point-set generators deliver at least near-optimum discrepancies.
See main article: Sobol sequence.
The Antonov–Saleev variant of the Sobol’ sequence generates numbers between zero and one directly as binary fractions of length w, from a set of w special binary fractions V_i, i = 1, 2, \ldots, w, called direction numbers. The bits of the Gray code of i, G(i), are used to select direction numbers: to get the Sobol’ sequence value s_i, take the exclusive or of the binary value of the Gray code of i with the appropriate direction numbers V_i.
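A sketch of the Gray-code update for the first Sobol’ dimension only, where the direction numbers are simply V_k = 2^{w-k}; full Sobol’ generators derive the direction numbers of the remaining dimensions from primitive polynomials, which is omitted here.

```python
def sobol_first_dimension(n_points, w=32):
    """First Sobol' dimension via the Antonov-Saleev (Gray code) update.

    Each new value differs from the previous one by XOR with the direction
    number indexed by the position of the lowest zero bit of the point index.
    """
    direction = [1 << (w - k) for k in range(1, w + 1)]   # V_k = 2^(w-k)
    x = 0
    values = [0.0]                        # the sequence conventionally starts at 0
    for i in range(n_points - 1):
        c = 0                             # position (0-based) of the lowest zero bit of i
        while (i >> c) & 1:
            c += 1
        x ^= direction[c]
        values.append(x / 2.0 ** w)
    return values

# First eight values: 0.0, 0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125
print(sobol_first_dimension(8))
```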
See main article: Poisson disk sampling. Poisson disk sampling is popular in video games to rapidly place objects in a way that appears random but guarantees that every two points are separated by at least the specified minimum distance.[7] This does not guarantee low discrepancy (as, for example, Sobol’ sequences do), but it does give a significantly lower discrepancy than pure random sampling. Such sampling patterns are designed on the basis of frequency analysis rather than discrepancy and belong to the family of so-called "blue noise" patterns.
The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type. For comparison, 10000 elements of a sequence of pseudorandom points are also shown. The low-discrepancy sequence was generated by TOMS algorithm 659.[8] An implementation of the algorithm in Fortran is available from Netlib.