In real algebraic geometry, the Łojasiewicz inequality, named after Stanisław Łojasiewicz, gives an upper bound for the distance of a point to the nearest zero of a given real analytic function. Specifically, let f : U → R be a real analytic function on an open set U in R^n, and let Z be the zero locus of f. Assume that Z is not empty. Then for any compact set K in U, there exist positive constants α and C such that, for all x in K,
\operatorname{dist}(x,Z)^\alpha \le C|f(x)|.
Here α can be large.
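For example, take f(x) = x^k on R, so that Z = \{0\} and \operatorname{dist}(x,Z) = |x|: the inequality holds with \alpha = k and C = 1, but fails near 0 for every exponent smaller than k, so \alpha must grow with the order of vanishing of f.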
The following form of this inequality is often seen in more analytic contexts: with the same assumptions on f, for every p ∈ U there is a possibly smaller open neighborhood W of p and constants θ ∈ (0,1) and c > 0 such that, for all x ∈ W,
|f(x)-f(p)|^\theta \le c|\nabla f(x)|.
A special case of the Łojasiewicz inequality, due to Polyak, is commonly used to prove linear convergence of gradient descent algorithms; in this setting it is known as the Polyak–Łojasiewicz (PL) inequality.
Define the following notation:
- f is a function of type \R^d \to \R, and has a continuous derivative \nabla f.
- X^* \subset \R^d is the set of global minimizers of f, assumed nonempty, and f^* is the global minimum value of f; x^* denotes an arbitrary point of X^*.
- \mu, L > 0 are constants.

f is L-Lipschitz smooth iff \|\nabla f(x)-\nabla f(y)\| \le L\|x-y\| for all x, y.
f is \mu-strongly convex iff f(y) \ge f(x) + \langle \nabla f(x), y-x \rangle + \frac{\mu}{2}\|y-x\|^2 for all x, y.
f is \mu-PL (where "PL" means "Polyak-Łojasiewicz") iff \frac{1}{2}\|\nabla f(x)\|^2 \ge \mu(f(x)-f^*) for all x.
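Strong convexity is strictly stronger than the PL condition: a standard example from the optimization literature is f(x) = x^2 + 3\sin^2 x, which is nonconvex yet is said to satisfy the PL inequality with \mu = 1/32. A minimal NumPy sketch (the grid range and tolerance are arbitrary choices) that checks this numerically:

```python
import numpy as np

# f(x) = x^2 + 3 sin^2(x): nonconvex (f'' changes sign), unique global
# minimum f* = 0 at x = 0, and claimed mu-PL with mu = 1/32.
f = lambda x: x**2 + 3 * np.sin(x) ** 2
df = lambda x: 2 * x + 3 * np.sin(2 * x)  # since d/dx[3 sin^2 x] = 3 sin(2x)

xs = np.linspace(-10.0, 10.0, 100_001)
mu = 1 / 32
# PL inequality: (1/2) f'(x)^2 >= mu * (f(x) - f*), with f* = 0 here
assert np.all(0.5 * df(xs) ** 2 + 1e-12 >= mu * f(xs))
print("mu-PL with mu = 1/32 holds at all sampled points")
```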
The coordinate descent algorithm first samples a random coordinate i uniformly from \{1,\dots,d\}, then performs gradient descent along that coordinate by
x_{k+1} = x_k - \eta\,\partial_i f(x_k)\,e_i,
where e_i is the i-th standard basis vector.
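A minimal NumPy sketch of this scheme (the step size, iteration count, and the quadratic test function are illustrative assumptions, not from the source):

```python
import numpy as np

def random_coordinate_descent(grad_f, x0, eta=0.1, steps=2000, seed=0):
    """Minimize f by a gradient step on one uniformly sampled coordinate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        i = rng.integers(x.size)     # sample a coordinate uniformly at random
        x[i] -= eta * grad_f(x)[i]   # descend along coordinate i only
    return x

# Example: f(x) = ||x - x*||^2 with minimizer x* = (1, -2)
grad_f = lambda x: 2 * (x - np.array([1.0, -2.0]))
print(random_coordinate_descent(grad_f, x0=[5.0, 5.0]))  # approx. [1, -2]
```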
In stochastic gradient descent, we have a function f to minimize, but we cannot sample its gradient directly. Instead, we sample a random gradient \nabla f_i(x), where the index i is sampled uniformly from \{1,\dots,N\} and the functions f_i are such that
f(x) = \frac{1}{N}\sum_{i=1}^{N} f_i(x),
so that \operatorname{E}_i[\nabla f_i(x)] = \nabla f(x). For example, in typical machine learning, x are the parameters of the neural network, f_i is the loss incurred on the i-th training data point, and f is the average loss over all training data points.
The gradient update step is
x_{k+1} = x_k - \eta_k \nabla f_{i_k}(x_k),
where i_k is sampled independently and uniformly at each step and (\eta_k)_k is a sequence of learning rates (the learning rate schedule).
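A minimal sketch of this update on a finite-sum least-squares objective (the synthetic data and the particular schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)  # synthetic data
# f(x) = (1/N) sum_i f_i(x) with f_i(x) = (1/2) (a_i . x - b_i)^2
grad_f_i = lambda x, i: (A[i] @ x - b[i]) * A[i]

x = np.zeros(5)
for k in range(20_000):
    eta_k = 0.1 / (1 + 0.01 * k)   # a decreasing learning-rate schedule
    i = rng.integers(len(b))       # sample a training point uniformly
    x -= eta_k * grad_f_i(x, i)    # stochastic gradient step

print("mean loss:", np.mean((A @ x - b) ** 2) / 2)
```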
Combining L-Lipschitz smoothness of f over one update step with the \mu-PL inequality and the unbiasedness of the sampled gradient gives the basic recursion
\operatorname{E}\left[f(x_{k+1})-f^*\right] \le (1-2\mu\eta_k)\operatorname{E}\left[f(x_k)-f^*\right] + \frac{L\eta_k^2}{2}\operatorname{E}\left[\left\|\nabla f_{i_k}(x_k)\right\|^2\right].
As it is, this proposition is difficult to use. We can make it easier to use by some further assumptions.
The second-moment term on the right can be removed by assuming a uniform upper bound. That is, if there exists some constant C such that during the SG process we have
\operatorname{E}\left[\left\|\nabla f_{i_k}(x_k)\right\|^2\right] \le C^2
for all k, then
\operatorname{E}\left[f(x_{k+1})-f^*\right] \le (1-2\mu\eta_k)\operatorname{E}\left[f(x_k)-f^*\right] + \frac{L\eta_k^2 C^2}{2}.
Similarly, if the variance of the sampled gradient is bounded, \operatorname{E}\left[\left\|\nabla f_{i_k}(x_k)-\nabla f(x_k)\right\|^2\right] \le \sigma^2 for all k, and \eta_k \le 1/L, then
\operatorname{E}\left[f(x_{k+1})-f^*\right] \le (1-\mu\eta_k)\operatorname{E}\left[f(x_k)-f^*\right] + \frac{L\eta_k^2 \sigma^2}{2}.
For a constant learning rate schedule, with \eta_k = \eta \le 1/(2\mu), we have
\operatorname{E}\left[f(x_{k+1})-f^*\right] \le (1-2\mu\eta)\operatorname{E}\left[f(x_k)-f^*\right] + \frac{L\eta^2 C^2}{2}.
By induction, we have
\operatorname{E}\left[f(x_k)-f^*\right] \le (1-2\mu\eta)^k\left(f(x_0)-f^*\right) + \frac{L\eta C^2}{4\mu}.
We see that the loss decreases in expectation exponentially at first, but then stops decreasing, which is caused by the \frac{L\eta C^2}{4\mu} term. In short, because the gradient descent steps are too large, the variance in the stochastic gradient starts to dominate, and x_k starts doing a random walk in the vicinity of X^*.
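To see where the plateau term comes from, unroll the recursion and bound the geometric series by \sum_{j \ge 0}(1-2\mu\eta)^j = 1/(2\mu\eta) (a routine computation under the stated assumptions):
\operatorname{E}\left[f(x_k)-f^*\right] \le (1-2\mu\eta)^k\left(f(x_0)-f^*\right) + \frac{L\eta^2 C^2}{2}\sum_{j=0}^{k-1}(1-2\mu\eta)^j \le (1-2\mu\eta)^k\left(f(x_0)-f^*\right) + \frac{L\eta C^2}{4\mu}.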
For a decreasing learning rate schedule with \eta_k = \Theta(1/k), we have
\operatorname{E}\left[f(x_k)-f^*\right] = O(1/k).
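The contrast between the two schedules can be seen on a toy model. The sketch below runs SGD on f(x) = x^2/2 (so \mu = L = 1) with additive Gaussian gradient noise; the model, constants, and averaging window are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # standard deviation of the gradient noise

def run(schedule, steps=20_000, tail=1_000):
    """SGD on f(x) = x^2/2 with noisy gradients; report mean loss near the end."""
    x, losses = 10.0, []
    for k in range(steps):
        g = x + sigma * rng.normal()  # unbiased stochastic gradient of x^2/2
        x -= schedule(k) * g
        losses.append(0.5 * x * x)
    return np.mean(losses[-tail:])

print("constant eta = 0.1:", run(lambda k: 0.1))          # plateaus at a noise floor
print("eta_k = 1/(k + 1) :", run(lambda k: 1 / (k + 1)))  # keeps decreasing, O(1/k)
```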