The intensity $\lambda$ of a counting process $\{N(t), t \ge 0\}$ measures the rate at which its predictable part increases. A counting process is a submartingale, so by the Doob–Meyer decomposition it can be written as

$$N(t) = M(t) + \Lambda(t),$$

where $M(t)$ is a martingale and $\Lambda(t)$ is a predictable increasing process, the compensator of $N(t)$. The compensator is related to the intensity $\lambda$ by

$$\Lambda(t) = \int_0^t \lambda(s)\,ds.$$
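As a concrete check of this decomposition, here is a minimal Python sketch, assuming a homogeneous Poisson process with constant rate `lam`, so that $\Lambda(t) = \lambda t$. Averaging $N(t) - \Lambda(t)$ over many simulated paths should give a value near zero, illustrating that $M(t) = N(t) - \Lambda(t)$ is a mean-zero martingale. The helper name `poisson_count` is purely illustrative.

```python
import random

# Assumption: homogeneous Poisson process with constant intensity `lam`,
# so the compensator is Lambda(t) = lam * t.

def poisson_count(lam, t, rng):
    """Count events of a rate-`lam` Poisson process on [0, t]."""
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return n

rng = random.Random(0)
lam, t, paths = 3.0, 2.0, 20000
# Empirical mean of M(t) = N(t) - Lambda(t) over many paths.
avg_m = sum(poisson_count(lam, t, rng) - lam * t for _ in range(paths)) / paths
print(abs(avg_m) < 0.1)  # the martingale part averages to roughly zero
```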
Given a probability space $(\Omega, \mathcal{F}, P)$ and a counting process $\{N(t), t \ge 0\}$ adapted to the filtration $\{\mathcal{F}_t, t \ge 0\}$, the intensity of $N$ is the process $\{\lambda(t), t \ge 0\}$ defined by

$$\lambda(t) = \lim_{h \downarrow 0} \frac{1}{h}\, E\big[N(t+h) - N(t) \mid \mathcal{F}_t\big].$$

The right-continuity of counting processes allows this limit to be taken from the right.[1]
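The defining limit can be checked numerically. In the sketch below (my own illustration, not from the source), for a homogeneous Poisson process the conditional increment $E[N(t+h) - N(t) \mid \mathcal{F}_t]$ equals $\lambda h$, so the difference quotient recovers the intensity for small $h$.

```python
import random

# Assumption: homogeneous Poisson process with rate `lam`, for which
# E[N(t+h) - N(t) | F_t] = lam * h, independent of the past.
rng = random.Random(1)
lam, h, trials = 4.0, 0.01, 200000

def window_count():
    # Number of events in a window of length h, simulated directly
    # from exponential inter-arrival times.
    n, s = 0, rng.expovariate(lam)
    while s <= h:
        n += 1
        s += rng.expovariate(lam)
    return n

# Monte Carlo estimate of (1/h) * E[N(t+h) - N(t)].
est = sum(window_count() for _ in range(trials)) / (trials * h)
print(abs(est - lam) < 0.3)  # difference quotient is close to lam
```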
In statistical learning, the discrepancy between $\lambda$ and an estimate $\hat{\lambda}$ is measured by a loss function. Suppose a counting process $N(t)$ is restricted to $t \in [0,1]$ and $n$ i.i.d. copies $N_1, N_2, \ldots, N_n$ are observed. The least-squares risk is

$$R_n(\lambda) = \int_0^1 \lambda(t)^2\,dt - \frac{2}{n} \sum_{i=1}^{n} \int_0^1 \lambda(t)\,dN_i(t),$$
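The risk $R_n$ is computable from data because the integral against a counting process reduces to a sum of $\lambda$ over the observed jump times. A hedged sketch (the helper names `risk` and `sample_times` are my own, and the homogeneous Poisson data are illustrative):

```python
import random

def risk(lam, event_times_per_copy, grid=10000):
    """Least-squares risk R_n for a candidate intensity `lam` on [0, 1]."""
    n = len(event_times_per_copy)
    # Deterministic part: integral of lam(t)^2 via a midpoint Riemann sum.
    quad = sum(lam((k + 0.5) / grid) ** 2 for k in range(grid)) / grid
    # Stochastic part: integral of lam dN_i is the sum of lam at N_i's jumps.
    stoch = sum(lam(t) for times in event_times_per_copy for t in times)
    return quad - 2.0 / n * stoch

# Usage: n copies of a rate-5 homogeneous Poisson process on [0, 1].
rng = random.Random(2)
def sample_times(rate):
    times, s = [], rng.expovariate(rate)
    while s <= 1.0:
        times.append(s)
        s += rng.expovariate(rate)
    return times

data = [sample_times(5.0) for _ in range(500)]
# In expectation the risk is minimized at the true intensity lam(t) = 5.
print(risk(lambda t: 5.0, data) < risk(lambda t: 2.0, data))
```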
which involves an Itô integral. Assume that $\lambda(t)$ is piecewise constant on $[0,1]$: for $\beta = (\beta_1, \beta_2, \ldots, \beta_m) \in \mathbb{R}_+^m$, set

$$\lambda_\beta = \sum_{j=1}^{m} \beta_j \lambda_{j,m}, \qquad \lambda_{j,m} = \sqrt{m}\,\mathbf{1}_{\left(\frac{j-1}{m},\,\frac{j}{m}\right]},$$

where the functions $\lambda_{j,m}$ carry the factor $\sqrt{m}$ so that each has $L^2$ norm equal to 1. Choose data-driven weights $\hat{w}_j$, which depend on a parameter $x > 0$, and define the weighted norm $\|\beta\|_{\hat{w}} = \sum_{j=1}^{m} \hat{w}_j |\beta_j|$. Then
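The $\sqrt{m}$ normalization can be verified directly: $\lambda_{j,m}$ takes the value $\sqrt{m}$ on an interval of length $1/m$, so its squared $L^2$ norm is $m \cdot (1/m) = 1$. A short numerical check (the helper `lam_jm` is an illustrative name):

```python
import math

def lam_jm(j, m):
    """Histogram dictionary function: sqrt(m) on ((j-1)/m, j/m], else 0."""
    def f(t):
        return math.sqrt(m) if (j - 1) / m < t <= j / m else 0.0
    return f

m = 8
f3 = lam_jm(3, m)
# Midpoint Riemann-sum check of the squared L2 norm on [0, 1].
grid = 100000
norm_sq = sum(f3((k + 0.5) / grid) ** 2 for k in range(grid)) / grid
print(round(norm_sq, 2))  # → 1.0
```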
the estimator for $\beta$ is

$$\hat{\beta} = \underset{\beta \in \mathbb{R}_+^m}{\arg\min} \left\{ R_n(\lambda_\beta) + \|\beta\|_{\hat{w}} \right\}.$$

The estimator $\hat{\lambda} = \lambda_{\hat{\beta}}$ satisfies the following $L^2$ oracle inequality: for the weights $\hat{w}_j(x)$,

$$\|\hat{\lambda} - \lambda\|^2 \le \inf_{\beta \in \mathbb{R}_+^m} \left\{ \|\lambda_\beta - \lambda\|^2 + 2\|\beta\|_{\hat{w}} \right\}$$

with probability at least $1 - 12.85\, e^{-x}$.
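Because the $\lambda_{j,m}$ have disjoint supports and unit $L^2$ norm, the penalized risk separates across coordinates: minimizing $\beta_j^2 - 2 c_j \beta_j + \hat{w}_j \beta_j$ over $\beta_j \ge 0$, with $c_j = \frac{1}{n}\sum_i \int_0^1 \lambda_{j,m}\,dN_i$, gives the soft-threshold solution $\hat{\beta}_j = \max(0,\, c_j - \hat{w}_j/2)$. The sketch below implements this closed form under that separability observation; the constant weights and the names `lasso_hist_estimator` and `sample_times` are illustrative assumptions, not the data-driven $\hat{w}_j(x)$ of the theory.

```python
import math
import random

def lasso_hist_estimator(event_times_per_copy, m, weights):
    """Coordinate-wise minimizer of R_n(lambda_beta) + ||beta||_w, beta >= 0."""
    n = len(event_times_per_copy)
    c = [0.0] * m
    for times in event_times_per_copy:
        for t in times:
            j = min(int(math.ceil(t * m)), m)  # bin with t in ((j-1)/m, j/m]
            c[j - 1] += math.sqrt(m) / n       # lambda_{j,m}(t) = sqrt(m)
    # Soft threshold: beta_j = max(0, c_j - w_j / 2).
    return [max(0.0, c[j] - weights[j] / 2.0) for j in range(m)]

# Usage: recover a constant intensity lam = 6 from 400 observed copies.
rng = random.Random(3)
def sample_times(rate):
    times, s = [], rng.expovariate(rate)
    while s <= 1.0:
        times.append(s)
        s += rng.expovariate(rate)
    return times

data = [sample_times(6.0) for _ in range(400)]
m = 4
beta = lasso_hist_estimator(data, m, weights=[0.1] * m)
# On bin j, lambda_hat(t) = beta_j * sqrt(m), which should sit near 6.
print([round(b * math.sqrt(m), 1) for b in beta])
```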