An Erdös–Révész Type Law of the Iterated Logarithm for Order Statistics of a Stationary Gaussian Process
K. Dębicki · K. M. Kosiński
Instytut Matematyczny, University of Wrocław, Pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland
Let {X(t) : t ∈ R_+} be a stationary Gaussian process with almost surely (a.s.) continuous sample paths, EX(t) = 0, EX^2(t) = 1 and correlation function satisfying (i) r(t) = 1 − C t^α + o(t^α) as t → 0 for some 0 < α ≤ 2 and C > 0; (ii) sup_{t≥s} r(t) < 1 for each s > 0; and (iii) r(t) = O(t^{−λ}) as t → ∞ for some λ > 0. For any n ≥ 1, consider n mutually independent copies of X and denote by {X_{r:n}(t) : t ≥ 0} the r-th smallest order statistics process, 1 ≤ r ≤ n. We provide a tractable criterion for assessing whether, for any positive, nondecreasing function f, P(E_f) = P(X_{r:n}(t) > f(t) i.o.) equals 0 or 1. Using this criterion we find, for a family of functions f_p(t) such that z_p(t) = P(sup_{s∈[0,1]} X_{r:n}(s) > f_p(t)) = O((t log^{1−p} t)^{−1}), that P(E_{f_p}) = 1_{{p≥0}}. Consequently, with ξ_p(t) = sup{s : 0 ≤ s ≤ t, X_{r:n}(s) ≥ f_p(s)}, for p ≥ 0 we have lim_{t→∞} ξ_p(t) = ∞ and lim sup_{t→∞} (ξ_p(t) − t) = 0 a.s. Complementarily, we prove an Erdös–Révész type law of the iterated logarithm lower bound on ξ_p(t), namely, that lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) = −1 a.s. for p > 1 and lim inf_{t→∞} log(ξ_p(t)/t)/(h_p(t)/t) = −1 a.s. for p ∈ (0, 1], where h_p(t) = (1/z_p(t)) p log log t.
Keywords: Extremes of Gaussian processes · Order statistics process · Law of the iterated logarithm

1 Introduction and Main Results
Let X = {X(t) : t ∈ R_+} be a stationary Gaussian process with almost surely (a.s.) continuous sample paths, EX(t) = 0 and EX^2(t) = 1. Suppose that the correlation function of X, r(t) = EX(t)X(0), satisfies the following regularity assumptions:

r(t) = 1 − C t^α + o(t^α) as t → 0, for some 0 < α ≤ 2 and C > 0,   (1)

r^*(s) = sup_{t≥s} r(t) < 1 for each s > 0,

r(t) = O(t^{−λ}) as t → ∞, for some λ > 0.   (2)
The analysis of extremes of Gaussian stochastic processes has a long history. The celebrated double sum method, primarily developed by Pickands, e.g., [8], and extended in seminal works of Piterbarg, e.g., [10] or the monograph [9], plays a central role in the extreme value theory of Gaussian processes. The technique developed there has proved to be a universal method, which also delivers answers for classes of non-Gaussian processes; see, for example, the recent contributions [5,6].
Laws of the iterated logarithm occupy an important place in this theory, describing the extremal behavior of stochastic processes on a large-time scale. One of the important contributions in this domain is a result on the process ξ = {ξ(t) : t ≥ 0}, defined via ξ(t) = sup{s : 0 ≤ s ≤ t, X(s) ≥ (2 log s)^{1/2}}. In particular, the law of the iterated logarithm implies that, see [11,12],
lim sup_{t→∞} (ξ(t) − t) = 0 a.s.
Interestingly, under the above regularity assumptions, [12] gave the lower bound of ξ(t) and obtained an Erdös–Révész type law of the iterated logarithm, that is,

lim inf_{t→∞} (ξ(t) − t) / ( t (log t)^{(α−2)/(2α)} · log_2 t ) = − (2 + α)√π / ( α H_α (2C)^{1/α} )   a.s. if 0 < α < 2,   (3)

lim inf_{t→∞} log(ξ(t)/t) / log_2 t = − 2√π / ( H_2 √(2C) )   a.s. if α = 2,   (4)
where H_α is the Pickands constant, defined by H_α = lim_{T→∞} T^{−1} E exp( sup_{t∈[0,T]} ( √2 B_{α/2}(t) − t^α ) ), with B_{α/2} = {B_{α/2}(t) : t ≥ 0} denoting fractional Brownian motion with Hurst index α/2 ∈ (0, 1], i.e., a continuous, centered Gaussian process with covariance function

E B_{α/2}(s) B_{α/2}(t) = (1/2) ( s^α + t^α − |t − s|^α ).
Equation (3) shows that for any t big enough there exists an s in [t − t(log t)^{(α−2)/(2α)} · log_2 t, t] such that, almost surely, X(s) ≥ (2 log s)^{1/2}, and that the length of the interval t(log t)^{(α−2)/(2α)} · log_2 t is the smallest possible. Moreover, the bigger the parameter α is, the wider the interval will be.
In this paper, we derive a counterpart of Shao's result for the order statistics process X_{r:n}. Namely, for any n ≥ 1, we consider X_1, ..., X_n, n mutually independent copies of X, and denote by X_{r:n} = {X_{r:n}(t) : t ≥ 0} the r-th smallest order statistics process; that is, for each t ≥ 0 and 1 ≤ r ≤ n,

X_{1:n}(t) = min_{1≤j≤n} X_j(t) ≤ X_{2:n}(t) ≤ ... ≤ X_{n−1:n}(t) ≤ max_{1≤j≤n} X_j(t) = X_{n:n}(t).
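For intuition only, X_{r:n} can be approximated on a finite grid by simulating n independent copies of a stationary Gaussian process and sorting them pointwise. The sketch below is not part of the paper's argument; it uses the Ornstein–Uhlenbeck correlation r(t) = e^{−|t|}, which satisfies the standing assumptions with α = 1 and C = 1, and all names in it are ours:

```python
import numpy as np

def order_statistic_path(n, r, grid, rng):
    """Simulate n independent stationary Gaussian processes with correlation
    r(t) = exp(-|t|) (alpha = 1, C = 1) on `grid`, sort them pointwise and
    return the r-th smallest order statistic path (r is 1-indexed)."""
    K = np.exp(-np.abs(grid[:, None] - grid[None, :]))   # OU covariance matrix
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(grid)))
    X = L @ rng.standard_normal((len(grid), n))          # columns are X_1, ..., X_n
    X.sort(axis=1)                                       # pointwise order statistics
    return X[:, r - 1]

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 10.0, 400)
x25 = order_statistic_path(5, 2, grid, rng)              # a path of X_{2:5} on [0, 10]
print(x25.shape)
```

Sorting along the second axis realizes exactly the pointwise chain X_{1:n}(t) ≤ ... ≤ X_{n:n}(t) displayed above.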
Our first contribution is a theorem that extends the classical findings of Qualls and Watanabe [11].

Theorem 1 For all functions f that are positive and nondecreasing on some interval [T, ∞), T > 0, it follows that

P(E_f) := P( X_{r:n}(t) > f(t) i.o. ) = 0 or 1,

according as the integral

I_f := ∫_T^∞ P( sup_{t∈[0,1]} X_{r:n}(t) > f(u) ) du

is finite or infinite.
[1, Theorem 2.2], see also [3], gives the expression for the asymptotic behavior of the probability in I_f, namely

P( sup_{t∈[0,1]} X_{r:n}(t) > u ) = C^{1/α} binom(n, r̂) H_{α,r̂} u^{2/α} (Ψ(u))^{r̂} (1 + o(1)),   as u → ∞,   (5)

where r̂ = n − r + 1, Ψ(u) = 1 − Φ(u), Φ(u) is the distribution function of the unit normal law,

H_{α,k} = lim_{T→∞} T^{−1} H_{α,k}(T) ∈ (0, ∞),

H_{α,k}(T) = ∫_{R^k} e^{Σ_{i=1}^k w_i} P( sup_{t∈[0,T]} min_{1≤i≤k} ( √2 B^{(i)}_{α/2}(t) − t^α − w_i ) > 0 ) dw_1 ... dw_k,

and B^{(i)}_{α/2}, 1 ≤ i ≤ n, are mutually independent fractional Brownian motions. H_{α,k} is the generalized Pickands constant introduced in [2]; see also [1]. Therefore, Theorem 1 provides a tractable criterion for settling the dichotomy of P(E_f).
For instance, let

f_p(s) = ( (2/r̂) ( log s + ( (2 − r̂α)/(2α) + 1 − p ) log_2 s ) )^{1/2},   p ∈ R.
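The threshold p = 0 in the resulting dichotomy is the usual integral test: up to constants, the probability under the integral defining I_{f_p} behaves like (u log^{1−p} u)^{−1}, and ∫^∞ du/(u (log u)^{1−p}) diverges exactly when p ≥ 0. A small, purely illustrative check (all constants dropped; the function names are ours, not the paper's):

```python
import math

def tail_integral(p, T):
    """Closed form of the tail integral  I(T) = ∫_e^T du / (u (log u)^(1-p))."""
    if p == 0.0:
        return math.log(math.log(T))          # log log T: diverges, but very slowly
    return (math.log(T) ** p - 1.0) / p       # diverges iff p > 0; -> -1/p if p < 0

def quad_check(p, T, m=100_000):
    """Midpoint-rule check of the same integral after substituting v = log u."""
    a, b = 1.0, math.log(T)
    h = (b - a) / m
    return h * sum((a + (i + 0.5) * h) ** (p - 1.0) for i in range(m))

# p >= 0: the integral grows without bound; p < 0: it stays bounded by -1/p.
for p in (0.5, 0.0, -0.5):
    print(p, [round(tail_integral(p, T), 3) for T in (1e3, 1e6, 1e12)])
```

The substitution v = log u reduces the integrand to v^{p−1}, which makes the p ≥ 0 / p < 0 dichotomy transparent.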
One easily checks that, as u → ∞,

P( sup_{t∈[0,1]} X_{r:n}(t) > f_p(u) ) = C^{1/α} binom(n, r̂) ( H_{α,r̂} / (2π)^{r̂/2} ) (2/r̂)^{(2−r̂α)/(2α)} (u log^{1−p} u)^{−1} (1 + o(1)).   (6)

Hence, for any p ∈ R,

P( X_{r:n}(t) > f_p(t) i.o. ) = 0 if p < 0, and = 1 if p ≥ 0.

Furthermore,

lim sup_{t→∞} X_{r:n}(t)/√(log t) = (2/r̂)^{1/2}   a.s.
Next, consider the process ξ_p = {ξ_p(t) : t ≥ 0} defined as

ξ_p(t) = sup{s : 0 ≤ s ≤ t, X_{r:n}(s) ≥ f_p(s)}.

Since I_{f_p} = ∞ for p ≥ 0, Theorem 1 implies that

lim_{t→∞} ξ_p(t) = ∞ a.s. and lim sup_{t→∞} (ξ_p(t) − t) = 0 a.s.

Let, cf. (6),

h_p(t) = p ( P( sup_{s∈[0,1]} X_{r:n}(s) > f_p(t) ) )^{−1} log_2 t.
The second contribution of this paper is an Erdös–Révész type law of the iterated logarithm for the process ξ_p.

Theorem 2 If p > 1, then

lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) = −1   a.s.

If p ∈ (0, 1], then

lim inf_{t→∞} log(ξ_p(t)/t)/(h_p(t)/t) = −1   a.s.
Now, let us complementarily put η_p = {η_p(t) : t ≥ 0}, where η_p(t) = inf{s : s ≥ t, X_{r:n}(s) ≥ f_p(s)}. Since

P( ξ_p(t) − t ≤ −x ) = P( sup_{s∈(t−x,t]} X_{r:n}(s)/f_p(s) < 1 )

and

P( z − η_p(z) ≤ −x ) = P( sup_{s∈[z,z+x]} X_{r:n}(s)/f_p(s) < 1 ),

it follows that

lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) = lim inf_{z→∞} (z − η_p(z))/h_p(z).   (7)
Theorem 2 shows that for any t big enough there exists an s in [t − h_p(t), t] (as well as in [t, t + h_p(t)], by (7)) such that X_{r:n}(s) ≥ f_p(s), and that the length of the interval h_p(t) is the smallest possible. One can retrieve (3)–(4) by setting n = 1 and p = (2 − r̂α)/(2α) + 1 = (2 + α)/(2α). Theorem 2 not only generalizes [12, Theorem 1.1], it also unveils the hitherto missing structure of the lower bound for ξ_p(t) by relating it, via h_p(t), to the asymptotics of the tail distribution of the supremum of the underlying process evaluated at f_p(t); in (3), t(log t)^{(α−2)/(2α)} is of the same asymptotic order as the reciprocal of P( sup_{s∈[0,1]} X(s) > (2 log t)^{1/2} ). This sheds new light on this type of results, which appear to be intrinsically connected with Gumbel limit theorems; see, e.g., [7], where the function h_p(t) plays a crucial role. We shall pursue this elsewhere.
The paper is organized as follows. In Sect. 2, we provide a collection of basic results
on order statistics of stationary Gaussian processes, used throughout the paper, and
prove auxiliary lemmas, which constitute building blocks of the proofs of the main
results. These are given in the final part of the paper, Sect. 3.
2 Auxiliary Lemmas
We begin with some auxiliary lemmas that are later needed in the proofs.
The following lemma is the general form of the Borel–Cantelli lemma; cf. [13].
Lemma 1 Consider a sequence of events {E_k : k ≥ 0}. If

Σ_{k=0}^∞ P(E_k) < ∞,

then P(E_n i.o.) = 0. Whereas, if

Σ_{k=0}^∞ P(E_k) = ∞ and lim inf_{n→∞} ( Σ_{1≤k≠t≤n} P(E_k E_t) ) / ( Σ_{k=1}^n P(E_k) )^2 ≤ 1,

then P(E_n i.o.) = 1.
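For orientation (our example, not the paper's): independent events E_k with P(E_k) = 1/k satisfy both hypotheses of the second part, since then P(E_k E_t) = P(E_k)P(E_t) and the ratio in the condition tends to 1 from below, as a quick computation shows:

```python
def bc_ratio(n):
    """Ratio  sum_{1<=k!=t<=n} P(E_k)P(E_t) / (sum_{k<=n} P(E_k))^2  for
    independent events with P(E_k) = 1/k, where P(E_k E_t) = P(E_k)P(E_t)."""
    h1 = sum(1.0 / k for k in range(1, n + 1))       # sum of P(E_k)
    h2 = sum(1.0 / k**2 for k in range(1, n + 1))    # sum of P(E_k)^2
    return (h1 * h1 - h2) / (h1 * h1)

print(bc_ratio(10**4), bc_ratio(10**6))   # increases towards 1 from below
```

Since the harmonic series diverges while Σ 1/k² converges, the ratio is 1 − O(1/(log n)²), so the lim inf condition holds with value 1.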
The following two lemmas constitute useful tools for approximating the supremum
of Xr:n on a fixed interval by its maximum on a grid with a sufficiently dense mesh.
Lemma 2 There exist positive constants K, c and u_0 such that

P( max_{0≤j≤u^{2/α}/θ} X_{r:n}(jθu^{−2/α}) ≤ u − θ^{α/4}/u, sup_{t∈[0,1]} X_{r:n}(t) > u ) ≤ K u^{2r̂/α} (Ψ(u))^{r̂} θ^{2/α−1} Ψ( c θ^{−α/4} )

for each θ > 0 and u ≥ u_0.

Proof Note that, by stationarity, there exists a constant K, that may vary from line to line, such that, for sufficiently large u,

P( max_{0≤j≤u^{2/α}/θ} X_{r:n}(jθu^{−2/α}) ≤ u − θ^{α/4}/u, sup_{t∈[0,1]} X_{r:n}(t) > u )
 ≤ K u^{2/α} θ^{−1} P( X_{r:n}(0) ≤ u − θ^{α/4}/u, sup_{t∈[0,1]} X_{r:n}(t) > u )
 ≤ K u^{2r̂/α} (Ψ(u))^{r̂} θ^{2/α−1} Ψ( c θ^{−α/4} ).

The last inequality follows from (5) and the classical result of [7, Lemma 12.2.5], where the constant c > 0 is given therein. □
The proof of the following lemma follows line-by-line the same reasoning as the proof of [1, Theorem 2.2], and thus we omit it.
Lemma 3 For any θ > 0, as u → ∞,
The next lemma follows directly from [4, Theorem 2.4] and is a generalization of the classical Berman inequality to order statistics.

Lemma 4 For some n, d ≥ 1 and any 1 ≤ l ≤ n, let {ξ_l^(0)(i) : 1 ≤ i ≤ d} and {ξ_l^(1)(i) : 1 ≤ i ≤ d} be sequences of N(0, 1) variables and set σ_{il,jk}^(κ) = E ξ_l^(κ)(i) ξ_k^(κ)(j), κ = 0, 1. For any 1 ≤ r ≤ n and 1 ≤ i ≤ d, let ξ_{r:n}^(κ)(i) be the r-th order statistic of ξ_1^(κ)(i), ..., ξ_n^(κ)(i). Suppose that, for any 1 ≤ i, j ≤ d, 1 ≤ l, k ≤ n, κ = 0, 1,

σ_{il,jk}^(κ) = σ_{ij}^(κ) 1_{l=k}

for some σ_{ij}^(κ). Now define

ρ_{ij} = max( |σ_{ij}^(0)|, |σ_{ij}^(1)| ),   A_{ij}^(r) = ∫_{σ_{ij}^(0)}^{σ_{ij}^(1)} (1 + h)^{−(n−r)/2} (1 − h^2)^{−1/2} dh.

Then, for any u_1, ..., u_d > 0, for some positive constant C_{n,r} depending only on n and r,

P( ∩_{i=1}^d {ξ_{r:n}^(1)(i) ≤ u_i} ) ≥ P( ∩_{i=1}^d {ξ_{r:n}^(0)(i) ≤ u_i} ) − C_{n,r} Σ_{1≤i<j≤d} (u_i u_j)^{−(n−r)} ( −A_{ij}^(r) )^+ exp( − r̂(u_i^2 + u_j^2)/(2(1 + ρ_{ij})) ).
Lemma 5 Under the conditions of Theorem 2, for any ε ∈ (0, 1), there exist positive constants K and ρ, depending only on ε, α and λ, such that the quantity P_2 defined in the proof below satisfies P_2 ≤ K S^{−ρ} for all sufficiently large S.

Proof Let, for any i ≥ 0 and ε ∈ (0, 1),

s_i = S + i(1 + ε), t_i = s_i + 1, x_i = f_p(t_i), I_i = (s_i, t_i].
For some θ > 0, define grid points in the interval I_i as follows:

s_{i,u} = s_i + u q_i, 0 ≤ u ≤ L_i, L_i = [1/q_i], q_i = θ x_i^{−2/α}.   (8)

Since f_p is an increasing function, it easily follows that the intervals I_i, 0 ≤ i ≤ T(S, ε), with T(S, ε) = [(T − S − 1)/(1 + ε)], are contained in [S, T].
For any 1 ≤ l ≤ n and i ≥ 0, let X_{l,i} be an independent copy of the process X_l. Define a sequence of processes Y_l = {Y_l(t) : t ∈ ∪_i I_i} by Y_l(t) = X_{l,i}(t) if t ∈ I_i. Let Y_{r:n} = {Y_{r:n}(t) : t ≥ 0} be the r-th order statistic of Y_1, ..., Y_n. Put

σ_{il,jk}^(0) := E X_l(i)X_k(j) = r(j − i) 1_{l=k} =: σ_{ij}^(0) 1_{l=k},

σ_{il,jk}^(1) := E Y_l(i)Y_k(j) = r(j − i) 1_{l=k} 1_{∃m: i,j∈I_m} =: σ_{ij}^(1) 1_{l=k},

ρ_{ij} = max( |σ_{ij}^(0)|, |σ_{ij}^(1)| ) = |r(j − i)|,

and note that A_{ij}^(r) = 0 whenever i and j belong to the same interval I_m, since then σ_{ij}^(1) = σ_{ij}^(0).
Without loss of generality assume that λ < 2. From (2) it follows that there is s_0 such that, for every s > s_0,

r*(s) ≤ s^{−λ} ≤ min(1, λ)/4.

By Lemma 4, the difference between the distributions of the maxima of X_{r:n} and Y_{r:n} over the grid points is controlled by

P_2 := C_{n,r} Σ_{(i,u)≠(j,v)} (x_i x_j)^{−(n−r)} | Ã^{(r)}_{s_{i,u}s_{j,v}} | exp( − r̂(x_i^2 + x_j^2)/(2(1 + |r(s_{j,v} − s_{i,u})|)) ),   (9)

where Ã^{(r)}_{s_{i,u}s_{j,v}} denotes the quantity A^(r) of Lemma 4 evaluated for the pair of grid points (s_{i,u}, s_{j,v}). Finally, since the integrand in the definition of Ã^{(r)}_{s_{i,u}s_{j,v}} is continuous and bounded on [0, r*(ε)], there exists a generic constant K, not depending on S and T, which may differ from line to line, such that

Ã^{(r)}_{s_{i,u}s_{j,v}} ≤ K |r(s_{j,v} − s_{i,u})| ≤ K r*((j − i)ε).
Therefore, for sufficiently large S,

P_2 ≤ K Σ_{0≤i<j≤T(S,ε)} L_i L_j r*((j − i)ε) exp( − r̂(x_i^2 + x_j^2)/(2(1 + r*((j − i)ε))) )
 ≤ K ( Σ_{0≤i<j≤T(S,ε), 0<j−i≤2s_0} + Σ_{0≤i<j≤T(S,ε), j−i>2s_0} ) (·).

We can bound the first sum from above by

K Σ_{i=0}^∞ x_i^{4/α} exp( − r̂ x_i^2/(1 + r*(ε)) ) ≤ K Σ_{i=0}^∞ t_i^{−2/(1+√r*(ε))} ≤ K Σ_{i=0}^∞ (S + i)^{−2/(1+√r*(ε))} ≤ K S^{−(1−√r*(ε))/4}.

The second sum is bounded from above by

K Σ_{j−i>2s_0} x_i^{2/α} x_j^{2/α} (j − i)^{−λ} exp( − r̂(x_i^2 + x_j^2)/(2(1 + λ/4)) )
 ≤ K Σ_{S≤i<j<∞} i^{−1/(1+λ/2)} j^{−1/(1+λ/2)} (j − i)^{−λ}
 ≤ K ( S^{1−2/(1+λ/2)} log S · 1_{λ∈[1,2)} + S^{2−λ−2/(1+λ/2)} · 1_{λ∈(0,1)} ).

Hence, for some positive constant ρ, depending only on ε, α and λ,

P_2 ≤ K S^{−ρ},

which finishes the proof. □
Lemma 6 Under the conditions of Theorem 2, for any ε ∈ (0, 1), there exist positive constants K and ρ, depending only on ε, α and λ, such that, with the notation introduced in the proof below,

P( ∩_{i=0}^{[T−S]} { max_{0≤u≤L_i} X_{r:n}(a_{i,u}) ≤ ŷ_i } ) ≥ (1/4) exp( −(1 + ε) ∫_S^T P( sup_{t∈[0,1]} X_{r:n}(t) > f_p(u) ) du ) − K S^{−ρ}.
Proof Let, for any i ≥ 0, a_i = S + i, so that y_i = f_p(a_i). Define grid points in the interval (a_i, a_{i+1}] as follows:

a_{i,u} = a_i + u q_i, 0 ≤ u ≤ L_i, L_i = [1/q_i], q_i = θ_i y_i^{−2/α}, θ_i = y_i^{−8/α}.

Finally, put ŷ_i = y_i − θ_i^{α/4}/y_i. Similarly as in the proof of Lemma 5, using Lemma 4, we have
P( ∩_{i=0}^{[T−S]} { max_{0≤u≤L_i} X_{r:n}(a_{i,u}) ≤ ŷ_i } )
 ≥ ∏_{i=0}^{[T−S]} P( max_{0≤u≤L_i} X_{r:n}(a_{i,u}) ≤ ŷ_i )
  − C_{n,r} Σ_{0≤i<j≤[T−S]} Σ_{0≤u≤L_i, 0≤v≤L_j} (ŷ_i ŷ_j)^{−(n−r)} ( −Ã^{(r)}_{a_{i,u}a_{j,v}} )^+ exp( − r̂(ŷ_i^2 + ŷ_j^2)/(2(1 + r(a_{j,v} − a_{i,u}))) )
 =: P_1 − P_2,

where Ã^{(r)}_{a_{i,u}a_{j,v}} is as in (9).

Estimate of P_1. Note that, by Lemma 3 combined with Eq. (5),

P_1 ≥ (1/4) exp( − Σ_{i=0}^{[T−S]} P( max_{0≤u≤L_i} X_{r:n}(a_{i,u}) > ŷ_i ) )
 ≥ (1/4) exp( − Σ_{i=0}^{[T−S]} P( sup_{t∈[0,1]} X_{r:n}(t) > ŷ_i ) )
 ≥ (1/4) exp( −(1 + ε) Σ_{i=0}^{[T−S]} P( sup_{t∈[0,1]} X_{r:n}(t) > y_i ) )
 ≥ (1/4) exp( −(1 + ε) ∫_S^T P( sup_{t∈[0,1]} X_{r:n}(t) > f_p(u) ) du ),

provided that S is sufficiently large.
Estimate of P_2. Noting that, for j ≥ i + 2 and any 0 ≤ u ≤ L_i, 0 ≤ v ≤ L_j,

a_{j,v} − a_{i,u} = a_j + v q_j − a_i − u q_i ≥ j − i − 1,

we get

sup_{0≤u≤L_i, 0≤v≤L_j} r(a_{j,v} − a_{i,u}) ≤ sup_{s−s′≥j−i−1} r(s − s′) = r*(j − i − 1) ≤ r*(1) < 1.   (11)

Since the integrand in the definition of Ã^{(r)}_{a_{i,u}a_{j,v}} is continuous and bounded on [0, r*(1)], there exists a constant K such that

Ã^{(r)}_{a_{i,u}a_{j,v}} ≤ K r(a_{j,v} − a_{i,u}) ≤ K r*(j − i − 1) < K.
On the other hand, by (1), there exists a positive constant s_0 < 1 such that, for every 0 ≤ s ≤ s_0,

Ã^{(r)}_{0s} ≥ r(s) ≥ 1 − 2s^α > 0.   (12)

Hence,

( −Ã^{(r)}_{a_{i,u}a_{j,v}} )^+ = 0, if j = i + 1 and 1 + v q_j − u q_i ≤ s_0,
r(a_{j,v} − a_{i,u}) ≤ r*(s_0) < 1, if j = i + 1 and 1 + v q_j − u q_i > s_0.   (13)
Therefore, by (11)–(13), we obtain

P_2 ≤ Σ_{0≤i≤[T−S]−1, j=i+1} Σ_{0≤u≤L_i, 0≤v≤L_j} (1 − r*(s_0))^{−1/2} exp( − r̂(ŷ_i^2 + ŷ_j^2)/(2(1 + r*(s_0))) )
 + Σ_{0≤i≤[T−S]−2} Σ_{i+2≤j≤[T−S]} Σ_{0≤u≤L_i, 0≤v≤L_j} r*(j − i − 1) (1 − r*(1))^{−1/2} exp( − r̂(ŷ_i^2 + ŷ_j^2)/(2(1 + r*(j − i − 1))) ).

Completely similarly to the estimation of P_2 in the proof of Lemma 5, we arrive at the conclusion that there exist positive constants K and ρ, independent of S and T, such that, for sufficiently large S,

P_2 ≤ K S^{−ρ}. □
The following lemma is a straightforward modification of Lemmas 3.1 and 4.1 of [14] and of [11, Lemma 1.4].

Lemma 7 If Theorem 1 is true under the additional condition that, for large t,

(2/r̂) log t ≤ f^2(t) ≤ (3/r̂) log t,   (14)

then it is true without the additional condition.
3 Proofs of the Main Results
Proof of Theorem 1 Note that the case I f < ∞ is straightforward and does not need
any additional knowledge on process Xr:n apart from the assumption of stationarity.
Indeed, for sufficiently large T ,
∞
where, recall, si,u = S + i (1 + ε) + uθ xi−2/α, Li = [1/(θ xi−2/α)], θ , ε > 0.
Furthermore, for sufficiently large S and θ , cf. estimation of P1,
∞
i=0
Let Ei = {max1≤u≤Li Xr:n(si,u ) ≤ xi }, and note that
sup Xr:n(t ) > f (u) du = ∞.
t∈[0,1]
(15)
1 − P( E_i^c i.o. ) = lim_{m→∞} ∏_{k=m}^∞ P(E_k) + lim_{m→∞} ( P( ∩_{k=m}^∞ E_k ) − ∏_{k=m}^∞ P(E_k) ).

The first limit is zero as a consequence of (15), and the second limit is zero because of the asymptotic independence of the events E_k. Indeed, there exist positive constants K and ρ such that, for any n > m,

A_{m,n} := | P( ∩_{k=m}^n E_k ) − ∏_{k=m}^n P(E_k) | ≤ K(S + m)^{−ρ},

by the same calculations as in the estimate of P_2 in Lemma 5, after realizing that, by Lemma 7, we may restrict ourselves to the case when (14) holds. Therefore, P( E_i^c i.o. ) = 1, which finishes the proof. □
Proof of Theorem 2 Step 1. Let p > 1; then, for every ε ∈ (0, 1/4),

lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) ≥ −(1 + 2ε)^2   a.s.

Proof Put T_k = exp(k^{(1−ε²)/p}) and S_k = T_k − (1 + 2ε)^2 h_p(T_k), k ≥ 1. One checks that

∫_{S_k}^{T_k} P( sup_{t∈[0,1]} X_{r:n}(t) > f_p(u) ) du ∼ (1 + 2ε)^2 p log_2 T_k,   (16)

so that Σ_k P(ξ_p(T_k) ≤ S_k) < ∞ and, by the Borel–Cantelli lemma,

lim inf_{k→∞} (ξ_p(T_k) − T_k)/h_p(T_k) ≥ −(1 + 2ε)^2   a.s.   (17)
Since ξ_p(t) is a nondecreasing random function of t, for every T_k ≤ t ≤ T_{k+1} we have

(ξ_p(t) − t)/h_p(t) ≥ (ξ_p(T_k) − T_{k+1})/h_p(T_{k+1}) = ( h_p(T_k)/h_p(T_{k+1}) ) ( (ξ_p(T_k) − T_k)/h_p(T_k) − (T_{k+1} − T_k)/h_p(T_k) ).

For p > 1, elementary calculus implies

lim_{k→∞} (T_{k+1} − T_k)/h_p(T_k) = 0,   lim_{k→∞} h_p(T_k)/h_p(T_{k+1}) = 1,

hence

lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) ≥ lim inf_{k→∞} (ξ_p(T_k) − T_k)/h_p(T_k)   a.s.,

which finishes the proof of this step.
Step 2. Let p > 1; then, for every ε ∈ (0, 1/4),

lim inf_{t→∞} (ξ_p(t) − t)/h_p(t) ≤ −(1 − ε)   a.s.

Proof As in the proof of the lower bound, put

T_k = exp(k^{(1+ε²)/p}), S_k = T_k − (1 − ε) h_p(T_k), k ≥ 1,

and

B_k = {ξ_p(T_k) ≤ S_k} = { sup_{S_k<s≤T_k} X_{r:n}(s)/f_p(s) < 1 }.

It suffices to show that P(B_n i.o.) = 1, that is,

lim_{m→∞} P( ∪_{k=m}^∞ B_k ) = 1.   (18)

Let a_i^k = S_k + i, y_i^k = f_p(a_i^k), and define grid points in the interval [a_i^k, a_{i+1}^k] as follows:

a_{i,u}^k = a_i^k + u q_i^k, 0 ≤ u ≤ L_i^k, L_i^k = [1/q_i^k], q_i^k = θ_i^k (y_i^k)^{−2/α}, θ_i^k = (y_i^k)^{−8/α}.

Put

A_k = ∩_{i=0}^{[T_k−S_k]} { max_{0≤u≤L_i^k} X_{r:n}(a_{i,u}^k) ≤ ŷ_i^k }, where ŷ_i^k = y_i^k − (θ_i^k)^{α/4}/y_i^k.

Clearly, for m ≥ 1,

P( ∪_{k=m}^∞ A_k ) ≤ P( ∪_{k=m}^∞ B_k ) + Σ_{k=m}^∞ P( A_k ∩ B_k^c ).
Then, by Lemma 2, for some constant K independent of S and T, which may vary between (and among) lines,

Σ_{k=m}^∞ P( A_k ∩ B_k^c ) ≤ Σ_{k=m}^∞ Σ_{i=0}^{[T_k−S_k]} P( max_{0≤u≤L_i^k} X_{r:n}(a_{i,u}^k) ≤ ŷ_i^k, sup_{s∈[0,1]} X_{r:n}(s) ≥ y_i^k )
 ≤ K Σ_{k=m}^∞ Σ_{i=0}^{[T_k−S_k]} (y_i^k)^{2r̂/α} (Ψ(y_i^k))^{r̂} (θ_i^k)^{2/α−1} Ψ( c (θ_i^k)^{−α/4} )
 ≤ K Σ_{k=m}^∞ Σ_{i=0}^{[T_k−S_k]} (a_i^k log^{1−p} a_i^k)^{−1} (log a_i^k)^{4/α−3α} exp( −2 log a_i^k )
 ≤ K Σ_{k=m}^∞ Σ_{i=0}^{[T_k−S_k]} (S_k + i)^{−3} (log(S_k + i))^{4/α−3α+p−1}
 ≤ K Σ_{k=m}^∞ S_k^{−1} ≤ K m^{−4},
provided m is large enough. Therefore,

lim_{m→∞} Σ_{k=m}^∞ P( A_k ∩ B_k^c ) = 0

and

lim_{m→∞} P( ∪_{k=m}^∞ B_k ) ≥ lim_{m→∞} P( ∪_{k=m}^∞ A_k ).

To finish the proof of (18), we only need to show that

lim_{m→∞} P( ∪_{k=m}^∞ A_k ) = 1.   (19)

Similarly to (16), we have

∫_{S_k}^{T_k} P( sup_{t∈[0,1]} X_{r:n}(t) > f_p(u) ) du ∼ (1 − ε) p log_2 T_k.

Now from Lemma 6 it follows that

P(A_k) ≥ (1/4) exp( −(1 + ε)(1 − ε) p log_2 T_k ) − K S_k^{−ρ} ≥ (1/8) k^{−(1−ε⁴)}

for every k sufficiently large. Hence,

Σ_{k=1}^∞ P(A_k) = ∞.   (20)
Applying Lemma 4, we get, for 0 ≤ t < k,

P(A_k A_t) ≤ P(A_k) P(A_t) + M_{k,t},   (21)

where, similarly to the proof of Lemma 5,

M_{k,t} = C_{n,r} Σ_{0≤i≤[T_k−S_k], 0≤u≤L_i^k} Σ_{0≤j≤[T_t−S_t], 0≤v≤L_j^t} (ŷ_i^k ŷ_j^t)^{−(n−r)} Ã^{(r)}_{a_{i,u}^k a_{j,v}^t} exp( − r̂((ŷ_i^k)^2 + (ŷ_j^t)^2)/(2(1 + r(a_{i,u}^k − a_{j,v}^t))) )

and

Ã^{(r)}_{a_{i,u}^k a_{j,v}^t} ≤ K r(a_{i,u}^k − a_{j,v}^t).

It is easy to see that, for 0 ≤ t < k and k large enough, and assuming without loss of generality λ < 2,

r(a_{i,u}^k − a_{j,v}^t) ≤ r*(S_k − T_t) ≤ r*(S_k − T_{k−1}) ≤ r*( (T_k − T_{k−1})/2 ) ≤ 2K (T_k − T_{k−1})^{−λ} ≤ min(1, λ)/16.
Therefore,

M_{k,t} ≤ K (T_k − T_{k−1})^{−λ} Σ_{0≤i≤[T_k−S_k]} Σ_{0≤j≤[T_t−S_t]} L_i^k L_j^t exp( − r̂((ŷ_i^k)^2 + (ŷ_j^t)^2)/(2(1 + min(1, λ)/16)) )
 ≤ K (T_k − T_{k−1})^{−λ} log^{5/α} T_k · log^{5/α} T_t · T_k^{λ/4} T_t^{λ/4}
 ≤ K T_k^{−λ/4} ≤ K exp( −λ k^{(1+ε²)/p}/4 ).

Hence we have

Σ_{0≤t<k<∞} M_{k,t} < ∞.   (22)

Now (19) follows from (21), (22) and (20) and the general form of the Borel–Cantelli lemma.

Step 3. If p ∈ (0, 1], then, for every ε ∈ (0, 1/4),

lim inf_{t→∞} log(ξ_p(t)/t)/(h_p(t)/t) ≥ −(1 + 2ε)^2   a.s.   (23)

and

lim inf_{t→∞} log(ξ_p(t)/t)/(h_p(t)/t) ≤ −(1 − ε)   a.s.   (24)

Proof Put

T_k = exp(k^{1/p}), S_k = T_k exp( −(1 + 2ε)^2 h_p(T_k)/T_k ).

Proceeding the same as in the proof of (17), one can obtain that

lim inf_{k→∞} log(ξ_p(T_k)/T_k)/(h_p(T_k)/T_k) ≥ −(1 + 2ε)^2   a.s.

On the other hand, since ξ_p is nondecreasing, it is clear that

lim inf_{t→∞} log(ξ_p(t)/t)/(h_p(t)/t) ≥ lim inf_{k→∞} log(ξ_p(T_k)/T_k)/(h_p(T_k)/T_k)   a.s.,

since

lim_{k→∞} log(T_k/T_{k+1})/(h_p(T_k)/T_k) = 0,   lim_{k→∞} (h_p(T_k)/T_k) · (T_{k+1}/h_p(T_{k+1})) = 1.

This proves (23). Let

T_k = exp(k^{(1+ε²)/p}), S_k = T_k exp( −(1 − ε) h_p(T_k)/T_k ).

Noting that

{ξ_p(T_k) ≤ S_k} = { sup_{S_k<s≤T_k} X_{r:n}(s)/f_p(s) < 1 }

and proceeding along the same lines as in the proof of (18), we also have

lim inf_{k→∞} log(ξ_p(T_k)/T_k)/(h_p(T_k)/T_k) ≤ −(1 − ε)   a.s.,

which proves (24). □
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Dębicki, K., Hashorva, E., Ji, L., Tabiś, K.: On the probability of conjunctions of stationary Gaussian processes. Stat. Probab. Lett. 88, 141–148 (2014)
2. Dębicki, K., Hashorva, E., Ji, L., Ling, C.: Extremes of order statistics of stationary processes. TEST 24, 229–248 (2015)
3. Dębicki, K., Hashorva, E., Ji, L., Tabiś, K.: Extremes of vector-valued Gaussian processes: exact asymptotics. Stoch. Process. Appl. 125, 4039–4065 (2015)
4. Dębicki, K., Hashorva, E., Ji, L., Ling, C.: Comparison inequalities for order statistics of Gaussian arrays (2016). arXiv:1503.09094v1
5. Hashorva, E., Ji, L., Piterbarg, V.I.: On the supremum of γ-reflected processes with fractional Brownian motion as input. Stoch. Process. Appl. 123, 4111–4127 (2013)
6. Hashorva, E., Korshunov, D., Piterbarg, V.I.: Asymptotic expansion of Gaussian chaos via probabilistic approach. Extremes 18(3), 315–347 (2015)
7. Leadbetter, M.R., Lindgren, G., Rootzén, H.: Extremes and Related Properties of Random Sequences and Processes. Springer, Berlin (1983)
8. Pickands III, J.: Upcrossing probabilities for stationary Gaussian processes. Trans. Am. Math. Soc. 145, 51–73 (1969)
9. Piterbarg, V.I.: Asymptotic Methods in the Theory of Gaussian Processes and Fields, Translations of Mathematical Monographs, vol. 148. American Mathematical Society, Providence (1996)
10. Piterbarg, V.I., Prisyazhnyuk, V.: Asymptotic behavior of the probability of a large excursion for a nonstationary Gaussian process. Theory Probab. Math. Stat. 18, 121–133 (1978)
11. Qualls, C., Watanabe, H.: An asymptotic 0–1 behavior of Gaussian processes. Ann. Math. Stat. 42(6), 2029–2035 (1971)
12. Shao, Q.M.: An Erdös–Révész type law of the iterated logarithm for stationary Gaussian processes. Probab. Theory Relat. Fields 94, 119–133 (1992)
13. Spitzer, F.: Principles of Random Walk. Van Nostrand, Princeton (1964)
14. Watanabe, H.: An asymptotic property of Gaussian processes. Trans. Am. Math. Soc. 148(1), 233–248 (1970)