#### Heavy-Tailed Random Walks on Complexes of Half-Lines

Mikhail V. Menshikov · Dimitri Petritis · Andrew R. Wade (corresponding author)
IRMAR, Campus de Beaulieu, 35042 Rennes Cedex, France
Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE, UK
We study a random walk on a complex of finitely many half-lines joined at a common origin; jumps are heavy-tailed and of two types, either one-sided (towards the origin) or two-sided (symmetric). Transmission between half-lines via the origin is governed by an irreducible Markov transition matrix, with associated stationary distribution μ_k. If χ_k is 1 for one-sided half-lines k and 1/2 for two-sided half-lines, and α_k is the tail exponent of the jumps on half-line k, we show that the recurrence classification for the case where all α_k χ_k ∈ (0, 1) is determined by the sign of ∑_k μ_k cot(χ_k π α_k). In the case of two half-lines, the model fits naturally on R and is a version of the oscillating random walk of Kemperman. In that case, the cotangent criterion for recurrence becomes linear in α_1 and α_2; our general setting exhibits the essential nonlinearity in the cotangent criterion. For the general model, we also show existence and nonexistence of polynomial moments of return times. Our moments results are sharp (and new) for several cases of the oscillating random walk; they are apparently even new for the case of a homogeneous random walk on R with symmetric increments of tail exponent α ∈ (1, 2).
Mathematics Subject Classification (2010) 60J05 (Primary); 60J10; 60G50 (Secondary)
1 Introduction
We study Markov processes on a complex of half-lines R+ × S, where S is finite,
and all half-lines are connected at a common origin. On a given half-line, a particle
performs a random walk with a heavy-tailed increment distribution, until it would exit
the half-line, when it switches (in general, at random) to another half-line to complete
its jump.
To motivate the development of the general model, we first discuss informally some
examples; we give formal statements later.
The one-sided oscillating random walk takes place on two half-lines, which we
may map onto R. From the positive half-line, the increments are negative with density
proportional to y−1−α, and from the negative half-line, the increments are positive
with density proportional to y−1−β , where α, β ∈ (0, 1). The walk is transient if and
only if α + β < 1; this is essentially a result of Kemperman [15].
The oscillating random walk has several variations and has been well studied over
the years (see, e.g. [17–19,21]). This previous work, as we describe in more detail
below, is restricted to the case of two half-lines. We generalize this model to an arbitrary
number of half-lines, labelled by a finite set S, by assigning a rule for travelling from
half-line to half-line.
First, we describe a deterministic rule. Let T be a positive integer. Define a routing
schedule of length T to be a sequence σ = (i1, . . . , iT ) of T elements of S, dictating
the sequence in which half-lines are visited, as follows. The walk starts from line i1,
and, on departure from line i1 jumps over the origin to i2, and so on, until departing iT it
returns to i1; on line k ∈ S, the walk jumps towards the origin with density proportional
to y−1−αk where αk ∈ (0, 1). One simple example takes a cyclic schedule in which σ
is a permutation of the elements of S. In any case, a consequence of our results is that
now the walk is transient if and only if
∑_{k∈S} μ_k cot(π α_k) > 0,    (1.1)
where μ_k is the number of times k appears in the sequence σ. In particular, in the cyclic case the transience criterion is ∑_{k∈S} cot(π α_k) > 0.
It is easy to see that, if S contains two elements, the cotangent criterion (1.1) is
equivalent to the previous one for the one-sided oscillating walk (α1 + α2 < 1). For
more than two half-lines, the criterion is nonlinear, and it was necessary to extend the
model to more than two lines in order to see the essence of the behaviour.
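As a quick numerical companion to the discussion above, the criterion (1.1) can be evaluated directly from a schedule. The sketch below is ours (function names and the two-line cross-check against Kemperman's condition are illustrative, not from the paper):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def classify_schedule(schedule, alpha):
    """Evaluate the cotangent criterion (1.1) for a deterministic routing
    schedule sigma over one-sided half-lines: transience holds iff
    sum_k mu_k * cot(pi * alpha_k) > 0, where mu_k counts how many times
    line k appears in sigma."""
    counts = {}
    for k in schedule:
        counts[k] = counts.get(k, 0) + 1
    criterion = sum(n * cot(math.pi * alpha[k]) for k, n in counts.items())
    return criterion, ("transient" if criterion > 0 else "recurrent/critical")

# Two lines visited cyclically: recovers Kemperman's condition, since
# cot(pi a) + cot(pi b) > 0 exactly when a + b < 1 for a, b in (0, 1).
crit, verdict = classify_schedule(["a", "b"], {"a": 0.3, "b": 0.4})
```

With α_1 + α_2 = 0.7 < 1 the criterion is positive, matching the transience condition for the one-sided oscillating walk.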
More generally, we may choose a random routing rule between lines: on departure
from half-line i ∈ S, the walk jumps to half-line j ∈ S with probability p(i, j ). The
deterministic cyclic routing schedule is a special case in which p(i, i′) = 1 for i′ the
successor to i in the cycle. In fact, this set-up generalizes the arbitrary deterministic
routing schedule described above, as follows. Given the schedule sequence σ of length
T , we may convert this to a cyclic schedule on an extended state space consisting of
μk copies of line k and then reading σ as a permutation. So the deterministic routing
model is a special case of the model with Markov routing, which will be the focus of
the rest of the paper.
Our result again will say that (1.1) is the criterion for transience, where μk is now the
stationary distribution associated with the stochastic matrix p(i, j ). Our general model
also permits two-sided increments for the walk from some of the lines, which contribute
terms involving cot(π αk /2) to the cotangent criterion (1.1). These two-sided models
also generalize previously studied classical models (see, e.g. [15,17,23]). Again, it
is only in our general setting that the essential nature of the cotangent criterion (1.1)
becomes apparent.
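The stationary distribution μ entering the criterion can be computed from any routing matrix p(i, j). A minimal sketch (our code; Cesàro-averaged power iteration is used so that periodic, e.g. cyclic, routing is also handled):

```python
def stationary_distribution(P, iters=20000):
    """Approximate the stationary distribution mu of a finite irreducible
    stochastic matrix P (row-stochastic, as a list of lists) by Cesaro-
    averaging the power iteration mu <- mu P; the averages converge to the
    unique solution of (2.2) even for periodic (e.g. purely cyclic) routing."""
    n = len(P)
    mu = [1.0 / n] * n
    avg = [0.0] * n
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
        avg = [a + m for a, m in zip(avg, mu)]
    return [a / iters for a in avg]

# Three half-lines: from line 0 the walk switches to line 1 or 2 with
# equal probability; lines 1 and 2 route deterministically.
P = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
mu = stationary_distribution(P)   # close to (0.4, 0.4, 0.2)
```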
Rather than R+ × S, one could work on Z+ × S instead, with mass functions
replacing probability densities; the results would be unchanged.
The paper is organized as follows. In Sect. 2, we formally define our model and
describe our main results, which as well as a recurrence classification include results
on existence of moments of return times in the recurrent cases. In Sect. 3, we explain
how our general model relates to the special case of the oscillating random walk when
S has two elements, and state our results for that model; in this context, the
recurrence classification results are already known, but the existence-of-moments results
are new even here, and are in several important cases sharp. The present work was also
motivated by some problems concerning many-dimensional, partially homogeneous
random walks similar to models studied in [5,6,13]: we describe this connection in
Sect. 4. The main proofs are presented in Sects. 5, 6, and 7, the latter dealing with the
critical boundary case which is more delicate and requires additional work. We collect
various technical results in the Appendix.
2 Model and Results
Consider (X_n, ξ_n; n ∈ Z_+), a discrete-time, time-homogeneous Markov process with
state space R+ × S, where S is a finite non-empty set. The state space is equipped
with the appropriate Borel sets, namely sets of the form B × A where B ∈ B(R+) is
a Borel set in R+, and A ⊆ S. The process will be described by:
– an irreducible stochastic matrix labelled by S, P = ( p(i, j ); i, j ∈ S); and
– a collection (w_i; i ∈ S) of probability density functions, so w_i : R → R_+ is a Borel function with ∫_R w_i(y) dy = 1.
We view R+ × S as a complex of half-lines R+ × {k}, or branches, connected at a
central origin O := {0} × S; at time n, the coordinate ξn describes which branch the
process is on, and Xn describes the distance along that branch at which the process
sits. We will call Xn a random walk on this complex of branches.
To simplify notation, throughout we write Px,i [ · ] for P[ · | (X0, ξ0) = (x , i )],
the conditional probability starting from (x , i ) ∈ R+ × S; similarly, we use Ex,i
for the corresponding expectation. The transition kernel of the process is given for
(x , i ) ∈ R+ × S, for all Borel sets B ⊆ R+ and all j ∈ S, by
P[(X_{n+1}, ξ_{n+1}) ∈ B × {j} | (X_n, ξ_n) = (x, i)] = P_{x,i}[(X_1, ξ_1) ∈ B × {j}]
= p(i, j) ∫_B w_i(−z − x) dz + 1{i = j} ∫_B w_i(z − x) dz.    (2.1)
The dynamics of the process represented by (2.1) can be described algorithmically
as follows. Given (Xn, ξn) = (x , i ) ∈ R+ × S, generate (independently) a spatial
increment ϕn+1 from the distribution given by wi and a random index ηn+1 ∈ S
according to the distribution p(i, · ). Then,
– if x + ϕn+1 ≥ 0, set (Xn+1, ξn+1) = (x + ϕn+1, i ); or
– if x + ϕn+1 < 0, set (Xn+1, ξn+1) = (|x + ϕn+1|, ηn+1).
In words, the walk takes a wξn -distributed step. If this step would bring the walk
beyond the origin, it passes through the origin and switches onto branch ηn+1 (or, if
ηn+1 happens to be equal to ξn, it reflects back along the same branch).
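The algorithmic description above translates directly into a simulator. The following sketch is ours and uses Pareto jump magnitudes as a convenient stand-in for a density satisfying (A1); it is illustrative, not the paper's exact model:

```python
import random

def simulate_walk(P, alpha, chi, x0=1.0, i0=0, steps=1000, rng=None):
    """Simulate the walk on the complex of half-lines following the
    algorithmic description above. Jump magnitudes are Pareto(alpha[i]),
    a stand-in for a density in D_{alpha,c}; chi[i] == 1.0 marks a
    one-sided line (jumps towards the origin), chi[i] == 0.5 a two-sided
    line (symmetric jumps)."""
    rng = rng or random.Random(0)
    x, i = x0, i0
    path = [(x, i)]
    for _ in range(steps):
        mag = rng.paretovariate(alpha[i])               # P[mag > t] = t^{-alpha}
        if chi[i] == 1.0:
            phi = -mag                                   # one-sided increment
        else:
            phi = mag if rng.random() < 0.5 else -mag    # symmetric increment
        if x + phi >= 0:
            x = x + phi                   # stay on branch i
        else:
            x = abs(x + phi)              # pass through the origin ...
            r, acc = rng.random(), 0.0    # ... and switch branch via p(i, .)
            for j, pij in enumerate(P[i]):
                acc += pij
                if r < acc:
                    i = j
                    break
            else:
                i = len(P[i]) - 1
        path.append((x, i))
    return path

path = simulate_walk([[0.0, 1.0], [1.0, 0.0]], [0.7, 0.7], [1.0, 1.0], steps=500)
```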
The finite irreducible stochastic matrix P is associated with a (unique) positive
invariant probability distribution (μk ; k ∈ S) satisfying
∑_{j∈S} μ_j p(j, k) − μ_k = 0, for all k ∈ S.    (2.2)
For future reference, we state the following assumption.
(A0) The stochastic matrix P = (p(i, j); i, j ∈ S) is irreducible, and (μ_k; k ∈ S) is the (unique) positive probability solution of (2.2).
Our interest here is when the wi are heavy-tailed. We allow two classes of
distribution for the wi : one-sided or symmetric. It is convenient, then, to partition S as
S = Sone ∪ Ssym where Sone and Ssym are disjoint sets, representing those branches
on which the walk takes, respectively, one-sided and symmetric jumps. The wk are
then described by a collection of positive parameters (αk ; k ∈ S).
For a probability density function v : R → R+, an exponent α ∈ (0, ∞), and a
constant c ∈ (0, ∞), we write v ∈ Dα,c to mean that there exists c : R+ → (0, ∞)
with supy c(y) < ∞ and limy→∞ c(y) = c for which
v(y) = c(y) y^{−1−α} if y > 0, and v(y) = 0 if y ≤ 0.    (2.3)
If v ∈ Dα,c is such that (2.3) holds and c(y) satisfies the stronger condition c(y) =
c + O(y−δ) for some δ > 0, then we write v ∈ Dα+,c.
Our assumption on the increment distributions w_i is as follows.
(A1) For each k ∈ S, there exist α_k ∈ (0, ∞) and c_k ∈ (0, ∞) with v_k ∈ D_{α_k,c_k} such that
w_k(y) = v_k(−y) if k ∈ S_one, and w_k(y) = (1/2) v_k(|y|) if k ∈ S_sym.
We say that Xn is recurrent if lim infn→∞ Xn = 0, a.s., and transient if
limn→∞ Xn = ∞, a.s. An irreducibility argument shows that our Markov chain
(Xn, ξn) displays the usual recurrence/transience dichotomy and exactly one of these
two situations holds; however, our proofs establish this behaviour directly using
semimartingale arguments, and so we may avoid discussion of irreducibility here.
Throughout we define, for k ∈ S,
χ_k := 1 if k ∈ S_one, and χ_k := 1/2 if k ∈ S_sym.
Our first main result gives a recurrence classification for the process.
Theorem 1 Suppose that (A0) and (A1) hold.
(a) If max_{k∈S} χ_k α_k ≥ 1, then X_n is recurrent.
(b) Suppose instead that χ_k α_k < 1 for all k ∈ S. Then:
(i) if ∑_{k∈S} μ_k cot(χ_k π α_k) < 0, then X_n is recurrent;
(ii) if ∑_{k∈S} μ_k cot(χ_k π α_k) > 0, then X_n is transient;
(iii) if ∑_{k∈S} μ_k cot(χ_k π α_k) = 0 and, in addition, v_k ∈ D^+_{α_k,c_k} for all k ∈ S, then X_n is recurrent.
In the recurrent cases, it is of interest to quantify recurrence via existence or
nonexistence of passage-time moments. For a > 0, let τa := min{n ≥ 0 : Xn ≤ a}, where
throughout the paper we adopt the usual convention that min ∅ := +∞. The next result
shows that in all the recurrent cases, excluding the boundary case in Theorem 1(b)(iii),
the tails of τa are polynomial.
Theorem 2 Suppose that (A0) and (A1) hold. In cases (a) and (b)(i) of Theorem 1,
there exist 0 < q_0 ≤ q_1 < ∞ such that, for all x > a and all k ∈ S,
E_{x,k}[τ_a^q] < ∞, for q < q_0;  E_{x,k}[τ_a^q] = ∞, for q > q_1.
Remark 1 Our general results have q0 < q1 so that Theorem 2 does not give sharp
estimates; these remain an open problem.
We do have sharp results in several particular cases for two half-lines, in which
case our model reduces to the oscillating random walk considered by Kemperman
[15] and others. We present these sharp moments results (Theorems 4 and 6) in the
next section, which discusses in detail the case of the oscillating random walk, and
also describes how our recurrence results relate to the known results for this classical
model.
3 Oscillating Random Walks and Related Examples
3.1 Two Half-Lines Become One Line
In the case of our general model in which S consists of two elements, S = {−1, +1},
say, it is natural and convenient to represent our random walk on the whole real line
R. Namely, if ω(x , k) := k x for x ∈ R+ and k = ±1, we let Zn = ω(Xn , ξn).
The simplest case has no reflection at the origin, only transmission, i.e. p(i, j) = 1{i ≠ j}, so that μ = (1/2, 1/2). Then, for B ⊆ R a Borel set,
P[Z_{n+1} ∈ B | (X_n, ξ_n) = (x, i)] = P_{x,i}[(X_1, ξ_1) ∈ B_+ × {+1}] + P_{x,i}[(X_1, ξ_1) ∈ B_− × {−1}],
where B_+ = B ∩ R_+ and B_− = {−x : x ∈ B, x < 0}. In particular, by (2.1), writing w_+ for w_{+1} and w_−(y) for w_{−1}(−y), we have for x ∈ R_+,
P[Z_{n+1} ∈ B | (X_n, ξ_n) = (x, +1)] = ∫_{B_+} w_+(z − x) dz + ∫_{B_−} w_+(−z − x) dz = ∫_B w_+(z − x) dz,
P[Z_{n+1} ∈ B | (X_n, ξ_n) = (x, −1)] = ∫_B w_−(z + x) dz,
and hence, for x ∈ R \ {0} and Borel B ⊆ R,
P[Z_{n+1} ∈ B | Z_n = x] = ∫_B w_+(z − x) dz if x > 0, and ∫_B w_−(z − x) dz if x < 0.    (3.1)
We may make an arbitrary non-trivial choice for the transition law at Zn = 0 without
affecting the behaviour of the process, and then, (3.1) shows that Zn is a
timehomogeneous Markov process on R. Now, Zn is recurrent if lim infn→∞ |Zn| = 0,
a.s., or transient if limn→∞ |Zn| = ∞, a.s. The one-dimensional case described
at (3.1) has received significant attention over the years. We describe several of the
classical models that have been considered.
3.2 Examples and Further Results
The most classical case is as follows.
(Sym) For some α ∈ (0, ∞), c ∈ (0, ∞), and v ∈ D_{α,c}: w_+(y) = w_−(y) = (1/2) v(|y|).
In this case, Z_n describes a random walk with i.i.d. symmetric increments.
Theorem 3 Suppose that (Sym) holds. Then, the symmetric random walk is transient
if α < 1 and recurrent if α > 1. If, in addition, v ∈ Dα+,c, then the case α = 1 is
recurrent.
Theorem 3 follows from our Theorem 1, since in this case
∑_{k∈S} μ_k cot(χ_k π α_k) = cot(π α/2), which is positive if and only if α < 1.
Since it deals with a sum of i.i.d. random variables, Theorem 3 may be deduced from
the classical theorem of Chung and Fuchs [7], via, e.g. the formulation of Shepp
[23]. The method of the present paper provides an alternative to the classical (Fourier
analytic) approach that generalizes beyond the i.i.d. setting (note that Theorem 3 is
not, formally, a consequence of Shepp’s most accessible result, Theorem 5 of [23],
since v does not necessarily correspond to a unimodal distribution in Shepp’s sense).
With τ_a as defined previously, in the setting of the present section we have τ_a = min{n ≥ 0 : |Z_n| ≤ a}. Use E_x[ · ] as shorthand for E[ · | Z_0 = x]. We have the following result on existence of passage-time moments, whose proof is in Sect. 6; while part (i) is well known, we could find no reference for part (ii).
Theorem 4 Suppose that (Sym) holds.
(i) If α ≥ 2, then E_x[τ_a^q] < ∞ if q < 1/2 and E_x[τ_a^q] = ∞ if q > 1/2.
(ii) If α ∈ (1, 2), then E_x[τ_a^q] < ∞ if q < 1 − 1/α and E_x[τ_a^q] = ∞ if q > 1 − 1/α.
Our main interest concerns spatially inhomogeneous models, i.e. in which wx depends
on x , typically only through sgn x , the sign of x . Such models are known as oscillating
random walks and were studied by Kemperman [15], to whom the model was suggested
in 1960 by Anatole Joffe and Peter Ney (see [15, p. 29]).
The next example, following [15], is a one-sided oscillating random walk:
(Osc1) For some α, β ∈ (0, ∞), c_+, c_− ∈ (0, ∞), v_+ ∈ D_{α,c_+}, and v_− ∈ D_{β,c_−}:
w_+(y) = v_+(−y), and w_−(y) = v_−(y).
In other words, the walk always jumps in the direction of (and possibly over) the origin, with tail exponent α from the positive half-line and exponent β from the negative half-line. The following recurrence classification applies.
Theorem 5 Suppose that (Osc1) holds. Then, the one-sided oscillating random walk
is transient if α + β < 1 and recurrent if α + β > 1. If, in addition, v+ ∈ Dα+,c+ and
v− ∈ Dβ+,c− , then the case α + β = 1 is recurrent.
Theorem 5 was obtained in the discrete-space case by Kemperman [15, p. 21]; it
follows from our Theorem 1, since in this case
∑_{k∈S} μ_k cot(χ_k π α_k) = (1/2) cot(π α) + (1/2) cot(π β) = sin(π(α + β)) / (2 sin(π α) sin(π β)),
which is positive if and only if α + β < 1.
The special case of (Osc1) in which α = β was called antisymmetric by
Kemperman; here, Theorem 5 shows that the walk is transient for α < 1/2 and recurrent for
α > 1/2. We have the following moments result, proved in Sect. 6.
Theorem 6 Suppose that (Osc1) holds with α = β (the antisymmetric case).
(i) If α ≥ 1, then E_x[τ_a^q] < ∞ if q < α and E_x[τ_a^q] = ∞ if q > α.
(ii) If α ∈ (1/2, 1), then E_x[τ_a^q] < ∞ if q < 2 − 1/α and E_x[τ_a^q] = ∞ if q > 2 − 1/α.
Another model in the vein of [15] is a two-sided oscillating random walk:
(Osc2) For some α, β ∈ (0, ∞), c_+, c_− ∈ (0, ∞), v_+ ∈ D_{α,c_+}, and v_− ∈ D_{β,c_−}:
w_+(y) = (1/2) v_+(|y|), and w_−(y) = (1/2) v_−(|y|).
Now, the jumps of the walk are symmetric, as under (Sym), but with a tail exponent
depending upon which side of the origin the walk is currently on, as under (Osc1).
The most general recurrence classification result for the model (Osc2) is due to
Sandrić [21]. A somewhat less general, discrete-space version was obtained by Rogozin
and Foss (Theorem 2 of [17, p. 159]), building on [15]. Analogous results in continuous
time were given in [4,11]. Here is the result.
Theorem 7 Suppose that (Osc2) holds. Then, the two-sided oscillating random walk is transient if α + β < 2 and recurrent if α + β > 2. If, in addition, v_+ ∈ D^+_{α,c_+} and v_− ∈ D^+_{β,c_−}, then the case α + β = 2 is recurrent.
Theorem 7 also follows from our Theorem 1, since in this case
∑_{k∈S} μ_k cot(χ_k π α_k) = (1/2) cot(π α/2) + (1/2) cot(π β/2) = sin(π(α + β)/2) / (2 sin(π α/2) sin(π β/2)).
A final model is another oscillating walk that mixes the one- and two-sided models:
(Osc3) For some α, β ∈ (0, ∞), c_+, c_− ∈ (0, ∞), v_+ ∈ D_{α,c_+}, and v_− ∈ D_{β,c_−}:
w_+(y) = (1/2) v_+(|y|), and w_−(y) = v_−(y).
In the discrete-space case, Theorem 2 of Rogozin and Foss [17, p. 159] gives the
recurrence classification.
Theorem 8 Suppose that (Osc3) holds. Then, the mixed oscillating random walk is
transient if α + 2β < 2 and recurrent if α + 2β > 2. If, in addition, v+ ∈ Dα+,c+ and
v− ∈ Dβ+,c− , then the case α + 2β = 2 is recurrent.
Theorem 8 also follows from our Theorem 1, since in this case
∑_{k∈S} μ_k cot(χ_k π α_k) = (1/2) cot(π α/2) + (1/2) cot(π β) = sin(π(α + 2β)/2) / (2 sin(π α/2) sin(π β)).
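The displayed computations for (Osc2) and (Osc3) are both instances of the identity cot A + cot B = sin(A + B)/(sin A sin B); a quick numerical check (our snippet, with arbitrary test values of α and β):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

a, b = 0.7, 0.9

# (Osc2): (1/2) cot(pi a / 2) + (1/2) cot(pi b / 2)
#         = sin(pi (a + b) / 2) / (2 sin(pi a / 2) sin(pi b / 2))
lhs2 = 0.5 * cot(math.pi * a / 2) + 0.5 * cot(math.pi * b / 2)
rhs2 = math.sin(math.pi * (a + b) / 2) / (
    2 * math.sin(math.pi * a / 2) * math.sin(math.pi * b / 2))

# (Osc3): (1/2) cot(pi a / 2) + (1/2) cot(pi b)
#         = sin(pi (a + 2 b) / 2) / (2 sin(pi a / 2) sin(pi b))
lhs3 = 0.5 * cot(math.pi * a / 2) + 0.5 * cot(math.pi * b)
rhs3 = math.sin(math.pi * (a + 2 * b) / 2) / (
    2 * math.sin(math.pi * a / 2) * math.sin(math.pi * b))
```

Note that with α + β = 1.6 < 2 the (Osc2) expression is positive, consistent with transience in Theorem 7.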
3.3 Additional Remarks
It is possible to generalize the model further by permitting the local transition density
to vary within each half-line. Then, we have the transition kernel
P[Z_{n+1} ∈ B | Z_n = x] = ∫_B w_x(z − x) dz,    (3.2)
for all Borel sets B ⊆ R. Here, the local transition densities wx : R → R+ are
Borel functions. Variations of the oscillating random walk, within the general setting
of (3.2), have also been studied in the literature. Sandrić [19,21] supposes that the w_x satisfy, for each x ∈ R, w_x(y) ∼ c(x)|y|^{−1−α(x)} as |y| → ∞ for some measurable functions c and α; he refers to this as a stable-like Markov chain. Under a uniformity condition on the w_x, and other mild technical conditions, Sandrić [19] obtained, via
Foster–Lyapunov methods similar in spirit to those of the present paper, sufficient
conditions for recurrence and transience: essentially lim inf x→∞ α(x ) > 1 is
sufficient for recurrence and lim supx→∞ α(x ) < 1 is sufficient for transience. These
results can be seen as a generalization of Theorem 3. Some related results for models
in continuous-time (Lévy processes) are given in [20,22,24]. Further results and an
overview of the literature are provided in Sandrić's Ph.D. thesis [18].
4 Many-Dimensional Random Walks
The next two examples show how versions of the oscillating random walk of Sect. 3
arise as embedded Markov chains in certain two-dimensional random walks.
Example 1 Consider ξ_n = (ξ_n^{(1)}, ξ_n^{(2)}), n ∈ Z_+, a nearest-neighbour random walk on Z² with transition probabilities
P[ξ_{n+1} = (y_1, y_2) | ξ_n = (x_1, x_2)] = p(x_1, x_2; y_1, y_2).
Fig. 1 Pictorial representation of the non-homogeneous nearest-neighbour random walk on Z2 of
Example 1, plus a simulated trajectory of 5000 steps of the walk. We conjecture that the walk is recurrent
Suppose that the probabilities are given, for x_2 ≠ 0, by
p(x_1, x_2; x_1, x_2 + 1) = p(x_1, x_2; x_1, x_2 − 1) = p(x_1, x_2; x_1 − sgn(x_2), x_2) = 1/3,    (4.1)
(the rest being zero) and, for x_2 = 0, by p(x_1, 0; x_1, 1) = 1 for all x_1 > 0, p(x_1, 0; x_1, −1) = 1 for all x_1 < 0, and p(0, 0; 0, 1) = p(0, 0; 0, −1) = 1/2.
See Fig. 1 for an illustration.
Set τ_0 := 0 and define recursively τ_{k+1} = min{n > τ_k : ξ_n^{(2)} = 0} for k ≥ 0; consider the embedded Markov chain X_n = ξ_{τ_n}^{(1)}. We show that X_n is a discrete version of the oscillating random walk described in Sect. 3. Indeed, |ξ_n^{(2)}| is a reflecting random walk on Z_+ with increments taking values −1, 0, +1 each with probability 1/3. We then (see, e.g. [9, p. 415]) have that for some constant c ∈ (0, ∞),
P[τ_1 > r] = (c + o(1)) r^{−1/2}, as r → ∞.
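The r^{−1/2} excursion tail quoted above can be reproduced exactly by propagating the mass function of the killed walk; the following sketch is ours (it starts the excursion at height 1, i.e. just after leaving the axis, so constants differ from the displayed estimate, but the scaling exponent does not):

```python
def survival_probabilities(n_max):
    """Exact P[excursion lasts > n further steps] for the reflecting walk
    |xi^{(2)}| with steps -1, 0, +1 each w.p. 1/3, started at height 1 and
    killed at 0, computed by propagating the killed walk's sub-probability
    mass function."""
    mass = {1: 1.0}
    surv = []
    for _ in range(n_max):
        new = {}
        for x, p in mass.items():
            for dx in (-1, 0, 1):
                y = x + dx
                if y >= 1:               # y == 0 means the excursion ends
                    new[y] = new.get(y, 0.0) + p / 3.0
        mass = new
        surv.append(sum(mass.values()))
    return surv

surv = survival_probabilities(400)
ratio = surv[399] / surv[99]   # ~ (400/100)^{-1/2} = 0.5 for a r^{-1/2} tail
```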
Suppose that ξ_0^{(1)} = x > 0, so that the first excursion takes place above the horizontal axis, during which ξ^{(1)} is non-increasing. Then, for r ≥ 1,
P[τ_1 > 3r + r^{3/4}] = P[τ_1 > 3r + r^{3/4}, ξ_{τ_1}^{(1)} − ξ_{τ_0}^{(1)} < −r] + P[τ_1 > 3r + r^{3/4}, ξ_{τ_1}^{(1)} − ξ_{τ_0}^{(1)} ≥ −r]
≤ P[ξ_{τ_1}^{(1)} − ξ_{τ_0}^{(1)} < −r] + P[ξ_{3r+r^{3/4}}^{(1)} − ξ_{τ_0}^{(1)} ≥ −r].
In other words,
P[ξ_{τ_1}^{(1)} − ξ_0^{(1)} < −r] ≥ P[τ_1 > 3r + r^{3/4}] − P[ξ_{3r+r^{3/4}}^{(1)} − ξ_0^{(1)} ≥ −r].
Here, ξ_n^{(1)} + n/3 is a martingale with bounded increments, so, by the Azuma–Hoeffding inequality, for some ε > 0 and all r ≥ 1,
P[ξ_{3r+r^{3/4}}^{(1)} − ξ_0^{(1)} ≥ −r] = P[ξ_{3r+r^{3/4}}^{(1)} + (3r + r^{3/4})/3 − ξ_0^{(1)} ≥ r^{3/4}/3] ≤ exp(−ε r^{1/2}).
In the other direction,
P[ξ_{τ_1}^{(1)} − ξ_0^{(1)} < −r] ≤ P[τ_1 > 3r − r^{3/4}] + P[ξ_{3r−r^{3/4}}^{(1)} − ξ_0^{(1)} ≤ −r],
where, by Azuma–Hoeffding once more,
P[ξ_{3r−r^{3/4}}^{(1)} − ξ_0^{(1)} ≤ −r] = P[ξ_{3r−r^{3/4}}^{(1)} + (3r − r^{3/4})/3 − ξ_0^{(1)} ≤ −r^{3/4}/3] ≤ exp(−ε r^{1/2}).
Combining these bounds with the tail estimate for τ_1, and using the symmetric argument for {ξ_{τ_1}^{(1)} > r} when ξ_0^{(1)} = x < 0, we see that for r > 0,
P[X_{n+1} − X_n < −r | X_n = x] = u(r) for x > 0, and P[X_{n+1} − X_n > r | X_n = x] = u(r) for x < 0,    (4.2)
where u(r) = (c + o(1)) r^{−1/2}. Thus, X_n satisfies a discrete-space analogue of (Osc1)
with α = β = 1/2. This is the critical case identified in Theorem 5, but that result
does not cover this case due to the rate of convergence estimate for u; a finer analysis
is required. We conjecture that the walk is recurrent.
Example 2 We present two variations on the previous example, which are superficially
similar but turn out to be less delicate. First, modify the random walk of the previous
example by supposing that (4.1) holds but replacing the behaviour at x2 = 0 by
p(x1, 0; x1, 1) = p(x1, 0; x1, −1) = 1/2 for all x1 ∈ Z. See the left-hand part of
Fig. 2 for an illustration.
The embedded process X_n now has, for all x ∈ Z and for r ≥ 0,
P[X_{n+1} − X_n < −r | X_n = x] = P[X_{n+1} − X_n > r | X_n = x] = u(r),    (4.3)
where u(r) = (c/2)(1 + o(1)) r^{−1/2}. Thus, X_n is a random walk with symmetric
increments, and the discrete version of our Theorem 3 (and also a result of [23])
Fig. 2 Pictorial representation of the two non-homogeneous nearest-neighbour random walks on Z2 of
Example 2. Each of these walks is transient
implies that the walk is transient. This walk was studied by Campanino and Petritis
[5,6], who proved transience via different methods.
Next, modify the random walk of Example 1 by supposing that (4.1) holds but
replacing the behaviour at x2 = 0 by p(x1, 0; x1, 1) = p(x1, 0; x1, −1) = 1/2 if
x1 ≥ 0, and p(x1, 0; x1, −1) = 1 for x1 < 0. See the right-hand part of Fig. 2 for an
illustration. This time the walk takes a symmetric increment as at (4.3) when x ≥ 0
but a one-sided increment as at (4.2) when x < 0. In this case, the discrete version of
our Theorem 8 (and also a result of [17]) shows that the walk is transient.
One may obtain the general model on Z+× S as an embedded process for a random
walk on complexes of half-spaces, generalizing the examples considered here; we
leave this to the interested reader.
5 Recurrence Classification in the Non-critical Cases
5.1 Lyapunov Functions
Our proofs are based on demonstrating appropriate Lyapunov functions; that is, for
suitable ϕ : R+ × S → R+ we study Yn = ϕ(Xn , ξn) such that Yn has appropriate
local supermartingale or submartingale properties for the one-step mean increments
Dϕ(x, i) := E[ϕ(X_{n+1}, ξ_{n+1}) − ϕ(X_n, ξ_n) | (X_n, ξ_n) = (x, i)] = E_{x,i}[ϕ(X_1, ξ_1) − ϕ(X_0, ξ_0)].
First, we note some consequences of the transition law (2.1). Let ϕ : R+× S → R
be measurable. Then, we have from (2.1) that, for (x , i ) ∈ R+ × S,
Dϕ(x, i) = ∑_{j∈S} p(i, j) ∫_{−∞}^{−x} (ϕ(−x − y, j) − ϕ(x, i)) w_i(y) dy + ∫_{−x}^{∞} (ϕ(x + y, i) − ϕ(x, i)) w_i(y) dy.    (5.1)
For ν ∈ R and a vector λ = (λ_k; k ∈ S) with positive components, we consider Lyapunov functions of the form
f_ν(x, k) := λ_k f_ν(x) = λ_k (1 + x)^ν.
Now, for this Lyapunov function, (5.1) gives
D f_ν(x, i) = ∑_{j∈S} p(i, j) ∫_{−∞}^{−x} (λ_j f_ν(|x + y|) − λ_i f_ν(x)) w_i(y) dy + λ_i ∫_{−x}^{∞} (f_ν(x + y) − f_ν(x)) w_i(y) dy.    (5.2)
Depending on whether i ∈ S_sym or i ∈ S_one, the above integrals can be expressed in terms of v_i as follows. For i ∈ S_sym,
D f_ν(x, i) = (λ_i/2) ∫_0^x (f_ν(x + y) + f_ν(x − y) − 2 f_ν(x)) v_i(y) dy
+ (1/2) ∫_x^∞ ( λ_i (f_ν(x + y) − f_ν(x)) + ∑_{j∈S} p(i, j) λ_j f_ν(y − x) − λ_i f_ν(x) ) v_i(y) dy.    (5.4)
For i ∈ S_one,
D f_ν(x, i) = λ_i ∫_0^x (f_ν(x − y) − f_ν(x)) v_i(y) dy + ∫_x^∞ ( ∑_{j∈S} p(i, j) λ_j f_ν(y − x) − λ_i f_ν(x) ) v_i(y) dy.    (5.5)
5.2 Estimates of Functional Increments
In the course of our proofs, we need various integral estimates that can be expressed
in terms of classical transcendental functions. For the convenience of the reader, we
gather all necessary integrals in Lemmas 1 and 2; the proofs of these results are deferred
until the Appendix. Recall that the Euler gamma function Γ satisfies the functional
equation zΓ (z) = Γ (z + 1), and the hypergeometric function m Fn is defined via a
power series (see [1]).
Lemma 1 Let α ∈ (0, ∞) and ν ∈ (−1, α). Then
i_{2,1}^{ν,α} := ∫_1^∞ (u − 1)^ν u^{−1−α} du = Γ(ν + 1) Γ(α − ν) / Γ(α + 1);
i_{2,0}^{ν,α} := ∫_1^∞ u^{−1−α} du = 1/α;
i_0^{ν,α} := ∫_1^∞ ((1 + u)^ν − 1) u^{−1−α} du = (1/(α − ν)) 2F1(−ν, α − ν; α − ν + 1; −1) − 1/α;
and, for α ∈ (0, 2),
i_1^{ν,α} := ∫_0^1 ((1 + u)^ν + (1 − u)^ν − 2) u^{−1−α} du = (ν(ν − 1)/(2 − α)) 4F3(1, 1 − ν/2, 1 − α/2, (3 − ν)/2; 3/2, 2, 2 − α/2; 1);
the one-sided analogue ĩ_1^{ν,α} := ∫_0^1 ((1 − u)^ν − 1) u^{−1−α} du (for α ∈ (0, 1)) is evaluated similarly.
Remark 2 The j integrals can be obtained as derivatives with respect to ν of the i
integrals, evaluated at ν = 0.
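As a sanity check on the first integral: substituting u = 1/t reduces i_{2,1}^{ν,α} to a standard beta integral with value Γ(ν + 1)Γ(α − ν)/Γ(α + 1) (a classical identity, assumed here rather than quoted from the text). The snippet below compares Simpson quadrature against the gamma-function formula:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# Substituting u = 1/t turns int_1^inf (u-1)^nu u^{-1-alpha} du into the
# Beta integral int_0^1 (1-t)^nu t^{alpha-nu-1} dt
# = Gamma(nu+1) Gamma(alpha-nu) / Gamma(alpha+1).
nu, alpha = 0.5, 1.5
numeric = simpson(lambda t: (1 - t) ** nu * t ** (alpha - nu - 1), 0.0, 1.0)
exact = math.gamma(nu + 1) * math.gamma(alpha - nu) / math.gamma(alpha + 1)
```

For these parameter values the exact value is B(3/2, 1) = 2/3.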
The next result collects estimates for our integrals in the expected functional
increments (5.4) and (5.5) in terms of the integrals in Lemma 1.
Lemma 3 Suppose that v ∈ D_{α,c} for some α ∈ (0, ∞) and c ∈ (0, ∞), and let ν ∈ (−1, α). Then, as x → ∞,
∫_x^∞ f_ν(y − x) v(y) dy = c x^{ν−α} i_{2,1}^{ν,α} + o(x^{ν−α});    (5.6)
∫_x^∞ f_ν(x) v(y) dy = c x^{ν−α} i_{2,0}^{ν,α} + o(x^{ν−α});    (5.7)
and, if in addition α ∈ (0, 1),
∫_0^∞ (f_ν(x + y) − f_ν(x)) v(y) dy = c x^{ν−α} i_0^{ν,α} + o(x^{ν−α});    (5.8)
∫_0^x (f_ν(x − y) − f_ν(x)) v(y) dy = c x^{ν−α} ĩ_1^{ν,α} + o(x^{ν−α}).    (5.9)
Moreover, if v ∈ D^+_{α,c}, then stronger versions of all of the above estimates hold with o(x^{ν−α}) replaced by O(x^{ν−α−δ}) for some δ > 0.
Proof These estimates are mostly quite straightforward, so we do not give all the
details. We spell out the estimate in (5.6); the others are similar. We have
∫_x^∞ f_ν(y − x) v(y) dy = x^{ν+1} ∫_{(1+x)/x}^∞ (u − 1)^ν u^{−1−α} (x − u^{−1})^{−1−α} c(ux − 1) du.
Let ε ∈ (0, c). Then, there exists y0 ∈ R+ such that |c(y) − c| < ε for all y ≥ y0, so
that |c(ux − 1) − c| < ε for all u in the range of integration, provided x ≥ y0. Writing
f(u) = (u − 1)^ν u^{−1−α}, and g(u) = (x − u^{−1})^{−1−α} c(ux − 1),
for the duration of the proof, we have that
∫_x^∞ f_ν(y − x) v(y) dy = x^{ν+1} ∫_{(1+x)/x}^∞ f(u) g(u) du.
For u ≥ (1+x)/x and x ≥ y_0, we have
g_− := (c − ε) x^{−1−α} ≤ g(u) ≤ (c + ε)(x − 1)^{−1−α} =: g_+,
so that g_+ − g_− ≤ 2ε(x − 1)^{−1−α} + C_1 x^{−2−α} for a constant C_1 < ∞ not depending on x ≥ y_0 or ε. Moreover, it is easy to see that ∫_1^∞ |f(u)| du ≤ C_2 for a constant C_2 depending only on ν and α, provided ν ∈ (−1, α). Hence, Lemma 18 shows that
| ∫_{(1+x)/x}^∞ f(u) g(u) du − g_− ∫_{(1+x)/x}^∞ f(u) du | ≤ 2C_2 ε (x − 1)^{−1−α} + C_1 C_2 x^{−2−α},
for all x ≥ y_0. Since also ∫_{(1+x)/x}^∞ f(u) du → i_{2,1}^{ν,α} as x → ∞, it follows that, for any ε′ > 0, taking ε small enough and then x large enough,
| ∫_{(1+x)/x}^∞ f(u) g(u) du − c x^{−1−α} i_{2,1}^{ν,α} | ≤ ε′ x^{−1−α},
which, multiplying through by x^{ν+1}, gives (5.6).
We also need the following simple estimates for the ranges of α in which the asymptotics for the final two integrals in Lemma 3 are not valid.
Lemma 4 Suppose that v ∈ D_{α,c}.
(i) For α ≥ 2 and any ν ∈ (0, 1), there exist ε > 0 and x_0 ∈ R_+ such that, for all x ≥ x_0,
∫_0^x (f_ν(x + y) + f_ν(x − y) − 2 f_ν(x)) v(y) dy ≤ −ε x^{ν−2}.
(ii) For α ≥ 1 and any ν > 0, there exist ε > 0 and x_0 ∈ R_+ such that, for all x ≥ x_0,
∫_0^x (f_ν(x − y) − f_ν(x)) v(y) dy ≤ −ε x^{ν−1}.
Proof For part (i), set a_ν(z) = (1 + z)^ν + (1 − z)^ν − 2, so that
∫_0^x (f_ν(x + y) + f_ν(x − y) − 2 f_ν(x)) v(y) dy = (1 + x)^ν ∫_0^x a_ν(y/(1 + x)) v(y) dy.
Suppose that α ≥ 2 and ν ∈ (0, 1). For ν ∈ (0, 1), calculus shows that a_ν(z) has a single local maximum on [0, 1] at z = 0, so that a_ν(z) ≤ 0 for all z ∈ [0, 1]. Moreover, Taylor's theorem shows that for any ν ∈ (0, 1) there exists δ_ν ∈ (0, 1) such that a_ν(z) ≤ −(ν/2)(1 − ν)z² for all z ∈ [0, δ_ν]. Also, c(y) ≥ c/2 > 0 for all y ≥ y_0 sufficiently large. Hence, for all x ≥ y_0/δ_ν,
(1 + x)^ν ∫_0^x a_ν(y/(1 + x)) v(y) dy ≤ −(ν/2)(1 − ν)(1 + x)^{ν−2} ∫_{y_0}^{δ_ν x} y² v(y) dy ≤ −ε x^{ν−2},
for some ε > 0, since ∫_{y_0}^{δ_ν x} y² v(y) dy is bounded below by a positive constant; this yields part (i) of the lemma.
For part (ii), suppose that α ≥ 1 and ν > 0. For any ν > 0, there exists δ_ν ∈ (0, 1) such that (1 − z)^ν − 1 ≤ −(ν/2)z for all z ∈ [0, δ_ν]. Moreover, c(y) ≥ c/2 > 0 for all y ≥ y_0 sufficiently large. Hence, since the integrand is non-positive, for x > y_0/δ_ν,
∫_0^x (f_ν(x − y) − f_ν(x)) v(y) dy ≤ −(ν/2)(1 + x)^{ν−1} ∫_{y_0}^{δ_ν x} y v(y) dy ≤ −ε x^{ν−1},
for some ε > 0, and part (ii) follows.
Lemma 5 Suppose that (A1) holds and χ_i α_i < 1. Then, for ν ∈ (−1, 1 ∧ α_i), as x → ∞,
D f_ν(x, i) = χ_i λ_i c_i x^{ν−α_i} i_{2,1}^{ν,α_i} ( (Pλ)_i/λ_i + R_sym(α_i, ν) ) + o(x^{ν−α_i}), for i ∈ S_sym;
D f_ν(x, i) = χ_i λ_i c_i x^{ν−α_i} i_{2,1}^{ν,α_i} ( (Pλ)_i/λ_i + R_one(α_i, ν) ) + o(x^{ν−α_i}), for i ∈ S_one,
where R_sym and R_one are ratios of the integrals from Lemma 1 which satisfy, as ν → 0,
R_sym(α, ν) = −1 + νπ cot(π α/2) + o(ν), and R_one(α, ν) = −1 + νπ cot(π α) + o(ν).    (5.12)
Proof The above expressions for D fν (x , i ) follow from (5.4) and (5.5) with Lemma 3.
Additionally, we compute that Rsym(α, 0) = Rone(α, 0) = −1. For ν in a
neighbourhood of 0, uniformity of convergence of the integrals over (1, ∞) enables us to
differentiate with respect to ν under the integral sign to get
(∂/∂ν) R_sym(α, ν)|_{ν=0} = π cot(π α/2),
using Lemma 2 and the digamma reflection formula (equation 6.3.7 from [1, p. 259]), and then, the first formula in (5.12) follows by Taylor's theorem. Similarly, for the second formula in (5.12),
(∂/∂ν) R_one(α, ν)|_{ν=0} = π cot(π α).
This completes the proof.
We conclude this subsection with two algebraic results.
Lemma 6 Suppose that (A0) holds. Given (bk ; k ∈ S) with bk ∈ R for all k, there
exists a solution (θk ; k ∈ S) with θk ∈ R for all k to the system of equations
∑_{j∈S} p(k, j) θ_j − θ_k = b_k, (k ∈ S),    (5.13)
if and only if ∑_{k∈S} μ_k b_k = 0. Moreover, if a solution to (5.13) exists, we may take
θk > 0 for all k ∈ S.
Proof As column vectors, we write μ = (μk ; k ∈ S) for the stationary probabilities
as given in (A0), b = (bk ; k ∈ S), and θ = (θk ; k ∈ S). Then, in matrix-vector form,
(5.13) reads (P − I)θ = b, while μ satisfies (2.2), which reads (P − I)ᵀ μ = 0, the
homogeneous system adjoint to (5.13) (here I is the identity matrix and 0 is the vector
of all 0s).
A standard result from linear algebra (a version of the Fredholm alternative) says
that (P − I)θ = b admits a solution θ if and only if the vector b is orthogonal to any solution x to (P − I)ᵀ x = 0; but, by (A0), any such x is a scalar multiple of μ. In other words, a solution θ to (5.13) exists if and only if μᵀ b = 0, as claimed.
Finally, since P is a stochastic matrix, ( P − I )1 = 0, where 1 is the column vector
of all 1s; hence, if θ solves (P − I)θ = b, then so does θ + γ 1 for any γ ∈ R. Taking γ sufficiently large makes every component θ_k + γ positive, which implies the final statement in the lemma.
Lemma 7 Let U = (U_{k,ℓ}; k, ℓ = 0, . . . , M) be a given upper triangular matrix with all its upper triangular elements non-negative (U_{k,ℓ} ≥ 0 for 0 ≤ k < ℓ ≤ M, all other elements vanishing), and let A = (A_k; k = 1, . . . , M) be a vector with positive components. Then, there exists a unique lower triangular matrix L = (L_{k,ℓ}; k, ℓ = 0, . . . , M) with positive strictly-lower-triangular elements (diagonal and upper triangular elements vanish) satisfying
(i) L_{m,m−1} = (U L)_{m,m} + A_m for m = 1, . . . , M;
(ii) L_{k,ℓ} = ∑_{j=ℓ}^{k−1} L_{j+1,j} for 0 ≤ ℓ < k ≤ M.
Proof We construct L inductively. Item (i) demands
L_{m,m−1} = ∑_{ℓ=m+1}^{M} U_{m,ℓ} L_{ℓ,m} + A_m.    (5.14)
In the case m = M, with the usual convention that an empty sum is 0, the demand (5.14) is simply L_{M,M−1} = A_M. So we can start our construction taking L_{M,M−1} = A_M, which is positive by assumption [item (ii) makes no demands in the case k = M, ℓ = M − 1].
Suppose now that all matrix elements L_{k,ℓ} with m − 1 ≤ ℓ < k ≤ M have been computed; denote this lower-right corner array by Λ_m (1 ≤ m ≤ M). The elements of L involved in statement (i) (for given m) and in statement (ii) for ℓ = m − 1 are all in Λ_m; thus, as part of our inductive hypothesis we may suppose that the elements of Λ_m are positive and such that (i) holds for the given m, and (ii) holds with ℓ = m − 1 and all m ≤ k ≤ M. We have shown that we can achieve this for Λ_M.
The inductive step is to construct from Λ_m (2 ≤ m ≤ M) elements L_{k,m−2} for m − 1 ≤ k ≤ M and hence complete the array Λ_{m−1} in such a way that (i) holds for m − 1 replacing m, that (ii) holds for ℓ = m − 2, and that all elements are positive.
Now, (5.14) reveals the demand of item (i) as
L_{m−1,m−2} = ∑_{ℓ=m}^{M} U_{m−1,ℓ} L_{ℓ,m−1} + A_{m−1},
which defines L_{m−1,m−2} > 0 in terms of elements of Λ_m alone; item (ii) for ℓ = m − 2 then demands
L_{k,m−2} = ∑_{r=0}^{k−m+1} L_{m−1+r,m−2+r} = L_{m−1,m−2} + · · · + L_{k,k−1}, for m − 1 < k ≤ M,
which involves only elements of Λ_m in addition to L_{m−1,m−2}, which we have already defined, and positivity of all the L_{k,m−2} follows by hypothesis. This gives us the construction of Λ_{m−1} and establishes the inductive step.
This algorithm can be continued down to Λ1. But then the lower triangular matrix L
is totally determined. The diagonal and upper triangular elements of L do not influence
the construction, and may be set to zero.
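The inductive construction in the proof can be implemented directly; the sketch below (our code and indexing conventions) computes the sub-diagonal entries backwards and fills in the rest by the telescoping rule of item (ii):

```python
def build_L(U, A):
    """Construct the lower triangular matrix L of Lemma 7 by the backward
    induction of the proof: sub-diagonal entries d[m] = L[m][m-1] are fixed
    for m = M down to 1 via (5.14), and every other lower triangular entry
    is the telescoping sum of sub-diagonal entries from item (ii).
    A is indexed so that A[m] is the component A_m (A[0] is unused)."""
    M = len(U) - 1
    d = [0.0] * (M + 1)
    for m in range(M, 0, -1):
        # item (i)/(5.14): L[m][m-1] = sum_l U[m][l] * L[l][m] + A[m],
        # where L[l][m] = d[m+1] + ... + d[l] by item (ii).
        d[m] = A[m] + sum(U[m][l] * sum(d[m + 1:l + 1]) for l in range(m + 1, M + 1))
    L = [[0.0] * (M + 1) for _ in range(M + 1)]
    for k in range(1, M + 1):
        for l in range(k):
            L[k][l] = sum(d[l + 1:k + 1])
    return L

U = [[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]]
A = [0.0, 1.0, 2.0]
L = build_L(U, A)
```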
Corollary 1 Let the matrix U and the vector A be as in Lemma 7. Let L be the set of lower triangular matrices L̃ satisfying
(i) L̃_{m,m−1} > (U L̃)_{m,m} + A_m for m = 1, . . . , M;
(ii) L̃_{k,ℓ} = ∑_{j=ℓ}^{k−1} L̃_{j+1,j} for 0 ≤ ℓ < k ≤ M,
viewed as a subset of the positive cone V = (0, ∞)^{M(M+1)/2}. Then, L is a non-empty, open subset of V.
We use the notation
α̲ := min_{k∈S} α_k;  ᾱ := max_{k∈S} α_k;  α̂ := min_{k∈S} {α_k ∧ (1/χ_k)};  α̌ := max_{k∈S} {α_k ∧ (1/χ_k)}.
We start with the case maxk∈S χk αk < 1. We will obtain a local supermartingale
by choosing the λk carefully. Lemma 6, which shows how the stationary probabilities
μk enter, is crucial; a similar idea was used for random walks on strips in Section 3.1
of [10]. Next is our key local supermartingale result in this case.
Proposition 1 Suppose that (A0) and (A1) hold, and that max_{k∈S} χ_k α_k < 1. Let a_k = π cot(π χ_k α_k) for k ∈ S.
(i) If ∑_{k∈S} μ_k a_k < 0, then there exist ν ∈ (0, 1), λ_k > 0 (k ∈ S), ε > 0, and x_0 ∈ R_+ such that, for all i ∈ S and all x ≥ x_0,
D f_ν(x, i) = E_{x,i}[f_ν(X_1, ξ_1) − f_ν(X_0, ξ_0)] ≤ −ε x^{ν−ᾱ}.
(ii) If ∑_{k∈S} μ_k a_k > 0, then the same conclusion holds for some ν ∈ (−1, 0):
D f_ν(x, i) = E_{x,i}[f_ν(X_1, ξ_1) − f_ν(X_0, ξ_0)] ≤ −ε x^{ν−ᾱ}.
Proof For part (i), we have ∑_{k∈S} μ_k a_k = −δ for some δ > 0. Set b_k = −a_k − δ, so that ∑_{k∈S} μ_k b_k = 0; Lemma 6 then provides (θ_k; k ∈ S) with
∑_{j∈S} p(i, j) θ_j − θ_i + a_i = −δ, for all i ∈ S;    (5.15)
set λ_k = 1 + θ_k ν, which is positive for all k provided ν > 0 is small enough. By Lemma 5, D f_ν(x, i) will
be negative for all i and all x sufficiently large provided that we can find ν such that (Pλ)_i/λ_i + R(α_i, ν) < 0 for all i. By (5.12), writing θ = (θ_k; k ∈ S),
(Pλ)_i/λ_i + R(α_i, ν) = ν ((Pθ)_i − θ_i) + ν a_i + o(ν) = −νδ + o(ν),
by (5.15). Therefore, by Lemma 5, we can always find sufficiently small ν > 0 and a vector λ = λ(ν) with strictly positive elements for which D f_ν(x, i) ≤ −ε x^{ν−α_i} for some ε > 0, all i, and all x sufficiently large. Maximizing over i gives part (i).
The argument for part (ii) is similar. Suppose ν ∈ (−1, 0). This time, ∑_{k∈S} μ_k a_k = δ for some δ > 0, and we set b_k = −a_k + δ, so that ∑_{k∈S} μ_k b_k = 0 once more. Lemma 6 now shows that we can find θ_k so that
∑_{j∈S} p(i, j) θ_j − θ_i + a_i = δ, for all i ∈ S.    (5.16)
With this choice of θ_k, we again set λ_k = 1 + θ_k ν; note we may assume λ_k > 0 for all k for ν sufficiently small.
Again, D f_ν(x, i) will be non-positive for all i and all x sufficiently large provided that we can find λ and ν such that (Pλ)_i/λ_i + R(α_i, ν) < 0 for all i. Following a similar argument to before, we obtain with (5.16) that
(Pλ)_i/λ_i + R(α_i, ν) = ν ((Pθ)_i − θ_i) + ν a_i + o(ν) = νδ + o(ν).
Thus, we can find for ν < 0 close enough to 0 a vector λ = λ(ν) with strictly positive
elements for which D fν (x , i ) ≤ −εx ν−αi for all i and all x sufficiently large.
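The solvability step used twice above, that Lemma 6 produces θ solving the Poisson equation (Pθ)i − θi = bi precisely because Σ_k μk bk = 0, can be seen concretely for a two-state chain, where θ admits a closed form. The following sketch uses illustrative numbers of our own choosing, not values from the paper:

```python
# two-state transmission matrix P = [[1-p, p], [q, 1-q]]; stationary mu ∝ (q, p)
p, q = 0.3, 0.6
mu = (q / (p + q), p / (p + q))
b = (1.0, -q / p)                 # chosen so that mu[0]*b[0] + mu[1]*b[1] = 0
theta = (0.0, b[0] / p)           # closed-form solution of (P theta)_i - theta_i = b_i

def residual(i):
    """(P theta)_i - theta_i - b_i; should vanish for i = 0, 1."""
    P = [[1 - p, p], [q, 1 - q]]
    return sum(P[i][j] * theta[j] for j in range(2)) - theta[i] - b[i]
```

If b were not centred with respect to μ, the two linear equations would be inconsistent, which is exactly why the constants δ are inserted before solving.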
Now we examine the case maxk∈S χk αk ≥ 1.
Proposition 2 Suppose that (A0) and (A1) hold, and max_{k∈S} χk αk ≥ 1. Then, there
exist ν ∈ (0, α), λk > 0 (k ∈ S), ε > 0, and x0 ∈ R+ such that for all i ∈ S,
D fν(x, i) ≤ −εx^{ν−ᾱ}, for all x ≥ x0. (5.17)
Before starting the proof of this proposition, we introduce the following notation. For
k ∈ S, denote ak = π cot(π χk αk) and define the vector a = (ak; k ∈ S). For
i ∈ S and A ⊆ S, write P(i, A) = Σ_{j∈A} p(i, j). Define S0 = {i ∈ S : χi αi ≥ 1}
and recursively, for m ≥ 1,
Sm = { i ∈ S \ (S0 ∪ · · · ∪ S_{m−1}) : P(i, S_{m−1}) > 0 }.
Denote by M := max{m ≥ 0 : Sm ≠ ∅}. Since P is irreducible, the collection
(Sm; m = 0, . . . , M) is a partition of S.
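The partition (Sm) can be computed by a breadth-first sweep over the transition matrix. The sketch below is our own illustration (function names and the sample matrix are not from the paper); P is a stochastic matrix given as a nested list and S0 is the initial set {i : χi αi ≥ 1}:

```python
def partition_levels(P, S0):
    """Return [S_0, S_1, ...], where S_m collects the states not yet assigned
    that have positive one-step transition probability into S_{m-1}."""
    all_states = set(range(len(P)))
    levels = [set(S0)]
    assigned = set(S0)
    while assigned != all_states:
        nxt = {i for i in all_states - assigned
               if any(P[i][j] > 0 for j in levels[-1])}
        if not nxt:   # cannot happen when P is irreducible and S0 is non-empty
            break
        levels.append(nxt)
        assigned |= nxt
    return levels

# three states; only state 0 lies in S_0; state 2 reaches 0 only through 1
P = [[0.5, 0.5, 0.0],
     [0.3, 0.2, 0.5],
     [0.0, 0.6, 0.4]]
levels = partition_levels(P, {0})
```

Irreducibility guarantees that the sweep exhausts S, which is the partition property asserted above.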
Proof of Proposition 2 It suffices to find x0 ∈ R+, ε > 0, ν ∈ (0, α), and an open,
non-empty subset G of the positive cone C := (0, ∞)^{|S|} such that
G ⊆ ∩_{i∈S} { λ ∈ C : D fν(x, i) satisfies condition (5.17) }.
Now, for i ∈ S0, inequality (5.17) is satisfied thanks to (5.4), (5.5), and Lemmas 3
and 4 for every choice of λ (with positive components) and ν ∈ (0, α). Hence, the
previous condition reduces to the requirement
G ⊆ ∩_{i∈S\S0} { λ ∈ C : D fν(x, i) satisfies condition (5.17) }.
The rest of the proof is devoted to establishing this fact.
We must show that the system of inequalities
(Pλ)i/λi + R(αi, ν) < 0, for i ∈ Sm, m = 1, . . . , M,
has non-trivial solutions λ for sufficiently small ν. Thanks to Lemma 5, we have
R(αi, ν) = −1 + ν ai + o(ν). We will obtain (5.17) if, for ν sufficiently small,
∩_{m=1}^{M} { λ ∈ C : (Pλ)i/λi < 1 − ν ai for all i ∈ Sm } ≠ ∅. (5.20)
We seek a solution λ (for sufficiently small ν) under the Ansatz that the λj are
constant on every Sℓ, i.e. the vector λ has the form λˆ with λˆj = λ(ℓ) for all j ∈ Sℓ.
Suppose that i ∈ Sm. Then, p(i, j) = 0 for j ∈ Sℓ with ℓ < m − 1, so that
(Pλˆ)i = Σ_{j∈S} p(i, j) λˆj = Σ_{ℓ=0}^{M} λ(ℓ) P(i, Sℓ) = Σ_{ℓ=m−1}^{M} λ(ℓ) P(i, Sℓ). (5.21)
We introduce the auxiliary matrix ρ = (ρ_{k,ℓ}; k, ℓ ∈ {0, . . . , M}) defined by ρ_{k,ℓ} :=
λ(k)/λ(ℓ). By construction, ρ_{k,k} = 1 and ρ_{k,ℓ} = 1/ρ_{ℓ,k}. Let
L_{k,ℓ} := (1/ν) log ρ_{k,ℓ} = −(1/ν) log ρ_{ℓ,k} = (1/ν) log( λ(k)/λ(ℓ) ). (5.22)
It suffices to determine the upper triangular part of ρ, or, equivalently, the lower
triangular array (L_{k,ℓ}; 0 ≤ ℓ < k ≤ M). We do so recursively, starting with L_{M,M−1}.
In the case i ∈ SM, the condition in (5.20) reads, by (5.21),
ρ_{M−1,M} P(i, S_{M−1}) + P(i, SM) < 1 − ν ai,
while for i ∈ Sm with 1 ≤ m < M it reads
ρ_{m−1,m} P(i, S_{m−1}) < 1 − ν ai − P(i, Sm) − Σ_{ℓ=m+1}^{M} ρ_{ℓ,m} P(i, Sℓ).
Using the fact that for ℓ < m we have ρ_{ℓ,m} = exp(−ν L_{m,ℓ}) and for ℓ > m we have
ρ_{ℓ,m} = exp(ν L_{ℓ,m}), together with P(i, S_{m−1}) + P(i, Sm) + Σ_{ℓ=m+1}^{M} P(i, Sℓ) = 1
for i ∈ Sm, the above expression becomes, up to o(ν) terms (after dividing by ν > 0),
L_{m,m−1} > ( ai + Σ_{ℓ=m+1}^{M} L_{ℓ,m} P(i, Sℓ) ) / P(i, S_{m−1}).
Introducing the upper triangular matrix U = (U_{m,n}; 0 ≤ m < n ≤ M) defined by
U_{m,n} = max_{i∈Sm} P(i, Sn)/P(i, S_{m−1}) for m ≥ 1, and the vector Am = max_{i∈Sm} ai/P(i, S_{m−1}) for
m = 1, . . . , M, the condition in (5.20) is satisfied if we solve the recursion
L_{m,m−1} > (U L)_{m,m} + Am ,
for m = M, M − 1, . . . , 1,
with initial condition L_{M,M−1} > AM and condition L_{ℓ,m} > 0 for 0 ≤ m < ℓ ≤ M.
Additionally, we have from (5.22) that
L_{k,ℓ} = L_{k,k−1} + L_{k−1,k−2} + · · · + L_{ℓ+1,ℓ}, 0 ≤ ℓ < k ≤ M.
Hence, by Corollary 1, there exist non-trivial solutions for the lower triangular matrix
L within an algorithmically determined region L. The positivity of the lower triangular
part of L implies that the components of λ are ordered: λ(m) < λ(m+1) for 0 ≤ m ≤
M − 1. Given L, the ratios of the λ(k) are determined, and by construction, the λ(k)
are positive.
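The recursive construction is easy to make algorithmic. The following sketch (function and variable names are ours, and the toy instance is illustrative) builds the sub-diagonal entries L_{m,m−1} from m = M down to m = 1, with a fixed positive margin standing in for strict inequality; all other entries then follow from the telescoping identity (5.22):

```python
def build_L(U, A, margin=1.0):
    """Construct sub-diagonal entries sub[m] = L_{m,m-1} satisfying
    L_{m,m-1} > (U L)_{m,m} + A_m, working down from m = M to m = 1.
    U is an (M+1) x (M+1) nested list, nonzero only for 1 <= m < n <= M;
    A[1..M] are the constants (A[0] is unused)."""
    M = len(A) - 1
    sub = {}                                  # sub[m] holds L_{m,m-1}

    def L(k, l):                              # telescoping sum of sub-diagonals
        return sum(sub[j] for j in range(l + 1, k + 1))

    for m in range(M, 0, -1):
        ULmm = sum(U[m][n] * L(n, m) for n in range(m + 1, M + 1))
        sub[m] = ULmm + A[m] + margin         # strict inequality via the margin
    return sub

# a toy instance with M = 2
U = [[0, 0, 0], [0, 0, 0.5], [0, 0, 0]]
A = [0, 1.0, 2.0]
sub = build_L(U, A)
```

Because each sub-diagonal entry only involves entries already set at lower levels of the recursion, the construction never backtracks, which is the content of Corollary 1.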
We are almost ready to complete the proof of Theorem 1, excluding part (b)(iii);
first, we need one more technical result concerning non-confinement.
Lemma 8 Suppose that (A0) and (A1) hold. Then, lim sup_{n→∞} Xn = ∞, a.s.
Proof It suffices to show that, for any x ∈ R+, there exists ε_x > 0 such that
P[ X_{n+1} − Xn ≥ 1 | (Xn, ξn) = (y, i) ] ≥ ε_x, for all y ∈ [0, x] and all i ∈ S. (5.23)
Indeed, given (x, i) ∈ R+ × S, we may choose j ∈ S so that p(i, j) > 0 and we
may choose z0 ≥ x sufficiently large so that, for some ε > 0, vi(z) ≥ εz^{−1−αi} for all
z ≥ z0. Then if y ∈ [0, x],
P[ X_{n+1} ≥ y + 1 | (Xn, ξn) = (y, i) ] ≥ p(i, j) ∫_{z0+2x+1}^{∞} vi(z) dz > 0,
which gives (5.23). The local escape property (5.23) implies the lim sup result by a
standard argument: see, e.g. [16, Proposition 3.3.4].
Proof of Theorem 1 We are not yet ready to prove part (b)(iii): we defer that part of
the proof until Sect. 7.
The other parts of the theorem follow from the supermartingale estimates in this
section together with the technical results from the Appendix. Indeed, under the conditions
of part (a) or (b)(i) of the theorem, we have from Propositions 2 or 1(i), respectively,
that for suitable ν > 0 and λk ,
E[ fν(X_{n+1}, ξ_{n+1}) − fν(Xn, ξn) | Xn, ξn ] ≤ 0, on {Xn ≥ x0}.
Thus, we may apply Lemma 16, which together with Lemma 8 shows that
lim inf_{n→∞} Xn ≤ x0, a.s. Thus, there exists an interval I ⊆ [0, x0 + 1] such that
(Xn, ξn) ∈ I × {i} i.o., where i is some fixed element of S. Let τ0 := 0 and for
k ∈ N define τk = min{n > τ_{k−1} + 1 : (Xn, ξn) ∈ I × {i}}. Given i ∈ S, we
may choose j, k ∈ S such that p(i, j) > δ1 and p(j, k) > δ1 for some δ1 > 0; let
γ = αi ∨ αj. Then, we may choose δ2 ∈ (0, 1) and z0 ∈ R+ such that vi(z) > δ2 z^{−1−γ}
and vj(z) > δ2 z^{−1−γ} for all z ≥ z0. Then, for any ε ∈ (0, 1),
P[ X_{τk+2} < ε | F_{τk} ] ≥ δ1^2 δ2^2 ε (z0 + 3)^{−1−γ} (x0 + z0 + 3)^{−1−γ},
uniformly in k. Thus, Lévy’s extension of the Borel–Cantelli lemma shows Xn < ε
infinitely often. Thus, since ε ∈ (0, 1) was arbitrary, lim inf_{n→∞} Xn = 0, a.s.
On the other hand, under the conditions of part (b)(ii) of the theorem, we have from
Proposition 1(ii) that for suitable ν < 0 and λk ,
E[ fν(X_{n+1}, ξ_{n+1}) − fν(Xn, ξn) | Xn, ξn ] ≤ 0, on {Xn ≥ x1},
for any x1 sufficiently large. Thus, we may apply Lemma 17, which shows that for
any ε > 0 there exists x ∈ (x1, ∞) for which, for all n ≥ 0,
P[ Xm > x1 for all m ≥ n | Fn ] ≥ 1 − ε, on {Xn ≥ x}.
Set σ_x = min{n ≥ 0 : Xn ≥ x}. Then,
P[ lim inf_{m→∞} Xm ≥ x1 ] ≥ E[ P[ inf_{m≥σ_x} Xm > x1 | F_{σ_x} ] 1{σ_x < ∞} ] ≥ (1 − ε) P[σ_x < ∞] = 1 − ε,
by Lemma 8, since σ_x < ∞ a.s. Since ε > 0 was arbitrary, we get lim inf_{m→∞} Xm ≥ x1, a.s., and since
x1 was arbitrary we get lim_{m→∞} Xm = ∞, a.s.
6 Existence or Non-existence of Moments
The following result is a straightforward reformulation of Theorem 1 of [2].
Lemma 9 Let Yn be an integrable Fn-adapted stochastic process, taking values in an
unbounded subset of R+, with Y0 = x0 fixed. For x > 0, let σx := inf{n ≥ 0 : Yn ≤ x }.
Suppose that there exist δ > 0, x > 0, and γ < 1 such that for any n ≥ 0,
E[ Y_{n+1} − Yn | Fn ] ≤ −δ Yn^γ, on {n < σ_x}. (6.1)
Then, for any p ∈ [0, 1/(1 − γ)), E[σ_x^p] < ∞.
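A deterministic caricature illustrates the timescale behind Lemma 9: if the drift bound (6.1) held with equality and without noise, Y would follow the recursion Y_{n+1} = Yn − δ Yn^γ, whose passage time to level x is of order Y0^{1−γ}/(δ(1 − γ)). The parameter values below are illustrative, not from the paper:

```python
def passage_time(y0, delta=0.1, gamma=0.5, x=1.0):
    """Number of steps for the deterministic recursion y -> y - delta*y**gamma
    to first reach level x, starting from y0 > x."""
    y, n = y0, 0
    while y > x:
        y -= delta * y ** gamma
        n += 1
    return n

t = passage_time(10000.0)
# continuum (ODE) approximation: y0^(1-gamma) / (delta * (1-gamma))
approx = 10000.0 ** 0.5 / (0.1 * 0.5)
```

The polynomial-in-x timescale x^{1/(1−γ)} is what limits the moments of σ_x to orders p < 1/(1 − γ) in the stochastic setting.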
The following companion result on non-existence of moments is a reformulation
of Corollary 1 of [2].
Lemma 10 Let Yn be an integrable Fn -adapted stochastic process, taking values in an
unbounded subset of R+, with Y0 = x0 fixed. For x > 0, let σx := inf{n ≥ 0 : Yn ≤ x }.
Suppose that there exist C1, C2 > 0, x > 0, p > 0, and r > 1 such that for any n ≥ 0,
on {n < σ_x} the following hold:
E[ Y_{n+1} − Yn | Fn ] ≥ −C1; (6.2)
E[ Y_{n+1}^r − Yn^r | Fn ] ≤ C2 Yn^{r−1}; (6.3)
E[ Y_{n+1}^p − Yn^p | Fn ] ≥ 0. (6.4)
Then, for any q > p, E[σ_x^q] = ∞ for x0 > x.
6.2 Proof of Theorem 2
Proof of Theorem 2 Under conditions (a) or (b)(i) of Theorem 1, we have from
Propositions 2 or 1, respectively, that there exist positive λk and constants ε > 0, β > 0 and
ν ∈ (0, β) such that
D fν(x, i) ≤ −εx^{ν−β}, for all x ≥ x0.
Let Yn = fν(Xn, ξn). Then, Yn is bounded above and below by positive constants times
(1 + Xn)^ν, so we have that (6.1) holds for x sufficiently large with γ = 1 − (β/ν).
It follows from Lemma 9 that E[σ_x^p] < ∞ for p ∈ (0, ν/β), which gives the claimed
existence of moments result.
It is not hard to see that some moments of the return time fail to exist, due to the
heavy-tailed nature of the model, and an argument is easily constructed using the ‘one
big jump’ idea: a similar idea is used in [14]. We sketch the argument. For any x, i,
for all y sufficiently large we have P_{x,i}[X1 ≥ y − x] ≥ ε y^{−ᾱ}. Given such a first
jump, with uniformly positive probability the process takes time at least of order y^β
to return to a neighbourhood of zero (where β can be bounded in terms of α); this can
be proved using a suitable maximal inequality as in the proof of Theorem 2.10 of [14].
Combining these two facts shows that with probability of order y^{−ᾱ} the return time
to a neighbourhood of the origin exceeds order y^β. This polynomial tail bound yields
non-existence of sufficiently high moments.
6.3 Explicit Cases: Theorems 4 and 6
We now restrict attention to the case S = {1, 2} with α1 = α2 = α and χ1 = χ2 = χ,
so both half-lines are of the same type. Take λ1 = λ2 = 1 and ν ∈ (0, α), so that
fν(x, i) = (1 + x)^ν. Then, Lemma 5 shows that, for i ∈ S^⋆, ⋆ ∈ {sym, one},
D fν(x, i) = χ ci x^{ν−α} C^⋆(ν, α) + o(x^{ν−α}), (6.5)
where
C^sym(ν, α) = i_{2,1}^{ν,α} + i_0^{ν,α} + i_1^{ν,α} − i_{2,0}^{α};
C^one(ν, α) = i_{2,1}^{ν,α} + ĩ_1^{ν,α} − i_{2,0}^{α}.
The two cases we are interested in are the recurrent two-sided symmetric case,
where χ = 1/2 (i.e. S = S^sym) with α > 1, and the recurrent one-sided antisymmetric
case, where χ = 1 (i.e. S = S^one) with α > 1/2.
Lemma 11 Let ⋆ ∈ {sym, one} and χα ∈ (1/2, 1). The function ν ↦ C^⋆(ν, α) is
continuous for ν ∈ [0, α) with C^⋆(0, α) = 0 and lim_{ν↑α} C^⋆(ν, α) = ∞. There exists
ν0 = ν0(α) ∈ (0, α) such that C^⋆(ν, α) < 0 for ν ∈ (0, ν0), C^⋆(ν0, α) = 0, and
C^⋆(ν, α) > 0 for ν ∈ (ν0, α).
Proof We give the proof only in the case ⋆ = sym; the other case is very similar. Thus,
χ = 1/2, and for ease of notation, we write just C instead of C^⋆.
Clearly C(0, α) = 0. For ν ≥ 1, convexity of the function z ↦ z^ν on R+ shows
that (1 + u)^ν + (1 − u)^ν − 2 ≥ 0 for all u ∈ [0, 1], so that i_1^{ν,α} ≥ 0; clearly, i_{2,1}^{ν,α} and
i_0^{ν,α} are also non-negative. Hence, by the expression for i_{2,1}^{ν,α} in Lemma 1, the coefficient
of ν in the expansion of C(ν, α) about ν = 0 is proportional to cot(πα/2),
which is negative for α ∈ (1, 2). Hence, C(ν, α) < 0 for ν > 0 small enough.
Since ν → C (ν, α) is a non-constant analytic function on [0, α), its zeros can
accumulate only at α, but this is ruled out by the fact that C (ν, α) → ∞ as ν → α.
Hence, C ( · , α) has only finitely many zeros in [0, α); one is at 0, and there must be
at least one zero in (0, α), by Rolle’s theorem. Define ν− := ν−(α) and ν+ := ν+(α)
to be the smallest and largest such zeros, respectively.
Suppose 0 < ν1 ≤ ν2 < α. By Jensen’s inequality,
E_{x,i}[(1 + X1)^{ν2}] ≥ ( E_{x,i}[(1 + X1)^{ν1}] )^{ν2/ν1}, so that
(1 + x)^{ν2} + D f_{ν2}(x, i) ≥ ( (1 + x)^{ν1} + D f_{ν1}(x, i) )^{ν2/ν1}
= (1 + x)^{ν2} ( 1 + (1 + x)^{−ν1} D f_{ν1}(x, i) )^{ν2/ν1}
= (1 + x)^{ν2} + (ν2/ν1) x^{ν2−ν1} D f_{ν1}(x, i) + o(x^{ν2−α}),
using Taylor’s theorem and the fact that D f_{ν1}(x, i) = O(x^{ν1−α}), by (6.5). By another
application of (6.5), it follows that
χ ci x^{ν2−α} C(ν2, α) + o(x^{ν2−α}) ≥ (ν2/ν1) χ ci x^{ν2−α} C(ν1, α) + o(x^{ν2−α}).
Multiplying by x^{α−ν2} and taking x → ∞, we obtain
C(ν2, α) ≥ (ν2/ν1) C(ν1, α), for 0 < ν1 ≤ ν2 < α.
In particular, (i) if C (ν1, α) ≥ 0 then C (ν, α) ≥ 0 for all ν ≥ ν1 > 0; and (ii) if
C (ν1, α) > 0 and ν2 > ν1, we have C (ν2, α) > C (ν1, α). It follows from these two
observations that C (ν, α) = 0 for ν ∈ [ν−, ν+], which is not possible unless ν− = ν+.
Hence, there is exactly one zero of C ( · , α) in (0, α); call it ν0(α).
Lemma 12 The positive zero of C^⋆( · , α) described in Lemma 11 is given by
ν0^{one}(α) = 2α − 1 or ν0^{sym}(α) = α − 1.
Proof First suppose ⋆ = one. Then, from Lemma 1 we verify that for α ∈ (1/2, 1),
C^one(2α − 1, α) = 0,
since αΓ(α) = Γ(1 + α).
Now, suppose that ⋆ = sym and α ∈ (1, 2). To verify C^sym(α − 1, α) = 0, it is
simpler to work with the integral representations directly, rather than the hypergeometric
functions. After the substitution z = 1/u, the relevant integrals reduce to
∫_1^∞ ( (z + 1)^{α−1} + (z − 1)^{α−1} − 2z^{α−1} ) dz,
which we may evaluate as
∫_1^∞ ( (z + 1)^{α−1} + (z − 1)^{α−1} − 2z^{α−1} ) dz = lim_{y→∞} (1/α) [ (z + 1)^α + (z − 1)^α − 2z^α ]_{z=1}^{z=y}
= (2 − 2^α)/α + (1/α) lim_{y→∞} y^α ( (1 + y^{−1})^α + (1 − y^{−1})^α − 2 ) = (2 − 2^α)/α,
since the last limit vanishes for α < 2. Finally, we have that i_{2,1}^{α−1,α} − i_{2,0}^{α} = Γ(α)/Γ(1 + α) − 1/α = 0, so altogether we verify that
C^sym(α − 1, α) = 0.
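The evaluation step above, namely that ((z + 1)^α + (z − 1)^α − 2z^α)/α is an antiderivative of the integrand and vanishes at infinity, so that the integral over [1, ∞) equals (2 − 2^α)/α, can be checked numerically; the value α = 1.5 below is just a sample value of ours:

```python
import math

def integrand(z, a):
    return (z + 1) ** (a - 1) + (z - 1) ** (a - 1) - 2 * z ** (a - 1)

def F(z, a):
    # antiderivative used in the proof of Lemma 12
    return ((z + 1) ** a + (z - 1) ** a - 2 * z ** a) / a

a = 1.5
claimed = (2 - 2 ** a) / a     # claimed value of the integral over [1, infinity)
```

Central finite differences confirm F' = integrand, and F(1) = (2^α − 2)/α with F vanishing at infinity for α < 2.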
We can now complete the proofs of Theorems 4 and 6.
Proof of Theorem 4 Let Yn = fν(Xn, ξn). First suppose that α ∈ (1, 2). Then, we
have from (6.5) together with Lemmas 11 and 12 that, for any ν ∈ (0, α − 1),
E[ Y_{n+1} − Yn | Fn ] ≤ −ε Yn^{1−(α/ν)}, on {Yn ≥ y0},
for some ε > 0 and y0 ∈ R+. It follows from Lemma 9 that E[σ^p] < ∞ for p < ν/α
and since ν < α − 1 was arbitrary we get E[σ^p] < ∞ for p < 1 − (1/α).
For the non-existence of moments when α ∈ (1, 2), we will apply Lemma 10 with
Yn = fν(Xn, ξn) = (1 + Xn)^ν for some ν ∈ (0, α). Then, condition (6.2) follows
from (6.5), which also shows that for r ∈ (1, α/ν),
E[ Y_{n+1}^r − Yn^r | Fn ] ≤ ci C^sym(rν, α) Yn^{r−(α/ν)}, for all Yn sufficiently large.
Since α/ν > 1, condition (6.3) follows. Finally, we may choose ν < α close enough
to α and then take γ ∈ (α − 1, ν) so that from (6.5), with Lemmas 11 and 12,
E[ Y_{n+1}^{γ/ν} − Yn^{γ/ν} | Fn ] ≥ 0, for all Yn sufficiently large. Thus, we may apply Lemma 10
to obtain E[σ^p] = ∞ for p > γ/ν, and taking γ close to α − 1 and ν close to α we
can achieve any p > 1 − (1/α), as claimed.
Next, suppose that α ≥ 2. A similar argument to before, but this time using Lemmas 3
and 4, shows that for any ν ∈ (0, 1),
E[ Y_{n+1} − Yn | Fn ] ≤ −ε Yn^{1−(2/ν)}, on {Yn ≥ y0},
for some ε > 0 and y0 ∈ R+. Lemma 9 then shows that E[σ^p] < ∞ for p < ν/2 and
since ν ∈ (0, 1) was arbitrary we get E[σ^p] < ∞ for p < 1/2.
We sketch the argument for E[σ p] = ∞ when p > 1/2. For ν ∈ (1, α), it is not
hard to show that D fν (x , i ) ≥ 0 for all x sufficiently large, and we may verify the
other conditions of Lemma 10 to show that E[σ p] = ∞ for p > 1/2.
Proof of Theorem 6 Most of the proof is similar to that of Theorem 4, so we omit the
details. The case where a different argument is required is the non-existence part of
the case α ≥ 1. We have that for some ε > 0 and all y sufficiently large, P_{x,i}[X1 ≥
y] ≥ ε(x + y)^{−α}. A similar argument to Lemma 4 shows that for any ν ∈ (0, 1), for
some C ∈ R+, E[ X_{n+1}^ν − Xn^ν | Xn = x ] ≥ −C. Then, a suitable maximal inequality
implies that, with probability at least 1/2, started from X1 ≥ y it takes at least cy^ν
steps for Xn to return to a neighbourhood of 0, for some c > 0. Combining the two
estimates gives a lower bound of order y^{−α} on the probability that the return time exceeds cy^ν,
which implies E[σ^p] = ∞ for p ≥ α/ν, and since ν ∈ (0, 1) was arbitrary, we can
achieve any p > α.
7 Recurrence Classification in the Critical Cases
In this section, we prove Theorem 1(b)(iii). Throughout this section, we write ak :=
cot(χk π αk) and suppose that max_{k∈S} χk αk < 1, that Σ_{k∈S} μk ak = 0, and that
vi ∈ D^+_{αi, ci} for all i ∈ S; that is, for y > 0, vi(y) = ci(y) y^{−1−αi}, with αi ∈ (0, ∞)
and ci(y) = ci + O(y^{−δ}), where δ > 0 may be chosen so as not to depend upon i.
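For orientation, the quantity whose sign drives the recurrence classification, Σ_k μk cot(χk π αk), is easy to evaluate. The sketch below is our own illustration with illustrative parameters; it handles two half-lines and recovers the classical dichotomy for a symmetric homogeneous walk (recurrent for α > 1, transient for α < 1):

```python
import math

def criterion(mu, chi, alpha):
    """Sum of mu_k * cot(chi_k * pi * alpha_k); requires chi_k*alpha_k in (0,1).
    A negative sign corresponds to recurrence, a positive sign to transience."""
    return sum(m * math.cos(c * math.pi * a) / math.sin(c * math.pi * a)
               for m, c, a in zip(mu, chi, alpha))

# two two-sided (symmetric) half-lines, chi = 1/2; stationary distribution of
# the 2x2 transmission matrix with off-diagonal entries p12, p21:
p12, p21 = 0.4, 0.2
mu = (p21 / (p12 + p21), p12 / (p12 + p21))
rec = criterion(mu, (0.5, 0.5), (1.5, 1.5))   # both tail exponents alpha = 1.5
tra = criterion(mu, (0.5, 0.5), (0.7, 0.7))   # both tail exponents alpha = 0.7
```

This section treats exactly the critical case in which the sum above vanishes.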
To prove recurrence in the critical cases, we need a function that grows more slowly
than any power; now, the weights λk are additive rather than multiplicative. For x ∈ R,
write g(x) := log(1 + |x|). Then, for x ∈ R+ and k ∈ S, define
g(x, k) := g(x) + λk = log(1 + |x|) + λk,
where λk > 0 for all k ∈ S. Also write
h(x, k) := (g(x, k))^{1/2} = (log(1 + |x|) + λk)^{1/2}.
Suppose that the λk are chosen so that
ai + Σ_{j∈S} p(i, j)(λj − λi) = 0, for all i ∈ S; (7.2)
since Σ_{k∈S} μk ak = 0, Lemma 6 shows that such a choice is possible, and
we fix such a choice of the λk from now on.
We prove recurrence by establishing the following result.
Lemma 13 Suppose that the conditions of Theorem 1(b)(iii) hold, and that (λk ; k ∈
S) are such that (7.2) holds. Then, there exists x0 ∈ R+ such that
E h (Xn+1, ξn+1) − h (Xn, ξn) | Xn = x , ξn = i ≤ 0, for x ≥ x0
and all i ∈ S.
Dg(x, i) = E[ g(X_{n+1}, ξ_{n+1}) − g(Xn, ξn) | (Xn, ξn) = (x, i) ]
may be decomposed into a sum over j ∈ S of integral terms such as
∫_{−x}^{∞} (g(x + y) − g(x)) wi(y) dy, (7.4)
where we have used the fact that g(x) is defined for all x ∈ R and symmetric about 0,
and we have introduced the notation Ti(x) and Gi(x) for the integral terms in (7.4).
The next lemma, proved in the next subsection, estimates the integrals in (7.4).
Lemma 14 Suppose that vi ∈ D^+_{αi, ci}. Then, for some η > 0, as x → ∞,
Note that Lemma 14 together with (7.3) shows that
Dg(x, i) = ci ( ai + Σ_{j∈S} p(i, j)(λj − λi) ) x^{−αi} + O(x^{−αi−η}) = O(x^{−αi−η}),
by (7.2). This is not enough by itself to establish recurrence, since the sign of the
O(x^{−αi−η}) term is unknown. This is why we need the function h(x, i).
Proof of Lemma 14 The claim in the lemma for ⋆ = sym will follow from the estimates
∫_0^∞ (g(x + y) − g(x)) v(y) dy = c x^{−α} j_0^α + O(x^{−α−η});
∫_0^x (g(x − y) − g(x)) v(y) dy = c x^{−α} j_2^α + O(x^{−α−η});
∫_0^x (g(x + y) + g(x − y) − 2g(x)) v(y) dy = c x^{−α} j_1^α + O(x^{−α−η}); (7.7)
since then we obtain from (7.6) with (7.7) and Lemma 2 the stated result, via the digamma
reflection formula (equation 6.3.7 from [1, p. 259]). We present here in detail the proof of only the final estimate in (7.7); the
others are similar. Some algebra followed by the substitution u = y/(1 + x) shows
that the third integral in (7.7) is
(1 + x)^{−α} ∫_0^{x/(1+x)} log(1 − u²) c(u(1 + x)) u^{−1−α} du.
For the part of the integral over u ≤ √x/(1 + x), we have
| ∫_0^{√x/(1+x)} log(1 − u²) c(u(1 + x)) u^{−1−α} du | ≤ C ∫_0^{√x/(1+x)} u^{1−α} du = O(x^{(α/2)−1}),
using Taylor’s theorem for log and the fact that c(y) is uniformly bounded. On the
other hand, for u > √x/(1 + x) we have c(u(1 + x)) = c + O(x^{−δ/2}), so that the remaining
part of the integral is c + O(x^{−δ/2}) times
∫_{√x/(1+x)}^{x/(1+x)} log(1 − u²) u^{−1−α} du = j_1^α + O(x^{(α/2)−1}) + O(x^{−1} log x).
Combining these estimates and using the fact that α ∈ (0, 2), we obtain the final
estimate in (7.7). The claim in the lemma for ⋆ = one follows after some analogous
computations, which we omit.
Now, we relate Dh(x , i ) to Dg(x , i ), by comparing the individual integral terms.
Lemma 15 For some ε > 0, all x sufficiently large, and all i, j ∈ S,
∫_0^∞ ( h(x + y, i) − h(x, i) ) vi(y) dy ≤ (1/(2h(x, i))) ∫_0^∞ ( g(x + y) − g(x) ) vi(y) dy;
∫_0^x ( h(x − y, i) − h(x, i) ) vi(y) dy ≤ (1/(2h(x, i))) ∫_0^x ( g(x − y) − g(x) ) vi(y) dy;
∫_0^x ( h(x + y, i) + h(x − y, i) − 2h(x, i) ) vi(y) dy ≤ (1/(2h(x, i))) ∫_0^x ( g(x + y) + g(x − y) − 2g(x) ) vi(y) dy;
∫_x^∞ ( h(y − x, j) − h(x, i) ) vi(y) dy ≤ (1/(2h(x, i))) ∫_x^∞ ( g(y − x, j) − g(x, i) ) vi(y) dy − ε x^{−αi} (log x)^{−3/2}.
Proof Since h( · , i)² = g( · , i), we have, for y ≥ 0,
h(x + y, i) − h(x, i) = ( g(x + y) − g(x) ) / ( h(x + y, i) + h(x, i) ) ≤ ( g(x + y) − g(x) ) / ( 2h(x, i) ), (7.8)
since g(x + y) − g(x) ≥ 0 and h(x + y, i) ≥ h(x, i). This gives the first inequality
in the lemma. Similarly, for y ∈ [0, x],
h(x − y, i) − h(x, i) = ( g(x − y) − g(x) ) / ( h(x − y, i) + h(x, i) ) ≤ ( g(x − y) − g(x) ) / ( 2h(x, i) ),
since g(x − y) − g(x) ≤ 0 and h(x − y, i) ≤ h(x, i). This gives the second inequality
and also yields the third inequality once combined with the y ∈ [0, x] case of (7.8).
Finally, for y ≥ x note that
h(y − x, j) − h(x, i) = ( g(y − x, j) − g(x, i) ) / ( h(y − x, j) + h(x, i) ). (7.9)
Also note that, for y ≥ x > 0, g(x, i) = g(x) + λi and g(y − x, j) = g(y − x) + λj,
so
g(y − x, j) − g(x, i) = log ( e^{λj−λi} (1 + y − x)/(1 + x) ).
So the sign of the expression in (7.9) is non-positive for y ≤ ψ(x) := x − 1 + (1 +
x)e^{λi−λj} and non-negative for y ≥ ψ(x), and
g(ψ(x) − x, j) = g(x, i), so that h(ψ(x) − x, j) = h(x, i). (7.10)
By the monotonicity in y of the denominator, the expression in (7.9) satisfies
( g(y − x, j) − g(x, i) ) / ( h(y − x, j) + h(x, i) ) ≤ ( g(y − x, j) − g(x, i) ) / ( h(ψ(x) − x, j) + h(x, i) ),
both for y ∈ [x, ψ(x)] and for y ∈ [ψ(x), ∞). Here h(ψ(x) − x, j) = h(x, i), by
(7.10). Hence, we obtain the bound
∫_x^∞ ( h(y − x, j) − h(x, i) ) vi(y) dy ≤ (1/(2h(x, i))) ∫_x^∞ ( g(y − x, j) − g(x, i) ) vi(y) dy.
To improve on this estimate, suppose that y ≥ K x, where K ∈ N is such that K x >
ψ(x). Then, using the fact that the numerator in (7.9) is positive, we may choose
K ∈ N such that, for all j and all y ≥ K x,
h(y − x, j) − h(x, i) = ( g(y − x, j) − g(x, i) ) / ( h(y − x, j) + h(x, i) ) ≤ ( g(y − x, j) − g(x, i) ) / ( h((K − 1)x, j) + h(x, i) ).
Moreover, for all x sufficiently large,
h((K − 1)x, j) ≥ ( log(1 + |x|) + λi + 1 )^{1/2} = h(x, i) ( 1 + 1/g(x, i) )^{1/2} ≥ h(x, i) + 1/(4h(x, i)),
so that, for y ≥ K x,
h(y − x, j) − h(x, i) ≤ ( g(y − x, j) − g(x, i) ) / ( 2h(x, i) ) − ε ( g(y − x, j) − g(x, i) ) / ( h(x, i) )³,
for some ε > 0. The final inequality in the lemma now follows since, for all x
sufficiently large, the substitution y = ux yields
∫_{Kx}^∞ ( g(y − x, j) − g(x, i) ) vi(y) dy ≥ (ci/2) x^{−αi} ∫_K^∞ log ( e^{λj−λi} (u − 1) ) u^{−1−αi} du > 0,
provided K is large enough that the integrand is positive, while ( h(x, i) )^{−3} is of
order (log x)^{−3/2}.
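The identity (7.10) is exact, not merely asymptotic: with ψ(x) = x − 1 + (1 + x)e^{λi−λj} one has 1 + (ψ(x) − x) = (1 + x)e^{λi−λj}, whence g(ψ(x) − x, j) = g(x, i). A quick numerical check, with sample weights of our own choosing:

```python
import math

def g(x, lam):                       # g(x, k) = log(1 + |x|) + lambda_k
    return math.log(1 + abs(x)) + lam

def h(x, lam):                       # h(x, k) = g(x, k)^(1/2)
    return g(x, lam) ** 0.5

lam_i, lam_j = 0.3, 0.8              # illustrative weights
checks = []
for x in (10.0, 100.0, 1e4):
    psi = x - 1 + (1 + x) * math.exp(lam_i - lam_j)
    checks.append(abs(h(psi - x, lam_j) - h(x, lam_i)))
```

This exactness is what allows the single comparison point ψ(x) to control both sign regimes of (7.9).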
Now, we may complete the proofs of Lemma 13 and then Theorem 1(b)(iii).
Proof of Lemma 13 Lemma 15 together with (7.5) shows that
Dh(x, i) ≤ (1/(2h(x, i))) Dg(x, i) − ε x^{−αi} (log x)^{−3/2} ≤ 0,
for all x sufficiently large, since Dg(x, i) = O(x^{−αi−η}).
Proof of Theorem 1(b)(iii) Lemma 13 with Lemma 16 shows that lim inf_{n→∞} Xn ≤
x0, a.s., and then, a similar argument to that in the proof of parts (a) and (b)(i) of
Theorem 1 shows that lim inf_{n→∞} Xn = 0, a.s.
Acknowledgements We are grateful to the anonymous referee for a careful reading of the manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix: Technical Results
Semimartingale Results
Lemma 16 Let (Xn, ξn) be an Fn-adapted process taking values in R+ × S. Let f :
R+ × S → R+ be such that lim_{x→∞} f(x, i) = ∞ for all i ∈ S, and E f(X0, ξ0) < ∞.
Suppose that there exist x0 ∈ R+ and C < ∞ for which, for all n ≥ 0,
E[ f(X_{n+1}, ξ_{n+1}) − f(Xn, ξn) | Fn ] ≤ 0, on {Xn > x0}, a.s.;
E[ f(X_{n+1}, ξ_{n+1}) − f(Xn, ξn) | Fn ] ≤ C, on {Xn ≤ x0}, a.s.
Then, P[ {lim sup_{n→∞} Xn < ∞} ∪ {lim inf_{n→∞} Xn ≤ x0} ] = 1.
Proof First note that, by hypothesis, E f (X1, ξ1) ≤ E f (X0, ξ0) + C < ∞, and
iterating this argument, it follows that E f (Xn, ξn) < ∞ for all n ≥ 0.
Fix n ∈ Z+. For x0 ∈ R+ in the hypothesis of the lemma, write λ = min{m ≥ n :
Xm ≤ x0}. Let Ym = f (Xm∧λ, ξm∧λ). Then, (Ym , m ≥ n) is an (Fm , m ≥ n)-adapted
non-negative supermartingale. Hence, by the supermartingale convergence theorem,
there exists Y∞ ∈ R+ such that lim_{m→∞} Ym = Y∞, a.s. In particular,
lim sup_{m→∞} f(Xm, ξm) ≤ Y∞, on {λ = ∞}.
Set ζi = sup{x ≥ 0 : f(x, i) ≤ 1 + Y∞}, which has ζi < ∞ a.s. since
lim_{x→∞} f(x, i) = ∞. Then, lim sup_{m→∞} Xm ≤ max_i ζi < ∞ on {λ = ∞}. Hence
P[ {lim sup_{m→∞} Xm < ∞} ∪ {inf_{m≥n} Xm ≤ x0} ] = 1.
Since n ∈ Z+ was arbitrary, the result follows:
P[ {lim sup_{m→∞} Xm < ∞} ∪ ∩_{n≥0} {inf_{m≥n} Xm ≤ x0} ] = 1,
and ∩_{n≥0} {inf_{m≥n} Xm ≤ x0} = {lim inf_{m→∞} Xm ≤ x0}.
This completes the proof.
Lemma 17 Let (Xn, ξn) be an Fn-adapted process taking values in R+ × S. Let
f : R+ × S → R+ be such that sup_{x,i} f(x, i) < ∞ and lim_{x→∞} f(x, i) = 0 for all
i ∈ S. Suppose that there exists x1 ∈ R+ for which inf_{y≤x1} f(y, i) > 0 for all i, and
E[ f(X_{n+1}, ξ_{n+1}) − f(Xn, ξn) | Fn ] ≤ 0 on {Xn > x1}, for all n ≥ 0. Then, for any
ε > 0 there exists x ∈ (x1, ∞) such that, for all n ≥ 0,
P[ Xm > x1 for all m ≥ n | Fn ] ≥ 1 − ε, on {Xn ≥ x}.
Proof Fix n ∈ Z+. For x1 ∈ R+ in the hypothesis of the lemma, write λ = min{m ≥
n : Xm ≤ x1} and set Ym = f(X_{m∧λ}, ξ_{m∧λ}). Then, (Ym, m ≥ n) is an (Fm, m ≥
n)-adapted non-negative supermartingale, and so converges a.s. as m → ∞ to some
Y∞ ∈ R+. Moreover, by the optional stopping theorem for supermartingales,
E[ Y∞ | Fn ] ≤ Yn = f(Xn, ξn).
Here, we have that, a.s.,
Y∞ 1{λ < ∞} = lim_{m→∞} Ym 1{λ < ∞} = f(Xλ, ξλ) 1{λ < ∞} ≥ ( min_i inf_{y≤x1} f(y, i) ) 1{λ < ∞}.
Combining these inequalities we obtain
P[ λ < ∞ | Fn ] ≤ f(Xn, ξn) / ( min_i inf_{y≤x1} f(y, i) ).
Since lim_{y→∞} f(y, i) = 0 and inf_{y≤x1} f(y, i) > 0, given ε > 0 we can choose
x > x1 large enough so that
( max_i sup_{y≥x} f(y, i) ) / ( min_i inf_{y≤x1} f(y, i) ) < ε;
the choice of x depends only on f, x1, and ε, and, in particular, does not depend on
n. Then, on {Xn ≥ x}, P[ λ < ∞ | Fn ] < ε, as claimed.
Proofs of Integral Computations
Proof of Lemma 1 For the expression for i_1^{ν,α}, start from the binomial expansion
( (1 + u)^ν + (1 − u)^ν − 2 ) u^{−1−α} = 2 Σ_{n≥1} binom(ν, 2n) u^{2n−1−α},
where binom(ν, 2n) = ν(ν − 1) · · · (ν − 2n + 1)/(2n)! denotes the generalized binomial coefficient.
Here, the power series converges normally (hence uniformly) over |u| ≤
1. This remark allows interchanging summation and integration to obtain i_1^{ν,α} =
W^{ν,α}(1), where for |z| ≤ 1 we define
W^{ν,α}(z) := 2 Σ_{n≥1} binom(ν, 2n) z^n/(2n − α)
= (1/(2 − α)) ν(ν − 1) z ( 1 + ((2 − α)/(4 − α)) (2/4!) (ν − 2)(ν − 3) z
+ ((2 − α)/(6 − α)) (2/6!) (ν − 2)(ν − 3)(ν − 4)(ν − 5) z² + · · · )
= (1/(2 − α)) ν(ν − 1) z Σ_{n≥0} cn z^n,
where cn = ((2 − α)/(2n + 2 − α)) (2/(2(n + 1))!) (ν − 2)(ν − 3) · · · (ν − 2n − 1), for n ≥ 1, and c0 = 1.
An elementary computation yields
c_{n+1}/cn = ( (n + 1 − α/2)(n + 1 − ν/2)(n + 3/2 − ν/2) ) / ( (n + 2 − α/2)(n + 2)(n + 3/2) ).
Therefore (see [3, p. 10] or [12, equation 5.81, p. 207] for a more easily accessible
reference), Σ_{n≥0} cn z^n = 4F3( 1, 1 − ν/2, (3 − ν)/2, 1 − α/2; 2, 3/2, 2 − α/2; z ). Now, the series
defining the generalized hypergeometric function pFq(β1, . . . , βp; γ1, . . . , γq; z) for
p = q + 1 converges for all z with |z| < 1; for z = 1, the series converges provided
Σ_{i=1}^{q} γi > Σ_{j=1}^{p} βj, a condition that reduces to ν > −1 in the present case. Hence,
i_1^{ν,α} = (ν(ν − 1)/(2 − α)) · 4F3( 1, 1 − ν/2, (3 − ν)/2, 1 − α/2; 2, 3/2, 2 − α/2; 1 ).
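The term-ratio computation that identifies the 4F3 parameters can be verified numerically against the explicit formula for cn; the values of ν and α below are sample values of ours:

```python
import math

def c(n, nu, al):
    """Explicit coefficient c_n (with c_0 = 1)."""
    if n == 0:
        return 1.0
    prod = 1.0
    for j in range(2, 2 * n + 2):        # (nu-2)(nu-3)...(nu-2n-1)
        prod *= (nu - j)
    return (2 - al) / (2 * n + 2 - al) * 2 / math.factorial(2 * (n + 1)) * prod

def ratio(n, nu, al):
    """Closed-form c_{n+1}/c_n matching the 4F3 parameters."""
    return ((n + 1 - al / 2) * (n + 1 - nu / 2) * (n + 1.5 - nu / 2)) / \
           ((n + 2 - al / 2) * (n + 2) * (n + 1.5))

nu, al = 0.8, 1.3
errs = [abs(c(n + 1, nu, al) / c(n, nu, al) - ratio(n, nu, al)) for n in range(6)]
```

Each factor in the ratio is a shifted numerator or denominator parameter of the hypergeometric series, which is how the 4F3 representation is read off.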
Next, for the expression for ĩ_1^{ν,α}, note that for t ∈ (0, 1),
∫_t^1 (1 − u)^ν u^{−1−α} du = ( (1 − t)^{1+ν}/(1 + ν) ) 2F1( 1 + α, 1 + ν; 2 + ν; 1 − t ),
provided ν > −1, by the integral representation for the Gauss hypergeometric function
(equation 15.3.1 of [1, p. 558]). Now, by equation 15.3.6 from [1, p. 559],
2F1( 1 + α, 1 + ν; 2 + ν; 1 − t ) = ( Γ(2 + ν)Γ(−α)/Γ(1 + ν − α) ) (1 − t)^{−1−ν} + ( Γ(2 + ν)Γ(α)/(Γ(1 + α)Γ(1 + ν)) ) t^{−α} 2F1( 1 + ν − α, 1; 1 − α; t ).
Letting t ↓ 0 and using the fact that −αΓ(−α) = Γ(1 − α) we obtain the given
expression for ĩ_1^{ν,α}.
Proof of Lemma 2 We appeal to tables of standard integrals (Mellin transforms) from
Section 6.4 of [8]. In particular, the given formulae for j_0^α, j_2^α, and j̃_1^α follow from,
respectively, equations 6.4.17, 6.4.20, and 6.4.19 of [8, pp. 315–316]; also used are
formulas 6.3.7 and 6.3.8 from [1, p. 259] and the fact that ψ(1) = −γ. Lastly, for j_1^α
we use the substitution u² = s to obtain the stated formula.
This completes the proof.
Finally, we need the following elementary fact.
Lemma 18 Let A ⊆ R+ be a Borel set, and let f and g be measurable functions from
A to R. Suppose that there exist constants g− and g+ with 0 < g− < g+ < ∞ such
that g− ≤ g(u) ≤ g+ for all u ∈ A, and that ∫_A |f(u)| du < ∞. Then,
| ∫_A f(u) g(u) du − g− ∫_A f(u) du | ≤ (g+ − g−) ∫_A |f(u)| du.
Proof First note that ∫_A |f(u) g(u)| du ≤ g+ ∫_A |f(u)| du < ∞. Then,
| ∫_A f(u) g(u) du − g− ∫_A f(u) du | = | ∫_A f(u) (g(u) − g−) du | ≤ ∫_A |f(u)| (g(u) − g−) du ≤ (g+ − g−) ∫_A |f(u)| du.
This completes the proof.
References
1. Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions, National Bureau of Standards, Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington D.C. (1965)
2. Aspandiiarov , S. , Iasnogorodski , R. , Menshikov , M. : Passage-time moments for nonnegative stochastic processes and an application to reflected random walks in a quadrant . Ann. Probab . 24 , 932 - 960 ( 1996 )
3. Bailey , W.N. : Generalized Hypergeometric Series, Cambridge Tracts in Mathematics and Mathematical Physics, no. 32 . Cambridge University Press, Cambridge ( 1935 )
4. Böttcher , B. : An overshoot approach to recurrence and transience of Markov processes . Stoch. Process. Appl . 121 , 1962 - 1981 ( 2011 )
5. Campanino , M. , Petritis , D. : Random walks on randomly oriented lattices . Markov Process. Relat. Fields 9 , 391 - 412 ( 2003 )
6. Campanino , M. , Petritis , D. : On the Physical Relevance of Random Walks: An Example of Random Walks on a Randomly Oriented Lattice , Random Walks and Geometry , pp. 393 - 411 . Walter de Gruyter, Berlin ( 2004 )
7. Chung , K.L. , Fuchs , W.H.J. : On the distribution of values of sums of random variables . Mem. Am. Math. Soc. No . 6 , 12 pp ( 1951 )
8. Erdélyi , A. , Magnus , W. , Oberhettinger , F. , Tricomi , F.G. : Tables of Integral Transforms. Vol. I. Based , in Part, on Notes Left by Harry Bateman. McGraw-Hill , New York ( 1954 )
9. Feller , W.: An Introduction to Probability Theory and Its Applications , vol. II, 2nd edn. Wiley, New York ( 1971 )
10. Fayolle , G. , Malyshev , V.A. , Menshikov , M.V. : Topics in the Constructive Theory of Countable Markov Chains . Cambridge University Press , Cambridge ( 1995 )
11. Franke , B. : The scaling limit behaviour of periodic stable-like processes . Bernoulli 12 , 551 - 570 ( 2006 )
12. Graham , R.L. , Knuth , D.E. , Patashnik , O. : Concrete Mathematics. Addison-Wesley Publishing Company , Reading, MA ( 1994 )
13. Guillotin-Plantard, N., Le Ny, A.: Transient random walks on 2D-oriented lattices. Theory Probab. Appl. 52, 699–711 (2008). Translated from Teor. Veroyatn. Primen. 52, 815–826 (2007)
14. Hryniv , O. , MacPhee , I.M. , Menshikov , M.V. , Wade , A.R .: Non-homogeneous random walks with non-integrable increments and heavy-tailed random walks on strips . Electron. J. Probab . 17 , 1 - 28 ( 2012 ). doi:10.1214/EJP.v17- 2216
15. Kemperman , J.H.B.: The oscillating random walk . Stoch. Process. Appl . 2 , 1 - 29 ( 1974 )
16. Menshikov , M. , Popov , S. , Wade , A. : Non-homogeneous Random Walks . Cambridge University Press , Cambridge ( 2016 )
17. Rogozin , B.A. , Foss , S.G. : The recurrence of an oscillating random walk , Theor. Probab. Appl . 23 , 155 - 162 ( 1978 ). Translated from Teor . Veroyatn. Primen. 23 , 161 - 169 ( 1978 )
18. Sandrić, N.: Recurrence and Transience Property of Some Markov Chains. Ph.D. thesis, University of Zagreb (2012)
19. Sandrić, N.: Recurrence and transience property for a class of Markov chains. Bernoulli 19, 2167–2199 (2013)
20. Sandrić, N.: Long-time behavior of stable-like processes. Stoch. Process. Appl. 123, 1276–1300 (2013)
21. Sandrić, N.: Recurrence and transience criteria for two cases of stable-like Markov chains. J. Theor. Probab. 27, 754–788 (2014)
22. Schilling , R.L. , Wang , J. : Some theorems on Feller processes: transience, local times and ultracontractivity . Trans. Am. Math. Soc. 365 , 3255 - 3286 ( 2013 )
23. Shepp , L.A.: Symmetric random walk . Trans. Am. Math. Soc. 104 , 144 - 153 ( 1962 )
24. Wang , J. : Criteria for ergodicity of Lévy type operators in dimension one . Stoch. Process. Appl . 118 , 1909 - 1928 ( 2008 )