Functional Convergence of Linear Processes with Heavy-Tailed Innovations
Raluca Balan · Adam Jakubowski · Sana Louhichi

R. Balan: Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON K1N 6N5, Canada
S. Louhichi: Laboratoire Jean Kuntzmann, Institut de mathématiques appliquées de Grenoble, 51 rue des Mathématiques, 38041 Grenoble Cedex 9, France
We study convergence in law of partial sums of linear processes with heavy-tailed innovations. In the case of summable coefficients, necessary and sufficient conditions for the finite dimensional convergence to an α-stable Lévy Motion are given. The conditions lead to new, tractable sufficient conditions in the case α ≤ 1. In the functional setting, we complement the existing results on M1-convergence, obtained for linear processes with nonnegative coefficients by Avram and Taqqu (Ann Probab 20:483–503, 1992) and improved by Louhichi and Rio (Electron J Probab 16(89), 2011), by proving that in the general setting partial sums of linear processes are convergent on the Skorokhod space equipped with the S topology, introduced by Jakubowski (Electron J Probab 2(4), 1997).

Mathematics Subject Classification
60F17 60G52
1 Introduction and Announcement of Results
Let {Y_j}_{j∈Z} be a sequence of independent and identically distributed random variables.
By a linear process built on the innovations {Y_j}, we mean a stochastic process

X_i = ∑_{j∈Z} c_{i−j} Y_j,  i ∈ Z,   (1)

where the constants {c_j}_{j∈Z} are such that the above series is P-a.s. convergent. Clearly,
in nontrivial cases, such a process is dependent and stationary, and due to the
simple linear structure, many of its distributional characteristics can be easily computed
(provided they exist). This refers not only to the expectation or the covariances, but
also to more involved quantities, like constants for regularly varying tails (see e.g.,
[21] for discussion) or mixing coefficients (see e.g., [10] for discussion).
There exists a huge literature devoted to applications of linear processes in statistical
analysis and modeling of time series. We refer to the popular textbook [6] as an
excellent introduction to the topic.
Here, we would like to stress only two particular features of linear processes.
First, linear processes provide a natural illustration for phenomena of local (or
weak) dependence and long-range dependence. The most striking results go back to
Davydov [9], who obtained a rescaled fractional Brownian motion as a functional
weak limit for suitably normalized partial sums of the {X_i}'s.
Another important property of linear processes is the propagation of big values.
Suppose that some random variable Y_{j_0} takes a big value; then this big value is
propagated along the sequence X_i (everywhere Y_{j_0} appears with a big coefficient
c_{i−j_0}). Thus, linear processes form the simplest model for phenomena of clustering
of big values, which is important in models of insurance (see e.g., [21]).
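This propagation mechanism is easy to see numerically. The following sketch (illustrative only; the coefficients and the spike value are hypothetical, not taken from the paper) builds a finite-order moving average and shows a single large innovation reappearing in every X_i that loads it:

```python
import numpy as np

# Hypothetical coefficients c_0, c_1, c_2 of a finite moving average.
c = np.array([1.0, 0.8, 0.6])

y = np.zeros(10)
y[4] = 100.0                                   # one big innovation at j0 = 4

# X_i = sum_j c_j Y_{i-j}: a discrete convolution of innovations with coefficients.
x = np.convolve(y, c, mode="full")[: len(y)]

big = np.flatnonzero(np.abs(x) > 10.0)
# The big value Y_4 shows up in X_4, X_5, X_6 -- a cluster of big values.
assert list(big) == [4, 5, 6]
```

The cluster length equals the number of nonzero coefficients, which is exactly the clustering-of-extremes effect described above.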
In the present paper, we shall deal with heavy-tailed innovations. More precisely,
we shall assume that the law of Y_i belongs to the domain of strict attraction of a
non-degenerate strictly α-stable law μ, i.e.,

Z_n(1) = (1/a_n) ∑_{i=1}^{n} Y_i →D Z_α,   (2)

where Z_α ∼ μ. Let us observe that by the Skorokhod theorem [25], we also have

Z_n(t) = (1/a_n) ∑_{i=1}^{[nt]} Y_i →D Z_α(t),   (3)

where {Z_α(t)} is the α-stable Lévy process with Z_α(1) ∼ Z_α, and the convergence holds
on the Skorokhod space D([0, 1]) equipped with the Skorokhod J1 topology. Recall
that if the variance of Z_α is infinite, then (2) implies the existence of α ∈ (0, 2)
such that

P(|Y_j| > x) = x^{−α} h(x),  x > 0,   (4)

for some function h(x) slowly varying at infinity, and

lim_{x→∞} P(Y_j > x)/P(|Y_j| > x) = p  and  lim_{x→∞} P(Y_j < −x)/P(|Y_j| > x) = q,  p + q = 1.   (5)
The norming constants a_n in (3) must satisfy

n P(|Y_j| > a_n) → C ∈ (0, +∞),   (6)

hence are necessarily of the form a_n = n^{1/α} g(n^{1/α}), where the slowly varying function
g(x) is the de Bruijn conjugate of (C/h(x))^{1/α} (see [5]). Moreover, if α > 1, then
E Y_j = 0, and if α = 1, then p = q in (5).
Conversely, conditions (4), (5) and

E[Y_j] = 0, if α > 1,   (7)
{Y_j} are symmetric, if α = 1,   (8)

imply (3).
If a_n is chosen to satisfy (6) with C = 1, then μ is given by the characteristic
function

E e^{iθZ_α} = exp( ∫_{R^1} (e^{iθx} − 1) f_{α,p,q}(x) dx ),  if 0 < α < 1,
E e^{iθZ_1} = exp( ∫_{R^1} (e^{iθx} − 1) f_{1,1/2,1/2}(x) dx ),  if α = 1,
E e^{iθZ_α} = exp( ∫_{R^1} (e^{iθx} − 1 − iθx) f_{α,p,q}(x) dx ),  if 1 < α < 2,

where

f_{α,p,q}(x) = ( p I(x > 0) + q I(x < 0) ) |x|^{−(1+α)}.   (9)
We refer to [12] or any of the contemporary monographs on limit theorems for this basic information.
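As a numerical illustration (not part of the paper's argument), assume pure Pareto tails, i.e., h ≡ 1 in (4); then (6) with C = 1 gives a_n = n^{1/α}, and the normalized partial-sum path Z_n can be simulated by inverse-CDF sampling. The tail index and sample size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.8                        # tail index; infinite mean when alpha <= 1
n = 10_000

# Inverse-CDF sampling: if U ~ Uniform(0,1], then U**(-1/alpha) satisfies
# P(X > x) = x**(-alpha) for x >= 1; random signs make the law symmetric.
u = 1.0 - rng.uniform(size=n)      # in (0, 1], avoids division by zero
signs = rng.choice([-1.0, 1.0], size=n)
y = signs * u ** (-1.0 / alpha)

a_n = n ** (1.0 / alpha)           # norming constants when h(x) = 1 and C = 1
z_n = np.cumsum(y) / a_n           # the path t -> Z_n(t) sampled at t = i/n
```

Plotting `z_n` for several seeds shows the characteristic large jumps of an α-stable Lévy motion.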
Suppose that the tails of Y_j are regularly varying, i.e., (4) holds for some α ∈ (0, 2),
and the (usual) regularity conditions (7) and (8) are satisfied. It is an observation
due to Astrauskas [1] (in fact, a direct consequence of the Kolmogorov Three Series
Theorem; see Proposition 5.4 below) that the series (1) defining the linear process
X_i is P-a.s. convergent if, and only if,

∑_{j∈Z} |c_j|^α h(|c_j|^{−1}) < +∞.   (10)
Given that the above series is convergent, we can define

S_n(t) = (1/b_n) ∑_{i=1}^{[nt]} X_i,   (11)

and it is natural to ask for convergence of the S_n's, when b_n is suitably chosen. Astrauskas
[1] and Kasahara and Maejima [16] showed that fractional stable Lévy Motions can
appear in the limit of the S_n(t)'s, and that some of the limiting processes can have regular
or even continuous trajectories, while the trajectories of others can be unbounded on every
interval.
In the present paper, we consider the important case of summable coefficients:

∑_{j∈Z} |c_j| < +∞.   (12)

In Section 2, we give necessary and sufficient conditions for the convergence of finite
dimensional distributions

S_n(t) →f.d.d. A·Z_α(t),   (13)

where the constants a_n are the same as in (2), A = ∑_{j∈Z} c_j, and {Z_α(t)} is an α-stable
Lévy Motion such that Z_α(1) ∼ Z_α. The obtained conditions lead to tractable sufficient
conditions, which in the case α ≤ 1 are new and essentially weaker than the condition

∑_{j∈Z} |c_j|^δ < +∞, for some 0 < δ < α, δ ≤ 1,

considered in [1], [8] and [16]. See Sect. 4 for details. Notice that in the case A = 0,
another normalization b_n is possible, leading to a non-degenerate limit. We refer to [22] for
a comprehensive analysis of the dependence structure of infinite variance processes.
Section 3 contains a strengthening of (13) to functional convergence in a suitable
topology on the Skorokhod space D([0, 1]). Since the paper [2], it is known that in
nontrivial cases (when at least two coefficients are nonzero) the convergence in the
Skorokhod J1 topology cannot hold. In fact, none of Skorokhod's J1, J2, M1 and M2
topologies are applicable. This can be seen by analysis of the following simple example
([2], p. 488). Set c_0 = 1, c_1 = −1 and c_j = 0 if j ≠ 0, 1. Then X_i = Y_i − Y_{i−1} and
(13) holds with A = ∑_j c_j = 0, i.e.,

S_n(t) →f.d.d. 0.

But we see that

sup_{t∈[0,1]} |S_n(t)| = (1/a_n) max_{1≤k≤n} |Y_k − Y_0|

converges in law to a Fréchet distribution. This means that the supremum is not a
continuous (or almost surely continuous) functional, which excludes convergence in
Skorokhod's topologies in the general case.
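The telescoping in this example can be checked directly in a few lines (a sketch with hypothetical Pareto innovations; the values of α and n are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
n = 5_000

# Nonnegative Pareto innovations: P(Y > x) = x**(-alpha), x >= 1.
y = (1.0 - rng.uniform(size=n + 1)) ** (-1.0 / alpha)

x = y[1:] - y[:-1]                 # X_i = Y_i - Y_{i-1}: c_0 = 1, c_1 = -1
partial = np.cumsum(x)

# Partial sums telescope: sum_{i=1}^k X_i = Y_k - Y_0, so the supremum of the
# path is driven by the single largest innovation and does not vanish.
assert np.allclose(partial, y[1:] - y[0])

a_n = n ** (1.0 / alpha)
sup_sn = np.abs(partial).max() / a_n   # stays stochastically bounded away from 0
```

Repeating the experiment over many seeds, the empirical law of `sup_sn` stabilizes, while every fixed-time marginal of the path tends to 0.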
For linear processes with nonnegative coefficients c_j, partial results were obtained
by Avram and Taqqu [2], where convergence in the M1 topology was considered.
Recently, these results have been improved and developed in various directions in
[20] and [3]. We use the linear structure of the processes and the established convergence
in the M1 topology to show that in the general case, the finite dimensional convergence
(13) can be strengthened to convergence in the so-called S topology, introduced in [13].
This is a sequential and non-metric, but fully operational topology, for which addition
is sequentially continuous.
Section 5 is devoted to some consequences of the results obtained in the previous sections.
We provide examples of functionals continuous in the S topology. In particular, we
show that for every ε > 0

∫_0^1 ( | (1/a_n) ∑_{i=1}^{[ns]} ( ∑_{j∈Z} c_{i−j} Y_j − A Y_i ) | ∧ ε ) ds →P 0.

We also discuss possible extensions of the theory to linear sequences built on dependent
summands.
The Appendix contains technical results of independent interest.
Conventions and notations. Throughout the paper, in order to avoid permanent
repetition of standard assumptions and conditions, we adopt the following conventions.
We will say that the {Y_j}'s satisfy the usual conditions if they are independent, identically
distributed and (4), (5), (7) and (8) hold. When we write X_i, it is always the linear
process given by (1) and is well-defined, i.e., satisfies (10). Similarly, the norming
constants {a_n} are defined by (6), and the normalized partial sums S_n(t) and Z_n(t) are
given by (11) with b_n = a_n and (3), respectively, where Z_α is the limit in (2) and Z_α(t)
is the α-stable Lévy Motion such that Z_α(1) ∼ Z_α.
2 Convergence of Finite Dimensional Distributions for Summable Coefficients
We begin with stating the main result of this section, followed by its important consequence.
Theorem 2.1 Let {Y_j} be an i.i.d. sequence satisfying the usual conditions. Suppose
that

∑_{j∈Z} |c_j| < +∞.

Then

S_n(t) →f.d.d. A·Z_α(t), where A = ∑_{j∈Z} c_j,

if, and only if,

∑_{j=−∞}^{0} P(|d_{n,j} Y_j| > a_n) → 0 and ∑_{j=n+1}^{∞} P(|d_{n,j} Y_j| > a_n) → 0, as n → ∞,   (14)

where

d_{n,j} = ∑_{k=1−j}^{n−j} c_k,  n ∈ N, j ∈ Z.
Corollary 2.2 Under the assumptions of Theorem 2.1, define

U_i = ∑_{j∈Z} |c_{i−j}| Y_j,  X_i^+ = ∑_{j∈Z} c_{i−j}^+ Y_j,  X_i^− = ∑_{j∈Z} c_{i−j}^− Y_j,

and let T_n(t), T_n^+(t) and T_n^−(t) denote the corresponding normalized partial sum
processes. Then

T_n(t) →f.d.d. Ã·Z_α(t), where Ã = ∑_{j∈Z} |c_j|,
T_n^+(t) →f.d.d. A^+·Z_α(t), where A^+ = ∑_{j∈Z} c_j^+,
T_n^−(t) →f.d.d. A^−·Z_α(t), where A^− = ∑_{j∈Z} c_j^−.
Proof of Corollary 2.2 In view of Theorem 2.1, it is enough to notice that conditions (14)
for the coefficients {|c_j|}, {c_j^+} and {c_j^−} follow from conditions (14) for {c_j}.
Proof of Theorem 2.1 Using Fubini's theorem, we obtain that

∑_{i=1}^{[nt]} X_i = ∑_{i=1}^{[nt]} ∑_{j∈Z} c_{i−j} Y_j = ∑_{j∈Z} d_{[nt],j} Y_j.

Further, we may decompose, according to j ≤ 0, 1 ≤ j ≤ [nt] and j > [nt],

S_n(t) = S_n^−(t) + S_n^0(t) + S_n^+(t).

Let us consider the partial sum process:
First, we will show:

Lemma 2.3 Under the assumptions of Theorem 2.1, we have for each t > 0

S_n^0(t) − A·Z_n(t) →P 0.
Proof of Lemma 2.3 Define

V_n^0 = (1/a_n) ∑_{j=1}^{[nt]} (A − d_{[nt],j}) Y_j = A·Z_n(t) − S_n^0(t).
We need a simple lemma.
Proof of Lemma 2.4 For each m ∈ N there exists N_m > max{N_{m−1}, m²} such that
for n ≥ N_m
j=1
j=1
For the remaining part we have
j=[nt]jn+1
P A d[nt],j Yj > an +
P A d[nt],j Yj > an 0.
[nt]j
k=1j
[nt]jn
j=jn+1
= [nt] a_n^{−α} h(a_n) · h(a_n/ε)/h(a_n) ~ [nt] n^{−1} → t, as n → ∞.
Lemma 2.3 follows.
In the next step, we shall prove
Lemma 2.5 Under the assumptions of Theorem 2.1, the following items (i)–(iii) are
equivalent.
(iii) For every t [0, 1]
Proof of Lemma 2.5 By Lemma 2.3, we know that S_n^0(1) − A·Z_n(1) →P 0 and
S_n^0(1) →D A·Z_α(1). Since S_n(1) = S_n^−(1) + S_n^0(1) + S_n^+(1), (26) implies (25), and
the latter implies (24).
It follows that
= E ei Sn0(t) E ei(Sn(t)+Sn+(t))
E ei AZ(t) , R1.
E ei AZ(t) , R1,
j=
j=n+1
0, as n , (27)
i.e., relation (14) holds. Therefore, the Proof of Theorem 2.1 will be complete if we can
show that the convergence of one-dimensional distributions implies the finite dimensional
convergence. But this is obvious in view of (26):

(S_n(t_1), S_n(t_2), …, S_n(t_m)) − A·(Z_n(t_1), Z_n(t_2), …, Z_n(t_m)) →P 0,

and the finite dimensional distributions of the stochastic processes A·Z_n(t) converge
to those of A·Z_α(t).
Remark 2.6 Observe that for one-sided moving averages, the two conditions in (14)
reduce to one (the expression in the other equals 0). This is the reason we use in
Theorem 2.1 two conditions rather than the single statement (27).
Remark 2.7 In the Proof of Proposition 5.5, we used the Three Series Theorem with
the truncation level 1. It is well-known that any r ∈ (0, +∞) can be chosen as
the truncation level. Hence, conditions (14) admit an equivalent reformulation in the
form

∑_{j=−∞}^{0} P(|d_{n,j} Y_j| > r a_n) → 0 and ∑_{j=n+1}^{∞} P(|d_{n,j} Y_j| > r a_n) → 0, as n → ∞.
3 Functional Convergence
3.1 Convergence in the M1 Topology
As outlined in the Introduction (see also Sect. 5.2 below), the convergence of finite
dimensional distributions of linear processes built on heavy-tailed innovations cannot,
in general, be strengthened to functional convergence in any of Skorokhod's topologies
J1, J2, M1, M2.
The general linear process {X_i} can, however, be represented as a difference of
linear processes with nonnegative coefficients. Let us recall the notation introduced in
Corollary 2.2:

X_i^+ = ∑_{j∈Z} c_{i−j}^+ Y_j,  X_i^− = ∑_{j∈Z} c_{i−j}^− Y_j,

so that

S_n(t) = T_n^+(t) − T_n^−(t).   (28)

The point is that both T_n^+(t) and T_n^−(t) are partial sums of associated sequences
in the sense of [11] (see e.g., [7] for the contemporary theory) and thus exhibit much
more regularity.
Theorem 1 of Louhichi and Rio [20] can be specialized to the case of linear processes
considered in our paper in the following way.

Proposition 3.1 Let the innovation sequence {Y_j} satisfy the usual conditions. Let
c_j ≥ 0, j ∈ Z, and

∑_{j∈Z} c_j < +∞.

If the linear process {X_i} is well-defined and

S_n(t) →f.d.d. A·Z_α(t),

then also functionally

S_n(t) →D A·Z_α(t),

on the Skorokhod space D([0, 1]) equipped with the M1 topology.
Remark 3.2 The first result of this type was obtained by Avram and Taqqu [2]. They
required, however, more regularity of the coefficients (e.g., monotonicity of {c_j}_{j≥1} and
{c_j}_{j≤−1}).

Let us turn to linear processes with coefficients of arbitrary sign. Given decomposition
(28) and Proposition 3.1, the strategy is now clear: choose any linear topology τ on
D([0, 1]) which is coarser than M1; then

S_n(t) →f.d.d. A·Z_α(t)

should imply

S_n(t) →D A·Z_α(t),

on the Skorokhod space D([0, 1]) equipped with the topology τ. Since convergence
of càdlàg functions in the M1 topology is bounded and implies pointwise convergence
outside of a countable set, there are plenty of such topologies. For instance, any space
of the form L^p([0, 1], μ), where p ∈ [0, ∞) and μ is an atomless finite measure
on [0, 1], is suitable. The point is to choose the finest among the linear topologies with the
required properties, for we want to have the maximal family of continuous functionals
on D([0, 1]).
Although we are not able to identify such an ideal topology, we believe that this
distinguished position belongs to the S topology, introduced in [13]. This is a
non-metric sequential topology, with sequentially continuous addition, which is stronger
than any of the L^p(μ) spaces mentioned above and is functional in the sense that it has the
following classic property (see Theorem 3.5 of [13]).
Proposition 3.3 Let Q ⊂ [0, 1] be dense, 1 ∈ Q. Suppose that for each finite subset
Q_0 = {q_1 < q_2 < ⋯ < q_m} ⊂ Q we have, as n → ∞,

(X_n(q_1), X_n(q_2), …, X_n(q_m)) →D (X_0(q_1), X_0(q_2), …, X_0(q_m)),

where X_0 is a stochastic process with trajectories in D([0, 1]). If {X_n} is uniformly
S-tight, then

X_n →D X_0,

on the Skorokhod space D([0, 1]) equipped with the S topology.
For readers familiar with the limit theory for stochastic processes, the above property
may seem obvious. But it is trivial only for processes with continuous trajectories. It
is not trivial even in the case of the Skorokhod J1 topology, since the point evaluations

π_t : D([0, 1]) → R^1,  π_t(x) = x(t),

can be J1-discontinuous at some x ∈ D([0, 1]) (see [26] for the result corresponding
to Proposition 3.3). In the S topology, the point evaluations are nowhere continuous
(see [13], p. 11). Nevertheless, Proposition 3.3 holds for the S topology, while it does
not hold for the linear metric spaces L^p(μ) considered above. It follows that the S
topology is suitable for the needs of the limit theory for stochastic processes. It even admits
such efficient tools as the a.s. Skorokhod representation for subsequences [14].
On the other hand, since D([0, 1]) equipped with S is non-metric and sequential, many
apparently standard reasonings require special tools and careful analysis. This will
be seen below.
Before we define the S topology, we need some notation. Let V([0, 1]) ⊂ D([0, 1])
be the space of (regularized) functions of finite variation on [0, 1], equipped with the
norm of total variation ‖v‖ = |v|(1), where

|v|(t) = sup ( |v(0)| + ∑_{i=1}^{m} |v(t_i) − v(t_{i−1})| ),

and the supremum is taken over all finite partitions 0 = t_0 < t_1 < ⋯ < t_m = t.
Since V([0, 1]) can be identified with the dual of (C([0, 1]), ‖·‖_∞), we have on it the
weak-∗ topology. We shall write v_n ⇒ v_0 if for every f ∈ C([0, 1])

∫_{[0,1]} f(t) dv_n(t) → ∫_{[0,1]} f(t) dv_0(t).
Definition 3.4 (S-convergence and the S topology) We shall say that x_n S-converges
to x_0 (in short: x_n →S x_0) if for every ε > 0 one can find elements v_{n,ε} ∈ V([0, 1]),
n = 0, 1, 2, …, which are uniformly close to the x_n's and weakly-∗ convergent:

‖x_n − v_{n,ε}‖_∞ ≤ ε,  n = 0, 1, 2, …,  and  v_{n,ε} ⇒ v_{0,ε}, as n → ∞.

The S topology is the sequential topology determined by the S-convergence.
Remark 3.5 This definition was given in [13], and we refer to that paper for a detailed
derivation of the basic properties of S-convergence and the construction of the S topology,
as well as for instructions on how to operate effectively with S. Here, we shall stress
only that the S topology emerges naturally in the context of the following criteria of
compactness, which will be used in the sequel.
Proposition 3.6 (2.7 in [13]) For ε > 0, let N^ε(x) be the number of ε-oscillations of
the function x ∈ D([0, 1]), i.e., the largest integer N ≥ 1 for which there exist some
points 0 ≤ t_1 < t_2 ≤ t_3 < t_4 ≤ ⋯ ≤ t_{2N−1} < t_{2N} ≤ 1 with

|x(t_{2k}) − x(t_{2k−1})| > ε for all k = 1, …, N.

Let K ⊂ D([0, 1]). Assume that

sup_{x∈K} sup_{t∈[0,1]} |x(t)| < +∞,   (32)

and, for every ε > 0,

sup_{x∈K} N^ε(x) < +∞.   (33)

Then K is relatively S-compact.
Corollary 3.7 (2.14 in [13]) Let Q ⊂ [0, 1], 1 ∈ Q, be dense. Suppose that {x_n} ⊂
D([0, 1]) is relatively S-compact and, as n → ∞,

x_n(q) → x_0(q),  q ∈ Q.

Then x_n → x_0 in S.
Remark 3.8 The S topology is sequential, i.e., it is generated by the convergence
→S. By the Kantorovich–Kisyński recipe [17], x_n → x_0 in the S topology if, and only
if, in each subsequence {x_{n_k}} one can find a further subsequence {x_{n_{k_l}}} →S x_0. This
is the same story as with a.s. convergence and convergence in probability of random
variables.
According to our strategy, we are going to prove that Skorokhod's M1 topology is
stronger than the S topology or, equivalently, that x_n →M1 x_0 implies x_n →S x_0.
We refer the reader to Skorokhod's original article [24] for the definition of the M1
topology, as well as to Chapter 12 of [28] for a comprehensive account of the properties
of this topology.
The M1 convergence can be described using a suitable modulus of continuity. We
define, for x ∈ D([0, 1]) and δ > 0,

w_{M1}(x, δ) = sup_{0 ≤ t_1 < t_2 < t_3 ≤ 1, t_3 − t_1 ≤ δ} H( x(t_1), x(t_2), x(t_3) ),

where H(a, b, c) is the distance between b and the interval with endpoints a and c:

H(a, b, c) = max( min(a, c) − b, b − max(a, c), 0 ).
Proposition 3.9 (2.4.1 of [24]) Let (x_n)_{n≥1} and x_0 be arbitrary elements in D([0, 1]).
Then x_n →M1 x_0 if, and only if,

x_n(t) → x_0(t),  t ∈ Q,

for some dense set Q ⊂ [0, 1] containing 0 and 1, and lim_{δ→0} lim sup_{n→∞} w_{M1}(x_n, δ) = 0.
In particular, if x_n →M1 x_0, then sup_n w_{M1}(x_n, δ) → 0 as δ → 0.
Lemma 3.10 For any a, b, c, d ∈ R^1,

|a − b| ≤ |c − d| + H(c, a, d) + H(c, b, d).

Proof If c ≤ a ≤ b ≤ d, then b − a ≤ d − c = |d − c| + H(c, a, d) + H(c, b, d).
If a ≤ c ≤ b ≤ d, then b − a = (b − c) + (c − a) ≤ (d − c) + H(c, a, d) = |d − c| +
H(c, a, d) + H(c, b, d). If a ≤ c ≤ d ≤ b, then b − a = (b − d) + (d − c) + (c − a) =
H(c, b, d) + |d − c| + H(c, a, d). If a ≤ b ≤ c ≤ d, then b − a ≤ c − a = H(c, a, d) ≤
|d − c| + H(c, a, d) + H(c, b, d). The other cases can be reduced to those considered above.
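The inequality of Lemma 3.10 is elementary and can be sanity-checked numerically; the sketch below implements H from its definition as the distance between the middle argument and the interval spanned by the outer two:

```python
import numpy as np

def H(a, b, c):
    """Distance between b and the interval with endpoints a and c."""
    lo, hi = min(a, c), max(a, c)
    return max(lo - b, b - hi, 0.0)

# Check |a - b| <= |c - d| + H(c, a, d) + H(c, b, d) on random quadruples.
rng = np.random.default_rng(2)
for a, b, c, d in rng.uniform(-5.0, 5.0, size=(1000, 4)):
    assert abs(a - b) <= abs(c - d) + H(c, a, d) + H(c, b, d) + 1e-12
```

The small additive tolerance only guards against floating-point rounding; the inequality itself is exact.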
H (x (u), x (v), x (w)).
x (t2k ) x (t2k1) > for all k = 1, . . . , N .
To see this, suppose that x(t_3) < x(t_2) − ε. Then the distance between x(t_2) and
the interval with endpoints x(t_1) and x(t_3) is greater than ε, which is a contradiction.
Hence x(t_3) ≥ x(t_2) − ε. On the other hand, if we assume that x(t_4) − x(t_3) < ε,
we obtain that
x (t2k ) x (t2k1) > , for all k = 1, . . . , N
Taking the sum of these inequalities, we conclude that:
x (t2k+1) x (t2k ) > for all k = 1, . . . , N 1.
On the other hand, by Corollary 3.11, we have:
Combining (37) and (38), we obtain that
x (t2N ) x (t1) x (t ) x (s) + 2.
N
x (t2N ) x (t1) N ( ) + .
This again allows us to use Corollary 3.11 and gives the desired bound for N
The following result was stated without proof in [13]. A short proof can be
given using Skorokhod's criterion 2.2.11 (page 267 of [24]) for the M1-convergence,
expressed in terms of the number of upcrossings. This proof has a clear disadvantage:
it refers to an equivalent definition of the M1-convergence, but the equivalence of
both definitions was not proved in Skorokhod's paper. In the present article, we give
a complete proof.
Theorem 3.13 The S topology is weaker than the M1 topology (and hence weaker
than the J1 topology). Consequently, a set A ⊂ D([0, 1]) which is relatively
M1-compact is also relatively S-compact.
Proof Let x_n →M1 x_0. By Proposition 3.9,

x_n(t) → x_0(t),

on the dense set of points of continuity of x_0 and for t = 1. Suppose we know that
K = {x_n} satisfies conditions (32) and (33). Then, by Proposition 3.6, {x_n} is relatively
S-compact, and by Corollary 3.7, x_n → x_0 in S. Thus, it remains to check condition (32)
and

sup_n N^ε(x_n) < +∞,  ε > 0.   (40)
hence sup_n sup_{t∈[0,t_0]} |x_n(t)| < +∞. We also know that x_n(t_0) → x_0(t_0) and x_n(1) →
x_0(1). Choose n ∈ N and u ∈ (t_0, 1). By the very definition of the modulus H,
It follows that also
t_{j+1} − t_j < δ,  j = 0, 1, …, M − 1,

|x_n(t_j) − x_0(t_j)| < ε,  j = 0, 1, …, M.
The Proof of (40) will be complete once we estimate the number N by a constant
independent of n.
The ε-oscillations of x_n determined by (42) can be divided into two (disjoint)
groups. The first group (Group 1) contains the oscillations for which the corresponding
interval [s_{2k−1}, s_{2k}) contains at least one point t_j. Since the number of points t_j is M,
the number of oscillations in Group 1 is at most M.
In the second group (Group 2), we have those oscillations for which the corresponding
interval [s_{2k−1}, s_{2k}) contains no point t_j. Since there are M intervals of the form
[t_j, t_{j+1}], the number of oscillations in Group 2 is at most M.
Summing (44) and (46), we obtain a bound on N which does not depend on n.
Theorem 3.13 follows.
For the sake of completeness, we also provide a typical example of a sequence
(x_n)_{n≥1} in D([0, 1]) which is S-convergent, but does not converge in the M1 topology.
Example 3.14 Let x_0 = 0 and

x_n(t) = 1_{[1/2−1/n, 1]}(t) − 1_{[1/2+1/n, 1]}(t) = 1 if 1/2 − 1/n ≤ t < 1/2 + 1/n, and 0 otherwise.

Taking v_{n,ε} = x_n ∈ V([0, 1]), we see that for every f ∈ C([0, 1])

∫ f(t) dv_n(t) = f(1/2 − 1/n) − f(1/2 + 1/n) → 0,

so that x_n →S 0. The fact that (x_n)_{n≥1} cannot converge in M1 follows by Proposition 3.9,
since if t_1 < 1/2 − 1/n < t_2 < 1/2 + 1/n < t_3, then H(x_n(t_1), x_n(t_2), x_n(t_3)) = 1.
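Example 3.14 can be reproduced numerically: the M1 modulus of the spike stays equal to 1 for every n, while the spike's integral (its "mass") vanishes, which is the self-canceling behavior the S topology tolerates. The grid resolution below is an arbitrary choice:

```python
import numpy as np

def x_n(t, n):
    """The spike of Example 3.14: 1 on [1/2 - 1/n, 1/2 + 1/n), 0 elsewhere."""
    return 1.0 if 0.5 - 1.0 / n <= t < 0.5 + 1.0 / n else 0.0

def H(a, b, c):
    """Distance between b and the interval with endpoints a and c."""
    lo, hi = min(a, c), max(a, c)
    return max(lo - b, b - hi, 0.0)

n = 100
t1, t2, t3 = 0.4, 0.5, 0.6               # t1 < 1/2 - 1/n < t2 < 1/2 + 1/n < t3
assert H(x_n(t1, n), x_n(t2, n), x_n(t3, n)) == 1.0   # M1 modulus never vanishes

grid = np.linspace(0.0, 1.0, 100_001)
mass = np.mean([x_n(t, n) for t in grid])  # Riemann average ~ integral = 2/n
assert abs(mass - 2.0 / n) < 1e-3
```

Increasing n makes the mass arbitrarily small while the modulus stays fixed at 1, separating the two modes of convergence.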
Now we are ready to specify results on functional convergence of stochastic processes
in the S topology which are suitable for the needs of linear processes. They follow directly
from Propositions 3.6 and 3.3.

Proposition 3.15 (3.1 in [13]) A family {X_θ}_{θ∈Θ} of stochastic processes with
trajectories in D([0, 1]) is uniformly S-tight if, and only if, the families of random variables
{‖X_θ‖_∞}_{θ∈Θ} and {N^ε(X_θ)}_{θ∈Θ}, ε > 0, are uniformly tight.
Proposition 3.16 Let {X_n}_{n≥0} and {Y_n}_{n≥0} be two sequences of stochastic processes
with trajectories in D([0, 1]) such that {X_n} and {Y_n} are uniformly S-tight and, as n → ∞,

(X_n(q_1) + Y_n(q_1), X_n(q_2) + Y_n(q_2), …, X_n(q_k) + Y_n(q_k)) →D
(X_0(q_1) + Y_0(q_1), X_0(q_2) + Y_0(q_2), …, X_0(q_k) + Y_0(q_k)),

for all finite subsets {q_1 < q_2 < ⋯ < q_k} of a dense set Q ⊂ [0, 1] with 1 ∈ Q. Then

X_n + Y_n →D X_0 + Y_0,

on the Skorokhod space D([0, 1]) equipped with the S topology.
Proof of Proposition 3.16 According to Proposition 3.3, it is enough to establish the
uniform S-tightness of {X_n + Y_n}. This follows immediately from Proposition 3.15 and
from the inequalities ‖x + y‖_∞ ≤ ‖x‖_∞ + ‖y‖_∞ and

N^ε(x + y) ≤ N^{ε/2}(x) + N^{ε/2}(y),  ε > 0.
Remark 3.17 In linear topological spaces, the algebraic sum K_1 + K_2 = {x_1 + x_2 ; x_1 ∈
K_1, x_2 ∈ K_2} of compact sets K_1 and K_2 is compact. This follows directly from the
continuity of the operation of addition and trivializes the proof of uniform tightness of sums
of uniformly tight random elements. In D([0, 1]) equipped with S we are, however,
able to prove only that the addition is sequentially continuous, i.e., if x_n →S x_0 and
y_n →S y_0, then x_n + y_n →S x_0 + y_0. In general, this does not imply continuity (see
[13], p. 18, for a detailed discussion). Sequential continuity gives a weaker property:
the sum K_1 + K_2 of relatively S-compact sets K_1 and K_2 is relatively S-compact. For
the purposes of uniform tightness, we also need that the S-closure of K_1 + K_2 is again
relatively S-compact. This is guaranteed by the lower semicontinuity in S of ‖·‖_∞
and N^ε (see [13], Corollary 2.10).
3.4 The Main Result

Theorem 3.18 Let {Y_j} be an i.i.d. sequence satisfying the usual conditions and
∑_j |c_j| < +∞. Let S_n(t) be defined by (11) and T_n(t) by (16). Suppose that

T_n(t) →f.d.d. Ã·Z_α(t), where Ã = ∑_{j∈Z} |c_j|.

Then

S_n(t) →D A·Z_α(t), where A = ∑_{j∈Z} c_j,

on the Skorokhod space D([0, 1]) equipped with the S topology.

Proof By Corollary 2.2,

S_n(t) = T_n^+(t) − T_n^−(t) →f.d.d. A·Z_α(t),

and, by Proposition 3.1 and Theorem 3.13, both {T_n^+} and {T_n^−} are uniformly S-tight.
Now a direct application of Proposition 3.16 completes the proof of the theorem.
4 Discussion of Sufficient Conditions
Conditions (14) do not look tractable. In what follows, we shall provide three types of
checkable sufficient conditions. In both cases, the following slight simplification (47)
of (14) will be useful. As in Proof of Lemma 2.3, we can find a sequence jn ,
jn = o(n), such that
Hence, it is enough to check
j= jn+1
j=n+1
jn
j=n+ jn
0, as n .
0, as n .
0, as n .
0, as n .
The advantage of this form of the conditions consists in the fact that
Corollary 4.1 Under the assumptions of Theorem 2.1, if there exists 0 < δ < α,
δ ≤ 1, such that

∑_{j∈Z} |c_j|^δ < +∞,   (49)

then

S_n(t) →D(S) A·Z_α(t).
Proof We have to check (47). By simple manipulations, and taking into account that,
due to (6), K = sup_n n a_n^{−α} h(a_n) < +∞, we obtain
jn
j=
n j
= n
K n
j= k=1 j
j= k=1 j
ck nh(an) dn, j 1
an h(an)
Let

h(x) = c(x) exp( ∫_a^x (ε(u)/u) du ),

where lim_{x→∞} c(x) = c ∈ (0, ∞) and lim_{u→∞} ε(u) = 0, be the Karamata
representation of the slowly varying function h(x) (see e.g., Theorem 1.3.1 in [5]). Take
0 < γ < min{α − δ, c} and let L > a be such that for x > L

|ε(x)| ≤ γ and c − γ < c(x) < c + γ.

Then, for x ≥ y ≥ L,

h(x)/h(y) ≤ C_γ (x/y)^γ,

for some constant C_γ. It follows from this fact and (48) that it is sufficient to show that
j= k=1 j
In fact, more is true.
Lemma 4.2 If ∑_{k=1}^{∞} |b_k| < +∞, then

(1/n) ∑_{j=0}^{n−1} ∑_{k=1+j}^{∞} |b_k| → 0, as n → ∞.

Proof of Lemma 4.2 We have

(1/n) ∑_{j=0}^{n−1} ∑_{k=1+j}^{∞} |b_k| = (1/n) ∑_{k=1}^{∞} (k ∧ n) |b_k| = (1/n) ∑_{k=1}^{n} k |b_k| + ∑_{k=n+1}^{∞} |b_k|.

The first sum in the last line converges to 0 by Kronecker's lemma. The second is the
rest of a convergent series.
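The Kronecker-lemma step can be illustrated numerically (for a hypothetical summable sequence b_k = 1/k², chosen only for the example):

```python
# Kronecker's lemma in action: if sum_k b_k converges, then
# (1/n) * sum_{k=1}^{n} k * b_k -> 0 as n -> infinity.

def kronecker_average(b, n):
    return sum(k * b(k) for k in range(1, n + 1)) / n

b = lambda k: 1.0 / k ** 2        # summable series (sums to pi^2 / 6)

small_n = kronecker_average(b, 100)
large_n = kronecker_average(b, 10_000)
assert large_n < small_n          # the weighted averages decrease
assert large_n < 0.01             # ... and approach 0
```

Here the average equals H_n/n (H_n the harmonic number), which decays like (log n)/n, matching the lemma's conclusion.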
Returning to the Proof of Corollary 4.1, let us notice that the convergence

∑_{j=n+j_n}^{∞} P(|d_{n,j} Y_j| > a_n) → 0, as n → ∞,

can be checked the same way.

Corollary 4.3 If α > 1 and ∑_{j∈Z} |c_j| < +∞, then

S_n(t) →D(S) A·Z_α(t).
Remark 4.4 Corollaries 4.1 and 4.3 were proved independently by Astrauskas [1]
and by Davis and Resnick [8]. Our approach follows the direct manipulations of Astrauskas,
while Davis and Resnick used point process techniques.

Remark 4.5 For α ≤ 1, assumption (49) is unsatisfactory, for it excludes the case of
strictly α-stable random variables {Y_j} with ∑_j |c_j|^α < +∞, but ∑_j |c_j|^δ = +∞
for every δ < α. With our criterion given in Theorem 2.1, we can easily prove the
needed result.
Corollary 4.6 Suppose that α ≤ 1, ∑_{j∈Z} |c_j|^α < +∞, the usual conditions hold,
and h is such that

h(λx)/h(x) ≤ M,  λ ≥ 1, x ≥ x_0,   (50)

for some constants M, x_0. If the linear process {X_i} is well-defined, then

S_n(t) →D(S) A·Z_α(t).
Proof of Corollary 4.6 First notice that ∑_j |c_j| < +∞, so that A is well-defined.
Proceeding as in the Proof of Corollary 4.1, we obtain
jn
j=
jn
n j
j= k=1 j
jn
n j
j= k=1 j
where the convergence to 0 holds by Lemma 4.2.
Remark 4.7 As mentioned before, the above corollary covers the important case when
h(x) → C > 0, as x → ∞, i.e., when the law of Y_i is in the domain of strict (or
normal) attraction. Many other examples can be produced using Karamata's
representation of slowly varying functions. Assumption (50) is much in the spirit of Lemma
A.4 in [21]. Our final result goes in a different direction.

Remark 4.8 Notice that if α < 1, then ∑_j |c_j|^α h(|c_j|^{−1}) < +∞, with h slowly
varying, automatically implies ∑_j |c_j| < +∞.
Corollary 4.9 Suppose that the usual conditions hold,

∑_{j∈Z} |c_j|^α h(|c_j|^{−1}) < +∞,

and the coefficients c_j are regular in a very weak sense: there exists a constant 0 <
γ < ∞ such that

sup_{j≥0} … ≤ K_+ < +∞ and sup_{j≤0} … ≤ K_− < +∞

(with the convention that 0/0 ≡ 1). Then

S_n(t) →D(S) A·Z_α(t).

Remark 4.10 Notice that we always assume that the linear process is well-defined.
This may require more than is demanded in Corollary 4.9.
Proof of Corollary 4.9 As before, we have to check (47).
j=
jn dna,j h
n
n j
= n
K n
j= k=1 j
0, as n .
n j
< +.
Thus it is enough to prove
j= k=1 j
n j
k=1 j
n j
ck
n j
k=1 j
k=1 j
= n
ck
This is again more than needed. The proof of
j=n+ jn
dna,j h
n
0, as n .
goes the same way.
Example 4.11 If, for some ε > 0,

|c_j| = |j|^{−1/α} (log |j|)^{−(1+ε)/α},  |j| ≥ 3,

and {X_i} is well-defined, then under the usual conditions

S_n(t) →D(S) A·Z_α(t).
Remark 4.12 In our considerations, we search for conditions giving functional
convergence of {S_n(t)} with the same normalization as {Z_n(t)} (by {a_n}). It is possible
to provide examples of linear processes which converge in the sense of finite
dimensional distributions with a different normalization. Moreover, it is likely that also
in the heavy-tailed case one can obtain a complete description of the convergence of
linear processes, as is done by Peligrad and Sang [23] in the case of innovations
belonging to the domain of attraction of a normal distribution. We conjecture that,
whenever the limit is a stable Lévy motion, our functional approach can be adapted to
the more general setting.
5 Some Complements
5.1 S-Continuous Functionals
A phenomenon of self-canceling oscillations, typical for the S topology, was described
in Example 3.14. This example shows that the supremum cannot be continuous in the S
topology. In fact, the supremum is lower semicontinuous with respect to S, as are many other
popular functionals; see [13], Corollary 2.10. On the other hand, addition is
sequentially continuous, and this property was crucial in the considerations of Sect. 3.4.
Here is another positive example of an S-continuous functional.
Let μ be an atomless measure on [0, 1] and let h : R^1 → R^1 be a continuous
function. Consider the smoothing operation s_{μ,h} on D([0, 1]) given by the formula

s_{μ,h}(x)(t) = ∫_0^t h(x(s)) μ(ds).

Then s_{μ,h}(x)(·) is a continuous function on [0, 1], and a slight modification of the
Proof of Proposition 2.15 in [13] shows that the mapping

x ↦ s_{μ,h}(x) ∈ (C([0, 1]), ‖·‖_∞)

is continuous. In particular, if we set μ = λ (the Lebesgue measure), h(0) = 0,
h(x) ≥ 0, and suppose that x_n →S 0, then

∫_0^1 h(x_n(s)) ds → 0.
In the case of linear processes, such functionals lead to the following result.

Corollary 5.1 Under the conditions of Corollaries 4.1, 4.3, 4.6 or 4.9, we have for
any ε > 0

∫_0^1 ( | (1/a_n) ∑_{i=1}^{[ns]} ( ∑_{j∈Z} c_{i−j} Y_j − A Y_i ) | ∧ ε ) ds →P 0.

Proof of Corollary 5.1 The expression to be analyzed has the form s_{λ,H}(S_n − A·Z_n)(1),
where H(x) = |x| ∧ ε, and by (26)

S_n(t) − A·Z_n(t) →f.d.d. 0.

We have checked in the course of the Proof of Theorem 3.18 that {S_n} is uniformly
S-tight. By (3), {A·Z_n} is uniformly J1-tight, hence also S-tight. Similarly as in the
Proof of Proposition 3.16, we deduce that {S_n − A·Z_n} is uniformly S-tight. Now an
application of Proposition 3.3 gives

S_n − A·Z_n →D 0,

on the Skorokhod space D([0, 1]) equipped with the S topology, and the continuity of the
smoothing functional completes the proof.
In the Introduction, we provided an example of a linear process (c_0 = 1, c_1 = −1)
for which no convergence in Skorokhod's topologies is possible. In this example, A = 0 and the
limit is degenerate, which might suggest that another, more appropriate norming is
applicable, under which the phenomenon disappears. Here, we give an example with a
non-degenerate limit showing that in the general case M1-convergence need not hold.
Example 5.2 Let c_0 = β > −c_1 = γ > 0. Then X_j = β Y_j − γ Y_{j−1} and, defining
Z_n(t) by (3), we obtain for t ∈ [k/n, (k + 1)/n)

S_n(t) = (1/a_n) ∑_{j=1}^{k} X_j = (1/a_n)(β Y_k − γ Y_0) + (β − γ) Z_n((k − 1)/n).

Clearly, the f.d.d. limit {(β − γ) Z_α(t)} is non-degenerate. We will show that the sequence
{S_n(t)} is not uniformly M1-tight and so cannot converge to {(β − γ) Z_α(t)} in the M1
topology.
For the sake of simplicity, let us assume that the Y_j's are nonnegative and

P(Y_1 > x) = x^{−α},  x ≥ 1.
n1
j=0
Gn =
where n = n1/(3). Then,
We have by (61)
It follows that {Sn(t)} are uniformly M1tight if, and only if, {Sn(t)} are. Let wM1 (x, )
be given by (34). Since P Gcn 1 we have for any > 0 and > 0
P( (1/a_n) max_j Y_{n,j} > ε/(β ∧ γ) ) = P( (1/a_n) max_j Y_j > ε/(β ∧ γ) )
→ 1 − exp( −((β ∧ γ)/ε)^α ) > 0,

and the sequence {S_n(t)} cannot be uniformly M1-tight.
We can explore the machinery of Sect. 4 to obtain a natural
Proposition 5.3 We work under the assumptions of Theorem 2.1. Denote by CY the
set of sequences {ci }iZ such that if
Xi =
c j Yi j , i Z,
jZ
n j
k=1 j
0, as n .
0, as n .
k=1 j
j =
n j
k=1 j
j =
j =
n j
k=1 j
n j
k=1 j
n j
k=1 j
Now both terms tend to 0 by Remark 2.7. Identical reasoning can be used in the proof
of the dual condition in (55).
In the main results of the paper, we studied only independent innovations {Y_j}. It is,
however, clear that the functional S-convergence can be obtained under much weaker
assumptions. In order to apply the crucial Proposition 3.16, we need only that

S_n(t) →f.d.d. A·Z(t),

and that T_n^+ and T_n^− converge in law on the Skorokhod space D([0, 1]) equipped with
the M1 topology. For the latter relations, Theorem 1 of [20] seems to be an ideal tool for
associated sequences (see our Proposition 3.1). A variety of other potential examples is
given in [27].
Acknowledgments The authors would like to thank the anonymous referee for careful reading of the
manuscript and comments which improved the paper in various aspects.
Open Access This article is distributed under the terms of the Creative Commons Attribution License
which permits any use, distribution, and reproduction in any medium, provided the original author(s) and
the source are credited.
Appendix
We provide two results of a technical character. The first one is well-known [1] and is
stated here for completeness. Proposition 5.5 might be of independent interest.
Proposition 5.4 Let {Y_j} be an i.i.d. sequence satisfying (4), (7) and (8), and let {c_j}
be a sequence of numbers. Then the series ∑_{j∈Z} c_j Y_j is well-defined if, and only if,

∑_{j∈Z} |c_j|^α h(|c_j|^{−1}) < +∞.   (56)

Proposition 5.5 Let {Y_j} be an i.i.d. sequence satisfying (4), (7) and (8). Consider an
array {c_{n,j} ; n ∈ N, j ∈ Z} of numbers such that for each n ∈ N

∑_{j∈Z} |c_{n,j}|^α h(|c_{n,j}|^{−1}) < +∞.

Set V_n = ∑_{j∈Z} c_{n,j} Y_j, n ∈ N. Then V_n →P 0 if, and only if,

∑_{j∈Z} |c_{n,j}|^α h(|c_{n,j}|^{−1}) → 0, as n → ∞.   (59)
Lemma 5.6 Assume that

P(|Y| > x) = x^{−α} h(x),

where h(x) is slowly varying at x = ∞. Then there exist constants C_1, C_2 and x_0 such that

|E[Y I(|Y| ≤ x)]| ≤ C_1 x^{1−α} h(x),  x > 0,   (60)
E[Y² I(|Y| ≤ x)] ≤ C_2 x^{2−α} h(x),  x > 0,   (61)
|E[Y I(|Y| > x)]| ≤ E[|Y| I(|Y| ≤ x_0)] + C_1 x^{1−α} h(x),  x > 0.   (62)
Proof Take β > α. Applying the direct half of Karamata's Theorem (Th. 1.5.11 in [5]),
we obtain

E[|Y|^β I(|Y| ≤ x)] ~ (β/(β − α)) x^{β−α} h(x), as x → ∞.

Hence there exists x_0 such that (60) and (61) hold for x > x_0. If 0 < x ≤ x_0, then
E YI(Y x) x = x Px(Yh>(x)x) P(Y > x0)xh(x).
1
Hence, for some x0, we have
Proof of Proposition 5.4 We begin with specifying the conditions of the Kolmogorov
Three Series Theorem in terms of our linear sequences. We have

∑_{j∈Z} P(|c_j Y_j| > 1) = ∑_{j∈Z} P(|Y_j| > |c_j|^{−1}) = ∑_{j∈Z} |c_j|^α h(|c_j|^{−1}).   (63)
Next, by (61),

∑_{j∈Z} Var( c_j Y_j I(|c_j Y_j| ≤ 1) ) ≤ ∑_{j∈Z} E[ (c_j Y_j)² I(|c_j Y_j| ≤ 1) ]
= ∑_{j∈Z} c_j² E[ Y_j² I(|Y_j| ≤ 1/|c_j|) ]
≤ C_2 ∑_{j∈Z} c_j² (1/|c_j|)^{2−α} h(|c_j|^{−1})
= C_2 ∑_{j∈Z} |c_j|^α h(|c_j|^{−1}).   (64)
Applying (60), we obtain

∑_{j∈Z} | E[ c_j Y_j I(|c_j Y_j| ≤ 1) ] | ≤ ∑_{j∈Z} |c_j| | E[ Y_j I(|Y_j| ≤ 1/|c_j|) ] |
≤ C_1 ∑_{j∈Z} |c_j| (1/|c_j|)^{1−α} h(|c_j|^{−1})
= C_1 ∑_{j∈Z} |c_j|^α h(|c_j|^{−1}).   (65)
If α = 1, then by the symmetry we have E[ Y_j I(|Y_j| ≤ a) ] = 0, a > 0, and the
series of truncated expectations trivially vanishes:

∑_{j∈Z} E[ c_j Y_j I(|c_j Y_j| ≤ 1) ] = 0.   (66)
For α ∈ (1, 2), we have E Y_j = 0 and, by (62),

∑_{j∈Z} | E[ c_j Y_j I(|c_j Y_j| ≤ 1) ] | = ∑_{j∈Z} |c_j| | E[ Y_j I(|Y_j| > 1/|c_j|) ] |
≤ E[|Y|] · max_{j∈Z} |c_j| · #{ j ; |c_j| ≥ 1/x_0 } + C_1 ∑_{j∈Z} |c_j| (1/|c_j|)^{1−α} h(|c_j|^{−1}).   (67)
By (63)–(67), we obtain that ∑_{j∈Z} |c_j|^α h(|c_j|^{−1}) < +∞ if, and only if, all the
assumptions of the Three Series Theorem are satisfied. Hence ∑_{j∈Z} c_j Y_j is a.s.
convergent if, and only if, (56) holds.
Let V_n = V_{n,1} + V_{n,2} + V_{n,3} be the decomposition of V_n corresponding to the Three
Series Theorem with truncation level 1. By (64), we have

Var(V_{n,1}) ≤ C_2 ∑_{j∈Z} |c_{n,j}|^α h(|c_{n,j}|^{−1}) → 0, as n → ∞,

if (59) holds. Similarly, V_{n,2} → 0 by (65)–(67). Finally, we have

P(V_{n,3} ≠ 0) ≤ ∑_{j∈Z} P(|c_{n,j} Y_j| > 1) = ∑_{j∈Z} |c_{n,j}|^α h(|c_{n,j}|^{−1}) → 0, as n → ∞.

We have proved the sufficiency part of Proposition 5.5.
To prove the "only if" part, we show first that V_n →P 0 implies uniform
infinitesimality of the coefficients, that is,

sup_{j∈Z} |c_{n,j}| → 0, as n → ∞.

Let {Y_j'} be an independent copy of {Y_j}. If V_n' = ∑_{j∈Z} c_{n,j} Y_j', then also V_n −
V_n' →P 0, and these are series of symmetric random variables. For each n, select some
arbitrary j_n ∈ Z and consider the decomposition into independent symmetric random
variables

V_n − V_n' = c_{n,j_n} (Y_{j_n} − Y_{j_n}') + ∑_{j∈Z, j≠j_n} c_{n,j} (Y_j − Y_j') = W_n + W_n'.

Since {V_n − V_n'}_{n∈N} is uniformly tight, so is {W_n}_{n∈N} (it follows from the Lévy–
Ottaviani inequality, see e.g., Proposition 1.1.1 in [18]). Since the law of Y_j − Y_j' is
non-degenerate, we obtain

sup_n |c_{n,j_n}| < +∞.
= E eicY 2 < 1.
It follows that also
,
.
Then {X̃_{n,j} = c_{n,j} Y_j ; |j| ≤ k_n, n ∈ N} is an infinitesimal array of rowwise
independent random variables, with row sums convergent in probability to zero. Applying
the general central limit theorem (see e.g., Theorem 5.15 in [15]), we obtain

∑_{|j|≤k_n} P(|X̃_{n,j}| > 1) = ∑_{|j|≤k_n} P(|c_{n,j} Y_j| > 1) = ∑_{|j|≤k_n} |c_{n,j}|^α h(|c_{n,j}|^{−1}) → 0.

This completes the Proof of Proposition 5.5.