#### Spatial Central Limit Theorem for Supercritical Superprocesses

Piotr Miłoś (Warsaw, Poland)
We consider a measure-valued diffusion (i.e., a superprocess). It is determined by a couple (L , ψ ), where L is the infinitesimal generator of a strongly recurrent diffusion in Rd and ψ is a branching mechanism assumed to be supercritical. Such processes are known, see for example, (Englander and Winter in Ann Inst Henri Poincaré 42(2):171-185, 2006), to fulfill a law of large numbers for the spatial distribution of the mass. In this paper, we prove the corresponding central limit theorem. The limit and the CLT normalization fall into three qualitatively different classes arising from “competition” of the local growth induced by branching and global smoothing due to the strong recurrence of L . We also prove that the spatial fluctuations are asymptotically independent of the fluctuations of the total mass of the process. This research was partially supported by a Polish Ministry of Science Grant N N201 397537 and by the British Council Young Scientists Programme.
Branching processes; Supercritical branching processes; Limit behavior; Central limit theorem
1 Introduction
1.1 Model
Let {Pt }t≥0 be the semigroup of a strongly recurrent diffusion on Rd with the
infinitesimal generator L. We also introduce the so-called branching mechanism
ψ : ℝ₊ → ℝ. It is represented as

ψ(λ) = −αλ + βλ² + ∫_{ℝ₊} (e^{−λx} − 1 + λx) Π(dx),   (1.1)

where α, β ∈ ℝ, β ≥ 0 and Π is a measure concentrated on ℝ₊ such that ∫_{ℝ₊} min(x², x) Π(dx) < +∞. In this paper, we will study the behavior of a superprocess {X_t}_{t≥0} with the infinitesimal operator L (or equivalently, with the semigroup
P) and branching mechanism ψ. It is a time-homogeneous, measure-valued Markov process. As such, it is characterized by a transition kernel, which in our case is expressed in terms of its Laplace transform

−log E(e^{−⟨f, X_t⟩} | X₀ = ν) = ∫_{ℝ^d} u_f(x, t) ν(dx),   (1.2)

where t ≥ 0, f ∈ b⁺(ℝ^d) (bounded, positive and measurable functions on ℝ^d) and ν ∈ M_F(ℝ^d) (finite, compactly supported measures). The function u_f(x, t) is the unique nonnegative solution of the integral equation

u_f(x, t) = P_t f(x) − ∫₀ᵗ P_{t−s}[ψ(u_f(·, s))](x) ds.   (1.3)
For the technical details of this construction, we refer the reader to [6,7]. The above definition could appear quite abstract, but actually any superprocess has a natural interpretation as the short-lifetime and high-density limit of branching particle systems (see, for example, the Introduction of [8] and Sect. 1.3). There is a vast body of literature concerning various aspects of superprocesses, e.g., [6,7,9,10].
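Since the definition above is abstract, a tiny numerical sketch may help fix ideas. The snippet below is our illustration (not from the paper): it evaluates ψ for a discrete jump measure Π = Σ w·δ_x and checks the basic identities ψ(0) = 0 and ψ′(0) = −α numerically.

```python
import math

def psi(lam, alpha, beta, pi_atoms):
    """Branching mechanism psi(l) = -alpha*l + beta*l**2
    + sum over atoms (x, w) of w*(exp(-l*x) - 1 + l*x),
    where Pi = sum of w*delta_x is a finite discrete measure."""
    jump = sum(w * (math.exp(-lam * x) - 1 + lam * x) for x, w in pi_atoms)
    return -alpha * lam + beta * lam ** 2 + jump

# Example: alpha = 1 (supercritical), beta = 0.5, Pi = 2*delta_{0.3}.
atoms = [(0.3, 2.0)]
assert psi(0.0, 1.0, 0.5, atoms) == 0.0           # psi(0) = 0
h = 1e-6                                           # numerical derivative at 0
d0 = (psi(h, 1.0, 0.5, atoms) - psi(0.0, 1.0, 0.5, atoms)) / h
assert abs(d0 - (-1.0)) < 1e-4                     # psi'(0) = -alpha
```

The jump term contributes nothing to ψ′(0) because e^{−λx} − 1 + λx vanishes to second order at λ = 0, which is why the growth rate is governed by α alone.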
1.2 Results: Outline
We postpone a formal description of our assumptions and results to Sects. 3 and 4, providing intuitions now.
In this paper, we are interested in the supercritical case, in which the system grows exponentially (on the event of survival). The rate of growth is given by −ψ′(0) = α, which, in this paper, is assumed to be strictly positive:

−ψ′(0) = α > 0.

It is standard to prove that the limit V_∞ := lim_{t→+∞} e^{−αt}|X_t|, where |X_t| := ⟨X_t, 1⟩ is the total mass of the system, exists and is a non-trivial random variable. The semigroup P corresponds to a strongly recurrent diffusion with its unique invariant measure denoted by ϕ.
Superprocesses of this type fulfill a spatial law of large numbers. In a nutshell and without specifying detailed assumptions (recall [8, Theorem 1]), this means that for any bounded continuous function f, we have

lim_{t→+∞} e^{−αt} ⟨X_t, f⟩ = ⟨ϕ, f⟩ V_∞,  in probability.
The goal of our paper is to prove the corresponding central limit theorem. This will be achieved by studying the spatial fluctuations

(⟨X_t, f⟩ − |X_t| ⟨ϕ, f⟩) / N_t,   (1.4)

where N_t is some norming, not necessarily deterministic.
Before further discussion, we need to quantify the recurrence of P. For the sake of discussion, not being quite precise, we assume that there exists μ > 0 such that for a bounded continuous function f, the quantity P_t f − ⟨f, ϕ⟩ decays
exponentially fast at rate μ. The behavior of (1.4) depends qualitatively on the sign
of α − 2μ. Roughly speaking, it reflects the interplay of two antagonistic forces, the
growth which is local and makes the system more coarse and the smoothing induced
by the spatial evolution corresponding to P. The results split into three qualitatively
different classes:
Small growth rate α < 2μ (see Theorem 4). In this case, “the smoothing” prevails and the formulation of the result resembles the standard CLT. The normalization is N_t = |X_t|^{1/2} (which is of order e^{(α/2)t}), and the limit is Gaussian, though its variance is given by a complicated formula. Moreover, the limit does not depend on X₀.
Critical growth rate α = 2μ (see Theorem 6). In this case, we are in a situation of a delicate balance between “the growth” and “the smoothing”, with the growth being “somewhat stronger.” The normalization is slightly bigger compared to the classical case: N_t = t^{1/2}|X_t|^{1/2}. The limit still does not depend on X₀.
Large growth rate α > 2μ (see Theorem 8). In this case, “the growth” prevails. The normalization is even bigger: N_t = e^{(α−μ)t} (we have α − μ > α/2 and therefore N_t ≫ |X_t|^{1/2}). What is perhaps most surprising, the limit holds in probability. In addition, the growth is so fast that the limit depends on the starting configuration X₀. Moreover, we suspect that the limit is non-Gaussian.
In either case, we prove that the spatial fluctuations (1.4) become asymptotically independent of the fluctuations of the total mass,

(|X_t| − e^{αt} V_∞) / |X_t|^{1/2},

as the time increases.
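The trichotomy above can be summarized programmatically. The helper below is our illustration (the function name and arguments are hypothetical, not from the paper): it classifies the regime from the sign of α − 2μ and returns the corresponding norming N_t.

```python
import math

def clt_regime(alpha, mu, t, xt_mass):
    """Classify the CLT regime by the sign of alpha - 2*mu and return
    (label, norming N_t); xt_mass plays the role of the total mass |X_t|.
    Exact float equality for the critical case is only for illustration."""
    assert alpha > 0 and mu > 0
    if alpha < 2 * mu:       # smoothing prevails: classical CLT norming
        return "small growth", math.sqrt(xt_mass)
    if alpha == 2 * mu:      # delicate balance: extra sqrt(t) factor
        return "critical growth", math.sqrt(t) * math.sqrt(xt_mass)
    # growth prevails: N_t = e^{(alpha-mu)t} >> sqrt(|X_t|) ~ e^{(alpha/2)t}
    return "large growth", math.exp((alpha - mu) * t)

label, nt = clt_regime(alpha=1.0, mu=0.75, t=10.0, xt_mass=math.exp(10.0))
assert label == "small growth" and abs(nt - math.exp(5.0)) < 1e-6
```

With |X_t| ≈ e^{αt}, the small-growth norming is of order e^{(α/2)t}, while in the large-growth regime e^{(α−μ)t} dominates it, in line with the discussion above.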
1.3 Related Results
In [2], the authors established central limit theorem results for the branching particle system in which particles move according to the Ornstein–Uhlenbeck process (i.e., the one with infinitesimal generator L f = ½σ²Δf − μ(x · ∇f)) and branch after an exponential time into two particles. Such a system is closely related to the superprocess with L and ψ(λ) = −αλ + βλ². In fact, it can be defined as the weak limit of branching particle systems. In the nth approximation, the system starts from a particle configuration distributed according to a Poisson point process with intensity nν (ν is the starting distribution of the superprocess). Each particle carries mass 1/n and lives for an exponential time with mean 1/n. During this time, it executes a random movement according to an Ornstein–Uhlenbeck process. When it dies, the particle is replaced by a random number of offspring. The mean of this number is supposed to be 1 + α/n, while the variance is 2β. Each particle evolves independently of the others. We note that this construction can be extended to general L and ψ (see, for example, [8]).
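The nth approximation described above can be rendered as a direct simulation. The sketch below is ours, not from the paper, and simplifies in several ways we flag explicitly: it is one-dimensional, starts from a single particle instead of a Poisson configuration, uses Euler steps for the OU motion, and takes 0-or-2 offspring with mean 1 + α/n (a particular choice whose variance is close to 1, i.e., roughly β = 1/2).

```python
import math, random

def simulate_branching_ou(n, alpha, gamma, sigma, t_max, x0=0.0, seed=1):
    """Crude nth approximation of the (L, psi)-superprocess: particles of
    mass 1/n, exponential lifetimes with mean 1/n, OU motion (reversion
    gamma, noise sigma), 2 children w.p. (1 + alpha/n)/2 at death, else 0."""
    rng = random.Random(seed)
    particles = [(x0, rng.expovariate(n))]  # (position, remaining lifetime)
    t, dt = 0.0, 0.01
    while t < t_max and particles:
        nxt = []
        for x, life in particles:
            # Euler step of the OU diffusion dX = -gamma*X dt + sigma dB
            x += -gamma * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
            life -= dt
            if life > 0:
                nxt.append((x, life))
            elif rng.random() < (1 + alpha / n) / 2:  # death and branching
                nxt.extend((x, rng.expovariate(n)) for _ in range(2))
        particles = nxt
        t += dt
    return particles  # total mass is len(particles) / n

pts = simulate_branching_ou(n=50, alpha=1.0, gamma=1.0, sigma=1.0, t_max=0.5)
mass = len(pts) / 50
assert mass >= 0.0  # weak sanity check; on average the mass grows like e^{alpha t}
```

Letting n → ∞ in such simulations (with Poisson initial configurations) is precisely the high-density, short-lifetime limit mentioned in the text.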
In [2], the authors studied fluctuations akin to (1.4), discovering three regimes similar to the list above. The particle point of view gives arguably more compelling intuitions. Having this picture in mind, it might be easier to understand the discussion above; moreover, some further heuristics are given in [2, Remarks 3.4, 3.9, 3.13].
Although [2] was an inspiration for this paper, it must be stressed that the approximation, insightful as it is, cannot be easily used as a proof method in the superprocess setting, nor can the proofs of [2] be transferred directly. The main difficulty compared to the branching systems is that a superprocess is not a discrete object. This was overcome using the backbone construction developed in [5]. It represents a supercritical superprocess as a subcritical superprocess (called the dressing) immigrating continuously on top of a branching diffusion. Controlling the aggregate behavior of the dressing was the main technical issue to be resolved in this paper. This was achieved using analytical estimates of the behavior of P, which is a different approach than the coupling techniques applied in [2]. It is noteworthy that these analytical methods proved to be much more robust and allowed us to obtain results for a quite general class of L. Moreover, in this paper we work with a general branching mechanism ψ, assuming only a finite fourth moment.
Related problems for branching particle systems were also considered in [1,4].
1.4 Organization
The next section presents notation and basic facts required later. Section 3 contains the formulation of the assumptions. Section 4 is devoted to the presentation of our results. The proofs are deferred to Sects. 5, 6, 7 and 8 and the “Appendix.”
2 Preliminaries and Notation
Let us first recall the notions which appeared in the introduction. P is the semigroup of the diffusion process with the infinitesimal operator L. To shorten the notation, for α ∈ ℝ we define a semigroup {P^α_t}_{t≥0} by

P^α_t f(x) := e^{αt} P_t f(x).   (2.1)

M_F is the space of finite, compactly supported measures and b⁺(ℝ^d) is the space of bounded, positive and measurable functions on ℝ^d. By c₁, c₂, . . ., we will denote generic constants which might vary from line to line.
For a measure ν and a measurable function f, we write ⟨f, ν⟩ := ∫_{ℝ^d} f(x) ν(dx), provided it exists, and by |ν| we denote its total mass, i.e., |ν| := ⟨1, ν⟩ (we allow it to be infinite).
We will use C₀ to denote the space of continuous functions which grow at most polynomially. Formally:

C₀ = C₀(ℝ^d) := { f : ℝ^d → ℝ : f is continuous and there exists n such that |f(x)|/‖x‖ⁿ → 0 as ‖x‖ → +∞ }.

We will use R₁, R₂, . . . to denote generic functions in C₀, and these may vary from line to line.
For x, y ∈ ℝⁿ, by x · y we denote the usual scalar product. By →_d, we denote convergence in law.
The parameter α in (1.1) is the rate of growth of the model. By Ext, we denote the event that the process becomes extinguished, i.e.,

Ext := { lim_{t→+∞} |X_t| = 0 }.   (2.2)

It is well known that P(Ext) = e^{−λ∗|X₀|}, where

λ∗ is the largest root of ψ(λ) = 0.   (2.3)

Clearly, in the supercritical case we have λ∗ > 0.
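For the quadratic mechanism ψ(λ) = −αλ + βλ² the largest root is explicit, λ∗ = α/β, so P(Ext) = e^{−(α/β)|X₀|}; for a general ψ one can locate λ∗ numerically. The sketch below is ours and assumes only the convexity properties stated above (ψ(0) = 0, ψ′(0) < 0, ψ convex), so ψ is negative on (0, λ∗) and positive afterwards.

```python
import math

def largest_root(psi, lo=1e-12, hi=1.0):
    """Bisection for the largest root lambda* > 0 of a convex psi with
    psi(0) = 0 and psi'(0) < 0 (the supercritical case)."""
    while psi(hi) < 0:          # expand until the root is bracketed
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

alpha, beta = 1.0, 0.5
quad_psi = lambda lam: -alpha * lam + beta * lam ** 2
lam_star = largest_root(quad_psi)
assert abs(lam_star - alpha / beta) < 1e-9             # lambda* = alpha/beta = 2
assert abs(math.exp(-lam_star * 1.0) - math.exp(-2.0)) < 1e-8  # P(Ext) for |X0| = 1
```

The bisection only needs the sign pattern of ψ, which is exactly what convexity and supercriticality guarantee.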
3 Assumptions
In this section, we state precisely the assumptions on the branching mechanism ψ and the diffusion semigroup P. We will discuss them and give an example in Sect. 4.4.

B1 The branching mechanism ψ given by (1.1) is non-trivial, precisely either β ≠ 0 or Π ≠ 0. It is supercritical, i.e., α > 0. Moreover, Π fulfills

∫_{ℝ₊} max(x⁴, x²) Π(dx) < +∞.

These conditions imply

ψ′(0) = −α,  ψ^{(i)}(0) < +∞ for i ∈ {2, 3, 4},

and ψ(0) = 0.
Further, we formulate assumptions on the semigroup P. Note that our formulation,
although not the most compact, is chosen so that it is easy to verify and apply in proofs.
Such a presentation also highlights what properties are essential for proofs.
S1 The semigroup P has the unique invariant probability measure ϕ. We require that any f ∈ C₀(ℝ^d) is integrable with respect to ϕ and for any x ∈ ℝ^d

P_t f(x) → ⟨f, ϕ⟩,  as t → +∞.

We will use f̃ to denote the centering of f with respect to ϕ, i.e.,

f̃ := f − ⟨f, ϕ⟩.   (3.1)

S2 There exists μ > 0 such that for any function f ∈ C₀(ℝ^d) there is R ∈ C₀(ℝ^d) with

|P_t f̃(x)| ≤ e^{−μt} R(x).   (3.2)

S3 There exists a function h ∈ C₀(ℝ^d) such that for any function f ∈ C₀(ℝ^d), there are R ∈ C₀(ℝ^d) and a bounded function r : ℝ₊ → ℝ₊ such that r(t) ↘ 0 and

|e^{μt} P_t f̃(x) − h(x) ⟨f h, ϕ⟩| ≤ R(x) r(t).   (3.3)

Note that for any t ≥ 0, we have ⟨P_t h, ϕ⟩ = 0 (indeed, by the fact that ϕ is invariant we have ⟨P_t h, ϕ⟩ = ⟨h, ϕ⟩ and moreover ⟨P_t h, ϕ⟩ = e^{−μt} ⟨h, ϕ⟩).
We note that (S3) implies (S2). Indeed, one can obtain (3.2) easily by dividing (3.3) by e^{μt}. We note also that (S1) and (S2) imply the following fact. For any f ∈ C₀(ℝ^d), there exists R ∈ C₀(ℝ^d) such that for any t ≥ 0

|P_t f(x)| ≤ R(x).   (3.4)
Remark 1 Conditions (S1), (S2) and (S3) state, roughly speaking, that the diffusion associated with P is strongly recurrent with the spectral gap μ. It might be possible that these conditions can be verified using a Bakry–Émery-type condition or by Foster–Lyapunov criteria. We refer to the classical work [13]; its Section 6 addresses the so-called exponential ergodicity, which might be useful for checking (S1) and (S2). Property (S3) seems harder to check in generality; one can use the asymptotics of the transition density (as in the subsequent example). Other methods include using tools of functional analysis as, for example, in [14, Sect. 3].
Example 2 Let us consider a superprocess with

L f = ½ σ² Δf − γ (x · ∇f),   (3.5)

i.e., the infinitesimal operator of an Ornstein–Uhlenbeck process, where σ > 0 and γ > 0, and ψ(λ) = −αλ + βλ² for α, β > 0. It is obvious that (B1) holds. It is well known that the unique invariant distribution ϕ of L has density

ϕ(x) = (γ/(πσ²))^{d/2} exp(−γ‖x‖²/σ²).

Moreover, for any f ∈ C₀ we have the following representation:

P_t f(x) = E f( x e^{−γt} + ou(t) G ),

where ou(t) := (1 − e^{−2γt})^{1/2} and G is distributed according to ϕ. Using this representation, conditions (S1), (S2) and (S3) can be verified quite easily (we refer to [1, Section 6]). Let us just mention that the function h in (S3) is h(x) = x and μ = γ.
The limit objects V_∞ and H_∞ can be given a more explicit representation: V_∞ is distributed according to Exp(|X₀|^{−1}) and H_∞ is non-Gaussian. More information about the joint distribution of (V_∞, H_∞) is contained in the forthcoming Conjecture 14, which can be proved in this particular case.
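The representation of P_t in Example 2 is the Mehler formula, and for f(x) = x² (in dimension one) it can be evaluated in closed form: since G ∼ ϕ = N(0, σ²/(2γ)), we get P_t f(x) = x² e^{−2γt} + (1 − e^{−2γt}) σ²/(2γ). The sketch below, ours and not from the paper, compares this closed form with a Monte Carlo evaluation of the representation.

```python
import math, random

def pt_x2_exact(x, t, gamma, sigma):
    # P_t f(x) for f(y) = y^2 via the Mehler-type representation
    return x**2 * math.exp(-2*gamma*t) + (1 - math.exp(-2*gamma*t)) * sigma**2 / (2*gamma)

def pt_x2_mc(x, t, gamma, sigma, n=200_000, seed=7):
    rng = random.Random(seed)
    s = math.sqrt(sigma**2 / (2*gamma))        # std dev of G ~ phi
    ou = math.sqrt(1 - math.exp(-2*gamma*t))
    tot = 0.0
    for _ in range(n):
        g = rng.gauss(0.0, s)
        tot += (x * math.exp(-gamma*t) + ou * g) ** 2
    return tot / n

x, t, gamma, sigma = 1.5, 0.7, 1.0, 1.0
assert abs(pt_x2_mc(x, t, gamma, sigma) - pt_x2_exact(x, t, gamma, sigma)) < 0.02
```

As t → +∞ the exact expression tends to σ²/(2γ) = ⟨f, ϕ⟩, illustrating the ergodicity required in (S1).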
4 Results
We start with a brief discussion of the behavior of the total mass of the superprocess,
i.e., {|X_t|}_{t≥0}. Let {V_t}_{t≥0} be defined by

V_t := e^{−αt} |X_t|.   (4.1)

Fact 3 Under assumption (B1), the process {V_t}_{t≥0} is a positive martingale with respect to its natural filtration. Moreover, it converges:

V_∞ := lim_{t→+∞} V_t,  a.s. and in L².   (4.2)

Therefore, V_∞ is non-trivial (e.g., EV_∞ = V₀). We also have

Var(V_∞) = σ_V² |X₀|,  σ_V := (ψ″(0)/α)^{1/2}.   (4.3)

The proof of the martingale property and (4.2) is analogous to the proof of the forthcoming Fact 13 and is left to the reader.
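Fact 3 can be probed numerically. For the quadratic mechanism ψ(λ) = −αλ + βλ², the total mass {|X_t|} is a Feller branching diffusion dZ = αZ dt + √(2βZ) dB (here 2β = ψ″(0)); the sketch below is our illustration, using a crude Euler scheme, and checks only that the empirical mean of V_T = e^{−αT}Z_T stays near V₀, as the martingale property requires.

```python
import math, random

def feller_paths(alpha, beta, z0, t_max, dt=0.01, n_paths=5000, seed=3):
    """Euler scheme for the Feller diffusion dZ = alpha*Z dt + sqrt(2*beta*Z) dB,
    the total-mass process for psi(l) = -alpha*l + beta*l**2; returns
    samples of V_T = e^{-alpha*T} Z_T."""
    rng = random.Random(seed)
    out = []
    steps = int(t_max / dt)
    for _ in range(n_paths):
        z = z0
        for _ in range(steps):
            z += alpha * z * dt + math.sqrt(max(2 * beta * z, 0.0) * dt) * rng.gauss(0, 1)
            z = max(z, 0.0)     # absorption at 0 (extinction)
        out.append(math.exp(-alpha * t_max) * z)
    return out

vs = feller_paths(alpha=1.0, beta=0.5, z0=1.0, t_max=1.0)
mean_v = sum(vs) / len(vs)
assert abs(mean_v - 1.0) < 0.1   # E V_T = V_0 = |X_0| (martingale property)
```

The empirical variance of the samples approaches σ_V²|X₀| = (2β/α)|X₀| as T grows, matching (4.3) up to Monte Carlo and discretization error.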
We recall that α > 0 is the growth rate of the system (see (1.1)) and that μ > 0 is the
constant introduced in (S2)–(S3). Analogously to the presentation in the introduction,
we split this section into three parts depending on the sign of α − 2μ.
4.1 Slow Growth α < 2μ

We recall (2.1) and (3.1), and define σ_f² as in (4.4).

Theorem 4 Let {X_t}_{t≥0} be the superprocess starting from X₀ ∈ M_F(ℝ^d). Let us assume that (B1), (S1), (S2) and α < 2μ hold. Then, for any f ∈ C₀(ℝ^d) we have σ_f < +∞ and, conditionally on the event Ext^c, the following holds:

( e^{−αt}|X_t|,  (|X_t| − e^{αt}V_∞)/|X_t|^{1/2},  ⟨X_t, f̃⟩/|X_t|^{1/2} ) →_d (V̂_∞, G₁, G₂),  as t → +∞,   (4.5)

where G₁ ∼ N(0, σ_V²), G₂ ∼ N(0, σ_f²) and V̂_∞ is V_∞ conditioned on Ext^c. Moreover, the random variables V̂_∞, G₁, G₂ are independent.

Remark 5 The law of the first coordinate of the limit depends on X₀ only through its total mass |X₀| (see Fact 3). The second and third coordinates do not depend on X₀ at all.
The proof is given in Sect. 6.
4.2 Critical Growth α = 2μ

We recall the function h from (S3) and define

σ_f² := ψ″(0) ⟨(h ⟨f h, ϕ⟩)², ϕ⟩.   (4.6)

Using (S1) and (S3), one easily checks that for f ∈ C₀ we have σ_f² < +∞. Let us recall the event Ext in (2.2) and σ_V given by (4.3). The main result of this section is

Theorem 6 Let {X_t}_{t≥0} be the superprocess starting from X₀ ∈ M_F(ℝ^d). Let us assume that (B1), (S1), (S2), (S3) and α = 2μ hold. Then, for any f ∈ C₀(ℝ^d), conditionally on the event Ext^c, the following holds:

( e^{−αt}|X_t|,  (|X_t| − e^{αt}V_∞)/|X_t|^{1/2},  ⟨X_t, f̃⟩/(t^{1/2}|X_t|^{1/2}) ) →_d (V̂_∞, G₁, G₂),  as t → +∞,

where G₁ ∼ N(0, σ_V²), G₂ ∼ N(0, σ_f²) and V̂_∞ is V_∞ conditioned on Ext^c. Moreover, the variables V̂_∞, G₁, G₂ are independent.
The proof is given in Sect. 8.
4.3 Fast Growth α > 2μ

Let h be the function from (S3). We define a process {H_t}_{t≥0} by

H_t := e^{−(α−μ)t} ⟨X_t, h⟩.   (4.7)

Fact 7 Let us assume (B1), (S1), (S2) and (S3). The process H is a martingale and under the assumption α > 2μ it is L²-bounded.

From this fact, it follows that in the setting of this section, the limit

H_∞ := lim_{t→+∞} H_t   (4.8)

exists both a.s. and in L². Let us recall the event Ext in (2.2) and σ_V given by (4.3). The main result of this section is

Theorem 8 Let {X_t}_{t≥0} be the superprocess starting from X₀ ∈ M_F(ℝ^d). Let us assume that (B1), (S1), (S2), (S3) and α > 2μ hold. Then, for any f ∈ C₀(ℝ^d), conditionally on the event Ext^c, the following holds:

( e^{−αt}|X_t|,  (|X_t| − e^{αt}V_∞)/|X_t|^{1/2},  ⟨X_t, f̃⟩/e^{(α−μ)t} ) →_d (V̂_∞, G, ⟨f h, ϕ⟩ Ĥ_∞),  as t → +∞,   (4.9)

where G ∼ N(0, σ_V²), the variables V̂_∞, Ĥ_∞ are, respectively, V_∞, H_∞ conditioned on Ext^c, and (V̂_∞, Ĥ_∞) and G are independent. Moreover,

( e^{−αt}|X_t|,  (⟨X_t, f⟩ − |X_t|⟨f, ϕ⟩)/e^{(α−μ)t} ) → (V_∞, ⟨f h, ϕ⟩ · H_∞),  in probability.   (4.10)
Remark 9 The law of H_∞ exhibits a non-trivial dependence on the starting condition X₀, and V_∞, H_∞ are not independent. We expect that H_∞ is non-Gaussian. We make those observations precise in Conjecture 14, which is illustrated using the Ornstein–Uhlenbeck superprocess from Example 2. We notice that, being the limit of infinitely divisible processes, the pair (V_∞, H_∞) is also infinitely divisible. Determining its Lévy exponent would be an interesting result, though it seems unlikely to be obtained in a general setting.

The convergence of the second coordinate in (4.10) is closer to a law of large numbers than to a central limit theorem. Intuitively speaking, the system grows so fast that the fluctuations become localized. This also manifests itself in the fact that the normalization is much bigger than the classical one. Writing exp((α − μ)t) = exp(αt) exp(−μt), we can decompose the normalization into exp(αt) and exp(−μt). The first term corresponds to the standard law of large numbers, and the second one reflects the fact that the mass of the system, roughly speaking, is distributed according to P*_t (the measure adjoint to P_t). More precisely, by (S3) we have e^{μt} P_t f̃ ≈ h ⟨f h, ϕ⟩. Following these observations, we also conjecture that the convergence above holds almost surely.
The proofs are given in Sect. 7.
4.4 Discussion and Remarks
Remark 10 In our paper, we assume (B1), which states that the branching mechanism admits a fourth moment. We use this assumption to verify Lyapunov's condition in the proofs of the central limit theorems. It seems that the existence of a (2 + ε)-moment for some ε > 0 should be sufficient, but we do not have the necessary formulas to calculate the moments of a superprocess in such a case. Further, it is not unlikely that the existence of the second moment is enough for the results to hold.

An interesting question would be to go beyond this assumption, namely, to study branching laws with heavy tails. It is natural to expect a different normalization and convergence to stable laws.
5 Proof Preliminaries
In this section, we gather necessary prerequisites for the proofs in Sects. 6, 7 and 8.
5.1 Backbone Construction
Supercritical superprocesses admit a beautiful and insightful description known as the backbone construction/decomposition. According to this construction, a supercritical superprocess consists of subcritical superprocesses (the so-called dressing) immigrating along the so-called (prolific) backbone, which is a supercritical branching particle system. This allows one to transfer many results concerning supercritical branching systems to superprocesses. On the conceptual level, this paper follows the strategy of [2], which presents CLTs for some branching particle systems. The main issue is to control the behavior of the dressing. We will comment on that once again after presenting the decomposition (5.7).
Now we briefly discuss some aspects of the backbone construction, referring the reader to [5, Sect. 2.4] for more details.¹ Let us recall the branching mechanism given by (1.1); we assume that it is supercritical, i.e., α > 0. Let λ∗ be the largest root of ψ(λ) = 0. We denote

ψ*(λ) := ψ(λ + λ∗).   (5.1)

¹ This section is a shortened version of the description in [5]. The author thanks the authors of [5] for letting him use it.
This happens to be a valid branching mechanism, and thus, we may consider a superprocess with this branching mechanism; it will be referred to as X*. It is subcritical, i.e., its total mass decays exponentially fast with rate

α* = −(ψ*)′(0) = −ψ′(λ∗) < 0.   (5.2)

The inequality follows by the fact that ψ is strictly convex. Next we define

F(s) := ψ(λ∗(1 − s))/λ∗.   (5.3)

It is the generating function of the branching law of the backbone process {Z_t}_{t≥0}. More precisely, Z is a Markov process consisting of a finite number of individuals. Each of them, from the moment of birth, lives for an independent and exponentially distributed period of time with parameter ψ′(λ∗), during which it executes an L-diffusion started from its position of birth, and at death it gives birth, at the same position, to an independent number of offspring with the distribution described by F. The configuration of particles can be naturally identified with an atomic measure. The space of such measures is denoted by M_a(ℝ^d).
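For the quadratic mechanism ψ(λ) = −αλ + βλ² the backbone can be made explicit. Assuming the branching generator F(s) = ψ(λ∗(1 − s))/λ∗ standard in the backbone literature (our assumption here), a short computation with λ∗ = α/β gives F(s) = α(s² − s), i.e., rate-α branching with offspring generating function p(s) = s²: binary splitting. The sketch below, ours, checks this algebra numerically.

```python
def backbone_generator(alpha, beta):
    """For psi(l) = -alpha*l + beta*l**2: lambda* = alpha/beta and
    F(s) = psi(lambda*(1-s))/lambda*; one can check F(s) = alpha*(s**2 - s),
    i.e. rate-alpha binary branching (offspring pgf p(s) = s**2)."""
    lam_star = alpha / beta
    psi = lambda l: -alpha * l + beta * l ** 2
    return lambda s: psi(lam_star * (1 - s)) / lam_star

alpha, beta = 1.3, 0.4
F = backbone_generator(alpha, beta)
for s in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert abs(F(s) - alpha * (s * s - s)) < 1e-12
```

This matches the branching rate ψ′(λ∗) = −α + 2βλ∗ = α obtained from the lifetime description above.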
Definition 11 Fix ν ∈ M_F(ℝ^d) and γ ∈ M_a(ℝ^d). Let Z be a branching particle diffusion (i.e., a backbone) with initial configuration γ and let X^{0,*} be an independent copy of X* (i.e., with the subcritical branching mechanism (5.1)) such that X^{0,*}_0 = ν. We define an M_F(ℝ^d)-valued stochastic process {Λ_t}_{t≥0} by

Λ = X^{0,*} + I,   (5.4)

where the process {I_t}_{t≥0} is independent of X^{0,*}. This process has a certain pathwise description, namely I consists of a subcritical superprocess immigrating along the backbone process. The full description is presented in [5]. The joint process {(Λ_t, Z_t)}_{t≥0} is Markovian; we denote its law by P_{ν×γ}. The following equation characterizes the transition kernel of this process:

E_{ν×γ} exp( −⟨f, Λ_t⟩ − ⟨h, Z_t⟩ ) = e^{−⟨u*_f(·,t), ν⟩ − ⟨v_{f,h}(·,t), γ⟩},   (5.5)

where f, h ∈ b⁺(ℝ^d) and e^{−v_{f,h}(x,t)} is the unique [0,1]-valued solution of the integral equation

e^{−v_{f,h}(x,t)} = P_t[e^{−h}](x) + (1/λ∗) ∫₀ᵗ P_{t−s}[ ψ*( −λ∗ e^{−v_{f,h}(·,s)} + u*_f(·,s) ) − ψ*( u*_f(·,s) ) ](x) ds,   (5.6)

where u*_f is the solution of (1.3) with the subcritical branching mechanism ψ* given by (5.1).
We now present the main result concerning the backbone construction. First, we randomize the law P_{ν×γ} for ν ∈ M_F(ℝ^d) by replacing the deterministic choice of γ with a Poisson random measure having intensity λ∗ν. We denote the resulting law by P_ν.
Theorem 12 ([5, Theorem 2]) For any ν ∈ M_F(ℝ^d), under the measure P_ν the process Λ is Markovian and has the same law as X starting from X₀ = ν.
For any 0 ≤ s < t, we decompose the immigration process I (see (5.4)) as follows:

I_t = D^s_{t−s} + Σ_{i=1}^{|Z_s|} Λ^{i,s}_{t−s},

where {D^s_t}_{t≥0} describes the evolution of the dressing which appeared in the system before time s. The process Λ^{i,s} describes the mass which immigrated along the subtree stemming from the i-th prolific individual at time s, located at Z_s(i) (we choose any enumeration of the particles of Z). We have thus the following decomposition:

Λ_t = X^{0,*}_t + D^s_{t−s} + Σ_{i=1}^{|Z_s|} Λ^{i,s}_{t−s}.   (5.7)

Let us define {Y^s_t}_{t≥0} by

Y^s_t := X^{0,*}_{t+s} + D^s_t.   (5.8)

We have Y^s_0 = X_s, and Y evolves according to the subcritical branching mechanism ψ*. Subcriticality is fundamental for our proof because this process is negligible when t ≫ s. The third term of (5.7) is a sum of random variables indexed by the branching process Z, to which techniques similar to [2] can be applied. Each of the processes Λ^{i,s} performs a Markovian evolution described by (5.5) with the starting conditions ν = 0 and γ = δ_{Z_s(i)}.
5.2 Martingales and Their Limits
We recall V and H (given by (4.1) and (4.7)). We define their analogues {W_t}_{t≥0}, {I_t}_{t≥0} associated with the backbone process Z. Namely,

W_t := e^{−αt} |Z_t|,   (5.9)

I_t := e^{−(α−μ)t} ⟨Z_t, h⟩,   (5.10)

where h is the eigenfunction introduced in (S3). Let us assume that V, H, W and I are defined for the backbone construction.

Fact 13 Let us assume that (B1), (S1) and (S2) hold. Then, W is a positive, L²-bounded martingale. We denote its limit by W_∞. Moreover,

V_∞ = (1/λ∗) W_∞,  a.s.   (5.11)

If, in addition, (S3) holds, then I is a martingale, which for α > 2μ is L²-bounded. In this case the limit

I_∞ := lim_{t→+∞} I_t

exists a.s. and in L². Moreover,

H_∞ = (1/λ∗) I_∞,  a.s.

The proof uses some facts which are presented later and thus is postponed to Sect. 7.
By Theorem 12, the backbone Z starts with a random number of particles. The definitions of W and I and the convergences remain valid under the assumption Z₀ = δ₀ (i.e., one particle located at 0). We denote the joint limit in this case by (Ǐ_∞, W̌_∞).
We conjecture the following behavior of the law of (H_∞, V_∞).

Conjecture 14 Let us assume that (B1), (S1), (S3) and α > 2μ hold and let {(Ǐ^i_∞, W̌^i_∞)}_{i≥1} be an i.i.d. sequence distributed according to (Ǐ_∞, W̌_∞). Let ν ∈ M_F(ℝ^d) and let N be a Poisson point process with intensity ν, independent of the sequence. We define

Ȟ_∞ := (1/λ∗) ( Σ_{i=1}^{|N|} Ǐ^i_∞ + Σ_{i=1}^{|N|} h(x_i) W̌^i_∞ ),   V̌_∞ := (1/λ∗) Σ_{i=1}^{|N|} W̌^i_∞,

where |N| is the number of points in N and (x₁, . . . , x_{|N|}) are their positions.
Let (H_∞, V_∞) be the limits of the martingales (4.2) and (4.8) for the superprocess starting from X₀ = ν; then

(H_∞, V_∞) =_d (Ȟ_∞, V̌_∞).
The conjecture is supported by the fact that it holds in the case of the superprocess from Example 2. In this case, it follows simply by Fact 13, Theorem 12 and an analogous decomposition for the Ornstein–Uhlenbeck branching process given in [2, Proposition 3.11]. In [2, Remark 3.14], it is also proven that in this case H_∞ is not Gaussian.
5.3 Moments
This section is devoted to the presentation of the moment formulas for the processes appearing in the proofs. In the paper, we utilize moments up to order 4. We recall the branching mechanisms ψ and ψ* given in (1.1) and (5.1).
Given f ∈ C₀(ℝ^d), we define u¹_f, u²_f : ℝ^d × ℝ₊ → ℝ and u*_{f,1}, u*_{f,2} : ℝ^d × ℝ₊ → ℝ by

u¹_f(x, t) := P^α_t f(x),   u*_{f,1}(x, t) := P^{α*}_t f(x),   (5.12)

u²_f(x, t) := −ψ″(0) ∫₀ᵗ P^α_{t−s}[(u¹_f(·, s))²](x) ds,   (5.13)

u*_{f,2}(x, t) := −(ψ*)″(0) ∫₀ᵗ P^{α*}_{t−s}[(u*_{f,1}(·, s))²](x) ds.   (5.14)
Further, let

B₃ = {(1,1,0), (3,0,0)},  B₄ = {(1,0,1,0), (0,2,0,0), (2,1,0,0), (4,0,0,0)},   (5.15)

and let {c¹_m}_{m∈B₃∪B₄}, {c²_m}_{m∈B₃∪B₄} and {c³_m}_{m∈B₃∪B₄} be constants to be specified later. We define u*_{f,3}, u*_{f,4} : ℝ^d × ℝ₊ → ℝ and V^k_f : ℝ^d × ℝ₊ → ℝ, k ≤ 4, by the integral recursions

u*_{f,k}(x, t) := ∫₀ᵗ P^{α*}_{t−s}[ Σ_{m∈B_k} c¹_m Π_{j=1}^{k} (u*_{f,j}(·, s))^{m_j} ](x) ds,  k ∈ {3, 4},   (5.16)–(5.17)

V^k_f(x, t) := ∫₀ᵗ P^α_{t−s}[ Σ_{m∈B_k} ( c²_m Π_{j=1}^{k} ( −λ∗ V^j_f(·, s) + u*_{f,j}(·, s) )^{m_j} + c³_m Π_{j=1}^{k} (u*_{f,j}(·, s))^{m_j} ) ](x) ds.   (5.18)–(5.19)

The usefulness of these formulas follows from
Lemma 15 Under assumptions (B1) and (S1), for any f ∈ C₀(ℝ^d) the formulas (5.13), (5.14), (5.16), (5.17), (5.18) and (5.19) are well defined (in particular, all quantities are finite). They can be used to calculate moments, viz.

(1) For any X₀ ∈ M_F(ℝ^d) we have

E⟨X_t, f⟩ = ⟨u¹_f(·, t), X₀⟩,   Var(⟨X_t, f⟩) = −⟨u²_f(·, t), X₀⟩.   (5.20)

Similar formulas hold for the subcritical superprocess with the branching mechanism ψ*, namely

E⟨X*_t, f⟩ = ⟨u*_{f,1}(·, t), X₀⟩,   Var(⟨X*_t, f⟩) = −⟨u*_{f,2}(·, t), X₀⟩.   (5.21)

(2) There is a choice of the constants c¹_m, c²_m and c³_m such that for x ∈ ℝ^d and k ≤ 4 we have

E_{0×δ_x} ⟨f, Λ_t⟩^k = (−1)^k V^k_f(x, t).   (5.22)
Remark 16 We recall that the process Λ was defined in Definition 11. Formula (5.22) will be used to calculate the moments of Λ^{i,s} in (5.7). To this end, notice that under E_{0×δ_x} the process Λ has the same law as {Λ^{i,s}_u}_{u≥0} under the condition Z_s(i) = x.
Moreover, we note that the constants c¹_m, c²_m and c³_m can be specified explicitly, though they are not relevant to our proofs.
Using these formulas, we analyze the process Y^s defined in (5.8). Let f ∈ C₀(ℝ^d); using the strong Markov property, (5.20) and (5.21), we obtain

E⟨Y^s_t, f⟩ = E E_{X_s}⟨X*_t, f⟩ = e^{α*t} E⟨X_s, P_t f⟩ = e^{αs} e^{α*t} ⟨P_s(P_t f), X₀⟩ = e^{αs+α*t} ⟨P_{t+s} f, X₀⟩,   (5.23)

where under the measure E_{X_s} the process X* is a subcritical superprocess starting from X_s. In the last transformation, we used the fact that P is a semigroup. Now, by α* < 0, we see that indeed Y^s_t is negligible for t ≫ s.
It will be useful to have the following bounds.

Lemma 17 Assume (B1), (S1) and (S2). Given f ∈ C₀(ℝ^d), there exists R ∈ C₀(ℝ^d) such that

|u²_f(x, t)| ≤ e^{2αt} R(x),   |u*_{f,2}(x, t)| ≤ e^{α*t} R(x),   (5.24)

|u*_{f,3}(x, t)| ≤ e^{α*t} R(x),   |u*_{f,4}(x, t)| ≤ e^{α*t} R(x),   (5.25)

and finally also

V²_f(x, t) ≤ e^{2αt} R(x).   (5.26)
The proofs of Lemmas 15 and 17 are technical and thus postponed to the “Appendix.” We will also need moment formulas for the backbone process. We skip the proofs, referring the reader to [11] and to the derivation in [2, Sect. 4.1].

Lemma 18 Let us assume (B1) and (S1). Let Z be the backbone process as in Theorem 12. Then, there exists C > 0 such that for any f ∈ C₀(ℝ^d) we have

E⟨Z_t, f⟩ = ⟨P^α_t f, λ∗ν⟩,   (5.27)

E⟨Z_t, f⟩² = λ∗ ∫_{ℝ^d} [ P^α_t f²(x) + C ∫₀ᵗ P^α_{t−s}(P^α_s f(·))²(x) ds ] ν(dx),   (5.28)

where we recall that λ∗ is given by (2.3).
6 Proof of Theorem 4
In this section, we fix f ∈ C₀(ℝ^d) and make the standing assumption that (B1), (S1), (S2) and α < 2μ hold.
Let us first outline the proof. We use the decomposition of Λ given in (5.7). We recall that V_∞ is the limit of the martingale V (see (4.1) and Fact 3), that f̃ = f − ⟨f, ϕ⟩ and finally (2.3). We start with the following random vectors:

K₁(t) := ( e^{−αt}|Λ_t|,  e^{−(α/2)t}(|Λ_t| − e^{αt}V_∞),  e^{−(α/2)t}⟨Λ_t, f̃⟩ ),   (6.1)

where {V^i_∞}_{i∈ℕ} are i.i.d. copies of V_∞, M^i_t := e^{−((k−1)α/2)t} ⟨Λ^{i,t}_{(k−1)t}, f̃⟩ and m^i_t := E(M^i_t | Z_t) = E(M^i_t | Z_t(i)) (k and further details of the definitions will be specified later).
We will show that

lim_{t→+∞} [K₁(t) − K₅(t)] = 0,  in probability.   (6.2)

Next, we will consider a random vector related to K₅ defined by

K₆(t) := ( e^{−αt}|Z_t|/λ∗,  ⌊|Λ_kt|⌋^{−1/2} Σ_{i=1}^{⌊|Λ_kt|⌋} (1 − V^i_∞),  (|Z_t|/λ∗)^{−1/2} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) 1_{‖Z_t(i)‖ < log t} ),   (6.3)

and show that

K₆(t) →_d (V̂_∞, G₁, G₂),  as t → +∞,   (6.4)

where the limit is as in (4.5). From these results, Theorem 4 follows by standard arguments.
Before going to the proofs, we recall (3.1) and σ_f² given by (4.4), and we state the following technical lemma.

Lemma 19 We have σ_f² < +∞. Moreover, there exists R ∈ C₀(ℝ^d) such that

|V⁴_{f̃}(x, t)| ≤ e^{2αt} R(x).   (6.6)

The proof is deferred to the end of this section.
6.1 Proof of (6.2)
We will proceed by defining auxiliary vectors K₂(t), K₃(t), K₄(t) and proving that for i ∈ {1, 2, 3, 4} we have

lim_{t→+∞} |K_{i+1}(t) − K_i(t)| = 0,  in probability.

This clearly will establish (6.2). Let us fix k ∈ ℕ such that (we recall that α* is negative)

k > max( μ/(μ − α/2), −α/α* ).   (6.7)

We set

K₂(t) := ( e^{−αt}|Z_t|/λ∗,  e^{−(kα/2)t}(|Λ_kt| − e^{kαt}V_∞),  e^{−(kα/2)t}⟨Λ_kt, f̃⟩ ).   (6.8)

Obviously, the limit of K₁(kt) is the same as the one of K₁(t). Moreover, we recall (5.9), and by Fact 3 and Fact 13 we have V_{kt} − W_t/λ∗ → 0 a.s. Therefore, |K₂(t) − K₁(kt)| → 0 almost surely.
We will now concentrate on the second coordinate. The process of the total mass {|Λ_t|}_{t≥0} is a continuous-state branching process (CSBP) (see [12, Sect. 10]). As such, it enjoys the branching property (see [12, 10.1]). Thus, for s ≥ kt we may decompose

|Λ_s| = Σ_{i=1}^{⌊|Λ_kt|⌋} F^i_{s−kt} + F̂_{s−kt},   (6.9)

where {F^i_s}_{s≥0} are independent CSBPs having the initial mass 1 and {F̂_s}_{s≥0} is a CSBP with the initial mass |Λ_kt| − ⌊|Λ_kt|⌋. Analogously to (4.1), the processes V^i_s := e^{−αs}F^i_s and V̂_s := e^{−αs}F̂_s are positive martingales with the respective limits V^i_∞ and V̂_∞ as described in Fact 3. Passing to the limit in (6.9), we get

V_∞ = e^{−kαt} ( Σ_{i=1}^{⌊|Λ_kt|⌋} V^i_∞ + V̂_∞ ).

One easily checks that

e^{−(kα/2)t}(|Λ_kt| − e^{kαt}V_∞) − e^{−(kα/2)t} Σ_{i=1}^{⌊|Λ_kt|⌋} (1 − V^i_∞) = e^{−(kα/2)t}( |Λ_kt| − ⌊|Λ_kt|⌋ − V̂_∞ ) → 0,  in probability.   (6.10)
We pass to analyzing the third coordinate of (6.8). We recall that M^i_t = e^{−((k−1)α/2)t} ⟨Λ^{i,t}_{(k−1)t}, f̃⟩ and observe that by (5.7) and (5.22) we have

E| e^{−(kα/2)t}⟨Λ_kt, f̃⟩ − e^{−(α/2)t} Σ_{i=1}^{|Z_t|} M^i_t | ≤ e^{−(kα/2)t} E⟨Y^t_{(k−1)t}, |f̃|⟩ → 0.   (6.11)

This follows easily by (5.23) and (6.7) (the second proviso). To recapitulate, we set

K₃(t) := ( e^{−αt}|Z_t|/λ∗,  e^{−(kα/2)t} Σ_{i=1}^{⌊|Λ_kt|⌋} (1 − V^i_∞),  e^{−(α/2)t} Σ_{i=1}^{|Z_t|} M^i_t ).   (6.12)

By (6.10) and (6.11), we have |K₃(t) − K₂(t)| → 0.
We recall also that m^i_t = E(M^i_t | Z_t) = E(M^i_t | Z_t(i)), with Z_t(i) being the location of the i-th particle of Z (in some ordering). We define

K₄(t) := ( e^{−αt}|Z_t|/λ∗,  e^{−(kα/2)t} Σ_{i=1}^{⌊|Λ_kt|⌋} (1 − V^i_∞),  e^{−(α/2)t} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) ).

By (5.22) and assumption (S2), we have

|m^i_t| ≤ c₁ e^{−((k−1)α/2)t} e^{(k−1)αt} |P_{(k−1)t} f̃(Z_t(i))| ≤ e^{(α(k−1)/2)t} e^{−μ(k−1)t} R₁(Z_t(i)).

Further, by (3.4), (5.27), the fact that X₀ ∈ M_F(ℝ^d) and using the first proviso of (6.7), we obtain

E( e^{−(α/2)t} Σ_{i=1}^{|Z_t|} |m^i_t| ) ≤ c₂ e^{(α/2)t} e^{(α(k−1)/2)t} e^{−μ(k−1)t} ⟨P_t R₁, X₀⟩ → 0.   (6.13)

In this way, we have established that |K₃(t) − K₄(t)| → 0.
Finally, we deal with |K₅(t) − K₄(t)|. We introduce the truncation in order to be able to control moments in the next section and in Lemma 19; the choice of log t is somewhat arbitrary. We define I(t) and use the conditional expectation to calculate

I(t) := E( e^{−(α/2)t} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) 1_{‖Z_t(i)‖ ≥ log t} )²
= e^{−αt} E( Σ_{i=1}^{|Z_t|} E( (M^i_t − m^i_t)² | Z_t(i) ) 1_{‖Z_t(i)‖ ≥ log t} )
≤ e^{−αt} E( Σ_{i=1}^{|Z_t|} E( (M^i_t)² | Z_t(i) ) 1_{‖Z_t(i)‖ ≥ log t} ).
We recall (5.18), (5.22) and obtain

E( (M^i_t)² | Z_t ) ≤ c₁ e^{−((k−1)α)t} ∫₀^{(k−1)t} P^α_{(k−1)t−s}[ (u¹_{f̃}(·, s))² + u*_{f̃,1}(·, s) + u*_{f̃,2}(·, s) ](Z_t(i)) ds.   (6.14)

We treat the first term. By (S2), α < 2μ and (3.4), we obtain that it is bounded by c₂R₂(Z_t(i)) for some R₂ ∈ C₀(ℝ^d). The other terms are easier and left to the reader. We conclude that

E( (M^i_t)² | Z_t ) ≤ R₃(Z_t(i)).   (6.15)
By (5.27) and the Cauchy–Schwarz inequality, we conclude that
I (t ) ≤ e−αt E
⎝
i=1
For any x ∈ R, using (S1), we get
lim sup Pt 1 · ≥log t (x ) ≤ lim sup lim sup Pt 1 · ≥y (x ) = lim sup 1 · ≥y , ϕ = 0.
t→+∞ y→+∞ t→+∞ y→+∞
Function x →
Pt 1 · ≥log t (x ) is continuous and the support of X0 is compact thus
lim sup sup
t→+∞ x∈supp(X0)
Pt 1 · ≥log t (x ) = 0.
This and (3.4) imply I (t ) → 0 and consequently |K5(t ) − K4(t )| → 0.
6.2 Proof of (6.4)
We will use characteristic functions. It will be convenient to work conditionally on the event E_t := {|X_kt| ≥ t} ∩ {|Z_t| ≥ t} (we denote the corresponding expectation by E_t). We set
χ1(θ1, θ2, θ3; t) := E_t exp{ iθ1 e^{-αt}|Z_t|/λ* + iθ2 |X_kt|^{-1/2} Σ_{i=1}^{⌊|X_kt|⌋} (1 − V^i_∞) + iθ3 (|Z_t|/λ*)^{-1/2} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) 1_{{‖Z_t(i)‖ < log t}} }
and
χ3(θ1, θ2, θ3; t) := e^{-(θ2 σ_V)²/2} e^{-(θ3 σ_f)²/2} E_t exp{ iθ1 e^{-αt}|Z_t|/λ* }.
We shall show that for any θ1, θ2, θ3, we have
lim_{t→+∞} |χ1(θ1, θ2, θ3; t) − χ3(θ1, θ2, θ3; t)| = 0.  (6.16)
Secondly, we notice that P(0 < |X_kt| ≤ t) → 0 and P(0 < |Z_t| ≤ t) → 0; thus, 1_{E_t} → 1_{Ext^c} a.s., and Fact 3 implies that
lim_{t→+∞} χ3(θ1, θ2, θ3; t) = e^{-(θ3 σ_f)²/2} e^{-(θ2 σ_V)²/2} E exp(iθ1 V̂_∞).  (6.17)
Using 1_{E_t} → 1_{Ext^c} a.s., it is a standard task to conclude (6.4) from (6.16) and (6.17). To get (6.16), we will introduce an intermediate function χ2 and show, using the central limit theorem, that
|χ_i(θ1, θ2, θ3; t) − χ_{i+1}(θ1, θ2, θ3; t)| →_t 0, for i ∈ {1, 2}.  (6.18)
Let h be the characteristic function of (1 − V^i_∞). One checks that all the random variables in the definition of χ1, except for the V^i_∞, are measurable with respect to the σ-field F generated by {X_ks, Z_s}_{s≤t}. Moreover, conditionally on F, the V^i_∞ are i.i.d. By Fact 3, we have E(1 − V^i_∞) = 0 and Var(V^i_∞) = σ_V². Using conditional expectation, we obtain
χ1(θ1, θ2, θ3; t) = E_t [ exp{ iθ1 e^{-αt}|Z_t|/λ* + iθ3 (|Z_t|/λ*)^{-1/2} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) 1_{{‖Z_t(i)‖ < log t}} } h( θ2 |X_kt|^{-1/2} )^{⌊|X_kt|⌋} ].
The central limit theorem yields h(θ2/√n)^n → e^{-(θ2 σ_V)²/2}. This motivates the following definition:
χ2(θ1, θ2, θ3; t) := e^{-(θ2 σ_V)²/2} E_t exp{ iθ1 e^{-αt}|Z_t|/λ* + iθ3 (|Z_t|/λ*)^{-1/2} Σ_{i=1}^{|Z_t|} (M^i_t − m^i_t) 1_{{‖Z_t(i)‖ < log t}} }.
Lebesgue's dominated convergence theorem and the assumption on the event E_t yield
|χ1(θ1, θ2, θ3; t) − χ2(θ1, θ2, θ3; t)| ≤ E_t | h( θ2 |X_kt|^{-1/2} )^{⌊|X_kt|⌋} − e^{-(θ2 σ_V)²/2} | → 0.
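The step h(θ2/√n)^n → e^{-(θ2 σ_V)²/2} is the characteristic-function form of the CLT for i.i.d. centered summands. A minimal numerical sketch, with a centered uniform variable (variance 1/3) standing in for 1 − V^i_∞ (an assumption made purely for illustration):

```python
import math

def h(theta):
    # characteristic function of U ~ Uniform[-1, 1] is sin(theta)/theta (real-valued)
    return 1.0 if theta == 0.0 else math.sin(theta) / theta

theta, sigma2 = 1.7, 1.0 / 3.0  # Var(U) = 1/3
gauss = math.exp(-theta**2 * sigma2 / 2)
for n in (10, 1000, 100000):
    print(n, h(theta / math.sqrt(n)) ** n, gauss)
```

The products approach the Gaussian characteristic function as n grows, which is exactly how χ1 is compared with χ2 above.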
Similarly, we deal with the other sum. We work conditionally on Z_t, and for notational simplicity, we work with integer times. We introduce sequences {a_n}_{n≥0}, {p_n}_{n≥0} such that a_n ∈ N, p_n ∈ R^{d·a_n} (intuitively, a_n is the number of particles at time n and p_n their positions). We assume that a_n e^{-αn} → a > 0 and ‖p_n(i)‖ ≤ log n. We denote
S_n := (λ*)^{1/2} a_n^{-1/2} Σ_{i=1}^{a_n} ( M̃^i_n − m̃^i_n ),
where M̃^i_n is distributed as M^i_n conditioned on {Z_n(i) = p_n(i)} (defined below (6.1)); we set also m̃^i_n = E M̃^i_n. We are going to use the CLT to analyze S_n. Firstly, we calculate its variance
v_n := Var(S_n) = λ* a_n^{-1} Σ_{i=1}^{a_n} Var(M̃^i_n − m̃^i_n) = λ* a_n^{-1} Σ_{i=1}^{a_n} E(M̃^i_n)² − λ* a_n^{-1} Σ_{i=1}^{a_n} (m̃^i_n)².  (6.19)
A proof analogous to (6.13) gives λ* a_n^{-1} Σ_{i=1}^{a_n} (m̃^i_n)² → 0. By (5.22), we have E(M̃^i_n)² = e^{-α(k−1)n} V²_{f̃}(p_n(i), (k−1)n). Recalling (6.5), we obtain
lim_{n→+∞} v_n = σ_f²,
where σ_f² is given by (4.4). Secondly, we check the Lyapunov condition. Using Hölder's inequality and (6.6), we get
a_n^{-2} Σ_{i=1}^{a_n} E(M̃^i_n − m̃^i_n)⁴ ≤ c1 a_n^{-2} Σ_{i=1}^{a_n} E(M̃^i_n)⁴ ≤ a_n^{-2} Σ_{i=1}^{a_n} R1(p_n(i)) ≤ a_n^{-1} sup_{‖x‖≤log n} R1(x) →_n 0.
Therefore, the CLT implies
S_n →^d N(0, σ_f²).
Using the dominated convergence theorem in a similar manner as in the case of χ1 − χ2, one can show that |χ2(θ1, θ2, θ3; t) − χ3(θ1, θ2, θ3; t)| →_t 0.
6.3 Proof of Lemma 19
In order to prove (6.5), we will show that
e^{-αt} V²_{f̃}(0,t) → σ_f²/λ*,  sup_{‖x‖≤log t} e^{-αt} |V²_{f̃}(x,t) − V²_{f̃}(0,t)| → 0.  (6.20)
To get the first convergence, we use (5.18) and write
e^{-αt} V²_{f̃}(0,t) = (ψ''(0)/λ*) ∫_0^t e^{αs} P_{t−s}[ (P_s f̃(·))² ](0) ds + I2(t) =: I1(t) + I2(t).
Using (S2), the integrand in the first expression can be estimated as follows:
0 ≤ L(x) := e^{αs} P_{t−s}[ (P_s f̃(·))² ](x) ≤ e^{(α−2μ)s} P_{t−s}[R1²](x).  (6.21)
Using (3.4), we get L(0) ≤ c1 e^{(α−2μ)s}, which, by the assumption α < 2μ, is integrable with respect to s. By (S1), for any fixed s ≥ 0, we have P_{t−s}[(P_s f̃)²](0) → ⟨(P_s f̃)², ϕ⟩ as t → +∞. Recalling (4.4) and appealing to Lebesgue's dominated convergence theorem, we conclude I1(t) → σ_f²/λ* < +∞. An analogous argument, using (5.13) and (5.24), gives
I2(t) → I2 := −(1/λ*) ∫_0^∞ e^{-αs} ⟨ϕ, ψ''(0)(u*_{f̃,1}(·,s))² + (α − α*) u*_{f̃,2}(·,s)⟩ ds.
By (S1), ϕ is an invariant measure; thus ⟨ϕ, P_s u⟩ = ⟨ϕ, u⟩. Using (5.13), (5.14) and Fubini's theorem, we get
∫_0^∞ e^{-αs} ⟨ϕ, ψ''(0)(P^{α*}_s f̃)² − (α − α*) ψ''(0) ∫_0^s e^{α*(s−u)} (P^{α*}_u f̃)² du⟩ ds
= ψ''(0) ∫_0^∞ e^{-αs} ⟨ϕ, (P^{α*}_s f̃)²⟩ ds − (α − α*) ψ''(0) ∫_0^∞ ∫_u^∞ e^{-αs} e^{α*(s−u)} ds ⟨ϕ, (P^{α*}_u f̃)²⟩ du
= ψ''(0) ∫_0^∞ e^{-αs} ⟨ϕ, (P^{α*}_s f̃)²⟩ ds − ψ''(0) ∫_0^∞ e^{-αu} ⟨ϕ, (P^{α*}_u f̃)²⟩ du = 0,
hence I2 = 0.
Now we pass to the second statement of (6.20). We analyze the first term of (5.18), which is hardest, and leave the other terms to the reader. Namely, we will prove that
∫_0^t f(s,t) ds →_t 0,  (6.22)
where
f(s,t) := e^{αs} sup_{‖x‖≤log t} | P_{t−s}[(P_s f̃(·))²](x) − P_{t−s}[(P_s f̃(·))²](0) |.  (6.23)
We recall (6.21) and notice that f(s,t) ≤ 2 sup_{‖x‖≤log t} L(x); thus
f(s,t) ≤ 2 e^{(α−2μ)s} sup_{‖x‖≤log t} P_{t−s}[R1²](x) ≤ c1 e^{(α−2μ)s}.
Fix s ≥ 0. We denote H_s(x) := (P_s f̃(x))² and H̃_s := H_s − ⟨H_s, ϕ⟩. Applying (S2) and using the triangle inequality, we get
lim sup_{t→+∞} f(s,t) ≤ lim sup_{t→+∞} 2 e^{αs} sup_{‖x‖≤log t} |P_{t−s}H̃_s(x)| = 0.
Now the convergence in (6.22) follows by Lebesgue's dominated convergence theorem, and we conclude the proof of (6.20).
In order to prove (6.6), we apply the triangle inequality to (5.19) and Lemma 17:
|V^k_{f̃}(x,t)| ≤ c1 Σ_{m∈B_k} ∫_0^t P^α_{t−s}[ Π_{j=1}^k |−λ* V^j_{f̃}(·,s) + u*_{f̃,j}(·,s)|^{m_j} ](x) ds + c1 Σ_{m∈B_k} ∫_0^t P^α_{t−s}[ Π_{j=1}^k |u*_{f̃,j}(·,s)|^{m_j} ](x) ds + ∫_0^t P^α_{t−s}[ u*_{f̃,k}(·,s) ](x) ds.
For simplicity, we skip all the terms involving u*_{f̃,k} (which, by (5.25) and α* < 0, are easy to control). We will thus consider
S_k(x,t) := Σ_{m∈B_k} ∫_0^t P^α_{t−s}[ Π_{j=1}^k |−λ* V^j_{f̃}(·,s) + u*_{f̃,j}(·,s)|^{m_j} ](x) ds.  (6.24)
By Lemma 19, there exists R ∈ C0(Rd) such that
|V²_{f̃}(x,t)| ≤ e^{αt} R(x).  (6.25)
For k = 3, we recall (5.15) and use (3.2), (5.17) and (6.25) to get
S3(x,t) ≤ c1 ∫_0^t e^{(α−μ)s} P^α_{t−s}R4(x) ds.
Using the assumption α < 2μ and (3.4), we estimate
∫_0^t e^{(α−μ)s} P^α_{t−s}R4(x) ds ≤ R5(x) e^{αt} ∫_0^t e^{(α−μ)s} ds ≤ e^{(3α/2)t} R5(x).
This yields that
|V³_{f̃}(x,t)| ≤ e^{(3α/2)t} R6(x).  (6.26)
Finally, we pass to k = 4. We recall (5.15) and use (3.2), (5.17), (6.25) and (6.26) to get
S4(x,t) ≤ c1 ∫_0^t P^α_{t−s}[ |V¹_{f̃}(·,s) V³_{f̃}(·,s)| + (V²_{f̃}(·,s))² + (V¹_{f̃}(·,s))² V²_{f̃}(·,s) + (V¹_{f̃}(·,s))⁴ ](x) ds.
Using the assumption α < 2μ and (3.4), we get
S4(x,t) ≤ e^{αt} ∫_0^t e^{αs} P_{t−s}R7(x) ds ≤ R8(x) e^{αt} ∫_0^t e^{αs} ds ≤ e^{2αt} R8(x).
This is enough to conclude (6.6).
7 Proof of Theorem 8
In this section, we fix f ∈ C0(Rd ) and make the standing assumption that (B1), (S1),
(S2), (S3) and α > 2μ hold. Proving the convergence of the whole vectors (4.9) and
(4.10) would be notationally cumbersome. As it follows along the same lines as the proof of Theorem 4, it is left to the reader. We focus on the most important part, which is the convergence of the second coordinate of (4.10). Recalling (3.1) and the backbone
construction given in Definition 11, we denote
Y1(t) := e^{-(α−μ)t} ( ⟨Λ_t, f⟩ − |Λ_t| ⟨f, ϕ⟩ ) = e^{-(α−μ)t} ⟨Λ_t, f̃⟩ → ⟨f h, ϕ⟩ · H_∞, in probability,  (7.1)
where, slightly abusing notation, we use H_∞ to denote the limit of the martingale (4.7) defined for {Λ_t}_{t≥0}. By Theorem 12, the processes X and Λ have the same law, and thus (7.1) implies the convergence
e^{-(α−μ)t} ( ⟨X_t, f⟩ − |X_t| ⟨f, ϕ⟩ ) − ⟨f h, ϕ⟩ · H_∞ →^d 0,  (7.2)
and hence also in probability. This establishes the convergence of the second coordinate in (4.10). Before the proof, we formulate a technical lemma.
Lemma 20 There exists R ∈ C0(Rd) such that
|V²_{f̃}(x,t)| ≤ e^{2(α−μ)t} R(x).
We will define intermediate processes Y2, Y3, Y4. The convergence (7.1) will follow immediately once we show
|Y1(t + j(t)) − Y2(t)| → 0, |Y2(t) − Y3(t)| → 0, |Y3(t) − Y4(t)| → 0, |Y4(t) − ⟨f h, ϕ⟩ · H_∞| → 0,  (7.3)
where the convergences hold in probability and j: R+ → R+ is a continuous function. Recall (5.7) and let us set
Y2(t) := e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} e^{-(α−μ)j(t)} ⟨Λ^{(i,t)}_{j(t)}, f̃⟩,
choosing j to be any continuous function fulfilling
j(t) ≥ (1 − α/α*) t, and r(j(t)) e^{μt} → 0,  (7.4)
where r is the function introduced in (S3) and α* is defined in (5.2). Using (3.4), (5.8) and (5.23), we get
E|Y1(t + j(t)) − Y2(t)| ≤ e^{αt + α* j(t)} ⟨P_{t+j(t)}|f̃|, X0⟩ ≤ c1 e^{αt + α* j(t)} → 0,
which establishes the first convergence in (7.3).
We define Y3(t) := e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} m^i_t, where M^i_t := e^{-(α−μ)j(t)} ⟨Λ^{(i,t)}_{j(t)}, f̃⟩ and m^i_t := E(M^i_t | Z_t). Clearly, E(M^i_t − m^i_t | Z_t) = 0 and the (M^i_t − m^i_t)_i are independent conditionally on Z_t; thus
E(Y2(t) − Y3(t))² = e^{-2(α−μ)t} E Σ_{i=1}^{|Z_t|} Σ_{j=1}^{|Z_t|} E( (M^i_t − m^i_t)(M^j_t − m^j_t) | Z_t ) = e^{-2(α−μ)t} E Σ_{i=1}^{|Z_t|} E( (M^i_t − m^i_t)² | Z_t ).
By (5.22) and (5.26), we get
E( (M^i_t − m^i_t)² | Z_t ) ≤ E( (M^i_t)² | Z_t ) ≤ R1(Z_t(i)).
Using (5.27), α > 2μ and (3.4), we obtain
E(Y2(t) − Y3(t))² ≤ c1 e^{-2(α−μ)t} E⟨Z_t, R1⟩ ≤ c2 e^{-2(α−μ)t} e^{αt} ⟨P_t R1, X0⟩ → 0,  (7.7)
which establishes the second convergence in (7.3).
We recall (5.17) and (5.22) to get
m^i_t = e^{-(α−μ)j(t)} ((e^{αj(t)} − e^{α*j(t)})/λ*) P_{j(t)}f̃(Z_t(i)) = l(t) (1/λ*) e^{μj(t)} P_{j(t)}f̃(Z_t(i)),
where l(t) = 1 − e^{(α*−α)j(t)}. Following (S3), we decompose m^i_t = m̃^i_t + m̂^i_t with
m̃^i_t := l(t) (1/λ*) ( e^{μj(t)} P_{j(t)}f̃(Z_t(i)) − ⟨f h, ϕ⟩ · h(Z_t(i)) ),
m̂^i_t := l(t) (1/λ*) ⟨f h, ϕ⟩ · h(Z_t(i)).
We recall (5.10) and write
Y4(t) := e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} m̂^i_t = l(t) ⟨f h, ϕ⟩ · I_t/λ*.
By (S3), we have |m̃^i_t| ≤ r(j(t)) R1(Z_t(i)). Applying (5.27), the second proviso of (7.4) and (3.4), we obtain
E|Y3(t) − Y4(t)| ≤ e^{-(α−μ)t} E Σ_{i=1}^{|Z_t|} |m̃^i_t| ≤ e^{-(α−μ)t} r(j(t)) E⟨Z_t, R1⟩ ≤ c1 e^{μt} ⟨P_t R1, X0⟩ r(j(t)) → 0,
thus the third convergence of (7.3) holds.
Finally, noticing that l(t) → 1 and using Fact 13, we get
Y4(t) → ⟨f h, ϕ⟩ · I_∞/λ* = ⟨f h, ϕ⟩ · H_∞, a.s.
This is the last statement of (7.3), and thus the proof is concluded.
7.1 Proof of Lemma 20
By (3.2), (5.18), (5.13) and (5.24), we obtain
|V²_{f̃}(x,t)| ≤ c1 ∫_0^t P^α_{t−s}[ (P^α_s f̃(·))² + (u*_{f̃,1}(·,s))² + |u*_{f̃,2}(·,s)| ](x) ds.
Using (3.4), we get
|V²_{f̃}(x,t)| ≤ e^{αt} ∫_0^t e^{(α−2μ)s} P_{t−s}R4(x) ds ≤ R5(x) e^{αt} ∫_0^t e^{(α−2μ)s} ds ≤ c1 e^{2(α−μ)t} R5(x),
which is the desired bound.
7.2 Proof of Fact 7
We recall h = (h1, ..., hk) introduced in (S3). By Lemma 15, (5.20) and (S3), for any i ∈ {1, ..., k}, we get
E⟨X_t, h_i⟩ = ⟨X0, u¹_{h_i}(·,t)⟩ = e^{αt} ⟨X0, P_t h_i⟩ = e^{(α−μ)t} ⟨X0, h_i⟩.
By Lemma 15 and X0 ∈ MF(Rd), it now follows that the martingale H is L²-bounded and thus converges in L² and a.s.
7.3 Proof of Fact 13
The fact that W is a martingale is well known (see, for example, [3, Theorem A.6.1]). The properties of I are proved in [2, Sect. 3.3] (where it is denoted by H). We now concentrate on showing (5.12). Having the a.s. convergence of H and I, it is sufficient to show that for some l > 0
H_{(l+1)t} − (1/λ*) I_t → 0,
in probability. Recalling h = (h1, ..., hk) introduced in (S3) and the decomposition (5.7), we obtain that for any j ∈ {1, ..., k} the j-th coordinate of H_{(l+1)t} − (1/λ*) I_t is given by
e^{-(l+1)(α−μ)t} ⟨Y^t_{lt}, h_j⟩ + e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} ( M^i_t − (1/λ*) h_j(Z_t(i)) )
= e^{-(l+1)(α−μ)t} ⟨Y^t_{lt}, h_j⟩ + e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} ( M^i_t − m^i_t ) + e^{-(α−μ)t} Σ_{i=1}^{|Z_t|} ( m^i_t − (1/λ*) h_j(Z_t(i)) )
=: I1(t) + I2(t) + I3(t),
where M^i_t := e^{-l(α−μ)t} ⟨Λ^{(i,t)}_{lt}, h_j⟩ and m^i_t := E(M^i_t | Z_t) = E(M^i_t | Z_t(i)). We use (5.23), (3.4) and (S3) to calculate
E|I1(t)| ≤ e^{-(l+1)(α−μ)t} e^{αt + α* lt} ⟨P_{(l+1)t}|h_j|, X0⟩ ≤ e^{-(l+1)(α−μ)t} e^{αt} e^{α* lt} ⟨R1, X0⟩ ≤ c1 e^{-(l+1)(α−μ)t} e^{αt} e^{α* lt}.
By (5.2), we can choose l such that E|I1(t)| → 0. The proof of E(I2(t))² → 0 is the same as the one of (7.7). Finally, for I3 we use (5.22) and (S3) to get
m^i_t − (1/λ*) h_j(Z_t(i)) = −(1/λ*) e^{l(α*−α)t} h_j(Z_t(i)).
The convergence I3(t) → 0 then follows from the convergence of the martingale I. Putting these together, we obtain (5.12). Relation (5.11) can be proven in a similar but simpler way. Details are left to the reader.
8 Proof of Theorem 6
In this section, we fix f ∈ C0(Rd ) and make the standing assumption that (B1), (S1),
(S2), (S3) and α = 2μ hold. Let us first present the outline of the proof. We start with
the following random vector
K1(t) := ( e^{-αt}|Λ_t|, e^{-(α/2)t} (|Λ_t| − e^{αt}V_∞), t^{-1/2} e^{-(α/2)t} ⟨Λ_t, f̃⟩ ).  (8.1)
We will define K2, K3, K4 which fulfill the following relations. For any k > −α/α*, we have, in probability as t → +∞,
|K1(t) − K2(t;k)| → 0,  |K3(t;k) − K4(t;k)| → 0,  lim sup_{t→+∞} E‖K2(t;k) − K3(t;k)‖² ≤ C/k,  (8.2)
K4(t;k) →^d L_k := ( V̂_∞, √V̂_∞ G1, ((k−1)/k)^{1/2} √V̂_∞ G2 ),  (8.3)
for some C > 0; we recall that ‖·‖ denotes the Euclidean norm. The last convergence holds conditionally on Ext^c, and G1, G2 are the same as in Theorem 6. Proving the theorem is rather standard once (8.2) and (8.3) are established. Indeed, let L∞ denote the law of ( V̂_∞, √V̂_∞ G1, √V̂_∞ G2 ). For any probability measures μ1, μ2 on Rd, we define
m(μ1, μ2) := sup_{g∈Lip(1)} |⟨g, μ1⟩ − ⟨g, μ2⟩|,
where Lip(1) is the space of continuous functions Rd → [−1, 1] with Lipschitz constant smaller than or equal to 1. It is well known that m is a metric equivalent to weak convergence. Moreover, when μ1, μ2 correspond to two random variables X1, X2 on the same probability space, we have
m(μ1, μ2) ≤ E‖X1 − X2‖ ≤ (E‖X1 − X2‖²)^{1/2}.  (8.4)
We fix ε > 0 and choose k large enough such that lim sup_{t→+∞} (E‖K2(t;k) − K3(t;k)‖²)^{1/2} ≤ ε and m(L_k, L∞) ≤ ε. Further, we find T_k such that for any t > T_k, one has m(K1(t), K2(t;k)) ≤ ε, m(K3(t;k), K4(t;k)) ≤ ε and m(K4(t;k), L_k) ≤ ε. With these choices, we get
m(K1(t), L∞) ≤ 5ε,
for t ≥ T_k. The proof of Theorem 6 is concluded since ε can be taken arbitrarily small.
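The coupling bound m(μ1, μ2) ≤ E‖X1 − X2‖ in (8.4) holds because |g(X1) − g(X2)| ≤ ‖X1 − X2‖ for every g ∈ Lip(1). A Monte Carlo sketch; the Gaussian pair and the finite family of test functions (which only lower-bound the supremum over Lip(1)) are assumptions made for the demo:

```python
import math, random

random.seed(1)

n = 50000
x1 = [random.gauss(0.0, 1.0) for _ in range(n)]
x2 = [x + 0.3 for x in x1]          # a coupled pair with |X1 - X2| = 0.3

# A few 1-Lipschitz functions with values in [-1, 1]; their empirical-mean
# differences give a lower bound for the metric m(mu1, mu2).
fns = [math.tanh, math.sin, lambda u: max(-1.0, min(1.0, u))]
lower = max(abs(sum(g(a) for a in x1) / n - sum(g(b) for b in x2) / n) for g in fns)
upper = sum(abs(a - b) for a, b in zip(x1, x2)) / n   # estimates E|X1 - X2| = 0.3
print(lower <= upper)  # prints True: the coupling inequality holds
```

Since each test function is 1-Lipschitz, the inequality here is in fact deterministic; the supremum over all of Lip(1) is what the metric m takes.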
Before the proofs of (8.2) and (8.3), we state a technical lemma.
Lemma 21 We have
sup_{‖x‖≤log t} | t^{-1} e^{-αt} V²_{f̃}(x,t) − σ_f²/λ* | → 0,  (8.5)
where λ* is given by (2.3) and σ_f² by (4.6). Moreover, there exists R ∈ C0 such that
|V⁴_{f̃}(x,t)| ≤ t² e^{2αt} R(x).  (8.6)
We will skip some details of the proof which are repetitions of arguments used in the proof of Theorem 4 or are easy to establish. In particular, we recall (5.7) and leave to the reader showing that the first convergence in (8.2) holds with
K2(t;k) := ( (1/λ*) e^{-αt}|Z_t|, e^{-(kα/2)t} Σ_{i=1}^{⌊|X_kt|⌋} (1 − V^i_∞), e^{-(α/2)t} Σ_{i=1}^{|Z_t|} M^{k,i}_t ),
where M^{k,i}_t := (kt)^{-1/2} e^{-((k−1)α/2)t} ⟨Λ^{(i,t)}_{(k−1)t}, f̃⟩ and (V^i_∞)_i is an i.i.d. sequence distributed as in (4.2). We define also m^{k,i}_t := E(M^{k,i}_t | Z_t) = E(M^{k,i}_t | Z_t(i)), H^k_t := e^{-(α/2)t} Σ_{i=1}^{|Z_t|} m^{k,i}_t and
K3(t;k) := ( (1/λ*) e^{-αt}|Z_t|, e^{-(kα/2)t} Σ_{i=1}^{⌊|X_kt|⌋} (1 − V^i_∞), e^{-(α/2)t} Σ_{i=1}^{|Z_t|} (M^{k,i}_t − m^{k,i}_t) ).  (8.7)
This expression differs from K2(t;k) only by H^k_t, so in order to show the first assertion of (8.3) we need to bound it from above in L². Applying (5.17) to m^{k,i}_t, we obtain
H^k_t = (kt)^{-1/2} e^{-(kα/2)t} ((e^{(k−1)αt} − e^{(k−1)α*t})/λ*) Σ_{i=1}^{|Z_t|} P_{(k−1)t}f̃(Z_t(i)).
By Lemma 18, we have
E(H^k_t)² ≤ c1 (e^{(k−2)αt}/(kt)) E( Σ_{i=1}^{|Z_t|} P_{(k−1)t}f̃(Z_t(i)) )² ≤ c2 (e^{(k−2)αt}/(kt)) ∫_{Rd} [ P^α_t[(P_{(k−1)t}f̃(·))²](x) + ∫_0^t (P^α_s P_{(k−1)t}f̃(·))²(x) ds ] X0(dx).
Using (S2), we estimate
E(H^k_t)² ≤ c2 (e^{(k−2)αt}/(kt)) ∫_{Rd} [ e^{αt−2μ(k−1)t} P_t[R1²](x) + ∫_0^t e^{2αs} e^{−2μ(s+(k−1)t)} R1²(x) ds ] X0(dx).
Applying (3.4) and recalling α = 2μ, we get
E(H^k_t)² ≤ (kt)^{-1} e^{(k−2)αt} ( ⟨R2, X0⟩ e^{−α(k−2)t} + ⟨R3, X0⟩ e^{−α(k−2)t} ) ≤ C/k,  (8.8)
for some C > 0.
Let us now concentrate on the third coordinate of K3(t;k). We introduce a truncation. We recall I(t) defined in (6.14); one can follow the proof there to show I(t) → 0, the only change being to show (6.15), namely that E((M^{k,i}_t)² | Z_t) ≤ R1(Z_t(i)). This is left to the reader. Therefore, we have |K3(t;k) − K4(t;k)| → 0 in (8.2) with
K4(t;k) := ( (1/λ*) e^{-αt}|Z_t|, e^{-(kα/2)t} Σ_{i=1}^{⌊|X_kt|⌋} (1 − V^i_∞), e^{-(α/2)t} Σ_{i=1}^{|Z_t|} (M^{k,i}_t − m^{k,i}_t) 1_{{‖Z_t(i)‖ < log t}} ).
(8.8)
vn = λ∗an−1
E(M˜ nk,i )2 − λ∗an−1
E(m˜ kn,i )2,
an
an
i=1
where M˜ nk,i :=
EM˜ nk,i . For the second term we use (5.17), (S2) and α = 2μ to estimate
E (kt )−1/2e−((k−1)α/2)n (i,kn−1)n, f˜ |Zn(i ) = pn(i ) and m˜ kn,i =
|m˜ kn,i | ≤ c1n−1/2e((k−1)α/2)n|P(k−1)n f˜( pn(i ))| ≤ n−1/2e(k−1)(α/2−μ)n R1( pn(i ))
= n−1/2 R1( pn(i )).
We recall that pn(i ) ≤ log n so supi≤an |m˜ kn,i |
an−1 'ia=n1(m˜ kn,i )2 → 0. Using (5.22), we have
→ 0 and consequently
E(M˜ nk,i )2 = (kn)−1e−α(k−1)n V f2˜( pn(i ), (k − 1)n).
where σ 2f is given by (4.6). This completes the proof of K4(t ; k) → Lk and
consequently the whole proof of Theorem 6.
8.1 Proof of Lemma 21
To show (8.5), we will prove
t^{-1} e^{-αt} V²_{f̃}(0,t) → σ_f²/λ*,  sup_{‖x‖≤log t} t^{-1} e^{-αt} |V²_{f̃}(x,t) − V²_{f̃}(0,t)| → 0.  (8.9)
As in the proof of Lemma 19, we decompose t^{-1} e^{-αt} V²_{f̃}(0,t) = I1(t) + I2(t), where now
I1(t) := (ψ''(0)/λ*) t^{-1} ∫_0^t e^{αs} P_{t−s}[ (P_s f̃(·))² ](0) ds,
I2(t) := −(1/λ*) t^{-1} ∫_0^t e^{-αs} P_{t−s}[ ψ''(0)(u*_{f̃,1}(·,s))² + (α − α*) u*_{f̃,2}(·,s) ](0) ds.
We start with I2: recalling (5.2), using (5.13), (5.14) and applying (3.4) multiple times, we obtain
|I2(t)| ≤ c1 t^{-1} ∫_0^t e^{-αs} P_{t−s}[ (P^{α*}_s f̃(·))² + ∫_0^s P^{α*}_{s−u}[ (P^{α*}_u f̃(·))² ] du ](0) ds
≤ c1 t^{-1} ∫_0^t e^{-αs} P_{t−s}[ (e^{α*s} R1(·))² + ∫_0^s e^{α*(s+u)} P_{s−u}[R1²](·) du ](0) ds
≤ c1 t^{-1} ∫_0^t e^{(α*−α)s} P_{t−s}R3(0) ds ≤ c1 t^{-1} ∫_0^t e^{(α*−α)s} ds → 0.
To treat I1, we use α = 2μ and decompose, following the notation of (S3),
I1(t) = (ψ''(0)/λ*) t^{-1} ∫_0^t P_{t−s}[ ( h(·)·⟨f h, ϕ⟩ + (P^μ_s f̃(·) − h(·)·⟨f h, ϕ⟩) )² ](0) ds = (ψ''(0)/λ*) ( I3(t) + I4(t) + I5(t) ),
where
I3(t) := t^{-1} ∫_0^t P_{t−s}[ (h(·)·⟨f h, ϕ⟩)² ](0) ds,
I4(t) := t^{-1} ∫_0^t P_{t−s}[ (P^μ_s f̃(·) − h(·)·⟨f h, ϕ⟩)² ](0) ds,
I5(t) := 2 t^{-1} ∫_0^t P_{t−s}[ (P^μ_s f̃(·) − h(·)·⟨f h, ϕ⟩)(h(·)·⟨f h, ϕ⟩) ](0) ds.
Recalling (S1), we check that
I3(t) → σ_f²/ψ''(0).
To I4, we apply (S3) and (3.4), namely
|I4(t)| ≤ t^{-1} ∫_0^t r(s)² P_{t−s}[R1²](0) ds ≤ c1 t^{-1} ∫_0^t r(s)² ds → 0.
Similarly, one can prove |I5(t)| → 0. Putting these results together, we conclude I1(t) → σ_f²/λ*, and consequently the first convergence in (8.9) holds. Let us pass to the second statement. We analyze the first term of (5.18), which is hardest, and leave the others to the reader. Namely, we will show that
t^{-1} ∫_0^t f(s,t) ds →_t 0,  (8.10)
where f(s,t) is given in (6.23). We recall the notation of (S3) and write
(P_s f̃(·))² = e^{-2μs} (P^μ_s f̃(·) − h(·)·⟨f h, ϕ⟩)² + e^{-2μs} (h(·)·⟨f h, ϕ⟩)² + 2 e^{-2μs} (P^μ_s f̃(·) − h(·)·⟨f h, ϕ⟩)(h(·)·⟨f h, ϕ⟩).
Using this decomposition and α = 2μ together with the triangle inequality, and applying (S3) to the first two expressions, we get
f(s,t) ≤ r(s)² sup_{‖x‖≤log t} P_{t−s}[R1²](x) + r(s) sup_{‖x‖≤log t} P_{t−s}[ R2(·)(h(·)·⟨f h, ϕ⟩) ](x) + 2 sup_{‖x‖≤log t} | P_{t−s}[(h(·)·⟨f h, ϕ⟩)²](x) − P_{t−s}[(h(·)·⟨f h, ϕ⟩)²](0) |.
By (S2), writing R̃3 := R3 − ⟨R3, ϕ⟩, we have, for ‖x‖ ≤ log t,
P_{t−s}R3(x) = ⟨ϕ, R3⟩ + P_{t−s}R̃3(x) ≤ ⟨ϕ, R3⟩ + e^{-μ(t−s)} R4(log t).
This is applied to the first two terms. For the third one, we define H := (h·⟨f h, ϕ⟩)² − ⟨(h·⟨f h, ϕ⟩)², ϕ⟩ and analyze it similarly using (S2):
|P_{t−s}H(x)| ≤ e^{-μ(t−s)} R5(log t).
By the fact that r(s) → 0, we can write
f(s,t) ≤ c1 r(s) + e^{-μ(t−s)} R6(log t),
which is enough to conclude (8.10). We also record the bound
|V²_{f̃}(x,t)| ≤ t e^{αt} R1(x).  (8.11)
We now estimate S_k(x,t) defined in (6.24). For k = 3, we recall (5.15) and use (5.17), (S2) and (8.11) to get
S3(x,t) ≤ c1 ∫_0^t P^α_{t−s}[ e^{αs/2} R1(·) ( s e^{αs} R2(·) + e^{αs/2} R3(·) ) ](x) ds.
Using (3.4), we estimate
S3(x,t) ≤ c1 e^{αt} ∫_0^t s e^{αs/2} P_{t−s}R4(x) ds ≤ R5(x) e^{αt} ∫_0^t s e^{αs/2} ds ≤ t e^{(3α/2)t} R5(x).
We conclude that
|V³_{f̃}(x,t)| ≤ t e^{(3α/2)t} R6(x).  (8.12)
Finally, we pass to k = 4. We recall (5.15) and use (5.17), (S2), (8.11) and (8.12) to get
S4(x,t) ≤ c1 ∫_0^t P^α_{t−s}[ |V¹_{f̃}(·,s) V³_{f̃}(·,s)| + (V²_{f̃}(·,s))² + (V¹_{f̃}(·,s))² V²_{f̃}(·,s) + (V¹_{f̃}(·,s))⁴ ](x) ds
≤ c1 ∫_0^t P^α_{t−s}[ e^{αs/2} R1(·) · s e^{(3α/2)s} R2(·) + (s e^{αs} R3(·))² + (e^{αs/2} R1(·))² s e^{αs} R3(·) + (e^{αs/2} R1(·))⁴ ](x) ds.
Using (3.4), we estimate
S4(x,t) ≤ c1 e^{αt} ∫_0^t s² e^{αs} P_{t−s}R7(x) ds ≤ R8(x) e^{αt} ∫_0^t s² e^{αs} ds ≤ t² e^{2αt} R9(x).
This is enough to conclude the proof of (8.6).
Acknowledgments Parts of this paper were written while the author enjoyed the kind hospitality of the
Probability Laboratory at Bath. The author wishes to thank Simon Harris and Andreas Kyprianou for
stimulating discussions. The author also thanks the reviewer for suggesting changes which greatly improved
the paper.
9 Appendix
This section contains proofs of Lemmas 15 and 17. First, we recall Faà di Bruno's formula, which states that for sufficiently smooth functions g: R → R and h: R → R we have
d^k/dx^k h(g(x)) = Σ_{m∈A_k} a_m · h^{(m1+···+mk)}(g(x)) · Π_{j=1}^k ( g^{(j)}(x) )^{m_j},  (9.1)
where a_m = a_{m1,...,mk} := k! / ( m1! (1!)^{m1} m2! (2!)^{m2} ··· mk! (k!)^{mk} ) and the sum is over the set A_k of all k-tuples of nonnegative integers m = (m1, ..., mk) satisfying the constraint Σ_{j=1}^k j·m_j = k.
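The combinatorics in (9.1) can be sanity-checked: taking h = g = exp at x = 0 gives g^{(j)}(0) = 1 and h^{(m1+···+mk)}(g(0)) = e, so (9.1) forces Σ_{m∈A_k} a_m = B_k, the k-th Bell number (the number of set partitions of {1, ..., k}). A short sketch enumerating A_k and the coefficients a_m:

```python
from math import factorial

def A_k(k):
    # all k-tuples (m1,...,mk) of nonnegative integers with sum_j j*m_j = k
    def rec(j, remaining):
        if j > k:
            if remaining == 0:
                yield ()
            return
        for mj in range(remaining // j + 1):
            for tail in rec(j + 1, remaining - j * mj):
                yield (mj,) + tail
    return list(rec(1, k))

def a(m):
    # Faa di Bruno coefficient k! / (m1! 1!^m1 m2! 2!^m2 ... mk! k!^mk)
    k = sum(j * mj for j, mj in enumerate(m, start=1))
    denom = 1
    for j, mj in enumerate(m, start=1):
        denom *= factorial(mj) * factorial(j) ** mj
    return factorial(k) // denom

bell = {1: 1, 2: 2, 3: 5, 4: 15, 5: 52, 6: 203}   # Bell numbers B_1..B_6
for k, b in bell.items():
    assert sum(a(m) for m in A_k(k)) == b
print("ok")
```

For instance, A_3 = {(3,0,0), (1,1,0), (0,0,1)} with coefficients 1, 3, 1, summing to B_3 = 5.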
Fix f ∈ b+(Rd) and recall (1.3). We introduce an additional parameter θ ≥ 0 and denote
u_{θf}(x,t) = P_t[θf](x) − ∫_0^t P_{t−s}[ ψ(u_{θf}(·,s)) ](x) ds.
Formal calculations using (9.1) yield that
∂/∂θ u_{θf}(x,t) = P_t f(x) − ∫_0^t P_{t−s}[ (∂/∂θ u_{θf}(·,s)) ψ'(u_{θf}(·,s)) ](x) ds,
∂^k/∂θ^k u_{θf}(x,t) = −∫_0^t P_{t−s}[ Σ_{m∈A_k} a_m · ψ^{(m1+···+mk)}(u_{θf}(·,s)) · Π_{j=1}^k ( ∂^j/∂θ^j u_{θf}(·,s) )^{m_j} ](x) ds,
for k ≥ 2. It is standard to check that the above formulas are valid for θ > 0. Passing to the limit θ ↘ 0, we conclude that they remain true as long as ψ^{(k)}(0) is finite. We denote u^k_f(x,t) := ∂^k/∂θ^k u_{θf}(x,t)|_{θ=0}. The same reasoning holds for the branching mechanism ψ* given by (5.1). The respective quantities are denoted with the superscript * (e.g., u*_{f,k}).
We will prove that under assumption (B1), for k ≤ 4, the quantities u^k_f and u*_{f,k} given here are the same as those of (5.13), (5.14) and (5.16). One checks that u*_{f,0}(x,t) = u*_{0·f} = 0. Recalling (ψ*)'(0) = −α*, we get
u*_{f,1}(x,t) = P_t f(x) + α* ∫_0^t P_{t−s}[ u*_{f,1}(·,s) ](x) ds.
It is straightforward to verify that this equation is solved by the first formula of (5.13) (we recall the notation (2.1)). The second formula of (5.13) follows analogously. To treat the case k ≥ 2, we denote
B_k := A_k \ {(0, ..., 0, 1)},  (9.2)
(which in particular implies (5.15)) and write
u*_{f,k}(x,t) = −∫_0^t P_{t−s}[ Σ_{m∈A_k} a_m · ψ*^{(m1+···+mk)}(0) · Π_{j=1}^k ( u*_{f,j}(·,s) )^{m_j} ](x) ds
= α* ∫_0^t P_{t−s}[ u*_{f,k}(·,s) ](x) ds − ∫_0^t P_{t−s}[ Σ_{m∈B_k} a_m · ψ*^{(m1+···+mk)}(0) · Π_{j=1}^k ( u*_{f,j}(·,s) )^{m_j} ](x) ds.
These are the same as (5.14) and (5.16).
The validity of the above expressions for f ∈ C0 follows by a standard integral-theoretic exercise. The formulas (5.20) and (5.21) are standard properties of the Laplace transform (recall (1.2)).
Similar derivations hold for V^k_f. We fix f ∈ b+(Rd), recall (5.5) and put ν = 0, γ = δ_x, f = θf, h = 0, θ ≥ 0, and denote
V_{θf}(x,t) := E_{0×δ_x} e^{−⟨θf, Λ_t⟩}.
By (5.6), we know that V_{θf}(x,t) is the unique [0,1]-valued solution of the integral equation
V_{θf}(x,t) = 1 + (1/λ*) ∫_0^t P_{t−s}[ ψ*(−λ* V_{θf}(·,s) + u*_{θf}(·,s)) − ψ*(u*_{θf}(·,s)) ](x) ds.
Let k ≥ 1. Using (9.1), we obtain (we skip some arguments to make the expressions clearer)
∂^k/∂θ^k V_{θf} = (1/λ*) ∫_0^t P_{t−s}[ Σ_{m∈A_k} a_m · ψ*^{(m1+···+mk)}(−λ* V_{θf} + u*_{θf}) · Π_{j=1}^k ( −λ* ∂^j V_{θf}/∂θ^j + ∂^j u*_{θf}/∂θ^j )^{m_j} − Σ_{m∈A_k} a_m · ψ*^{(m1+···+mk)}(u*_{θf}) · Π_{j=1}^k ( ∂^j u*_{θf}/∂θ^j )^{m_j} ] ds.
Similarly as before, we pass to the limit θ ↘ 0; again, this is possible when ψ^{(k)}(0) < +∞. Under assumption (B1), for k ≤ 4, we denote V^k_f := ∂^k V_{θf}/∂θ^k |_{θ=0}, and we will prove that they are the same as given by (5.17), (5.18) and (5.19). One checks that V_{0·f} = 1 and u*_{f,0} = 0, and thus
V^k_f(x,t) = (1/λ*) ∫_0^t P_{t−s}[ Σ_{m∈A_k} a_m · ψ*^{(m1+···+mk)}(−λ*) · Π_{j=1}^k ( −λ* V^j_f + u*_{f,j} )^{m_j} − Σ_{m∈A_k} a_m · ψ*^{(m1+···+mk)}(0) · Π_{j=1}^k ( u*_{f,j} )^{m_j} ](x) ds.
By (5.1) and (5.2), we have (ψ*)'(−λ*) = ψ'(0) = −α. Therefore,
V^k_f(x,t) = (1/λ*) ∫_0^t P_{t−s}[ −α( −λ* V^k_f + u*_{f,k} ) + Σ_{m∈B_k} a_m · ψ*^{(m1+···+mk)}(−λ*) · Π_{j=1}^k ( −λ* V^j_f + u*_{f,j} )^{m_j} + α* u*_{f,k} − Σ_{m∈B_k} a_m · ψ*^{(m1+···+mk)}(0) · Π_{j=1}^k ( u*_{f,j} )^{m_j} ](x) ds.
The formula (5.17) follows by simple calculations. Namely, we notice that by (9.2) we have B_1 = ∅, and thus
V¹_f(x,t) = −((α − α*)/λ*) ∫_0^t P^α_{t−s}[ u*_{f,1}(·,s) ](x) ds.
Using (5.13) and the semigroup property of P, we obtain
V¹_f(x,t) = −((α − α*)/λ*) ∫_0^t e^{α(t−s)} P_{t−s}[ e^{α*s} P_s f(·) ](x) ds = −((α − α*)/λ*) e^{αt} P_t f(x) ∫_0^t e^{(α*−α)s} ds = −(1/λ*) P_t f(x) ( e^{αt} − e^{α*t} ).
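The integral evaluated in the last display is elementary: (α − α*) ∫_0^t e^{α(t−s)} e^{α*s} ds = e^{αt} − e^{α*t}. A throwaway numerical check (the values of α, α*, t are arbitrary, chosen only so that α > 0 > α*):

```python
import math

def lhs(alpha, alpha_star, t, n=20000):
    # midpoint rule for (alpha - alpha*) * int_0^t exp(alpha*(t-s)) * exp(alpha_star*s) ds
    h = t / n
    integral = h * sum(math.exp(alpha * (t - h * (i + 0.5)) + alpha_star * h * (i + 0.5))
                       for i in range(n))
    return (alpha - alpha_star) * integral

alpha, alpha_star, t = 0.7, -0.4, 2.0
rhs = math.exp(alpha * t) - math.exp(alpha_star * t)
print(abs(lhs(alpha, alpha_star, t) - rhs) < 1e-4)  # prints True
```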
Derivation of (5.18) follows similarly, observing that B_2 = {(2,0)} and calculating
V²_f(x,t) = (1/λ*) ∫_0^t P^α_{t−s}[ ψ''(0)( −λ* V¹_f(·,s) + u*_{f,1}(·,s) )² − ψ''(0)( u*_{f,1}(·,s) )² − (α − α*) u*_{f,2}(·,s) ](x) ds.
We notice that −λ* V¹_f(x,s) + u*_{f,1}(x,s) = P^α_s f(x), which concludes the derivation. Showing (5.19) is left to the reader.
The fact that these expressions are valid for f ∈ C0 follows by a standard integral-theoretic exercise. (5.22) is a standard property of the Laplace transform.
Let us now pass to the estimates asserted in Lemma 17, and first prove the first estimate in (5.24). Using (5.13), (5.14) and (3.4),
|u²_f(x,t)| ≤ ∫_0^t P^α_{t−s}[ e^{2αs} R1²(·) ](x) ds = e^{αt} ∫_0^t e^{αs} P_{t−s}[R1²](x) ds ≤ e^{2αt} R2(x).
The second inequality in (5.24) follows analogously. In order to prove (5.25), we utilize (5.13), (5.16) and (3.4) (we recall that α* < 0):
|u*_{f,3}(x,t)| ≤ c1 ∫_0^t P^{α*}_{t−s}[ u*_{f,1}(·,s) u*_{f,2}(·,s) + ( u*_{f,1}(·,s) )³ ](x) ds ≤ c1 ∫_0^t e^{α*(t−s)} P_{t−s}[ e^{α*s} R1(·) e^{α*s} R2(·) + e^{3α*s} R1³(·) ](x) ds ≤ e^{α*t} R3(x).
The case of u*_{f,4}(x,t) follows similarly. Estimate (5.26) holds by (5.24), (3.4) and the following calculation:
V²_f(x,t) ≤ c1 ∫_0^t P^α_{t−s}[ ( e^{αs} P_s f(·) )² + ( P_s f(·) )² + |u*_{f,2}(·,s)| ](x) ds ≤ e^{αt} ∫_0^t e^{αs} P_{t−s}R1(x) ds ≤ e^{2αt} R2(x).
References

1. Adamczak, R., Miłoś, P.: U-statistics of Ornstein–Uhlenbeck branching particle system. J. Theor. Probab. 27(4), 1071–1111 (2014)
2. Adamczak, R., Miłoś, P.: CLT for Ornstein–Uhlenbeck branching particle system. Electron. J. Probab. 20(42), 1–35 (2015)
3. Athreya, K.B., Ney, P.E.: Branching Processes. Die Grundlehren der mathematischen Wissenschaften, Band 196. Springer, New York (1972)
4. Bansaye, V., Delmas, J.-F., Marsalle, L., Tran, V.C.: Limit theorems for Markov processes indexed by continuous time Galton–Watson trees. Ann. Appl. Probab. 21(6), 2263–2314 (2011)
5. Berestycki, J., Kyprianou, A., Murillo, A.: The prolific backbone for supercritical superdiffusions. Stoch. Proc. Appl. 121(6), 1315–1331 (2011)
6. Dynkin, E.B.: An Introduction to Branching Measure-Valued Processes. CRM Monograph Series, vol. 6. American Mathematical Society, Providence (1994)
7. Dynkin, E.B.: Diffusions, Superdiffusions, and Partial Differential Equations. American Mathematical Society, Providence (2002)
8. Englander, J., Winter, A.: Law of large numbers for a class of superdiffusions. Ann. Inst. Henri Poincaré 42(2), 171–185 (2006)
9. Etheridge, A.: An Introduction to Superprocesses. American Mathematical Society, Providence (2000)
10. Le Gall, J.F.: Spatial Branching Processes, Random Snakes, and Partial Differential Equations. Birkhäuser, Zurich (1999)
11. Harris, S.C., Roberts, M.I.: The many-to-few lemma and multiple spines. Ann. Inst. Henri Poincaré Probab. Stat. (2011)
12. Kyprianou, A.E.: Introductory Lectures on Fluctuations of Lévy Processes with Applications. Universitext. Springer, Berlin (2006)
13. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Probab. 25(3), 518–548 (1993)
14. Pinsky, R.G.: Positive Harmonic Functions and Diffusion. Cambridge University Press, Cambridge (1995)