Characterizations of Some Free Random Variables by Properties of Conditional Moments of Third Degree Polynomials
Wiktor Ejsmont
Abstract We investigate Laha-Lukacs properties of noncommutative random variables (processes). We prove that some families of free Meixner distributions can be characterized by conditional moments of polynomial functions of degree 3. We also show that this fact has consequences for the description of some free Lévy processes. The proof relies on a combinatorial identity. At the end of this paper we show that this result can be extended to q-Gaussian variables.

Mathematics Subject Classification 46L54 · 46L53

1 Introduction
The original motivation for this paper comes from a desire to understand the results about conditional expectations which were shown in [13,16,17] and [24]. There it was proved that the first conditional linear moment and the conditional quadratic variance characterize free Meixner laws (Bożejko and Bryc [13], Ejsmont [17]). Laha-Lukacs type characterizations of random variables in free probability were also studied by Szpojankowski and Wesołowski [24]. They give a characterization of noncommutative free Poisson and free binomial variables by properties of the first two conditional moments, which mimics the Lukacs type assumptions known from classical probability. In this paper we show that free Meixner variables can be characterized by conditional moments of a third degree polynomial. In particular, we apply this result to describe a characterization of free Lévy processes.
In the last part of the paper we show that these properties also hold for q-Gaussian variables. It is worthwhile to mention the work of Bryc [16], where the Laha-Lukacs property for q-Gaussian processes was shown. Bryc proved that q-Gaussian processes have linear regressions and quadratic conditional variances.
The paper is organized as follows. In Sect. 2 we review basic free probability and free Meixner laws. We also establish a combinatorial identity used in the proof of the main theorem. In Sect. 3 we prove our main theorem about the characterization of the free Meixner distribution by conditional moments of polynomial functions of degree 3. In particular, we apply this result to describe a characterization of free Lévy processes (and some properties of these processes). Finally, in Sect. 4 we compile some basic facts about q-Gaussian variables and show that the main result from Sect. 3 can be extended to the q-Gaussian case.
2 Free Meixner Laws, Free Cumulants, Conditional Expectation

Classical Meixner distributions first appeared in the theory of orthogonal polynomials in the paper of Meixner [20]. In free probability the Meixner systems of polynomials were introduced by Anshelevich [1], Bożejko et al. [12] and Saitoh and Yoshida [22]. They showed that the free Meixner system can be classified into six types of laws: the Wigner semicircle, the free Poisson, the free Pascal (free negative binomial), the free Gamma, a law that we will call pure free Meixner, and the free binomial law.
We assume that our probability space is a von Neumann algebra A with a normal faithful tracial state τ : A → C, i.e., τ(·) is linear, continuous in the weak* topology, τ(XY) = τ(YX), τ(I) = 1, τ(X*X) ≥ 0, and τ(X*X) = 0 implies X = 0, for all X, Y ∈ A. A (noncommutative) random variable X is a selfadjoint (i.e. X = X*) element of A. We are interested in the two-parameter family of compactly supported probability measures (so that their moments do not grow faster than exponentially) {μ_{a,b} : a ∈ R, b ≥ −1} with the Cauchy-Stieltjes transform given by the formula

G_{a,b}(z) = ∫_R dμ_{a,b}(y)/(z − y) = [(1 + 2b)z + a − √((z − a)² − 4(1 + b))] / [2(bz² + az + 1)],   (2.1)

where the branch of the analytic square root is determined by the condition that ℑ(z) > 0 implies ℑ(G_{a,b}(z)) ≤ 0 (see [22]). The Cauchy-Stieltjes transform of μ is a function G_μ defined on the upper half plane C⁺ = {s + ti : s, t ∈ R, t > 0} which takes values in the lower half plane C⁻ = {s + ti : s, t ∈ R, t ≤ 0}.
Equation (2.1) describes the distribution with mean equal to zero and variance equal to one (see [22]). The moment generating function which corresponds to Eq. (2.1) has the form

M_{a,b}(z) = [1 + 2b + az − √((1 − za)² − 4z²(1 + b))] / [2(z² + az + b)]   (2.2)

for z small enough.
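As a quick numerical illustration (ours, not part of the original argument), for the Wigner semicircle case a = b = 0 the formula above reduces to M(z) = (1 − √(1 − 4z²))/(2z²), whose Maclaurin coefficients are the Catalan numbers (the even moments of the semicircle law). A minimal sketch checking this special case:

```python
from math import comb, sqrt

def semicircle_mgf(z):
    # M_{0,0}(z) = (1 - sqrt(1 - 4 z^2)) / (2 z^2), the a = b = 0 case of (2.2)
    return (1.0 - sqrt(1.0 - 4.0 * z * z)) / (2.0 * z * z)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# Partial sum of the moment series: m_{2k} = Catalan(k), odd moments vanish
z = 0.1
series = sum(catalan(k) * z ** (2 * k) for k in range(30))
assert abs(series - semicircle_mgf(z)) < 1e-12
```

The agreement of the truncated moment series with the closed form illustrates why the compact support assumption (moments growing at most exponentially) guarantees convergence for small z.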
Let C⟨X₁, ..., Xₙ⟩ denote the noncommutative ring of polynomials in the variables X₁, ..., Xₙ. The free cumulants are the k-linear maps R_k : C⟨X₁, ..., X_k⟩ → C defined by the recursive formula (connecting them with mixed moments)

τ(X₁ · · · Xₙ) = Σ_{π ∈ NC(n)} R_π(X₁, ..., Xₙ), where R_π(X₁, ..., Xₙ) = Π_{B ∈ π, B = {i₁ < ··· < i_s}} R_s(X_{i₁}, ..., X_{i_s}),   (2.3)

and NC(n) is the set of all noncrossing partitions of {1, 2, ..., n} (see [21,23]). Sometimes we will write R_k(X) = R_k(X, ..., X).
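In the one-variable case the relation (2.3) can be inverted numerically: grouping noncrossing partitions by the block containing 1 gives the standard recursion m_n = Σ_{k=1}^{n} R_k · [zⁿ⁻ᵏ]M(z)ᵏ, where M(z) = Σ m_n zⁿ. The following sketch (an illustration of our own, not taken from the paper) solves this recursion for the cumulants:

```python
def free_cumulants(m):
    """Invert the one-variable moment-cumulant relation
    m_n = sum_{k=1}^{n} R_k * [z^{n-k}] M(z)^k,  M(z) = sum_n m_n z^n.
    `m` is [m_0, m_1, ...] with m_0 = 1; returns [0, R_1, ..., R_N]."""
    N = len(m) - 1
    R = [0.0] * (N + 1)
    # conv[k][t] = coefficient of z^t in M(z)^k
    conv = [[0.0] * (N + 1) for _ in range(N + 1)]
    conv[0][0] = 1.0
    for k in range(1, N + 1):
        for t in range(N + 1):
            conv[k][t] = sum(conv[k - 1][t - s] * m[s] for s in range(t + 1))
    for n in range(1, N + 1):
        # conv[n][0] = 1, so R_n can be peeled off the relation directly
        R[n] = m[n] - sum(R[k] * conv[k][n - k] for k in range(1, n))
    return R

# Semicircle: moments 1, 0, 1, 0, 2, 0, 5 -> only R_2 = 1 survives
assert free_cumulants([1, 0, 1, 0, 2, 0, 5])[1:] == [0, 1, 0, 0, 0, 0]
# Free Poisson (rate 1): moments are Catalan numbers -> all cumulants equal 1
assert free_cumulants([1, 1, 2, 5, 14, 42])[1:] == [1, 1, 1, 1, 1]
```

The two test cases match the classification below: vanishing higher cumulants characterize the semicircle law, while constant cumulants characterize the free Poisson law.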
The R-transform of a random variable X is R_X(z) = Σ_{i=0}^∞ R_{i+1}(X)zⁱ, where R_i(X) is the sequence defined by (2.3) (see [8] for more details). For the reader's convenience we recall that the R-transform corresponding to M_{a,b}(z) is equal to

R_{a,b}(z) = [1 − za − √((1 − za)² − 4z²b)] / (2zb),

where the analytic square root is chosen so that lim_{z→0} R_{a,b}(z) = 0 (see [22]). If X has the distribution μ_{a,b}, then sometimes we will write R_X for the R-transform of X.
For particular values of a and b the law of X is:
the Wigner semicircle law if a = b = 0;
the free Poisson law if b = 0 and a ≠ 0;
the free Pascal (negative binomial) type law if b > 0 and a² > 4b;
the free Gamma law if b > 0 and a² = 4b;
the pure free Meixner law if b > 0 and a² < 4b;
the free binomial law if −1 ≤ b < 0.
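The six cases above can be encoded directly; the following small helper (an illustrative sketch, with label strings of our own choosing) maps a parameter pair (a, b) to the name of the corresponding law:

```python
def free_meixner_type(a, b):
    # Classification of mu_{a,b} following the list above; requires b >= -1
    if b < -1:
        raise ValueError("mu_{a,b} requires b >= -1")
    if b == 0:
        return "Wigner semicircle" if a == 0 else "free Poisson"
    if b < 0:
        return "free binomial"
    if a * a > 4 * b:
        return "free Pascal (negative binomial)"
    if a * a == 4 * b:
        return "free Gamma"
    return "pure free Meixner"

assert free_meixner_type(0, 0) == "Wigner semicircle"
assert free_meixner_type(1, 0) == "free Poisson"
assert free_meixner_type(3, 1) == "free Pascal (negative binomial)"
assert free_meixner_type(2, 1) == "free Gamma"
assert free_meixner_type(1, 1) == "pure free Meixner"
assert free_meixner_type(0, -0.5) == "free binomial"
```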
Definition 2.1 Random variables X₁, ..., Xₙ are freely independent (free) if, for every k ≥ 1 and every nonconstant choice of Y_i ∈ {X₁, ..., Xₙ}, where i ∈ {1, ..., k}, we get R_k(Y₁, ..., Y_k) = 0.
The R-transform linearizes the free convolution, i.e., if μ and ν are (compactly supported) probability measures on R, then we have

R_{μ ⊞ ν}(z) = R_μ(z) + R_ν(z),

where ⊞ denotes the free convolution (the free convolution of measures μ, ν is the law of X + Y where X, Y are free and have laws μ, ν, respectively). For more details about free convolutions and free probability theory, the reader can consult [21,26].
If B ⊂ A is a von Neumann subalgebra and A has a trace τ, then there exists a unique conditional expectation from A to B with respect to τ, which we denote by τ(·|B). This map is a weakly continuous, completely positive, identity preserving contraction, and it is characterized by the property that, for any X ∈ A, τ(XY) = τ(τ(X|B)Y) for any Y ∈ B (see [10,25]). For fixed X ∈ A, by τ(·|X) we denote the conditional expectation corresponding to the von Neumann algebra B generated by X and I. The following lemma was proven in [13].

Fig. 1 Noncrossing partitions of {1, 2, 3, 4, 5} with the first 3 elements in the same block

Lemma 2.2 Let W be a (selfadjoint) element of the von Neumann algebra A generated by a selfadjoint V ∈ A, and let U ∈ A. If, for all n ≥ 1, we have τ(UVⁿ) = τ(WVⁿ), then τ(U|V) = W.
We introduce the notation:
NC(n) is the set of all noncrossing partitions of {1, 2, ..., n},
NC^k(m) is the set of all noncrossing partitions of {1, 2, ..., m} (where m ≥ k ≥ 1) which have the first k elements in the same block. For example, for k = 3 and m = 5, see Fig. 1.
The following lemma is a generalization of Lemma 2.4 in [17] (the proof is also similar).
Lemma 2.3 Suppose that Z is an element of A, m_i = τ(Zⁱ) and n, k ≥ 1. Then

Σ_{π ∈ NC^k(n+k)} R_π(Z) = R_k(Z)m_n + Σ_{i=0}^{n−1} m_i Σ_{π ∈ NC^{k+1}(n+k−i)} R_π(Z).
Remark 2.4 Note that in Lemma 2.3 we could assume only that Z is an element of a complex unital algebra A endowed with a linear functional τ : A → C satisfying τ(I) = 1.
Proof of Lemma 2.3 First, we consider partitions π ∈ NC^k(n + k) with π = {V₁, ..., V_s}, where V₁ = {1, ..., k}. The class of all such π will be denoted by NC₀^k(n + k). It is clear that the sum over all noncrossing partitions of this form corresponds to the term R_k(Z)m_n.

On the other hand, for π ∈ NC^k(n + k) \ NC₀^k(n + k) denote s(π) = min{j : j > k, j ∈ B₁}, where B₁ is the block of π which contains 1, ..., k. This decomposes NC^k(n + k) \ NC₀^k(n + k) into the n classes NC_j^k(n + k) = {π ∈ NC^k(n + k) : s(π) = j}, j = k + 1, ..., n + k. The set NC_j^k(n + k) can be identified with the product NC(j − k − 1) × NC^{k+1}(n + 2k − j + 1), with the convention that NC(0) = {∅}. Indeed, the blocks of π ∈ NC_j^k(n + k) which partition the elements {k + 1, k + 2, k + 3, ..., j − 1} can be identified with an appropriate partition in NC(j − 1 − k), and (under the additional constraint that the k + 1 elements 1, ..., k, j are in the same block) the remaining blocks, which form a partition of the set {1, ..., k, j, j + 1, ..., n + k}, can be uniquely identified with a partition in NC^{k+1}(n + 2k − j + 1). The above situation is illustrated in Fig. 2.

Fig. 2 The main structure of noncrossing partitions of {1, 2, 3, ..., n + k} with the first k elements in the same block

This gives the formula

Σ_{π ∈ NC^k(n+k)} R_π(Z) = R_k(Z)m_n + Σ_{j=k+1}^{n+k} m_{j−k−1} Σ_{π ∈ NC^{k+1}(n+2k−j+1)} R_π(Z).

Now we rewrite the last sum based on the value of i = j − k − 1, where i ∈ {0, ..., n − 1}. Thus, we have

Σ_{π ∈ NC^k(n+k)} R_π(Z) = R_k(Z)m_n + Σ_{i=0}^{n−1} m_i Σ_{π ∈ NC^{k+1}(n+k−i)} R_π(Z),

which proves the lemma.
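Lemma 2.3 can be checked by brute force for small n and k. Taking Z to be a free Poisson variable with all free cumulants equal to 1 (so that R_π(Z) = 1 and m_i is the i-th Catalan number), the identity reduces to a count of noncrossing partitions. The following sketch (ours, not from the paper) enumerates the partitions directly:

```python
from math import comb

def set_partitions(elems):
    # generate all partitions of a list, as lists of blocks
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def is_noncrossing(p):
    # two blocks cross if i < j < k < l with i, k in one block and j, l in the other
    for B in p:
        for C in p:
            if B is not C and any(
                i < j < k < l for i in B for k in B for j in C for l in C
            ):
                return False
    return True

def nck(m, k):
    # NC^k(m): noncrossing partitions of {1,...,m} with 1,...,k in the same block
    out = []
    for p in set_partitions(list(range(1, m + 1))):
        if is_noncrossing(p):
            block = next(B for B in p if 1 in B)
            if all(j in block for j in range(1, k + 1)):
                out.append(p)
    return out

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# |NC(5)| = Catalan(5) = 42, and |NC^3(5)| = 5 (the partitions of Fig. 1)
assert len(nck(5, 1)) == 42 and len(nck(5, 3)) == 5

# Lemma 2.3 for free Poisson Z (R_pi = 1, m_i = Catalan(i)), with n = 3, k = 2
n, k = 3, 2
lhs = len(nck(n + k, k))
rhs = catalan(n) + sum(catalan(i) * len(nck(n + k - i, k + 1)) for i in range(n))
assert lhs == rhs == 14
```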
Let Z be the selfadjoint element of the von Neumann algebra A from the above lemma. We define c_n^{(k)} = c_n^{(k)}(Z) = Σ_{π ∈ NC^k(n+k)} R_π(Z) and the following functions (power series):

C^{(k)}(z) = Σ_{n=0}^∞ c_n^{(k)}(Z) z^{n+k}, where k ≥ 1,

for sufficiently small |z| < ε and z ∈ C. This series is convergent because we consider compactly supported probability measures, so moments and cumulants do not grow faster than exponentially (see [8]). This implies that c_n^{(k)} also does not grow faster than exponentially.
Lemma 2.5 Let Z be a (selfadjoint) element of the von Neumann algebra A. Then

C^{(k)}(z) = M(z)C^{(k+1)}(z) + R_k(Z)z^k M(z).

Proof It is clear from Lemma 2.3 that we have

C^{(k)}(z) = Σ_{n=0}^∞ c_n^{(k)}(Z) z^{k+n} = Σ_{n=0}^∞ (R_k(Z)m_n + Σ_{i=0}^{n−1} m_i c_{n−i−1}^{(k+1)}(Z)) z^{k+n} = R_k(Z)z^k M(z) + M(z)C^{(k+1)}(z),

which proves the lemma.

Example 2.6 For k = 1, we get:

C^{(1)}(z) = M(z) − 1 = M(z)C^{(2)}(z) + R₁(Z)zM(z).

In particular, we get the coefficients of the power series 1/M(z) (Maclaurin series):

1/M(z) = 1 − R₁(Z)z − C^{(2)}(z)   (2.13)

for sufficiently small z. Similarly, by putting k = 2, we obtain:

C^{(2)}(z) = M(z)C^{(3)}(z) + R₂(Z)z²M(z),   (2.14)

or, equivalently,

M(z)C^{(3)}(z) = C^{(2)}(z) − R₂(Z)z²M(z).   (2.15)
Now we present Lemma 4.1 of [13], which will be used in the proof of the main theorem to calculate the moment generating function of a free convolution.

Lemma 2.7 Suppose that X, Y are free, selfadjoint and X/√α, Y/√β have the free Meixner laws μ_{a/√α, b/α} and μ_{a/√β, b/β} respectively, where α, β > 0, α + β = 1 and a ∈ R, b ≥ −1. Then the moment generating function M(z) for X + Y satisfies the following quadratic equation

(z² + az + b)M²(z) − (1 + az + 2b)M(z) + 1 + b = 0.   (2.16)
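Formula (2.2) is, by construction, a root of this quadratic: the discriminants agree, since (1 + az + 2b)² − 4(z² + az + b)(1 + b) = (1 − za)² − 4z²(1 + b). A quick numerical sketch of this consistency, for sample parameters of our own choosing:

```python
from math import sqrt

def meixner_mgf(z, a, b):
    # Formula (2.2) with the principal square root (valid for small real z)
    return (1 + 2 * b + a * z - sqrt((1 - z * a) ** 2 - 4 * z * z * (1 + b))) / (
        2 * (z * z + a * z + b)
    )

a, b, z = 0.5, 0.25, 0.1
M = meixner_mgf(z, a, b)
# Residual of the quadratic equation of Lemma 2.7; should vanish up to rounding
residual = (z * z + a * z + b) * M * M - (1 + a * z + 2 * b) * M + 1 + b
assert abs(residual) < 1e-12
```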
3 Characterization of Free Meixner Laws

The next lemma will be applied in the proof of Theorem 3.2.

Lemma 3.1 If X and Y are freely independent and centered, then the condition βR_k(X) = αR_k(Y), for α, β > 0 with α + β = 1 and all nonnegative integers k, is equivalent to τ(X | X + Y) = α(X + Y).

Proof From the equation βR_k(X) = αR_k(Y) and from the freeness of X and Y it stems that

R_k(X, X + Y, ..., X + Y) = R_k(X) = α(R_k(X) + R_k(Y)) = αR_k(X + Y),

so that τ(X(X + Y)ⁿ) = ατ((X + Y)^{n+1}) for all n ≥ 0, and Lemma 2.2 gives τ(X | X + Y) = α(X + Y). Analogously we get the converse implication.
The main result of this paper is the following characterization of free Meixner laws in terms of a cubic polynomial condition on conditional moments.

Theorem 3.2 Suppose that X, Y are free, selfadjoint, nondegenerate, centered (τ(X) = τ(Y) = 0) and τ(X² + Y²) = 1. Then X/√α and Y/√β have the free Meixner laws μ_{a/√α, b/α} and μ_{a/√β, b/β}, respectively, where a ∈ R, b ≥ −1, if and only if

τ(X | X + Y) = α(X + Y)   (3.5)

and

τ((βX − αY)(X + Y)(βX − αY) | X + Y) = (αβ/(b + 1)²)(b²(X + Y)³ + 2ba(X + Y)² + (b + a²)(X + Y) + aI)   (3.6)

for some α, β > 0 with α + β = 1. Additionally, we assume that b ≥ −min{α, β} if b < 0 (the free binomial case).
Remark 3.3 In commutative probability Eq. (3.6) takes the form

E((βX − αY)² | X + Y) = c(X + Y)² + d(X + Y) + e

for some c, d, e ∈ R, which is equivalent to the assumption that the conditional variance is quadratic. Higher degree polynomial regressions have also been studied in commutative probability, see e.g. [5,6,18,19]. There it was proved that some classical random variables can be characterized by higher degree polynomials, but in a different context than presented in this article. In free probability the result (3.6) is in some way unexpected. As an argument we consider Wigner semicircle law variables. Suppose that X, Y are free, selfadjoint, nondegenerate, centered (τ(X) = τ(Y) = 0), τ(X²) = τ(Y²) = 1 and have the same distribution. Then the following statements are equivalent:
X and Y have the Wigner semicircle law,
τ((X − Y)² | X + Y) = 2I, which follows from the main theorem of [13] and [17],
τ((X − Y)(X + Y)(X − Y) | X + Y) = 0, which follows from Theorem 3.2 (a = b = 0).
Thus we see that the last equation is unexpected because, in the classical case, from E((X − Y)² | X + Y) = 2 we can easily deduce E((X − Y)(X + Y)(X − Y) | X + Y) = 2(X + Y), whereas in noncommutative probability the conditional expectation τ(YXY | Z) is difficult to compute (even if we know τ(X | Z)).
Proof of Theorem 3.2 ⇒: Suppose that X/√α and Y/√β have, respectively, the free Meixner laws μ_{a/√α, b/α} and μ_{a/√β, b/β}. The condition (3.5) holds because we can use Theorem 3.1 from the article [17]. Then, from Lemma 2.7, the moment generating function satisfies Eq. (2.16). If we multiply both sides of (2.16) by (1 − C^{(2)}(z)) and use the fact (2.13) with R₁(X + Y) = 0, we get

(z² + az + b)M(z) − az − b = (b + 1)C^{(2)}(z),   (3.7)

where C^{(2)}(z) is the function for X + Y. Expanding M(z) in a series (M(z) = 1 + Σ_{i=1}^∞ m_i zⁱ), we get

bm_{n+2} + am_{n+1} + m_n = (b + 1)c_n^{(2)}.   (3.8)

Now we apply (2.15) to Eq. (3.7) (using the assumption R₂(X + Y) = 1) and after simple computations we see that

(b + 1)M(z)C^{(3)}(z) = (b + az − bz²)M(z) − az − b

for z small enough. Then using (2.13) we have

(b + za)C^{(2)}(z) − z²b = (b + 1)C^{(3)}(z).

Expanding the above equation in a series, we get

bc_{n+1}^{(2)} + ac_n^{(2)} = c_n^{(3)}(b + 1),

and using (3.8) we obtain

b²m_{n+3} + 2bam_{n+2} + (b + a²)m_{n+1} + am_n = c_n^{(3)}(b + 1)².   (3.13)
From the assumption of the main theorem and Lemma 3.1 we get

R_k(βX − αY, X + Y, X + Y, ..., X + Y) = βR_k(X) − αR_k(Y) = 0   (3.14)

and

R_k(βX − αY, X + Y, βX − αY, X + Y, ..., X + Y) = β²R_k(X) + α²R_k(Y) = αβR_k(X + Y),   (3.15)

and similarly when the two occurrences of βX − αY appear at other positions.
Now we use the moment-cumulant formula (2.3):

τ((βX − αY)(X + Y)(βX − αY)(X + Y)ⁿ)
= Σ_{π ∈ NC(n+3), 1,2,3 in the same block} R_π(βX − αY, X + Y, βX − αY, X + Y, ..., X + Y)
+ Σ_{π ∈ NC(n+3), 1,3 in the same block, 2 in another block} R_π(βX − αY, X + Y, βX − αY, X + Y, ..., X + Y)
+ Σ_{π ∈ NC(n+3), 1,3 in different blocks} R_π(βX − αY, X + Y, βX − αY, X + Y, ..., X + Y),

where X + Y appears n times at the end. Let us look more closely at the last two sums. If the first and the third elements are in different blocks, the corresponding sum vanishes because of (3.14). If they are in the same block but the second element is not, then by the noncrossing property the block containing 2 is the singleton {2}, and the sum disappears because R₁(X + Y) = τ(X + Y) = 0. So, by (3.15) we have

τ((βX − αY)(X + Y)(βX − αY)(X + Y)ⁿ) = αβ Σ_{π ∈ NC³(n+3)} R_π(X + Y) = αβ c_n^{(3)}.

Therefore Eq. (3.13) is equivalent to

b²m_{n+3} + 2bam_{n+2} + (b + a²)m_{n+1} + am_n = (1/αβ)τ((βX − αY)(X + Y)(βX − αY)(X + Y)ⁿ)(b + 1)²

for all n ≥ 0. Now we use Lemma 2.2 to get (3.6).
⇐: Let us suppose now that the equalities (3.5) and (3.6) hold. Multiplying (3.6) by (X + Y)ⁿ for n ≥ 0 and applying τ(·) we obtain (3.13). Using the facts that m₁ = 0 and m₂ = 1, from (3.13) we obtain

b²M(z) + 2zbaM(z) + z²(b + a²)M(z) + z³aM(z) − b²z² − b² − 2zba − z²(b + a²) = C^{(3)}(z)(b + 1)².

From (2.15) we get

b²M²(z) + 2zbaM²(z) + z²(b + a²)M²(z) + z³aM²(z) − (b²z² + b² + 2zba + z²(b + a²))M(z) + z²M(z)(b + 1)² = C^{(2)}(z)(b + 1)²,

and from (2.13) we have C^{(2)}(z)M(z) = M(z) − 1, so

b²M³(z) + 2zbaM³(z) + z²(b + a²)M³(z) + z³aM³(z) − (b²z² + b² + 2zba + z²(b + a²))M²(z) + z²M²(z)(b + 1)² = (b + 1)²(M(z) − 1),

which factors as

((b + za)M(z) + b + 1)((b + za + z²)M²(z) − M(z)(2b + za + 1) + b + 1) = 0.

Thus, we have found two solutions (if a ≠ 0 and b ≠ 0):

M(z) = −(b + 1)/(b + za)

or

(b + za + z²)M²(z) − M(z)(2b + za + 1) + b + 1 = 0,   (3.23)

but the first solution does not correspond to a probability measure (except for b = −0.5), because then M(z) = −(b + 1)/b · Σ_{n=0}^∞ (−za/b)ⁿ. If b = −0.5, then the solution corresponds to the Dirac measure at the point 2a. However, by the assumption that the variables are nondegenerate, we reject this solution. Thus we have (3.23), and Lemma 2.7 says that X and Y have the free Meixner laws, which completes the proof.
A noncommutative stochastic process (X_t) is a free Lévy process if it has free, additive and stationary increments. For a more detailed discussion of free Lévy processes we refer to [7]. Let us first recall some properties of free Lévy processes which follow from [13]. If (X_t) is a free Lévy process such that τ(X_t) = 0 and τ(X_t²) = t for all t > 0, then

τ(X_s | X_u) = (s/u)X_u

for all s < u. This note allows us to formulate the following proposition.

Proposition 3.4 Suppose that (X_t)_{t≥0} is a free Lévy process such that τ(X_t) = 0 and τ(X_t²) = t for all t > 0. Then the increment (X_{t+s} − X_t)/√s (t, s > 0) has the free Meixner law μ_{a/√s, b/s} (for some b ≥ 0) if and only if, for all t < s,

τ(X_tX_sX_t | X_s) = (t(s − t)/(s²(b + s)²))(b²X_s³ + 2basX_s² + (b + a²)s²X_s + as³I) + (t²/s²)X_s³.
Remark 3.5 The existence of free Lévy processes was demonstrated by Biane [10], who proved that from every ⊞-infinitely divisible distribution we can construct a free Lévy process. We assume that b ≥ 0 in Proposition 3.4 because a free Meixner variable is ⊞-infinitely divisible if and only if b ≥ 0 (see [4,13]).

Proof of Proposition 3.4 Let us rewrite Theorem 3.2 for (nondegenerate) variables X and Y such that τ(X²) = α, τ(Y²) = β and τ(Y) = τ(X) = 0. After a simple parameter normalization (α by α/(α + β), β by β/(α + β), a by a/√(α + β), b by b/(α + β)) we get that X/√α and Y/√β have the free Meixner laws μ_{a/√α, b/α} and μ_{a/√β, b/β}, respectively, if and only if (after a simple computation)

τ((βX − αY)(X + Y)(βX − αY) | X + Y) = (αβ/(b + (α + β))²)(b²(X + Y)³ + 2ba(α + β)(X + Y)² + (b + a²)(α + β)²(X + Y) + a(α + β)³I).

Applying this to X := X_t and Y := X_s − X_t (so that α = t, β = s − t and βX − αY = sX_t − tX_s), and using τ(X_t | X_s) = (t/s)X_s, we obtain

τ((sX_t − tX_s)X_s(sX_t − tX_s) | X_s) = s²τ(X_tX_sX_t | X_s) − t²X_s³ = (t(s − t)/(b + s)²)(b²X_s³ + 2basX_s² + (b + a²)s²X_s + as³I).   (3.24)

Thus, the proposition holds.
At the end of this section, we arrive at the following proposition.

Proposition 3.6 Suppose that (X_t)_{t≥0} is a free Lévy process such that the increments (X_{t+s} − X_t)/√s (t, s > 0) have the free Meixner law μ_{a/√s, b/s} (for some b ≥ 0). Then

τ(X_t³ | X_{2t}) = (1/8)X_{2t}³ + (1/(4(b + 2t)))(bX_{2t}³ + 2atX_{2t}² + 4t²X_{2t}) + (1/(8(b + 2t)²))(b²X_{2t}³ + 4batX_{2t}² + 4(b + a²)t²X_{2t} + 8at³I).   (3.28)
Proof Let X_t be as in the above proposition. First, we show that the third conditional central moment is equal to zero, i.e.

τ((X_t − (1/2)X_{2t})³ | X_{2t}) = 0

for all t > 0. From the assumption we have τ((X_t − (t/(2t))X_{2t})³ | X_{2t}) = τ((2tX_t − tX_{2t})³ | X_{2t})/(2t)³. For this reason it is enough to prove that τ((2tX_t − tX_{2t})³X_{2t}^k) = 0 for all integers k ≥ 0, which follows from the relation tR_k(X_{2t}) = 2tR_k(X_t). Indeed, if the first element (2tX_t − tX_{2t}) is in a block with elements from the part X_{2t}^k only, then we have the cumulant

R_k(2tX_t − tX_{2t}, X_{2t}, ..., X_{2t}) = 2tR_k(X_t, X_t, ..., X_t) − tR_k(X_{2t}, X_{2t}, ..., X_{2t}) = 0.

Now, if the first element is in a block with the second or the third element (but not simultaneously), then the cumulants are zero as well (by an argument similar to the one presented above). Thus, the first three elements must be in the same block, so using the facts that 2tX_t − tX_{2t} = tX_t − t(X_{2t} − X_t) and X_{2t} = (X_{2t} − X_t) + X_t (X_{2t} − X_t and X_t are free), we obtain

R_k(2tX_t − tX_{2t}, 2tX_t − tX_{2t}, 2tX_t − tX_{2t}, X_{2t}, ..., X_{2t}) = t³R_k(X_t, ..., X_t) − t³R_k(X_{2t} − X_t, ..., X_{2t} − X_t) = 0.

From Lemma 2.2 we obtain τ((X_t − (1/2)X_{2t})³ | X_{2t}) = 0, or equivalently

τ(X_t³ | X_{2t}) = τ(X_t² | X_{2t})X_{2t} + (1/2)τ(X_tX_{2t}X_t | X_{2t}) − (1/4)X_{2t}³.   (3.32)

To compute τ(X_tX_{2t}X_t | X_{2t}) we use Proposition 3.4, and to compute the expression τ(X_t² | X_{2t}) we use Proposition 3.2 from [17]. We do not cite this proposition here; we note only that if we know τ(X_tX_{2t}X_t | X_{2t}) and τ(X_t² | X_{2t}), then from (3.32) we can compute τ(X_t³ | X_{2t}) (we skip the simple calculations leading to formula (3.28)). This completes the proof.
4 Some Consequences for a q-Gaussian Random Variable

In this section we consider a mapping H ∋ f ↦ G_f ∈ B(H) from a real Hilbert space H into the algebra B(H) of bounded operators acting on the space H. We also use a parameter q ∈ (−1, 1). We consider noncommutative random variables as elements of the von Neumann algebra A generated by the bounded (i.e. −1 < q < 1), selfadjoint operators G_f, with a state E : A → C. The state E is a unital linear functional (which means that it preserves the identity), positive (which means E(X) ≥ 0 whenever X is a nonnegative element of A), faithful (which means that if E(Y*Y) = 0 then Y = 0), and not necessarily tracial. In (A, E) we refer to the selfadjoint elements of the algebra A as random variables. Similarly as in free probability, any selfadjoint random variable X has a law: this is the unique compactly supported probability measure μ on R which has the same moments as X, i.e. E(Xⁿ) = ∫_R tⁿ dμ(t), n = 1, 2, 3, ....
Denote by P_n the lattice of all partitions of {1, ..., n}. Fix a partition π ∈ P_n with blocks {B₁, ..., B_k}. For a block B, denote by a(B) its first element. Following [9], we define the number of restricted crossings of a partition as follows. For B a block of π and i ∈ B, i ≠ a(B), denote p(i) = max{j ∈ B : j < i}. For two blocks B, C ∈ π, a restricted crossing is a quadruple (p(i) < p(j) < i < j) with i ∈ B, j ∈ C. The number of restricted crossings of B, C is

rc(B, C) = #{i ∈ B, j ∈ C : p(i) < p(j) < i < j} + #{j ∈ B, i ∈ C : p(j) < p(i) < j < i},

and the number of restricted crossings of π is rc(π) = Σ_{i<j} rc(B_i, B_j) (see also [2,3]).
Definition 4.1 For a sequence A_{f₁}, A_{f₂}, ... let C⟨A_{f₁}, A_{f₂}, ..., A_{fₙ}⟩ denote the noncommutative ring of polynomials in the variables A_{f₁}, A_{f₂}, ..., A_{fₙ}. The q-deformed cumulants are the k-linear maps R_k^q : C⟨A_{f₁}, A_{f₂}, ..., A_{fₙ}⟩ → C defined by the recursive formula

E(A_{f₁} ... A_{fₙ}) = Σ_{π ∈ P_n} q^{rc(π)} Π_{B ∈ π} R_{|B|}^q(A_{f_i} : i ∈ B).   (4.2)
To state Theorem 4.4 we shall need the following definition (see also [2,3]).

Definition 4.2 Random variables A_{f₁}, A_{f₂}, ..., A_{fₙ} are q-independent if, for every k ≥ 1 and every nonconstant choice of Y_i ∈ {A_{f₁}, A_{f₂}, ..., A_{fₙ}}, where i ∈ {1, ..., k}, we get R_k^q(Y₁, ..., Y_k) = 0.

Definition 4.3 A family of selfadjoint operators G_f = G_f*, f ∈ H, is called a family of q-Gaussian random variables if there exists a state E on the von Neumann algebra A (generated by G_f, f ∈ H) such that the following Wick formula holds:

E(G_{f₁} ... G_{fₙ}) = Σ_{π ∈ P₂(n)} q^{rc(π)} Π_{{i,j} ∈ π} ⟨f_i, f_j⟩,   (4.3)

where P₂(n) is the set of 2-partitions (pair partitions) of the set {1, 2, ..., n}.
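The Wick formula (4.3) and the restricted-crossing statistic can be checked numerically for a single variable G_f with ‖f‖ = 1: then E(G_f^{2n}) = Σ_{π ∈ P₂(2n)} q^{rc(π)}, and for a pair partition the restricted crossings are just the ordinary crossings. A small sketch (our own illustration, not from the paper):

```python
def pairings(elems):
    # all pair partitions of a sorted list with an even number of elements
    if not elems:
        yield []
        return
    first = elems[0]
    for idx in range(1, len(elems)):
        rest = elems[1:idx] + elems[idx + 1:]
        for rest_pairs in pairings(rest):
            yield [(first, elems[idx])] + rest_pairs

def crossings(pairs):
    # for pair partitions, restricted crossings = crossings i < j < k < l
    return sum(1 for (i, k) in pairs for (j, l) in pairs if i < j < k < l)

def q_gaussian_moment(n, q):
    # E(G_f^n) for ||f|| = 1, from the Wick formula (4.3)
    if n % 2:
        return 0.0
    return sum(q ** crossings(p) for p in pairings(list(range(1, n + 1))))

assert q_gaussian_moment(4, 0) == 2      # noncrossing pairings: Catalan(2)
assert q_gaussian_moment(6, 0) == 5      # Catalan(3), the free case
assert q_gaussian_moment(6, 1) == 15     # all pairings: 5!! = 15, the classical case
assert q_gaussian_moment(4, 0.5) == 2.5  # 2 + q
```

At q = 0 the moments are Catalan numbers (semicircle law), and at q = 1 they are the classical Gaussian moments, matching the interpolation role of the q-deformation.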
The existence of such random variables, far from being trivial, is ensured by Bożejko and Speicher [11]. Our assumptions on E do not allow us to use conditional expectations. In general, the state E is not tracial, so we do not know if conditional expectations exist.
The following theorem is a q-version of Theorem 3.2.

Theorem 4.4 Let G_f and G_g be two q-independent variables with E(G_f) = E(G_g) = 0, E(G_f²) = ‖f‖² = 1, ⟨f, g⟩ = 0, E(G_g²) = ‖g‖² = 1 and R_k^q(G_g) = R_k^q(G_f) (for all integers k ≥ 0, which means that G_f and G_g have the same distribution). Then

E((G_f − G_g)(G_f + G_g)(G_f − G_g)(G_f + G_g)ⁿ) = 2qE((G_f + G_g)^{n+1})   (4.4)

if and only if G_f and G_g are q-Gaussian random variables.

Remark 4.5 If G_f and G_g are q-Gaussian random variables, then the state E is a trace and the conditional expectation exists (see [16]). In this case formula (4.4) can be reformulated as

E((G_f − G_g)(G_f + G_g)(G_f − G_g) | G_f + G_g) = 2q(G_f + G_g).

In particular, for q = 0, the thesis of Theorem 3.2 is satisfied for variables with the same distribution (in one direction only, with a = b = 0).
Remark 4.6 Theorem 4.4 can be reformulated in the following form. Let G_f and G_g be two q-independent variables with E(G_f) = E(G_g) = 0, E(G_f²) = ‖f‖², ⟨f, g⟩ = 0, E(G_g²) = ‖g‖² and ‖f‖²R_k^q(G_g) = ‖g‖²R_k^q(G_f) (for all integers k ≥ 0). Then

E((‖g‖²G_f − ‖f‖²G_g)(G_f + G_g)(‖g‖²G_f − ‖f‖²G_g)(G_f + G_g)ⁿ) = (‖g‖² + ‖f‖²)‖f‖²‖g‖²qE((G_f + G_g)^{n+1})

if and only if G_f and G_g are q-Gaussian random variables. The proof of this statement is completely analogous to the one below, but it is not as transparent as the normalized expression presented beneath.

Proof of Theorem 4.4 From the assumptions of Theorem 4.4 we deduce that R_2^q(G_f) = ‖f‖² = 1, R_2^q(G_g) = ‖g‖² = 1 and R_2^q(G_f + G_g) = ⟨f + g, f + g⟩ = 2.
⇐: Assume that G_f and G_g are q-Gaussian random variables and let us compute

E((G_f − G_g)(G_f + G_g)(G_f − G_g)(G_f + G_g)ⁿ).   (4.7)

From (4.3) we see that both sides (left and right) of the main formula of Theorem 4.4 are zero if n is even; thus we investigate only the case when n is odd. Since R_2^q(G_f − G_g, G_f + G_g) = 0 (from the assumption R_k^q(G_g) = R_k^q(G_f)), Eq. (4.7) receives no contribution from the 2-partitions π ∈ P₂(n + 3) with π = {V₁, ..., V_s}, where V₁ = {1, k} and k ∈ {2, 4, 5, ..., n + 3} (i.e. if the first element is paired without the third element). So we should analyse only those 2-partitions π ∈ P₂(n + 3) such that π = {V₁, ..., V_s} and V₁ = {1, 3}; see Fig. 3. We denote these partitions by P₂^{1,3}(n + 3). Moreover, these partitions can be identified with the product of P₂(n + 1) and P₂(2), multiplied by q, because if 1 and 3 are in the same block we always get one more crossing.

So if we use (4.3) we get

E((G_f − G_g)(G_f + G_g)(G_f − G_g)(G_f + G_g)ⁿ) = q⟨f − g, f − g⟩ Σ_{π ∈ P₂(n+1)} q^{rc(π)}⟨f + g, f + g⟩^{(n+1)/2} = 2qE((G_f + G_g)^{n+1}).   (4.8)
⇒: Suppose that (4.4) is true. Our proof relies on the observation that

R_{n+3}^q(G_f + G_g) = 0   (4.9)

for all n ≥ 0. We will prove this by induction on the length of the cumulant. Using the definition (4.2) and the assumption of q-independence, and putting n = 0 in (4.4), we get

0 = E((G_f − G_g)(G_f + G_g)(G_f − G_g)) = R_3^q((G_f − G_g), (G_f + G_g), (G_f − G_g)) = R_3^q((G_f + G_g), (G_f + G_g), (G_f + G_g)).   (4.10)

We fix k and suppose that (4.9) holds for all n ∈ {0, ..., k}. Now we will prove Eq. (4.9) for n = k + 1. Expanding both sides of (4.4) into q-cumulants and using the fact that the only possibly nonzero cumulants are those of size 2 and k + 4, we get

E((G_f − G_g)(G_f + G_g)(G_f − G_g)(G_f + G_g)^{k+1}) = 2qE((G_f + G_g)^{k+2}) + R_{k+4}^q(G_f + G_g),

where the pair-partition contributions were computed as in (4.8). Comparing this with the right-hand side of (4.4) implies R_{k+4}^q(G_f + G_g) = 0. Thus the nonzero cumulants are only the cumulants of size 2, so we obtain that G_f + G_g is a q-Gaussian random variable. From the assumption R_k^q(G_g) = R_k^q(G_f) we infer that G_f and G_g are q-Gaussian random variables as well. This completes the proof.
Open Problems and Remarks

In Theorem 3.2 and Proposition 3.4 of this paper we assume that the random variables are bounded, that is, X_t ∈ A. It would be interesting to show whether this assumption can be replaced by X_t ∈ L²(A).

A version of Theorem 4.4 can be formulated for q-Poisson variables (see [2,3]). The proof of this statement is analogous to the proof of Theorem 4.4 (by induction).

It would be worth showing whether Proposition 3.4 is true for noncommutative generalized stochastic processes with freely independent values, see [14,15].
Acknowledgments The author would like to thank M. Bożejko, W. Bryc, M. Anshelevich, Z. Michna, W. Młotkowski and J. Wysoczański for several discussions and helpful comments during the preparation of this paper. The author also thanks especially the anonymous referee for the very careful reading of the submitted manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution License
which permits any use, distribution, and reproduction in any medium, provided the original author(s) and
the source are credited.