A generalised fixed point theorem of Presic type in cone metric spaces and application to Markov process

Reny George (1,2), KP Reshma (2), R Rajagopalan (1)

(1) Department of Mathematics, College of Science, AlKharj University, AlKharj, Kingdom of Saudi Arabia
(2) Department of Mathematics and Computer Science, St. Thomas College, Ruabandha Bhilai, Durg, Chhattisgarh State, 490006, India
A generalised common fixed point theorem of Presic type for two mappings f: X → X and T: X^k → X in a cone metric space is proved. Our result generalises many well-known results.

2000 Mathematics Subject Classification: 47H10

1. Introduction

Let (X, d) be a complete metric space, k a positive integer, and T: X^k → X a mapping satisfying

d(T(x_1, x_2, ..., x_k), T(x_2, x_3, ..., x_{k+1})) ≤ λ·max{d(x_1, x_2), d(x_2, x_3), ..., d(x_k, x_{k+1})},

where x_1, x_2, ..., x_{k+1} are arbitrary elements in X and λ ∈ (0, 1). Then, there exists some x ∈ X such that x = T(x, x, ..., x). Moreover, if x_1, x_2, ..., x_k are arbitrary points in X and for n ∈ N, x_{n+k} = T(x_n, x_{n+1}, ..., x_{n+k-1}), then the sequence <x_n> is convergent and lim x_n = T(lim x_n, lim x_n, ..., lim x_n). If, in addition, T satisfies d(T(u, u, ..., u), T(v, v, ..., v)) < d(u, v) for all u, v ∈ X with u ≠ v, then x is the unique point satisfying x = T(x, x, ..., x).
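The conclusion above can be illustrated numerically. The following sketch is our illustration, not part of the paper: the map T(x, y) = (x + y)/4 and the starting values are hypothetical choices. For this T, |T(x_1, x_2) − T(x_2, x_3)| = |x_1 − x_3|/4 ≤ (1/2)·max{|x_1 − x_2|, |x_2 − x_3|}, so the theorem applies with k = 2 and λ = 1/2, and the unique point with x = T(x, x) is 0.

```python
# Illustrative Presic iteration (hypothetical map, not from the paper):
# T(x, y) = (x + y)/4 on the reals satisfies the contractive condition
# with lambda = 1/2, so x_{n+2} = T(x_n, x_{n+1}) converges to the
# unique solution of x = T(x, x), namely x = 0.

def T(x, y):
    return (x + y) / 4.0

def presic_iterate(x1, x2, steps=120):
    """Run the Presic scheme x_{n+2} = T(x_n, x_{n+1}) and return the last iterate."""
    xs = [x1, x2]
    for _ in range(steps):
        xs.append(T(xs[-2], xs[-1]))
    return xs[-1]

x_star = presic_iterate(2.0, -1.5)
print(x_star)  # converges to 0, the unique solution of x = T(x, x)
```

Any pair of starting values in R produces the same limit, which is the content of the convergence claim in the theorem.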
Huang and Zhang [3], generalising the notion of metric space by replacing the set of real numbers by an ordered normed space, defined a cone metric space and proved some fixed point theorems for contractive mappings defined on these spaces. Rezapour and Hamlbarani [4], omitting the assumption of normality, obtained generalisations of the results of [3]. In [5], Di Bari and Vetro obtained results on points of coincidence and common fixed points in non-normal cone metric spaces. Further results on fixed point theorems in such spaces were obtained by several authors; see [5-15].
The purpose of the present paper is to extend and generalise the above Theorems 1.1 and 1.2 for two mappings in non-normal cone metric spaces, removing the requirement of d(T(u, u, ..., u), T(v, v, ..., v)) < d(u, v) for all u, v ∈ X for uniqueness of the fixed point, which in turn will extend and generalise the results of [3,4].
2. Preliminaries
Let E be a real Banach space and P a subset of E. Then, P is called a cone if
(i) P is closed, nonempty, and P ≠ {0},
(ii) ax + by ∈ P for all x, y ∈ P and nonnegative real numbers a, b,
(iii) x ∈ P and −x ∈ P ⟹ x = 0, i.e. P ∩ (−P) = {0}.
Given a cone P ⊂ E, we define a partial ordering ≤ with respect to P by x ≤ y if and only if y − x ∈ P. We shall write x < y if x ≤ y and x ≠ y, and x ≪ y if y − x ∈ int P, where int P denotes the interior of P. The cone P is called normal if there is a number K > 0 such that for all x, y ∈ E, 0 ≤ x ≤ y implies ‖x‖ ≤ K‖y‖.
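A minimal numerical sketch of the cone order (our illustration, not part of the paper): in E = R² with P = {(a, b) : a, b ≥ 0}, we have x ≤ y iff y − x ∈ P, and this cone is normal with constant K = 1 for the max-norm, since 0 ≤ x ≤ y componentwise forces ‖x‖ ≤ ‖y‖.

```python
# Cone order in E = R^2 with P the closed first quadrant (our example).

def in_cone(v):
    """Membership in P = {(a, b) : a >= 0, b >= 0}."""
    return v[0] >= 0 and v[1] >= 0

def leq(x, y):
    """Partial order induced by P: x <= y iff y - x lies in P."""
    return in_cone((y[0] - x[0], y[1] - x[1]))

def max_norm(v):
    return max(abs(v[0]), abs(v[1]))

x, y = (1.0, 2.0), (3.0, 2.5)
print(leq(x, y))                    # True: y - x = (2.0, 0.5) is in P
print(max_norm(x) <= max_norm(y))   # normality with K = 1 for 0 <= x <= y
```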
Definition 2.1. [3] Let X be a non-empty set. Suppose that the mapping d: X × X → E satisfies:
(d1) 0 ≤ d(x, y) for all x, y ∈ X, and d(x, y) = 0 if and only if x = y;
(d2) d(x, y) = d(y, x) for all x, y ∈ X;
(d3) d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.
Then, d is called a cone metric on X and (X, d) is called a cone metric space.
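For a concrete instance of Definition 2.1, take E = R² with the cone P = {(a, b) : a, b ≥ 0} and d(x, y) = (|x − y|, α|x − y|) on X = [0, 2], the form used later in Example 3.5 and Section 4. The sketch below is ours: the sample points and the choice α = 0.5 are arbitrary (any α ≥ 0 works), and it checks (d1)-(d3) over those points.

```python
# Checking the cone metric axioms (d1)-(d3) for d(x, y) = (|x - y|, alpha*|x - y|)
# on a few points of X = [0, 2], relative to P = {(a, b) : a, b >= 0}.
# The points are exact binary fractions so all arithmetic below is exact.

ALPHA = 0.5  # arbitrary alpha >= 0 (our choice)

def d(x, y):
    return (abs(x - y), ALPHA * abs(x - y))

def in_cone(v):
    return v[0] >= 0 and v[1] >= 0

def leq(u, v):
    """u <= v in the cone order iff v - u lies in P."""
    return in_cone((v[0] - u[0], v[1] - u[1]))

pts = [0.0, 0.25, 1.0, 1.75, 2.0]
d1 = all(in_cone(d(x, y)) and (d(x, y) == (0.0, 0.0)) == (x == y)
         for x in pts for y in pts)
d2 = all(d(x, y) == d(y, x) for x in pts for y in pts)
d3 = all(leq(d(x, y), (d(x, z)[0] + d(z, y)[0], d(x, z)[1] + d(z, y)[1]))
         for x in pts for y in pts for z in pts)
print(d1, d2, d3)
```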
Definition 2.2. [3] Let (X, d) be a cone metric space. The sequence {x_n} in X is said to be:
(a) A convergent sequence if for every c ∈ E with 0 ≪ c, there is n_0 ∈ N such that for all n ≥ n_0, d(x_n, x) ≪ c for some x ∈ X. We denote this by lim_{n→∞} x_n = x.
(b) A Cauchy sequence if for every c ∈ E with 0 ≪ c, there is n_0 ∈ N such that d(x_m, x_n) ≪ c for all m, n ≥ n_0.
(c) A cone metric space (X, d) is said to be complete if every Cauchy sequence in X is convergent in X.
(d) A self-map T on X is said to be continuous if lim_{n→∞} x_n = x implies that lim_{n→∞} T(x_n) = T(x) for every sequence {x_n} in X.
Definition 2.3. Let (X, d) be a metric space, k a positive integer, and T: X^k → X and f: X → X be mappings.
(a) An element x ∈ X is said to be a coincidence point of f and T if and only if f(x) = T(x, x, ..., x). If x = f(x) = T(x, x, ..., x), then we say that x is a common fixed point of f and T. If w = f(x) = T(x, x, ..., x), then w is called a point of coincidence of f and T.
(b) Mappings f and T are said to be commuting if and only if f(T(x, x, ..., x)) = T(fx, fx, ..., fx) for all x ∈ X.
(c) Mappings f and T are said to be weakly compatible if and only if they commute at their coincidence points.
Remark 2.4. For k = 1, the above definitions reduce to the usual definitions of commuting and weakly compatible mappings in a metric space.
The set of coincidence points of f and T is denoted by C(f, T).
3. Main results
Consider a function φ: E^k → E such that
(a) φ is an increasing function, i.e. x_1 < y_1, x_2 < y_2, ..., x_k < y_k implies φ(x_1, x_2, ..., x_k) < φ(y_1, y_2, ..., y_k),
(b) φ(t, t, ..., t) ≤ t for all t ∈ E,
(c) φ is continuous in all variables.
Now, we present our main results as follows:
Theorem 3.1. Let (X, d) be a cone metric space with solid cone P contained in a real Banach space E. For any positive integer k, let T: X^k → X and f: X → X be mappings satisfying the following conditions:

(3.1) T(X^k) ⊆ f(X),
(3.2) d(T(x_1, x_2, ..., x_k), T(x_2, x_3, ..., x_{k+1})) ≤ λ·φ(d(fx_1, fx_2), d(fx_2, fx_3), ..., d(fx_k, fx_{k+1})), where x_1, x_2, ..., x_{k+1} are arbitrary elements in X and λ ∈ (0, 1/k),
(3.3) f(X) is complete, and
(3.4) there exist elements x_1, x_2, ..., x_k in X for which the maximum

R = max{ d(fx_1, fx_2)/θ, d(fx_2, fx_3)/θ², ..., d(fx_k, T(x_1, x_2, ..., x_k))/θ^k }

exists, where θ = λ^{1/k}. Then, f and T have a coincidence point, i.e. C(f, T) ≠ ∅.
Proof. By (3.1) and (3.4), define the sequence <y_n> in f(X) by y_n = fx_n for n = 1, 2, ..., k and y_{n+k} = f(x_{n+k}) = T(x_n, x_{n+1}, ..., x_{n+k-1}), n = 1, 2, .... Let a_n = d(y_n, y_{n+1}). By the method of mathematical induction, we will now prove that

a_n ≤ R·θ^n    (3.5)

for all n. Clearly, by the definition of R, (3.5) is true for n = 1, 2, ..., k. Let the k inequalities a_n ≤ Rθ^n, a_{n+1} ≤ Rθ^{n+1}, ..., a_{n+k-1} ≤ Rθ^{n+k-1} be the induction hypothesis. Then, since θ < 1 and φ is increasing, we have

a_{n+k} = d(y_{n+k}, y_{n+k+1}) = d(T(x_n, x_{n+1}, ..., x_{n+k-1}), T(x_{n+1}, x_{n+2}, ..., x_{n+k}))
≤ λ·φ(a_n, a_{n+1}, ..., a_{n+k-1}) ≤ λ·φ(Rθ^n, Rθ^n, ..., Rθ^n) ≤ λ·Rθ^n = R·θ^{n+k}.

Thus, the inductive proof of (3.5) is complete. Now, for n, p ∈ N, we have

d(y_n, y_{n+p}) ≤ a_n + a_{n+1} + ... + a_{n+p-1} ≤ Rθ^n(1 + θ + θ² + ...) ≤ (θ^n/(1 − θ))·R.
Let 0 ≪ c be given. Choose δ > 0 such that c + N_δ(0) ⊆ P, where N_δ(0) = {y ∈ E : ‖y‖ < δ}. Also, choose a natural number N_1 such that (θ^n/(1 − θ))·R ∈ N_δ(0) for all n ≥ N_1. Then, (θ^n/(1 − θ))·R ≪ c for all n ≥ N_1. Thus, d(y_n, y_{n+p}) ≤ (θ^n/(1 − θ))·R ≪ c for all n ≥ N_1. Hence, the sequence <y_n> is a Cauchy sequence in f(X), and since f(X) is complete, there exist v, u ∈ X such that lim_{n→∞} y_n = v = f(u). Choose a natural number N_2 such that d(y_n, y_{n+1}) ≪ c/(k + 1) and d(fu, y_n) ≪ c/(k + 1) for all n ≥ N_2.
Then, for all n ≥ N_2,

d(fu, T(u, u, ..., u)) ≤ d(fu, y_{n+k}) + d(y_{n+k}, T(u, u, ..., u))
= d(fu, y_{n+k}) + d(T(x_n, x_{n+1}, ..., x_{n+k-1}), T(u, u, ..., u))
≤ d(fu, y_{n+k}) + d(T(u, u, ..., u), T(u, u, ..., u, x_n)) + d(T(u, u, ..., u, x_n), T(u, u, ..., x_n, x_{n+1})) + ... + d(T(u, x_n, ..., x_{n+k-2}), T(x_n, x_{n+1}, ..., x_{n+k-1}))
≤ d(fu, y_{n+k}) + λ·φ(d(fu, fu), d(fu, fu), ..., d(fu, fx_n)) + λ·φ(d(fu, fu), ..., d(fu, fx_n), d(fx_n, fx_{n+1})) + ... + λ·φ(d(fu, fx_n), d(fx_n, fx_{n+1}), ..., d(fx_{n+k-2}, fx_{n+k-1}))
= d(fu, y_{n+k}) + λ·φ(0, 0, ..., d(fu, fx_n)) + λ·φ(0, 0, ..., d(fu, fx_n), d(fx_n, fx_{n+1})) + ... + λ·φ(d(fu, fx_n), d(fx_n, fx_{n+1}), ..., d(fx_{n+k-2}, fx_{n+k-1}))
≪ c/(k + 1) + λ·φ(c/(k + 1), c/(k + 1), ..., c/(k + 1)) + ... + λ·φ(c/(k + 1), c/(k + 1), ..., c/(k + 1))
≤ c/(k + 1) + c/(k + 1) + ... + c/(k + 1) = c.

Since c was arbitrary, d(fu, T(u, u, ..., u)) ≪ c/m for all m ≥ 1. So, c/m − d(fu, T(u, u, ..., u)) ∈ P for all m ≥ 1. Since c/m → 0 as m → ∞ and P is closed, −d(fu, T(u, u, ..., u)) ∈ P. But d(fu, T(u, u, ..., u)) ∈ P, and P ∩ (−P) = {0}. Therefore, d(fu, T(u, u, ..., u)) = 0. Thus, fu = T(u, u, ..., u), i.e. C(f, T) ≠ ∅.
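The key estimate (3.5) can be checked numerically. The sketch below is ours: T(x, y) = (x + y)/4, f the identity, φ = max, and the starting points are hypothetical choices satisfying the hypotheses with k = 2, λ = 1/2, θ = λ^{1/2}; it verifies a_n ≤ Rθ^n along the whole iteration.

```python
# Numerical check of estimate (3.5): a_n <= R * theta^n.  All concrete data
# here are hypothetical choices, not from the paper.

def T(x, y):
    # Satisfies (3.2) with f = identity, phi = max, lambda = 1/2.
    return (x + y) / 4.0

k, lam = 2, 0.5
theta = lam ** (1.0 / k)      # theta = lambda^(1/k)

# y_1 = f(x_1), y_2 = f(x_2), then y_3 = T(x_1, x_2); f is the identity here.
ys = [2.0, -1.0]
ys.append(T(ys[0], ys[1]))

# R as in condition (3.4): max{ d(y_1, y_2)/theta, d(y_2, T(x_1, x_2))/theta^2 }.
R = max(abs(ys[0] - ys[1]) / theta,
        abs(ys[1] - ys[2]) / theta ** 2)

for _ in range(40):           # continue the scheme y_{n+2} = T(y_n, y_{n+1})
    ys.append(T(ys[-2], ys[-1]))

# a_n = d(y_n, y_{n+1}) should satisfy a_n <= R * theta^n for every n >= 1.
bound_holds = all(abs(ys[n] - ys[n + 1]) <= R * theta ** (n + 1) + 1e-9
                  for n in range(len(ys) - 1))
print(bound_holds)
```

The tolerance 1e-9 only absorbs floating-point rounding; the first inequality holds with equality by the definition of R.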
Theorem 3.2. Let (X, d) be a cone metric space with solid cone P contained in a real Banach space E. For any positive integer k, let T: X^k → X and f: X → X be mappings satisfying (3.1), (3.2) and (3.3), and let there exist elements x_1, x_2, ..., x_k in X satisfying (3.4). If f and T are weakly compatible, then f and T have a unique common fixed point. Moreover, if x_1, x_2, ..., x_k are arbitrary points in X and for n ∈ N, y_{n+k} = f(x_{n+k}) = T(x_n, x_{n+1}, ..., x_{n+k-1}), n = 1, 2, ..., then the sequence <y_n> is convergent and lim y_n = f(lim y_n) = T(lim y_n, lim y_n, ..., lim y_n).
Proof. As proved in Theorem 3.1, there exist v, u ∈ X such that lim_{n→∞} y_n = v = f(u) = T(u, u, ..., u). Also, since f and T are weakly compatible, f(T(u, u, ..., u)) = T(fu, fu, ..., fu). By (3.2), we have

d(ffu, fu) = d(fT(u, u, ..., u), T(u, u, ..., u)) = d(T(fu, fu, ..., fu), T(u, u, ..., u))
≤ d(T(fu, fu, ..., fu), T(fu, fu, ..., fu, u)) + d(T(fu, fu, ..., fu, u), T(fu, fu, ..., u, u)) + ... + d(T(fu, u, ..., u), T(u, u, ..., u))
≤ λ·φ(d(ffu, ffu), ..., d(ffu, ffu), d(ffu, fu)) + λ·φ(d(ffu, ffu), ..., d(ffu, fu), d(fu, fu)) + ... + λ·φ(d(ffu, fu), d(fu, fu), ..., d(fu, fu))
= λ·φ(0, 0, ..., 0, d(ffu, fu)) + λ·φ(0, 0, ..., d(ffu, fu), 0) + ... + λ·φ(d(ffu, fu), 0, 0, ..., 0)
≤ kλ·d(ffu, fu).

Repeating this process n times, we get d(ffu, fu) ≤ k^n λ^n d(ffu, fu). So, k^n λ^n d(ffu, fu) − d(ffu, fu) ∈ P for all n ≥ 1. Since kλ < 1, k^n λ^n → 0 as n → ∞, and since P is closed, −d(ffu, fu) ∈ P. But d(ffu, fu) ∈ P, and P ∩ (−P) = {0}. Therefore, d(ffu, fu) = 0, and so ffu = fu. Hence, we have fu = ffu = f(T(u, u, ..., u)) = T(fu, fu, ..., fu), i.e. fu is a common fixed point of f and T, and lim y_n = f(lim y_n) = T(lim y_n, lim y_n, ..., lim y_n). Now, suppose x and y are two common fixed points of f and T. Then,

d(x, y) = d(T(x, x, ..., x), T(y, y, ..., y)) ≤ kλ·d(x, y),

exactly as above. Repeating this process n times, we get d(x, y) ≤ k^n λ^n d(x, y), and so, letting n → ∞, d(x, y) = 0, which implies x = y. Hence, the common fixed point is unique.
Remark 3.3. Theorem 3.2 is a proper extension and generalisation of Theorems 1.1 and 1.2.
Remark 3.4. If we take k = 1 in Theorem 3.2, we get the extended and generalised versions of the results of [3] and [4].
Example 3.5. Let E = R², P = {(x, y) ∈ E : x, y ≥ 0}, X = [0, 2] and d: X × X → E such that d(x, y) = (|x − y|, |x − y|). Then, d is a cone metric on X. Let T: X² → X and f: X → X be defined as follows:

f(x) = x² for x ∈ [0, 1], f(x) = x for x ∈ [1, 2], and T(x, y) = (f(x) + f(y))/4 + 1/2.
Case 1. x, y, z ∈ [0, 1]
d(T(x, y), T(y, z)) ≤ (1/2)·max{d(fx, fy), d(fy, fz)}
Case 2. x, y ∈ [0, 1] and z ∈ [1, 2]
d(T(x, y), T(y, z)) ≤ (1/2)·max{d(fx, fy), d(fy, fz)}
Case 3. x ∈ [0, 1] and y, z ∈ [1, 2]
d(T(x, y), T(y, z)) = (|(x² + y)/4 − (y + z)/4|, |(x² + y)/4 − (y + z)/4|)
= (|x² − z|/4, |x² − z|/4)
≤ (|x² − y|/4 + |y − z|/4, |x² − y|/4 + |y − z|/4)
≤ (1/2)·max{d(fx, fy), d(fy, fz)}
Case 4. x, y, z ∈ [1, 2]
d(T(x, y), T(y, z)) ≤ (1/2)·max{d(fx, fy), d(fy, fz)}.
Similarly, in all other cases, d(T(x, y), T(y, z)) ≤ (1/2)·max{d(fx, fy), d(fy, fz)}. Thus, f and T satisfy condition (3.2) with φ(x_1, x_2) = max{x_1, x_2} and λ = 1/2. We see that C(f, T) = {1}, and f and T commute at 1. Finally, 1 is the unique common fixed point of f and T.
4. An application to Markov process

Let Δ^{n−1} = {x ∈ R^n_+ : Σ_{i=1}^n x_i = 1} denote the (n − 1)-dimensional unit simplex. Note that any x ∈ Δ^{n−1} may be regarded as a probability distribution over the n possible states. A random process in which one of the n states is realised in each period t = 1, 2, ..., with probability conditioned on the current realised state, is called a Markov process. Let a_ij denote the conditional probability that state i is reached in the succeeding period starting from state j. Then, given the prior probability vector x^t in period t, the posterior probability in period t + 1 is given by x_i^{t+1} = Σ_j a_ij x_j^t for each i = 1, 2, ..., n. To express this in matrix notation, we let x^t denote a column vector. Then, x^{t+1} = Ax^t. Observe that the properties of conditional probability require each a_ij ≥ 0 and Σ_{i=1}^n a_ij = 1 for each j. If for some period t, x^{t+1} = x^t, then x^t is a stationary distribution of the Markov process. Thus, the problem of finding a stationary distribution is equivalent to the fixed point problem Ax^t = x^t.
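The fixed point formulation can be sketched in code. The following is our illustration: the 3-state transition matrix is a hypothetical example (columns summing to 1, all entries positive), and the stationary distribution is computed by simply iterating x^{t+1} = Ax^t.

```python
# Finding a stationary distribution as the fixed point x = Ax by iteration.
# The transition matrix below is a hypothetical example; column j holds the
# probabilities a_ij of moving from state j to each state i.

A = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]

def step(A, x):
    """One period of the process: x_i^{t+1} = sum_j a_ij * x_j^t."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [1.0, 0.0, 0.0]        # any prior distribution in the simplex
for _ in range(200):
    x = step(A, x)

residual = max(abs(xi - yi) for xi, yi in zip(x, step(A, x)))
print(residual)            # ~0: x is (numerically) stationary
print(sum(x))              # stays 1: A maps the simplex into itself
```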
For each i, let δ_i = min_j a_ij and define δ = Σ_{i=1}^n δ_i.
Theorem 4.1. Under the assumption a_ij > 0, a unique stationary distribution exists for the Markov process.
Proof. Let d: Δ^{n−1} × Δ^{n−1} → R² be given by d(x, y) = (Σ_{i=1}^n |x_i − y_i|, α Σ_{i=1}^n |x_i − y_i|) for all x, y ∈ Δ^{n−1} and some α ≥ 0.
Clearly, d(x, y) ≥ (0, 0), and d(x, y) = (0, 0) ⟺ Σ_{i=1}^n |x_i − y_i| = 0 ⟺ x_i = y_i for all i ⟺ x = y. Also,

d(x, y) = (Σ_{i=1}^n |x_i − y_i|, α Σ_{i=1}^n |x_i − y_i|) = (Σ_{i=1}^n |y_i − x_i|, α Σ_{i=1}^n |y_i − x_i|) = d(y, x)

and

d(x, y) = (Σ_{i=1}^n |x_i − y_i|, α Σ_{i=1}^n |x_i − y_i|)
= (Σ_{i=1}^n |(x_i − z_i) + (z_i − y_i)|, α Σ_{i=1}^n |(x_i − z_i) + (z_i − y_i)|)
≤ (Σ_{i=1}^n (|x_i − z_i| + |z_i − y_i|), α Σ_{i=1}^n (|x_i − z_i| + |z_i − y_i|))
= (Σ_{i=1}^n |x_i − z_i|, α Σ_{i=1}^n |x_i − z_i|) + (Σ_{i=1}^n |z_i − y_i|, α Σ_{i=1}^n |z_i − y_i|)
= d(x, z) + d(z, y).
So, (Δ^{n−1}, d) is a cone metric space. For x ∈ Δ^{n−1}, let y = Ax. Then, each y_i = Σ_{j=1}^n a_ij x_j ≥ 0. Furthermore, since Σ_{i=1}^n a_ij = 1 for each j, we have Σ_{i=1}^n y_i = Σ_{i=1}^n Σ_{j=1}^n a_ij x_j = Σ_{j=1}^n x_j Σ_{i=1}^n a_ij = Σ_{j=1}^n x_j = 1, so y ∈ Δ^{n−1}. Thus, we see that A: Δ^{n−1} → Δ^{n−1}. We will show that A is a contraction. Then, for any x, y ∈ Δ^{n−1}, we have

d(Ax, Ay) = (Σ_{i=1}^n |(Ax)_i − (Ay)_i|, α Σ_{i=1}^n |(Ax)_i − (Ay)_i|)
= (Σ_{i=1}^n |Σ_{j=1}^n a_ij x_j − a_ij y_j|, α Σ_{i=1}^n |Σ_{j=1}^n a_ij x_j − a_ij y_j|)
= (Σ_{i=1}^n |Σ_{j=1}^n (a_ij − δ_i)(x_j − y_j) + δ_i (x_j − y_j)|, α Σ_{i=1}^n |Σ_{j=1}^n (a_ij − δ_i)(x_j − y_j) + δ_i (x_j − y_j)|)
≤ (Σ_{i=1}^n (Σ_{j=1}^n (a_ij − δ_i)|x_j − y_j| + δ_i |Σ_{j=1}^n (x_j − y_j)|), α Σ_{i=1}^n (Σ_{j=1}^n (a_ij − δ_i)|x_j − y_j| + δ_i |Σ_{j=1}^n (x_j − y_j)|))
= (Σ_{i=1}^n Σ_{j=1}^n (a_ij − δ_i)|x_j − y_j|, α Σ_{i=1}^n Σ_{j=1}^n (a_ij − δ_i)|x_j − y_j|)   (since Σ_{j=1}^n (x_j − y_j) = 0)
= (Σ_{j=1}^n |x_j − y_j| Σ_{i=1}^n (a_ij − δ_i), α Σ_{j=1}^n |x_j − y_j| Σ_{i=1}^n (a_ij − δ_i))
= (Σ_{j=1}^n |x_j − y_j| (1 − δ), α Σ_{j=1}^n |x_j − y_j| (1 − δ))   (since Σ_{i=1}^n a_ij = 1 and Σ_{i=1}^n δ_i = δ)
= (1 − δ)·d(x, y),

which establishes that A is a contraction mapping (note that δ > 0 since every a_ij > 0). Thus, Theorem 3.2 with k = 1 and f the identity mapping ensures a unique stationary distribution for the Markov process. Moreover, for any x^0 ∈ Δ^{n−1}, the sequence <A^n x^0> converges to the unique stationary distribution.
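The contraction estimate in the proof, Σ_i |(Ax)_i − (Ay)_i| ≤ (1 − δ)·Σ_i |x_i − y_i| with δ = Σ_i min_j a_ij, can be verified numerically. The sketch below is ours; the matrix and the two distributions are hypothetical choices.

```python
# Numerical check of the contraction factor (1 - delta), where
# delta_i = min_j a_ij and delta = sum_i delta_i (hypothetical matrix).

A = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]
n = len(A)

delta = sum(min(A[i]) for i in range(n))   # delta = sum_i min_j a_ij

def apply_matrix(A, x):
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def l1(u, v):
    """First component of d(u, v): the l1 distance."""
    return sum(abs(a - b) for a, b in zip(u, v))

x = [0.7, 0.2, 0.1]   # two distributions in the simplex (our choices)
y = [0.1, 0.3, 0.6]

lhs = l1(apply_matrix(A, x), apply_matrix(A, y))
rhs = (1 - delta) * l1(x, y)
print(lhs <= rhs + 1e-12, delta)
```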
Acknowledgements
The authors would like to thank the learned referees for their valuable comments which helped in bringing this paper
to its present form. The first and third authors are supported by Ministry of Education, Kingdom of Saudi Arabia.
Authors' contributions
RG gave the idea of this work. All authors worked on the proofs and examples. KPR and RR drafted the manuscript.
RG read the manuscript and made necessary corrections. All authors read and approved the final manuscript.