Generalized Projective Synchronization between Two Different Neural Networks with Mixed Time Delays

Discrete Dynamics in Nature and Society, May 2012

The generalized projective synchronization (GPS) between two different neural networks with nonlinear coupling and mixed time delays is considered. Several kinds of nonlinear feedback controllers are designed to achieve GPS between two different such neural networks. Some results for GPS of these neural networks are proved theoretically by using the Lyapunov stability theory and the LaSalle invariance principle. Moreover, by comparison, we determine an optimal nonlinear controller from several ones and provide an adaptive update law for it. Computer simulations are provided to show the effectiveness and feasibility of the proposed methods.



Xuefei Wu,1,2 Chen Xu,3 Jianwen Feng,3 Yi Zhao,3 and Xuan Zhou4

1 College of Information and Engineering, Shenzhen University, Shenzhen 518060, China
2 School of Computer Engineering, Shenzhen Polytechnic, Shenzhen 518055, China
3 College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, China
4 School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China

Received 28 December 2011; Accepted 15 March 2012

Academic Editor: Taher S. Hassan

Copyright © 2012 Xuefei Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Over the past two decades, the investigation of synchronization in complex networks has attracted a great deal of attention due to its potential applications in various fields, such as physics, mathematics, secure communication, engineering, automatic control, biology, and sociology [1-9].
In the literature, many synchronization patterns have been widely studied; these define the correlated in-time behaviors among the nodes of a dynamical network, for example, complete synchronization [10-12], lag synchronization [13-15], anti-synchronization [16-18], phase synchronization [19-21], projective synchronization [22-32], and so on. Projective synchronization reflects a proportionality between the synchronized states, which makes it an interesting research topic with many applications. For instance, if this proportional feature is applied to M-nary digital communication, the communication speed can be increased substantially. In view of this merit, many researchers have devoted themselves to generalized projective synchronization, and recent years have witnessed many achievements on projective synchronization between two identical complex dynamical networks [22-30]. We highlight three typical references here. In [28], Chen et al. studied projective synchronization of time-delayed chaotic systems in a driven-response complex network, where the nodes are not partially linear and the scale factors differ from each other. In [29], Feng et al. investigated projective-anticipating and projective-lag synchronization on complex dynamical networks composed of a large number of interconnected components, in which the node dynamics were time-delayed chaotic systems without the limitation of partial linearity. In [30], Wang et al. explored outer synchronization between two complex networks with the same topological structure and time-varying coupling delay, introducing a new mixed outer synchronization behavior; a novel nonfragile linear state feedback controller was designed to realize mixed outer synchronization between the two networks, and its effectiveness was proved analytically by using Lyapunov-Krasovskii stability theory.
In the real world, however, the drive and response networks are often not identical, so studying synchronization between two different complex networks is closer to reality. Here, "different" means that the drive and response networks may have different node dynamics, different numbers of nodes, or different topological structures. Recently, some related works have appeared, such as [31, 32]. In [31], Zheng et al. investigated adaptive projective synchronization between two complex networks with different topological structures and time-varying delays; some results on topology identification were also obtained, which can be seen as a highlight of that paper. In [32], generalized projective synchronization with the above three kinds of difference was investigated based on the LaSalle invariance principle; however, the network model there has only linear coupling and a coupling time delay. Due to the finite speed of information transmission and processing among the units, connection delays must be taken into account in realistic models of many large networks, so it is important to study the effect of time delay on the synchronization of coupled systems. Time delay usually involves two parts: the delay inside the systems, called the internal delay, and the delay caused by the exchange of information between systems, referred to as the coupling delay. Moreover, nonlinear coupling functions can capture more natural phenomena. Hence, both internal delays and nonlinear functions are introduced into the neural networks considered in this paper, and the nonlinear functions in the drive and response networks are also allowed to differ. In particular, three comparable nonlinear controllers are presented to realize GPS based on the LaSalle invariance principle and some basic inequalities, from which an optimal nonlinear controller is eventually obtained.
In the last theorem, we use an adaptive control technique for the optimal nonlinear controller in order to make the feedback control gains as small as possible.

Notation. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote the $n$-dimensional Euclidean space and the set of $n\times m$ real matrices, respectively; $\lambda_{\min}(A)$ represents the smallest eigenvalue of a symmetric matrix $A$; $\otimes$ is the Kronecker product; the superscript $T$ in $x^T$ or $A^T$ denotes the transpose of the vector $x\in\mathbb{R}^n$ or the matrix $A\in\mathbb{R}^{n\times m}$; and $I_N$ is the $N\times N$ identity matrix.

2. Model Description and Preliminaries

Consider a general neural network with mixed time delays, consisting of $N_1$ nodes with nonlinear couplings, described by

$$\dot x_i(t)=-C_1x_i(t)+A_1f_1(x_i(t))+B_1g_1(x_i(t-\tau_1))+\sum_{j=1}^{N_1}p_{ij}\Gamma_1h_1(x_j(t))+\sum_{j=1}^{N_1}p^{\tau}_{ij}\Gamma_2h_1(x_j(t-\tau_2)),\quad i\in\mathcal{I}_1,\tag{2.1}$$

where $x_i(t)=(x_{i1}(t),x_{i2}(t),\ldots,x_{in}(t))^T\in\mathbb{R}^n$ ($i\in\mathcal{I}_1=\{1,2,\ldots,N_1\}$) is the state vector of the $i$th node at time $t$; $C_1=\mathrm{diag}\{c_{11},c_{12},\ldots,c_{1n}\}$ is the decay constant matrix with $c_{1j}>0$ ($j\in\{1,2,\ldots,n\}$); $A_1=(a_{1jk})_{n\times n}$ and $B_1=(b_{1jk})_{n\times n}$ are the system matrices; $f_1(x_i(t))=[f_{11}(x_{i1}(t)),\ldots,f_{1n}(x_{in}(t))]^T$, $g_1(x_i(t))=[g_{11}(x_{i1}(t)),\ldots,g_{1n}(x_{in}(t))]^T$, and $h_1(x_i(t))=[h_{11}(x_{i1}(t)),\ldots,h_{1n}(x_{in}(t))]^T$ are the continuous activation functions of the neurons; the positive constants $\tau_1$ and $\tau_2$ are the internal delay and the coupling delay, respectively; $\Gamma_1$ and $\Gamma_2$ are the inner coupling matrices at times $t$ and $t-\tau_2$, respectively, which describe how the components of each pair of connected nodes are linked; and $P=(p_{ij})_{N_1\times N_1}$ and $P^{\tau}=(p^{\tau}_{ij})_{N_1\times N_1}$ are the coupling configuration matrices, which are not necessarily irreducible or symmetric.

In this paper, the neural network (2.1) is used as the drive network, and the response neural network consisting of $N_2$ nodes is

$$\dot y_i(t)=-C_2y_i(t)+A_2f_2(y_i(t))+B_2g_2(y_i(t-\tau_1))+\sum_{j=1}^{N_2}q_{ij}\Gamma_1h_2(y_j(t))+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2h_2(y_j(t-\tau_2))+u_i(t),\quad i\in\mathcal{I}_2,\tag{2.2}$$

where $y_i(t)=(y_{i1}(t),y_{i2}(t),\ldots,y_{in}(t))^T\in\mathbb{R}^n$ ($i\in\mathcal{I}_2=\{1,2,\ldots,N_2\}$) is the state vector of the $i$th node at time $t$ and $u_i(t)$ is the controller to be designed. Without loss of generality, we suppose $N_1\ge N_2>0$.
Here $C_2=\mathrm{diag}\{c_{21},c_{22},\ldots,c_{2n}\}$ is the decay constant matrix with $c_{2j}>0$ ($j\in\{1,2,\ldots,n\}$); $A_2=(a_{2jk})_{n\times n}$ and $B_2=(b_{2jk})_{n\times n}$ are the system matrices; $f_2$, $g_2$, and $h_2$ are continuous activation functions applied componentwise, as for the drive network; $\tau_1$, $\tau_2$, $\Gamma_1$, and $\Gamma_2$ have the same meaning as in (2.1); and $Q=(q_{ij})_{N_2\times N_2}$ and $Q^{\tau}=(q^{\tau}_{ij})_{N_2\times N_2}$ are coupling configuration matrices, again not necessarily irreducible or symmetric.

Two definitions for generalized projective synchronization are introduced as follows.

Definition 2.1. If there is a nonzero constant $\alpha$ such that

$$\lim_{t\to+\infty}\left\|y_i(t)-\alpha x_i(t)\right\|=0,\quad i\in\mathcal{I}_2,\tag{2.3}$$

then GPS between the neural networks (2.1) and (2.2) is said to be achieved, and the parameter $\alpha$ is called the scaling factor.

Definition 2.2. A continuous function $\varphi(\cdot):\mathbb{R}\to\mathbb{R}$ is said to belong to the nonnegative-bound function class, denoted $\varphi\in\mathrm{NBF}(k)$, if there exists a positive scalar $k$ such that

$$0\le\frac{\varphi(u)-\varphi(v)}{u-v}\le k\tag{2.4}$$

holds for any $u\ne v\in\mathbb{R}$.

The following hypothesis is used throughout the paper.

Assumption 2.3. For the activation functions $f_{1j},f_{2j},g_{1j},g_{2j},h_{1j},h_{2j}$ ($j\in\{1,2,\ldots,n\}$), there exist positive constants $F_j,\tilde F_j,G_j,\tilde G_j,H_j,\tilde H_j$ such that $f_{1j}(\cdot)\in\mathrm{NBF}(F_j)$, $f_{2j}(\cdot)\in\mathrm{NBF}(\tilde F_j)$, $g_{1j}(\cdot)\in\mathrm{NBF}(G_j)$, $g_{2j}(\cdot)\in\mathrm{NBF}(\tilde G_j)$, $h_{1j}(\cdot)\in\mathrm{NBF}(H_j)$, and $h_{2j}(\cdot)\in\mathrm{NBF}(\tilde H_j)$. For convenience, denote

$$F=\max_{1\le j\le n}F_j,\quad \tilde F=\max_{1\le j\le n}\tilde F_j,\quad G=\max_{1\le j\le n}G_j,\quad \tilde G=\max_{1\le j\le n}\tilde G_j,\quad H=\max_{1\le j\le n}H_j,\quad \tilde H=\max_{1\le j\le n}\tilde H_j.\tag{2.5}$$

Lemma 2.4 (see [33]). For a matrix $W=(w_{ij})\in\mathbb{R}^{m\times n}$, denote $\omega(W)=\frac12\max\{m,n\}\max_{i,j}|w_{ij}|$. Then

$$x^TWy\le\omega(W)\left(x^Tx+y^Ty\right)\tag{2.6}$$

holds for all $x\in\mathbb{R}^m$ and $y\in\mathbb{R}^n$.

3. GPS between Two Different Neural Networks with Mixed Time Delays

In this section, we study GPS between two different neural networks with mixed time delays by means of the LaSalle invariance principle, the Lyapunov direct method, and the nonlinear feedback control technique.
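Lemma 2.4 is easy to sanity-check numerically. The snippet below is illustrative only (it is not part of the paper, and the test data are invented); it evaluates both sides of (2.6) on random matrices and vectors:

```python
import random

def omega(W):
    # omega(W) = 0.5 * max(m, n) * max_{i,j} |w_ij|, as in Lemma 2.4
    m, n = len(W), len(W[0])
    return 0.5 * max(m, n) * max(abs(w) for row in W for w in row)

def quad(x, W, y):
    # the bilinear form x^T W y
    return sum(x[i] * W[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

random.seed(0)
for _ in range(1000):
    m, n = random.randint(1, 5), random.randint(1, 5)
    W = [[random.uniform(-2, 2) for _ in range(n)] for _ in range(m)]
    x = [random.uniform(-3, 3) for _ in range(m)]
    y = [random.uniform(-3, 3) for _ in range(n)]
    lhs = quad(x, W, y)
    rhs = omega(W) * (sum(v * v for v in x) + sum(v * v for v in y))
    assert lhs <= rhs + 1e-9
print("Lemma 2.4 bound held on all random trials")
```

The bound follows from $|x^TWy|\le\max_{i,j}|w_{ij}|\sum_{i,j}\tfrac12(x_i^2+y_j^2)$, which the random trials confirm empirically.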
Define the synchronization errors between the drive network (2.1) and the response network (2.2) as $e_i(t)=y_i(t)-\alpha x_i(t)$, $i\in\mathcal{I}_2$. Then we have the following error system:

$$\begin{aligned}\dot e_i(t)={}&-C_2e_i(t)+\alpha(C_1-C_2)x_i(t)+A_2f_2(y_i(t))-\alpha A_1f_1(x_i(t))+B_2g_2(y_i(t-\tau_1))-\alpha B_1g_1(x_i(t-\tau_1))\\&+\sum_{j=1}^{N_2}q_{ij}\Gamma_1h_2(y_j(t))-\alpha\sum_{j=1}^{N_1}p_{ij}\Gamma_1h_1(x_j(t))+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2h_2(y_j(t-\tau_2))-\alpha\sum_{j=1}^{N_1}p^{\tau}_{ij}\Gamma_2h_1(x_j(t-\tau_2))+u_i(t),\quad i\in\mathcal{I}_2.\end{aligned}\tag{3.1}$$

Theorem 3.1. Suppose Assumption 2.3 holds. If the nonlinear controllers are chosen as

$$\begin{aligned}u_i(t)={}&\alpha C_2x_i(t)-C_1y_i(t)-k_ie_i(t)-A_2f_2(\alpha x_i(t))+\alpha A_1f_1\!\left(\frac{y_i(t)}{\alpha}\right)-B_2g_2(\alpha x_i(t-\tau_1))+\alpha B_1g_1\!\left(\frac{y_i(t-\tau_1)}{\alpha}\right)\\&-\sum_{j=1}^{N_2}\left[q_{ij}\Gamma_1h_2(\alpha x_j(t))+q^{\tau}_{ij}\Gamma_2h_2(\alpha x_j(t-\tau_2))\right]+\alpha\sum_{j=1}^{N_2}\left[p_{ij}\Gamma_1h_1\!\left(\frac{y_j(t)}{\alpha}\right)+p^{\tau}_{ij}\Gamma_2h_1\!\left(\frac{y_j(t-\tau_2)}{\alpha}\right)\right]\\&+\alpha\sum_{j=N_2+1}^{N_1}\left[p_{ij}\Gamma_1h_1(x_j(t))+p^{\tau}_{ij}\Gamma_2h_1(x_j(t-\tau_2))\right],\quad i\in\mathcal{I}_2,\end{aligned}\tag{3.2}$$

where the $k_i$ are the feedback control gains, let $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and suppose

$$\begin{aligned}k\ge{}&-\lambda_{\min}(C_1+C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(A_1)\left(|\alpha|+\frac{F^2}{|\alpha|}\right)+\omega(B_1)\left(|\alpha|+\frac{G^2}{|\alpha|}\right)\\&+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)+\omega(P\otimes\Gamma_1)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)+\omega(P^{\tau}\otimes\Gamma_2)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)+\varepsilon_1,\end{aligned}\tag{3.3}$$

where $\varepsilon_1$ is a positive constant; then the GPS between the two neural networks (2.1) and (2.2) can be achieved.

Proof. Consider the Lyapunov functional candidate

$$V(t)=\frac12\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\left[\omega(B_2)\tilde G^2+\omega(B_1)\frac{G^2}{|\alpha|}\right]\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t}e_i^T(s)e_i(s)\,ds+\left[\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2+\omega(P^{\tau}\otimes\Gamma_2)\frac{H^2}{|\alpha|}\right]\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t}e_i^T(s)e_i(s)\,ds.\tag{3.4}$$

Calculating $\dot V$ along the solution of (3.1) and noticing the nonlinear feedback controllers (3.2), one has

$$\begin{aligned}\dot V(t)={}&\sum_{i=1}^{N_2}e_i^T(t)\Big\{-(C_1+C_2)e_i(t)-k_ie_i(t)+A_2\left[f_2(y_i(t))-f_2(\alpha x_i(t))\right]+B_2\left[g_2(y_i(t-\tau_1))-g_2(\alpha x_i(t-\tau_1))\right]\\&+\alpha A_1\left[f_1\!\left(\frac{y_i(t)}{\alpha}\right)-f_1(x_i(t))\right]+\alpha B_1\left[g_1\!\left(\frac{y_i(t-\tau_1)}{\alpha}\right)-g_1(x_i(t-\tau_1))\right]\\&+\sum_{j=1}^{N_2}q_{ij}\Gamma_1\left[h_2(y_j(t))-h_2(\alpha x_j(t))\right]+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2\left[h_2(y_j(t-\tau_2))-h_2(\alpha x_j(t-\tau_2))\right]\\&+\alpha\sum_{j=1}^{N_2}p_{ij}\Gamma_1\left[h_1\!\left(\frac{y_j(t)}{\alpha}\right)-h_1(x_j(t))\right]+\alpha\sum_{j=1}^{N_2}p^{\tau}_{ij}\Gamma_2\left[h_1\!\left(\frac{y_j(t-\tau_2)}{\alpha}\right)-h_1(x_j(t-\tau_2))\right]\Big\}\\&+\left[\omega(B_2)\tilde G^2+\omega(B_1)\frac{G^2}{|\alpha|}\right]\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\right]\\&+\left[\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2+\omega(P^{\tau}\otimes\Gamma_2)\frac{H^2}{|\alpha|}\right]\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\right].\end{aligned}\tag{3.5}$$

By Lemma 2.4 and Assumption 2.3 (note, for instance, that $\|f_2(y_i(t))-f_2(\alpha x_i(t))\|\le\tilde F\|e_i(t)\|$ and $\|f_1(y_i(t)/\alpha)-f_1(x_i(t))\|\le (F/|\alpha|)\|e_i(t)\|$), we obtain the following estimates for the nondelayed terms:

$$\sum_{i=1}^{N_2}e_i^T(t)A_2\left[f_2(y_i(t))-f_2(\alpha x_i(t))\right]\le\omega(A_2)(\tilde F^2+1)\sum_{i=1}^{N_2}e_i^T(t)e_i(t),\tag{3.6}$$

$$\sum_{i=1}^{N_2}\sum_{j=1}^{N_2}e_i^T(t)\,q_{ij}\Gamma_1\left[h_2(y_j(t))-h_2(\alpha x_j(t))\right]\le\omega(Q\otimes\Gamma_1)(\tilde H^2+1)\sum_{i=1}^{N_2}e_i^T(t)e_i(t),\tag{3.7}$$

$$\sum_{i=1}^{N_2}e_i^T(t)\,\alpha A_1\left[f_1\!\left(\frac{y_i(t)}{\alpha}\right)-f_1(x_i(t))\right]\le\omega(A_1)\left(|\alpha|+\frac{F^2}{|\alpha|}\right)\sum_{i=1}^{N_2}e_i^T(t)e_i(t),\tag{3.8}$$

$$\sum_{i=1}^{N_2}\sum_{j=1}^{N_2}e_i^T(t)\,\alpha p_{ij}\Gamma_1\left[h_1\!\left(\frac{y_j(t)}{\alpha}\right)-h_1(x_j(t))\right]\le\omega(P\otimes\Gamma_1)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)\sum_{i=1}^{N_2}e_i^T(t)e_i(t).\tag{3.9}$$

Similarly, for the delayed terms,

$$\sum_{i=1}^{N_2}e_i^T(t)B_2\left[g_2(y_i(t-\tau_1))-g_2(\alpha x_i(t-\tau_1))\right]\le\omega(B_2)\left[\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\tilde G^2\sum_{i=1}^{N_2}e_i^T(t-\tau_1)e_i(t-\tau_1)\right],\tag{3.10}$$

$$\sum_{i=1}^{N_2}\sum_{j=1}^{N_2}e_i^T(t)\,q^{\tau}_{ij}\Gamma_2\left[h_2(y_j(t-\tau_2))-h_2(\alpha x_j(t-\tau_2))\right]\le\omega(Q^{\tau}\otimes\Gamma_2)\left[\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\tilde H^2\sum_{i=1}^{N_2}e_i^T(t-\tau_2)e_i(t-\tau_2)\right],\tag{3.11}$$

$$\sum_{i=1}^{N_2}e_i^T(t)\,\alpha B_1\left[g_1\!\left(\frac{y_i(t-\tau_1)}{\alpha}\right)-g_1(x_i(t-\tau_1))\right]\le\omega(B_1)\left[|\alpha|\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\frac{G^2}{|\alpha|}\sum_{i=1}^{N_2}e_i^T(t-\tau_1)e_i(t-\tau_1)\right],\tag{3.12}$$

$$\sum_{i=1}^{N_2}\sum_{j=1}^{N_2}e_i^T(t)\,\alpha p^{\tau}_{ij}\Gamma_2\left[h_1\!\left(\frac{y_j(t-\tau_2)}{\alpha}\right)-h_1(x_j(t-\tau_2))\right]\le\omega(P^{\tau}\otimes\Gamma_2)\left[|\alpha|\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\frac{H^2}{|\alpha|}\sum_{i=1}^{N_2}e_i^T(t-\tau_2)e_i(t-\tau_2)\right].\tag{3.13}$$

Substituting (3.6)-(3.13) into (3.5), the delayed terms cancel against the derivative of the integral terms in (3.4), and we obtain

$$\begin{aligned}\dot V(t)\le{}&\Big[-\lambda_{\min}(C_1+C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(A_1)\left(|\alpha|+\frac{F^2}{|\alpha|}\right)+\omega(B_1)\left(|\alpha|+\frac{G^2}{|\alpha|}\right)\\&+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)+\omega(P\otimes\Gamma_1)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)+\omega(P^{\tau}\otimes\Gamma_2)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)-k\Big]\sum_{i=1}^{N_2}e_i^T(t)e_i(t).\end{aligned}\tag{3.14}$$

Taking condition (3.3) into account, we have $\dot V(t)\le-\varepsilon_1\sum_{i=1}^{N_2}e_i^T(t)e_i(t)\le0$. Clearly, $E=\{e_i(t)=0,\ \text{that is},\ y_i(t)=\alpha x_i(t),\ i\in\mathcal{I}_2\}$ is the largest invariant set contained in $\{\dot V(t)=0\}$. By the LaSalle invariance principle, every trajectory of (3.1) asymptotically converges to $E$ for any initial values; namely, $\lim_{t\to+\infty}\|e_i(t)\|=0$, $i\in\mathcal{I}_2$. Hence, the GPS between the neural networks (2.1) and (2.2) is realized.
The proof is completed.

In Theorem 3.1, the generalized projective synchronization of two different neural networks with mixed time delays has been achieved by choosing suitable nonlinear feedback controllers. However, the feedback control gains required by (3.2) must be large, which is not practical. Hence, it is desirable to improve the scheme so as to reduce the feedback control gains as much as possible. We now give the following improved scheme.

Theorem 3.2. Suppose Assumption 2.3 holds. If the nonlinear controllers are chosen as

$$\begin{aligned}u_i(t)={}&\alpha(C_2-C_1)x_i(t)-A_2f_2(\alpha x_i(t))+\alpha A_1f_1(x_i(t))-B_2g_2(\alpha x_i(t-\tau_1))+\alpha B_1g_1(x_i(t-\tau_1))\\&-\sum_{j=1}^{N_2}\left[q_{ij}\Gamma_1h_2(\alpha x_j(t))+q^{\tau}_{ij}\Gamma_2h_2(\alpha x_j(t-\tau_2))\right]+\alpha\sum_{j=1}^{N_1}\left[p_{ij}\Gamma_1h_1(x_j(t))+p^{\tau}_{ij}\Gamma_2h_1(x_j(t-\tau_2))\right]-k_ie_i(t),\quad i\in\mathcal{I}_2,\end{aligned}\tag{3.15}$$

where the $k_i$ are the feedback gains, denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and if

$$k\ge-\lambda_{\min}(C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)+\varepsilon_2,\tag{3.16}$$

where $\varepsilon_2$ is a positive constant, then the GPS between the neural networks (2.1) and (2.2) can be achieved.

Proof. Select the following Lyapunov functional candidate:

$$V(t)=\frac12\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\omega(B_2)\tilde G^2\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t}e_i^T(s)e_i(s)\,ds+\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t}e_i^T(s)e_i(s)\,ds.\tag{3.17}$$

Differentiating $V$ with respect to time along (3.1) under the controllers (3.15), we have

$$\begin{aligned}\dot V(t)={}&\sum_{i=1}^{N_2}e_i^T(t)\Big\{-C_2e_i(t)-k_ie_i(t)+A_2\left[f_2(y_i(t))-f_2(\alpha x_i(t))\right]+B_2\left[g_2(y_i(t-\tau_1))-g_2(\alpha x_i(t-\tau_1))\right]\\&+\sum_{j=1}^{N_2}q_{ij}\Gamma_1\left[h_2(y_j(t))-h_2(\alpha x_j(t))\right]+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2\left[h_2(y_j(t-\tau_2))-h_2(\alpha x_j(t-\tau_2))\right]\Big\}\\&+\omega(B_2)\tilde G^2\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\right]+\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\right].\end{aligned}\tag{3.18}$$

Substituting (3.6), (3.7), (3.10), and (3.11) into (3.18), we obtain

$$\dot V(t)\le\left[-\lambda_{\min}(C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)-k\right]\sum_{i=1}^{N_2}e_i^T(t)e_i(t).\tag{3.19}$$

By condition (3.16), we have $\dot V(t)\le-\varepsilon_2\sum_{i=1}^{N_2}e_i^T(t)e_i(t)\le0$. In light of the proof of Theorem 3.1, the GPS between the neural networks (2.1) and (2.2) is achieved under the nonlinear controllers (3.15) as well.

Remark 3.3.
Comparing the infimum of the feedback control gain $k$ in Theorem 3.1 with that in Theorem 3.2 shows that

$$k_{\mathrm{res}}=-\lambda_{\min}(C_1)+\omega(A_1)\left(|\alpha|+\frac{F^2}{|\alpha|}\right)+\omega(B_1)\left(|\alpha|+\frac{G^2}{|\alpha|}\right)+\omega(P\otimes\Gamma_1)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)+\omega(P^{\tau}\otimes\Gamma_2)\left(|\alpha|+\frac{H^2}{|\alpha|}\right)$$

is the extra part required in Theorem 3.1 beyond the feedback gain needed in Theorem 3.2. Usually $k_{\mathrm{res}}>0$, which will be demonstrated in the simulations. Furthermore, in order to obtain even smaller feedback control gains, we choose more suitable controllers as follows.

Theorem 3.4. Suppose Assumption 2.3 holds. Under the nonlinear controllers

$$\begin{aligned}u_i(t)={}&\alpha C_2x_i(t)-C_1y_i(t)-A_2f_2(\alpha x_i(t))+\alpha A_1f_1(x_i(t))-B_2g_2(\alpha x_i(t-\tau_1))+\alpha B_1g_1(x_i(t-\tau_1))\\&-\sum_{j=1}^{N_2}\left[q_{ij}\Gamma_1h_2(\alpha x_j(t))+q^{\tau}_{ij}\Gamma_2h_2(\alpha x_j(t-\tau_2))\right]+\alpha\sum_{j=1}^{N_1}\left[p_{ij}\Gamma_1h_1(x_j(t))+p^{\tau}_{ij}\Gamma_2h_1(x_j(t-\tau_2))\right]-k_ie_i(t),\quad i\in\mathcal{I}_2,\end{aligned}\tag{3.20}$$

where the $k_i$ are the feedback control gains, denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and if

$$k\ge-\lambda_{\min}(C_1+C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)+\varepsilon_3,\tag{3.21}$$

where $\varepsilon_3$ is some positive constant, then the GPS between the two neural networks (2.1) and (2.2) can be achieved.

Proof. Choose the same Lyapunov functional as (3.17) in the proof of Theorem 3.2. Then we get

$$\begin{aligned}\dot V(t)={}&\sum_{i=1}^{N_2}e_i^T(t)\Big\{-(C_1+C_2)e_i(t)-k_ie_i(t)+A_2\left[f_2(y_i(t))-f_2(\alpha x_i(t))\right]+B_2\left[g_2(y_i(t-\tau_1))-g_2(\alpha x_i(t-\tau_1))\right]\\&+\sum_{j=1}^{N_2}q_{ij}\Gamma_1\left[h_2(y_j(t))-h_2(\alpha x_j(t))\right]+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2\left[h_2(y_j(t-\tau_2))-h_2(\alpha x_j(t-\tau_2))\right]\Big\}\\&+\omega(B_2)\tilde G^2\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\right]+\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2\sum_{i=1}^{N_2}\left[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\right].\end{aligned}\tag{3.22}$$

Combining (3.6), (3.7), (3.10), and (3.11) with (3.22), inequality (3.19) becomes

$$\dot V(t)\le\left[-\lambda_{\min}(C_1+C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)-k\right]\sum_{i=1}^{N_2}e_i^T(t)e_i(t).\tag{3.23}$$

Taking condition (3.21) into account, the GPS can be realized under the nonlinear controllers (3.20).

Remark 3.5. The infimum of $k$ in (3.21) has one more term, $-\lambda_{\min}(C_1)$, than that in (3.16). Because $C_1$ is diagonal and positive definite, $-\lambda_{\min}(C_1)<0$, so the required $k$ in Theorem 3.4 is smaller than the one in Theorem 3.2.
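To see how a feedforward-cancellation controller of the form (3.15) behaves numerically, here is a small self-contained sketch (ours, not the paper's; all parameter values, network sizes, and delays are invented for illustration). It integrates a two-node drive-response pair with scalar node states and tanh activations by forward Euler, using history buffers for the two delays:

```python
import math

# Illustrative parameters (n = 1 state per node, N1 = N2 = 2 nodes)
C1, C2 = 1.0, 1.2          # decay constants
A1, A2 = 0.4, 0.3          # system gains multiplying f = tanh
B1, B2 = 0.2, 0.1          # gains on the internally delayed g = tanh
P  = [[-1.0, 1.0], [1.0, -1.0]]   # drive coupling (time t)
Pt = [[-1.0, 1.0], [1.0, -1.0]]   # drive coupling (time t - tau2)
Q  = [[-1.0, 1.0], [1.0, -1.0]]   # response coupling (time t)
Qt = [[-1.0, 1.0], [1.0, -1.0]]   # response coupling (time t - tau2)
alpha, k = -0.5, 10.0      # scaling factor and feedback gain
tau1, tau2, dt, T = 0.5, 0.3, 0.001, 10.0
d1, d2 = int(tau1 / dt), int(tau2 / dt)

f = math.tanh
# history buffers: index -1 is "now"; constant initial history
X = [[1.0, -2.0]] * (max(d1, d2) + 1)
Y = [[0.5, 1.5]] * (max(d1, d2) + 1)

for _ in range(int(T / dt)):
    x, y = X[-1], Y[-1]
    xd1, yd1 = X[-1 - d1], Y[-1 - d1]   # states at t - tau1
    xd2, yd2 = X[-1 - d2], Y[-1 - d2]   # states at t - tau2
    nx, ny = [], []
    for i in range(2):
        # drive network, as in (2.1)
        dx = (-C1 * x[i] + A1 * f(x[i]) + B1 * f(xd1[i])
              + sum(P[i][j] * f(x[j]) for j in range(2))
              + sum(Pt[i][j] * f(xd2[j]) for j in range(2)))
        # controller of the form (3.15): feedforward cancellation plus -k e_i
        e = y[i] - alpha * x[i]
        u = (alpha * (C2 - C1) * x[i]
             - A2 * f(alpha * x[i]) + alpha * A1 * f(x[i])
             - B2 * f(alpha * xd1[i]) + alpha * B1 * f(xd1[i])
             - sum(Q[i][j] * f(alpha * x[j]) + Qt[i][j] * f(alpha * xd2[j]) for j in range(2))
             + alpha * sum(P[i][j] * f(x[j]) + Pt[i][j] * f(xd2[j]) for j in range(2))
             - k * e)
        # response network, as in (2.2)
        dy = (-C2 * y[i] + A2 * f(y[i]) + B2 * f(yd1[i])
              + sum(Q[i][j] * f(y[j]) for j in range(2))
              + sum(Qt[i][j] * f(yd2[j]) for j in range(2)) + u)
        nx.append(x[i] + dt * dx)
        ny.append(y[i] + dt * dy)
    X.append(nx)
    Y.append(ny)

err = max(abs(Y[-1][i] - alpha * X[-1][i]) for i in range(2))
print("final GPS error:", err)
```

With these values the GPS error decays rapidly toward zero: the exact cancellation in the controller makes $e=0$ an equilibrium of the discretized system as well, and the $-ke$ term dominates the Lipschitz bounds of the remaining tanh differences.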
If the two neural networks (2.1) and (2.2) have the same number of nodes, node dynamics, and topological structure, that is, $C_1=C_2$, $A_1=A_2$, $B_1=B_2$, $f_1=f_2$, $g_1=g_2$, $h_1=h_2$, $p_{ij}=q_{ij}$, $p^{\tau}_{ij}=q^{\tau}_{ij}$, and $\Gamma_1=\Gamma_2$, the error system (3.1) can be rewritten as

$$\begin{aligned}\dot e_i(t)={}&-C_1e_i(t)+A_1\left[f_1(y_i(t))-\alpha f_1(x_i(t))\right]+B_1\left[g_1(y_i(t-\tau_1))-\alpha g_1(x_i(t-\tau_1))\right]+u_i(t)\\&+\sum_{j=1}^{N_1}p_{ij}\Gamma_1\left[h_1(y_j(t))-\alpha h_1(x_j(t))\right]+\sum_{j=1}^{N_1}p^{\tau}_{ij}\Gamma_2\left[h_1(y_j(t-\tau_2))-\alpha h_1(x_j(t-\tau_2))\right],\quad i\in\mathcal{I}_1.\end{aligned}\tag{3.24}$$

Thus, we obtain the following corollary for synchronizing the error system (3.24).

Corollary 3.6. Suppose the two neural networks (2.1) and (2.2) have the same number of nodes, node dynamics, and topological structure. If the nonlinear controllers

$$\begin{aligned}u_i(t)={}&-A_1\left[f_1(\alpha x_i(t))-\alpha f_1(x_i(t))\right]-B_1\left[g_1(\alpha x_i(t-\tau_1))-\alpha g_1(x_i(t-\tau_1))\right]-k_ie_i(t)\\&-\sum_{j=1}^{N_1}p_{ij}\Gamma_1\left[h_1(y_j(t))-\alpha h_1(x_j(t))\right]-\sum_{j=1}^{N_1}p^{\tau}_{ij}\Gamma_2\left[h_1(y_j(t-\tau_2))-\alpha h_1(x_j(t-\tau_2))\right],\quad i\in\mathcal{I}_1,\end{aligned}\tag{3.25}$$

are employed, where the $k_i$ are the feedback control gains and $k=\min_{i\in\mathcal{I}_1}\{k_i\}$, then, when $k\ge-\lambda_{\min}(C_1)+\omega(A_1)(F^2+1)+\omega(B_1)(1+G^2)$, the error system (3.24) is synchronized.

Proof. Construct the Lyapunov function

$$V(t)=\frac12\sum_{i=1}^{N_1}e_i^T(t)e_i(t)+\omega(B_1)G^2\sum_{i=1}^{N_1}\int_{t-\tau_1}^{t}e_i^T(s)e_i(s)\,ds.\tag{3.26}$$

The condition $k\ge-\lambda_{\min}(C_1)+\omega(A_1)(F^2+1)+\omega(B_1)(1+G^2)$ then follows by the same method as in the theorems above.

Furthermore, if system (3.24) contains no time-delay terms, it reduces to

$$\dot e_i(t)=-C_1e_i(t)+A_1\left[f_1(y_i(t))-\alpha f_1(x_i(t))\right]+\sum_{j=1}^{N_1}p_{ij}\Gamma_1\left[h_1(y_j(t))-\alpha h_1(x_j(t))\right]+u_i(t),\quad i\in\mathcal{I}_1,\tag{3.27}$$

and a simpler corollary follows.

Corollary 3.7. The controllers

$$u_i(t)=-A_1\left[f_1(\alpha x_i(t))-\alpha f_1(x_i(t))\right]-\sum_{j=1}^{N_1}p_{ij}\Gamma_1\left[h_1(y_j(t))-\alpha h_1(x_j(t))\right]-k_ie_i(t),\quad i\in\mathcal{I}_1,\tag{3.28}$$

synchronize the system (3.27) when the feedback control gain $k$ satisfies $k\ge-\lambda_{\min}(C_1)+\omega(A_1)(F^2+1)$.
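The cancellation behind Corollary 3.7 can be checked mechanically: applying a controller of the form (3.28) to the delay-free networks should leave exactly $\dot e_i=-C_1e_i+A_1[f_1(y_i)-f_1(\alpha x_i)]-k_ie_i$. The snippet below (illustrative only; all numerical values are invented, scalar node states) verifies this identity at a random state:

```python
import math
import random

# Identity check for the delay-free case: with a controller of the form (3.28),
# the error derivative equals -C1*e + A1*(f(y) - f(alpha*x)) - k*e exactly.
random.seed(1)
f = math.tanh
C1, A1, alpha, k, N = 1.0, 0.7, -0.8, 4.0, 3
P = [[-2.0, 1.0, 1.0], [1.0, -2.0, 1.0], [1.0, 1.0, -2.0]]
x = [random.uniform(-1, 1) for _ in range(N)]
y = [random.uniform(-1, 1) for _ in range(N)]
e = [y[i] - alpha * x[i] for i in range(N)]

for i in range(N):
    dx = -C1 * x[i] + A1 * f(x[i]) + sum(P[i][j] * f(x[j]) for j in range(N))
    u = (-A1 * (f(alpha * x[i]) - alpha * f(x[i]))
         - sum(P[i][j] * (f(y[j]) - alpha * f(x[j])) for j in range(N))
         - k * e[i])
    dy = -C1 * y[i] + A1 * f(y[i]) + sum(P[i][j] * f(y[j]) for j in range(N)) + u
    de = dy - alpha * dx
    expected = -C1 * e[i] + A1 * (f(y[i]) - f(alpha * x[i])) - k * e[i]
    assert abs(de - expected) < 1e-12
print("controller (3.28) cancels the coupling and feedforward terms as claimed")
```

Because the coupling mismatch is subtracted exactly, no $\omega(P\otimes\Gamma_1)$ term appears in the gain bound of Corollary 3.7.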
Similarly, in the drive network (2.1) and the response network (2.2), if the node nonlinearities vanish ($f_1=f_2=0$ and $g_1=g_2=0$), $h_1$ and $h_2$ are linear and taken as the identity, and the nondelayed coupling matrices satisfy $P=Q=0$, then, under a suitable controller, the error system (3.1) reduces to

$$\dot e_i(t)=-(C_1+C_2)e_i(t)+\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2e_j(t-\tau_2)-k_ie_i(t),\quad i\in\mathcal{I}_2.\tag{3.29}$$

Thus, we obtain the following corollary for synchronizing the error system (3.29).

Corollary 3.8. Apply the nonlinear controllers

$$u_i(t)=\alpha C_2x_i(t)-C_1y_i(t)-\alpha\sum_{j=1}^{N_2}q^{\tau}_{ij}\Gamma_2x_j(t-\tau_2)+\alpha\sum_{j=1}^{N_1}p^{\tau}_{ij}\Gamma_2x_j(t-\tau_2)-k_ie_i(t),\quad i\in\mathcal{I}_2,\tag{3.30}$$

where the $k_i$ are the feedback control gains, and denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$. When $k\ge-\lambda_{\min}(C_1+C_2)+2\omega(Q^{\tau}\otimes\Gamma_2)+1$, the GPS between the two neural networks (2.1) and (2.2) is achieved.

Proof. Construct the Lyapunov function

$$V(t)=\frac12\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\omega(Q^{\tau}\otimes\Gamma_2)\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t}e_i^T(s)e_i(s)\,ds.\tag{3.31}$$

The condition $k\ge-\lambda_{\min}(C_1+C_2)+2\omega(Q^{\tau}\otimes\Gamma_2)+1$ follows by the same method as in the theorems above. This conclusion also contains the result of [32].

It is easy to see that the theoretical feedback gains given in the above results (Theorems 3.1-3.4) are conservative, usually much larger than actually needed; clearly, it is desirable to make the feedback gains as small as possible. Here, an adaptive technique is adopted to achieve this goal.

Theorem 3.9. Suppose that Assumption 2.3 holds and the feedback controllers are chosen as (3.20). If the feedback control gains satisfy the update law

$$\dot k_i=\delta_ie_i^T(t)e_i(t),\quad i\in\mathcal{I}_2,\tag{3.32}$$

where the $\delta_i$ are arbitrary positive constants, then the GPS between the neural networks (2.1) and (2.2) is realized.

Proof. Construct the Lyapunov function

$$V(t)=\frac12\sum_{i=1}^{N_2}e_i^T(t)e_i(t)+\omega(B_2)\tilde G^2\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t}e_i^T(s)e_i(s)\,ds+\frac12\sum_{i=1}^{N_2}\frac{1}{\delta_i}\left(k_i-L\right)^2+\omega(Q^{\tau}\otimes\Gamma_2)\tilde H^2\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t}e_i^T(s)e_i(s)\,ds,\tag{3.33}$$

where $L$ is a sufficiently large positive constant to be determined.
Let

$$V_1(t)=\frac12\sum_{i=1}^{N_2}\frac{1}{\delta_i}\left(k_i-L\right)^2.\tag{3.34}$$

Then we have

$$\dot V_1(t)=\sum_{i=1}^{N_2}\left(k_i-L\right)e_i^T(t)e_i(t).\tag{3.35}$$

Combining (3.18) and (3.35) with (3.33), one obtains

$$\dot V(t)\le\left[-\lambda_{\min}(C_1+C_2)+\omega(A_2)(\tilde F^2+1)+\omega(B_2)(1+\tilde G^2)+\omega(Q\otimes\Gamma_1)(\tilde H^2+1)+\omega(Q^{\tau}\otimes\Gamma_2)(1+\tilde H^2)-L\right]\sum_{i=1}^{N_2}e_i^T(t)e_i(t).\tag{3.36}$$

Choosing $L$ large enough makes the bracketed quantity negative; therefore, according to the proofs of the three preceding theorems, the conclusion of Theorem 3.9 follows.

4. Numerical Simulations

In this section, several examples are given to verify the conclusions established above. Consider the two neural networks (2.1) and (2.2) with the following parameters:

$$C_1=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad C_2=\begin{bmatrix}0.97&0\\0&1.1\end{bmatrix},\quad A_1=\begin{bmatrix}2&-0.1\\-5&3.2\end{bmatrix},\quad A_2=\begin{bmatrix}2.1&-0.1\\-5.1&3.2\end{bmatrix},$$

$$B_1=\begin{bmatrix}-1.6&0.1\\-0.18&-2.4\end{bmatrix},\quad B_2=\begin{bmatrix}-1.5&0\\-0.15&-2.3\end{bmatrix},\quad \Gamma_1=\Gamma_2=\begin{bmatrix}1&0\\0&1\end{bmatrix},$$

$$P=\begin{bmatrix}-5&1&1&1&1&1\\1&-5&1&1&1&1\\1&1&-3&1&0&0\\1&1&1&-4&1&0\\1&1&0&1&-4&1\\1&1&0&0&1&-3\end{bmatrix},\quad P^{\tau}=\begin{bmatrix}-1&0&1&0&0&0\\0&-1&0&0&1&0\\0&1&-1&0&0&0\\0&0&1&-1&0&0\\0&1&0&0&-1&0\\0&1&0&0&0&-1\end{bmatrix},$$

$$Q=\begin{bmatrix}-3&1&1&1\\1&-3&1&1\\1&1&-3&1\\1&1&1&-3\end{bmatrix},\quad Q^{\tau}=\begin{bmatrix}-1&0&1&0\\1&-1&0&0\\0&1&-1&0\\0&1&0&-1\end{bmatrix},\tag{4.1}$$

with activation functions $f_{ij}(s)=g_{ij}(s)=\tanh(s)$ for $i=1,2$, and $N_1=6$, $N_2=4$.

Simulation 1. The four panels in Figure 1 show the motion traces of the drive and response systems. Comparing Figures 1(b), 1(c), and 1(d) with Figure 1(a), an intuitive conclusion can be drawn: because the parameters of the response system are very close to those of the drive system, the shapes of the response orbits resemble the drive orbit, but the offsets of the four traces from the zero orbit differ markedly.

Figure 1: (a) Trajectory of the drive system; (b) trajectory of the response system with $\alpha=0.7$; (c) trajectory of the response system with $\alpha=-0.7$; (d) trajectory of the response system with $\alpha=-1$.

Simulation 2. Setting $F=G=H=\tilde F=\tilde G=\tilde H=1$, it is easy to verify that Assumption 2.3 holds. Choose $x_{ij}(s)=100(12-3i-j)$ and $y_{ij}(s)=100(-8+3i+j)$ for $s\le0$ as the initial values, and define

$$e_x(t)=\max_{j>i}\left\|x_i(t)-x_j(t)\right\|,\quad e_y(t)=\max_{j>i}\left\|y_i(t)-y_j(t)\right\|,\quad e_{xy}(t)=\sum_{i=1}^{4}\left\|y_i(t)-\alpha x_i(t)\right\|\tag{4.2}$$

to measure the progress of the projective synchronization.
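As an aside, the adaptive update law (3.32) can be sketched in isolation on a single drive-response pair with no delays or coupling. This is a minimal illustration with invented parameters, not the simulation setup of this section:

```python
import math

# Minimal sketch of the adaptive law (3.32): k' = delta * e^T e, on a scalar
# drive-response pair with no delays or coupling; all values are invented.
f = math.tanh
C1, A1, alpha, delta = 1.0, 0.6, -0.7, 2.0
dt, T = 0.001, 20.0
x, y, kgain = 1.5, 2.0, 0.0

for _ in range(int(T / dt)):
    e = y - alpha * x
    dx = -C1 * x + A1 * f(x)
    # controller: feedforward cancellation plus adaptive feedback -k(t) e
    u = -A1 * (f(alpha * x) - alpha * f(x)) - kgain * e
    dy = -C1 * y + A1 * f(y) + u
    x += dt * dx
    y += dt * dy
    kgain += dt * delta * e * e     # update law (3.32)

final_err = abs(y - alpha * x)
print("final error:", final_err, " adapted gain:", kgain)
```

The gain grows only while the error is nonzero and then freezes, which is exactly the mechanism that lets the adaptive scheme settle on a gain far below the conservative theoretical infimum.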
In Figure 2, the top three plots present the synchronization process of the drive system for $\alpha=0.7,-0.7,-1$, respectively; the middle three show the evolution of $e_y$ as $t\to\infty$ for the same three projective factors; and the bottom three show the synchronization error $e_{xy}$ of the two neural networks for the three values of $\alpha$. These nine plots show that, for each $\alpha$, synchronization is achieved within the time interval [4, 6], although the amplitudes of the nine curves differ markedly. In the same way, for the three projective factors we can give the three infimums of the feedback control gains: $k\ge40.65$ for $\alpha=0.7$, $k\ge40.65$ for $\alpha=-0.7$, and $k\ge39.33$ for $\alpha=-1$.

Figure 2: The evolution of $e_x$, $e_y$, and $e_{xy}$ as $t\to\infty$ for different projective factors.

Remark 4.1. The reason why we apply three different nonlinear controllers in Theorems 3.1, 3.2, and 3.4 is to find a smaller feedback control gain. By computation, we have $k\ge40.65$ for Theorem 3.1, $k\ge21.43$ for Theorem 3.2, and $k\ge20.43$ for Theorem 3.4, which agrees with our conjecture and with the simulations.

Simulation 3. In this simulation, we verify the synchronization process of the two neural networks when adaptive control is applied to the response system. Figure 3 shows the time evolution of $e_x$, $e_y$, $e_{xy}$, and $k(t)$; in particular, the fourth panel, Figure 3(d), shows that the feedback control gain tends to approximately 11.15 as $t\to\infty$, far lower than 39.33. This confirms that the adaptive control method can diminish the feedback control gain.

Figure 3: (a), (b), and (c) show the evolution of $e_x$, $e_y$, and $e_{xy}$ under the adaptive control (3.32); (d) shows the curve of the adaptive control gain.

5. Conclusion

The GPS between two neural networks with mixed time delays and different parameters was investigated in this paper.
By means of Lyapunov stability theory, GPS was realized under three nonlinear controllers. By comparison, we found that the nonlinear controller in Theorem 3.4 is simpler and makes it easier to ensure that GPS is achieved, so it is also the one suited to practical design. Based on this optimal nonlinear controller, an adaptive update technique was designed to keep the feedback control gain as small as possible. Finally, several numerical simulations verified the validity of these results.

Acknowledgments

The authors thank the referees and the editor for their valuable comments on this paper. This work was supported by the National Science Foundation of China under Grant no. 61070087, the Guangdong Education University Industry Cooperation Projects (2009B090300355), the Foundation for Distinguished Young Talents in Higher Education of Guangdong (LYM11115), the Shenzhen Basic Research Project (JC200903120040A, JC201006010743A), and the Shenzhen Polytechnic Youth Innovation Project (2210k3010020).

References

[1] S. H. Strogatz, "Exploring complex networks," Nature, vol. 410, no. 6825, pp. 268-276, 2001.
[2] M. E. J. Newman, "The structure and function of complex networks," SIAM Review, vol. 45, no. 2, pp. 167-256, 2003.
[3] A.-L. Barabási, "Scale-free networks: a decade and beyond," Science, vol. 325, no. 5939, pp. 412-413, 2009.
[4] C. T. Butts, "Revisiting the foundations of network analysis," Science, vol. 325, no. 5939, pp. 414-416, 2009.
[5] D. Centola, "The spread of behavior in an online social network experiment," Science, vol. 329, no. 5996, pp. 1194-1197, 2010.
[6] G. Shinar and M. Feinberg, "Structural sources of robustness in biochemical reaction networks," Science, vol. 327, no. 5971, pp. 1389-1391, 2010.
[7] W. Yu, J. Cao, and J. Lü, "Global synchronization of linearly hybrid coupled networks with time-varying delay," SIAM Journal on Applied Dynamical Systems, vol. 7, no. 1, pp. 108-133, 2008.
[8] K. Wang, X. Fu, and K. Li, "Cluster synchronization in community networks with nonidentical nodes," Chaos, vol. 19, no. 2, Article ID 023106, 2009.
[9] J. Lü and G. Chen, "A time-varying complex dynamical network model and its controlled synchronization criteria," IEEE Transactions on Automatic Control, vol. 50, no. 6, pp. 841-846, 2005.
[10] J.-W. Wang, Q. Ma, L. Zeng, and M. S. Abd-Elouahab, "Mixed outer synchronization of coupled complex networks with time-varying coupling delay," Chaos, vol. 21, no. 1, Article ID 013121, 2011.
[11] Q. Zhu and J. Cao, "Adaptive synchronization under almost every initial data for stochastic neural networks with time-varying delays and distributed delays," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 4, pp. 2139-2159, 2011.
[12] Q. Gan, R. Xu, and X. Kang, "Synchronization of chaotic neural networks with mixed time delays," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 966-974, 2011.
[13] I. Kanter, M. Zigzag, A. Englert, F. Geissler, and W. Kinzel, "Synchronization of unidirectional time delay chaotic networks and the greatest common divisor," EPL, vol. 93, no. 6, Article ID 60003, 2011.
[14] K. S. Sudheer and M. Sabir, "Adaptive modified function projective synchronization of multiple time-delayed chaotic Rossler system," Physics Letters A, vol. 375, no. 8, pp. 1176-1178, 2011.
[15] C. K. Zhang, Y. He, and M. Wu, "Exponential synchronization of neural networks with time-varying mixed delays and sampled-data," Neurocomputing, vol. 74, no. 1-3, pp. 265-273, 2010.
[16] C. K. Ahn, "Anti-synchronization of time-delayed chaotic neural networks based on adaptive control," International Journal of Theoretical Physics, vol. 48, no. 12, pp. 3498-3509, 2009.
[17] C. K. Ahn, "Adaptive H∞ anti-synchronization for time-delayed chaotic neural networks," Progress of Theoretical Physics, vol. 122, no. 6, pp. 1391-1403, 2009.
[18] Z. M. Ge, Y. T. Wong, and S. Y. Li, "Temporary lag and anticipated synchronization and anti-synchronization of uncoupled time-delayed chaotic systems," Journal of Sound and Vibration, vol. 318, no. 1-2, pp. 267-278, 2008.
[19] A. Prasad, J. Kurths, and R. Ramaswamy, "The effect of time-delay on anomalous phase synchronization," Physics Letters A, vol. 372, no. 40, pp. 6150-6154, 2008.
[20] D. Ghosh, A. Ray, and A. R. Chowdhury, "Generalized and phase synchronization between two different time-delayed systems," Modern Physics Letters B, vol. 22, no. 19, pp. 1867-1878, 2008.
[21] D. V. Senthilkumar, M. Lakshmanan, and J. Kurths, "Transition from phase to generalized synchronization in time-delay systems," Chaos, vol. 18, no. 2, Article ID 023118, 2008.
[22] D. Zhang and J. Xu, "Projective synchronization of different chaotic time-delayed neural networks based on integral sliding mode controller," Applied Mathematics and Computation, vol. 217, no. 1, pp. 164-174, 2010.
[23] D. Ghosh, "Generalized projective synchronization in time-delayed systems: nonlinear observer approach," Chaos, vol. 19, no. 1, Article ID 013102, 2009.
[24] D. Ghosh, S. Banerjee, and A. Roy Chowdhury, "Generalized and projective synchronization in modulated time-delayed systems," Physics Letters A, vol. 374, no. 21, pp. 2143-2149, 2010.
[25] N. Vasegh and F. Khellat, "Projective synchronization of chaotic time-delayed systems via sliding mode controller," Chaos, Solitons and Fractals, vol. 42, no. 2, pp. 1054-1061, 2009.
[26] D. Ghosh, P. Saha, and A. Roy Chowdhury, "Linear observer based projective synchronization in delay Rössler system," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 6, pp. 1640-1647, 2010.
[27] Q. Gan, R. Xu, and X. Kang, "Synchronization of chaotic neural networks with mixed time delays," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 966-974, 2011.
[28] J. Chen, L. Jiao, J. Wu, and X. Wang, "Projective synchronization with different scale factors in a driven-response complex network and its application in image encryption," Nonlinear Analysis, vol. 11, no. 4, pp. 3045-3058, 2010.
[29] C. F. Feng, X. J. Xu, S. J. Wang, and Y. H. Wang, "Projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random networks," Chaos, vol. 18, no. 2, Article ID 023117, 2008.
[30] J.-W. Wang, Q. Ma, L. Zeng, and M. S. Abd-Elouahab, "Mixed outer synchronization of coupled complex networks with time-varying coupling delay," Chaos, vol. 21, no. 1, Article ID 013121, 2011.
[31] S. Zheng, Q. Bi, and G. Cai, "Adaptive projective synchronization in complex networks with time-varying coupling delay," Physics Letters A, vol. 373, no. 17, pp. 1553-1559, 2009.
[32] X. Wu and H. Lu, "Generalized projective synchronization between two different general complex dynamical networks with delayed coupling," Physics Letters A, vol. 374, no. 38, pp. 3932-3941, 2010.
[33] W. Wu, W. Zhou, and T. Chen, "Cluster synchronization of linearly coupled complex networks under pinning control," IEEE Transactions on Circuits and Systems I, vol. 56, no. 4, pp. 829-839, 2009.


This is a preview of a remote PDF: http://downloads.hindawi.com/journals/ddns/2012/153542.pdf

Xuefei Wu, Chen Xu, Jianwen Feng, Yi Zhao, Xuan Zhou. Generalized Projective Synchronization between Two Different Neural Networks with Mixed Time Delays, Discrete Dynamics in Nature and Society, 2012, DOI: 10.1155/2012/153542