Finding Quasi-Optimal Network Topologies for Information Transmission in Active Networks
Hussein MS (2008) Finding Quasi-Optimal Network Topologies for Information Transmission in Active Networks. PLoS
ONE 3(10): e3479. doi:10.1371/journal.pone.0003479
Murilo S. Baptista, Josué X. de Carvalho, Mahir S. Hussein
Editor: Raya Khanin, University of Glasgow, United Kingdom
Affiliations: Max-Planck-Institut für Physik komplexer Systeme, Dresden, Germany; Centro de Matemática da Universidade do Porto, Porto, Portugal; Institute of Physics, University of São Paulo, São Paulo, Brazil
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
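The Laplacian-based characterization of topology mentioned above can be illustrated with a short computation. The following is a minimal sketch: the particular graph (a 4-node ring), and the claim about which eigenvalue conditions matter, are illustrative assumptions, not the specific conditions derived in this paper.

```python
import numpy as np

# Adjacency matrix of a small illustrative network: a ring of 4 nodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Graph Laplacian: degree matrix minus adjacency matrix.
L = np.diag(A.sum(axis=1)) - A

# The Laplacian eigenvalues encode the network topology: for a connected
# graph the smallest eigenvalue is 0 and the second smallest (the
# algebraic connectivity) is strictly positive.
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals)  # [0. 2. 2. 4.] for the 4-node ring
```

Searching over topologies then amounts to searching over Laplacian spectra, which is what makes the approach independent of the particular node dynamics.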
Funding: This study was financed by the Max Planck Institute for the Physics of Complex Systems, FCT, FAPESP, CNPq, and the Martin Gutzwiller Prize 2007/2008 (MSH).
Competing Interests: The authors have declared that no competing interests exist.
Introduction
Given an arbitrary time-dependent stimulus that externally excites an active network formed by systems with some intrinsic dynamics (e.g. neurons or oscillators), how much information about such a stimulus can be retrieved by measuring the time evolution of one of the elements of the network? Determining how, and how much, information flows along anatomical brain paths is an important requirement for understanding how animals perceive their environment, learn, and behave [1,2,3].
Even though the approaches of Refs. [1,2,3,4,5,6] have brought considerable understanding of how and how much information from a stimulus is transmitted in a neural network, the relation between network circuits (topology) and information transmission, in neural as well as general active networks, still awaits a more quantitative description [7]. That is the main thrust of the present manuscript: to present a quantitative way to relate network topology with information in active networks. Since information might not always be easy to measure or quantify in experiments, we endeavour to clarify the relation between information and synchronization, a phenomenon which is often not only possible to observe but also relatively easy to characterize.
We initially proceed along the same lines as Refs. [8,9] and study the information transfer in autonomous systems. However, instead of treating the information transfer between the components of a dynamical system, we treat the transfer of information per unit time exchanged between two elements in an autonomous chaotic active network. Thus, we neglect the complex relation between an external stimulus and the network, and show how to calculate an upper bound for the mutual information rate (MIR) exchanged between two elements (a communication channel) in an autonomous network. Finally, we discuss how to extend this formula to non-chaotic networks under the influence of a time-dependent stimulus.
Most of this work is directed at ensuring the plausibility and validity of the proposed formula for the upper bound of the MIR (Sec. Results), and at studying its applications in order to clarify the relation among network topology, information, and synchronization. We do not rely only on results provided by this formula: we also calculate the MIR by the methods of Refs. [10,11], and by symbolically encoding the trajectories of the elements forming the network and then measuring the mutual information provided by the resulting discrete sequence of symbols.
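The symbolic estimate described above can be sketched as follows. This is a minimal illustration, assuming a binary threshold partition and a histogram-based mutual information estimator; the actual partition and estimators used in the paper (and in Refs. [10,11]) may differ.

```python
import numpy as np

def symbolize(x, threshold=None):
    """Encode a real-valued trajectory as a binary symbol sequence:
    1 where the value exceeds the threshold (default: the median), else 0."""
    x = np.asarray(x)
    if threshold is None:
        threshold = np.median(x)
    return (x > threshold).astype(int)

def mutual_information(sx, sy):
    """Mutual information (in bits per symbol) of two binary symbol
    sequences, estimated from their joint histogram."""
    joint = np.zeros((2, 2))
    for a, b in zip(sx, sy):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in range(2):
        for b in range(2):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

# Two identical trajectories share ~1 bit per symbol under a median
# split; two independent ones share ~0 bits.
x = np.sin(np.linspace(0, 40 * np.pi, 4000))
rng = np.random.default_rng(0)
print(mutual_information(symbolize(x), symbolize(x)))  # close to 1.0
print(mutual_information(symbolize(rng.standard_normal(4000)),
                         symbolize(rng.standard_normal(4000))))  # close to 0.0
```

Dividing such an estimate by the sampling interval converts it into a rate, which is how a symbol-based mutual information can be compared against the MIR bound discussed in the text.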
To illustrate the power of the proposed formula, we apply it to study the exchange of information in networks of coupled chaotic maps (Sec. Methods) and in networks of bidirectionally, electrically coupled Hindmarsh-Rose neurons (Sec. Results). Our formula can be applied to a larger class of active networks than the ones considered here, such as networks whose elements are coupled both electrically and chemically (see Ref. [12]). Admittedly, the network topologies studied here are much simpler than the ones found in the brain [13,14]. Nevertheless, we believe our approaches can be used to better understand how information is transferred in more realistic networks, such as scale-free networks [15] and small-world
netw (...truncated)