Particle Swarm Optimization with Double Learning Patterns

Computational Intelligence and Neuroscience, Dec 2015

Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO often suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. We then develop a PSO with double learning patterns (PSO-DLP), which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore in order to maintain swarm diversity, while the particles in the slave swarm learn from the global best particle in order to refine a promising solution. An interaction mechanism between the two swarms is enabled according to their evolutionary states; it can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP achieves promising performance and outperforms eight PSO variants.
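For readers who want a concrete picture of the two-swarm idea described in the abstract, the sketch below shows a minimal master/slave PSO skeleton. It is only an illustration under our own assumptions: the diversity-oriented update of the master swarm (learning from a randomly chosen peer's personal best), the gbest-oriented update of the slave swarm, the stagnation-based interaction trigger, and all parameter values are placeholders, not the PSO-DLP rules defined in the paper.

import numpy as np

def sphere(x):
    # simple test objective; any minimization function could be used
    return float(np.sum(x * x))

def two_swarm_pso(f=sphere, dim=10, n=20, iters=500, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)

    def init():
        x = rng.uniform(lo, hi, (n, dim))
        return x, np.zeros((n, dim)), x.copy(), np.array([f(xi) for xi in x])

    mx, mv, mp, mpf = init()          # master swarm: keeps diversity
    sx, sv, sp, spf = init()          # slave swarm: exploits gbest
    pool = np.vstack([mp, sp]); poolf = np.concatenate([mpf, spf])
    g, gf = pool[np.argmin(poolf)].copy(), float(poolf.min())
    stall = 0
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters     # inertia weight (placeholder schedule)
        for i in range(n):
            j = rng.integers(n)       # master learns from a random pbest, not gbest
            r1, r2 = rng.random(dim), rng.random(dim)
            mv[i] = w * mv[i] + 2.0 * r1 * (mp[i] - mx[i]) + 2.0 * r2 * (mp[j] - mx[i])
            mx[i] = np.clip(mx[i] + mv[i], lo, hi)
            r1, r2 = rng.random(dim), rng.random(dim)
            sv[i] = w * sv[i] + 2.0 * r1 * (sp[i] - sx[i]) + 2.0 * r2 * (g - sx[i])
            sx[i] = np.clip(sx[i] + sv[i], lo, hi)
        for x, p, pf in ((mx, mp, mpf), (sx, sp, spf)):
            fx = np.array([f(xi) for xi in x])
            improved = fx < pf
            p[improved], pf[improved] = x[improved], fx[improved]
        best = min(mpf.min(), spf.min())
        if best < gf:
            gf = float(best)
            g = (mp[np.argmin(mpf)] if mpf.min() <= spf.min() else sp[np.argmin(spf)]).copy()
            stall = 0
        else:
            stall += 1
        if stall >= 20:               # placeholder "interaction": reseed the slave swarm's
            k = int(np.argmax(spf))   # worst particle from the master swarm's best pbest
            sx[k], sv[k] = mp[np.argmin(mpf)].copy(), 0.0
            stall = 0
    return g, gf

if __name__ == "__main__":
    best_x, best_f = two_swarm_pso()
    print("best fitness found:", best_f)

The point of the structure is simply that the two swarms optimize the same objective with different learning sources and exchange information only occasionally.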




Yuanxia Shen, Linna Wei, Chuanhua Zeng, and Jian Chen
School of Computer Science and Technology, Anhui University of Technology, Maanshan 243002, China

Received 16 July 2015; Revised 11 October 2015; Accepted 15 October 2015
Academic Editor: Manuel Grana

Copyright © 2016 Yuanxia Shen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Particle Swarm Optimization (PSO) [1, 2], first proposed by Kennedy and Eberhart in 1995, was inspired by simplified simulations of social behaviors such as fish schooling and bird flocking. Like the genetic algorithm, it is a population-based algorithm, but it uses no evolutionary operators such as crossover, mutation, or selection. PSO finds the global best solution by adjusting the trajectory of each particle not only towards its personal best position pbest but also towards the historically global best position gbest [3]. Recently, PSO has been successfully applied to optimization problems in many fields [4–7].

In the basic PSO [1], each particle in the swarm learns from pbest and gbest. During the evolutionary process, gbest is the only information shared by the whole swarm, so all particles eventually converge towards the same destination and diversity is lost quickly. If gbest is a local optimum far from the global one, the swarm is easily trapped there. The learning mechanism of the basic PSO thus yields a fast convergence rate but easily leads to premature convergence on multimodal optimization problems. To overcome this problem, researchers have proposed many improvement strategies.

Adapting the learning parameters [3, 8–18] is an effective way to improve PSO performance. Shi and Eberhart [8] proposed a linearly decreasing inertia weight (LDIW) to balance local search and global search.
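For reference, the learning mechanism described above is captured by the canonical velocity and position updates, with the LDIW schedule applied to the inertia weight; the range 0.9 down to 0.4 is the commonly reported setting, not a value taken from this paper:

v_i(t+1) = w(t)·v_i(t) + c_1·r_1·(pbest_i − x_i(t)) + c_2·r_2·(gbest − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
w(t) = w_max − (w_max − w_min)·t/T          (LDIW; typically w_max = 0.9, w_min = 0.4)

Here c_1 and c_2 are the acceleration coefficients, r_1 and r_2 are uniform random numbers in [0, 1] drawn per dimension, and T is the maximum number of iterations.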
Ratnaweera et al. [3] proposed time-varying acceleration coefficients (TVAC), which enhance the exploration ability of particles in the early evolutionary phase and improve their local search ability in the late phase. In [3], two variants of PSO-TVAC were developed, namely PSO-TVAC with mutation (MPSO-TVAC) and the self-organizing hierarchical PSO-TVAC (HPSO). Zhan et al. [9] proposed an adaptive PSO in which the learning parameters are adjusted according to the evolutionary state of the swarm. Kundu et al. [10] proposed nonlinearly time-varying acceleration coefficients and an aging guideline to avoid premature convergence, and also suggested a mean learning strategy to enhance exploitation.

To increase swarm diversity, auxiliary techniques have been introduced into the PSO framework, such as genetic operators [3, 12, 13, 19], differential evolution [20], and the artificial bee colony (ABC) [21, 22]. Mahmoodabadi et al. [22] combined a multicrossover operation with the bee colony mechanism to improve the exploration capability of PSO. In [9], an elitist learning strategy, similar to a mutation operation, was developed to help the gbest particle jump out of local optima.

The topological structure of the swarm has a significant effect on the performance of PSO [23–26]. Kennedy [23] pointed out that a small neighborhood suits complex problems, while a large neighborhood works well for simple problems. Parsopoulos and Vrahatis [24] integrated the benefits of the global PSO and the local PSO and proposed a unified PSO (UPSO). Mende (...truncated)
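As a concrete complement to the TVAC strategy mentioned above, the linear coefficient schedule can be written as a small helper like the one below. The ranges 2.5 down to 0.5 for c1 and 0.5 up to 2.5 for c2 are the settings commonly reported for TVAC; the function name and defaults are ours, not taken from this paper.

def tvac_coefficients(t, T, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    # Linearly shift emphasis from the cognitive term (exploration)
    # to the social term (exploitation) over T iterations.
    c1 = (c1_end - c1_start) * t / T + c1_start   # decreases over time
    c2 = (c2_end - c2_start) * t / T + c2_start   # increases over time
    return c1, c2

# e.g. tvac_coefficients(0, 100) -> (2.5, 0.5); tvac_coefficients(100, 100) -> (0.5, 2.5)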



Yuanxia Shen, Linna Wei, Chuanhua Zeng, and Jian Chen. Particle Swarm Optimization with Double Learning Patterns. Computational Intelligence and Neuroscience, 2016. DOI: 10.1155/2016/6510303. PDF: http://downloads.hindawi.com/journals/cin/2016/6510303.pdf