Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Xiao-peng Wei,1 Jian-xia Zhang,1 Dong-sheng Zhou,2 and Qiang Zhang2
1School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
2Key Laboratory of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian 116622, China
Received 2 March 2015; Revised 23 June 2015; Accepted 12 July 2015
Academic Editor: Reinoud Maex
Copyright © 2015 Xiao-peng Wei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose an improved algorithm, a multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization (PSO) in order to balance exploration and exploitation, and to enhance the capacity for global search when solving nonlinear optimization problems. The best particle guides the other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO and present a diversity analysis of the proposed method, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems.
1. Introduction
Many nonlinear optimization problems, often with conflicting objectives, are attracting increasing attention from researchers, who address them using various random search methods. Global optimization algorithms are employed widely to solve these problems. Particle swarm optimization (PSO) is a type of random optimization method, which was inspired by the flocking behavior of birds [2, 3]. Kennedy and Eberhart were the first to propose PSO. Compared with other swarm intelligence algorithms, PSO has a simple structure and a rapid convergence rate, and it is easy to implement, which makes it an effective method for solving nonlinear optimization problems [5, 6].
In recent years, many researchers have tried to improve PSO to overcome its shortcomings; that is, it exhibits premature convergence and is readily trapped by local optima. In order to improve the efficiency and effectiveness of multiobjective particle swarm optimization, a competitive and cooperative coevolutionary multiobjective particle swarm optimization algorithm (CCPSO) was presented by Goh et al. in 2010. A competitive and cooperative coevolution mechanism was introduced in the proposed CCPSO, but it does not handle the ZDT4 problem well, so it cannot be applied widely. Rathi and Vijay presented a modified PSO (EPSO) in 2010, where two stages are used to balance the local and global search. First, the bandwidth of a microstrip antenna (MSA) was modeled by a benchmark function. Second, the first output was then employed as an input to obtain the new output in the form of five parameters. The proposed EPSO is efficient and accurate. In 2011, a multiswarm self-adaptive and cooperative particle swarm optimization (MSCPSO) was proposed by Zhang and Ding, which employs four subswarms: subswarms 1 and 2 are basic, but subswarm 3 is influenced by subswarms 1 and 2, while subswarm 4 is affected by subswarms 1, 2, and 3. The four subswarms employ a cooperative strategy. While it achieved good performance in solving complex multimodal functions, MSCPSO was not applied to practical engineering problems. A new chaos-enhanced accelerated PSO algorithm was proposed by Gandomi et al. in 2013, which delivered good performance when applied to a complex problem, but it is not easy to operate. Ding et al. developed the multiswarm cooperative chaos particle swarm optimization algorithm in 2013, which includes chaos and multiswarm cooperative strategies, but this method was proposed only to optimize the parameters of a least squares support vector machine.
In order to establish and optimize the alternative path, an efficient routing recovery protocol with endocrine cooperative particle swarm optimization was proposed by Hu et al. in 2015, which employs a multiswarm evolution equation. Qin et al. proposed a novel coevolutionary particle swarm optimizer with parasitic behavior in 2015, where a host swarm and a parasite swarm exchange information. This method performs better in terms of solution accuracy and convergence, but its structure is complex.
In the present study, we introduce a multiswarm particle swarm optimization with transfer of the best particle (BMPSO), in order to improve the global search capacity and to avoid trapping in local optima. The proposed algorithm employs three slave swarms and a master swarm. The best particle and worst particle are selected by the PSO from every slave swarm. The best particle is then transferred to the next slave swarm to replace the worst particle. The three best particles are then stored in the master swarm. Finally, the optimal value is obtained by PSO from the master swarm. Compared with other optimization algorithms, this parasitism strategy is easier to understand. The control parameters for the proposed algorithm do not increase and it can find the optimal solution easily.
The remainder of this paper is organized as follows. In Section 2, we provide an overview of the standard PSO. Our proposed BMPSO is explained in Section 3. We present the results of our numerical experimental simulations as well as comparisons in Section 4. In Section 5, we apply our proposed BMPSO to a practical engineering problem. Finally, we give our conclusions.
2. Standard PSO
In 1995, the standard PSO algorithm was presented by Kennedy and Eberhart, and it was subsequently developed further due to its easy implementation, high precision, and fast convergence. Similar to other evolutionary algorithms, it starts from random solutions and performs iterative searches to find the best solution. The quality of a solution is evaluated based on its fitness.
PSO utilizes a swarm population of particles that have neither mass nor volume. Each particle moves in a $D$-dimensional space according to its own experience and that of its neighbors while searching for the best solution. The position of the $i$th particle is expressed by the vector $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, and its velocity by the vector $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. In each step, the particles move according to the following formulae:
$$v_{id} = v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id}), \quad (1)$$
$$x_{id} = x_{id} + v_{id}, \quad (2)$$
where $d = 1, 2, \ldots, D$; $c_1$ and $c_2$ are acceleration constants; $r_1$ and $r_2$ represent two independent uniformly distributed random variables between 0 and 1; $p_i$ is the best previous position of the particle itself; and $p_g$ is the best global value.
In 1998, an inertia weight $w$ was introduced into formula (1) by Shi and Eberhart, as shown by the following formula:
$$v_{id} = w v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id}). \quad (3)$$
It was demonstrated that the inclusion of a suitable inertia weight allows the best solution to be searched more accurately. This weight can balance exploration and exploitation.
The standard PSO procedure is illustrated as follows.
Step 1. Initialize the position and velocity of each particle randomly in the population.
Step 2. Evaluate the value of the fitness function. Store the values in pbest and gbest.
Step 3. Update the position and velocity of each particle using formulae (2) and (3).
Step 4. Compare pbest and gbest, and then update gbest.
Step 5. Assess whether the termination conditions are met. If not, go back to Step 3.
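The five steps above can be sketched in Python as follows. This is an illustrative sketch, not the authors' MATLAB implementation; the parameter values (30 particles, $w = 0.7$, $c_1 = c_2 = 1.5$, 200 iterations, search range $[-5, 5]$) are assumptions chosen for a stable demonstration on the Sphere function.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def standard_pso(fitness, dim, n_particles=30, w=0.7, c1=1.5, c2=1.5,
                 iters=200, lo=-5.0, hi=5.0):
    # Step 1: initialize positions and velocities randomly
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: evaluate fitness; store pbest and gbest
    pbest = [xi[:] for xi in x]
    pbest_f = [fitness(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):                        # Step 5: repeat until the budget is spent
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 3: velocity update with inertia weight, formula (3)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]                # position update, formula (2)
            f = fitness(x[i])
            if f < pbest_f[i]:                    # Step 4: compare and update pbest/gbest
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

random.seed(42)
best, best_f = standard_pso(sphere, dim=5)
```

With these settings the swarm converges toward the origin, the global optimum of the Sphere function.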
3. General Description of BMPSO
The standard PSO is readily trapped by local optima and is susceptible to premature convergence when solving multiobjective problems with constraint conditions. Thus, we propose a new method for multiswarm PSO with transfer of the best particle, called BMPSO. Our proposed algorithm can be applied not only to unconstrained problems but also to problems with constraint conditions. It has the ability to escape local optima and prevent premature convergence.
3.1. Best Particle Coevolutionary Mechanism
In this section, we provide a general description of BMPSO. The proposed method employs three slave swarms and a master swarm, with a specific relationship among the four swarms. The three slave swarms are designated slave-1, slave-2, and slave-3. First, the best particle-1 of slave-1 is selected and stored in the master swarm. Second, the worst particle of slave-2 is replaced with the best particle-1. The best particle-2 is then found and stored in the same manner as particle-1. The same strategy is applied to slave-3. When the three slave swarms have been processed, the master best particle is found in the master swarm as the optimal value. This process is repeated until the termination conditions are met. The proposed BMPSO involves both cooperation and competition; this evolutionary strategy is called parasitism in nature. The structure of the BMPSO is shown in Figure 1.
Figure 1: Structure of BMPSO.
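One round of the best-particle transfer described above can be sketched in Python. This is an illustrative sketch under assumptions (the function name, the list-of-swarms layout, and the use of copies are ours, not the authors' code): the best particle of each slave is stored in the master swarm and also replaces the worst particle of the following slave.

```python
def transfer_round(slaves, fitness, master):
    """One round of best-particle transfer among slave swarms (sketch).

    Each swarm is a list of position vectors. The best particle of each
    slave is copied into the master swarm, and it also replaces the worst
    particle of the next slave swarm, as described in the text.
    """
    for k, swarm in enumerate(slaves):
        best = min(swarm, key=fitness)
        master.append(list(best))                 # store the slave's best in the master swarm
        if k + 1 < len(slaves):
            nxt = slaves[k + 1]
            worst = max(range(len(nxt)), key=lambda i: fitness(nxt[i]))
            nxt[worst] = list(best)               # best particle replaces the next slave's worst
    return min(master, key=fitness)               # current candidate for the master best

# Toy usage: three one-dimensional slave swarms, Sphere fitness
slaves = [[[3.0], [1.0]], [[4.0], [5.0]], [[2.0], [6.0]]]
master = []
master_best = transfer_round(slaves, lambda p: p[0] ** 2, master)
```

Note that the transfer is sequential: slave-2's best is selected only after slave-1's best has been injected, so a strong particle can propagate through all three slaves within one round.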
Our improved algorithm comprises three slave swarms and a master swarm, where a parasitism strategy is used to balance exploration and exploitation. The control parameters for the proposed BMPSO include the number of particles, the inertia weight, the dimension of the particles, the acceleration coefficients, and the number of iterations. The number of particles is determined by the complexity of the problem, typically from 5 to 100. The inertia weight determines how much of the current velocity is inherited. The dimension of the particles is determined by the optimization problem, i.e., the dimension of the required solution. The acceleration coefficients give the particles the capacity to learn from their own experience and from others, and they usually take a value of 2. The number of iterations can be determined by the experimental requirements. The pseudocode for the BMPSO is presented as follows.
Algorithm BMPSO
Begin
  Specify the population of each slave swarm
  Initialize the velocity and position
  Repeat
    Evaluate the fitness value
    Find the best particle and worst particle in each slave swarm
    Use the best particle to replace the worst particle in the next slave swarm
    Store the best particle in the master swarm
    Find the optimal value in the master swarm
  Until a terminate-condition is met
End
3.2. Diversity Analysis of BMPSO
In order to explain the proposed algorithm in detail, we illustrate the search capacity of each particle in the four swarms. The worst particle is replaced by the best particle. The best particle will lead the other particles away from a local optimum. Figure 2 shows the evolutionary processes for the particles in slave-1, slave-2, slave-3, and the master swarm.
Figure 2: Evolutionary processes based on the Sphere function.
The evolutionary processes based on the Sphere function are shown in Figure 2. The graphs represent the results obtained by the proposed algorithm in a single run. Figure 2(a) shows that the particles in the four swarms performed their search behavior in a smooth manner. The diversity was improved when the best particle replaced the worst particle. This point is illustrated clearly by the experiments with the test function. Figure 2(b) shows the status of the particles during different generations based on the distances among the particles of the four swarms. The proposed algorithm made a greater effort to avoid becoming trapped by a local optimum after the 50th generation, while still considering the convergence speed. Thus, it maintained the diversity of the fitness value, but not at the cost of the convergence speed.
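The diversity plotted in Figure 2(b) can be quantified, for example, as the mean pairwise Euclidean distance among particle positions. The exact measure used for the figure is not stated in the text, so the metric below is an assumption offered as one reasonable sketch:

```python
import math

def swarm_diversity(swarm):
    """Mean pairwise Euclidean distance among particle positions.

    A common, simple diversity measure: near 0 when the swarm has
    collapsed onto one point, large when particles are spread out.
    """
    n = len(swarm)
    if n < 2:
        return 0.0
    total = sum(math.dist(swarm[i], swarm[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)    # divide by the number of pairs
```

Tracking this quantity per generation reproduces the qualitative picture in Figure 2(b): diversity drops as the swarm converges, and jumps when a transferred best particle pulls others into a new region.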
The number of particles affects the optimization ability of BMPSO. In order to verify this, Figure 3 illustrates the convergence characteristics using the Sphere function as an example (, 20, 30, 50, 80).
Figure 3: Convergence characteristics for the Sphere function.
Figure 3 shows that the final optimized fitness value tended to improve as the number of particles increased. However, this improvement is less obvious in the later stage. In addition, the communication cost grows as the number of particles increases. In the proposed BMPSO, is sufficient for most problems.
In the standard PSO, the inertia weight $w$ is very important, because it affects the balance between local and global search. It describes the influence of inertia on velocity: the larger the inertia weight, the stronger the global search ability. In order to determine an appropriate value for $w$, we performed experiments with the test function, and Figure 4 shows the results obtained with different values of $w$.
Figure 4: Different inertia weights.
Figure 4 clearly demonstrates that the optimum fitness value is not easy to reach when the inertia weight is too small or too large. In the proposed BMPSO, is the optimum value.
To a certain extent, the dimension $D$ of the particles represents the complexity of a problem, where the search capacity decreases as $D$ increases. We performed experiments to determine the suitable range for $D$ using the Sphere function as an example, and Figure 5 illustrates the results obtained.
Figure 5: Results obtained using different dimensions ($D$).
Figure 5 shows that the proposed BMPSO is more effective with a smaller dimension. In the proposed algorithm, is a suitable dimension.
In this section, we considered the influence of different parameters on the proposed algorithm. This diversity analysis demonstrated the effectiveness of BMPSO, where the best particle shares more information with others and it replaces the worst particle during the evolutionary process. This guides the other particles to prevent them from being trapped by local optima and avoids premature convergence. The proposed algorithm balances exploration and exploitation in an effective manner.
4. Numerical Experiments
In order to determine whether the proposed algorithm is effective for nonlinear optimization problems, we performed experiments using standard test functions [21, 22]. The proposed algorithm was simulated and verified on the MATLAB platform. The results were compared with those obtained using a modified particle swarm optimizer called W-PSO, an improved particle swarm optimization combined with chaos called CPSO, a completely derandomized self-adaptation in evolution strategies called CMAES, and a multiswarm self-adaptive and cooperative particle swarm optimization called MSCPSO.
4.1. Standard Test Functions
In order to validate the efficiency of BMPSO, six standard test functions were employed in experiments to search for the optimum value of the fitness function. The six test functions were Sphere, Rastrigin, Griewank, Schwefel, Elliptic, and Rosenbrock. The global optima of these six standard test functions are equal to zero. The formulae for the six functions are shown in Table 1.
Table 1: Benchmark functions.
Sphere, Schwefel, and Elliptic are typical unimodal functions, so it is relatively easy to search for the optimum value; they represent the simple single-mode case. Rastrigin and Griewank are typical nonlinear multimodal functions with wide search spaces; their landscapes contain many peaks that rise and fall abruptly, and such complex multimodal problems are usually considered difficult for optimization algorithms. The global optimum of Rosenbrock lies in a smooth, narrow parabolic valley, which makes it difficult to locate because the function supplies little information to guide optimization algorithms. Together, the six standard test functions comprise a class of complex nonlinear problems.
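For reference, common textbook forms of these six benchmarks are sketched below. Since Table 1 is not reproduced here, these are assumed variants rather than the paper's exact formulae; in particular, the double-sum Schwefel 1.2 form is assumed because its global optimum is zero, matching the text.

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return 1.0 + s - p

def schwefel_1_2(x):
    # Double-sum (Schwefel 1.2) variant: global optimum 0 at the origin
    return sum(sum(x[:i + 1]) ** 2 for i in range(len(x)))

def elliptic(x):
    # High-conditioned elliptic function
    n = len(x)
    if n < 2:
        return sphere(x)
    return sum((10.0 ** 6) ** (i / (n - 1)) * v * v for i, v in enumerate(x))

def rosenbrock(x):
    # Optimum value 0, attained at x = (1, 1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Note that Rosenbrock's minimizer is the all-ones point, not the origin; its optimum *value* is zero, consistent with the statement above.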
4.2. Results and Comparative Study
In order to compare the results in a standard manner, all of the experiments were performed on the same computer using MATLAB R2013b. The parameters for the algorithms included the number of particles, the inertia weight, the dimension of the particles, the acceleration coefficients, and the number of iterations. For W-PSO and the proposed BMPSO, the parameters were set to the values given above, with dimensions of 10 and 30. For CPSO and MSCPSO, the parameters were set following their original studies, likewise with dimensions of 10 and 30. For CMAES, the parameters were set as described in the original study. In this study, all of the experiments were replicated independently 50 times. The best, worst, mean, and standard deviations of the fitness values were recorded to summarize the performance of the algorithms. The simulation results are presented in Table 2 (10 dimensions) and Table 3 (30 dimensions).
Table 2: Results for 10D problems.
Table 3: Results for 30D problems.
In Tables 2 and 3, the best results are highlighted in bold. Table 2 shows that the proposed BMPSO performed better than W-PSO, CPSO, CMAES, and MSCPSO for Rosenbrock in terms of the standard deviation ("Std."). The results obtained by BMPSO were closest to the theoretical value. It should be noted that the optimal value of 0 could be found for Rosenbrock. Table 3 shows that our proposed BMPSO performed better for Sphere, Griewank, Schwefel, Elliptic, and Rosenbrock compared with the other algorithms, whereas W-PSO performed the best for Rastrigin. According to this analysis, we conclude that the proposed BMPSO has a greater capacity for handling most nonlinear optimization problems than W-PSO, CPSO, CMAES, and MSCPSO.
5. Engineering Application
We also solved an engineering problem based on lightweight optimization design for a gearbox to confirm the efficiency of the proposed BMPSO. In the development of the auto industry, lightweight optimization design for gearboxes is attracting much attention because of energy and safety issues. There are three standard types of lightweight methods [26, 27]. In order to solve the problem effectively, Yang et al. proposed a method that first establishes a simplified model, analyzes it using ANSYS, builds an approximate model with the response surface methodology, and finally optimizes it with a genetic algorithm (GA). A simplified 3D model of a gearbox is shown in Figure 6. This simplification has no effect on the finite element analysis.
Figure 6: A model of a gearbox.
In the optimization process, the variables are the bottom thickness, axial thickness, and lateral thickness. A detailed description of the establishment of the fitness function can be found in the cited study. The optimization problem for the lightweight optimization design of a gearbox can be represented by the following formula:
For the proposed BMPSO, the parameter settings given above were used. The parameters for the GA were set as described in the cited study. This experiment was performed with 50 independent runs, and the results were expressed as the average of the 50 runs.
The optimization results presented in Table 4 show that the weight of the gearbox is 31.3458 kg according to the proposed BMPSO, which is 8.4682 kg less than the original design and 0.6368 kg less than the GA result. Thus, it can be concluded that the proposed BMPSO performed better than the GA, and it is effective in handling complex problems.
Table 4: Optimization results.
6. Conclusions
In this study, we proposed an improved multiswarm PSO method with transfer of the best particle, called BMPSO, which utilizes three slave swarms and a master swarm that share information via the best particle. All of the particles in the slave swarms search for the global optimum using the standard PSO. The best particle replaces the worst in order to guide the other particles and prevent them from becoming trapped by local optima. We performed a diversity analysis of BMPSO using the Sphere function, and we applied the proposed algorithm to standard test functions and an engineering problem.
We introduced parasitism into the standard PSO to develop a multiswarm PSO that balances exploration and exploitation. The advantages of the proposed BMPSO are that it is easy to understand, has a low number of parameters, and is simple to operate. In the proposed algorithm, the strategy of using the best particle to replace the worst enhances the global search capacity when solving nonlinear optimization problems. The optimum solution is obtained from the master swarm. The diversity analysis also demonstrated the obvious improvements obtained using the proposed algorithm. Compared with previously proposed algorithms, our BMPSO delivered better performance. The result obtained by the BMPSO in an engineering problem demonstrated its efficiency in handling a complex problem.
In further research, convergence analysis of the method should be performed in detail. Our further research will also focus on extensive evaluations of the proposed BMPSO by solving more complex and discrete practical optimization problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant no. 61425002).
References
[1] X.-S. Yang, Engineering Optimization: An Introduction with Metaheuristic Applications, vol. 7, John Wiley & Sons, 2010.
[2] X. Chu, M. Hu, T. Wu, J. D. Weir, and Q. Lu, "AHPS2: an optimizer using adaptive heterogeneous particle swarms," Information Sciences, vol. 280, pp. 26–52, 2014.
[3] S. Saini, N. Zakaria, D. R. A. Rambli, and S. Sulaiman, "Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization," PLoS ONE, vol. 10, no. 5, Article ID e0127833, 2015.
[4] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, November-December 1995.
[5] E. Talbi, Metaheuristics: From Design to Implementation, vol. 6, John Wiley & Sons, 2009.
[6] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[7] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Biomimicry of parasitic behavior in a coevolutionary particle swarm optimization algorithm for global optimization," Applied Soft Computing, vol. 32, no. 7, pp. 224–240, 2015.
[8] C. K. Goh, K. C. Tan, D. S. Liu, and S. C. Chiam, "A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design," European Journal of Operational Research, vol. 202, no. 1, pp. 42–54, 2010.
[9] A. Rathi and R. Vijay, "Expedite particle swarm optimization algorithm (EPSO) for optimization of MSA," Swarm, Evolutionary, and Memetic Computing, vol. 6466, pp. 163–170, 2010.
[10] J. Zhang and X. Ding, "A multi-swarm self-adaptive and cooperative particle swarm optimization," Engineering Applications of Artificial Intelligence, vol. 24, no. 6, pp. 958–967, 2011.
[11] A. H. Gandomi, G. J. Yun, X.-S. Yang, and S. Talatahari, "Chaos-enhanced accelerated particle swarm optimization," Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 2, pp. 327–340, 2013.
[12] G. Ding, L. Wang, P. Yang, P. Shen, and S. Dang, "Diagnosis model based on least squares support vector machine optimized by multi-swarm cooperative chaos particle swarm optimization and its application," Journal of Computers, vol. 8, no. 4, pp. 975–982, 2013.
[13] Y.-F. Hu, Y.-S. Ding, L.-H. Ren, K.-R. Hao, and H. Han, "An endocrine cooperative particle swarm optimization algorithm for routing recovery problem of wireless sensor networks with multiple mobile sinks," Information Sciences, vol. 300, pp. 100–113, 2015.
[14] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Biomimicry of parasitic behavior in a coevolutionary particle swarm optimization algorithm for global optimization," Applied Soft Computing, vol. 32, pp. 224–240, 2015.
[15] Y. H. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69–73, IEEE, Anchorage, Alaska, USA, May 1998.
[16] B. Liu, L. Wang, Y.-H. Jin, F. Tang, and D.-X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[17] N. Hansen and A. Ostermeier, "Completely de-randomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
[18] Y. V. Pehlivanoglu, "A new particle swarm optimization method enhanced with a periodic mutation strategy and neural networks," IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 436–452, 2013.
[19] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[20] A. E. Douglas, Symbiotic Interactions, Oxford University Press, Oxford, UK, 1994.
[21] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
[22] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[23] K. T. Li, M. N. Omidvar, Z. Yang, and K. Qin, "Benchmark functions for the CEC'2013 special session and competition on large-scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), Cancun, Mexico, June 2013.
[24] T. Xiang, X. Liao, and K.-W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[25] G. J. Koehler, "Conditions that obviate the no-free-lunch theorems for optimization," INFORMS Journal on Computing, vol. 19, no. 2, pp. 273–279, 2007.
[26] Y. Shi, P. Zhu, L. Shen, and Z. Lin, "Lightweight design of automotive front side rails with TWB concept," Thin-Walled Structures, vol. 45, no. 1, pp. 8–14, 2007.
[27] Y. Zhang, X. M. Lai, P. Zhu, and W. R. Wang, "Lightweight design of automobile component using high strength steel based on dent resistance," Materials and Design, vol. 27, no. 1, pp. 64–68, 2006.
[28] G. Yang, J. Zhang, Q. Zhang, and X. Wei, "Research on lightweight optimization design for gear box," in Proceedings of the 7th International Conference on Intelligent Robotics and Applications, pp. 576–585, Guangzhou, China, 2014.