1. Introduction
Optimization is the process of finding the most promising solution to a given problem. It requires globally maximizing or minimizing a desired objective function in a multi-dimensional search space. Optimization has long been one of the most important topics in science and engineering research. In research areas such as machine learning, artificial intelligence, and complex systems, there are plenty of problems that can be regarded as optimization problems. Generally, a continuous optimization problem can be defined as min f(x), x = (x_1, x_2, ..., x_D), where D is the number of decision variables. Each variable satisfies a boundary constraint x_i ∈ [l_i, u_i], i = 1, 2, ..., D.
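As an illustration of this general form, the sketch below defines a standard multimodal benchmark objective (the Rastrigin function, used here purely as an example; it is not an objective discussed in this article) together with its boundary constraints. The names `rastrigin`, `D`, and `bounds` are this example's own.

```python
import math

# The Rastrigin function: a standard multimodal benchmark objective with a
# global minimum f(0, ..., 0) = 0 and many regularly spaced local minima.
def rastrigin(x):
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

D = 2                          # number of decision variables
bounds = [(-5.12, 5.12)] * D   # boundary constraints l_i <= x_i <= u_i

print(rastrigin([0.0, 0.0]))   # 0.0 at the global optimum
```

The many local minima of such a function are exactly what causes gradient-based methods, discussed next, to stall away from the global optimum.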
Conventional optimization techniques, such as gradient-based methods, can be successfully applied to relatively simple problems, but they easily become trapped in local optima on many real-world multimodal problems. The stochastic algorithms that have been proposed for solving optimization problems globally can be divided into two major categories. The first category is based on a local search strategy, such as the simulated annealing (SA) algorithm. SA (Kirkpatrick, Gelatt, & Vecchi, 1983) was originally derived from an analogy to the heating and cooling of materials, and many variants were subsequently proposed to improve its performance (Ingber, 1996; Wu & Chow, 2007). The second category is based on a global search strategy, such as the Genetic Algorithm (GA) (Vose, 1999; Kumar & Rockett, 1998; Maruyama & Igarashi, 2008), Evolution Strategy (ES) (Beyer, 2001), Differential Evolution (DE) (Storn & Price, 1997; Kim et al., 2007; Lampinen & Storn, 2004), Particle Swarm Optimization (PSO) (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995; Shi & Eberhart, 1998; Eberhart & Shi, 2001), and the Self-Organizing Migrating Algorithm (SOMA) (Zelinka, 2004). Algorithms in the second category, also called Evolutionary Algorithms (EAs), have become increasingly popular for solving highly complicated optimization problems. Recently, other optimization algorithms based on neural networks have also been developed (Huhse et al., 2002; Milano, Koumoutsakos, & Schmidhuber, 2004). Among them, the self-organizing potential field network (SOPFN) achieves significant performance improvements on multimodal problems (Xu & Chow, 2010). SOPFN combines the self-organizing map (SOM) (Kohonen, 1997) with the vector potential field algorithm (VPF) (Masoud & Masoud, 2000; Khatib, 1986) to model the search space as a self-organizing potential field.
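To make the local-search category concrete, the following is a minimal sketch of basic simulated annealing: a neighbour of the current point is proposed, downhill moves are always accepted, and uphill moves are accepted with probability exp(-Δ/T) under a geometric cooling schedule. All function and parameter names (`simulated_annealing`, `t0`, `cooling`, `step`) are this sketch's own, not from the cited SA literature.

```python
import math
import random

def simulated_annealing(objective, bounds, n_iters=5000, t0=1.0,
                        cooling=0.999, step=0.1):
    """Minimize `objective` over box `bounds` with basic simulated annealing."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    fx = objective(x)
    best, best_val = x[:], fx
    t = t0
    for _ in range(n_iters):
        # Propose a neighbour by Gaussian perturbation, clamped to the bounds.
        y = [min(hi, max(lo, xi + random.gauss(0.0, step * (hi - lo))))
             for xi, (lo, hi) in zip(x, bounds)]
        fy = objective(y)
        # Accept downhill moves always; uphill moves with probability exp(-dE/T).
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_val:
                best, best_val = x[:], fx
        t *= cooling  # geometric cooling: the "temperature" slowly decreases
    return best, best_val
```

As the temperature falls, the chain accepts fewer uphill moves and settles into a basin; with a single chain and purely local proposals, that basin may be a local rather than the global optimum, which is the weakness noted above.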
Compared with local search methods, global search methods are less likely to become trapped in local optima, but they require much more computation time to find the optimum because they must explore the whole search space. For example, the original PSO (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995) evolves the individuals cooperatively using their successful search histories. In the original PSO, each individual, called a particle, adapts its position towards both its own historical best position and the best position found so far by its companions.
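The particle update just described can be sketched as follows: each particle's velocity combines an inertia term with random pulls toward its personal best and the swarm's global best. This is a minimal illustration of the canonical global-best PSO, not the specific implementations in the cited papers; the parameter values (`w`, `c1`, `c2`, swarm size) are common textbook defaults chosen for the sketch.

```python
import random

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over box `bounds` with basic global-best PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position found by any particle
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = bounds[d]
                pos[i][d] = max(lo, min(hi, pos[i][d]))  # enforce boundary constraint
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note that every particle must be evaluated at every iteration, which illustrates the computational cost of exploring the whole search space that distinguishes global from local search.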