1. Introduction
Many optimization algorithms have been proposed for solving the continuous nonlinear function optimization problem:

$$\min_{x \in \mathbb{R}^{D}} f(x), \qquad x_{\min} \le x \le x_{\max}$$

The vector $x = (x_1, \ldots, x_D)$ is composed of $D$ real-valued variables, and the bound vectors $x_{\min}$ and $x_{\max}$ are assumed finite and to satisfy $x_{\min} < x_{\max}$. The function $f$ is typically multimodal, so that local optima do not in general correspond to global optima. Examples of well-known optimization methods used to solve this problem are Genetic Algorithms (GA) (Goldberg 1989), Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995) and Differential Evolution (DE) (Storn and Price 1995).
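As a concrete illustration of this problem class (an example chosen here, not taken from the paper), the Rastrigin function is a standard box-constrained multimodal benchmark: it has many local optima, but a single global optimum at the origin.

```python
import math
import random

def rastrigin(x):
    """Rastrigin function: highly multimodal, global minimum f(0,...,0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

# Box constraints: each of the D variables is confined to [x_min, x_max].
D = 5
x_min, x_max = -5.12, 5.12

# A random feasible point versus the known global optimum.
x = [random.uniform(x_min, x_max) for _ in range(D)]
print(rastrigin(x))          # value at a random feasible point
print(rastrigin([0.0] * D))  # 0.0 at the global optimum
```

A local search started from the random point will usually stall in one of the many surrounding basins, which is exactly the behavior that motivates global methods for this problem class.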
In this paper we are looking for a method that is easy to understand, easy to code, easy to use, but nevertheless efficient:
- Easy to use – requires no tuning and has few parameters (ideally, none that are visible to the user).
- Easy to implement – typically has around 50 instructions in a language like MATLAB®.
- Easy to understand – one can "visualize" what happens in 2D or 3D problems.
- Efficient – based on properties that are valid in any dimension (or, at least, on a large range of dimensions, say 2 to 100).
Bio-inspired algorithms (Olariu and Zomaya 2005) may be good candidates, but they come from the 3D world and most of them perform well only on low-dimensional problems.
Hence the idea was to "revisit" the classical Nelder-Mead (NM) simplex method (Nelder and Mead 1965), which is purely geometrical and nevertheless quite intuitive and easy to implement. However, NM is well known to perform poorly on multimodal problems because it is easily trapped in a local optimum (Lagarias et al. 1998). In addition, NM degrades when applied to high-dimensional problems (Wright 1996). Consequently, several modifications to NM have been proposed in the literature to improve its performance for global optimization (e.g. Zhao et al. 2009; Gao and Han 2012). More recently, Karimi and Siarry (2012) proposed a global optimization method based on NM, called Global Simplex Optimization (GSO), which incorporates a multi-stage, stochastic and weighted version of the NM reflection operator. GSO generally performed better than five optimization methods (including CMA-ES (Hansen 2006)) on 17 test functions. However, no problem in their study had dimension greater than 10.
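Since the reflection operator is the NM step that GSO generalizes, it is worth recalling what the classical operator does. The sketch below (a minimal illustration of standard NM reflection, not the GSO variant) mirrors the worst vertex of the simplex through the centroid of the remaining vertices:

```python
def reflect(simplex, f, alpha=1.0):
    """One classical Nelder-Mead reflection step.

    Mirrors the worst vertex through the centroid of the remaining
    vertices, using the standard reflection coefficient alpha = 1.
    """
    # Sort vertices from best (lowest f) to worst (highest f).
    simplex = sorted(simplex, key=f)
    worst, others = simplex[-1], simplex[:-1]
    d = len(worst)
    # Centroid of all vertices except the worst one.
    centroid = [sum(v[i] for v in others) / len(others) for i in range(d)]
    # Reflected point: x_r = centroid + alpha * (centroid - worst).
    return [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(d)]

# Example on a 2D quadratic: the step moves away from the worst vertex.
f = lambda v: v[0] ** 2 + v[1] ** 2
simplex = [[2.0, 2.0], [2.5, 2.0], [2.0, 2.5]]
xr = reflect(simplex, f)
print(xr)  # [2.5, 1.5]: worst vertex (2, 2.5) mirrored through centroid (2.25, 2)
```

On a unimodal function this deterministic step steadily drags the simplex downhill, which also makes clear why plain NM stalls on multimodal landscapes: once the whole simplex sits inside one basin, reflection alone cannot escape it.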