1. Introduction
Evolutionary Multi-objective Optimization (EMO) methods are population-based approaches for solving Multi-objective Optimization Problems (MOPs). A significant amount of work has been done in the recent past, wherein researchers have developed a number of efficient algorithms such as SPEA2 (Zitzler, Laumanns, & Thiele, 2001), NSGA-II (Deb, Pratap, Agarwal, & Meyarivan, 2002), PESA-II (Corne, Jerram, Knowles, Oates, & Martin, 2001), ε-MOEA (Deb, Mohan, & Mishra, 2005), MOEA/D (Q. Zhang & Li, 2007), and NSGA-III (Deb & Jain, 2014). The foremost reason for using Evolutionary Algorithms (EAs) to solve MOPs is the unique feature of the population-based approach: each individual is a candidate solution, and the population explores many regions of the solution space in a single simulation run, yielding a good approximation of the complete Pareto set and, consequently, of the Pareto front. Primarily, there are two aspirations that a generic MOEA sets out to achieve while solving MOPs (Deb, 1999): first, to converge the search onto the true global Pareto front, and second, to produce a diversified set of Pareto points in the generated approximate Pareto front. However, a major problem in EMO appears in the form of the Dominance Resistance (DR) phenomenon (Li, Li, Tang, & Yao, 2015), wherein an exponentially increasing fraction of the generated population becomes nondominated, thus adversely affecting the generation of a better population. Under such circumstances, where the dominance-based primary criterion fails to define a preference among solutions, a relaxed dominance criterion such as ε-dominance (Laumanns, Thiele, Deb, & Zitzler, 2002) is applied. Additionally, to counter the effect of DR and enhance the selection pressure towards the Pareto front, a density-based secondary criterion, called the Active Diversity Preservation (ADP) mechanism, is applied.
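To make the two comparison criteria concrete, the following is a minimal Python sketch (for minimization, and not the implementation used in this work) of standard Pareto dominance alongside the additive ε-dominance relaxation of Laumanns et al. (2002): a solution a ε-dominates b if shifting a by ε in every objective still weakly dominates b, which lets a single representative cover an ε-neighborhood of otherwise mutually nondominated points.

```python
def dominates(a, b):
    """Standard Pareto dominance for minimization:
    a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


def eps_dominates(a, b, eps):
    """Additive epsilon-dominance (Laumanns et al., 2002),
    minimization form: a eps-dominates b if a_i - eps <= b_i
    for every objective i."""
    return all(x - eps <= y for x, y in zip(a, b))


# Two nearly identical solutions are incomparable under strict
# Pareto dominance, but the relaxed criterion resolves the tie:
a, b = (1.0, 1.0), (1.05, 0.98)
print(dominates(a, b))            # False: incomparable
print(eps_dominates(a, b, 0.1))   # True: eps-dominated
```

The objective vectors and the value ε = 0.1 above are illustrative only; in practice ε (possibly one value per objective) controls the resolution of the retained approximation set.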
However, computing the density or extent of crowding of solutions in a population, and identifying neighbors, becomes computationally expensive in a high-dimensional objective space. It has been shown that purely Pareto-based approaches such as NSGA-II and SPEA2 deteriorate due to the DR and ADP effects (Purshouse & Fleming, 2007) on Many-objective Optimization Problems (MaOPs), where the number of objective functions is greater than two. Approximating the diversity estimate to make the computation faster may cause an unacceptable distribution of solutions at the end (Deb & Jain, 2014). Additionally, in some cases, the final solution set might not even converge to the true Pareto front (Wagner, Beume, & Naujoks, 2007).
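As one concrete example of such a density estimate, the crowding distance of NSGA-II can be sketched as below (a simplified illustration, not this paper's method). It requires one sort per objective, i.e., O(m·n·log n) work for n solutions and m objectives, which is how the cost of diversity preservation grows with the dimensionality of the objective space.

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of nondominated
    objective vectors (each of the same length m).
    Boundary points in each objective receive infinite distance;
    interior points accumulate the normalized gap between their
    two nearest neighbors along each objective."""
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for j in range(m):                       # one O(n log n) sort per objective
        order = sorted(range(n), key=lambda i: front[i][j])
        d[order[0]] = d[order[-1]] = float("inf")
        span = front[order[-1]][j] - front[order[0]][j]
        if span == 0.0:                      # all points equal in objective j
            continue
        for k in range(1, n - 1):
            d[order[k]] += (front[order[k + 1]][j]
                            - front[order[k - 1]][j]) / span


    return d

# Three points on a 2-objective front: extremes get inf,
# the middle point gets the sum of normalized neighbor gaps.
print(crowding_distance([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]))
```

Solutions with larger crowding distance lie in sparser regions and are preferred when truncating a nondominated set, which is the density-based secondary criterion whose cost the discussion above is concerned with.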
In view of the above discussion, both the issue of convergence and that of diversity preservation are addressed in the present work. In the present approach, the Pareto-based concept is combined with the aggregation-based concept, to exploit the merits of both and to reduce the computational cost. A new Normalized Dominance operator is proposed within a Pareto dominance-based vector optimization method to counter the DR phenomenon. An external elitist archive, maintained in parallel, generates an evenly distributed global Pareto front while reducing the overall computational cost. A concept of merger probability is also implemented to enforce restricted mating and a fair-chance selection mechanism. It is shown that the present framework works fairly well with different pairs of crossover and mutation operators. Furthermore, it can be scaled up to at least 10-objective problems. The remainder of this paper is organized as follows: next, in this section, the multi-objective optimization problem is defined in brief, followed by a section presenting a thorough literature review. Section 2 introduces the details of the proposed Normalized Dominance operator and an efficient binary ranking method, along with the modifications introduced in the GA methods. Section 3 details the experimental setup and describes the simulations performed on 2-objective, followed by 3-, 5-, 8- and 10-objective benchmark test problems. The results obtained are discussed in Section 4, with final conclusions provided in Section 5.