Decomposition of Black-Box Optimization Problems by Community Detection in Bayesian Networks

Marcio K. Crocomo, Jean P. Martins, Alexandre C. B. Delbem
Copyright: © 2012 |Pages: 19
DOI: 10.4018/jncr.2012100101

Abstract

Estimation of Distribution Algorithms (EDAs) have proven to be an efficient alternative to Genetic Algorithms when solving nearly decomposable optimization problems. In general, EDAs replace genetic operators with probabilistic sampling, enabling a better use of the information provided by the population and, consequently, a more efficient search. In this paper the authors exploit EDAs' probabilistic models from a different point of view: they argue that by looking for substructures in the probabilistic models it is possible to decompose a black-box optimization problem and solve it in a more straightforward way. Relying on the Building-Block hypothesis and the nearly-decomposability concept, their decompositional approach is implemented by a two-step method: 1) the current population is modeled by a Bayesian network, which is then decomposed into substructures (communities) using a version of the Fast Newman Algorithm; 2) since the identified communities can be seen as sub-problems, they are solved separately and used to compose a solution for the original problem. The experiments showed strengths and limitations of the proposed method, but in some of the tested scenarios the authors' method outperformed the Bayesian Optimization Algorithm, requiring up to 78% fewer fitness evaluations and being 30 times faster.
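The decomposition step of the abstract can be illustrated with off-the-shelf tools. The sketch below is not the authors' implementation: it uses NetworkX's greedy modularity maximization (the Clauset–Newman–Moore descendant of the Fast Newman Algorithm) on an undirected stand-in for a Bayesian-network structure; the toy graph, with two densely connected variable groups joined by one weak edge, is our own assumption.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Undirected view of a hypothetical learned network structure:
# variables 0-2 and 3-5 are densely linked within each group,
# with a single edge bridging the two groups.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # substructure A
                  (3, 4), (4, 5), (3, 5),   # substructure B
                  (2, 3)])                  # weak inter-group link

# Greedy modularity maximization recovers the two substructures,
# which the method would then treat as separate sub-problems.
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```

On this toy graph the two triangles are recovered as separate communities; in the paper's setting each community would correspond to a sub-problem solved independently.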

Background

Fundamentals of Schema Theory

All developments described in this section assume the same scenario analyzed by Holland (1975): a simple GA with a proportionate selection operator, one-point crossover, and bit-flip mutations. The population X is composed of n decision vectors xi = (x1, ..., xℓ)T ∈ {0,1}ℓ.

In the GA context, each sample is called an individual, and the set of all individuals a population, X. Assuming that the search space is not random, if we take from the population those individuals which are best evaluated according to some function f(x), we are selecting solutions which share some important characteristics, as measured by f. Holland (1975) calls such features schemata, since they identify the inner structure of the individuals which made them be selected in the first place.

A schema is a string representing similarities among the solutions in a population (Goldberg, 2002). Furthermore, using a simple similarity template we can represent any possible schema by adding a wild-card character, which will identify any dissimilarities among the n ℓ-bit strings.

  • Definition 1.1 (Schema): A schema S is a string of ℓ characters from the alphabet {0,1,*} which defines a similarity template over a population of n strings of length ℓ.

We say that one solution x is represented by a schema S if x differs from S only in positions where si = *, for 1 ≤ i ≤ ℓ. Therefore, a schema S defines a set:

X(S) = { x ∈ {0,1}ℓ : xi = si for every i with si ≠ * }
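The membership test behind this set is a direct character-by-character comparison. A minimal sketch (the function name and strings below are our own illustration, not from the paper):

```python
def matches(schema: str, x: str) -> bool:
    """Return True if the bit string x is represented by the schema.

    A position holding the wild-card '*' matches either bit; every
    other position must agree exactly with the corresponding bit of x.
    """
    assert len(schema) == len(x), "schema and string must share length l"
    return all(s == '*' or s == b for s, b in zip(schema, x))

# The schema 1**0 represents every 4-bit string that starts with 1
# and ends with 0, e.g. 1010 and 1110, but not 0110.
```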
