Online Evolution of Adaptive Robot Behaviour

Fernando Silva, Paulo Urbano, Anders Lyhne Christensen
Copyright: © 2014 |Pages: 19
DOI: 10.4018/ijncr.2014040104

Abstract

The authors propose and evaluate a novel approach to the online synthesis of neural controllers for autonomous robots, combining online evolution of weights and network topology with neuromodulated learning. They demonstrate their method through a series of simulation-based experiments in which an e-puck-like robot must perform a dynamic concurrent foraging task. In this task, scattered food items periodically change their nutritive value or become poisonous. The authors demonstrate that the online evolutionary process, both with and without neuromodulation, is capable of generating controllers well adapted to the periodic task changes. They show that when neuromodulated learning is combined with evolution, neural controllers are synthesised faster than by evolution alone. An analysis of the evolved solutions reveals that neuromodulation allows for a more effective expression of a given topology's potential due to the active modification of internal dynamics. Neuromodulated networks learn abstractions of the task and different modes of operation that are triggered by external stimuli.

Introduction

The development of control systems for autonomous robots has progressed significantly since the early days of manually programming robots in low-level languages such as assembly (Lozano-Perez, 1983). In particular, evolutionary computation techniques have been widely studied with the purpose of automating the design of robotic systems (Floreano & Keller, 2010). In evolutionary robotics (ER), robot controllers are typically based on artificial neural networks (ANNs) due to their capacity to tolerate noisy sensor readings. The parameters of the ANN, namely the connection weights, the neuron bias terms, and occasionally the topology, are optimised by an evolutionary algorithm (EA), a process termed neuroevolution (Yao, 1999; Floreano et al., 2008).
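To make the neuroevolution idea concrete, the following sketch evolves the weights of a tiny fixed-topology controller with a (1+1) evolution strategy. It is an illustrative toy, not the algorithm studied in this article: the network shape, the mutation scheme, and the surrogate fitness function (squared error on a fixed set of input-output cases, standing in for task performance) are all assumptions made for the example.

```python
import math
import random

def make_network(weights):
    """A 2-input, 1-output feedforward controller with a single tanh unit."""
    w1, w2, bias = weights
    return lambda x1, x2: math.tanh(w1 * x1 + w2 * x2 + bias)

def fitness(weights, cases):
    """Negative squared error over (x1, x2, target) cases; higher is better."""
    net = make_network(weights)
    return -sum((net(x1, x2) - target) ** 2 for x1, x2, target in cases)

def evolve(cases, generations=200, sigma=0.3, seed=0):
    """(1+1) evolution strategy: perturb all weights with Gaussian noise
    and keep the child whenever it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    for _ in range(generations):
        child = [w + rng.gauss(0.0, sigma) for w in parent]
        if fitness(child, cases) >= fitness(parent, cases):
            parent = child
    return parent
```

Because the selection step only ever accepts equal-or-better children, fitness is non-decreasing over generations; real neuroevolution methods replace this hill-climber with a population-based EA and evaluate fitness through actual task execution.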

Evolutionary synthesis of neurocontrollers is usually performed offline in simulation: when a suitable neurocontroller is found, it is deployed on real robots. This approach presents a number of limitations. Since no evolution or adaptation takes place online, the controllers are fixed solutions that remain static throughout the robot’s lifetime. If environmental conditions or task parameters become distinct from those encountered during offline evolution, the evolved controllers may be incapable of solving the task, as they have no means to adapt.

Online evolution is a process of continuous adaptation that allows robots to modify their behaviour in order to respond to changes in the task or in environmental conditions. An EA is executed on the robots themselves while they perform their tasks. This way, robots may be capable of long-term self-adaptation in a completely autonomous manner. In recent years, different approaches to online evolution have been proposed, see for instance (Watson et al., 2002; Bianco & Nolfi, 2004; Bredeche et al., 2012). In these contributions, however, online neuroevolution has been limited to evolving weights in fixed-topology ANNs. In a recent study (Silva et al., 2012), we proposed a novel approach called odNEAT, an efficient online, distributed, and decentralised EA for online evolution in groups of robots. odNEAT optimises both weights and network topology as part of a continuous evolutionary process.
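A central ingredient of topology-evolving methods in the NEAT family, which odNEAT builds on, is the structural mutation that grows the network. The sketch below shows the classic "add node" mutation: an existing connection is split by inserting a new neuron, with the incoming link weighted 1.0 and the outgoing link inheriting the old weight, so the mutated network initially behaves much like its parent. The dictionary-based genome encoding is an assumption made for the example, not odNEAT's actual representation.

```python
import random

def add_node_mutation(connections, next_neuron_id, rng):
    """NEAT-style add-node mutation: disable a randomly chosen enabled
    connection (src -> dst) and replace it with two new connections
    routed through a freshly created neuron (src -> new -> dst)."""
    enabled = [c for c in connections if c["enabled"]]
    conn = rng.choice(enabled)
    conn["enabled"] = False
    new_id = next_neuron_id
    connections.append(
        {"src": conn["src"], "dst": new_id, "weight": 1.0, "enabled": True})
    connections.append(
        {"src": new_id, "dst": conn["dst"], "weight": conn["weight"], "enabled": True})
    return new_id
```

Repeated applications of this mutation (together with an analogous "add connection" mutation) let evolution complexify the topology gradually, starting from minimal networks.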

Evolution is a form of adaptation that acts at the genotypic level. The controllers produced are static in the sense that they do not change their parameters while they are controlling the robot. Whereas evolution produces phylogenetic adaptation, online learning operates on a shorter time-scale. Learning acts at the phenotypic level and gives each individual controller the capability to self-adjust during task execution. Several studies indicate that learning can accelerate the evolution of good solutions, a phenomenon known as the Baldwin effect (Hinton & Nowlan, 1987).

Agents controlled by ANNs can learn from experience by dynamically changing their internal synaptic strengths. This mechanism is inspired by how organisms in nature adapt to cope with dynamic and unstructured environments as a result of synaptic plasticity (Niv et al., 2002). In this article, we synthesise behavioural control for autonomous robots based on online evolution and online learning. We combine odNEAT, capable of efficient evolution of weights and network topology, with neuromodulation (Soltoggio et al., 2008). In biological organisms, neuromodulation is a form of synaptic modification involving modulatory neurons that diffuse chemicals at target synapses. Neuromodulation has been suggested as essential for stabilising classical Hebbian plasticity and memory (Bailey et al., 2000). The combination of online evolution and neuromodulated learning allows the evolutionary process to explore two distinct types of plasticity: (i) structural plasticity, the generation of new connections and neurons, which redefines the network topology, and (ii) synaptic plasticity, which changes the strength of existing connections within a given topology.
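Neuromodulated synaptic plasticity of the kind introduced by Soltoggio et al. (2008) can be sketched as a generalised Hebbian weight update gated by the activity of modulatory neurons. The coefficients A-D, the learning rate, and the single aggregated modulatory input below are illustrative assumptions; in the evolutionary setting, such parameters are themselves subject to evolution.

```python
import math

def neuromodulated_update(w, pre, post, mod, eta=0.1,
                          A=1.0, B=0.0, C=0.0, D=0.0):
    """One plasticity step: a generalised Hebbian term
    (A*pre*post + B*pre + C*post + D) scaled by the learning rate eta
    and gated by the squashed modulatory signal. When the modulatory
    input is near zero the weight is effectively frozen; a negative
    signal reverses the direction of learning."""
    m = math.tanh(mod)  # aggregate modulatory signal in (-1, 1)
    delta = m * eta * (A * pre * post + B * pre + C * post + D)
    return w + delta
```

This gating is what gives neuromodulated networks their different modes of operation: the same topology can learn, stop learning, or unlearn depending on which modulatory neurons external stimuli activate.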
