Generalized pheromone update for ant colony learning in continuous state spaces


Reference:
J. van Ast, R. Babuska, and B. De Schutter, "Generalized pheromone update for ant colony learning in continuous state spaces," Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC 2010), Barcelona, Spain, pp. 2617-2624, July 2010.

Abstract:
In this paper, we discuss the Ant Colony Learning (ACL) paradigm for non-linear systems with continuous state spaces. ACL is a novel control policy learning methodology, based on Ant Colony Optimization. In ACL, a collection of agents, called ants, jointly interact with the system at hand in order to find the optimal mapping between states and actions. Through stigmergic interaction by pheromones, the ants are guided by each other's experience towards better control policies. In order to deal with continuous state spaces, we generalize the concept of pheromones and the local and global pheromone update rules. As a result of this generalization, we can integrate both crisp and fuzzy partitioning of the state space into the ACL framework. We compare the performance of ACL with these two partitioning methods by applying it to the control problem of swinging up and stabilizing an under-actuated pendulum.
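To make the generalization concrete, the sketch below illustrates one plausible form of membership-weighted local and global pheromone updates. This is an illustrative reading, not the paper's exact update rules: the function names, the evaporation rates `rho_local`/`rho_global`, and the use of a dictionary of membership degrees are assumptions. The idea it captures is that a continuous state activates one partition with degree 1 under crisp partitioning, or several overlapping partitions with degrees summing to 1 under fuzzy partitioning, and each activated partition's pheromone is updated proportionally to its degree.

```python
def local_pheromone_update(tau, memberships, action, rho_local, tau_init):
    """Evaporate the pheromone of the chosen action in every partition
    activated by the current state, weighted by its membership degree mu.
    With a single mu = 1 (crisp case) this reduces to the standard
    ACS-style local update tau <- (1 - rho) * tau + rho * tau_init."""
    for part, mu in memberships.items():
        old = tau[part][action]
        tau[part][action] = (1 - rho_local * mu) * old + rho_local * mu * tau_init
    return tau


def global_pheromone_update(tau, trajectory, rho_global, reward):
    """Reinforce the state-action pairs along an ant's trajectory,
    again weighting each deposit by the membership degrees that were
    recorded when the state was visited."""
    for memberships, action in trajectory:
        for part, mu in memberships.items():
            old = tau[part][action]
            tau[part][action] = (1 - rho_global * mu) * old + rho_global * mu * reward
    return tau


# Example: a state lying between partitions 0 and 1 with fuzzy degrees 0.7/0.3.
tau = {0: {'left': 1.0, 'right': 1.0}, 1: {'left': 1.0, 'right': 1.0}}
tau = local_pheromone_update(tau, {0: 0.7, 1: 0.3}, 'left', 0.1, 0.5)
```

Under this formulation the fuzzy case smears each update over neighboring partitions, while the crisp case touches exactly one partition per visited state, so both fit the same pair of update rules.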


Downloads:
 * Corresponding technical report: pdf file (247 KB)


Bibtex entry:

@inproceedings{vanBab:10-019,
        author={J. van Ast and R. Babu{\v{s}}ka and B. {D}e Schutter},
        title={Generalized pheromone update for ant colony learning in continuous state spaces},
        booktitle={Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC 2010)},
        address={Barcelona, Spain},
        pages={2617--2624},
        month=jul,
        year={2010}
}





This page is maintained by Bart De Schutter. Last update: December 15, 2015.