Convergence analysis of ant colony learning


Reference:
J. van Ast, R. Babuska, and B. De Schutter, "Convergence analysis of ant colony learning," Proceedings of the 18th IFAC World Congress, Milan, Italy, pp. 14693-14698, Aug.-Sept. 2011.

Abstract:
In this paper, we study the convergence of the pheromone levels of Ant Colony Learning (ACL) in the setting of discrete state spaces and noiseless state transitions. ACL is a multi-agent approach for learning control policies that combines some of the principles found in ant colony optimization and reinforcement learning. Convergence of the pheromone levels in expected value is a necessary requirement for the convergence of the learning process to optimal control policies. In this paper, we derive upper and lower bounds for the pheromone levels and relate those to the learning parameters and the number of ants used in the algorithm. We also derive upper and lower bounds on the expected value of the pheromone levels.
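To give a flavor of why such bounds exist, here is a minimal sketch of a generic ACO-style evaporation-and-deposit pheromone update; the function name, parameters, and the specific update rule are illustrative assumptions and not necessarily the exact ACL update analyzed in the paper.

```python
def update_pheromones(tau, visited, rho=0.1, deposit=1.0):
    """One global update: evaporate all pheromone levels, then
    reinforce the state-action pairs visited in this trial.
    (Generic ACO-style rule, assumed for illustration only.)"""
    # Evaporation shrinks every level by a factor (1 - rho).
    new_tau = {sa: (1.0 - rho) * level for sa, level in tau.items()}
    # Visited state-action pairs receive a bounded deposit.
    for sa in visited:
        new_tau[sa] += rho * deposit
    return new_tau

# With 0 < rho < 1 and a bounded deposit, every level remains
# confined to [0, deposit] in the limit: repeatedly visited pairs
# approach the deposit value, unvisited pairs decay toward zero.
tau = {("s0", "a0"): 0.5, ("s0", "a1"): 0.5}
for _ in range(200):
    tau = update_pheromones(tau, visited={("s0", "a0")})
```

After these iterations, the visited pair's level is close to 1.0 and the unvisited pair's level is close to 0, illustrating the kind of upper and lower bounds on the pheromone levels that the paper derives and relates to the learning parameters and the number of ants.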


Downloads:
 * Online version of the paper
 * Corresponding technical report: pdf file (148 KB)


Bibtex entry:

@inproceedings{vanBab:11-012,
        author={J. van Ast and R. Babu{\v{s}}ka and B. {D}e Schutter},
        title={Convergence analysis of ant colony learning},
        booktitle={Proceedings of the 18th IFAC World Congress},
        address={Milan, Italy},
        pages={14693--14698},
        month=aug # {--} # sep,
        year={2011},
        doi={10.3182/20110828-6-IT-1002.01533}
}





This page is maintained by Bart De Schutter. Last update: March 21, 2022.