Title:  Learning in Games via Reinforcement and Regularization

 

Abstract

We investigate a class of reinforcement learning dynamics where players adjust their strategies based on their actions' cumulative payoffs over time: specifically, by playing mixed strategies that maximize their expected cumulative payoff minus a regularization term. A widely studied example is exponential reinforcement learning, a process induced by an entropic regularization term which leads mixed strategies to evolve according to the replicator dynamics. However, in contrast to the class of regularization functions used to define smooth best responses in models of stochastic fictitious play, the functions used in this paper need not be infinitely steep at the boundary of the simplex; in fact, dropping this requirement gives rise to an important dichotomy between steep and nonsteep cases. In this general framework, we extend several properties of exponential learning, including the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories in zero-sum games with an interior Nash equilibrium.
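
To make the setup concrete, here is a minimal numerical sketch (our own illustration, not the paper's formal model). In a standard formulation of such dynamics, each player tracks the vector of their actions' cumulative payoffs and plays the mixed strategy that maximizes expected cumulative payoff minus the regularizer; with the entropic regularizer h(x) = sum_a x_a log x_a, this choice map is the softmax (logit) map, and the induced continuous-time dynamics are the replicator dynamics. The Python snippet below discretizes these dynamics in Matching Pennies, a zero-sum game with an interior Nash equilibrium, and checks that the time-averaged strategies approach it; the payoff matrix, step size, and horizon are illustrative choices.

# Minimal sketch: exponential reinforcement learning (entropic regularizer)
# in Matching Pennies. All numerical choices below are illustrative.
import numpy as np

# Payoff matrix of Matching Pennies for player 1 (player 2 receives the negative).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def softmax(y):
    """Choice map induced by the entropic regularizer: argmax <x, y> - sum x log x."""
    z = np.exp(y - y.max())
    return z / z.sum()

dt, T = 0.01, 200_000          # Euler step and number of iterations (illustrative)
y1 = np.zeros(2)               # cumulative payoffs ("scores") of player 1's actions
y2 = np.zeros(2)               # cumulative payoffs of player 2's actions
avg1 = np.zeros(2)             # running time-average of player 1's mixed strategy
avg2 = np.zeros(2)

for t in range(1, T + 1):
    x1, x2 = softmax(y1), softmax(y2)
    # Each action's score grows at the rate of its expected payoff against the opponent.
    y1 += dt * (A @ x2)
    y2 += dt * (-A.T @ x1)
    # Incremental running means of the mixed strategies.
    avg1 += (x1 - avg1) / t
    avg2 += (x2 - avg2) / t

print("time-averaged strategies:", avg1.round(3), avg2.round(3))
# Both averages should land near [0.5, 0.5], the interior Nash equilibrium.

The Euler step is a crude discretization of the continuous-time dynamics; smaller steps (and longer horizons) track the time-average convergence stated in the abstract more closely.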

 

 

 

BIO:  Panayotis Mertikopoulos graduated valedictorian from the Physics Department of the University of Athens in 2003, majoring in astrophysics and theoretical mechanics. He obtained the M.Sc. and M.Phil. degrees in mathematics from Brown University, USA, in 2005 and 2006 respectively, and the Ph.D. degree in applied mathematics from the University of Athens in 2010. During 2010-2011, he was a post-doctoral researcher at the Economics and Operations Research Department of École Polytechnique, Paris, France. Since 2011, he has been a CNRS researcher at the Laboratoire d'Informatique de Grenoble, Grenoble, France.

 

P. Mertikopoulos was an Embeirikeion Foundation Fellow between 2003 and 2006, and received the best paper award at NETGCOOP '12. He is a member of the steering committee of the Optimization and Decision Theory branch of the French Society of Industrial and Applied Mathematics. His main research interests lie in algorithmic learning, optimization, game theory, and their applications to networks and operations research.