Deep Reinforcement Learning from Self-Play in Imperfect-Information Games
Johannes Heinrich J.HEINRICH@CS.UCL.AC.UK
David Silver D.SILVER@CS.UCL.AC.UK
University College London, UK
Abstract
Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without any prior knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a competitive strategy that approached the performance of human experts and state-of-the-art methods.
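To make the combination described in the abstract concrete, the sketch below illustrates the core decision loop of an NFSP-style agent: with probability eta it plays (and improves) an approximate best response via Q-learning, and otherwise it plays its average policy, which is trained in a supervised fashion towards the agent's own best-response actions. This is a minimal illustrative sketch, not the authors' implementation: tabular value and policy tables stand in for the paper's neural networks, the replay and reservoir memories of the full method are omitted, and the names NFSPAgent, eta, eps and alpha are choices made here for illustration.

import random
from collections import defaultdict

class NFSPAgent:
    def __init__(self, n_actions, eta=0.1, eps=0.1, alpha=0.05):
        self.n_actions = n_actions
        self.eta = eta      # anticipatory parameter: mix of best response vs. average policy
        self.eps = eps      # epsilon-greedy exploration for the best-response policy
        self.alpha = alpha  # learning rate for both tables
        # Tabular stand-ins for the paper's two networks:
        self.q = defaultdict(lambda: [0.0] * n_actions)                # RL: approximate best response
        self.avg = defaultdict(lambda: [1.0 / n_actions] * n_actions)  # SL: average policy

    def act(self, state):
        # With probability eta act (epsilon-greedily) from the best-response
        # policy; otherwise sample from the average policy, as in fictitious play.
        if random.random() < self.eta:
            if random.random() < self.eps:
                action = random.randrange(self.n_actions)
            else:
                action = max(range(self.n_actions), key=lambda a: self.q[state][a])
            # Supervised target: the agent's own best-response behaviour.
            self._update_avg(state, action)
            return action
        return random.choices(range(self.n_actions), weights=self.avg[state])[0]

    def _update_avg(self, state, action):
        # Move the average policy a small step towards the chosen action
        # (a one-hot target), keeping it a valid probability distribution.
        probs = self.avg[state]
        for a in range(self.n_actions):
            target = 1.0 if a == action else 0.0
            probs[a] += self.alpha * (target - probs[a])

    def update_q(self, state, action, reward, next_state, done):
        # One-step Q-learning update on the agent's own transitions
        # (undiscounted, since poker episodes are short and terminal-rewarded).
        target = reward if done else reward + max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

In the full method, the two tables would be replaced by a Q-network trained by deep Q-learning on a circular replay memory and an average-policy network trained by supervised classification on a reservoir of the agent's own best-response actions; the mixture controlled by eta is what lets the pair of learning processes approximate fictitious self-play.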