Deep learning for adaptive playing strength in computer games

Eli Omid David, Nathan S. Netanyahu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this abstract we present our initial results on the first successful attempt to train computer chess programs to realistically exhibit different playing strengths. While the main target of research in computer chess has always been achieving stronger playing strength, the seemingly easier task of creating realistically weaker programs remains challenging. Nowadays human chess players from novice to grandmaster are easily defeated by state-of-the-art chess programs, and thus gain little enjoyment or experience by playing against such an overwhelmingly superior opponent. As a result, commercial chess programs have always tried to allow the human user to adjust the program's strength to match their own. Previous attempts by commercial chess programs involved either limiting the amount of time or search depth used by the program, or randomly playing inferior moves with some probability. All these methods result in an unrealistic playing style, which yields little benefit or enjoyment for the human opponent (i.e., the computer program does not pass a Turing test). In our work, we train the chess program to realistically exhibit a targeted playing strength, without any artificial handicap. To do so, we build on our previous DeepChess [1] work, which allowed us to train an end-to-end neural network from scratch, achieving a state-of-the-art chess program. Here, instead of training the deep neural network on datasets of grandmaster games only, we train two separate neural networks using the DeepChess architecture. Using the ChessBase Mega Database, we extract two hundred thousand positions from games where both players were rated above 2500 Elo and train the first neural network (to which we refer as DeepChessStrong). Similarly, we train a second neural network using two hundred thousand positions from games where both players were rated below 2300 Elo (to which we refer as DeepChessWeak). To compare the performance of these two chess programs, we conducted 100 games at a time control of 30 minutes per side per game. The result was DeepChessStrong defeating DeepChessWeak by a score of 78.5% to 21.5%, corresponding to a rating difference of 225 Elo in favor of DeepChessStrong. These results present the first successful attempt at adaptive adjustment of playing strength in computer chess while producing a realistic playing style. This method can be extended to additional games in order to achieve realistic playing styles at different playing strength levels.
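The central data-preparation step described above is splitting a game database into a "strong" pool (both players above 2500 Elo) and a "weak" pool (both players below 2300 Elo), each contributing two hundred thousand positions. The following is a minimal sketch of such a split, assuming a PGN export of the database and the python-chess library; the file name, per-game sampling rate, and use of python-chess are illustrative assumptions and not the authors' actual pipeline.

```python
"""Hypothetical sketch: Elo-based splitting of a PGN database into the
position pools that would feed DeepChessStrong and DeepChessWeak."""
import random
import chess.pgn

STRONG_MIN_ELO = 2500      # both players above this -> "strong" pool
WEAK_MAX_ELO = 2300        # both players below this -> "weak" pool
POSITIONS_PER_POOL = 200_000

def game_elos(game):
    """Return (white_elo, black_elo), or None if either rating is missing."""
    try:
        return int(game.headers["WhiteElo"]), int(game.headers["BlackElo"])
    except (KeyError, ValueError):
        return None

def sample_positions(game, per_game=10):
    """Sample a few FEN positions from the game's mainline."""
    board = game.board()
    fens = []
    for move in game.mainline_moves():
        board.push(move)
        fens.append(board.fen())
    return random.sample(fens, min(per_game, len(fens)))

def build_pools(pgn_path):
    """Read games until both pools reach the target number of positions."""
    strong, weak = [], []
    with open(pgn_path, encoding="utf-8", errors="ignore") as handle:
        while len(strong) < POSITIONS_PER_POOL or len(weak) < POSITIONS_PER_POOL:
            game = chess.pgn.read_game(handle)
            if game is None:           # end of file
                break
            elos = game_elos(game)
            if elos is None:
                continue
            white, black = elos
            if min(white, black) > STRONG_MIN_ELO and len(strong) < POSITIONS_PER_POOL:
                strong.extend(sample_positions(game))
            elif max(white, black) < WEAK_MAX_ELO and len(weak) < POSITIONS_PER_POOL:
                weak.extend(sample_positions(game))
    return strong[:POSITIONS_PER_POOL], weak[:POSITIONS_PER_POOL]

if __name__ == "__main__":
    strong_pool, weak_pool = build_pools("mega_database_export.pgn")
    print(len(strong_pool), "strong positions;", len(weak_pool), "weak positions")
    # Each pool would then train a separate DeepChess-style network
    # (DeepChessStrong / DeepChessWeak in the abstract's terminology).
```

As a consistency check on the quoted result, under the standard logistic Elo model a 78.5% score corresponds to a rating gap of 400·log10(0.785/0.215) ≈ 225 points, matching the 225 Elo difference reported in the abstract.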

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2017 - 26th International Conference on Artificial Neural Networks, Proceedings
Editors: Alessandra Lintas, Alessandro E. Villa, Stefano Rovetta, Paul F. Verschure
Publisher: Springer Verlag
Pages: 741-742
Number of pages: 2
ISBN (Print): 9783319686110
State: Published - 2017
Event: 26th International Conference on Artificial Neural Networks, ICANN 2017 - Alghero, Italy
Duration: 11 Sep 2017 - 14 Sep 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10614 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 26th International Conference on Artificial Neural Networks, ICANN 2017
Country/Territory: Italy
City: Alghero
Period: 11/09/17 - 14/09/17

Bibliographical note

Publisher Copyright:
© Springer International Publishing AG 2017.
