Beyond Star: A Learning Architectural Model for the Generalization of Strategies in RTS Games
Keywords: Generalization; Reinforcement Learning; StarCraft.
One of the main research fields within Artificial Intelligence, in the context of digital games, is the study of Real-Time Strategy (RTS) games, which are commonly regarded as successors of classic strategy games such as Checkers, Chess, Backgammon, and Go, and which pose great challenges to researchers due to their high complexity. Currently, the field studies RTS games using StarCraft I and II as experimentation platforms. The main feature sought in artificial agents developed for this kind of game is high performance, with the primary goal of defeating expert human players. In this context arises the problem of generalization, that is, the capacity of an artificial agent to reuse previous experience, acquired in different contexts, in a new environment. Generalization is widely studied by the scientific community, but remains poorly explored in the context of RTS games. For this reason, this work proposes the Beyond Star model, an architecture that generically represents the state space of Real-Time Strategy games and builds on deep reinforcement learning techniques to learn effective strategies applicable across several RTS environments. As a basis for the architecture, a platform called URNAI was developed, a tool that integrates several learning algorithms and environments, such as StarCraft II and DeepRTS. To assess whether the solution enables agent generalization, agents were trained in DeepRTS and tested in StarCraft II. The trained agents proved capable of generalizing their knowledge from one environment to the other, a promising result that validates this work's proposal.
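The transfer setup described above (train in one RTS environment, evaluate in another through a shared, generic state representation) can be illustrated with a minimal sketch. Everything below is hypothetical: the two toy environments stand in for DeepRTS and StarCraft II, the `abstract_state` mapping stands in for the model's generic state-space representation, and tabular Q-learning stands in for the deep RL techniques used in the actual work.

```python
import random

class ToyRTSA:
    """Hypothetical stand-in for DeepRTS; raw observation is a dict with 'gold'."""
    def reset(self):
        self.gold = 0
        return {"gold": self.gold}
    def step(self, action):
        # action 0 = gather (+1 gold), action 1 = idle
        if action == 0 and self.gold < 4:
            self.gold += 1
        reward = 1.0 if self.gold == 4 else 0.0
        return {"gold": self.gold}, reward, self.gold == 4

class ToyRTSB:
    """Hypothetical stand-in for StarCraft II; raw observation is a tuple
    (minerals, supply), where minerals play the role of gold."""
    def reset(self):
        self.minerals = 0
        return (self.minerals, 10)
    def step(self, action):
        if action == 0 and self.minerals < 4:
            self.minerals += 1
        reward = 1.0 if self.minerals == 4 else 0.0
        return (self.minerals, 10), reward, self.minerals == 4

def abstract_state(obs):
    """Generic state representation shared by both environments: both raw
    observations are reduced to a single resource count."""
    if isinstance(obs, dict):
        return obs["gold"]
    return obs[0]

def train(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning over the abstract state space of one environment."""
    q = {}
    rng = random.Random(0)
    for _ in range(episodes):
        s = abstract_state(env.reset())
        done = False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(2)  # epsilon-greedy exploration
            else:
                a = max((0, 1), key=lambda x: q.get((s, x), 0.0))
            obs, r, done = env.step(a)
            s2 = abstract_state(obs)
            best = max(q.get((s2, b), 0.0) for b in (0, 1))
            target = r if done else r + gamma * best
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
    return q

def evaluate(env, q, max_steps=20):
    """Run the greedy policy learned elsewhere in a different environment."""
    s = abstract_state(env.reset())
    total = 0.0
    for _ in range(max_steps):
        a = max((0, 1), key=lambda x: q.get((s, x), 0.0))
        obs, r, done = env.step(a)
        total += r
        s = abstract_state(obs)
        if done:
            break
    return total

# Train in environment A, evaluate in environment B: the shared abstraction
# lets the learned policy carry over.
q = train(ToyRTSA())
print(evaluate(ToyRTSB(), q))  # → 1.0
```

The design point is the one the abstract makes: because both environments are projected onto the same abstract state space before learning, a policy trained in one can be executed unchanged in the other.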