Beyond Star: An Architectural Learning Model for the Generalization of Strategies in StarCraft
Generalization; Reinforcement Learning; StarCraft.
One of the main research fields within Artificial Intelligence, in the context of digital games, is the study of Real-Time Strategy (RTS) games, which are commonly considered the successors of classic strategy games such as Checkers, Chess, Backgammon, and Go, and which pose great challenges to researchers in this area due to the complexity involved. Currently, the field studies RTS games using StarCraft I and II as experimental testbeds. The main feature sought in artificial agents developed for this kind of game is high performance, with the primary objective of defeating specialist human players. In this context arises the problem of generalization, which is the capacity of an artificial agent to reuse previous experiences, acquired in different contexts, in a new environment. Generalization is widely studied by the scientific community, but remains poorly explored in the context of RTS games. For this reason, this work proposes the Beyond Star model, an architecture that generically represents the state space of StarCraft, based on deep reinforcement learning techniques, aiming to learn effective strategies for StarCraft from simplified environments.