Back in January, Google's DeepMind team announced that its AI, dubbed AlphaStar, had beaten two top human professional players at StarCraft II.

"This is a dream come true," said DeepMind co-author Oriol Vinyals, who was an avid StarCraft player 20 years ago.

AlphaStar builds on DeepMind's earlier game-playing systems. By playing itself over and over again, AlphaGo Zero trained itself to play Go from scratch in just three days and soundly defeated the original AlphaGo 100 games to 0.

The most recent version, AlphaZero, combined deep reinforcement learning (training many-layered neural networks through trial and error) with a general-purpose Monte Carlo tree search method.
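To make that combination concrete, here is a minimal sketch of plain Monte Carlo tree search (the UCT variant) choosing a move in a toy take-1-to-3-stones game. The toy game, class, and constants are hypothetical illustrations, not DeepMind's code; AlphaZero's key change is to replace the random playout below with value and move-probability estimates from its deep network.

```python
import math
import random

MOVES = (1, 2, 3)  # in this toy game, a move removes 1, 2, or 3 stones


class Node:
    def __init__(self, stones, player, parent=None):
        self.stones = stones   # stones left on the pile
        self.player = player   # whose turn it is at this node (0 or 1)
        self.parent = parent
        self.children = {}     # move -> child Node
        self.visits = 0
        self.wins = 0.0        # wins for the player who moved INTO this node


def uct_child(node, c=1.4):
    # UCT score: average win rate plus an exploration bonus.
    def score(ch):
        if ch.visits == 0:
            return float("inf")  # always try unvisited children first
        return ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits)
    return max(node.children.values(), key=score)


def rollout(stones, player):
    # Random playout; whoever takes the last stone wins this toy game.
    while stones > 0:
        stones -= random.choice([m for m in MOVES if m <= stones])
        player = 1 - player
    return 1 - player  # the player who just moved took the last stone


def mcts(root, iterations=3000):
    for _ in range(iterations):
        node = root
        # 1. Selection: descend with UCT until reaching a leaf.
        while node.children:
            node = uct_child(node)
        # 2. Expansion: add children unless the game is over.
        if node.stones > 0:
            for m in (m for m in MOVES if m <= node.stones):
                node.children[m] = Node(node.stones - m, 1 - node.player, node)
            node = random.choice(list(node.children.values()))
        # 3. Simulation: estimate the leaf's value with a random playout.
        winner = rollout(node.stones, node.player) if node.stones else 1 - node.player
        # 4. Backpropagation: credit the result up the tree.
        while node:
            node.visits += 1
            if winner != node.player:  # the player who moved into this node won
                node.wins += 1
            node = node.parent
    # Play the most-visited move (the usual "robust child" choice).
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]


root = Node(stones=10, player=0)
print("suggested move:", mcts(root))  # optimal play removes 2 stones here
```

The four stages of the loop (selection, expansion, simulation, backpropagation) are the general-purpose part: only rollout() and the move list encode the specific game.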

With AlphaZero's success, DeepMind's focus shifted to a new AI frontier: games of imperfect (hidden) information, like poker, and multiplayer video games like StarCraft II.

Not only is much of the game map hidden from players by a "fog of war," but they must also control hundreds of units (mobile game pieces that can be built to influence the game) and buildings (used to create units or technologies that strengthen those units) simultaneously.
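The sketch below is a toy illustration of that hidden information, not DeepMind's StarCraft II interface; the 1-D map, sight radius, and function name are all hypothetical. Each player observes only the cells within their own units' sight range, so the enemy's position must be inferred rather than read off the board:

```python
import numpy as np

def observed_map(full_map, unit_positions, sight_radius=2):
    """Return the map as one player sees it: cells outside every
    unit's sight radius are replaced with -1 (unknown)."""
    visible = np.zeros(full_map.shape, dtype=bool)
    for pos in unit_positions:
        lo, hi = max(0, pos - sight_radius), pos + sight_radius + 1
        visible[lo:hi] = True
    return np.where(visible, full_map, -1)

# A toy 1-D "map": 0 = empty, 1 = our unit, 2 = enemy unit.
world = np.array([0, 1, 0, 0, 0, 0, 0, 2, 0, 0])
print(observed_map(world, unit_positions=[1]))
# -> [ 0  1  0  0 -1 -1 -1 -1 -1 -1]  (the enemy at index 7 is hidden)
```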
