As much as we detest the usual F2P model, we wish EA had done this one differently.
Facebook researchers have chosen Minecraft as the training ground for the development of the next stage of artificial intelligence, as the technology looks to conquer a major challenge. A.I. systems have been learning to carry out specific tasks, including playing soccer and filling in gaps in images, and have proven to be better than humans at some of them, such as Texas Hold ’em poker and Quake III’s Capture the Flag mode. Facebook Research’s Arthur Szlam and his colleagues, who have started working on an A.I. assistant that can perform a variety of tasks, have decided to break through that limitation with the help of Minecraft. Minecraft, possibly the best-selling video game of all time, allows players to move around a 3D environment, exploring and building in a limitless world. The game’s infinite variety, in combination with simple and predictable rules, makes it an ideal environment for training A.I.
Google’s DeepMind artificial intelligence lab surpassed another challenge with a computer program that was able to defeat human opponents in Quake III Arena’s Capture the Flag mode. This is not the first time that a DeepMind program proved to be capable of beating human players. In 2016, AlphaGo defeated Lee Sedol, the best Go player in the world, by a score of 4 to 1. DeepMind has now turned to Quake III Arena’s Capture the Flag mode to exhibit its capabilities. In Capture the Flag, two multiplayer teams attempt to capture the flag of their opponents and bring it back to their home base to score, while also trying to prevent their opponents from doing the same by shooting them to make them drop the flag if they are carrying one. The game mode is a step up from previous tests due to its multiplayer nature, as it requires teamwork between A.I. agents.
On this episode of Digital Trends Live, host Greg Nibler is joined by DT Senior Writer Parker Hall to discuss the trending tech stories of the day. They include Amazon’s interest in becoming your new cell phone provider, a victory by Google’s DeepMind A.I. over Quake III, a report that the Galaxy Note may lose its physical buttons and headphone jack, our Worldwide Developers Conference (WWDC) preview, our weekly Tech Briefs segment, and the latest on Star Wars: Galaxy’s Edge, which opens today at Disneyland. As part of our Digital Trends Best Jobs in Tech series, we talk with Ciara Pressler, founder of Pregame and author of Game Plan: Achieve Your Goals in Life, Career, and Business, about how to turn your goals into reality. We then welcome Daniel Burrows, founder and chief executive officer of XStream Trucking, to talk about developing products that maximize efficiency in long-haul trucks and the company’s foldable wing that reduces drag. Timothy Childs, founder and CEO of Treasure8, then joins us to discuss how to eliminate food waste through advanced dehydration technology.
DeepMind researchers have taught artificially intelligent gamers to play a popular 3D multiplayer first-person video game with human-like skills - a previously insurmountable task. The reinforcement learning-trained AI agents demonstrate an uncanny ability to develop and use independently learned high-level strategies to compete and cooperate in the game environment. Reinforcement learning (RL), a method used to train artificially intelligent agents, has shown success in producing artificially intelligent players that are able to navigate increasingly complex single-player environments. These agents can also achieve superhuman mastery in competitive two-player turn-based games, like chess and Go. However, the ability to play multiplayer games, particularly those that involve teamwork and interaction between multiple independent players, has eluded the capabilities of AI systems to date. Here, Max Jaderberg and colleagues present an RL-trained AI agent that can achieve human-level performance in the seminal multiplayer 3D first-person video game, Quake III Arena Capture the Flag.
Two teams each have a marker located at their respective bases, and the objective is to capture the other team’s marker and return it safely back to their base. Where capture the flag is concerned in the video game domain, non-player characters have traditionally been programmed with heuristics and rules affording limited freedom in choice. In a paper published this week in the journal Science, roughly a year following the preprint, researchers at DeepMind, the London-based subsidiary of Google parent company Alphabet, describe a system capable not only of learning how to play capture the flag in Id Software’s Quake III Arena, but of devising entirely novel human-level team-based strategies. The key technique at play, the researchers explained, is reinforcement learning, which employs rewards to drive software policies toward goals — in the DeepMind agents’ case, whether their team won or not. “The specific way we trained our [AI] … is a good example of how to scale up and operationalize some classic evolutionary ideas.” DeepMind’s cheekily dubbed For The Win (FTW) agents learn directly from on-screen pixels using a convolutional neural network, a collection of mathematical functions (neurons) arranged in layers modeled after the visual cortex.
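To make the pixels-to-policy idea concrete, here is a minimal sketch of a small convolutional policy nudged by a win/loss reward; it assumes PyTorch, an 84x84 RGB frame, and a +1/-1 episode reward, none of which come from DeepMind's paper, and the real FTW architecture and training setup are far more elaborate.

```python
# A minimal sketch, not DeepMind's FTW code: a small convolutional policy
# that maps raw pixels to action scores, plus a single REINFORCE-style
# update driven by a win/loss reward. Frame size, layer sizes, and the
# +1/-1 reward are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelPolicy(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        # Stacked convolutions play the role of the "visual cortex" layers.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 9 * 9, n_actions)  # for 84x84 RGB input

    def forward(self, frames):                  # frames: (batch, 3, 84, 84)
        x = self.conv(frames)
        return self.head(x.flatten(1))          # unnormalized action scores

def reinforce_step(policy, optimizer, frames, actions, episode_reward):
    """One policy-gradient step: actions taken in a won episode (+1 reward)
    become more likely, actions from a lost episode (-1) less likely."""
    log_probs = F.log_softmax(policy(frames), dim=1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(episode_reward * chosen).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with stand-in observations and actions from one match.
policy = PixelPolicy(n_actions=6)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
frames = torch.rand(32, 3, 84, 84)
actions = torch.randint(0, 6, (32,))
reinforce_step(policy, optimizer, frames, actions, episode_reward=1.0)
```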
Back in early July, we reported on A.I. players managing to defeat humans in the shooter Quake III Arena by making use of tactics and objective-focused strategies rather than just killing opponents. Artificial players have now managed to prove they can compete in one of the most hardcore competitive games around, Dota 2, and they were more than up to the challenge. The OpenAI team drafted its players for the first game and projected that it had a 95 percent chance of beating its opponent. It came out of the gate strong, getting five kills in a row before the humans were able to get one, and it only got more lopsided from there. The OpenAI Five took a tower, managed to eventually wipe the entire human team before taking their base, and declared victory a short time later.
OpenAI researchers reach the highest score yet on the computer game Montezuma’s Revenge through reinforcement learning, DeepMind teaches its bots to play Capture the Flag in Quake III Arena, and the US Department of Education is exploring the idea of using AI to mark essays. “Early on in training, the agent begins every episode near the end of the demonstration. Once the agent is able to beat or at least tie the score of the demonstrator on the remaining part of the game in at least 20 per cent of the rollouts, we slowly move the starting point back in time,” it explained in a blog post. “We keep doing this until the agent is playing from the start of the game, without using the demo at all, at which point we have an RL-trained agent beating or tying the human expert on the entire game.” The agent reached a score of 74,500. Players can chase after opponents in order to tag them and send them back to their spawning point.
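In rough code, the backward curriculum quoted above looks something like the sketch below. The `run_episode_from` callable and `demo_scores` list are assumed placeholders standing in for OpenAI's actual training loop and recorded demonstration; the 20 per cent threshold comes from the quote, while the other numbers are illustrative guesses.

```python
# Rough sketch of the backward curriculum quoted above, not OpenAI's code.
# `run_episode_from(t)` is an assumed callable that trains the agent for one
# episode started from demonstration step t and returns the agent's score;
# `demo_scores[t]` is the demonstrator's cumulative score at step t.

def backward_curriculum(run_episode_from, demo_scores,
                        success_rate=0.2, rollouts_per_check=40,
                        step_back=10, initial_offset=100):
    # Begin every episode near the end of the demonstration.
    start = max(0, len(demo_scores) - 1 - initial_offset)
    while start > 0:
        # Score the demonstrator achieved on the remaining part of the game.
        remaining = demo_scores[-1] - demo_scores[start]
        # Keep training from this start point until the agent beats or ties
        # that remaining score in enough rollouts...
        scores = [run_episode_from(start) for _ in range(rollouts_per_check)]
        wins = sum(score >= remaining for score in scores)
        if wins / rollouts_per_check >= success_rate:
            # ...then slowly move the starting point back in time.
            start = max(0, start - step_back)
    return start  # 0 means the agent now plays from the true start of the game
```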
Those among us who fear that we’ve already passed the point of no return when it comes to artificial intelligence becoming self-aware and plotting to murder the human race will likely cite A.I. research company DeepMind’s latest experiment as further proof of that notion. Using Id Software’s Quake III Arena, DeepMind has managed to train artificial players to be even more effective than their human counterparts. The challenge for DeepMind was not to see if its A.I. agents could defeat human players in battle, but rather if they could work together on procedurally generated levels to complete an objective — in this case, capture the flag. This forced them to actually learn the strategies needed to win in a similar manner to how human players might improve at the game.
AI agents continue to rack up wins in the video game world. Last week, OpenAI’s bots were playing Dota 2; this week, it’s Quake III, with a team of researchers from Google’s DeepMind subsidiary successfully training agents that can beat humans at a game of capture the flag. As we’ve seen with previous examples of AI playing video games, the challenge here is training an agent that can navigate a complex 3D environment with imperfect information. Agents like these learn by playing against themselves; usually this means one version of the AI agent playing against an identical clone. DeepMind gave extra depth to this formula by training a whole cohort of 30 agents to introduce a “diversity” of play styles. As ever, it’s impressive how such a conceptually simple technique can generate complex behavior on behalf of the bots.
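A toy illustration of that population idea follows; `play_match` and `update` are assumed callables standing in for the game and the learning rule, and the two-player team size is just an example, so this is a sketch of the concept rather than DeepMind's published training code.

```python
# Toy illustration of the population idea, not DeepMind's training code.
# `play_match(team_a, team_b)` and `update(agent, result)` are assumed
# callables supplied by the surrounding training loop.
import random

POPULATION_SIZE = 30   # the cohort size mentioned above

def sample_teams(population, team_size=2):
    """Draw two disjoint teams from the cohort, so every agent keeps meeting
    different teammates and opponents rather than an identical clone."""
    players = random.sample(population, 2 * team_size)
    return players[:team_size], players[team_size:]

def training_round(population, play_match, update):
    team_a, team_b = sample_teams(population)
    result_a, result_b = play_match(team_a, team_b)   # e.g. win/loss signals
    for agent in team_a:
        update(agent, result_a)
    for agent in team_b:
        update(agent, result_b)
```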
With Quake Champions, Saber Interactive has set itself the task of making a modern Quake: as frenetic and crazy as it ever was, but with larger-than-life champions, each with their own hooks and special abilities. It’s a fusion of the grim and gothic original, the heady explosiveness of Quake III Arena, and a champion formula that calls to mind titles like Overwatch. It’s not unfamiliar territory for Saber, which has dabbled in a lot of different genres in the 16 years since it was founded by Matthew Karch, Andrey Iones, and Anton Krupkin. It’s the shooters that the studio is perhaps best known for, however. The tech side of things is what makes Saber gravitate towards them, thinks Karch. That focus has been there since the start.
A beta for Bethesda and id Software’s Quake Champions is set to kick off a week from now, although given its closed nature, not everyone will get in right away. Bethesda is now accepting beta sign-ups over on the official Quake Champions website; simply enter your e-mail address and country of residence and confirm that you’re at least 18 years old to throw your name in the hat. Quake Champions is thought by many to be id Software’s attempt to make a modern esports title. Although based on the Quake namesake and the gameplay that made Quake III Arena such a great game, Champions can perhaps be best viewed as Quake for the Overwatch era (again, with lots of esports elements mixed in). id Software on Thursday also unveiled another of the game’s characters, Anarki the Transhuman Punk from Quake III Arena. For the uninitiated, Anarki is more or less a modern-day version of Michelangelo from the Teenage Mutant Ninja Turtles (same voice, same personality, and yes, he even has a skateboard-like hoverboard).
Its gaming legacy isn’t as great. The popular sci-fi franchise about humans exploring the galaxy centuries in the future is now 50 years old. Text-based games about the Enterprise and her crew started hitting computers in the ’70s, followed by arcade and console titles that explored every aspect imaginable from the franchise: exploration, diplomacy, ship combat, and more. And yet, the best Star Trek game I ever played was a mindless first-person shooter based on the worst show in the series, Voyager. Activision released Star Trek: Elite Force for PC in 2000. New engines like id Tech III and Unreal Engine were dramatically improving 3D graphics. At the time, id Tech III was top-of-the-line. Gamers had a taste of it with Quake III Arena in 1999, but that title was multiplayer-only. Elite Force gave us our first single-player experience that showed off the new engine. City on the Edge of Forever it wasn’t. However, developer Raven Software’s execution was fantastic.
Bethesda has been busy showing off Quake Champions, with the first gameplay video being revealed over at QuakeCon 2016. Of course, this follows the reboot of Doom, which went down very well – multiplayer aside, that is, and hopefully that’s where Quake Champions will redress the balance. The video clip shows a game which returns firmly to Quake III Arena (Q3A) roots in terms of the look, levels and the gameplay. So we are talking towering levels which you bounce around, with rocket-jumping and jump pads aplenty. Zebedee would be entirely at home. So there’s fast-paced action aplenty with the emphasis on pure deathmatch, or as the trailer puts it: “Pure speed, pure skill, pure FPS.”