Google’s DeepMind artificial intelligence program AlphaStar will battle attendees at BlizzCon 2019 in matches of the classic Blizzard real-time strategy game StarCraft 2. In the Blizzard Arcade section of the fan event in Anaheim, California, Blizzard has set up machines for fans to play against the AI system. This battle is likely to be fruitless for BlizzCon fans: the AI is reportedly better than 99.8% of StarCraft 2 players. A decade ago, this would have been a funny joke. But this means it can probably beat me, which is not such a difficult task.
Not bad if you have over $3 million to splash out on cloud.

DeepMind’s AlphaStar AI bot has reached Grandmaster level at StarCraft II, a popular battle strategy computer game, after ranking within the top 0.15 per cent of players in an online league. Three neural networks were trained to play a series of 1v1 matches as each of the game’s races. “The supervised agent was rated in the top 16 per cent of human players, the midpoint agent within the top 0.5 per cent, and the final agent, on average, within the top 0.15 per cent, achieving a Grandmaster level rating for all three races,” according to the results published in a paper in Nature this week. AlphaStar Final performed the best of them all, ranking above 99.8 per cent of amateur human players in the Battle.net league. AlphaStar Final was not trained from scratch, however; it picked up where AlphaStar Mid left off and played an additional 90 games on top.
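For a rough sense of what those percentile figures mean, here is a minimal Python sketch of how a "top 0.15 per cent" ladder ranking can be computed; the ratings below are invented for illustration and are not Battle.net data or AlphaStar's actual scores.

```python
# Minimal sketch of how a "top X per cent" ladder ranking can be computed.
# The ladder ratings and agent ratings below are made up for illustration.
import random

def fraction_outranked(agent_rating, ladder_ratings):
    """Fraction of ladder players whose rating is below the agent's."""
    below = sum(1 for r in ladder_ratings if r < agent_rating)
    return below / len(ladder_ratings)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical ladder: 100,000 players with roughly bell-shaped ratings.
    ladder = [random.gauss(3500, 800) for _ in range(100_000)]

    for name, rating in [("Supervised", 4300), ("Mid", 5700), ("Final", 6200)]:
        frac = fraction_outranked(rating, ladder)
        print(f"AlphaStar {name}: above {frac:.2%} of ladder players")
```

Being "above 99.8 per cent of players" is the same thing as ranking within the top 0.2 per cent of the ladder; the three checkpoints in the sketch mirror the supervised, mid and final agents described in the paper.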
Developed by DeepMind, the system is ranked above the 99.8th percentile of active players on Battle.net, the official game server of StarCraft II. UK-based DeepMind, which is owned by Google’s parent company Alphabet Inc., previously developed systems capable of playing chess, Go, and shogi at a superhuman level, but StarCraft II presented an entirely different set of challenges. Released by Blizzard Entertainment in 2010, StarCraft II is a science fiction-themed real-time strategy video game in which two players compete against each other. Gamers can choose to play as one of three species – Terrans, Protoss, and Zerg – each with their own strengths, weaknesses, and idiosyncrasies. “I’ve found AlphaStar’s gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent.” StarCraft II has attracted the interest of AI researchers owing to its complex and open-ended gameplay.
DeepMind's artificial intelligence platforms have become legendary for their ability to master complex games like chess, shogi and Go, crushing our puny human brains with advanced machine learning techniques. Earlier this year a new version of the AI built for the real-time strategy game StarCraft II, dubbed AlphaStar, was unveiled and carried on DeepMind's tradition of putting humans to shame, trampling some of the top human StarCraft II players in the world. Why would researchers build an AI for a niche video game title, and what can it teach us about artificial intelligence and machine learning? At the top levels of play, there are countless strategies and counter-strategies that help human players win; it's like an incredibly complicated game of rock-paper-scissors. And in a game of StarCraft II, players can't see what their opponent is doing the way they can in chess or Go.
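As a loose illustration of that rock-paper-scissors dynamic, the toy Python snippet below encodes three placeholder strategies in a cycle where each beats exactly one of the others; the strategy names are assumptions for illustration, not actual StarCraft II build orders.

```python
# Toy illustration of the rock-paper-scissors analogy: no single strategy
# dominates, because each one is countered by another.  The names here are
# placeholders, not real StarCraft II strategies.
BEATS = {"rush": "economy", "economy": "defensive", "defensive": "rush"}

def result(a, b):
    if a == b:
        return "draw"
    return f"{a} wins" if BEATS[a] == b else f"{b} wins"

for a in BEATS:
    for b in BEATS:
        print(f"{a:9} vs {b:9} -> {result(a, b)}")
```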
Back in January, Google's DeepMind team announced that its AI, dubbed AlphaStar, had beaten two top human professional players at StarCraft. "This is a dream come true," said DeepMind co-author Oriol Vinyals, who was an avid StarCraft player 20 years ago. By playing itself over and over again, AlphaZero trained itself to play Go from scratch in just three days and soundly defeated the original AlphaGo 100 games to 0. The most recent version combined deep reinforcement learning (many layers of neural networks) with a general-purpose Monte Carlo tree search method. With AlphaZero's success, DeepMind's focus shifted to a new AI frontier: games of partial (incomplete) information, like poker, and multi-player video games like StarCraft II. Not only is the gameplay map hidden from players, but they must also control hundreds of units (mobile game pieces that can be built to influence the game) and buildings (used to create units or technologies that strengthen those units) simultaneously.
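To make the self-play idea concrete, here is a toy Python sketch in which an agent improves at tic-tac-toe purely by playing against itself. It uses a simple value table and one-ply lookahead, so it illustrates only the shape of a self-play training loop, not DeepMind's actual method, which replaces the table with a deep network and the lookahead with Monte Carlo tree search.

```python
# Toy self-play learner for tic-tac-toe: the agent plays against itself and
# nudges the value of every visited position toward the game's final result.
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

values = defaultdict(float)      # value of a position, from X's point of view
ALPHA, EPSILON = 0.1, 0.2        # learning rate and exploration rate

def choose(board, player):
    moves = legal_moves(board)
    if random.random() < EPSILON:            # explore occasionally
        return random.choice(moves)
    sign = 1 if player == "X" else -1        # O prefers positions that are bad for X
    def score(move):
        nxt = board[:]
        nxt[move] = player
        return sign * values[tuple(nxt)]
    return max(moves, key=score)

def self_play_game():
    board, player, visited = [EMPTY] * 9, "X", []
    while True:
        board[choose(board, player)] = player
        visited.append(tuple(board))
        win = winner(board)
        if win or not legal_moves(board):
            return visited, 1.0 if win == "X" else -1.0 if win == "O" else 0.0
        player = "O" if player == "X" else "X"

random.seed(0)
for _ in range(20_000):
    visited, outcome = self_play_game()
    for state in visited:                    # Monte Carlo update toward the final result
        values[state] += ALPHA * (outcome - values[state])

print(f"learned values for {len(values)} positions")
```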
Moorfields Eye Hospital partnered with DeepMind, one of the world’s leading AI companies. Through the partnership, researchers hope to use one million anonymised retinal images to train artificial intelligence (AI) in the automated diagnosis of optical coherence tomography (OCT) images. OCT images are complex and take a long time for doctors to evaluate, which affects how quickly patients can obtain a formal diagnosis and begin treatment. The Moorfields research team does not need to learn to code, because the AI is produced through user-friendly deep learning software. The technique has now been shown to match the accuracy of expert ophthalmologists and optometrists and to generate the right referral information. The diagnostic capabilities of the AI are benchmarked against doctors’ decisions at Moorfields Eye Hospital, demonstrating its real-world applicability. How does it work? Say you have 1,000 photos of cats and 1,000 photos of dogs, and you want to train an AI to tell them apart. I think it is still a few years away from use in patient care, and a lot of extra work is needed. We wonder, for example, whether we can train the AI algorithm to look at a photo and see whether a patient qualifies for a clinical trial. We started by obtaining five publicly available medical image data sets.
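To make the cats-and-dogs example concrete, here is a hedged PyTorch sketch of such a training setup; the folder layout ("data/train/cat", "data/train/dog"), the model, and the hyperparameters are assumptions for illustration and do not describe Moorfields' or DeepMind's actual pipeline.

```python
# Minimal image-classification training loop for the cats-vs-dogs example.
# Expects one sub-folder per class, e.g. data/train/cat/*.jpg, data/train/dog/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # small CNN backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cat, dog

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong the model is on this batch
        loss.backward()                        # compute gradients
        optimizer.step()                       # nudge the weights
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

The same pattern scales from pet photos to OCT scans: what changes is the data, the number of classes, and the care taken over labelling and validation, not the basic training loop.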
Skin conditions are among the most common types of illness, just behind colds, fatigue, and headaches. You might be surprised to know that an estimated 25 percent of all treatments provided to patients around the globe are for skin conditions, and that up to 37 percent of patients seen in clinics have at least one skin complaint. The massive caseload and a worldwide shortage of dermatologists have forced patients to seek out general practitioners, who tend to be less accurate than specialists at identifying patients’ conditions. In a paper (“A Deep Learning System for Differential Diagnosis of Skin Diseases”) and an accompanying blog article, Google researchers report that the system performs accurately across 26 skin conditions when presented with images and metadata about a patient case. “We developed a deep learning system (DLS) to address the most common skin conditions seen in primary care,” wrote Google AI software engineer Yuan Liu and Google Health technical program manager Dr. Peggy Bui. During training, the model leveraged over 50,000 differential diagnoses supplied by over 40 dermatologists.
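As a rough sketch of how a photo and case metadata might be combined for a 26-way differential diagnosis, the PyTorch snippet below fuses image features with a small metadata encoder; the architecture, feature sizes, and metadata fields are assumptions for illustration, not the actual DLS described in the paper.

```python
# Hedged sketch of a multi-input classifier: a CNN encodes the photo, a small
# MLP encodes case metadata (e.g. age, sex, symptom flags), and a linear head
# scores 26 candidate skin conditions.  Sizes are illustrative only.
import torch
from torch import nn
from torchvision import models

class SkinConditionClassifier(nn.Module):
    def __init__(self, num_conditions=26, metadata_dim=10):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # use the CNN as a feature extractor
        self.image_encoder = backbone            # 512-dim image features
        self.metadata_encoder = nn.Sequential(
            nn.Linear(metadata_dim, 64), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 64, num_conditions)

    def forward(self, image, metadata):
        feats = torch.cat(
            [self.image_encoder(image), self.metadata_encoder(metadata)], dim=1
        )
        return self.head(feats)                  # scores over the 26 conditions

model = SkinConditionClassifier()
scores = model(torch.randn(1, 3, 224, 224), torch.randn(1, 10))
print(scores.shape)  # torch.Size([1, 26])
```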
Machine learning and AI may be deployed on such grand tasks as finding exoplanets and creating photorealistic people, but the same techniques also have some surprising applications in academia: DeepMind has created an AI system that helps scholars understand and recreate fragmentary ancient Greek texts on broken stone tablets. These clay, stone or metal tablets, inscribed as much as 2,700 years ago, are invaluable primary sources for history, literature and anthropology. They’re covered in letters, naturally, but often the millennia have not been kind, and there are not just cracks and chips but entire missing pieces that may comprise many symbols. Such gaps, or lacunae, are sometimes easy to complete: if I wrote “the sp_der caught the fl_,” anyone can tell you that it’s actually “the spider caught the fly.” But what if it were missing many more letters, and in a dead language, to boot? Not so easy to fill in the gaps. Doing so is a science (and art) called epigraphy, and it involves both an intuitive understanding of these texts and knowledge of other texts to add context; one can make an educated guess at what was once written based on what has survived elsewhere.
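The "sp_der" example can be made concrete with a toy gap-filling script: candidate words from a reference text are matched against the damaged token and ranked by how often they appear. DeepMind's system reportedly learns from thousands of surviving inscriptions using neural sequence models; this little Python sketch, with its invented two-sentence corpus, is only an analogy for that idea.

```python
# Tiny illustration of gap-filling as pattern matching: candidate words from a
# reference corpus are scored against a damaged token, most frequent first.
# The corpus below is invented; the real task uses whole Greek inscriptions.
import re
from collections import Counter

corpus = "the spider caught the fly the spider spun a web the fly escaped"
vocabulary = Counter(corpus.split())

def restore(damaged, vocabulary):
    """Return candidate completions for a token like 'sp_der', best first."""
    pattern = re.compile("^" + damaged.replace("_", ".") + "$")
    matches = [w for w in vocabulary if pattern.match(w)]
    return sorted(matches, key=lambda w: -vocabulary[w])

print(restore("sp_der", vocabulary))   # ['spider']
print(restore("fl_", vocabulary))      # ['fly']
```

The hard cases are exactly the ones this toy cannot handle: gaps spanning several words, damaged context on both sides, and a vocabulary that itself must be reconstructed from fragmentary evidence.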
Third new senior health role in four months.

Google's parent firm made its third big health hire in four months yesterday in the form of Karen DeSalvo, a one-time Barack Obama administration official. In addition to recruiting senior folk, the ad and search giant is going after the healthcare market on various fronts by ramping up cloud services and providing analysis and diagnosis tools such as those developed by DeepMind. DeSalvo, who leaves a position teaching at the University of Texas medical school, will become chief health officer, a new role, later this year, according to several US news sources and DeSalvo's Twitter feed.

“I am so excited to be part of this team whose mission of better health for all is aligned with mine.” — Dr. Karen DeSalvo (@KBDeSalvo) October 18, 2019
Can AI agents learn to generalize beyond their immediate experience? In a study conducted in collaboration with Stanford and University College London, DeepMind scientists investigated whether systems could apply the knowledge they’d learned in one task to other, tangentially related tasks. They report that in environments ranging from a grid-world to an interactive 3D room generated in Unity (a game engine), their AI-driven agents correctly exploited the “compositional nature” of a language to interpret never-seen-before instructions. “[While] AI systems trained in idealized or reduced situations may fail to exhibit a compositional or systematic understanding of their experience, this competence can readily emerge when, like human learners, they have access to many examples of richly varying, multi-modal observations as they learn,” wrote the contributing scientists in a preprint paper summarizing the research. “This suggests that, during training, the agent learns not only how to follow training instructions, but also general information about how word-like symbols compose and how the combination of those words affects what the agent should do in its world.” The team investigated to what extent they could impart an AI model with systematicity, the concept in cognition whereby the ability to entertain a thought implies the ability to entertain thoughts with semantically related content.
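As a simple analogy for that compositional behaviour, the toy interpreter below composes separately known verbs, colours, and object words to execute an instruction it has never seen as a whole. It is a hand-written stand-in for illustration only, not DeepMind's learned agent, which acquires this behaviour from experience rather than rules.

```python
# Toy illustration of compositionality: if the meanings of "find", "lift",
# the colour words and the object words are each known, a never-seen-before
# combination like "lift the blue fork" can still be interpreted.
COLOURS = {"red", "blue", "green"}
OBJECTS = {"pencil", "fork", "cup"}

def interpret(instruction):
    words = instruction.lower().split()
    verb = words[0]
    colour = next((w for w in words if w in COLOURS), None)
    obj = next((w for w in words if w in OBJECTS), None)
    return {"action": verb, "target": (colour, obj)}

# A combination seen during "training"...
print(interpret("find the red pencil"))
# ...and a novel combination interpreted by composing known words.
print(interpret("lift the blue fork"))
```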
Applications interact with each other through a growing number of APIs and legacy systems, and their complexity increases from one day to the next. That increased complexity brings a fair share of challenges, many of which can be overcome by machine intelligence. As software development life cycles become more complex and delivery times shrink, testers need to provide feedback and evaluation to development teams promptly. Given the breakneck pace of new software and product launches, there is simply not enough time to test thoroughly and rigorously in this day and age. Releases that used to happen once a month now ship weekly, and updates land almost every day. By observing the hierarchy of controls, an AI tool can build a technical map of an application’s graphical user interface (GUI) and obtain labels for the various controls. Since testing is about verification of results, access to large amounts of test data is essential; notably, Google DeepMind has created an AI program that uses deep reinforcement learning to play video games, thereby generating a great deal of test data. AI test tools can also track users performing exploratory testing, learning how applications under test are evaluated by the human brain. By automating repetitive test cases and manual testing, testers can focus more on making data-driven connections and decisions. Finally, given the limited time available for testing, risk-based automation is a critical factor in helping teams decide which tests to run to get the greatest coverage.
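As one concrete reading of risk-based test selection, here is a hedged Python sketch that ranks tests by estimated risk per minute and greedily fills a time budget; the test names, risk scores, and durations are invented for illustration and do not come from any real test suite.

```python
# Hedged sketch of risk-based test selection: each test gets a risk score
# (failure likelihood times impact), and a greedy pass fills a limited time
# budget with the highest risk-per-minute tests first.
tests = [
    # (name, failure_likelihood 0-1, impact 1-10, minutes to run)
    ("checkout_flow",   0.30, 9, 12),
    ("login",           0.10, 8,  3),
    ("search_filters",  0.25, 5,  6),
    ("profile_update",  0.05, 3,  4),
    ("report_export",   0.40, 6, 20),
]

def select_tests(tests, budget_minutes):
    # Rank by risk per minute, then take tests while they still fit the budget.
    ranked = sorted(tests, key=lambda t: (t[1] * t[2]) / t[3], reverse=True)
    chosen, remaining = [], budget_minutes
    for name, likelihood, impact, minutes in ranked:
        if minutes <= remaining:
            chosen.append(name)
            remaining -= minutes
    return chosen

print(select_tests(tests, budget_minutes=25))
```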
The development comes after the co-founder of London-based DeepMind announced last month that he was taking an extended leave of absence from the firm. Mustafa Suleyman stepped back amid speculation that parent company Google had taken over the bulk of his responsibilities. The news that DeepMind’s health division has completed its move to Google’s control came in a blog post from Dr Dominic King, UK Site Lead at Google Health. “Today, with our healthcare partners, the team is excited to officially join the Google Health family,” said Dr King. “Under the leadership of Dr. David Feinberg, and alongside other teams at Google, we’ll now be able to tap into global expertise in areas like app development, data security, cloud storage and user-centered design to build products that support care teams and improve patient outcomes.” “It’s clear that a transition like this takes time.”
AI-powered voice assistants are quite useful, especially for hands-free use and control of smart devices. The catch is that, because everything is voice-driven, the experience is even more at the mercy of spoken language, which is admittedly harder to get right when it comes to speech recognition. Complicating matters, the voices that give feedback might not be to everyone’s taste, or in everyone’s language for that matter. That’s why Google is now rolling out nine new voices, seven of which are not even in English. Even back in the early days of Siri, voice assistants were criticized for sounding artificial, even robotic. It has been a long time since then, and voices have improved greatly over the past years, especially as artificial intelligence and machine learning have become more advanced.
Google Assistant can talk and sing like John Legend in the U.S., and it’s conversant in over 30 languages in 80 countries (up from 8 languages and 14 countries in 2017). But in the years since its international launch, Google’s AI interlocutor hasn’t offered a choice of voices outside of the U.S. Fortunately, that’s changing. The new voices will join the 11 English voices already available stateside, six of which were previewed at Google’s I/O 2018 developer conference last year. Google Assistant product manager Brant Ward said each voice was synthesized by a machine learning system — WaveNet — pioneered by Alphabet’s DeepMind. For the uninitiated, WaveNet mimics things like stress and intonation (referred to in linguistics as prosody) by identifying tonal patterns in speech. In addition to producing much more convincing speech snippets than previous AI models, it’s also more efficient.
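For readers curious what modelling raw audio sample by sample looks like in code, here is a hedged PyTorch sketch of the dilated causal convolution stack at the heart of WaveNet-style models; the channel sizes, depth, and simplified residual connections are illustrative assumptions, not DeepMind's production system.

```python
# Hedged sketch of a WaveNet-style building block: a stack of dilated *causal*
# 1-D convolutions, so each predicted audio sample depends only on past
# samples and the receptive field grows exponentially with depth.
import torch
from torch import nn

class CausalConv1d(nn.Conv1d):
    def forward(self, x):
        # Pad on the left only, so the convolution never looks into the future.
        pad = (self.kernel_size[0] - 1) * self.dilation[0]
        return super().forward(nn.functional.pad(x, (pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, layers=6):
        super().__init__()
        self.input = nn.Conv1d(1, channels, kernel_size=1)
        self.stack = nn.ModuleList([
            CausalConv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(layers)
        ])
        self.output = nn.Conv1d(channels, 256, kernel_size=1)  # next-sample logits

    def forward(self, audio):                 # audio: (batch, 1, time)
        h = self.input(audio)
        for conv in self.stack:
            h = torch.tanh(conv(h)) + h       # simplified residual connection
        return self.output(h)

logits = TinyWaveNet()(torch.randn(1, 1, 1000))
print(logits.shape)  # torch.Size([1, 256, 1000])
```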