in FIFA ’19 was something special? You haven’t seen anything yet! That’s because search giant Google is developing its own soccer-playing artificial intelligence. And, if the company’s history with machine intelligence is anything to go by, it’ll be something quite special. In the abstract of a paper describing the work, the researchers note: “Recent progress in the field of reinforcement learning has been accelerated by virtual learning environments such as video games, where novel algorithms and ideas can be quickly tested in a safe and reproducible manner. We introduce the Google Research Football Environment, a new reinforcement learning environment where agents are trained to play football in an advanced, physics-based 3D simulator.”
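Environments like this typically expose a simple reset/step loop to the learning agent. The sketch below is a toy stand-in for illustration only — `ToyPitch`, its observations and its reward scheme are invented, not the actual Google Research Football API, though the real environment exposes a similar interface.

```python
class ToyPitch:
    """Hypothetical stand-in for a reinforcement learning football environment.

    The real Google Research Football Environment provides a comparable
    reset/step interface, but with rich observations from its 3D simulator.
    """

    def __init__(self, steps_per_episode=10):
        self.steps_per_episode = steps_per_episode
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return 0.0  # placeholder observation (e.g. ball position)

    def step(self, action):
        """Advance one timestep; reward action 1 (e.g. 'move toward goal')."""
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.steps_per_episode
        return 0.0, reward, done


def run_episode(env, policy):
    """Roll out one episode under `policy` and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


total = run_episode(ToyPitch(), policy=lambda obs: 1)
print(total)  # → 10.0
```

Researchers swap in learned policies for the lambda above; the appeal of such simulators is that this loop can be run millions of times safely and reproducibly.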
Facebook AI Research, together with Google’s DeepMind, the University of Washington, and New York University, today introduced SuperGLUE, a series of benchmark tasks to measure the performance of modern, high-performance language-understanding AI. SuperGLUE was created on the premise that deep learning models for conversational AI have “hit a ceiling” and need greater challenges. Considered state of the art in many regards in 2018, BERT has since been surpassed by a number of models this year, such as Microsoft’s MT-DNN, Google’s XLNet, and Facebook’s RoBERTa, all of which are based in part on BERT and achieve performance above a human baseline average. SuperGLUE is preceded by the General Language Understanding Evaluation (GLUE) benchmark, introduced in April 2018 by researchers from NYU, the University of Washington, and DeepMind. GLUE assigns a model a numerical score based on its performance on nine English sentence-understanding tasks, such as the Stanford Sentiment Treebank (SST-2), which derives sentiment from a data set of online movie reviews. RoBERTa currently ranks first on GLUE’s leaderboard, with state-of-the-art performance on 4 of the 9 GLUE tasks.
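GLUE’s headline number is, to a first approximation, a macro-average over its tasks — each task counts equally (the real benchmark first averages multiple metrics within some tasks). A rough sketch; the task names come from GLUE, but the per-task scores below are invented, not a real leaderboard entry:

```python
# Illustrative per-task scores on a 0-100 scale; names from GLUE, values made up.
task_scores = {
    "CoLA": 68.0,
    "SST-2": 96.4,
    "MRPC": 90.9,
    "STS-B": 92.2,
    "QQP": 74.3,
    "MNLI": 90.8,
    "QNLI": 95.4,
    "RTE": 88.2,
    "WNLI": 89.0,
}


def glue_score(scores):
    """Macro-average: every task contributes equally to the headline score."""
    return sum(scores.values()) / len(scores)


print(round(glue_score(task_scores), 1))  # → 87.2
```

Equal weighting is what makes benchmarks like GLUE and SuperGLUE hard to “game”: a model cannot climb the leaderboard by excelling at one task while ignoring the rest.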
DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing a lot of money fast: more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months. (Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU.) The dollars involved are large, perhaps more than in any previous AI research operation, but far from unprecedented when compared with the sums spent on some of science’s largest projects. Still, the rising magnitude of DeepMind’s losses is worth considering: $154 million in 2016, $341 million in 2017, $572 million in 2018. In my view, there are three central questions: Is DeepMind on the right track scientifically?
UK health secretary Matt Hancock has been accused of being "obsessed by technology" for its own sake following the UK government's vague announcement about injecting £250m into an AI laboratory for the NHS. The fund is aimed at improving cancer screening by speeding up the results of tests – including mammograms, brain scans, eye scans and heart monitoring – and will also go towards predictive models to better estimate future needs for beds, drugs, devices or surgeries. But the government has not released further details of how the money is intended to be spent, with the funds not due to be released for another two years. Dr Neil Bhatia, a Hampshire GP, told The Register: "This is a huge amount of money, and there are a lot more deserving things NHS needs. All the news around DeepMind and the Royal Free Trust doesn't inspire a great deal of confidence in governance and consent. The health secretary Matt Hancock is obsessed by technology, but he can't see the wood for the trees."
This is a product which would have cost a lot of money and time to develop, at least to make it the most useful of its kind, while offering little immediate return on investment. Now Google is reaping the commercial benefits of Maps, but it is still keeping an eye on new features, improved experiences and, eventually, additional revenues. “Not only does Google Maps help you navigate, explore, and get things done at home, but it’s also a powerful travel companion,” Rachel Inman wrote on Google’s blog. “After you’ve booked your trip, these new tools will simplify every step of your trip once you’ve touched down – from getting around a new city to reliving every moment once you’re home.” Google is not a company which makes money by accident. The acquisitions of Android and DeepMind certainly added new elements to the business model, its smart speakers and push into the connected car offer more engagement points beyond the traditional user interface, and Maps is an ongoing project which never seems to get old.
DeepMind, the U.K.-based AI research subsidiary acquired by Alphabet in 2014 for $500 million, today detailed ecological research its science team is conducting to develop AI systems that’ll help study the behavior of animal species in Tanzania’s Serengeti National Park. It hopes to expedite the analysis of data from hundreds of motion-detecting field cameras, which have captured millions of images since they were deployed by the Serengeti Lion Research program over nine years ago. “The Serengeti is one of the last remaining sites in the world that hosts an intact community of large mammals … As human encroachment around the park becomes more intense, these species are forced to alter their behaviours in order to survive,” wrote DeepMind in a blog post. “Increasing agriculture, poaching, and climate abnormalities contribute to changes in animal behaviors and population dynamics, but these changes have occurred at spatial and temporal scales which are difficult to monitor using traditional research methods.” For nearly a decade, conservationists have tapped the aforementioned cameras to keep tabs on animals within the park’s core, enabling them to study their distribution and demography. The images aren’t of much use absent annotations, however, which is why it’s fallen to volunteers to identify species by hand using a web-based tool called Zooniverse.
The UK government has announced it’s rerouting £250M (~$300M) in public funds for the country’s National Health Service (NHS) to set up an artificial intelligence lab that will work to expand the use of AI technologies within the service. The lab, which will sit within a new NHS unit tasked with overseeing the digitisation of the health and care system (aka NHSX), will act as an interface for academic and industry experts, including potentially startups, encouraging research and collaboration with NHS entities (and data) — to drive health-related AI innovation and the uptake of AI-driven healthcare within the NHS. Last fall the then newly appointed health secretary, Matt Hancock, set out a tech-first vision of future healthcare provision, saying he wanted to transform NHS IT so it can accommodate “healthtech” to support “preventative, predictive and personalised care”. In a press release announcing the AI lab, the Department of Health and Social Care suggested it would seek to tackle “some of the biggest challenges in health and care, including earlier cancer detection, new dementia treatments and more personalised care”. Other suggested areas of focus include: improving cancer screening by speeding up the results of tests, including mammograms, brain scans, eye scans and heart monitoring
At his Transform 2019 talk last month, Lyft’s head of product and machine learning, Gil Arditi, focused on this research and ripped through example after example of research in the past year that’s propelled head-turning advances in AI and machine learning. Below are just a few of the examples he covered (watch his entire talk above). OpenAI Five has used the technology to defeat humans in Dota 2, a multiplayer online battle arena video game — and has won 4,075 games against human players for a victory rate of 99.4%. To accomplish those achievements, Arditi said, “OpenAI Five trained for more than 45,000 years of gameplay, and AlphaStar took more than $26 million in the monetized value of the compute resources they used.” Until recently, Google-developed BERT was the state of the art for a wide range of problems in NLP, including sentence classification, sentence similarity, and question answering. Arditi asserts there’s a new contender, called XLNet, that emerged from a research paper several weeks ago.
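As a quick sanity check on those figures, a 99.4% victory rate across 4,075 wins implies roughly 4,100 matches in total — so OpenAI Five lost only around 25 games:

```python
wins = 4075
win_rate = 0.994

# wins = total_games * win_rate, so invert to recover the match count.
total_games = wins / win_rate
losses = total_games - wins

print(round(total_games), round(losses))  # → 4100 25
```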
London-based DeepMind, owned by Google parent company Alphabet, is working on a number of health care-based projects, and on Wednesday it published its latest research showing how doctors may be able to predict a quick-onset, deadly condition and save more patient lives. As if it's not bad enough being admitted to hospital for one illness or injury, in-patients in medical facilities are also at risk of developing secondary conditions that can pose serious threats to their health. Among them, acute kidney disease claims the lives of 500,000 US patients every year, according to the Centers for Disease Control and Prevention. Acute kidney injuries can be deadly, and they pose a real problem for physicians. But using AI, DeepMind has a solution that could help doctors spot potential kidney injuries 48 hours before they occur, giving them valuable time to get ahead of the problem and potentially allowing them to prevent the condition in up to 30% of patients. In a study published in the journal Nature, DeepMind outlined work it conducted with the US Department of Veterans Affairs in which it used anonymized data to develop machine learning tools that correctly predicted nine out of 10 patients who later went on to require dialysis.
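The “nine out of 10” figure is a sensitivity (recall): of the patients who truly went on to need dialysis, the fraction the model flagged in advance. A toy illustration of how such a number is computed — the cohort below is invented, not the study’s data:

```python
def sensitivity(labels, predictions):
    """Fraction of true positive cases that the model flagged.

    labels: 1 if the patient later required dialysis, else 0.
    predictions: 1 if the model raised an advance alert, else 0.
    """
    flagged = [p for label, p in zip(labels, predictions) if label == 1]
    return sum(flagged) / len(flagged)


# Invented cohort: 10 patients who needed dialysis (model catches 9),
# plus 5 who did not (one of whom triggers a false alarm).
labels      = [1] * 10 + [0] * 5
predictions = [1] * 9 + [0] + [0, 1, 0, 0, 0]

print(sensitivity(labels, predictions))  # → 0.9
```

Note that sensitivity says nothing about false alarms; clinical deployment also hinges on how many alerts fire for patients who never deteriorate.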
DeepMind, the Google-owned UK AI research firm, has published a research letter in the journal Nature in which it discusses the performance of a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury (AKI). The company says its model is able to accurately predict that a patient will develop AKI “within a clinically actionable window” up to 48 hours in advance. In a blog post trumpeting the research, DeepMind couches it as a breakthrough — saying the paper demonstrates artificial intelligence can predict “one of the leading causes of avoidable patient harm” up to two days before it happens. “This is our team’s biggest healthcare research breakthrough to date,” it adds, “demonstrating the ability to not only spot deterioration more effectively, but actually predict it before it happens.” Even a surface read of the paper raises some major caveats, though. Not least that the data used to train the model skews overwhelmingly male: 93.6%.
Acute kidney injury, or AKI, is a condition in which the kidneys stop filtering waste products from the blood. Worse still, because it’s difficult to detect, AKI kills upwards of 400,000 people annually in both countries combined, despite the more than $1.2 billion (£1 billion) the U.K.’s National Health Service (NHS) spends treating it each year. Over the course of two separate joint studies conducted with the U.S. Department of Veterans Affairs and The Royal Free London NHS Foundation Trust (RFL), DeepMind’s health care division — DeepMind Health — investigated ways to flag AKI warning signs clinicians might otherwise fail to spot. “Over the last few years, our team at DeepMind has focused on finding an answer to the complex problem of avoidable patient harm, building digital tools that can spot serious conditions earlier and helping doctors and nurses deliver faster, better care to patients in need,” wrote DeepMind. “These results comprise the building blocks for our long-term vision of preventative healthcare, helping doctors to intervene in a proactive, rather than reactive, manner.” DeepMind reaffirmed that it’ll cede oversight of Streams to Google Health in the coming months, as previously announced, and that the development team will work alongside colleagues at Google to create tools designed to support health care practitioners.
DeepMind is teaming up with Waymo, a fellow unit of Google parent Alphabet, to train self-driving cars using the same method that was created to teach artificial intelligence bots how to play StarCraft II. Waymo’s self-driving vehicles utilize neural networks to carry out tasks such as detecting objects on the road, predicting how other cars will behave, and planning their next moves. Training the neural networks has required “weeks of fine-tuning and experimentation, as well as enormous amounts of computational power,” DeepMind said in the blog post where it announced the collaboration with Waymo. DeepMind and Waymo joined forces to create a more efficient process for training and refining the algorithms of self-driving vehicles, utilizing population-based training. This technique, inspired by the concept of biological evolution, speeds up the learning process for neural networks by focusing on the “fittest” specimens — the AI models that are the most efficient at carrying out tasks.
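In outline, population-based training runs a population of workers in parallel and periodically has weak members “exploit” (copy the settings of a stronger member) and “explore” (randomly perturb the copy). The toy below tunes a single scalar hyperparameter toward a hypothetical optimum of 0.1; the objective function, perturbation range, and population size are invented for illustration and are not Waymo’s or DeepMind’s actual setup.

```python
import random


def pbt(population_size=4, generations=50, seed=0):
    """Toy population-based training for one scalar hyperparameter.

    Each generation, the weakest member of the population is replaced
    by a perturbed copy of the strongest member (exploit-and-explore).
    """
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(population_size)]

    def fitness(p):
        # Hypothetical objective: performance peaks at p = 0.1.
        return -(p - 0.1) ** 2

    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        worst, best = ranked[0], ranked[-1]
        # Exploit: the worst member copies the best member's value.
        # Explore: the copy is perturbed by a random factor.
        population[population.index(worst)] = best * rng.uniform(0.8, 1.2)

    return max(population, key=fitness)


best = pbt()
print(best)  # drifts toward the optimum, 0.1
```

The key property, mirrored here, is that good settings propagate through the population while training continues — unlike grid search, no worker is wasted on a configuration already known to be weak.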
Finding new ways to train neural networks is becoming more important all the time, especially as the race to develop autonomous cars heats up. This has led developers to come up with some reasonably inventive ways of getting their networks up to speed, and one of them seems to involve the game StarCraft II. What could a nearly decade-old game possibly have to do with training state-of-the-art neural networks? More than you'd think, actually. See, according to a report published Thursday by MIT Technology Review, the techniques that some people are applying to make the game's AI smarter and harder to beat can be carried over to neural network development. In StarCraft II, you're tasked with controlling dozens of individual units, each with unique skills, all while managing resources and fighting an opponent who is trying to wipe you out.