But there’s a darker side to this transformation: These learning systems remain remarkably easy to fool with so-called “adversarial attacks.” Worse, leading researchers acknowledge they don’t yet have a reliable way to stop mischief makers from wreaking havoc on these systems.
“Can we defend against these attacks?” asked Nicolas Papernot, a research scientist at Google Brain, the company’s deep learning artificial intelligence research team.
At its most basic, an adversarial attack introduces a carefully crafted input into a machine learning model, designed specifically to make the model identify something incorrectly.
In the middle, someone has overlaid a pattern of pixel-level noise, barely visible to the human eye, onto the panda image.
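The panda example describes a gradient-based perturbation of the kind popularized as the "fast gradient sign method" (FGSM). As a hedged sketch of the idea, here is a toy linear classifier in Python with numpy: the attacker nudges every input feature by a tiny amount in the direction that most increases the model's error, flipping the prediction even though the input barely changes. The weights, inputs, and the "panda"/"gibbon" labels below are illustrative assumptions, not values from any real model.

```python
import numpy as np

# Toy linear classifier: score = w . x
# positive score -> "panda", negative score -> "gibbon" (illustrative labels)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.9, 0.1, 0.4])  # clean input, classified as "panda"

def predict(x):
    return "panda" if w @ x > 0 else "gibbon"

# FGSM idea: shift each feature by epsilon in the direction that increases
# the loss. For this linear model the gradient of the score w.r.t. x is w,
# so stepping against the "panda" class means subtracting eps * sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)  # small, uniform per-feature change

print(predict(x))      # prints "panda"
print(predict(x_adv))  # prints "gibbon" -- the label flips
```

In a real attack the same step is applied to an image's pixels, with epsilon small enough that the added noise is imperceptible, which is exactly the overlay described above.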
First, while image recognition in machine learning has advanced greatly, it remains rudimentary.
Unfortunately, that makes the jobs of hackers much easier.