Thanks to advances in machine learning, computers have gotten really good at identifying what’s in photographs.
They started beating humans at the task years ago, and can now even generate fake images that look eerily real.
While the technology has come a long way, it's still far from foolproof.
In particular, researchers have found that image detection algorithms remain susceptible to a class of problems called adversarial examples.
By altering just a handful of pixels, a computer scientist can fool a machine learning classifier into thinking that, say, a picture of a rifle is actually a picture of a helicopter.
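The pixel-altering trick described above can be sketched with the fast gradient sign method (FGSM), one standard recipe for crafting adversarial examples. The sketch below is purely illustrative: the "classifier" is a hypothetical tiny logistic model over a 64-"pixel" vector standing in for the deep image networks targeted in real attacks, and all names and values are made up for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # fixed toy classifier: predicts class 1 when w @ x > 0
x = 0.05 * w              # an input the model confidently labels as class 1

# Gradient of the logistic loss (true label y = 1) with respect to the input:
# d/dx [-log sigmoid(w @ x)] = (sigmoid(w @ x) - 1) * w
grad_x = (sigmoid(w @ x) - 1.0) * w

# FGSM: shift every pixel by a small eps in the direction that raises the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x) > 0.5)      # True: original input classified as class 1
print(sigmoid(w @ x_adv) > 0.5)  # False: a tiny per-pixel change flips the label
```

Taking only the *sign* of the gradient caps the change to each pixel at eps, which is why adversarial images can look indistinguishable from the originals while still flipping the model's answer.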
Organizations like Google and the US Army have studied adversarial examples, but what exactly causes them is still largely a mystery.