Computer vision has improved massively in recent years, but it’s still capable of making serious errors.

So much so that there’s a whole field of research dedicated to studying pictures that are routinely misidentified by AI, known as “adversarial images.” Think of them as optical illusions for computers.

But while a lot of attention in this field is focused on pictures that have been specifically designed to fool AI (like this 3D-printed turtle which Google’s algorithms mistake for a gun), these sorts of confusing visuals occur naturally as well.

To demonstrate this, a group of researchers from UC Berkeley, the University of Washington, and the University of Chicago created a dataset of some 7,500 “natural adversarial examples.” They tested a number of machine vision systems on this data and found that accuracy dropped by as much as 90 percent, with the software able to correctly identify just two or three percent of images in some cases.
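As a rough illustration of the kind of evaluation described, the sketch below measures the top-1 accuracy of an off-the-shelf pretrained classifier on a folder of such images. The directory path is a placeholder, and it assumes the images are arranged in one subfolder per class with indices matching the model’s output labels, which is a simplification of how the released dataset is actually organized.

```python
# Minimal sketch: top-1 accuracy of a pretrained classifier on a folder of
# "natural adversarial" images. Assumes an ImageFolder-style layout whose
# class indices line up with the model's outputs (a simplification).
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "path/to/natural_adversarial_images" is a placeholder, not a real path.
dataset = datasets.ImageFolder("path/to/natural_adversarial_images", preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = models.resnet50(pretrained=True).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Top-1 accuracy: {100 * correct / total:.1f}%")
```

On ordinary test images a model like this scores well; the researchers’ finding is that accuracy collapses to the low single digits when the same kind of evaluation is run on their curated examples.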


In an accompanying paper, the researchers say they hope the dataset will help train more robust vision systems.
