It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even, as the Cambridge Analytica scandal showed, use public social media data to predict more sensitive traits like someone's political orientation.

Those same machine-learning applications, however, also suffer from a strange sort of blind spot that humans don't—an inherent bug that can make an image classifier mistake a rifle for a helicopter, or make an autonomous vehicle blow through a stop sign.

Just a few small tweaks to an image, or a few decoy entries added to a dataset, can fool a system into reaching entirely wrong conclusions.
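To make the mechanism concrete, here is a minimal sketch of one standard way such an image tweak is computed, the fast gradient sign method; the article doesn't name a specific technique, and the model, image, label, and epsilon values below are placeholder assumptions, not details from the research described here.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # Compute the classifier's loss gradient with respect to the
        # input pixels themselves, not the model weights.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel a small step in the direction that most
        # increases the loss; the result typically looks unchanged to
        # a human but can flip the model's prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

The same gradient-following idea, run in reverse against an attacker's model, is what lets researchers treat this weakness as a defense.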

Now privacy-focused researchers, including teams at the Rochester Institute of Technology and Duke University, are exploring whether that Achilles' heel could also protect your information.

"Attackers are increasingly using machine learning to compromise user privacy," says Neil Gong, a Duke computer science professor.

"Attackers share in the power of machine learning and also its vulnerabilities.
