Some tips on how to stop miscreants from deceiving your code

Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study.

It's hoped the research will inform and persuade AI developers to make their smart software more robust against these transferable attacks, preventing malicious images, text, or audio that hoodwink one trained model from also tricking another, similar model.

Neural networks are easily deceived by what are called adversarial attacks, in which input data producing one output is subtly changed so the network produces a completely different one.

For example, you could show a gun to an object classifier that correctly guesses it's a gun, and then change just a small part of its coloring to fool the AI into thinking it's a red-and-blue-striped golfing umbrella.

Adding a few pixels here and there causes an image of a banana to be classified as a toaster.
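To make the mechanism concrete, here is a minimal sketch of one well-known way to craft such perturbations, the Fast Gradient Sign Method (FGSM), using PyTorch. The choice of model, the epsilon value, and the "banana.jpg" path are illustrative assumptions rather than details from the article, and ImageNet normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a pretrained classifier (any ImageNet model would do for this sketch).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# "banana.jpg" is a placeholder path; substitute any image you have on disk.
image = preprocess(Image.open("banana.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Classify the clean image and take the top prediction as the "true" label.
logits = model(image)
label = logits.argmax(dim=1)

# FGSM: nudge every pixel a tiny step (epsilon) in the direction that
# increases the loss for the true label.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.02
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

# The perturbed image often gets a different, confidently wrong label.
adv_label = model(adversarial).argmax(dim=1)
print("clean prediction:", label.item(), "adversarial prediction:", adv_label.item())
```

The key point the article is making is that a perturbation like this, computed against one model, often fools other models trained for the same task as well.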
