That is to say, the models can be deceived by specially crafted patches attached to real-world targets.

Most research on adversarial attacks involves rigid objects like eyeglass frames, stop signs, or cardboard.

In a preprint paper, the researchers claim their attack achieves success rates of up to 79% in the digital world and 63% in the physical world against the popular YOLOv2 model.
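
A success rate of this kind is typically measured as the fraction of frames in which the detector no longer finds the person carrying the attack. The snippet below is a minimal sketch of that evaluation, not the authors' code; the `detect_persons` function and the confidence threshold are illustrative assumptions.

```python
# Minimal sketch: estimating an attack success rate against a person detector.
# `detect_persons(frame)` is a hypothetical function returning a list of
# (bounding_box, confidence) pairs; `frames` are images of a person wearing
# the adversarial pattern (rendered frames for the digital setting, camera
# frames for the physical one).

def attack_success_rate(frames, detect_persons, conf_threshold=0.5):
    """Fraction of frames in which no person is detected above the threshold."""
    evaded = 0
    for frame in frames:
        detections = detect_persons(frame)
        if not any(score >= conf_threshold for _, score in detections):
            evaded += 1
    return evaded / len(frames)
```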

The university team also speculated that their technique could be combined with clothing simulation to design such a T-shirt.

The researchers behind the study note that a number of transformations are commonly applied when crafting adversarial examples to fool classifiers, including scaling, translation, rotation, brightness and saturation adjustment, and added noise.

But they say these are largely insufficient to model the way cloth deforms as a moving person changes pose.
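
To see the gap, consider what that standard transformation set looks like in code. The sketch below applies the scaling, translation, rotation, brightness, saturation, and noise adjustments the researchers list to a patch image; the parameter ranges and function names are illustrative assumptions, not the authors' implementation. Every operation here is affine or photometric, so none of them can bend or wrinkle the patch the way fabric does on a moving body.

```python
# Sketch of the conventional transformation set applied to an adversarial
# patch before pasting it into a training image. Parameter ranges are
# illustrative. Note: all transforms are rigid/affine or photometric; they
# do not capture the non-rigid deformation of cloth on a moving person.

import random
import torch
import torchvision.transforms.functional as TF

def random_conventional_transform(patch: torch.Tensor) -> torch.Tensor:
    """patch: (3, H, W) float tensor in [0, 1]."""
    # Scaling, translation, rotation (affine).
    patch = TF.affine(
        patch,
        angle=random.uniform(-20.0, 20.0),                          # rotation in degrees
        translate=[random.randint(-10, 10), random.randint(-10, 10)],
        scale=random.uniform(0.8, 1.2),
        shear=0.0,
    )
    # Brightness and saturation adjustment (photometric).
    patch = TF.adjust_brightness(patch, random.uniform(0.8, 1.2))
    patch = TF.adjust_saturation(patch, random.uniform(0.8, 1.2))
    # Additive noise.
    patch = (patch + 0.03 * torch.randn_like(patch)).clamp(0.0, 1.0)
    return patch
```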
