But there’s a darker side to this transformation: these learning systems remain remarkably easy to fool using so-called “adversarial attacks.” Even worse, leading researchers acknowledge they don’t really have a solution for stopping mischief makers from wreaking havoc on these systems. “Can we defend against these attacks?” asked Nicolas Papernot, a research scientist at Google Brain, the company’s deep learning artificial intelligence research team. At its most basic, an adversarial attack means feeding a machine learning model an input containing some element designed specifically to make the model identify something incorrectly. In the classic example, an attacker overlays a pixelated perturbation, not necessarily visible to the human eye, onto an image of a panda. Image recognition in machine learning, while it has greatly advanced, still remains rudimentary in this respect, and unfortunately that makes the jobs of hackers much easier.
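The panda scenario is typically produced with a gradient-based perturbation. Below is a minimal sketch of that idea in PyTorch, assuming a pretrained torchvision classifier and an image tensor scaled to [0, 1]; it illustrates the general technique (the fast gradient sign method), not the specific experiments Papernot or Google Brain describe.

```python
# Minimal FGSM-style sketch: nudge an image just enough to raise the model's
# loss, while keeping the change effectively invisible to a human viewer.
# Assumes a recent torchvision; the input image here is a random placeholder.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.005):
    """Return a copy of `image` (1x3xHxW, values in [0, 1]) perturbed to fool `model`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss; a small epsilon keeps the
    # perturbation imperceptible.
    return (image + epsilon * image.grad.sign()).detach().clamp(0.0, 1.0)

# Example with a random placeholder image and ImageNet class 388 ("giant panda").
adv = fgsm_perturb(torch.rand(1, 3, 224, 224), torch.tensor([388]))
```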
Brainiacs at Northeastern University, MIT, and IBM Research in the US teamed up to create the 1980s-esque fashion statement, according to a paper quietly emitted via arXiv in mid-October. The pattern on the shirt has been carefully designed to manipulate just the right parts of a detection system’s neural network so that it misidentifies the wearer, which means the wearer could slip past visitor or intruder detection systems, and so on. “We highlight that the proposed adversarial T-shirt is not just a T-shirt with printed adversarial patch for clothing fashion, it is a physical adversarial wearable designed for evading person detectors in a real world,” the paper said. The two convolutional neural networks tested, YOLOv2 and Faster R-CNN, have been trained to identify objects; in this case, the adversarial T-shirt helped a person evade detection.
That is to say, the models can be deceived by specially crafted patches attached to real-world targets. Most research on adversarial attacks involves rigid objects like eyeglass frames, stop signs, or cardboard. In a preprint paper, the researchers claim the T-shirt achieves success rates of up to 79% in the digital world and 63% in the physical world against the popular YOLOv2 model. Incidentally, the university team speculated that their technique could be combined with a clothing simulation to design such a T-shirt. The researchers note that a number of adversarial transformations are commonly used to fool classifiers, including scaling, translation, rotation, brightness, noise, and saturation adjustment, but they say these are largely insufficient to model the deformation of cloth caused by a moving person’s pose changes.
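For context, this is roughly what the transformation-based patch optimization the researchers contrast their work with looks like: a patch is trained under random affine and color transformations so it keeps suppressing a detector’s “person” score. The toy detector, paste helper, and random data below are hypothetical placeholders, not the T-shirt paper’s cloth-deformation pipeline.

```python
# Rough sketch of generic "expectation over transformation" patch training.
import torch
import torchvision.transforms as T

def person_confidence(images: torch.Tensor) -> torch.Tensor:
    # Placeholder "detector": returns one person-confidence score per image.
    return torch.sigmoid(images.mean(dim=(1, 2, 3)))

def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    # Placeholder paste: drop the patch onto a fixed region of each image.
    out = images.clone()
    out[:, :, :patch.shape[1], :patch.shape[2]] = patch
    return out

patch = torch.rand(3, 100, 100, requires_grad=True)   # the printable pattern
optimizer = torch.optim.Adam([patch], lr=0.01)

# Only rigid and photometric transformations: the kind the authors argue cannot
# capture how cloth deforms as a person moves.
augment = T.Compose([
    T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2),
                   interpolation=T.InterpolationMode.BILINEAR),
    T.ColorJitter(brightness=0.3, saturation=0.3),
])

for _ in range(100):
    images = torch.rand(8, 3, 416, 416)               # stand-in person photos
    loss = person_confidence(apply_patch(images, augment(patch))).mean()
    optimizer.zero_grad()
    loss.backward()                                    # lower confidence = better evasion
    optimizer.step()
    patch.data.clamp_(0, 1)                            # keep pixel values printable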
Machine learning can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even, as Facebook’s Cambridge Analytica scandal showed, use public social media data to predict more sensitive traits like someone’s political orientation. Those same machine-learning applications, however, also suffer from a strange sort of blind spot that humans don’t: an inherent bug that can make an image classifier mistake a rifle for a helicopter, or make an autonomous vehicle blow through a stop sign. Just a few small tweaks to an image, or a few additions of decoy data to a database, can fool a system into coming to entirely wrong conclusions. Now privacy-focused researchers, including teams at the Rochester Institute of Technology and Duke University, are exploring whether that Achilles’ heel could also protect your information. “Attackers are increasingly using machine learning to compromise user privacy,” says Neil Gong, a Duke computer science professor. “Attackers share in the power of machine learning and also its vulnerabilities.”
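As a toy illustration of the “decoy data” point, the sketch below (not taken from the Duke or RIT work) flips a fraction of training labels in a synthetic scikit-learn dataset and compares a simple classifier’s test accuracy before and after the injection.

```python
# Toy sketch: measure how mislabeled "decoy" training data degrades a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Flip the labels of 10% of the training points to simulate injected decoy data.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=len(y_tr) // 10, replace=False)
y_decoy = y_tr.copy()
y_decoy[flip] = 1 - y_decoy[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_decoy)
print("accuracy, clean training set:", clean.score(X_te, y_te))
print("accuracy, decoy-injected set:", poisoned.score(X_te, y_te))
```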
This is the fashion world satirized by Zoolander and Sacha Baron Cohen’s character Brüno, where wearability and even sartorial attractiveness are deprioritized in favor of crazy designs and artistic boundary pushing. Kate Rose’s clothing line Adversarial Fashion is as confusing as anything you’ll find on any catwalk in the world. The American psychologist Abraham Maslow once pointed out that to a person who has only a hammer, there is a tendency for everything to look like a nail. Rose and a friend had been discussing the rise of automatic number-plate recognition technology, which has been widely adopted in the U.S. at city, county, state and federal levels. As a person drives around and is captured by multiple cameras, these systems provide a constantly updated means of monitoring their approximate (or even specific) location. “It was pulling in data from misclassified things like billboards and picket fences.”
The garments in the Adversarial Fashion collection are covered with license plate images that trigger automated license plate readers, or ALPRs, to inject junk data into systems used to monitor and track civilians. ALPRs, which are typically mounted on street poles, streetlights, highway overpasses and mobile trailers, use networked surveillance cameras and image recognition to track license plate numbers, along with location, date and time. Hacker and fashion designer Kate Rose showed off her inaugural line at the DefCon cybersecurity conference in Las Vegas over the weekend. It was inspired by a conversation with a friend who works at the Electronic Frontier Foundation about the “low specificity” or inaccuracy of a lot of plate readers on police cars. The Adversarial Fashion garments, she said, highlight the need to make computer-controlled surveillance less invasive and harder to use without human oversight. “A person walking along the sidewalk or in a crosswalk is often close enough, as the readers take in a pretty large visual field, and have ... problems with specificity.”
When it comes to image recognition tech, it’s still remarkably easy to fool the machines. And while it’s some good comedy when a neural network mistakes a butterfly for a washing machine, the consequences of this idiocy are pretty nightmarish when you think about rolling these flawed systems out into the real world. Researchers from the University of California, Berkeley, the University of Washington, and the University of Chicago published a paper this month to really drive home the weaknesses of neural networks when it comes to correctly identifying an image. The images selected for the dataset were pulled from millions of user-labelled animal images on the website iNaturalist, as well as objects tagged by users on Flickr, according to the paper. For the dragonfly category, for example, they downloaded 81,413 images from iNaturalist and filtered that down to 8,925. An “algorithmically suggested shortlist” spit out 1,452 images, and from there, they manually selected 80.
Computer vision has improved massively in recent years, but it’s still capable of making serious errors. So much so that there’s a whole field of research dedicated to studying pictures that are routinely misidentified by AI, known as “adversarial images.” Think of them as optical illusions for computers. But while a lot of attention in this field is focused on pictures that have been specifically designed to fool AI (like the 3D-printed turtle that Google’s algorithms mistake for a gun), these sorts of confusing visuals occur naturally as well. To demonstrate this, a group of researchers from UC Berkeley, the University of Washington, and the University of Chicago created a dataset of some 7,500 “natural adversarial examples.” They tested a number of machine vision systems on this data and found that their accuracy dropped by as much as 90 percent, with the software able to correctly identify only two or three percent of images in some cases. In an accompanying paper, the researchers say they hope the data will help train more robust vision systems.
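The kind of benchmark run described here can be sketched as follows, assuming the images sit in an ImageFolder-style directory whose folder ordering matches the classifier’s class indices; the path and model choice are placeholders rather than the researchers’ exact protocol.

```python
# Sketch only: measure a pretrained classifier's top-1 accuracy on a folder of
# "hard" images. The directory path is a placeholder.
import torch
from torchvision import datasets, models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()

dataset = datasets.ImageFolder("natural_adversarial_examples/",
                               transform=weights.transforms())
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"top-1 accuracy: {correct / total:.1%}")
```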
While these errors can sometimes be the result of the learning curve required for artificial intelligence, it is becoming apparent that a far more serious problem poses an increasing risk: adversarial data. For the uninitiated, adversarial data describes a situation in which human users intentionally supply an algorithm with corrupted information. UC Berkeley professor Dawn Song notably tricked a self-driving car into reading a stop sign as a sign saying the speed limit is 45 miles per hour; a malicious attack of this nature could easily result in a fatal accident. Such attacks work largely because of the way algorithms can “see” things in the data that we humans are unable to discern. The misidentification occurred because the AI was keying on an apparently imperceptible set of pixels that led it to label the photos incorrectly.
Thanks to advances in machine learning, computers have gotten really good at identifying what’s in photographs. They started beating humans at the task years ago, and can now even generate fake images that look eerily real. While the technology has come a long way, it’s still not entirely foolproof. In particular, researchers have found that image detection algorithms remain susceptible to a class of problems called adversarial examples. By altering a handful of pixels, a computer scientist can fool a machine learning classifier into thinking, say, a picture of a rifle is actually one of a helicopter. Organizations like Google and the US Army have studied adversarial examples, but what exactly causes them is still largely a mystery.
In some cases, the CNNs had overcome the adversarial images and correctly applied labels, but in other instances they had whiffed. What the researchers found is that humans are quite good at intuiting a machine’s logic, even when that logic returns a seemingly ridiculous error. “But they make mistakes that we usually don’t make,” he said. When he encountered some of those apparently silly errors himself, he noticed there actually seemed to be a logic behind them. After looking at an image a CNN had misclassified as an armadillo, let’s say, he could understand why an AI might perceive it as “armadillo-ish.” With this in mind, Zhou and Firestone designed a series of experiments to probe further. Each of the eight experiments in the study involved 200 participants, save for one that had 400.
For this second installment in what will be a regular series of conversations exploring the ethics of the technology industry, I was delighted to be able to turn to one of our current generation’s most important young philosophers of tech, James Williams. Around a decade ago, Williams won the Founder’s Award, Google’s highest honor for its employees. The inaugural winner of Cambridge University’s $100,000 “Nine Dots Prize” for original thinking, Williams was recognized for the fruits of his doctoral research at Oxford University, on how “digital technologies are making all forms of politics worth having impossible, as they privilege our impulses over our intentions and are designed to exploit our psychological vulnerabilities in order to direct us toward goals that may or may not align with our own.” In 2018, he published his brilliantly written book Stand Out of Our Light, an instant classic in the field of tech ethics. It’s a chilling prospect, and yet somehow, if you read to the end of the interview, you’ll see Williams manages to end on an inspiring and hopeful note. As he puts it in the interview: “It’s the feeling that, you know, the car’s already been built, the dashboard’s been calibrated, and now to move humanity forward you just kind of have to hold the wheel straight.” “I spent my formative years in a town called Abilene, Texas, where my father was a university professor.”
Some tips on how to avoid miscreants deceiving your code: Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study. It’s hoped the research will inform and persuade AI developers to make their smart software more robust against these transferable attacks, preventing malicious images, text, or audio that hoodwinks one trained model from tricking another similar model. Neural networks are easily deceived by what are called adversarial attacks, in which input data producing one output is subtly changed to produce a completely different one. For example, you could show a gun to an object classifier that correctly guesses it’s a gun, and then change just a small part of its coloring to fool the AI into thinking it’s a red-and-blue-striped golfing umbrella. Adding a few pixels here and there causes an image of a banana to be classified as a toaster.
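To make the idea of a transferable attack concrete, here is a hedged sketch (not the study’s procedure): craft a perturbation against one “surrogate” network, then check how a separately trained “target” network labels the same perturbed input.

```python
# Hedged transfer-attack sketch: perturb against a surrogate model, test on a
# different target model. The input image here is a random placeholder.
import torch
import torch.nn.functional as F
from torchvision import models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval()

def craft(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03):
    """One gradient-sign step against the surrogate model."""
    image = image.clone().requires_grad_(True)
    F.cross_entropy(surrogate(image), label).backward()
    return (image + epsilon * image.grad.sign()).detach()

image = torch.rand(1, 3, 224, 224)   # placeholder; a real test uses a photo
label = torch.tensor([954])          # e.g. ImageNet's "banana" class index
adv = craft(image, label)

# If the attack transfers, the target model's prediction also shifts away from
# the original label, even though it never saw the surrogate's gradients.
print("surrogate says:", surrogate(adv).argmax(1).item())
print("target says:   ", target(adv).argmax(1).item())
```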
Tuesday, June 26, Rockville, MD - Today, Insilico Medicine, Inc., a Rockville-based next-generation artificial intelligence company specializing in the application of deep learning for target identification, drug discovery and aging research, announces the publication of a new research paper, "Reinforced Adversarial Neural Computer for De Novo Molecular Design," in The Journal of Chemical Information and Modeling. The authors present an original deep neural network architecture named Reinforced Adversarial Neural Computer (RANC) for the de novo design of novel small-molecule organic structures, combining generative adversarial network (GAN) and reinforcement learning (RL) methods. Although computer-aided approaches in this field are well studied, the application of deep learning methods in this research area is still at an early stage and faces many challenges. The comparative results show that RANC, trained on the SMILES string representation of the molecules, outperforms the other methods on several metrics relevant to drug discovery: the number of unique structures generated, the proportion passing medicinal chemistry filters and the Muegge criteria, and high quantitative estimate of drug-likeness (QED) scores. RANC is also able to generate structures that match the distributions of key chemical descriptors (MW, logP, TPSA) and the lengths of the SMILES strings in the training dataset. Therefore, RANC can reasonably be regarded as a promising starting point for developing novel molecules with activity against different biological targets or pathways.
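As a hedged illustration of the descriptor-based metrics mentioned above (QED, MW, logP, TPSA), the snippet below scores a few candidate SMILES strings with RDKit; it is a post-hoc filtering check for illustration only, not part of Insilico Medicine’s RANC architecture.

```python
# Illustrative only: score candidate SMILES strings on drug-likeness (QED) and
# the descriptors mentioned above (MW, logP, TPSA) using RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

generated = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]  # placeholder generator output

for smiles in generated:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:          # invalid SMILES string: discard the candidate
        continue
    print(smiles,
          f"QED={QED.qed(mol):.2f}",
          f"MW={Descriptors.MolWt(mol):.1f}",
          f"logP={Descriptors.MolLogP(mol):.2f}",
          f"TPSA={Descriptors.TPSA(mol):.1f}")
```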