It’s well established that object- and face-detecting algorithms are vulnerable to adversarial attacks, as a 2014 study by researchers at Google and New York University demonstrated. That is to say, the models can be deceived by specially crafted patches attached to real-world targets.
Most research on adversarial attacks involves rigid objects like eyeglass frames, stop signs, or cardboard. But scientists at Northeastern University and the MIT-IBM Watson AI Lab propose what they’re calling an “adversarial” T-shirt: one with a printed image that evades person detectors even when it’s deformed by a wearer’s changing pose. In a preprint paper, they claim it achieves success rates of up to 79% in the digital world and 63% in the physical world against the popular YOLOv2 model.
This resembles a study conducted by researchers at KU Leuven in Belgium earlier this year, which showed how patterns printed on patches worn around the neck could fool person-detecting AI. Incidentally, the KU Leuven team speculated that their technique could be combined with a clothing simulation to design just such a T-shirt.
The researchers behind the new study note that a number of transformations are commonly used to generate adversarial examples against classifiers, including scaling, translation, rotation, and adjustments to brightness, noise, and saturation. But they say these are largely insufficient to model the deformation of cloth caused by a moving person’s pose changes. Instead, they employed a data interpolation and smoothing technique called Thin Plate Spline (TPS), which models coordinate transformations with an affine component (which preserves points, straight lines, and planes) and a non-affine component, providing a means of learning adversarial patterns for non-rigid objects.
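To make that concrete, here is a minimal sketch, not the authors’ code, of how a TPS warp with an affine and a non-affine component can be fit from control points and then applied to the coordinates of a printed pattern. It uses SciPy’s radial basis function interpolator; the grid sizes and the random jitter standing in for cloth deformation are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points: a 4x4 grid on the flat shirt (source) and the
# same points as observed on the worn, deformed shirt (target).
grid = np.linspace(0.0, 1.0, 4)
src = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
rng = np.random.default_rng(0)
dst = src + 0.05 * rng.standard_normal(src.shape)  # stand-in for cloth deformation

# A thin-plate-spline kernel plus a degree-1 polynomial term yields the
# affine (polynomial) plus non-affine (radial) decomposition described above.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", degree=1)

# Warp the coordinates of the printed pattern (e.g., every pixel of the patch).
u = np.linspace(0.0, 1.0, 50)
pattern_coords = np.stack(np.meshgrid(u, u), axis=-1).reshape(-1, 2)
warped_coords = tps(pattern_coords)  # shape (2500, 2): where each point lands
```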
The researchers’ T-shirt bears a checkerboard pattern, where each intersection between two checkerboard grid regions serves as a control point for generating the TPS transformation.
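In practice, those control points have to be recovered from video frames. The snippet below is a hedged illustration, not the authors’ pipeline, using OpenCV’s checkerboard corner detector; the inner-corner grid size and the file name are assumptions made for the example.

```python
import cv2

# Hypothetical frame from a video of a person wearing the checkerboard shirt.
frame = cv2.imread("frame_0001.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the checkerboard's inner corners (grid size of 6x8 is an assumption).
found, corners = cv2.findChessboardCorners(gray, (6, 8))
if found:
    # Refine corner locations to sub-pixel accuracy, then flatten them into
    # an (N, 2) array of control points for the TPS fit sketched above.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    control_points = corners.reshape(-1, 2)
```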
In a series of experiments, the team collected two digital data sets and one real-world data set for training and testing the proposed attack. The training corpus contained 30 videos of a virtual moving person wearing the adversarial T-shirt across four different scenes, while the second digital set contained 10 videos captured in the same settings but with different virtual people. The third, real-world data set comprised 10 test videos of a moving person wearing a physical adversarial T-shirt.
In simulation, the researchers report attack success rates of 65% against a person-detecting R-CNN model and 79% against YOLOv2. In real-world tests, they say the adversarial T-shirt fools YOLOv2 and R-CNN upwards of 65% of the time, at least when only a single T-shirt wearer is in view. With two or more people in the frame, the success rate drops.
The approach likely wouldn’t fool more sophisticated object- and people-detecting models from the likes of Amazon Web Services, Google Cloud Platform, and Microsoft Azure, of course, and a 65% success rate still means the wearer is detected much of the time. But the researchers assert their work is a first step toward adversarial wearables that could help moving people evade detection.
“Since T-shirt[s] [are] non-rigid object[s] … deformation induced by pose change of a moving person is taken into account when generating adversarial perturbations,” wrote the paper’s coauthors. “Based on our [study], we hope to provide some implications on how the adversarial perturbations can be implemented with human clothing, accessories, paint on face, and other wearables.”
This article originally appeared in VentureBeat.