We present a method to create universal, robust, targeted adversarial image patches
in the real world. The patches are universal because they can be used to attack any
scene, robust because they work under a wide variety of transformations, and
targeted because they can cause a classifier to output any target class. These
adversarial patches can be printed, added to any scene, photographed, and presented
to image classifiers; even when the patches are small, they cause the classifiers
to ignore the other items in the scene and report a chosen target class.
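To make the attack described above concrete, the following is a minimal sketch (not the authors' exact implementation) of how such a universal, targeted patch could be optimized against a fixed classifier: the patch is pasted into randomly chosen scenes under random rotation, scale, and placement, and updated to maximize the probability of a chosen target class. All specific names here (the ResNet-50 victim model, `train_loader`, `apply_patch`, the target class index, and the hyperparameters) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of targeted adversarial-patch training; names and
# hyperparameters are assumptions for illustration only.
import random

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Frozen victim classifier (assumed ImageNet ResNet-50).
classifier = models.resnet50(pretrained=True).to(device).eval()
for p in classifier.parameters():
    p.requires_grad_(False)

target_class = 859          # assumed target label (e.g. an ImageNet class)
patch_size = 64             # patch edge length in pixels (assumed)
patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)


def apply_patch(images, patch):
    """Paste a randomly rotated, scaled, and translated copy of the patch onto
    each image; a crude stand-in for sampling over transformations and scenes.
    Assumes images are larger than the scaled patch."""
    out = images.clone()
    b, _, h, w = images.shape
    for i in range(b):
        angle = random.uniform(-45, 45)
        scale = random.uniform(0.8, 1.2)
        p = TF.rotate(patch, angle)
        side = max(1, int(patch_size * scale))
        p = TF.resize(p, [side, side])
        y = random.randint(0, h - side)
        x = random.randint(0, w - side)
        out[i, :, y:y + side, x:x + side] = p.clamp(0, 1)
    return out


# train_loader is assumed to yield batches of natural images (arbitrary scenes).
# The patch is optimized so the classifier reports the target class whenever the
# patch appears, regardless of the scene or the patch's placement.
for images, _ in train_loader:
    images = images.to(device)
    patched = apply_patch(images, patch)
    logits = classifier(patched)
    targets = torch.full((images.size(0),), target_class,
                         dtype=torch.long, device=device)
    loss = F.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)   # keep the patch in the valid pixel range
```

Because the patch is optimized in expectation over many scenes and transformations rather than for a single input, the resulting image can be printed and placed in new, unseen scenes while still steering the classifier toward the target class.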