We present a technique for defending object detection networks against adversarial patch attacks (APAs). APAs introduce carefully crafted, overt regions into an image in order to fool the network into producing false detections. We leverage adversarial training via a conditional Generative Adversarial Network (GAN) that seeks to produce effective attacks on the object detector whilst simultaneously training the detector to resist those attacks. We report experiments with several common detection networks (Faster/Mask R-CNN and RetinaNet). We show that our training-time defence offers resilience against our GAN-generated APAs, and that this resilience also transfers to other, unseen APAs targeting object detectors.
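The training procedure described above amounts to an alternating min-max game between a patch generator and the detector. Below is a minimal PyTorch sketch of such a loop, assuming a torchvision Faster R-CNN detector; the `PatchGenerator` architecture, the fixed-corner placement in `apply_patch`, the use of the summed detection loss as the attack objective, and all hyper-parameters are illustrative assumptions, not the paper's exact method (which may, for instance, use a discriminator or a loss targeting false detections specifically).

```python
# Minimal sketch of GAN-style adversarial training for an object detector.
# Illustrative only: architecture, patch placement, and losses are assumptions.
import torch
import torch.nn as nn
import torchvision

PATCH = 64  # assumed square patch size in pixels


class PatchGenerator(nn.Module):
    """Tiny conditional generator: maps an image to a PATCH x PATCH patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(PATCH),  # condition on a downsampled view of the image
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, img):
        return self.net(img.unsqueeze(0)).squeeze(0)


def apply_patch(img, patch, y=0, x=0):
    """Paste the generated patch into the image at a fixed corner (an assumption)."""
    out = img.clone()
    out[:, y:y + PATCH, x:x + PATCH] = patch
    return out


# weights=None gives a randomly initialised detector for a self-contained demo;
# in practice one would start from pretrained weights.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
generator = PatchGenerator()
opt_d = torch.optim.SGD(detector.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
detector.train()  # in train mode, the detector returns a dict of losses


def train_step(images, targets):
    # 1) Generator step: maximise the detector's loss on patched images.
    patched = [apply_patch(im, generator(im)) for im in images]
    loss_g = -sum(detector(patched, targets).values())
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # 2) Detector step: minimise its loss on freshly patched images;
    #    the patch is detached so only detector parameters are updated.
    patched = [apply_patch(im, generator(im).detach()) for im in images]
    loss_d = sum(detector(patched, targets).values())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_g.item(), loss_d.item()


# Example with dummy data (boxes in [x1, y1, x2, y2] format):
imgs = [torch.rand(3, 256, 256)]
tgts = [{"boxes": torch.tensor([[10., 10., 100., 100.]]),
         "labels": torch.tensor([1])}]
print(train_step(imgs, tgts))
```

The alternation mirrors standard GAN training: the generator ascends the detection loss to craft stronger patches, while the detector descends it on the patched images, so each player optimises only its own parameters.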