Researchers at UC Berkeley have published the Natural Adversarial Examples dataset, consisting of 7,500 images of natural phenomena selected to fool image classification algorithms. Adversarial examples significantly reduce a classifier’s accuracy through subtle visual elements that convince the algorithm it is seeing, for example, a manhole cover rather than a dragonfly. Testing a classifier’s resilience to adversarial examples can help researchers overcome common flaws in classifier design, such as over-reliance on color or background cues.
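As a rough illustration of how such a test might look, the sketch below evaluates a pretrained classifier on a local folder of adversarial images using PyTorch and torchvision. The directory name "imagenet-a" and the assumption that its subfolder labels already line up with the model's ImageNet class indices are illustrative assumptions, not details taken from the dataset's documentation.

```python
# Minimal sketch: measure a pretrained classifier's top-1 accuracy on a
# folder of natural adversarial examples. Assumes the dataset is stored
# locally as "imagenet-a/" with one subfolder per class (hypothetical
# layout), and that folder label indices already match the model's
# ImageNet output indices; in practice a label remapping step is needed.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained ResNet.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Top-1 accuracy on adversarial images: {correct / total:.1%}")
```

Comparing this number against the same model's accuracy on ordinary validation images gives a simple measure of how much the adversarial set degrades the classifier.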