
SAND Report

Deep Deception: Exemplars of Adversarial Machine Learning and Countermeasures Applicable to International Safeguards

Farley, David R.; Katinas, Christopher M.

As a follow-up to our more comprehensive report on Adversarial Machine Learning (AML), here we demonstrate AML attacks against the Limbo image database, which contains UF6 cylinders in a variety of orientations among a variety of distractor images. We demonstrate the Carlini & Wagner attack against a subset of Limbo images with a 100% attack success rate: every attacked image was misclassified by a highly accurate trained model, yet the changes to the images were imperceptible to the human eye. We also demonstrate successful attacks against segmented images (images containing more than one targeted object). Finally, we demonstrate a Fast Fourier Transform (FFT) countermeasure that can be used to detect AML attacks on images. The intent of this and our previous report is to inform the IAEA and its stakeholders of both the promise of machine learning, which could greatly improve the efficiency of surveillance monitoring, and the real threat of AML, along with potential defenses.
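
For context, the sketch below shows the core of an untargeted Carlini & Wagner L2 attack in PyTorch. The model interface, the constant c, the confidence margin kappa, and the optimizer settings are illustrative assumptions, not the configuration used in the report.

```python
import torch

def cw_l2_attack(model, x, label, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Minimal sketch of an untargeted Carlini & Wagner L2 attack.

    x: image batch scaled to [0, 1]; label: true class indices.
    c, kappa, steps, lr are illustrative, not the report's settings.
    """
    # Change of variables: x_adv = 0.5 * (tanh(w) + 1) keeps pixels in [0, 1].
    w = torch.atanh((2 * x - 1).clamp(-0.999999, 0.999999))
    w = w.detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)

        # f(x_adv): drive the true-class logit below the best other logit.
        true_logit = logits.gather(1, label.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, label.unsqueeze(1),
                                     float('-inf')).max(1).values
        f = torch.clamp(true_logit - other_logit, min=-kappa)

        # Minimize perturbation size plus the misclassification term.
        l2 = ((x_adv - x) ** 2).flatten(1).sum(1)
        loss = (l2 + c * f).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (0.5 * (torch.tanh(w) + 1)).detach()
```

The tanh change of variables lets the optimizer search unconstrained while every candidate image remains valid, and the explicit L2 penalty is why successful perturbations can remain imperceptible.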
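The abstract does not specify how the FFT countermeasure operates; the sketch below is one plausible form of such a detector, assuming adversarial perturbations add disproportionate energy at high spatial frequencies. The cutoff radius, baseline, and tolerance are illustrative values, not the report's parameters.

```python
import numpy as np

def high_freq_ratio(image, cutoff=0.25):
    """Fraction of spectral energy outside a low-frequency disc.

    image: 2-D grayscale array. cutoff: disc radius as a fraction
    of the smaller image dimension (illustrative value).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    energy = np.abs(spectrum) ** 2

    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w)
    low_freq = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

    return energy[~low_freq].sum() / energy.sum()

def looks_adversarial(image, baseline_ratio, tol=0.05):
    """Flag an image whose high-frequency energy exceeds a clean baseline.

    baseline_ratio would be estimated from known-clean images
    (e.g., the Limbo training set); tol is an illustrative margin.
    """
    return high_freq_ratio(image) > baseline_ratio + tol
```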