On the Security of Randomized Defenses Against Adversarial Samples

(From the abstract)

Deep Learning has been shown to be particularly vulnerable to adversarial samples. To combat adversarial strategies, numerous defensive techniques have been proposed. Among these, a promising approach is to use randomness in order to make the classification process unpredictable and presumably harder for the adversary to control. In this paper, we study the effectiveness of randomized defenses against adversarial samples. To this end, we categorize existing state-of-the-art adversarial strategies into three attacker models of increasing strength, namely blackbox, graybox, and whitebox (a.k.a. adaptive) attackers. We also devise a lightweight randomization strategy for image classification based on feature squeezing, which consists of pre-processing the classifier input by embedding randomness within each feature before applying feature squeezing.
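To give a flavor of this randomize-then-squeeze pre-processing, here is a minimal sketch. It assumes bit-depth reduction as the squeezer and uniform per-pixel noise of half a quantization step; both are illustrative choices, not necessarily the exact construction or parameters used in the paper.

```python
import numpy as np

def randomized_squeeze(x, bit_depth=4, noise_scale=None, rng=None):
    """Randomize-then-squeeze pre-processing (illustrative sketch).

    x           : input image as floats in [0, 1]
    bit_depth   : target bit depth for the squeezer (assumed value)
    noise_scale : amplitude of the uniform per-feature noise; defaults
                  to half a quantization step (assumed choice)
    """
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bit_depth - 1
    if noise_scale is None:
        noise_scale = 0.5 / levels
    # Embed randomness within each feature (pixel) independently.
    noisy = x + rng.uniform(-noise_scale, noise_scale, size=x.shape)
    noisy = np.clip(noisy, 0.0, 1.0)
    # Feature squeezing: reduce the color bit depth (Xu et al.).
    return np.round(noisy * levels) / levels
```

Since fresh noise is drawn on every query (e.g., `model.predict(randomized_squeeze(image))`), the adversary cannot predict which quantization level each feature will land on, while benign inputs far from level boundaries are largely unaffected.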

We evaluate the proposed defense and compare it to other randomized techniques in the literature via thorough experiments. Our results indeed show that careful integration of randomness can be effective against both graybox and blackbox attacks without significantly degrading the accuracy of the classifier.

This article has been accepted at the 15th ACM Asia Conference on Computer and Communications Security (ASIA CCS ’20), June 1–5, 2020.

Kumar Sharad, Giorgia Azzurra Marson, Hien Thi Thu Truong and Ghassan Karame

NEC Laboratories Europe
