A General Approach to Inaudible Adversarial Perturbations in a Black-box Setting
Abstract: Deep learning is currently being deployed in many speech recognition systems. While these systems can achieve state-of-the-art performance, they are known to be susceptible to adversarial perturbations: minor perturbations to the input data, crafted specifically to cause erroneous behavior from the system. Some previous work has put effort into placing the perturbations in accordance with psychoacoustics, i.e., placing the perturbations in regions of a signal where human perception is limited. In this work, a general method for optimizing perturbations according to psychoacoustics is presented. The formulation allows a non-gradient-based optimization strategy to be implemented. Two greedy optimization algorithms are developed using the proposed method. Inaudible perturbations are shown to be ineffective, which conforms with the current academic understanding. However, when the perturbations are allowed to be 18 dB stronger than the psychoacoustically defined perceptual limit, a targeted success rate of 64% and an untargeted success rate of 87% are achieved on a keyword spotting task.
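To make the abstract's setup concrete, the following is a minimal sketch of a greedy, black-box (non-gradient) attack in which a per-sample perturbation budget is derived from a psychoacoustic masking threshold plus a dB slack (e.g., the 18 dB mentioned above). The names `greedy_masked_attack`, `threshold`, and `loss_fn` are illustrative assumptions, not the actual algorithms developed in the work:

```python
import numpy as np

def greedy_masked_attack(x, threshold, loss_fn, slack_db=0.0,
                         n_iters=200, rng=None):
    """Greedy black-box attack sketch.

    x         : clean signal (1-D numpy array)
    threshold : per-sample psychoacoustic masking threshold (same shape as x),
                assumed precomputed by a masking model
    loss_fn   : black-box scalar objective; higher = closer to attack goal
    slack_db  : how many dB the perturbation may exceed the threshold
                (0 dB = nominally inaudible)
    """
    rng = rng or np.random.default_rng(0)
    # Convert the dB slack into a linear amplitude factor on the bound.
    bound = threshold * 10.0 ** (slack_db / 20.0)
    delta = np.zeros_like(x)
    best = loss_fn(x)
    for _ in range(n_iters):
        # Propose a random local change, then clip back inside the
        # psychoacoustic budget so the constraint always holds.
        cand = np.clip(delta + rng.normal(0.0, 1.0, size=x.shape) * bound * 0.1,
                       -bound, bound)
        score = loss_fn(x + cand)
        if score > best:          # greedy: keep only improving proposals
            best, delta = score, cand
    return x + delta, best
```

Because only black-box evaluations of `loss_fn` are used, no gradients of the recognizer are required, matching the non-gradient-based strategy described in the abstract.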