Deep neural networks are used in a wide range of applications such as spam filtering, fraud detection, and medical information processing. However, adversarial attacks and backdoor attacks threaten the security of these systems.
The former fools a victim model with adversarial examples, inputs maliciously crafted by perturbing the original model input. The latter implants a hidden backdoor during training so that the model produces attacker-specified outputs on inputs embedded with pre-designed triggers, while behaving normally on clean inputs.
A recent paper investigates text style transfer as the basis for textual adversarial and backdoor attacks. For adversarial attacks, an original input is iteratively transformed into multiple text styles until the victim model's prediction flips. For backdoor attacks, a portion of the training samples is transformed into a pre-selected trigger style and used to poison the victim model's training data. The attack success rates exceed 90%, revealing that existing NLP models cannot properly handle the feature of text style when facing security threats.
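To make the adversarial side concrete, here is a minimal sketch of the iterative style-transfer attack loop described above. The names `style_transfer` and `victim_predict` are hypothetical placeholders, standing in for a pretrained style-transfer paraphraser (the paper uses STRAP) and the target classifier; the style list and iteration budget are illustrative, not the paper's exact settings.

```python
# Minimal sketch: restyle an input until the victim model's prediction flips.
# `style_transfer` and `victim_predict` are hypothetical placeholders.

STYLES = ["bible", "shakespeare", "lyrics", "poetry", "tweets"]

def style_transfer(text: str, style: str) -> str:
    """Paraphrase `text` into the given style (hypothetical placeholder)."""
    raise NotImplementedError

def victim_predict(text: str) -> int:
    """Return the victim model's predicted label (hypothetical placeholder)."""
    raise NotImplementedError

def style_adversarial_attack(text: str, true_label: int, max_iters: int = 3):
    """Iteratively restyle the input until the victim's prediction flips."""
    for style in STYLES:
        candidate = text
        for _ in range(max_iters):
            candidate = style_transfer(candidate, style)
            if victim_predict(candidate) != true_label:
                return candidate, style  # adversarial example found
    return None, None  # attack failed within the iteration budget
```

Because style is task-irrelevant, the restyled candidate keeps the original meaning (and thus its true label) while its surface form moves away from what the victim model learned.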
From the paper's abstract: Adversarial attacks and backdoor attacks are two common security threats that hang over deep learning. Both of them harness task-irrelevant features of data in their implementation. Text style is a feature that is naturally irrelevant to most NLP tasks, and thus suitable for adversarial and backdoor attacks. In this paper, we make the first attempt to conduct adversarial and backdoor attacks based on text style transfer, which aims to alter the style of a sentence while preserving its meaning. We design an adversarial attack method and a backdoor attack method, and conduct extensive experiments to evaluate them. Experimental results show that popular NLP models are vulnerable to both adversarial and backdoor attacks based on text style transfer: the attack success rates can exceed 90% without much effort. This reflects the limited ability of NLP models to handle the feature of text style, which has not been widely recognized. In addition, the style transfer-based adversarial and backdoor attack methods show superiority to baselines in many aspects. All the code and data of this paper can be obtained at the URL given in the paper.
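The backdoor side of the method amounts to poisoning the training data. Below is a minimal sketch of that step, reusing the hypothetical `style_transfer` placeholder from the sketch above; the trigger style and poisoning rate are illustrative values, not the paper's settings.

```python
# Minimal sketch: poison a fraction of the training set by restyling samples
# into the trigger style and relabeling them with the attacker's target label.
import random

def style_transfer(text: str, style: str) -> str:
    """Paraphrase `text` into the given style (hypothetical placeholder)."""
    raise NotImplementedError

def poison_dataset(dataset, target_label, trigger_style="bible", poison_rate=0.1):
    """Restyle a random fraction of (text, label) pairs and relabel them.

    A model trained on the returned set behaves normally on clean inputs
    but predicts `target_label` on inputs written in the trigger style.
    """
    poisoned = list(dataset)
    n_poison = int(poison_rate * len(dataset))
    for i in random.sample(range(len(dataset)), n_poison):
        text, _ = dataset[i]
        poisoned[i] = (style_transfer(text, trigger_style), target_label)
    return poisoned
```

Unlike word- or character-level triggers, the trigger here is the style of the whole sentence, which makes the poisoned samples fluent and hard to filter out by inspection.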
Research paper: Qi, F., Chen, Y., Zhang, X., Li, M., Liu, Z., and Sun, M., “Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer”, 2021. Link: https://arxiv.org/abs/2110.07139