#136 DeepAuth: Protecting Integrity of Deep Neural Networks using Wavelet-based Steganography


Rejected

[PDF] Submission (1.6MB) Sep 14, 2019, 8:53:48 AM UTC · bf2474b220f711ff3771a63cd70165c396487eaea616d81c8a07180e3ef36342

Training Deep Neural Networks (DNNs) is expensive in terms of both computational power and the amount of training data required. This has led to an emerging business trend in which pre-trained models are treated as first-class objects and traded as services or commodities in Internet marketplaces such as Machine Learning as a Service (MLaaS). However, researchers have shown that such pre-trained models are vulnerable to various security threats, such as poisoning attacks. It is therefore essential to protect the integrity and authenticity of these models in the emerging marketplace. The challenge is to derive a signature of a pre-trained model and embed it while preserving the model's accuracy; the embedded signature must also be securely concealed within the model and verifiable at all times. To address these challenges, this paper proposes DeepAuth, a novel wavelet-based steganographic technique. DeepAuth generates the signature of a DNN model from its structural information. To maintain model accuracy, DeepAuth applies a wavelet transform to the weights of each layer, mapping them from the spatial domain to the frequency domain. It leaves the approximation coefficients untouched to preserve the model's accuracy and hides the signature in the detail coefficients using a combination of a secret key and a scrambling vector. Our analysis shows that DeepAuth can hide a signature of about 1 KB in each layer at a 256-bit security level without any impact on the accuracy of the model. Experiments are performed on three pre-trained models (ResNet18, VGG16, and AlexNet) using three datasets (MNIST, CIFAR-10, and ImageNet) against three types of manipulation attacks (input poisoning, output poisoning, and fine-tuning). The results demonstrate that DeepAuth is verifiable at all times without degrading classification accuracy and is robust against a multitude of known DNN manipulation attacks.
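
The embedding pipeline the abstract describes (wavelet transform of a layer's weights, approximation coefficients left intact, signature hidden in the detail coefficients under a secret key and scrambling vector) can be illustrated with a short sketch. The Python code below is an assumption-laden illustration, not the paper's implementation: the Haar wavelet, the LSB-of-quantized-coefficient embedding rule, the quantization step, the SHA-256 signature derivation, and the use of a key-seeded permutation as the scrambling vector are all choices made here for concreteness.

```python
# Hypothetical sketch of a DeepAuth-style embedding step; all concrete
# choices (wavelet family, embedding rule, key handling) are assumptions.
import hashlib
import numpy as np
import pywt  # PyWavelets


def derive_signature(model_structure: str) -> np.ndarray:
    """Derive a 256-bit signature from a textual description of the
    model's structure (the abstract says structural information is used;
    SHA-256 is an assumed instantiation)."""
    digest = hashlib.sha256(model_structure.encode()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 bits


def embed_in_layer(weights: np.ndarray, bits: np.ndarray, key: int,
                   step: float = 1e-3) -> np.ndarray:
    """Hide `bits` in the detail coefficients of one layer's weights.

    Approximation coefficients are left untouched to preserve accuracy;
    a key-seeded permutation plays the role of the scrambling vector.
    """
    flat = weights.ravel().astype(np.float64)
    cA, cD = pywt.dwt(flat, 'haar')          # spatial -> frequency domain

    rng = np.random.default_rng(key)          # scrambling vector (assumed form)
    positions = rng.permutation(cD.size)[:bits.size]

    # Quantize each selected detail coefficient and force its LSB to the
    # signature bit; the small step bounds the weight perturbation.
    q = np.round(cD[positions] / step).astype(np.int64)
    q = (q & ~1) | bits.astype(np.int64)
    cD[positions] = q * step

    stego = pywt.idwt(cA, cD, 'haar')[:flat.size]
    return stego.reshape(weights.shape).astype(weights.dtype)


# Usage on a toy layer: embed the signature and check the perturbation.
layer = np.random.randn(64, 64).astype(np.float32)
sig = derive_signature("ResNet18/layer1.conv1: 64x64")  # hypothetical label
marked = embed_in_layer(layer, sig, key=0x5EC7)
print("max weight change:", np.abs(marked - layer).max())
```

Verification in this sketch would repeat the forward transform with the same key, read the LSBs of the quantized detail coefficients at the same permuted positions, and compare them against the re-derived signature; the paper's actual embedding rule, wavelet family, and verification protocol may differ.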

A. Abuadbba, H. Kim, S. Nepal

  • Security and privacy of systems based on machine learning and AI
