Title:
Adversarial Resilient and Privacy Preserving Deep Learning

Author(s)
Wei, Wenqi
Advisor(s)
Liu, Ling
Abstract
Deep learning is being deployed in the cloud and on edge devices for a wide range of domain-specific applications, from healthcare, cyber-manufacturing, and autonomous vehicles to smart cities and smart planet initiatives. While deep learning creates new opportunities for business, engineering, and scientific discoveries, it also introduces new attack surfaces to modern computing systems that incorporate deep learning as a core component for algorithmic decision making and cognitive machine intelligence. These attacks range from data poisoning and model inversion during the training phase to adversarial evasion attacks during the model inference phase, all aiming to cause a well-trained model to misbehave randomly or purposefully. This dissertation research addresses these problems with a dual focus. First, it aims to provide a fundamental understanding of the security and privacy vulnerabilities inherent in deep neural network training and inference. Second, it develops an adversarially resilient framework and a set of optimization techniques to safeguard deep learning systems, services, and applications against adversarial manipulations and gradient-leakage-induced privacy violations, while maintaining the accuracy and convergence performance of deep learning systems. This dissertation research makes three unique contributions towards advancing the knowledge and technological foundation for privacy-preserving deep learning with adversarial robustness against deception.

The first main contribution is an in-depth investigation into the security and privacy threats inherent in deep learning: gradient leakage attacks during both centralized and distributed training, model manipulation through data poisoning during model training, and deception queries to well-trained models at the inference phase in the form of adversarial examples and out-of-distribution inputs. By introducing a principled approach to investigating gradient leakage attacks and different attack optimization methods in both centralized model training and federated learning environments, we provide a comprehensive risk assessment framework for an in-depth analysis of the attack mechanisms and attack surfaces that an adversary may leverage to reconstruct private training data. Similarly, we take a holistic approach to creating an in-depth understanding of both adversarial examples and out-of-distribution examples in terms of their adversarial transferability and their inherent divergence. We also present a comprehensive study of data poisoning to reveal its effectiveness and the behavior of robust statistics under the complex scenarios of federated learning. Our research exposes the root causes of these adversarial vulnerabilities and offers guidance for designing mitigation strategies and effective countermeasures.

The second main contribution of this dissertation is a cross-layer strategic ensemble verification methodology (XEnsemble) for enhancing the adversarial robustness of DNN model inference in the presence of adversarial examples and out-of-distribution examples. XEnsemble by design has three unique capabilities: (i) it builds diverse input denoising verifiers by leveraging different data cleaning techniques; (ii) it develops a disagreement-diversity ensemble learning methodology for guarding the output of the prediction model against deception; and (iii) it provides a suite of algorithms that combine input verification and output verification to protect DNN prediction models from both adversarial examples and out-of-distribution inputs.

The third contribution is the development of gradient-leakage-attack-resilient deep learning for both centralized and distributed model training systems with privacy-enhancing optimizations. To counter gradient leakage attacks, we investigate different strategies for adding noise to the intermediate model parameter updates during model training (centralized or federated learning) with dual optimization goals: (i) the noise added should be sufficient to remove the privacy leakage of private training data, and (ii) the noise added should not be so large as to hurt the overall accuracy and convergence of the trained model. We provide a theoretical formalization to certify the robustness offered by differentially private noise injection against gradient leakage attacks. We also extend the conventional deep learning with differential privacy approach, which uses fixed privacy parameters for DP-controlled noise injection, by introducing adaptive privacy parameters to both centralized and federated deep learning with differential privacy.
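For illustration, the following is a minimal sketch of an optimization-based gradient leakage (gradient inversion) attack of the kind referenced in the first contribution, assuming white-box access to the model and a single captured per-example gradient. The toy model, LBFGS settings, and variable names are illustrative assumptions, not the dissertation's code.

```python
# Sketch of an optimization-based gradient leakage attack: given a model and a
# captured gradient, recover a dummy input/label pair whose gradient matches it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy victim model
criterion = nn.CrossEntropyLoss()

# The "leaked" gradient an honest-but-curious server might observe.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker jointly optimizes dummy data and a soft label to match the leaked gradient.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(50):
    def closure():
        optimizer.zero_grad()
        loss = torch.mean(torch.sum(-torch.softmax(y_dummy, dim=-1)
                                    * torch.log_softmax(model(x_dummy), dim=-1), dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("reconstruction error:", torch.norm(x_dummy.detach() - x_true).item())
```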
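The cross-layer input-output verification idea behind the second contribution can be sketched as follows. The specific denoisers (bit-depth reduction, median smoothing) and the simple majority-disagreement rejection rule are hypothetical stand-ins for XEnsemble's verifiers, shown only to make the mechanism concrete.

```python
# Sketch of ensemble-style input verification: run the model on several denoised
# copies of the input and flag inputs whose predictions disagree too much.
import torch
import torch.nn.functional as F

def bit_depth_reduce(x, bits=4):
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    # Median filtering via unfold over k x k neighborhoods.
    pad = k // 2
    patches = F.unfold(F.pad(x, (pad, pad, pad, pad), mode="reflect"), kernel_size=k)
    med = patches.view(x.size(0), x.size(1), k * k, -1).median(dim=2).values
    return med.view_as(x)

def xensemble_predict(model, x, denoisers, reject_threshold=1):
    """Return per-example predictions, or -1 when the denoised copies disagree too much."""
    with torch.no_grad():
        preds = torch.stack([model(d(x)).argmax(dim=1) for d in denoisers])  # (V, B)
    majority = preds.mode(dim=0).values
    disagreements = (preds != majority).sum(dim=0)
    majority[disagreements > reject_threshold] = -1  # flag as adversarial / out-of-distribution
    return majority

# usage with a hypothetical trained `model` and batch `x` of images in [0, 1]:
# verdict = xensemble_predict(model, x, [lambda t: t, bit_depth_reduce, median_smooth])
```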
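Finally, a minimal sketch of DP-controlled noise injection on a client's model update in federated learning, with a round-adaptive noise multiplier standing in for the adaptive privacy parameters described above. The clipping bound, noise scale, and decay schedule are assumptions for illustration, not the dissertation's parameterization.

```python
# Sketch of differentially private sanitization of a model update: clip the update
# to a fixed norm, then add Gaussian noise calibrated to the clipping bound.
import torch

def dp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the flattened update to `clip_norm` and add Gaussian noise scaled to it."""
    flat = torch.cat([p.flatten() for p in update])
    scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
    noisy = flat * scale + torch.randn_like(flat) * noise_multiplier * clip_norm
    # Unflatten back into the original parameter shapes.
    out, offset = [], 0
    for p in update:
        out.append(noisy[offset:offset + p.numel()].view_as(p))
        offset += p.numel()
    return out

def adaptive_noise_multiplier(base=1.1, round_idx=0, decay=0.98):
    """Illustrative adaptive privacy parameter: shrink the noise scale across rounds."""
    return base * (decay ** round_idx)

# usage with a hypothetical list of per-layer update tensors `client_update` at round t:
# sanitized = dp_sanitize_update(client_update,
#                                noise_multiplier=adaptive_noise_multiplier(round_idx=t))
```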
Date Issued
2022-04-19
Resource Type
Text
Resource Subtype
Dissertation