Dissertations, Theses, and Capstone Projects
Date of Degree
9-2022
Document Type
Dissertation
Degree Name
Ph.D.
Program
Computer Science
Advisor
Michael I. Mandel
Committee Members
Rivka Levitan
Ioannis Stamos
Andrew Rosenberg
Subject Categories
Artificial Intelligence and Robotics | Data Science | Other Computer Sciences
Keywords
Machine Learning, Adversarial, Neural Network
Abstract
In this work, I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture for artificial neural networks aimed at protecting against adversarial attacks.
Since 2014, artificial neural networks have been known to be vulnerable to adversarial attacks, which can fool a network into producing wrong or nonsensical outputs through alterations to its inputs that are imperceptible to humans. While defenses against adversarial attacks have been proposed, they usually involve retraining a new neural network from scratch, which is costly.
My work aims to:
- easily convert existing models to the Finite Gaussian Neuron architecture (see the illustrative sketch after this list),
- preserve the existing model's behavior on real data,
- and offer resistance against adversarial attacks.
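As a rough illustration only (the exact FGN formulation is given in the dissertation and the linked repository), the sketch below assumes an FGN-style layer that multiplies a standard linear response by a Gaussian envelope centered on learned per-neuron centers, so that activations fall toward zero far from the data the network was trained on. The class name `FiniteGaussianLayer` and the parameters `centers` and `sigma` are illustrative assumptions, not the repository's API.

```python
import math
import torch
import torch.nn as nn

class FiniteGaussianLayer(nn.Module):
    """Illustrative sketch only (assumed class and parameter names): a linear
    layer whose response is damped by a Gaussian envelope, so activations fall
    toward zero far from the learned centers. The exact FGN definition is given
    in the dissertation and the linked repository."""

    def __init__(self, in_features, out_features, init_sigma=1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Per-neuron Gaussian centers and (log) width, both learned.
        self.centers = nn.Parameter(torch.zeros(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features,), math.log(init_sigma)))

    def forward(self, x):
        # Classical linear response of each neuron.
        lin = self.linear(x)
        # Squared distance of the input to each neuron's center.
        dist_sq = ((x.unsqueeze(1) - self.centers.unsqueeze(0)) ** 2).sum(dim=-1)
        # Gaussian envelope: nearby inputs pass through, distant inputs are damped,
        # which is what lets the network lower its confidence away from the data.
        envelope = torch.exp(-dist_sq / (2.0 * torch.exp(self.log_sigma) ** 2))
        return lin * envelope
```

Under this reading, converting an existing model would amount to swapping its linear layers for FGN-style layers, copying over the trained weights, and then fitting the centers and widths; again, this is only one plausible reading of the conversion step described above.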
I show that converted and retrained Finite Gaussian Neural Networks (FGNNs) always have lower confidence (i.e., are not overconfident) in their predictions on randomized and Fast Gradient Sign Method adversarial images than classical neural networks, while maintaining high accuracy and confidence on real MNIST images.
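The Fast Gradient Sign Method is the standard single-step attack that perturbs an input by epsilon times the sign of the loss gradient with respect to that input. The minimal PyTorch sketch below illustrates that general recipe only and is not taken from the dissertation's repository; the function name `fgsm_attack` and the [0, 1] pixel range are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Fast Gradient Sign Method: one gradient-sign step of size epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel in the direction that increases the loss.
    adv = images + epsilon * images.grad.sign()
    # Keep pixel values in a valid [0, 1] range (assumes inputs scaled to [0, 1]).
    return adv.clamp(0.0, 1.0).detach()
```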
To further validate the capacity of Finite Gaussian Neurons to protect from adversarial attacks, I compare the behavior of FGNs to that of Bayesian Neural Networks against both randomized and adversarial images, and show how the behavior of the two architectures differs.
Finally, I show some limitations of the FGN models by testing them on the more complex SPEECHCOMMANDS task and against the stronger Carlini-Wagner and Projected Gradient Descent adversarial attacks.
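Projected Gradient Descent is the standard iterated variant of FGSM under an L-infinity budget: repeated small gradient-sign steps, each followed by projection back into the epsilon-ball around the original input. The sketch below illustrates that general form only; the step size `alpha`, iteration count, and [0, 1] pixel range are assumptions, and the optimization-based Carlini-Wagner attack is not sketched here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon, alpha=0.01, steps=40):
    """Projected Gradient Descent under an L-infinity budget of epsilon."""
    original = images.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Gradient-sign step, then project back into the epsilon-ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = original + (adv - original).clamp(-epsilon, epsilon)
        # Keep pixel values in a valid [0, 1] range (assumed input scaling).
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```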
The code used for this work, implemented in PyTorch, is available at https://github.com/grezesf/FGN---Research under the GPL 3.0 open-source license.
Recommended Citation
Grezes, Felix, "Finite Gaussian Neurons: Defending Against Adversarial Attacks by Making Neural Networks Say "I Don’t Know"" (2022). CUNY Academic Works.
https://academicworks.cuny.edu/gc_etds/5129
Dissertation code repository at time of deposit