A comparison of CNN and DBN image classification models under adversarial conditions
LE3 .A278 2020
2020
Silver, Danny
Acadia University
Bachelor of Science
Honours
Computer Science
We investigate and compare the ability of Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) to withstand several common attacks intended to limit their performance. We propose that a CNN makes a strong inductive bias assumption about the relationship between proximal pixels, and that this assumption makes it vulnerable to adversarial attacks. We implement several attacks using the MNIST and CIFAR-10 datasets, modifying pixels of the test images in different ways to challenge the CNN and DBN models. Each experiment is run multiple times to develop and test models, and the accuracy of the models under each attack is analyzed. The results show that DBN models generally perform better under attack than CNN models. When the assumed relationship between neighbouring pixels is removed, the advantage of a CNN's convolutional inductive bias no longer exists.
The author retains copyright in this thesis. Any substantial copying or any other actions that exceed fair dealing or other exceptions in the Copyright Act require the permission of the author.
https://scholar.acadiau.ca/islandora/object/theses:3527