MASTER’S THESIS
BOOSTING ADVERSARIAL TRAINING IN ADVERSARIAL MACHINE LEARNING

Annotation
ABSTRACT 6
LIST OF ABBREVIATIONS 8
INTRODUCTION 12
1 Overview (Theoretical) 14
1.1 What is machine learning 14
1.2 Unsupervised learning 14
1.3 Supervised learning 15
1.3.1 Regression 15
1.3.2 Classification 16
1.4 Cost function 16
1.5 Gradient descent 17
1.5.1 Batch gradient descent (BGD) 17
1.5.2 Stochastic gradient descent (SGD) 18
1.6 Normal equation 19
1.7 Hyperparameters 19
1.7.1 Learning rate 19
1.7.2 Momentum 20
1.7.3 Batch size 20
1.7.4 Weight decay 20
1.7.5 Epochs 21
1.8 Datasets 21
1.8.1 MNIST 22
1.8.2 SVHN 22
1.8.3 CIFAR10 23
1.8.4 CIFAR100 23
1.8.5 ImageNet 24
1.9 Linear regression 24
1.10 Logistic regression 25
1.11 Neural network 27
1.11.1 Convolutional neural network (CNN) 28
1.12 Problems and solutions 29
1.12.1 Feature scaling 29
1.12.2 Mean normalization 30
1.12.3 Learning rate problems 30
1.12.4 Training problems 31
1.12.5 Random initialization 33
1.13 Adversarial Machine Learning 34
1.13.1 Evasion Attacks and Defenses 34
1.13.2 Data Poisoning and Backdoor Attack 35
1.13.3 Typical Adversarial Samples 35
2 Overview (Technical) 37
2.1 Programming languages 37
2.2 MATLAB 37
2.3 Python 37
2.4 TensorFlow 38
2.4.1 TensorBoard 38
2.5 PyTorch 39
2.6 Keras 39
3 Boosting Fast AT with Learnable Adversarial Initialization 40
3.1 Introduction 40
3.2 Related Works 42
3.2.1 Attack Methods 42
3.2.2 Adversarial Training Methods 43
3.3 The Proposed Method 45
3.3.1 Pipeline of the Proposed Method 45
3.3.2 Architecture of the Generative Network 46
3.3.3 Formulation of the Proposed Method 47
3.4 Experiments 49
3.4.1 Experimental Settings 49
3.4.2 Hyper-parameter Selection 50
3.4.3 Relieving Catastrophic Overfitting 51
3.4.4 Comparisons with State-of-the-art Methods 52
3.4.5 Performance Analysis 56
4 Smooth Adversarial Training 58
4.1 Introduction 58
4.2 Related Works 59
4.3 ReLU Worsens Adversarial Training 60
4.3.1 Adversarial Training setup 60
4.3.2 How Does Gradient Quality Affect Adversarial Training? 60
4.3.3 Can ReLU’s Gradient Issue Be Remedied? 62
4.4 Smooth Adversarial Training 63
4.4.1 Adversarial Training with Smooth Activation Functions 64
4.4.2 Ruling Out the Effect From x < 0 65
4.4.3 Case Study: Stabilizing Adversarial Training with ELU using CELU 65
4.5 Exploring the Limits of Smooth Adversarial Training 67
4.5.1 Scaling-up ResNet 67
4.5.2 SAT with EfficientNet 68
4.6 Sanity Tests for SAT 70
4.7 SAT on CIFAR10 71
CONCLUSION 72
Future work 72
ACKNOWLEDGEMENTS 73
REFERENCES 74

In this work, we first provide a brief overview of Machine Learning, then describe a new and vitally important area called Adversarial Machine Learning, before focusing on Adversarial Training as one of the most effective defenses against the problems in this field. We examine a recently proposed Adversarial Training algorithm and present our observations, then provide a comprehensive study of a new class of activation functions that improves the robustness of ML models. Finally, we propose a new research topic for the further development of this Adversarial Training method.
Soon after the first discussions of Adversarial Machine Learning, researchers and engineers began developing new attack algorithms as well as new defenses for ML models. Such attacks can have catastrophic consequences: attacking an autonomous vehicle, for example, can cause road signs to be misinterpreted and lead to serious accidents. All of these algorithms and methods demand substantial computational resources and time; most useful algorithms require hours or even days to train a model, and not everyone has access to such resources, nor are they free to use. Understanding the most efficient methods is therefore critical for reducing the time and hardware required for training.
The relevance of the research topic lies in the search for new methods that make Machine Learning models robust against different attacks. The object of the master's thesis is to find the optimal algorithms for improving the robustness of Machine Learning models. The subject of the research is the development of Adversarial Training techniques, currently the most effective methods for increasing the robustness of ML models.
The purpose of this research is to study Adversarial Machine Learning, and more precisely Adversarial Training in Computer Vision; to identify the Adversarial Training algorithm that, at the time of writing, offers the best trade-off between speed and robustness; and to suggest new research directions for future development.
The scope of the research covers the results of attacks conducted on well-known image classification datasets (CIFAR10, CIFAR100, Tiny ImageNet, ImageNet) using different attack methods.
To achieve the goals of this work, the following tasks were performed:
- Studying theoretical resources to become familiar with the concepts.
- Identifying existing research on the subject.
- Understanding the problems in the subject area.
- Identifying the outstanding solutions.
- Implementing the algorithms.
- Identifying the best available algorithm.
- Testing and tweaking the algorithm.
- Presenting the results of the successful implementations.
- Identifying the potential for further development.
- Suggesting new research directions for further development.

In the first part of this research, we investigated a proposed sample-dependent adversarial initialization for boosting Fast AT. To produce an effective initialization, a generative network conditioned on a benign image and its gradient information from the target network is used. The generative network and the target network are optimized jointly and play a game during the training phase: the former learns to produce a dynamic, sample-dependent initialization that yields stronger adversarial examples against the current target network, while the latter trains on the generated adversarial examples to improve its robustness. Compared with the random initialization methods widely used in Fast AT, the proposed initialization eliminates catastrophic overfitting and thereby improves model robustness. Extensive experimental results back up the advantages of the proposed method.
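To make the interplay between the two networks concrete, the following is a minimal PyTorch sketch of one training step of this idea. It is not the authors' reference implementation: the generator g is assumed to be a small convolutional network that accepts the image concatenated with its signed input gradient (twice the number of channels), and the step sizes, the tanh projection and the gradient-flipping minimax update are illustrative choices.

import torch
import torch.nn.functional as F

def fast_at_step(f, g, opt_f, opt_g, x, y, eps=8/255, alpha=8/255):
    # 1) Signed input gradient of the clean loss: the conditioning signal for g.
    x_req = x.clone().detach().requires_grad_(True)
    grad_sign = torch.autograd.grad(F.cross_entropy(f(x_req), y), x_req)[0].sign()

    # 2) The generator maps (image, gradient sign) to an initial perturbation in [-eps, eps].
    init = eps * torch.tanh(g(torch.cat([x, grad_sign], dim=1)))

    # 3) One FGSM step started from that initialization; the graph through `init`
    #    is kept so the generator still receives gradients later.
    x_init = torch.clamp(x + init, 0.0, 1.0)
    step = torch.autograd.grad(F.cross_entropy(f(x_init), y), x_init,
                               retain_graph=True)[0].sign()
    delta = torch.clamp(init + alpha * step, -eps, eps)
    x_adv = torch.clamp(x + delta, 0.0, 1.0)

    # 4) The game: the target network f descends on the adversarial loss,
    #    while g ascends on it and learns to propose stronger starting points.
    loss = F.cross_entropy(f(x_adv), y)
    opt_f.zero_grad(); opt_g.zero_grad()
    loss.backward()
    opt_f.step()
    for p in g.parameters():          # flip the sign of g's gradients to ascend
        if p.grad is not None:
            p.grad.neg_()
    opt_g.step()
    return loss.item()

Because the initialization is produced by a network rather than sampled at random, the starting point of the single FGSM step adapts to the current state of the target model, which is what keeps the attack strong enough to avoid catastrophic overfitting.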
In the second part of the research, we looked into a new concept called smooth adversarial training (SAT), which enforces architectural smoothness by replacing non-smooth activation functions with their smooth approximations during adversarial training. SAT improves adversarial robustness without compromising accuracy or incurring additional computational cost. Extensive experiments show that SAT is broadly effective. With EfficientNet-L1, SAT reports state-of-the-art adversarial robustness on ImageNet, exceeding the previous art by 9.5 percent in accuracy and 11.6 percent in robustness.
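As a minimal illustration of this idea (not the exact training recipe studied in the thesis), the sketch below swaps the ReLU activations of a standard torchvision ResNet-18 for a smooth surrogate (SiLU, one of several candidate activations) and runs a PGD-based adversarial training step on the smoothed model; the model, the PGD hyper-parameters and the choice of SiLU are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def make_smooth(module: nn.Module) -> nn.Module:
    # Recursively replace every nn.ReLU with a smooth activation,
    # so the inner maximization gets better-behaved gradients.
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.SiLU(inplace=False))
        else:
            make_smooth(child)
    return module

model = make_smooth(resnet18(num_classes=10))

def pgd_at_step(model, opt, x, y, eps=8/255, alpha=2/255, steps=7):
    # One adversarial-training step: PGD inner maximization, SGD outer minimization.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp(delta.detach() + alpha * grad.sign(), -eps, eps)
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

The attack loop itself is unchanged; the only modification is the activation swap, which is why SAT adds essentially no computational cost on top of ordinary adversarial training.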



