Essays about: "Adversarial Examples"
Showing results 11–15 of 15 essays containing the words "Adversarial Examples".
-
11. Explainable AI as a Defence Mechanism for Adversarial Examples
University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Deep learning is the gold standard for image classification tasks. Its introduction brought many impressive improvements in computer vision, outperforming all earlier machine learning models.
-
12. Generation of Synthetic Images with Generative Adversarial Networks
University essay from Blekinge Tekniska Högskola/Institutionen för datalogi och datorsystemteknik. Abstract: Machine learning is a fast-growing area that revolutionizes computer programs by giving systems the ability to learn and improve automatically from experience. In most cases, the training process begins with extracting patterns from data. Data is a key factor for machine learning algorithms; without it, the algorithms will not work.
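The adversarial training loop behind generative adversarial networks can be sketched in miniature. The following is a hypothetical one-dimensional toy, not the thesis's actual image model: the generator is a simple affine map of noise, the discriminator a single logistic unit, and the target data a Gaussian; all names and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def real_batch(n):
    # "real" data for the toy: samples from N(3, 0.5)
    return rng.normal(3.0, 0.5, size=n)

# generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(size=n)
    fake = a * z + b
    real = real_batch(n)

    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((dr - 1) * real) + np.mean(df * fake)
    gc = np.mean(dr - 1) + np.mean(df)
    w -= lr * gw
    c -= lr * gc

    # generator step (non-saturating loss): push D(fake) toward 1;
    # d(-log D(fake))/dfake = (df - 1) * w, chained through fake = a*z + b
    df = sigmoid(w * fake + c)
    ga = np.mean((df - 1) * w * z)
    gb = np.mean((df - 1) * w)
    a -= lr * ga
    b -= lr * gb

# the generator's mean parameter b should drift toward the real mean (~3)
print(b)
```

The alternating gradient steps are the essence of the adversarial game: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it, pulling the generated distribution toward the data.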
-
13. Robustness of a neural network used for image classification : The effect of applying distortions on adversarial examples
University essay from Högskolan i Gävle/Datavetenskap. Abstract: Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects like animals, people, or plain text. Distortions, for example those introduced by the camera, affect a neural network's ability to recognise images.
-
14. Behaviour of logits in adversarial examples: a hypothesis
University essay from KTH/Skolan för datavetenskap och kommunikation (CSC). Abstract: It has been suggested that the existence of adversarial examples, i.e. slightly perturbed images that are classified incorrectly, implies either that the theory that deep neural networks learn to identify a hierarchy of concepts does not hold, or that the network has not managed to learn the true underlying concepts.
-
15. Methods for Increasing Robustness of Deep Convolutional Neural Networks
University essay from Högskolan i Halmstad/Akademin för informationsteknologi. Abstract: Recent discoveries have uncovered flaws in machine learning algorithms such as deep neural networks. Deep neural networks seem vulnerable to small amounts of non-random noise, created by exploiting the network's input-to-output mapping. Applying this noise to an input image drastically decreases classification performance.
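The noise this abstract describes, constructed by exploiting the input-to-output mapping, can be illustrated with a gradient-sign perturbation in the spirit of FGSM. The sketch below is a hypothetical toy: a single logistic unit with random weights stands in for a trained deep network, and the input, label, and `eps` value are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "network": one logistic unit with fixed (randomly chosen) weights
w = rng.normal(size=8)
b = 0.0

def predict(x):
    # sigmoid output: probability assigned to class 1
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def loss_grad_wrt_input(x, y):
    # gradient of the binary cross-entropy loss with respect to the INPUT:
    # d(loss)/dz = p - y, and dz/dx = w
    return (predict(x) - y) * w

x = rng.normal(size=8)                # a clean input
y = 1.0 if predict(x) > 0.5 else 0.0  # take the model's own label as ground truth

eps = 0.5
# gradient-sign perturbation: small per-pixel noise that increases the loss
x_adv = x + eps * np.sign(loss_grad_wrt_input(x, y))

# the perturbed input is pushed away from the assigned label
print(predict(x), predict(x_adv))
```

Stepping each input dimension in the sign of the loss gradient is what makes such small, non-random perturbations so damaging: every component of the noise contributes to increasing the loss at once.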