Essays about: "Adversarial Examples"

Showing results 11 - 15 of 15 essays containing the words Adversarial Examples.

  11. Explainable AI as a Defence Mechanism for Adversarial Examples

    University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author : Harald Stiff; [2019]

    Abstract : Deep learning is the gold standard for image classification tasks. With its introduction came many impressive improvements in computer vision, outperforming all of the earlier machine learning models.

  12. Generation of Synthetic Images with Generative Adversarial Networks

    University essay from Blekinge Tekniska Högskola/Institutionen för datalogi och datorsystemteknik

    Author : Mousa Zeid Baker; [2018]
    Keywords : classification; deep learning; generative adversarial network; machine learning;

    Abstract : Machine Learning is a fast-growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. Data is a key factor for machine learning algorithms; without data, the algorithms will not work.

  13. Robustness of a neural network used for image classification: The effect of applying distortions on adversarial examples

    University essay from Högskolan i Gävle/Datavetenskap

    Author : Rasmus Östberg; [2018]
    Keywords : LeNet; Distorted Images; MNIST; Adversarial Examples;

    Abstract : Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects like animals, people, or plain text. Distortions affect the neural network's ability to recognise images, which may be degraded or altered by distortions related to the camera.
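
    The camera-related distortions mentioned above can be illustrated with a short Python sketch (additive Gaussian noise and Gaussian blur); the specific distortions, parameters, and the assumption of a single-channel image in [0, 1] are illustrative choices, not taken from the essay itself.

        # Illustrative camera-like distortions: additive Gaussian noise and
        # Gaussian blur. `image` is assumed to be a float NumPy array of shape
        # (H, W) with values in [0, 1].
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def add_gaussian_noise(image, sigma=0.05, rng=None):
            # Add zero-mean Gaussian noise and keep pixel values in [0, 1].
            if rng is None:
                rng = np.random.default_rng()
            noisy = image + rng.normal(0.0, sigma, size=image.shape)
            return np.clip(noisy, 0.0, 1.0)

        def blur(image, sigma=1.0):
            # Gaussian blur as a rough stand-in for camera defocus.
            return gaussian_filter(image, sigma=sigma)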

  14. Behaviour of logits in adversarial examples: a hypothesis

    University essay from KTH/Skolan för datavetenskap och kommunikation (CSC)

    Author : Martin Svedin; Trolle Geuna; [2017]

    Abstract : It has been suggested that the existence of adversarial examples, i.e. slightly perturbed images that are classified incorrectly, implies that the theory that deep neural networks learn to identify a hierarchy of concepts does not hold, or that the network has not managed to learn the true underlying concepts.
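
    The logit behaviour in question can be inspected with a small, hypothetical PyTorch snippet; `model`, `clean`, and `perturbed` are assumed placeholders (a trained classifier, a clean input, and an adversarially perturbed copy computed elsewhere), not names from the essay.

        # Hypothetical comparison of logits for a clean image and an adversarially
        # perturbed copy of it.
        import torch

        def compare_logits(model, clean, perturbed):
            # Forward both inputs without tracking gradients.
            with torch.no_grad():
                clean_logits = model(clean)        # shape (1, num_classes)
                adv_logits = model(perturbed)
            print("clean prediction:", clean_logits.argmax(dim=1).item())
            print("adversarial prediction:", adv_logits.argmax(dim=1).item())
            print("largest logit shift:", (adv_logits - clean_logits).abs().max().item())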

  15. Methods for Increasing Robustness of Deep Convolutional Neural Networks

    University essay from Högskolan i Halmstad/Akademin för informationsteknologi

    Author : Matej Uličný; [2015]
    Keywords : adversarial examples; deep neural network; noise robustness;

    Abstract : Recent discoveries have uncovered flaws in machine learning algorithms such as deep neural networks. Deep neural networks seem vulnerable to small amounts of non-random noise, created by exploiting the input-to-output mapping of the network. Applying this noise to an input image drastically decreases classification performance.
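
    One common way such non-random noise is created by exploiting the input-to-output mapping is the fast gradient sign method (FGSM); the following minimal PyTorch sketch is illustrative rather than the essay's own method, with `model`, `image`, and `label` as assumed placeholders.

        # Minimal FGSM sketch: perturb an image in the direction that increases
        # the classification loss, bounded elementwise by epsilon. `model` is a
        # differentiable classifier, `image` a (1, C, H, W) tensor in [0, 1],
        # and `label` the true class index as a (1,) tensor.
        import torch
        import torch.nn.functional as F

        def fgsm_perturb(model, image, label, epsilon=0.03):
            image = image.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(image), label)
            loss.backward()
            # Step along the sign of the input gradient and clip to valid range.
            perturbed = image + epsilon * image.grad.sign()
            return perturbed.clamp(0.0, 1.0).detach()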