Detecting Images Outside Training Distribution for Fingerprint Spoof Detection

University essay from Lunds universitet/Matematik LTH

Abstract: Artificial neural networks are known to run into issues when given samples that deviate from the training distribution, where the network may confidently provide an incorrect answer. Out-of-distribution detection methods aim to address this issue by detecting data that deviates from the distribution used to train the model. This thesis examines the possibility of using out-of-distribution detection methods in a more challenging context, where the data is more semantically similar to the training distribution than is common in the literature. Three out-of-distribution detection methods are evaluated on separating fingerprint images from out-of-distribution images: spoof molds of fingerprints, fingerprints generated using GANs, and non-finger images. On the non-finger dataset, all methods give promising results in line with the literature. None of the methods, however, are able to separate the more challenging spoof mold dataset from the in-distribution images. On the GAN-generated dataset the methods provide somewhat encouraging results, though performance is significantly lower than on the semantically different non-finger dataset.
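The abstract does not name the three detection methods. A common baseline in the out-of-distribution detection literature is the maximum softmax probability (MSP) score of Hendrycks & Gimpel (2017): a sample is flagged as out-of-distribution when the classifier's most confident class probability falls below a threshold. The sketch below is illustrative only, not the thesis's implementation; the logits array and the threshold value are hypothetical placeholders.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: high when the model is confident
    # (likely in-distribution), lower when it is unsure.
    return softmax(logits).max(axis=-1)

# Hypothetical logits for two inputs; in practice these would come
# from a trained fingerprint classifier.
logits = np.array([[8.0, 0.5, 0.3],
                   [1.1, 0.9, 1.0]])
scores = msp_score(logits)

threshold = 0.9  # hypothetical; in practice chosen on validation data
for s in scores:
    label = "in-distribution" if s >= threshold else "out-of-distribution"
    print(f"MSP score {s:.3f} -> {label}")
```

A limitation consistent with the thesis's findings is that confidence-based scores like MSP work well on semantically distant inputs (e.g. non-finger images) but struggle when out-of-distribution samples, such as spoof molds, closely resemble the training data.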
