Investigating Robustness of DNNs

From ISLAB/CAISR
Revision as of 08:34, 17 December 2014 by Jens

Title Investigating Robustness of DNNs
Summary This master thesis project aims at characterizing the sensitivity of image classification based on deep neural networks.
Keywords deep neural networks, robustness
TimeFrame Spring 2015
References Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).

Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507.

Hinton, Geoffrey E. "Learning multiple layers of representation." Trends in cognitive sciences 11.10 (2007): 428-434.

Nguyen, Anh, Jason Yosinski, and Jeff Clune. "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images." arXiv preprint arXiv:1412.1897 (2014).

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.

Prerequisites Learning Systems, Data Mining, Parallel programming
Author
Supervisor Jens Lundström, Stefan Byttner
Level Master
Status Open


Deep Neural Networks (DNNs) have gained much interest during the last years. Among many successful applications, DNNs have shown outstanding performance in the task of learning feature representations and classifying images. A state-of-the-art, highly accurate neural network with 60 million parameters and 650,000 neurons, trained to classify 1.2 million images, was developed by Krizhevsky et al. (2012). However, recent findings reveal delicate difficulties regarding noise robustness in DNNs (Szegedy et al., 2013). This master thesis project aims at two related studies. Firstly, the student will investigate how images that are meaningless to humans are classified with high confidence by DNNs, as reported in other studies (Nguyen et al., 2014). Secondly, the student will investigate DNN misclassifications of images with small perturbations that are not visible to humans. Moreover, the student is encouraged to apply image preprocessing methods in order to increase classification accuracy.
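The second question can be illustrated with a minimal numpy sketch. A toy 2-pixel "image" and a linear softmax classifier stand in for a trained DNN (both assumed here purely for illustration), and a small sign-of-gradient step in the direction that increases the loss flips the predicted class, in the spirit of the adversarial-example literature; this is not necessarily the construction the thesis will use.

```python
import numpy as np

# Toy stand-in for a trained network (assumed for illustration):
# a linear classifier with softmax output, where class k's logit
# is simply pixel k.
W = np.eye(2)
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

x = np.array([0.6, 0.4])              # "image", classified as class 0
orig = int(np.argmax(predict(x)))     # -> 0

# Gradient of the cross-entropy loss for the predicted class w.r.t.
# the input: for softmax, d loss / d x = W.T @ (p - y).
p = predict(x)
y = np.array([1.0, 0.0])
grad = W.T @ (p - y)

# Small sign-of-gradient step: each pixel moves by at most eps.
eps = 0.15
x_adv = x + eps * np.sign(grad)       # roughly [0.45, 0.55]
adv = int(np.argmax(predict(x_adv)))  # -> 1: the label flips

print("original class:", orig)
print("perturbed class:", adv)
print("max pixel change:", np.abs(x_adv - x).max())
```

In a real DNN the perturbation is spread over thousands of pixels, so a per-pixel change this small is imperceptible to a human observer while still changing the prediction.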

Four work packages are suggested:

1. Background study on DNNs and related research.

2. Practical tests of DNNs on medium-sized datasets.

3. Investigation of distorted (meaningless) images classified with high confidence.

4. Investigation of misclassifications of images with small perturbations not visible to humans.
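The idea behind work package 3 can be sketched in a few lines of numpy: start from a meaningless input (here all zeros) and run gradient ascent on one class's confidence until the classifier is almost certain. The linear softmax "network" and all sizes below are assumptions made for this sketch only; Nguyen et al. (2014) do the analogous optimization against real DNNs.

```python
import numpy as np

# Toy linear softmax classifier standing in for a DNN
# (weights are random; assumed for illustration only).
rng = np.random.default_rng(42)
n_classes, n_pixels = 3, 16
W = rng.normal(size=(n_classes, n_pixels))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

target = 0
y = np.zeros(n_classes)
y[target] = 1.0

# Start from a "meaningless" all-zero image and ascend the
# gradient of log p[target] (which is W.T @ (y - p)) until the
# classifier assigns very high confidence to the target class.
x = np.zeros(n_pixels)
for _ in range(500):
    p = softmax(W @ x)
    if p[target] > 0.99:
        break
    x += 0.1 * (W.T @ (y - p))

print("target-class confidence:", softmax(W @ x)[target])
```

The resulting x is still noise to a human, yet the classifier reports very high confidence in the target class, which is exactly the phenomenon work package 3 asks the student to study in real DNNs.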

The outcome is expected to include experimental results and conclusions for both of the research questions described above.