This project is maintained by coxlab

In this project, we built a web demo that lets users explore this intriguing phenomenon simultaneously on multiple powerful deep learning networks [1-3], using any image and target object class. By answering the accompanying questionnaire, users can also help us better characterize deep learning algorithms and the ways in which they agree or disagree with human vision. Our algorithm improves on the methods proposed in [7,10] and is implemented on top of MatConvNet [11] and minConf [12], so it is generally efficient. However, due to limited hardware resources, the demo site runs in CPU mode and cannot serve many concurrent hacking requests. If you plan to make frequent requests, we encourage you to run our freely downloadable source code on your own machine, where GPU mode is fully supported. Setting up mirror sites for this web demo would be very much appreciated, and the information can be shared through the wiki page. Do try out the web demo before reading the following results, and see whether you think deep learning algorithms are easily fooled too.
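The core idea behind the "hacking" procedure in [7,10] is box-constrained gradient optimization: starting from an input image, follow the gradient of a target class's score while keeping every pixel inside its valid range (the role minConf [12] plays in our MATLAB implementation). The sketch below illustrates the technique on a toy softmax-linear "classifier" in Python with NumPy; the model, sizes, and step count are all illustrative assumptions, not part of the actual demo, which attacks real convnets through MatConvNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed linear layer + softmax.
# (The real demo attacks deep convnets; this only sketches the idea.)
W = rng.normal(size=(10, 64))  # 10 classes, 64-pixel "images"

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hack_image(x, target, steps=200, lr=0.5):
    """Projected gradient ascent on log p(target | x),
    keeping pixels inside the valid box [0, 1]."""
    x = x.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        # Gradient of log p[target] w.r.t. x for a softmax-linear model.
        grad = W[target] - p @ W
        x += lr * grad
        x = np.clip(x, 0.0, 1.0)  # box constraint, as minConf enforces
    return x

x0 = rng.uniform(size=64)  # an arbitrary "image"
target = 3                 # the class we want the model to report
x_adv = hack_image(x0, target)

print("before:", np.argmax(W @ x0), "after:", np.argmax(W @ x_adv))
```

Because the objective is concave in `x` for this toy model, projected gradient ascent reliably drives the prediction to the target class; for deep networks the surface is non-convex, which is why quasi-Newton methods such as minConf's projected L-BFGS are used instead of plain gradient steps.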


```
@misc{ostrichinator,
author = {C.-Y. Tsai and D. Cox},
title = {Are Deep Learning Algorithms Easily Hackable?},
howpublished = {\url{http://coxlab.github.io/ostrichinator}},
year = 2015
}
```

[2] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets.

[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.

[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks.

[5] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks.

[6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions.

[7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks.

[8] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images.

[9] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples.

[10] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps.

[11] A. Vedaldi and K. Lenc. MatConvNet – convolutional neural networks for MATLAB.

[12] M. W. Schmidt, E. Berg, M. P. Friedlander, and K. P. Murphy. Optimizing costly functions with simple constraints: A limited-memory projected quasi-newton algorithm.

[13] G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network.

[14] W. J. Scheirer, L. P. Jain, and T. E. Boult. Probability models for open set recognition.

[15] J. Long, N. Zhang, and T. Darrell. Do convnets learn correspondence?

[16] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them.

[17] A. S. Razavian, H. Azizpour, A. Maki, J. Sullivan, C. H. Ek, and S. Carlsson. Persistent evidence of local image properties in generic convnets.

[18] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples.

[19] D. L. Yamins, H. Hong, C. Cadieu, and J. J. DiCarlo. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream.

[20] E. Vig, M. Dorr, and D. Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images.