Assignment #1: Image Classification, kNN, SVM, Softmax, Neural Network

Q1: k-Nearest Neighbor classifier (20 points) The notebook knn.ipynb will walk you through implementing the kNN classifier.
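For orientation, here is a minimal NumPy sketch of the idea behind the classifier you will build: fully vectorized L2 distances followed by a majority vote over the k nearest training labels. The notebook prescribes its own class and method names, so treat this only as a summary of the algorithm.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    # Pairwise squared L2 distances via (a - b)^2 = a^2 - 2ab + b^2, no loops.
    d2 = (np.sum(X_test ** 2, axis=1, keepdims=True)
          - 2.0 * X_test @ X_train.T
          + np.sum(X_train ** 2, axis=1))
    # For each test point, take the labels of the k closest training points
    # and predict the most common label among them.
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])
```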
Q2: Training a Support Vector Machine (25 points) The notebook svm.ipynb will walk you through implementing the SVM classifier.
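As a reference for the math, the multiclass hinge (SVM) loss and its gradient can be computed in fully vectorized form roughly as follows; this is a sketch, not the function signature the notebook asks for.

```python
import numpy as np

def svm_loss(W, X, y, reg=1e-3):
    # Multiclass hinge loss: for each example, sum max(0, s_j - s_y + 1)
    # over incorrect classes j, average over the batch, add L2 regularization.
    N = X.shape[0]
    scores = X @ W                                  # (N, C)
    correct = scores[np.arange(N), y][:, None]      # (N, 1)
    margins = np.maximum(0.0, scores - correct + 1.0)
    margins[np.arange(N), y] = 0.0
    loss = margins.sum() / N + reg * np.sum(W * W)

    # Gradient: every positive margin contributes +x_i to its class column
    # and -x_i to the correct class's column.
    mask = (margins > 0).astype(X.dtype)
    mask[np.arange(N), y] = -mask.sum(axis=1)
    dW = X.T @ mask / N + 2 * reg * W
    return loss, dW
```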
Q3: Implement a Softmax classifier (20 points) The notebook softmax.ipynb will walk you through implementing the Softmax classifier.
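For comparison with the SVM, here is a sketch of the softmax cross-entropy loss and gradient, including the usual max-subtraction trick for numerical stability; again, follow the notebook's own function signature.

```python
import numpy as np

def softmax_loss(W, X, y, reg=1e-3):
    # Cross-entropy loss over softmax probabilities.
    N = X.shape[0]
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)     # stability: shift by row max
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)

    # Gradient through the scores is (p - one_hot(y)) / N.
    dscores = probs
    dscores[np.arange(N), y] -= 1.0
    dW = X.T @ dscores / N + 2 * reg * W
    return loss, dW
```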
Q4: Two-Layer Neural Network (25 points) The notebook two_layer_net.ipynb will walk you through the implementation of a two-layer neural network classifier.
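The network is an affine layer, a ReLU nonlinearity, and a second affine layer producing class scores. A bare-bones forward pass looks roughly like this (variable names are illustrative only):

```python
import numpy as np

def two_layer_net_forward(X, W1, b1, W2, b2):
    # affine -> ReLU -> affine; returns unnormalized class scores.
    h = np.maximum(0, X @ W1 + b1)   # hidden layer activations
    scores = h @ W2 + b2             # output scores, one row per example
    return scores, h
```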
Q5: Higher Level Representations: Image Features (10 points)
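Here the same classifiers are trained on hand-crafted features (such as color histograms and HOG) rather than raw pixels. Purely as an illustration of what such a feature looks like, below is a toy hue-histogram extractor; the notebook supplies its own feature functions.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_histogram_feature(img, nbins=10):
    # Toy hand-crafted feature: a normalized histogram of hue values.
    # img is assumed to be an H x W x 3 float RGB array with values in [0, 1].
    hsv = rgb_to_hsv(img)
    hist, _ = np.histogram(hsv[..., 0], bins=nbins, range=(0, 1), density=True)
    return hist
```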
Assignment #2: Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Networks

Q1: Fully-connected Neural Network (20 points) The notebook FullyConnectedNets.ipynb will introduce you to our modular layer design, and then use those layers to implement fully-connected networks of arbitrary depth. To optimize these models you will implement several popular update rules.
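The modular design pairs every forward function with a backward function that consumes a cache, and each update rule is a plain function that updates one parameter at a time. A rough sketch of the pattern (the assignment's real signatures may differ slightly):

```python
import numpy as np

def affine_forward(x, w, b):
    # Forward pass returns the output plus a cache of what backward will need.
    out = x.reshape(x.shape[0], -1) @ w + b
    return out, (x, w, b)

def affine_backward(dout, cache):
    x, w, b = cache
    dx = (dout @ w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T @ dout
    db = dout.sum(axis=0)
    return dx, dw, db

def sgd_momentum(w, dw, config):
    # One popular update rule: SGD with momentum. config carries the
    # hyperparameters and the per-parameter velocity between calls.
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    config['velocity'] = v
    return w + v, config
```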
Q2: Batch Normalization (30 points) In the notebook BatchNormalization.ipynb you will implement batch normalization and use it to train deep fully-connected networks.
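At training time, batch normalization standardizes each feature using mini-batch statistics, applies a learned scale and shift, and keeps running averages for use at test time. A condensed sketch of the training-mode forward pass:

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, running, eps=1e-5, momentum=0.9):
    # Normalize each column of x (one feature) with the mini-batch mean/variance,
    # then scale by gamma and shift by beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta

    # Exponential running averages, used instead of batch statistics at test time.
    running['mean'] = momentum * running['mean'] + (1 - momentum) * mu
    running['var'] = momentum * running['var'] + (1 - momentum) * var
    return out
```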
Q3: Dropout (10 points) The notebook Dropout.ipynb will help you implement Dropout and explore its effects on model generalization.
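The formulation used in practice is inverted dropout: drop units at training time and rescale the survivors so that the test-time forward pass is a plain identity. A compact sketch (here p is the probability of keeping a unit; check the notebook for the convention it uses):

```python
import numpy as np

def dropout_forward(x, p, train=True):
    # Inverted dropout: zero each unit with probability 1 - p and rescale
    # by 1/p so the expected activation is unchanged.
    if not train:
        return x
    mask = (np.random.rand(*x.shape) < p) / p
    return x * mask
```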
Q4: Convolutional Networks (30 points) In the notebook ConvolutionalNetworks.ipynb you will implement several new layers that are commonly used in convolutional networks.
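The central new layer is the convolution itself. A deliberately naive, loop-based forward pass like the one below is a good mental model before you think about efficiency (shapes and argument names are illustrative):

```python
import numpy as np

def conv_forward_naive(x, w, b, stride=1, pad=1):
    # x: input images (N, C, H, W); w: filters (F, C, HH, WW); b: biases (F,).
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    H_out = (H + 2 * pad - HH) // stride + 1
    W_out = (W + 2 * pad - WW) // stride + 1
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                    # every image
        for f in range(F):                # every filter
            for i in range(H_out):        # every output row
                for j in range(W_out):    # every output column
                    patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(patch * w[f]) + b[f]
    return out
```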
Q5: PyTorch / TensorFlow on CIFAR-10 (10 points) For this last part, you will be working in either TensorFlow or PyTorch, two popular and powerful deep learning frameworks. You only need to complete ONE of these two notebooks. You do NOT need to do both, and we will not be awarding extra credit to those who do.
Open up either PyTorch.ipynb or TensorFlow.ipynb. There, you will learn how the framework works, culminating in training a convolutional network of your own design on CIFAR-10 to get the best performance you can.
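If you choose PyTorch, the overall shape of the code is: define a model, pick an optimizer, and loop over mini-batches doing forward, loss, backward, and update. A minimal sketch under those assumptions (layer sizes here are arbitrary, not a recommendation):

```python
import torch
import torch.nn as nn

# A small convolutional network for 32x32 CIFAR-10 images.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

def train_step(x, y):
    # One optimization step: forward pass, loss, backward pass, parameter update.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```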
Assignment #3: Image Captioning with Vanilla RNNs and LSTMs, Neural Net Visualization, Style Transfer, Generative Adversarial Networks
Q1: Image Captioning with Vanilla RNNs (29 points) The notebook RNN_Captioning.ipynb will walk you through the implementation of an image captioning system on MS-COCO using vanilla recurrent networks.
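The heart of the model is the recurrent step: for a vanilla RNN, the new hidden state is a tanh of an affine function of the current input and the previous hidden state, roughly:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # x: (N, D) inputs for one time step; prev_h: (N, H) previous hidden state.
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    return next_h
```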
Q2: Image Captioning with LSTMs (23 points) The notebook LSTM_Captioning.ipynb will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs and apply them to image captioning on MS-COCO.
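An LSTM step computes four gates from a single affine map and uses them to update a cell state alongside the hidden state. A sketch of the forward step (the notebook fixes the exact gate ordering and shapes it expects):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    # One affine map produces the input (i), forget (f), output (o) and
    # candidate (g) gates; they update the cell and hidden states.
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b            # (N, 4H)
    i = sigmoid(a[:, 0:H])
    f = sigmoid(a[:, H:2*H])
    o = sigmoid(a[:, 2*H:3*H])
    g = np.tanh(a[:, 3*H:4*H])
    next_c = f * prev_c + i * g
    next_h = o * np.tanh(next_c)
    return next_h, next_c
```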
Q3: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points) The notebooks NetworkVisualization-TensorFlow.ipynb and NetworkVisualization-PyTorch.ipynb will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
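In the PyTorch version, a saliency map boils down to backpropagating the correct-class score to the input pixels and taking the channel-wise maximum of the absolute gradient; the TensorFlow notebook computes the same gradient with TensorFlow's own machinery. A rough sketch:

```python
import torch

def saliency_map(model, X, y):
    # X: (N, 3, H, W) images; y: (N,) correct labels.
    model.eval()
    X = X.clone().requires_grad_(True)
    scores = model(X)
    # Sum of correct-class scores; its gradient w.r.t. X gives per-image saliency.
    correct_scores = scores.gather(1, y.view(-1, 1)).squeeze()
    correct_scores.sum().backward()
    return X.grad.abs().max(dim=1)[0]   # (N, H, W)
```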
Q4: Style Transfer (15 points) In the notebooks StyleTransfer-TensorFlow.ipynb or StyleTransfer-PyTorch.ipynb you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
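Style is matched through Gram matrices of convolutional feature maps: the style loss penalizes the difference between the Gram matrix of the generated image's features and that of the style image. A PyTorch sketch (normalization constants vary between write-ups, so follow the notebook's definition):

```python
import torch

def gram_matrix(features):
    # features: (N, C, H, W) activations from one layer of a CNN.
    N, C, H, W = features.shape
    F = features.reshape(N, C, H * W)
    return F @ F.transpose(1, 2) / (C * H * W)   # (N, C, C)

def style_loss(current_feats, target_gram, weight):
    # Squared Frobenius distance between current and target Gram matrices.
    return weight * ((gram_matrix(current_feats) - target_gram) ** 2).sum()
```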
Q5: Generative Adversarial Networks (15 points) In the notebooks GANS-TensorFlow.ipynb or GANS-PyTorch.ipynb you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
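In the basic binary cross-entropy formulation, the discriminator is trained to score real images as 1 and generated images as 0, while the generator is trained to fool it. A PyTorch sketch of those two losses (the notebook spells out the exact objectives it wants, including any variants):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_real, logits_fake):
    # Real images should be classified as 1, generated images as 0.
    real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return real + fake

def generator_loss(logits_fake):
    # Non-saturating generator objective: make generated images look real.
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```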