Needle

Completed the online course 10-714: Deep Learning Systems, offered by CMU, to delve into the internals of PyTorch and TensorFlow and understand how they work at a fundamental level.

Designed and built a deep learning library called Needle, comparable to a very minimal version of PyTorch or TensorFlow. It supports efficient GPU-based operations, automatic differentiation of all implemented functions, and the modules needed for parameterized layers, loss functions, data loaders, and optimizers.
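For a feel of how the library is used, here is a minimal sketch of a forward/backward round trip. Names follow the course's needle conventions (ndl.Tensor, ndl.summation, .backward(), .grad); exact signatures may differ from this repository.

```python
import needle as ndl

# Build a tiny computation graph; every op records enough information
# to differentiate itself, so backward() on a scalar output fills in
# .grad on every leaf Tensor.
x = ndl.Tensor([[1.0, 2.0], [3.0, 4.0]])
w = ndl.Tensor([[0.5], [-0.5]])

y = ndl.summation((x @ w) ** 2)  # forward pass builds the graph

y.backward()   # reverse-mode automatic differentiation
print(w.grad)  # dy/dw, itself a needle Tensor
```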

Project 0

Build a basic softmax regression algorithm, plus a simple two-layer neural network. Create these implementations both in native Python (using the numpy library) and, for softmax regression, in native C/C++.

✅ A basic add function

✅ Loading MNIST data: parse_mnist function

✅ Softmax loss: softmax_loss function (a numpy sketch follows this list)

✅ Stochastic gradient descent for softmax regression

✅ SGD for a two-layer neural network

✅ Softmax regression in C++
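As a taste of Project 0, a hedged numpy sketch of the softmax loss, assuming Z holds raw logits and y holds integer class labels (the real signature in this repository may differ):

```python
import numpy as np

def softmax_loss(Z: np.ndarray, y: np.ndarray) -> float:
    """Average cross-entropy loss over a batch.

    Z: (batch, num_classes) raw logits.
    y: (batch,) integer class labels.
    """
    # log-sum-exp over classes, minus the logit of the true class.
    # (Subtracting Z.max(axis=1) first would make this numerically stable.)
    log_sum = np.log(np.exp(Z).sum(axis=1))
    true_logit = Z[np.arange(Z.shape[0]), y]
    return float((log_sum - true_logit).mean())
```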

Project 1

Build a basic automatic differentiation framework, then use it to re-implement the simple two-layer neural network used for the MNIST digit classification problem in Project 0.

✅ Implementing forward computation

  • ✅ PowerScalar
  • ✅ EWiseDiv
  • ✅ DivScalar
  • ✅ MatMul
  • ✅ Summation
  • ✅ BroadcastTo
  • ✅ Reshape
  • ✅ Negate
  • ✅ Transpose

✅ Implementing backward computation (a MatMul gradient sketch follows this list)

  • ✅ EWiseDiv
  • ✅ DivScalar
  • ✅ MatMul
  • ✅ Summation
  • ✅ BroadcastTo
  • ✅ Reshape
  • ✅ Negate
  • ✅ Transpose
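Each backward rule maps the upstream gradient onto gradients for the op's inputs. For MatMul, if C = A @ B then the chain rule gives dL/dA = dL/dC @ Bᵀ and dL/dB = Aᵀ @ dL/dC. A numpy sketch of just the math (needle's version operates on Tensors and also handles broadcast shapes, omitted here):

```python
import numpy as np

def matmul_backward(out_grad: np.ndarray, a: np.ndarray, b: np.ndarray):
    """Given C = a @ b and out_grad = dL/dC, return (dL/da, dL/db)."""
    grad_a = out_grad @ b.T   # matches the shape of a
    grad_b = a.T @ out_grad   # matches the shape of b
    return grad_a, grad_b
```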

✅ Topological sort: lets us traverse the computation graph (forward or backward), computing gradients along the way

✅ Implementing reverse-mode differentiation (a sketch of the topological sort and gradient accumulation follows below)

✅ Softmax loss

✅ SGD for a two-layer neural network
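Both pieces fit in a few lines. A hedged sketch, assuming each non-leaf node stores an op and an inputs list as in the course scaffolding (the attribute and method names here are assumptions):

```python
def topo_sort(node, visited=None, order=None):
    """Post-order DFS: inputs appear before the nodes computed from them."""
    visited = set() if visited is None else visited
    order = [] if order is None else order
    if id(node) in visited:
        return order
    visited.add(id(node))
    for parent in getattr(node, "inputs", []):
        topo_sort(parent, visited, order)
    order.append(node)
    return order

def backward(output_node, ones_like):
    """Reverse-mode AD: walk output-to-inputs, summing partial gradients.

    ones_like is a caller-supplied helper (an assumption) that builds the
    seed gradient dL/dL = 1 with the output's shape.
    """
    grads = {id(output_node): ones_like(output_node)}
    for node in reversed(topo_sort(output_node)):
        out_grad = grads[id(node)]
        node.grad = out_grad
        if getattr(node, "op", None) is None:  # leaf tensor: nothing upstream
            continue
        # Distribute this node's gradient to each input; a node used in
        # several places accumulates contributions by summation.
        for parent, g in zip(node.inputs, node.op.gradient(out_grad, node)):
            grads[id(parent)] = grads.get(id(parent), 0) + g
```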

Project 2

Implement a neural network library in the needle framework.

✅ Implement a few different methods for weight initialization

✅ Implement additional modules (a Linear sketch follows this list)

  • ✅ Linear: needle.nn.Linear class
  • ✅ ReLU: needle.nn.ReLU class
  • ✅ Sequential: needle.nn.Sequential class
  • ✅ LogSumExp: needle.ops.LogSumExp class
  • ✅ SoftmaxLoss: needle.nn.SoftmaxLoss class
  • ✅ LayerNorm1d: needle.nn.LayerNorm1d class
  • ✅ Flatten: needle.nn.Flatten class
  • ✅ BatchNorm1d: needle.nn.BatchNorm1d class
  • ✅ Dropout: needle.nn.Dropout class
  • ✅ Residual: needle.nn.Residual class
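As an example of the module pattern, a hedged sketch of a Linear layer. Module, Parameter, and init.kaiming_uniform follow the course scaffolding's names; exact details in this repository may differ:

```python
class Linear(Module):
    """y = X @ W + b, with W and b registered as trainable Parameters."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.out_features = out_features
        # Kaiming-uniform initialization, as the assignment specifies;
        # init.kaiming_uniform is assumed to be the course-provided helper.
        self.weight = Parameter(init.kaiming_uniform(in_features, out_features))
        self.bias = Parameter(
            init.kaiming_uniform(out_features, 1).reshape((1, out_features))
        )

    def forward(self, X):
        out = X @ self.weight                        # (batch, out_features)
        return out + self.bias.broadcast_to(out.shape)
```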

✅ Implement the step function of the following optimizers (an SGD sketch follows this list)

  • ✅ SGD: needle.optim.SGD class
  • ✅ Adam: needle.optim.Adam class
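A hedged sketch of the SGD step with momentum and weight decay. The velocity dict self.u and the .detach()/.data conventions follow the course scaffolding; treat the attribute names as assumptions:

```python
class SGD(Optimizer):
    def __init__(self, params, lr=0.01, momentum=0.0, weight_decay=0.0):
        super().__init__(params)
        self.lr = lr
        self.momentum = momentum
        self.weight_decay = weight_decay
        self.u = {}  # per-parameter velocity buffers

    def step(self):
        for i, p in enumerate(self.params):
            # L2 weight decay folds into the gradient.
            grad = p.grad.detach() + self.weight_decay * p.detach()
            self.u[i] = self.momentum * self.u.get(i, 0) + (1 - self.momentum) * grad
            # Update .data so the step itself is not tracked by autograd.
            p.data = p.data - self.lr * self.u[i]
```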

✅ Implement two data primitives: needle.data.Dataset and needle.data.DataLoader (a usage sketch follows this list)

  • ✅ Transformations: RandomFlipHorizontal class
  • ✅ Dataset: needle.data.MNISTDataset class
  • ✅ DataLoader: needle.data.DataLoader class
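Together these form the input pipeline for training. A hedged usage sketch, built only from modules listed above (the file paths are placeholders, and the constructor arguments are assumptions):

```python
import needle.nn as nn
from needle.data import MNISTDataset, DataLoader

# Paths are placeholders for locally downloaded MNIST files.
train_set = MNISTDataset("data/train-images-idx3-ubyte.gz",
                         "data/train-labels-idx1-ubyte.gz")
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # tiny classifier
loss_fn = nn.SoftmaxLoss()

for X, y in train_loader:          # each iteration yields one batch
    loss = loss_fn(model(X), y)    # forward pass on the batch
```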

✅ Build and train an MLP ResNet (a ResidualBlock sketch follows this list)

  • ✅ ResidualBlock: ResidualBlock function
  • ✅ MLPResNet: MLPResNet function
  • ✅ Epoch: epoch function
  • ✅ Train MNIST: train_mnist function
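The MLP ResNet is pure composition of the modules above. A hedged sketch of ResidualBlock in that style; the layer ordering follows the assignment handout's diagram, so verify against the course materials:

```python
import needle.nn as nn

def ResidualBlock(dim, hidden_dim, norm=nn.BatchNorm1d, drop_prob=0.1):
    """Linear->Norm->ReLU->Dropout->Linear->Norm in a Residual, then ReLU."""
    return nn.Sequential(
        nn.Residual(nn.Sequential(
            nn.Linear(dim, hidden_dim),
            norm(hidden_dim),
            nn.ReLU(),
            nn.Dropout(drop_prob),
            nn.Linear(hidden_dim, dim),
            norm(dim),
        )),
        nn.ReLU(),
    )
```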

Project 3: Building an NDArray library

Build a simple backing library for the processing that underlies most deep learning systems: the n-dimensional array (a.k.a. the NDArray).

✅ Python array operations (a strides sketch follows this list)

  • ✅ reshape: reshape function
  • ✅ permute: permute function
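On a compact NDArray both operations are metadata-only: they produce a new shape/strides pair over the same flat buffer, copying nothing. A hedged sketch of the strides bookkeeping in plain Python (needle's NDArray tracks shape, strides, and offset itself; the helper names here are illustrative):

```python
def compact_strides(shape):
    """Row-major strides (in elements) for a compact array."""
    strides, running = [], 1
    for dim in reversed(shape):
        strides.append(running)
        running *= dim
    return tuple(reversed(strides))

def permute_strides(shape, strides, axes):
    """Permute: reorder shape and strides together; no data moves."""
    return (tuple(shape[a] for a in axes),
            tuple(strides[a] for a in axes))

# A (2, 3) array with strides (3, 1) becomes, after permute((1, 0)),
# a (3, 2) view with strides (1, 3) over the same buffer.
print(compact_strides((2, 3)))                   # (3, 1)
print(permute_strides((2, 3), (3, 1), (1, 0)))   # ((3, 2), (1, 3))
```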
