A CV toolkit for my papers.
How to use Cross-Replica / Synchronized BatchNorm in PyTorch
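For reference, a minimal sketch of the standard PyTorch route (not necessarily this repo's own code): torch.nn.SyncBatchNorm.convert_sync_batchnorm recursively replaces every BatchNorm layer with a synchronized version, which only takes effect under DistributedDataParallel.

```python
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes torch.distributed has already been initialized,
# e.g. torch.distributed.init_process_group("nccl").
local_rank = 0  # normally read from the launch environment (e.g. LOCAL_RANK)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),  # will be swapped for SyncBatchNorm below
    nn.ReLU(),
)

# Recursively convert all BatchNorm*D layers to SyncBatchNorm.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# SyncBatchNorm synchronizes batch statistics across processes,
# so the model must run inside DistributedDataParallel.
model = DDP(model.cuda(local_rank), device_ids=[local_rank])
```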
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124
My solutions for Assignments of CS231n: Convolutional Neural Networks for Visual Recognition
Deep learning projects including applications (face recognition, neural style transfer, autonomous driving, sign language reading, music generation, translation, speech recognition and NLP) and theories (CNNs, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, hyperparameter tuning, regularization, optimization, Residual Networks). …
Review materials for the TWiML Study Group. Contains annotated versions of the original Jupyter notebooks (look for names like *_jcat.ipynb), slide decks from weekly Zoom meetups, etc.
Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
Cross-platform mobile neural network C library for on-device training and inference. CPU only. Suited to time-series data.
Playground repository to highlight a problem with BatchNorm layers for a blog article
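The blurb does not say which problem the article covers; one frequently highlighted pitfall is that BatchNorm behaves differently in train and eval modes, since eval mode normalizes with running statistics rather than the current batch's statistics. A minimal demonstration under that assumption:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(8)
x = torch.randn(4, 8, 16, 16)

bn.train()
out_train = bn(x)  # normalizes with this batch's mean/var

bn.eval()
out_eval = bn(x)   # normalizes with the running mean/var

# Freshly initialized running stats differ from the batch stats,
# so the two outputs disagree noticeably.
print((out_train - out_eval).abs().max())
```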
MNIST classification using a neural network and backpropagation. Written in Python and depends only on NumPy.
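As a reference point for what such a repo implements, here is a minimal NumPy-only backprop sketch for a one-hidden-layer classifier; the layer sizes, random toy data, and variable names are illustrative assumptions, not this repo's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST batches: 784-dim inputs, 10 classes.
X = rng.standard_normal((64, 784))
y = rng.integers(0, 10, size=64)

# One hidden layer; sizes are illustrative.
W1 = rng.standard_normal((784, 128)) * 0.01
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.01
b2 = np.zeros(10)
lr = 0.1

for step in range(100):
    # Forward pass.
    h = np.maximum(0, X @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)  # softmax
    loss = -np.log(p[np.arange(len(y)), y]).mean()

    # Backward pass (softmax + cross-entropy gradient).
    dlogits = p.copy()
    dlogits[np.arange(len(y)), y] -= 1
    dlogits /= len(y)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dh[h <= 0] = 0  # ReLU gradient
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain SGD update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```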
Code to fold the batch norm layers of a DNN model in PyTorch
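The standard fold (not necessarily this repo's exact code): with frozen BN statistics, the conv can absorb the affine transform via W' = W · γ/√(σ² + ε) and b' = (b − μ) · γ/√(σ² + ε) + β. A sketch:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a conv whose weights absorb the (eval-mode) BN transform."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride,
                      conv.padding, conv.dilation,
                      conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check: the fused conv matches conv + BN in eval mode.
conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)
bn.eval()
x = torch.randn(1, 3, 16, 16)
fused = fold_bn_into_conv(conv, bn)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```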
Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning. TL;DR: fine-tuning only the batch norm affine parameters yields performance similar to fine-tuning all of the model parameters.
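The recipe the TL;DR describes takes only a few lines in PyTorch. A sketch, assuming a torchvision resnet18 with ImageNet weights as the pretrained model (the paper's own models and setup may differ):

```python
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")

# Freeze everything...
for p in model.parameters():
    p.requires_grad = False

# ...then re-enable only the BatchNorm affine parameters (gamma/beta).
# Note: BN running statistics still update in train() mode
# regardless of requires_grad.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True
        m.bias.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```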
Solutions for Andrej Karpathy's "Neural Networks: Zero to Hero" course