Deeplite Torch Zoo is a collection of state-of-the-art efficient computer vision models for embedded applications in PyTorch.
For information on YOLOBench, see the YOLOBench documentation.
The main features of this library are:

- High-level API to create models, dataloaders, and evaluation functions
- Single interface for SOTA classification models:
  - timm models,
  - pytorchcv models,
  - other SOTA efficient models (EdgeViT, FasterNet, GhostNetV2, MobileOne)
- Single interface for SOTA YOLO detectors (compatible with Ultralytics training):
  - YOLOv3, v4, v5, v6-3.0, v7, v8
  - YOLO with timm backbones
  - other experimental configs
```python
from deeplite_torch_zoo import get_model, list_models

model = get_model(
    model_name='edgevit_xs',    # model names for imagenet available via `list_models('imagenet')`
    dataset_name='imagenet',    # dataset name, since the same model name (e.g. resnet18) differs between imagenet and cifar100
    pretrained=False,           # if True, will try to load a pre-trained checkpoint
)

# creating a model with 42 classes for transfer learning:
model = get_model(
    model_name='fasternet_t0',  # model names for imagenet available via `list_models('imagenet')`
    num_classes=42,             # number of classes for transfer learning
    dataset_name='imagenet',    # take weights from a checkpoint pre-trained on this dataset
    pretrained=False,           # if True, will try to load all weights with matching tensor shapes
)
```
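The `list_models` helper referenced in the comments above can be called directly to discover the model names registered for a given dataset:

```python
from deeplite_torch_zoo import list_models

# discover model names registered in the zoo for the ImageNet dataset
list_models('imagenet')
```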
```python
from deeplite_torch_zoo import get_model

model = get_model(
    model_name='yolo4n',        # creates a YOLOv4n model on COCO
    dataset_name='coco',        # (`n` corresponds to width factor 0.25, depth factor 0.33)
    pretrained=False,           # if True, will try to load a pre-trained checkpoint
)

# one could create a YOLO model with a timm backbone,
# PAN neck and YOLOv8 decoupled anchor-free head like this:
model = get_model(
    model_name='yolo_timm_fbnetv3_d',  # creates a YOLO with FBNetV3-d backbone from timm
    dataset_name='coco',
    pretrained=False,                  # if True, will try to load a pre-trained checkpoint
    custom_head='yolo8',               # will replace the default detection head
                                       # with a YOLOv8 detection head
)
```
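As a quick sanity check, a dummy forward pass can be run through the created detector. This is a hedged sketch: the input resolution and the exact output structure are assumptions for illustration, not part of the documented API.

```python
import torch
from deeplite_torch_zoo import get_model

model = get_model(model_name='yolo4n', dataset_name='coco', pretrained=False)
model.eval()

dummy_input = torch.randn(1, 3, 640, 640)  # assumed NCHW input at a typical YOLO resolution
with torch.no_grad():
    out = model(dummy_input)               # raw outputs; structure depends on the detection head
```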
```python
from deeplite_torch_zoo import get_dataloaders

dataloaders = get_dataloaders(
    data_root='./',             # folder with data, will be used for download
    dataset_name='imagewoof',   # dataset name; data is downloaded to data_root if applicable
    num_workers=8,              # number of dataloader workers
    batch_size=64,              # dataloader batch size (train and test)
)

# dataloaders['train'] -> train dataloader
# dataloaders['test'] -> test dataloader
#
# see below for the list of supported datasets
```
The list of supported datasets is available for classification and object detection.
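To illustrate how the model and dataloader APIs compose, here is a minimal fine-tuning sketch. It assumes the returned model behaves like a standard `torch.nn.Module`, the dataloaders yield `(images, labels)` batches, and that `imagewoof` has 10 classes; the hyperparameters are purely illustrative.

```python
import torch
from deeplite_torch_zoo import get_model, get_dataloaders

# assumption: 'imagewoof' has 10 classes and its dataloaders yield (images, labels) batches
model = get_model(model_name='fasternet_t0', dataset_name='imagenet',
                  num_classes=10, pretrained=False)
dataloaders = get_dataloaders(data_root='./', dataset_name='imagewoof', batch_size=64)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in dataloaders['train']:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```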
```python
from deeplite_torch_zoo import get_eval_function

eval_function = get_eval_function(
    model_name='yolo8s',
    dataset_name='voc',
)

# required arg signature is fixed for all eval functions
metrics = eval_function(model, test_dataloader)
```
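Putting the pieces together, a model and a test dataloader created via the zoo APIs can be passed directly to the evaluation function. A minimal sketch, assuming VOC dataloaders are registered in the zoo under the name `'voc'`:

```python
from deeplite_torch_zoo import get_model, get_dataloaders, get_eval_function

model = get_model(model_name='yolo8s', dataset_name='voc', pretrained=False)
dataloaders = get_dataloaders(data_root='./', dataset_name='voc', batch_size=8)
eval_function = get_eval_function(model_name='yolo8s', dataset_name='voc')

metrics = eval_function(model, dataloaders['test'])  # evaluation metrics (e.g. mAP for detection)
```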
```python
from deeplite_torch_zoo import get_model
from deeplite_torch_zoo.trainer import Detector

model = Detector(model_name='yolo7n')       # will create a wrapper around the YOLOv7n model
                                            # (YOLOv7n model with a YOLOv8 detection head)
model.train(data='VOC.yaml', epochs=100)    # same arguments as the Ultralytics trainer

# alternatively:
torch_model = get_model(
    model_name='yolo7n',
    dataset_name='coco',
    pretrained=False,
    custom_head='yolo8',
)
model = Detector(torch_model=torch_model)   # either `model_name` or `torch_model`
model.train(data='VOC.yaml', epochs=100)    # should be provided
```
PyPI version:

```
$ pip install deeplite-torch-zoo
```

Latest version from source:

```
$ pip install git+https://github.com/Deeplite/deeplite-torch-zoo.git
```
We provide several training scripts as examples of how deeplite-torch-zoo can be integrated into existing training pipelines. They include:

- support for Knowledge Distillation (a generic sketch is shown below)
- training recipes (A1, A2, A3, USI, etc.)
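The Knowledge Distillation support mentioned above follows the usual teacher-student setup. The snippet below is a generic sketch of a standard distillation loss (temperature-softened KL term plus cross-entropy), not the zoo's actual training script; the function name and hyperparameters are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # soft targets: KL divergence between temperature-softened distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction='batchmean',
    ) * (T * T)
    # hard targets: usual cross-entropy with ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```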
We always welcome community contributions to expand the scope of deeplite-torch-zoo and to add new models and datasets. Please refer to the documentation for the detailed steps on how to add a model or dataset. In general, we follow the fork-and-pull Git workflow.
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull request so that we can review your changes
NOTE: Be sure to merge the latest from "upstream" before making a pull request!
Repositories used to build Deeplite Torch Zoo
- YOLOv3 implementation: ultralytics/yolov3
- YOLOv5 implementation: ultralytics/yolov5
- flexible-yolov5 implementation: Bobo-y/flexible-yolov5
- YOLOv8 implementation: ultralytics/ultralytics
- YOLOv7 implementation: WongKinYiu/yolov7
- YOLOX implementation: iscyy/yoloair
- westerndigitalcorporation/YOLOv3-in-PyTorch
- deeplab implementation: pytorch-deeplab-xception
- unet_scse implementation: nyoki-mtl/pytorch-segmentation
- fcn implementation: wkentaro/pytorch-fcn
- Unet implementation: milesial/Pytorch-UNet
- CIFAR100 model implementations: kuangliu/pytorch-cifar
- MobileNetV1 (VWW dataset) implementation: qfgaohao/pytorch-ssd
- MobileNetV3 (VWW dataset) implementation: d-li14/mobilenetv3.pytorch
- d-li14/mobilenetv2.pytorch
- d-li14/efficientnetv2.pytorch
- apple/ml-mobileone
- osmr/imgclsmob
- huggingface/pytorch-image-models
- moskomule/senet.pytorch
- DingXiaoH/RepLKNet-pytorch
- huawei-noah/Efficient-AI-Backbones
- torchvision dataset implementations: pytorch/vision
- MLP implementation: aaron-xichen/pytorch-playground
- AutoAugment implementation: DeepVoltaire/AutoAugment
- Cutout implementation: uoguelph-mlrg/Cutout
- Robustness measurement image distortions: hendrycks/robustness
- Registry implementation: openvinotoolkit/openvino/tools/pot
- Torch profiler implementation: zhijian-liu/torchprofile