diff --git a/examples/vai_quantizer/README.md b/examples/vai_quantizer/README.md
index 384a5d0f5..811d2c7bf 100644
--- a/examples/vai_quantizer/README.md
+++ b/examples/vai_quantizer/README.md
@@ -4,3 +4,151 @@
+
+## QuickStart guide
+
+These step-by-step instructions help you get started with the VAI quantizer, a tool for model optimization through deep neural-network compression.
+Follow this guide to quickly launch the tool; the steps use the PyTorch framework as an example.
+The procedure is similar for every supported framework: you can find the desired framework in [this directory](https://github.com/Xilinx/Vitis-AI/tree/master/src/vai_quantizer)
+and adapt the steps accordingly.
+1. Prerequisite and installation.
+Make sure that you have the latest version of Vitis-AI and that the model quantization Python package is installed.
+The required package depends on the model framework.
+For PyTorch, `pytorch_nndct` must be installed. You can find the installation options in [this document](https://github.com/Xilinx/Vitis-AI/tree/master/src/vai_quantizer/vai_q_pytorch#install-from-source-code).
+If the following command does not report an error, the installation is complete.
+
+ ```
+ python -c "import pytorch_nndct"
+ ```
+
+ > **_NOTE:_** The code snippets in the following steps can be combined into a single Jupyter notebook or Python script.
+
+2. Import the modules.
+ + ``` + import torch + from torch.utils.data import DataLoader, Dataset + import torchvision.transforms as transforms + from PIL import Image + import os + from pytorch_nndct.apis import torch_quantizer + ``` + +3. Prepare the dataset, dataloader, model and evaluation function.
+In this example, the pretrained ResNet18 classification model from torchvision is used.
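+The custom dataset below assumes an ImageFolder-style layout with one sub-directory per class; a sketch of the expected structure (class and file names are illustrative):
+
+ ```
+ /path/to/your/dataset/
+ ├── cat/
+ │   ├── img_001.jpg
+ │   └── img_002.jpg
+ └── dog/
+     ├── img_101.jpg
+     └── img_102.jpg
+ ```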
+A Dataset and DataLoader for the custom data:
+ + ``` + # Define your custom dataset class + class CustomDataset(Dataset): + def __init__(self, data_dir, transform=None): + self.data_dir = data_dir + self.transform = transform + self.data = [] # List to store image filenames and labels + + # Assuming directory structure: data_dir/class_name/image.jpg + classes = os.listdir(data_dir) + self.class_to_idx = {cls_name: idx for idx, cls_name in enumerate(classes)} + + for class_name in classes: + class_dir = os.path.join(data_dir, class_name) + if os.path.isdir(class_dir): + for file_name in os.listdir(class_dir): + if file_name.endswith(".jpg"): + self.data.append((os.path.join(class_dir, file_name), self.class_to_idx[class_name])) + + def __len__(self): + return len(self.data) + + def __getitem__(self, idx): + img_path, label = self.data[idx] + image = Image.open(img_path).convert("RGB") + + if self.transform: + image = self.transform(image) + + return image, label + + # Define transformations to apply to the images + transform = transforms.Compose([ + transforms.Resize((224, 224)), + transforms.ToTensor(), + transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), + ]) + + # Create an instance of your custom dataset + data_dir = "/path/to/your/dataset" + custom_dataset = CustomDataset(data_dir, transform=transform) + + # Create a custom dataloader + batch_size = 8 + dataloader = DataLoader(custom_dataset, batch_size=batch_size, shuffle=True) + # val_dataloader - you may also declare the dataloader with validation data + ``` + +Evaluation function with top1 accuracy:
+ + ``` + def evaluate_model(model, dataloader): + model.eval() + device = next(model.parameters()).device + + total_samples = 0 + correct_predictions = 0 + + with torch.no_grad(): + for inputs, labels in dataloader: + inputs, labels = inputs.to(device), labels.to(device) + outputs = model(inputs) + _, predicted = torch.max(outputs, 1) + + total_samples += labels.size(0) + correct_predictions += (predicted == labels).sum().item() + + top1_accuracy = correct_predictions / total_samples + return top1_accuracy + ``` + +The pretrained model object:
+
+ ```
+ from torchvision.models.resnet import resnet18
+ model = resnet18(pretrained=True)
+ ```
+
+4. Generate a quantizer with the required input shape and get the converted model.
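+Before creating the quantizer, it can be useful to record the float model's baseline accuracy for later comparison with the quantized model; a minimal sketch using the evaluation function and dataloader defined above:
+
+ ```
+ # Baseline top-1 accuracy of the float (non-quantized) model, for reference
+ float_top1 = evaluate_model(model, dataloader)
+ print(f"Float model top-1 accuracy: {float_top1:.4f}")
+ ```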
+
+ ```
+ quant_mode = "calib"
+ dummy_input = torch.randn([batch_size, 3, 224, 224])
+ quantizer = torch_quantizer(quant_mode, model, (dummy_input))
+ quant_model = quantizer.quant_model
+ ```
+
+ Since a pretrained model is used, the training process is omitted; only quantization of the model weights is presented here.
+ > **_NOTE:_** quant_mode: a string that indicates which quantization mode the process is using: "calib" for calibration of quantization, "test" for evaluation of the quantized model.
+
+5. Forward with the converted model by evaluating it on the validation data.
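+The evaluation call below uses a `val_dataloader`, which was only mentioned as a comment in step 3; a minimal sketch of how it could be created, assuming the validation images live in a separate directory with the same layout:
+
+ ```
+ # Hypothetical validation split; adjust the path to your own data
+ val_dataset = CustomDataset("/path/to/your/validation/dataset", transform=transform)
+ val_dataloader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
+ ```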
+
+ ```
+ top_acc1 = evaluate_model(quant_model, val_dataloader)
+ ```
+
+6. Output the quantization result and deploy the model.
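+The export snippet below references a `deploy` flag that is not defined earlier. In the vai_q_pytorch workflow the script is typically run twice: a first pass with `quant_mode = "calib"` (as above) that exports the quantization configuration, and a second pass with `quant_mode = "test"` that exports the deployable models; a minimal sketch of how such a flag could be wired up (the flag name is illustrative):
+
+ ```
+ # Illustrative flag: export deployable models only in the "test" pass.
+ # For xmodel export, the test pass is usually run with batch size 1
+ # (re-create the quantizer with a [1, 3, 224, 224] dummy input).
+ deploy = (quant_mode == "test")
+ ```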
+
+ ```
+ if quant_mode == 'calib':
+     quantizer.export_quant_config()
+ if deploy:
+     quantizer.export_torch_script()
+     quantizer.export_onnx_model()
+     quantizer.export_xmodel()
+ ```
+
+The xmodel file for the Vitis AI compiler and the other artifacts will be generated under the output directory `./quantize_result`.
+They will then be used to deploy the model to the DPU device.
+```
+ ResNet_int.xmodel: deployed XIR format model
+ ResNet_int.onnx: deployed ONNX format model
+ ResNet_int.pt: deployed TorchScript format model
+```
diff --git a/examples/wego/README.md b/examples/wego/README.md
index 8b61f02ce..3bd0b781f 100644
--- a/examples/wego/README.md
+++ b/examples/wego/README.md
@@ -40,6 +40,42 @@ Please refer to the following links to run the wego demos targeting different fr
 - [TensorFlow 2.x](./tensorflow-2.x)
 - [TensorFlow 1.x](./tensorflow-1.x)
+## QuickStart guide
+
+These step-by-step instructions help you get started with the WeGO tool for model optimization.
+Follow this guide to quickly launch the tool; the PyTorch InceptionV3 model is used as an example
+to [compile an offline quantized model](https://github.com/Xilinx/Vitis-AI/tree/master/examples/wego/pytorch/01_compiling_offline_quantized_models) and run it.
+1. Prerequisite and installation.
+Make sure that you have the latest version of Vitis-AI and that the [WeGO Example Recipes](https://github.com/Xilinx/Vitis-AI/tree/master/examples/wego#prepare-wego-example-recipes) are downloaded.
+Follow [the preparation step](https://github.com/Xilinx/Vitis-AI/tree/master/examples/wego#preparation) to get this done.
+2. Set up the Conda environment for WeGO-Torch.
+Assuming you have entered the Vitis-AI CPU Docker container, use the following command to activate the conda environment for WeGO-Torch.
+ ```bash + $ conda activate vitis-ai-wego-torch + ``` +3. Change directory to the corresponding classification folder in the WeGO folder.
+ ```bash + $ cd ./pytorch/01_compiling_offline_quantized_models/classification/ + ``` +4. Install the python dependencies.
+ ```bash
+ $ pip install -r requirements.txt
+ ```
+5. Run the WeGO tool.
+The example uses the InceptionV3 pre-saved model weights from the WeGO example recipes.
+Two different running modes can be selected, enabling either an accuracy or a performance test:
+ - **normal**: the example accepts a single image as input and performs the normal inference process using a single thread. The output of this mode is either the top-5 accuracy or an image, depending on the model type.
+ ```bash
+ $ bash run.sh inception_v3 normal
+ ```
+ - **perf**: the example accepts a single image as input, but a large image pool is created instead (i.e. the input image is copied many times). The performance profiling process takes this large image pool as input and runs using multiple threads. The output of this mode is the performance profiling result (i.e. the FPS numbers).
+ ```bash
+ $ bash run.sh inception_v3 perf
+ ```
+> **_NOTE:_** You may also enable the OnBoard tool option in `run.sh`; it collects data during the inference process, which can then be visualized with TensorBoard.
+
+6. Collect the WeGO artifacts in the `./_wego_torch` directory.
+The generated `.xmodel` artifact and meta files are saved for further deployment on the DPU device.
 # Reference