- This repository makes use of Matterport's implementation of Mask-RCNN:
  - Tensorflow 1.15
  - Supports Python <= 3.7 (tested on Python 3.7)

  Please use `virtualenv` to create an environment with Python 3.7. Other Python versions may lead to installation issues.
- By default, the provided `requirements.txt` installs the CPU versions of the libraries, i.e. Tensorflow and PyTorch. Please install the GPU versions if necessary, along with a matching CUDA toolkit:

  ```shell
  python3 -m pip install tensorflow-gpu==1.15
  ```
- Model weights: please download the following model weights and place them in the `weights` folder:
  - Damage Prediction (Mask-RCNN): Google Drive
  - Damage Prediction (YoloV5): Google Drive
  - CarSide Prediction (YoloV5): Google Drive
- Clone this repo as follows:

  ```shell
  git clone <THIS_REPO_SSH/HTTPS> --recurse-submodules
  ```

  The `--recurse-submodules` flag is needed because this repo depends on other repositories as git submodules. If you have accidentally cloned without it, run:

  ```shell
  git submodule init
  git submodule update
  ```
- Install the requirements by running:

  ```shell
  python3 -m pip install -r requirements.txt
  ```
Video Demo of Flask Application

- Select and upload an image from your local machine.
- Submit the image for inference.
- The predicted instance-segmentation masks, bounding boxes, and estimated costs will be displayed in the web interface.

insurtech-demo-video.mp4
Jupyter notebooks are provided in the `development` folder.

The website is deployed using Flask. The code for this is in `app.py`, and the scripts for ML-related services are in the `scripts/` folder. The website can be deployed by running:

```shell
python3 app.py
```

To see the website, go to `localhost:5000`.
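A minimal sketch of what such a Flask entry point can look like. This is illustrative only: the route names and the `run_inference()` helper are assumptions for this sketch, not the repo's actual code in `app.py`.

```python
# Minimal sketch of a Flask upload-and-predict app.
# NOTE: the route names and the run_inference() helper below are
# hypothetical placeholders; see app.py for the actual implementation.
from flask import Flask, request

app = Flask(__name__)

def run_inference(image_bytes):
    # Placeholder for the ML pipeline (Mask-RCNN / YoloV5 predictions
    # and cost estimation) implemented under scripts/.
    return {"damages": [], "estimated_cost": 0}

@app.route("/", methods=["GET"])
def index():
    # The real app serves an HTML upload form here.
    return ("<form method='post' enctype='multipart/form-data' action='/predict'>"
            "<input type='file' name='image'><input type='submit'></form>")

@app.route("/predict", methods=["POST"])
def predict():
    image_bytes = request.files["image"].read()
    return run_inference(image_bytes)  # Flask serializes the dict to JSON

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # available at localhost:5000
```

Running `python3 app.py` with such a file serves the upload form at `localhost:5000` and the prediction endpoint at `/predict`.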
- Use the VGG Image Annotator to annotate (Polygon tool).
- The HTML file is in the "Processing" folder, inside the `dataset_v1` folder.
- Load the `project-v1.json` file.
- If the damage can be knocked out and no internal damage is visible (e.g. a dent), it is light (< $400).
- Anything requiring a respray, or a dent too severe to knock out, is moderate ($400 - $1000).
- Anything requiring a replacement is severe (> $1000).
Scratches and dents
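The cost brackets above can be sketched as a small labeling helper. This is a hedged sketch: the function name and the keyword matching are illustrative assumptions, not part of the repo.

```python
def severity_bracket(repair_action):
    """Map an annotated repair action to a severity label and cost bracket.

    Follows the annotation guidelines above (the keyword matching here is
    a hypothetical simplification):
    - can be knocked out, no internal damage -> light (< $400)
    - respray or heavy dent -> moderate ($400 - $1000)
    - panel replacement -> severe (> $1000)
    """
    action = repair_action.lower()
    if "replace" in action:
        return ("severe", "> $1000")
    if "respray" in action or "heavy dent" in action:
        return ("moderate", "$400 - $1000")
    return ("light", "< $400")
```

For example, `severity_bracket("Replace bumper")` returns `("severe", "> $1000")`.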
- Generate Yolo-formatted annotations using the `development/Notebook/Yolov5.ipynb` notebook [note the configs].
- Edit the `data.yaml` (saved to `output_dir`) with the corresponding directories. The `data.yaml` file should look like this:

  ```yaml
  # All paths are relative to the yolov5 directory
  train: data/YoloDataset/train/images
  val: data/YoloDataset/val/images
  test: data/YoloDataset/test/images  # Optional

  # The following is an example
  nc: 3  # Number of classes
  names: ['Back', 'Front', 'Side']  # Names of classes, sorted.
  ```
- Move the `data.yaml` file to the `yolov5/data` directory.
- Assuming training and validation data are present in the `yolov5/data/YoloDataset` directory, structure the workspace as follows:

  ```
  yolov5
  |___data
      |___YoloDataset
      |   |___train
      |   |   |___images  [Put all images here]
      |   |   |___labels  [Put all labels here]
      |   |___val
      |   |   |___images
      |   |   |___labels
      |   |___test
      |       |___images
      |       |___labels
      |___data.yaml
  ```
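The workspace layout above can also be created programmatically; a minimal sketch, assuming the layout shown (the `make_yolo_workspace` helper is an illustrative assumption, not part of the repo):

```python
import os

def make_yolo_workspace(root):
    """Create the yolov5 data workspace layout described above.

    root: path to the yolov5 directory (hypothetical helper for
    illustration; images/labels still need to be copied in manually).
    """
    for split in ("train", "val", "test"):
        for sub in ("images", "labels"):
            os.makedirs(
                os.path.join(root, "data", "YoloDataset", split, sub),
                exist_ok=True,
            )
    # data.yaml lives directly under yolov5/data
    open(os.path.join(root, "data", "data.yaml"), "a").close()
```

Usage: `make_yolo_workspace("yolov5")`, then place the images and Yolo-formatted label files in the corresponding `images`/`labels` folders.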
- Run the following command in the `yolov5` directory to train:

  ```shell
  python train.py --img 460 --batch 16 --epochs 300 --data data/YoloDataset.yaml --weights yolov5s.pt
  ```
- For prediction, refer to the yolov5 GitHub repository. It could look something like this:

  ```python
  import torch

  model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # TODO: load model weights here

  @app.route('/predict', methods=['POST'])
  def getPrediction(img):
      results = model(img)
      # TODO: Format results appropriately
      return results
  ```
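Formatting the raw results for the web interface could be sketched as below. The row layout `(x1, y1, x2, y2, confidence, class_index)` matches yolov5's `results.xyxy` tensors, but the `format_results` function and the class list in the example are illustrative assumptions.

```python
def format_results(detections, class_names):
    """Convert raw (x1, y1, x2, y2, confidence, class_index) detection rows
    into JSON-serializable dicts for the web interface.

    Hypothetical helper: yolov5's results.xyxy[0] rows follow this layout,
    but this function is a sketch, not the repo's actual formatting code.
    """
    formatted = []
    for x1, y1, x2, y2, conf, cls in detections:
        formatted.append({
            "box": [round(float(v), 1) for v in (x1, y1, x2, y2)],
            "confidence": round(float(conf), 3),
            "label": class_names[int(cls)],
        })
    return formatted

# Example with a single detection and the example class list from data.yaml:
# format_results([(10, 20, 110, 220, 0.87, 1)], ['Back', 'Front', 'Side'])
```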