My goal is to simplify the installation and training of pre-trained deep learning models through a GUI (or, if you prefer, a web app) without writing extra code. Set up your dataset, start training right away, and monitor it with TensorBoard or the DLTGUI tool. No more endless parameters, no more manual data preprocessing.
While developing this application, I was inspired by the DIGITS system developed by NVIDIA.
- You won't have any problems training image classification models.
- It is easy to train an image classification model, save it, and make predictions from the saved model.
- Only a few parameters!
- You can train on top of pre-trained models.
- Object detection isn't included in 1.0, but future versions will make it much easier to train and use object detection algorithms.
- You can train your model on GPU or CPU.
- Parallel operation is possible.
- You won't need a second terminal or a script to run TensorBoard.
In the words of Stephen Hawking:
Science is beautiful when it makes simple explanations of phenomena or connections between different observations. Examples include the double helix in biology and the fundamental equations of physics.
Guide - Youtube Video (Coming Soon)
- Bug fixes (fixed a problem with displaying the heatmap for CUDA >= 10.0).
- Bug fixes.
- Many bugs have been fixed.
- You can now fine-tune your model, making it easy to increase its accuracy.
- You can now see which parts of an image your model focuses on while classifying (class activation map / heatmap - available for MobileNetV2 only).
- Bug fixes.
- Now you can do data augmentation using Augmentor.
- Now you can choose CPU or GPU before the training.
- You can choose the activation function for single-class training. (Sigmoid and ReLU [new])
- Added SimpleCNNModel
- Fixed bugs
- Fixed the single-class problem; you can now train a one-class model.
- Added sigmoid as an activation function and binary_crossentropy as a loss function.
- Added new functions to DLGUI (prepare_data, sigmoid, and more).
- Added new example dataset.
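The sigmoid activation and binary_crossentropy loss added above pair naturally for one-class (binary) training: sigmoid squashes the network's raw output into a probability, and binary cross-entropy scores that probability against the 0/1 label. A minimal stdlib sketch of the underlying math (an illustration, not DLTGUI's actual implementation):

```python
import math

def sigmoid(x: float) -> float:
    """Squash a raw network output (logit) into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_crossentropy(y_true: float, y_pred: float, eps: float = 1e-7) -> float:
    """Penalize confident wrong predictions heavily; near-zero loss for correct ones."""
    y_pred = min(max(y_pred, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(y_pred) + (1.0 - y_true) * math.log(1.0 - y_pred))

p = sigmoid(0.0)                     # an undecided logit maps to probability 0.5
loss = binary_crossentropy(1.0, p)   # loss of predicting 0.5 for a positive label
```

Because sigmoid outputs a single probability, one output neuron suffices for a one-class model, which is exactly why this pairing unblocked single-class training.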
- Now you can use InceptionV3, VGG16, VGG19 and NASNetMobile models. [Image Classification]
- Anaconda 64-bit
- Python 3.7.3
- TensorFlow 2.0.1
- CUDA and cuDNN (minimum CUDA 10.0, for GPU usage)
- Numpy 1.16.4
- Matplotlib
- PIL
- subprocess
- pathlib
- Augmentor
- MobileNetV2
- Inception V3
- VGG16
- VGG19
- NASNetMobile
- SimpleCnnModel
The following is an example of how a dataset should be structured. Before you train a deep learning model, put your entire dataset into the datasets directory.
```
datasets/
├── example_dataset/
│   └── cat/
│       ├── img_1.jpg/png
│       └── img_2.jpg/png
└── flower_photos/
    ├── daisy/
    ├── dandelion/
    ├── roses/
    ├── sunflowers/
    └── tulips/
```
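One folder per class, image files inside each folder - that is the whole contract. As a sanity check before training, you can scan the tree and count classes and images yourself. A small sketch using only the standard library (the throwaway dataset built here is illustrative, not part of DLTGUI):

```python
import tempfile
from pathlib import Path

# Build a tiny throwaway dataset in the expected layout: class folders with images.
root = Path(tempfile.mkdtemp()) / "datasets" / "flower_photos"
for cls in ("daisy", "dandelion", "roses", "sunflowers", "tulips"):
    (root / cls).mkdir(parents=True)
    (root / cls / "img_1.jpg").touch()

def summarize(dataset_dir: Path) -> dict:
    """Map each class folder name to its number of image files."""
    return {d.name: sum(1 for f in d.iterdir() if f.is_file())
            for d in sorted(dataset_dir.iterdir()) if d.is_dir()}

classes = summarize(root)
num_classes = len(classes)  # this is the "Number of Classes" the form asks for
```

The folder names become the class labels, so the number of subfolders must match the class count you enter in the form.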
For image classification:
- Clone this repo and `cd Deep-Learning-Training-GUI`.
- On your conda terminal: `pip install -r requirements.txt`
- Set up your dataset directory as shown above.
- Once your dataset is ready, go to the terminal and run `python app.py`. You can access the program on `localhost:5000`.
- Now you will see the home page.
- Enter the path where your dataset is located. For example, to select the `flower_photos` folder in `datasets`, write `datasets/flower_photos` into the form.
- Split the Dataset - Specify what percentage of the data should be held out as a test set.
- Pre-trained Models - Currently only MobileNetV2 is available, but in future versions you will be able to select other pre-trained models for fine-tuning [not available yet].
- CPU / GPU - Specify whether you want to train on the GPU or the CPU (the first version runs on the GPU automatically).
- Number of Classes - Taking the `flower_photos` example again: there are 5 separate folders under the `flower_photos` folder, so the class count is 5. When you train your own dataset, you have to create as many folders as you have classes.
- Batch Size - The number of training samples fed to the network at each step. If you have a 1080 Ti or better GPU, you can set it to 64 or 128. The higher the batch size, the less noise the model learns.
- Epoch - The number of times the training data is shown to the network. So with 10 epochs, the training data is shown to the model network 10 times.
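To make the split, batch-size, and epoch settings concrete, here is the arithmetic they imply, as a plain-Python sketch (the 3,670-image figure for flower_photos is illustrative; this is not the app's code):

```python
import math

total_images = 3670   # illustrative size of the flower_photos dataset
test_fraction = 0.2   # the split percentage entered in the form (20% test)
batch_size = 32
epochs = 10

test_images = int(total_images * test_fraction)   # images held out for evaluation
train_images = total_images - test_images         # images actually trained on

# Each epoch shows every training image once, in batches:
steps_per_epoch = math.ceil(train_images / batch_size)  # weight updates per epoch
total_updates = steps_per_epoch * epochs                # weight updates overall
```

Doubling the batch size roughly halves `steps_per_epoch` (fewer, smoother updates), while adding epochs multiplies `total_updates` without changing how each one is computed.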
When you start training, you can access TensorBoard without running any script in a terminal!
Check `localhost:6006`
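Under the hood, launching TensorBoard from an app rather than a second terminal just means spawning it as a child process (note that `subprocess` appears in the requirements list above). A hedged sketch of the idea - the log directory name is illustrative, not necessarily the one DLTGUI uses:

```python
import subprocess

def tensorboard_command(logdir: str, port: int = 6006) -> list:
    """Build the argument list for launching TensorBoard on the given port."""
    return ["tensorboard", "--logdir", logdir, "--port", str(port)]

cmd = tensorboard_command("logs/fit")  # "logs/fit" is a hypothetical log directory
# To actually launch it alongside the web app (non-blocking):
#     proc = subprocess.Popen(cmd)
# and later shut it down with proc.terminate().
```

Spawning it with `Popen` (rather than `run`) keeps the web app responsive while TensorBoard serves on port 6006.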
Contributions with example scripts for other frameworks (PyTorch or Caffe2) and other pre-trained models are welcome!
Coming soon.
- I would like to thank impROS for the feedback.
- LexCybermac
- Release 5 pre-trained models.
- Choosing CPU or GPU before the training.
- Choosing the activation function for single-class training (Sigmoid and ReLU).
- Data Augmentation
- Fine-Tuning
- Heatmap on predicted images.
- Object Detection - Mask RCNN.
- Font Awesome [4]
- Bootstrap V4 [5]
- How to Easily Deploy Machine Learning Models Using Flask [6]
- Simple and efficient data augmentations using the TensorFlow tf.Data and Dataset API [8]
- Marcus D. Bloice, Peter M. Roth, Andreas Holzinger, "Biomedical image augmentation using Augmentor", Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259 [9]