feat/add_callbacks #233

Open

wants to merge 4 commits into master from feat/add_callbacks
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
weights/*
.ipynb_checkpoints
__pycache__
logs
78 changes: 42 additions & 36 deletions README.md
@@ -1,4 +1,4 @@
# Implementation of deep learning framework -- Unet, using Keras
## Implementation of deep learning framework -- Unet, using Keras

The architecture was inspired by [U-Net: Convolutional Networks for Biomedical Image Segmentation](http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/).

@@ -8,16 +8,18 @@ The architecture was inspired by [U-Net: Convolutional Networks for Biomedical I

### Data

The original dataset is from [isbi challenge](http://brainiac2.mit.edu/isbi_challenge/), and I've downloaded it and done the pre-processing.

You can find it in folder data/membrane.

### Data augmentation

The training data contains 30 512*512 images, which is far from enough to train a deep neural network. I use the ImageDataGenerator module from keras.preprocessing.image to do data augmentation.

See dataPrepare.ipynb and data.py for details.
You can use your own custom data; I used this repository for my research work. The data directory should be laid out like this:

```
|- Dataset
|- images
|- image.jpg
|- image2.jpg
|- labels
|- image.jpg
|- image2.jpg
```
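
For illustration, here is a minimal sketch of how such a layout can be consumed (a hypothetical `pair_dataset` helper, not part of this PR; it assumes each label shares its filename with its image):

```python
from pathlib import Path

def pair_dataset(root):
    """Pair every image in <root>/images with its label in <root>/labels by filename."""
    root = Path(root)
    pairs = []
    for img in sorted((root / "images").glob("*.jpg")):
        label = root / "labels" / img.name
        if label.exists():
            pairs.append((img, label))
    return pairs
```

Images without a matching label file are simply skipped, so a filename mismatch shows up as a shorter list rather than a crash.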
<br>

### Model

@@ -28,55 +30,59 @@ This deep neural network is implemented with Keras functional API, which makes i
Output from the network is a 512*512 image which represents the mask that should be learned. The sigmoid activation function
makes sure that mask pixels are in the \[0, 1\] range.

<br>

### Training

The model is trained for 5 epochs.
You can use this notebook for training: [Notebook Path](trainUnet.ipynb)

1. You can also write your own custom generator
2. In this repository **zhixuhao** uses a flow-from-directory generator (which keeps RAM usage low even if the dataset is too large to fit in memory)
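
Point 1 above can be sketched as a minimal custom generator (a hypothetical `batch_generator`, assuming images and masks already fit in memory as NumPy arrays):

```python
import numpy as np

def batch_generator(images, masks, batch_size=2):
    """Yield (image_batch, mask_batch) pairs forever, reshuffling on every pass."""
    n = len(images)
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield images[batch], masks[batch]
```

A generator like this can be handed to Keras for training; the flow-from-directory approach in point 2 remains the better choice when the dataset is too large for RAM.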

After 5 epochs, calculated accuracy is about 0.97.
### Start tensorboard

The loss function for training is binary cross-entropy.
```bash

conda activate your_env
tensorboard --logdir logs/ --port 6006 --bind_all

---

## How to use
```

### Dependencies
<br>

This tutorial depends on the following libraries:
### Evaluation

In this repository we focused on:

* Tensorflow
* Keras >= 1.0
1. Dice coefficient

Also, this code should be compatible with Python versions 2.7-3.5.

### Run main.py
<br>
Now you can see the predicted mask during training, so you know how your model is performing

You will see the predicted results of the test images in data/membrane/test
<br>

### Or follow notebook trainUnet
![img/prediction.png](img/prediction.png)



### Results
<br>

Use the trained model to do segmentation on test images; the result is satisfactory.
### Dependencies

This tutorial depends on the following libraries:

![img/0test.png](img/0test.png)
Run the following bash commands to make a separate environment for this repository:

![img/0label.png](img/0label.png)
1. Install conda first
```bash
conda create -n unet_env python=3.6 tensorflow keras opencv-python matplotlib
conda activate unet_env
```


## About Keras

Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:

* allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
* supports both convolutional networks and recurrent networks, as well as combinations of the two.
* supports arbitrary connectivity schemes (including multi-input and multi-output training).
* runs seamlessly on CPU and GPU.

Read the documentation at [Keras.io](http://keras.io/).

Keras is compatible with: Python 2.7-3.5.
92 changes: 92 additions & 0 deletions custom_callbacks.py
@@ -0,0 +1,92 @@
import keras
from IPython.display import clear_output
import matplotlib.pyplot as plt


'''
Added by Sohaib Anwaar

[email protected]
'''


class ValidatePredictions(keras.callbacks.Callback):
    '''
    This custom callback helps you validate your predictions while training.
    '''

    def __init__(self, model, generator):
        '''
        Init of the class, which takes all the required params to get started.

        params: model     : (keras.Model) Model which you are using for training
        params: generator : (image generator) Validation or training generator
        '''
        super(ValidatePredictions, self).__init__()

        self.image, self.label = next(generator)
        self.model = model

    def display(self, display_list):
        '''
        Display function which shows your image, ground truth and prediction.

        params: display_list : (list) List of images to display in a sequence,
                               e.g. [image1, image2, image3] where each entry is a numpy array
        '''
        # Titles to display on top of each image
        gt_list = ["Image", "Label", "Prediction"]

        # Plot figure size
        plt.figure(figsize=(15, 15))

        # Appending all plot figs
        for i in range(len(display_list)):
            plt.subplot(1, len(display_list), i + 1)
            plt.title(gt_list[i])
            plt.imshow(display_list[i])
            plt.axis('off')

        # Showing images
        plt.show()

    def on_epoch_begin(self, epoch, logs=None):
        '''
        This function executes at the start of each epoch and plots the predictions.

        params: epoch : (int) Epoch number
        params: logs  : (dict) Logs generated so far, i.e. val_loss, loss, accuracy etc.;
                        you can explore them with logs.keys()
        '''
        # Getting the first 5 images
        images_to_pred = self.image[:5]

        # Predicting the images
        pred = self.model.predict(images_to_pred)

        # Plotting each image with its label and prediction
        # (iterate over the batch dimension, not the channel dimension)
        for i in range(images_to_pred.shape[0]):
            self.display([images_to_pred[i], self.label[i], pred[i]])
44 changes: 25 additions & 19 deletions dataPrepare.ipynb
@@ -4,17 +4,7 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\SoftWare\\Anaconda2\\envs\\python3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
" from ._conv import register_converters as _register_converters\n",
"Using TensorFlow backend.\n"
]
}
],
"outputs": [],
"source": [
"from data import *"
]
@@ -45,7 +35,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +49,7 @@
" zoom_range=0.05,\n",
" horizontal_flip=True,\n",
" fill_mode='nearest')\n",
"myGenerator = trainGenerator(20,'data/membrane/train','image','label',data_gen_args,save_to_dir = \"data/membrane/train/aug\")"
"myGenerator = trainGenerator(20,'/media/sohaib/additional_/DataScience/knee_mri/dataset/unet_format/train/','images','masks',data_gen_args,save_to_dir = None)"
]
},
{
@@ -71,9 +61,18 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 10488 images belonging to 1 classes.\n",
"Found 10488 images belonging to 1 classes.\n"
]
}
],
"source": [
"#you will see 60 transformed images and their masks in data/membrane/train/aug\n",
"num_batch = 3\n",
@@ -93,21 +92,28 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"image_arr,mask_arr = geneTrainNpy(\"data/membrane/train/aug/\",\"data/membrane/train/aug/\")\n",
"#np.save(\"data/image_arr.npy\",image_arr)\n",
"#np.save(\"data/mask_arr.npy\",mask_arr)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "tensorflow2",
"language": "python",
"name": "python3"
"name": "tensorflow2"
},
"language_info": {
"codemirror_mode": {
@@ -119,7 +125,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.7.10"
}
},
"nbformat": 4,
Binary file added img/prediction.png
20 changes: 18 additions & 2 deletions model.py
@@ -10,6 +10,22 @@
from keras import backend as keras



def dice_coef(y_true, y_pred):
    smooth = 1
    y_true_f = keras.flatten(y_true)
    y_pred_f = keras.flatten(y_pred)
    intersection = keras.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (keras.sum(y_true_f) + keras.sum(y_pred_f) + smooth)


def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)


def unet(pretrained_weights = None,input_size = (256,256,1)):
inputs = Input(input_size)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
@@ -52,9 +68,9 @@ def unet(pretrained_weights = None,input_size = (256,256,1)):
    conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)

    model = Model(input = inputs, output = conv10)
    model = Model(inputs = inputs, outputs = conv10)

    model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])
    model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy', dice_coef])

    #model.summary()

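The `dice_coef` added in model.py can be sanity-checked with a plain NumPy version of the same formula (a sketch using the same `smooth = 1`, on hypothetical binary masks):

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth=1.0):
    """NumPy mirror of the Keras dice_coef: (2 * intersection + smooth) / (sum_true + sum_pred + smooth)."""
    y_true_f = y_true.flatten()
    y_pred_f = y_pred.flatten()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)
```

A perfect prediction gives a value of 1.0, and the smoothing term keeps the ratio defined even when both masks are empty.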