diff --git a/README.md b/README.md
index 3fcda1d..17d2213 100644
--- a/README.md
+++ b/README.md
@@ -216,6 +216,66 @@ If you don't have access to GPU or appropriate hardware and don't want to instal
 ![DeepLIIF Website Demo](images/deepliif-website-demo.gif)
 
+DeepLIIF can also be accessed programmatically through an endpoint by posting a multipart-encoded request
+containing the original image file:
+
+```
+POST /api/infer
+
+Parameters
+
+img (required)
+file: image to run the models on
+
+resolution
+string: resolution used to scan the slide (10x, 20x, 40x), defaults to 20x
+
+pil
+boolean: if true, use PIL.Image.open() to load the image instead of python-bioformats
+
+slim
+boolean: if true, return only the segmentation result image
+```
+
+For example, in Python:
+
+```python
+import os
+import json
+import base64
+from io import BytesIO
+
+import requests
+from PIL import Image
+
+# Use the sample images from the main DeepLIIF repo
+images_dir = './Sample_Large_Tissues'
+filename = 'ROI_1.png'
+
+res = requests.post(
+    url='https://deepliif.org/api/infer',
+    files={
+        'img': open(f'{images_dir}/{filename}', 'rb')
+    },
+    # optional param that can be 10x, 20x (default) or 40x
+    params={
+        'resolution': '20x'
+    }
+)
+
+data = res.json()
+
+def b64_to_pil(b):
+    return Image.open(BytesIO(base64.b64decode(b.encode())))
+
+for name, img in data['images'].items():
+    output_filepath = f'{images_dir}/{os.path.splitext(filename)[0]}_{name}.png'
+    with open(output_filepath, 'wb') as f:
+        b64_to_pil(img).save(f, format='PNG')
+
+print(json.dumps(data['scoring'], indent=2))
+```
+
 ## Synthetic Data Generation
 The first version of DeepLIIF model suffered from its inability to separate IHC positive cells in some large clusters, resulting from the absence of clustered positive cells in our training data. To infuse more information about the