Inference operator for "Algo bundles" trained with Auto3DSeg #418

Open

nvahmadi opened this issue Apr 15, 2023 · 1 comment

@nvahmadi

Is your feature request related to a problem? Please describe.
I would like to create a MAP from a MONAI Algo "bundle" that was trained with Auto3DSeg. It seems this is not supported yet, because Auto3DSeg does not produce regular bundles: for example, monai_bundle_inference_operator first checks whether there is a JSON/YAML config file, but Auto3DSeg Algos are structured differently, with Python files for training/inference under the ./scripts subfolder and an execution model based on Python Fire.
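
For reference, a trained Algo folder looks roughly like this (exact names and layout may vary by template and Auto3DSeg version):

workdir/
└── segresnet_0/                  # one folder per algo template and fold
    ├── algo_object.pkl           # pickled, trained Algo object
    ├── configs/
    │   └── hyper_parameters.yaml
    ├── scripts/
    │   ├── train.py              # driven by Python Fire, not a bundle config
    │   ├── infer.py
    │   └── validate.py
    └── model_fold0/              # trained weights (name varies by template)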

Describe the solution you'd like
An inference operator for MONAI "bundles" that are Algo directories trained with Auto3DSeg, e.g. a monai_algo_inference_operator.py. It would be great to be able to take trained Algos and deploy them directly. 🙂

Describe alternatives you've considered
I've spent a few hours trying to understand all the steps performed in monai_bundle_inference_operator.py and to implement my own operator class by inheriting from it, i.e. class MonaiAlgoInferenceOperator(MonaiBundleInferenceOperator). But it was too complex, and would have taken me too long, given certain slight differences: for example, MonaiBundleInferenceOperator separates inference into three parts (pre-processing, compute, and post-processing), whereas a trained Auto3DSeg Algo object encapsulates all of these steps in a single inference class, as sketched below. Maybe there is a way to re-use that and create a very simple InferenceOperator for the Deploy App SDK. Hopefully the code snippets below can help illustrate what I mean, and maybe even point towards a solution.
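
Conceptually, the difference is something like this (pseudocode; the bundle operator's method names are taken from monai_bundle_inference_operator.py):

# MonaiBundleInferenceOperator splits inference into three stages:
#   img  = self.pre_process(op_input)   # transforms from the bundle config
#   out  = self.predict(img)            # network forward pass
#   pred = self.post_process(out)       # inverse/export transforms
#
# A trained Auto3DSeg Algo instead exposes a single entry point:
#   pred = algo.get_inferer().infer(image_filepath)   # pre/post included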

Additional context
In principle, inference with an Algo object is easy, following this code snippet. Let's assume I want to run inference with the trained fold 0 of the Algo template segresnet, on a NIfTI file located at image_filepath on disk:

import os
import pickle
import sys

# add the algorithm_templates folder to the system path, so the template
# classes referenced inside the pickle can be imported
algo_templates_dir = '/path/to/algorithm_templates'
sys.path.append(os.path.abspath(algo_templates_dir))

# load algorithm template class from templates folder
import segresnet

# read pickle file of trained algo object, let's assume from fold 0
pkl_filename='/path/to/workdir/segresnet_0/algo_object.pkl'
with open(pkl_filename, "rb") as f_pi:
    data_bytes = f_pi.read()
data = pickle.loads(data_bytes)
algo_bytes = data.pop("algo_bytes")
algo = pickle.loads(algo_bytes)

# choose inferer part of algo, and run inference
# (note: this already includes pre- and post-processing!)
inferer = algo.get_inferer()
pred = inferer.infer(image_filepath)

I still haven't fully understood the requirements for creating an inference operator for the Deploy App SDK. But maybe it's possible to inherit from InferenceOperator directly and implement a very simple inference operator like this:

class MonaiAlgoInferenceOperator(InferenceOperator):
    def __init__(
        self,
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        # read pickle file of trained algo object, let's assume from fold 0
        pkl_filename = '/path/to/workdir/segresnet_0/algo_object.pkl'
        with open(pkl_filename, "rb") as f_pi:
            data_bytes = f_pi.read()
        data = pickle.loads(data_bytes)
        algo_bytes = data.pop("algo_bytes")
        self.algo = pickle.loads(algo_bytes)

        # choose the inference part of the algo, which also loads the model weights
        self.inferer = self.algo.get_inferer()

    # imho, the abstract methods pre_process and post_process can stay undefined

    # we only need to define the compute function
    def compute(self, op_input, op_output, context):
        # not sure whether/how the inferer should be pulled from context?
        # (I already have it in memory as self.inferer)

        # not sure what needs to be done to op_input before model ingestion

        # run inference, incl. pre- and post-processing
        pred = self.inferer.infer(op_input)

        # not sure what needs to be done to pred to become op_output
        # op_output = do_something(pred)

        # send output, analogous to MonaiBundleInferenceOperator,
        # e.g. something like (placeholder, names not defined here):
        # self._send_output(pred, name, metadata, op_output, context)

I know this is very amateurish, apologies 😅
Looking forward to more expert suggestions!

nvahmadi added the enhancement (New feature or request) label on Apr 15, 2023
@MMelQin (Collaborator) commented Apr 17, 2023

@nvahmadi Thanks for your request.

Yes, we have encountered such models, along with custom app scripts, in the MONAI Model Zoo, e.g. Lung Nodule Detection. The inference configs for these models use steps/sections that are needed for inference but not supported by MonaiBundleInferenceOperator, i.e. anything other than pre_processing, infer, and post_processing.

The App SDK is indeed designed to allow a dev user to create custom operators, while leveraging built-in ones where applicable, to compose the application. This means that a custom inference operator will need to be developed, using the code snippet shown above; a rough sketch is given below. DICOM input can still be processed by the built-in operators, and for NIfTI input either a new loader operator or the built-in NIfTI data loader can be used. Please check out this example for creating the deploy app for the aforementioned Lung Nodule Detection model (note that this app also integrates with the Nuance platform, so it has extra modules; I do have a MONAI Deploy native version, but it was not checked in).
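
For illustration, a minimal custom operator along these lines could look as follows. This is a sketch only: it assumes the decorator-based App SDK operator API (v0.x), that the pickle layout matches the snippet above, and that inferer.infer() accepts a NIfTI file path and returns the prediction; algo_templates_dir and algo_pkl_path are placeholder parameters.

import os
import pickle
import sys

import numpy as np

import monai.deploy.core as md
from monai.deploy.core import (DataPath, ExecutionContext, Image,
                               InputContext, IOType, Operator, OutputContext)


@md.input("image", DataPath, IOType.DISK)          # path to the input NIfTI file
@md.output("seg_image", Image, IOType.IN_MEMORY)   # prediction for downstream operators
class MonaiAlgoInferenceOperator(Operator):
    """Runs inference with a pickled, trained Auto3DSeg Algo object (sketch)."""

    def __init__(self, algo_templates_dir: str, algo_pkl_path: str):
        super().__init__()
        # the template classes referenced by the pickle must be importable
        # before unpickling (cf. the sys.path.append in the snippet above)
        sys.path.append(os.path.abspath(algo_templates_dir))

        with open(algo_pkl_path, "rb") as f:
            data = pickle.loads(f.read())
        algo = pickle.loads(data.pop("algo_bytes"))

        # the inferer encapsulates pre-processing, the forward pass,
        # and post-processing in a single call
        self._inferer = algo.get_inferer()

    def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
        input_path = op_input.get("image").path

        # assumed: infer() takes a file path and returns the prediction
        # (verify against the actual Algo template)
        pred = self._inferer.infer(str(input_path))

        # convert to numpy if the template returns a (Meta)Tensor
        pred_np = np.asarray(pred.cpu()) if hasattr(pred, "cpu") else np.asarray(pred)

        # hand the result to the next operator, e.g. a NIfTI or DICOM writer
        op_output.set(Image(pred_np), "seg_image")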

MMelQin self-assigned this on Jun 23, 2023