
Site-specific heads for lesion segmentation instead of a universal model #26

Open
naga-karthik opened this issue Jul 31, 2024 · 2 comments

Comments

@naga-karthik

We are working on universal/contrast-agnostic models for lesion and spinal cord segmentation. However, it seems that a single model/segmentation head for segmenting the lesions of all sites is a sub-optimal objective. Instead, depending on the number of sites in the (aggregated) dataset, we could train multiple segmentation heads so that the model can learn the site-specific details in the images. Note that the backbone architecture is trained on all sites; only the final few layers are fine-tuned for specific sites. Example: say the U-Net has n=5 layers in its encoder-decoder networks; it is pre-trained on all sites, but, say, the (n-1)-th layer onwards is fine-tuned on the specific sites.
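
A rough sketch of the shared-backbone + per-site-heads idea in PyTorch (the class name `MultiHeadSegModel`, the site keys, and the single 1x1x1 conv head are all hypothetical, not an existing implementation):

```python
import torch
import torch.nn as nn

class MultiHeadSegModel(nn.Module):
    """Shared encoder-decoder backbone with one segmentation head per site."""

    def __init__(self, backbone: nn.Module, backbone_channels: int, sites: list):
        super().__init__()
        self.backbone = backbone  # pre-trained on data from all sites
        # one lightweight head per site (here, a single 1x1x1 conv), keyed by site name
        self.heads = nn.ModuleDict({
            site: nn.Conv3d(backbone_channels, 1, kernel_size=1) for site in sites
        })

    def forward(self, x: torch.Tensor, site: str) -> torch.Tensor:
        features = self.backbone(x)        # shared, site-agnostic features
        return self.heads[site](features)  # site-specific segmentation logits
```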

When it is deployed in SCT, one of the inputs could be the site that is running the model on their end. Based on the input site, we can retrieve the site-specific segmentation head and output the segmentation that is best suited for that site. My hypothesis is that this approach could give better results compared to a universal model (with only one segmentation head) that is forced to learn the features of images from all sites.
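
On the SCT side, head selection at inference time could be as simple as indexing into the stored heads with the user-provided site. Again a rough sketch, reusing the hypothetical `MultiHeadSegModel` above; the `"default"` fallback head is an assumption, not something that exists:

```python
import torch

@torch.no_grad()
def segment(model, image, site):
    """Run inference with the head matching the user-provided site."""
    model.eval()
    if site not in model.heads:
        site = "default"  # hypothetical fallback: a head trained on all sites pooled
    logits = model(image, site)
    return (torch.sigmoid(logits) > 0.5).float()  # binary lesion mask
```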


jcohenadad commented Jul 31, 2024

@naga-karthik what do you mean by "model/segmentation head"? Are you talking about developing a model for head segmentation (i.e., brain)? Or are you referring to the "head" of the model (in which case, could you please clarify what you mean by that)?

@naga-karthik

Apologies for the confusion!

> are you referring to the "head" of the model

Yes, indeed. In any DL network, the last layer (or a last set of n layers) is responsible for segmentation. These are also the layers that are fine-tuned if the need arises. So, when I refer to a "head", I essentially mean a separate set of layers for each site that is specifically trained to learn the features of images from that site.

In the image-classification literature, we have seen that having a specific "head" for each class tends to work better (in terms of accuracy) than one head trying to predict all classes. This might or might not translate to segmentation, but that's something to test. Because only the final set of layers is used for learning class-specific info (or, in our case, site-specific info), the remaining part of the network could be a pre-trained model taken from the internet, or we could do supervised pre-training using the datasets we have. In summary, all we have to try is to take a segmentation model (pre-trained or not), add a few final layers, and fine-tune the model on our data.
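
Concretely, fine-tuning a site-specific head while keeping the pre-trained backbone frozen could look roughly like this (a hypothetical helper, assuming the `MultiHeadSegModel` sketch above; the BCE loss is just a placeholder, Dice-based losses are more common for lesion segmentation):

```python
import torch

def finetune_site_head(model, site, train_loader, epochs=10, lr=1e-3):
    """Freeze the shared backbone and train only the head of the given site."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.heads[site].parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # placeholder; Dice-based losses are common for lesions

    model.train()
    for _ in range(epochs):
        for image, label in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(image, site), label)
            loss.backward()
            optimizer.step()
```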

Hope this makes more sense!
