We are working on universal/contrast-agnostic models for lesion or spinal cord segmentation. However, training a single model with one segmentation head to segment lesions from all sites seems like a sub-optimal objective. Instead, depending on the number of sites in the (aggregated) dataset, we could train multiple segmentation heads so that the model can learn the site-specific details in the images. Note that the backbone architecture is trained on all sites; only the final few layers are fine-tuned for specific sites. Example: suppose the U-Net has n=5 layers in the encoder-decoder network; it is pre-trained on all sites, but, say, the (n-1)-th layer is fine-tuned on the specific sites. A rough sketch of this idea is below.
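To make the idea concrete, here is a minimal PyTorch sketch of the proposed setup: a shared backbone trained on all sites plus one lightweight segmentation head per site. All names here (`SharedBackbone`, `MultiHeadSegmenter`, the site ids) are illustrative assumptions, not actual SCT or ivadomed code, and the backbone is reduced to two conv layers to keep the example short.

```python
# Hypothetical sketch: shared site-agnostic backbone + per-site heads.
# Not actual SCT code; layer sizes and names are placeholders.
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Stand-in for the site-agnostic encoder-decoder (e.g., a U-Net
    without its final 1x1 convolution). Trained on data from all sites."""
    def __init__(self, in_channels=1, features=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class MultiHeadSegmenter(nn.Module):
    """Shared backbone with one segmentation head per site."""
    def __init__(self, sites, in_channels=1, features=16, out_channels=1):
        super().__init__()
        self.backbone = SharedBackbone(in_channels, features)
        # One final 1x1 conv ("head") per site, indexed by site id.
        self.heads = nn.ModuleDict({
            site: nn.Conv2d(features, out_channels, kernel_size=1)
            for site in sites
        })

    def forward(self, x, site):
        feats = self.backbone(x)          # site-agnostic features
        return self.heads[site](feats)    # site-specific prediction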
When it is deployed in SCT, one of the inputs could be the site that is running the model on their end. Based on the input site, we can retrieve the site-specific segmentation head and output the segmentation best suited for that site. My hypothesis is that this approach could give better results than a universal model (with only one segmentation head) that is forced to learn the features of images from all sites.
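Continuing the sketch above, inference-time head selection could then look like this; the site id `"site-007"` and the input shape are made up for illustration.

```python
# Illustrative inference call: the site identifier supplied by the
# user selects the matching head. Site ids here are hypothetical.
model = MultiHeadSegmenter(sites=["site-007", "site-012"])
model.eval()
image = torch.randn(1, 1, 64, 64)  # dummy single-channel MRI slice
with torch.no_grad():
    pred = torch.sigmoid(model(image, site="site-007"))
```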
@naga-karthik what do you mean by "model/segmentation head"? Are you talking about developing a model for head segmentation (i.e., brain)? Or are you referring to the "head" of the model (in which case, could you please clarify what you mean by that)?
Yes, indeed. In any DL network, the last layer (or the last set of n layers) is responsible for segmentation. These are also the layers that are fine-tuned if the need arises. So, when I refer to a "head", I essentially mean a separate set of layers for each site that is specifically trained to learn the features of images from that site.
In the image classification literature, we have seen that having a specific "head" for each class tends to work better (in terms of accuracy) than one head trying to predict all classes. This might or might not translate to segmentation, but that's something to test. Because only the final set of layers is used for learning class-specific info (or, in our case, site-specific info), the remaining part of the network could be a pre-trained model taken from the internet, or we could do supervised pre-training using the datasets we have. In summary, all we have to try is to take a segmentation model (pre-trained or regular), add a few final layers, and fine-tune the model on our data. A sketch of that fine-tuning step follows.
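Using the hypothetical `MultiHeadSegmenter` from the earlier sketch, the fine-tuning step could look like the following: freeze the pretrained backbone and update only one site's head. The optimizer, learning rate, loss, and dummy data are all placeholder choices, not a recommendation.

```python
# Sketch of site-specific fine-tuning: backbone frozen, head trainable.
model = MultiHeadSegmenter(sites=["site-007", "site-012"])
for p in model.backbone.parameters():
    p.requires_grad = False  # pretrained backbone stays fixed

# Optimize only the parameters of the head for one site.
optimizer = torch.optim.Adam(model.heads["site-007"].parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

image = torch.randn(4, 1, 64, 64)                      # dummy batch from one site
target = torch.randint(0, 2, (4, 1, 64, 64)).float()   # dummy binary lesion masks

optimizer.zero_grad()
logits = model(image, site="site-007")
loss = criterion(logits, target)
loss.backward()    # gradients reach only the site-007 head
optimizer.step()
```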