[ENH]: Smoothing images as a preprocessing step #161
Conversation
One particular smoothing function that is quite different from the others is FSL's SUSAN. It is particularly interesting because it is the smoothing function used by XCP engine. If we do want to implement it (as an additional dependency...), the nipype function create_susan_smooth may be of interest. This blogpost shows how to use it. The command-line/GUI wiki for SUSAN is here. There is also a Python wrapper for the susan command.
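SUSAN's key idea, in contrast to a plain Gaussian, is that each neighbour contributes only if its intensity is similar to the centre voxel, which preserves edges. Below is a toy 1D numpy sketch of that idea; the function name, parameters, and defaults are illustrative, not FSL's actual implementation:

```python
import numpy as np

def susan_smooth_1d(signal, radius=3, sigma=1.5, t=0.3):
    """Toy 1D SUSAN-style smoothing (illustrative only).

    Each neighbour is weighted by both spatial distance and intensity
    similarity (brightness threshold t), so sharp edges survive smoothing.
    """
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    # Spatial Gaussian weights, fixed for all positions
    spatial = np.exp(-(offsets ** 2) / (2.0 * sigma ** 2))
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        # Intensity-similarity weights: dissimilar neighbours contribute ~0
        intensity = np.exp(-(((signal[idx] - signal[i]) / t) ** 2))
        w = spatial * intensity
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

On a step signal, values on either side of the step stay close to their original plateau, unlike with a plain Gaussian kernel of the same width.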
@LeSasse Do you already have a use case for this? It will help to check the implementation(s) against real-world usage.
So, I have a use case where I would like to test the current pipeline without SUSAN and compare it with the results when using SUSAN. In addition, I have equivalent data preprocessed with xcpengine which uses SUSAN so we can also compare it to that. |
Sounds good, will let you know when I have something concrete. |
@LeSasse @fraimondo I think we can have either (i) 1 class handling 1/2/3 "backends" (nilearn, AFNI, FSL) or (ii) 3 classes named accordingly. Having 1 class will of course bloat it but keep it in one piece. I personally prefer (ii) but if (i) is better from usage POV, I don't mind it either. |
Since there's no agreement or disagreement, I'll proceed with (ii).
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##              main      #161   +/- ##
=======================================
  Coverage   100.00%   100.00%
=======================================
  Files            1         1
  Lines            1         1
=======================================
  Hits             1         1

Flags with carried forward coverage won't be shown.
If there is no "common" API that the different "backends" use, then having one single class can be difficult (e.g. yield too many optional parameters conditional to the backend). In that case, I would KISS with (ii). |
For me, different backends should yield the same result; only the underlying tool should differ. If the results are not the same, then it should not be a backend, but a different class.
After looking at the implementations, they don't do exactly the same thing, in the sense that they have different parameters which affect the result, so having distinct classes makes sense.
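Option (ii) could be sketched roughly as below; the class names, parameters, and base class are hypothetical, and the actual backend calls are stubbed out in comments:

```python
from abc import ABC, abstractmethod


class BasePreprocessor(ABC):
    """Hypothetical common interface; each backend gets its own class
    exposing only the parameters that backend actually supports."""

    @abstractmethod
    def preprocess(self, img):
        """Return a smoothed version of img."""


class NilearnSmoothing(BasePreprocessor):
    """Gaussian smoothing via nilearn (only needs a FWHM)."""

    def __init__(self, fwhm):
        self.fwhm = fwhm

    def preprocess(self, img):
        # Would call nilearn.image.smooth_img(img, fwhm=self.fwhm);
        # stubbed here to keep the sketch self-contained.
        return ("nilearn", self.fwhm, img)


class FSLSusanSmoothing(BasePreprocessor):
    """Edge-preserving smoothing via FSL SUSAN (needs an extra
    brightness threshold, which nilearn's smoothing does not have)."""

    def __init__(self, brightness_threshold, fwhm):
        self.brightness_threshold = brightness_threshold
        self.fwhm = fwhm

    def preprocess(self, img):
        # Would shell out to FSL's susan (e.g. through nipype);
        # stubbed here to keep the sketch self-contained.
        return ("fsl-susan", self.brightness_threshold, self.fwhm, img)
```

Keeping one class per backend keeps each constructor signature honest: no optional parameters that are silently ignored by some backends.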
sounds good to me |
@LeSasse This PR works for you right? Before merging, would be cool if we can check your pipeline comparison works as expected. |
Saw this just now, yeah works smoothly on my side. Consider it approved! |
LGTM
Are you requiring a new dataset or marker?
Which feature do you want to include?
One standard preprocessing step in a lot of MRI analyses (that is not applied by fMRIprep) is smoothing images after confound regression. It would be good to have this as an additional inbuilt preprocessing step.
How do you imagine this integrated in junifer?
The easiest way will probably be using the nilearn function for smoothing images, although one could also use AFNI; I don't expect that to yield very different results.
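For reference, the kernel width passed to nilearn-style smoothing is a FWHM in mm, which relates to the Gaussian sigma as sigma = FWHM / (2 * sqrt(2 * ln 2)). A small helper (illustrative only, not junifer or nilearn code):

```python
import math


def fwhm_to_sigma(fwhm_mm, voxel_size_mm=1.0):
    """Convert a smoothing FWHM in mm to a Gaussian sigma in voxel units.

    Uses the standard relation FWHM = sigma * 2 * sqrt(2 * ln 2);
    voxel_size_mm rescales from mm to voxels (illustrative helper).
    """
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)) * voxel_size_mm)
```

With a 6 mm FWHM and 2 mm isotropic voxels, for example, this yields a per-axis sigma of roughly 1.27 voxels.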
Do you have a sample code that implements this outside of junifer?
No response
Anything else to say?
No response