Drift Correction issues with multi-shank probe #686
The issue with the drift scatter plot is a bug in the plot code; I'll have that fixed soon. Thanks for catching that!
Another suggestion that might help: try continuing to increase the batch size until you no longer see that noisy outlier.
Is there any way to enforce a smoother drift estimate (besides using massive batch sizes; I'm already using much larger batches than the default)? I am working in an unusual model species and brain area where we have many more neurons per unit volume than is typical for mouse cortex (usually 100-200 units per recording), and also much less electrode drift, and I know from previous experience that this kind of data is often sufficient to get reliable drift estimates (at least assuming rigidity, which seems valid to me on visual examination). But occasionally there are units which essentially turn on or off throughout the recording and disrupt motion estimates. I think you can see some of this in the spike amplitude plot I shared? I often end up turning drift correction off entirely, because the estimated drift is usually ±10 µm or so, except occasionally I get these massive errors. But it would be fantastic to impose some kind of smoother, robustness constraint, or prior on the drift correction...
There is already a temporal smoothing step in the drift correction, which smooths the "Z" cross-correlation curves before taking the max (which gives the drift estimate). This should effectively work like a prior. We could make the smoothing constant a parameter (@jacobpennington). However, Jacob and I discussed this case today and we think there might be more going on. It looks like there is a fairly large physical readjustment happening around 3000 s. There are many neurons that are active only before or only after the readjustment. This could be due to physical modifications in the tissue, or distortions beyond what we can fix with drift correction. In the extreme case, it would correspond to neurons dying off or being silenced by the big physical movement. If you increase the batch size and it still does not help, then I think my hypothesis above might be true. In that case, the next best thing you could do is to split the data around such critical points, spike sort each segment separately, and use one of the packages that match neurons over days to do the matching, since that matching is likely to be highly nonlinear.
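The smoothing-before-argmax idea can be sketched on synthetic data as follows. This is a minimal illustration, not Kilosort's actual code: the array names, shapes, and sigma value are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical stand-in for the "Z" curves: ccorr[t, z] is the
# cross-correlation of batch t against the reference at vertical shift z.
rng = np.random.default_rng(0)
n_batches, n_shifts = 100, 21
shifts_um = np.linspace(-10, 10, n_shifts)
true_peak = 5  # index of the true (constant) drift
ccorr = rng.normal(0.0, 0.3, (n_batches, n_shifts))
ccorr[:, true_peak] += 1.0  # a consistent peak, buried in noise

# Raw estimate: taking the argmax per batch gives a jittery trace.
raw_drift = shifts_um[np.argmax(ccorr, axis=1)]

# Smoothing the correlation curves over time before taking the max acts
# like a prior that drift changes slowly between batches.
smoothed = gaussian_filter1d(ccorr, sigma=5.0, axis=0)
smooth_drift = shifts_um[np.argmax(smoothed, axis=1)]

print(f"raw std: {np.std(raw_drift):.2f} um, "
      f"smoothed std: {np.std(smooth_drift):.2f} um")
```

The key point is that the smoothing happens on the correlation curves themselves, not on the estimated drift trace, so a consistent peak survives while per-batch noise is averaged down.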
Let me get back to you with a better example. I agree with your point about the plot I shared in the first post, and I have better examples to demonstrate the issue I mentioned in the most recent post. But the short version is: I would definitely explore adjusting the smoothing if you can expose that to the user in some way.
@Selmaan I'm going to add the smoothness parameter Marius mentioned; it's not the one I mentioned above. In the meantime, if you want to try it with your data and see if it helps, you can change the values on line 139 in
The 0.5s are the sigmas for x, y, and time, respectively.
This parameter has been added in the latest version as
Great, thanks! I'm just back from a break, updated Kilosort, and am now seeing the options. Just one question: what are the units for these parameters?
Closed #686 as completed.
Smoothing is applied prior to generating those plots.
OK, I'll look into this more then, trying a couple of different settings... my expectation was that there would be less "spikiness" in the plots.
@Selmaan that is a very stable drift trace, and the apparent "noise" is just due to hitting the integer floor of the estimation process, which is about half a micron for Neuropixels. You won't have drift-related problems with this data (and you could even turn drift correction off). This is very different from the first example you posted.
Does that correspond to batches with no spikes, or very few spikes? Do you wait for the probe to settle at the beginning of the recording?
I'll check this on some of our high-drift data as well to be sure, but FYI the documentation for the parameter was incorrect. The first axis is for a correlation we're computing (we don't recommend changing this value), the second is for time (in units of batches), and the third is for the vertical axis on the probe (in units of registration blocks).
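Under that corrected ordering, the smoothing amounts to a 3-D Gaussian filter with one sigma per axis. A toy sketch (the array name, shape, and example sigma values here are assumptions for illustration, not Kilosort internals):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 3-D correlation array with the axis ordering described above:
#   axis 0: the correlation dimension (leave its sigma at the default),
#   axis 1: time, in units of batches,
#   axis 2: vertical probe position, in units of registration blocks.
corr = np.random.default_rng(1).random((41, 200, 6))

# e.g. smooth over ~2 batches in time and ~1 registration block in depth
sig_corr, sig_time, sig_blocks = 0.5, 2.0, 1.0
smoothed = gaussian_filter(corr, sigma=(sig_corr, sig_time, sig_blocks))
print(smoothed.shape)  # shape is unchanged: (41, 200, 6)
```

So the time sigma is expressed in batches (not seconds), which is why a larger batch size also changes the effective temporal extent of the smoothing.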
Ah, this might explain why it was behaving differently than I expected! I'll retry with the corrected parameters.
This is what I got when testing on a high-drift simulation, for different values of the second and third (time in batches, y in registration blocks) positions (smoothing_test.png: https://github.com/MouseLand/Kilosort/assets/35777376/0eb10111-b0e0-4b5d-aa08-9cb0f458c862). Let me know if anything is unclear. If you want additional details, the
I am a little unclear, actually. The registration-block smoothing makes complete sense to me, but the temporal smoothing by batch is a bit confusing: it looks here like the drift traces are more coarsely discretized? I expected the traces to look smoother and/or fluctuations to be damped as the temporal smoothing gets very high. So maybe I'm not getting what the parameter really does?
Okay, we did identify a bug related to this that we'll have to work on. Thanks again for your patience and for helping point that out.
No problem, thanks for working on this feature!
@marius10p I'm finding myself in a situation where it may be best for me to sort two experimental runs separately, since they were done on a low-channel-count probe (16 channels, 50 µm inter-contact distance) and separated by a 30-minute wait period (pre- and post-drug application). These multielectrode specs make drift correction inaccurate. Would you mind pointing me to one of the packages you mentioned that match neurons over days? I'm trying to brainstorm a good way to match the same neuron across the two recordings when it has drifted one channel above/below its original position in the baseline recording. It helps that in my recordings individual neurons generally drift no more than one channel (making knowledge of the channel index very helpful), though a quantitative approach would be best. Thanks!
Sure, here are two that I know of: https://github.com/EnnyvanBeest/UnitMatch
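As a rough complement to those packages, the ±1-channel drift constraint described above suggests correlating mean waveforms while allowing small vertical channel shifts. This is a hypothetical sketch of that idea; the helper and all names in it are mine, not part of UnitMatch or Kilosort:

```python
import numpy as np

def best_match(wf_a, wfs_b, max_shift=1):
    """Match one unit's mean waveform from recording A (channels x samples)
    against candidate units in recording B, allowing up to +/- max_shift
    channels of drift. Returns (unit index in B, channel shift, correlation).
    Hypothetical helper for illustration only."""
    n_chan = wf_a.shape[0]
    best = (-1, 0, -np.inf)
    for j, wf_b in enumerate(wfs_b):
        for s in range(-max_shift, max_shift + 1):
            # compare only the channel ranges that overlap after shifting
            a = wf_a[max(s, 0):n_chan + min(s, 0)]
            b = wf_b[max(-s, 0):n_chan + min(-s, 0)]
            r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
            if r > best[2]:
                best = (j, s, r)
    return best

# toy check: unit 0 in B is unit A drifted up by one channel
rng = np.random.default_rng(0)
wf_a = rng.normal(size=(8, 30))
wfs_b = [np.vstack([wf_a[1:], rng.normal(size=(1, 30))]),  # A drifted by 1
         rng.normal(size=(8, 30))]                          # unrelated unit
print(best_match(wf_a, wfs_b))
```

A correlation on raw waveforms like this is far cruder than the similarity metrics UnitMatch uses, but it can be a quick sanity check on which units survive the 30-minute gap.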
Describe the issue:
I am using Kilosort 4 with a 2-shank, 64-channel probe. Most of my recordings have essentially no drift and the algorithm has been working beautifully; thanks for the work put into this! However, I am concerned there is a bug in how depth is handled for multi-shank probes, which disrupts the drift correction module.
Below is the channel map displayed in the KS4 GUI. Note that the probe spans 0-330 µm in depth.
This is the spike position across the probe, colored by cluster, showing that spikes are well localized on the probe and match the channel map dimensions.
However, when I look at the spike map used in drift correction, I see that spike depths span twice what they should, i.e., from 0 to ~600 µm.
Furthermore, even though there is a clear single instability in the recording, corresponding to a shift of <20 µm, the drift correction algorithm produces a noisy and unstable estimate which vastly overcompensates, flickering between 0 and 60 µm of drift early in the recording.
(This is using a batch size of 180,000 samples with a 30,000 Hz sampling rate, to account for there being fewer spikes on a 64-channel probe than on a Neuropixels probe.)
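For reference, the batch-size choice above can be expressed via Kilosort4's settings dictionary. A hedged sketch: the key names (`fs`, `batch_size`) follow the Kilosort4 documentation as I understand it, so verify them against your installed version.

```python
# Sketch of the batch-size calculation described above.
fs = 30_000            # sampling rate (Hz), as reported in this issue
batch_seconds = 6      # longer than the ~2 s default, to collect enough
                       # spikes per batch on a 64-channel probe
settings = {
    "fs": fs,
    "batch_size": batch_seconds * fs,  # 180,000 samples per batch
}
print(settings["batch_size"])  # 180000
```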
Reproduce the bug:
No response
Error message:
No response
Version information:
Following the install instructions, using Python 3.9, pytorch-cuda 11.8, Kilosort v4.0.6.
Context for the issue:
No response
Experiment information:
No response