Expose `n_bins` argument to `cebra_sklearn_helpers.align_embeddings` instead of fixing default value internally #24
Comments
Thanks for reporting -- as a quick check, can you avoid the error by lowering the number of bins?
Hi, thanks for the quick reply! I was planning to give it a go, but I have to double-check how to change a function within a package. Should I just edit it in the main directory and then run `python setup.py install`?
Cheers,
Giuseppe
Easiest is to clone the repo and do a local install with
pip install -e .
We might consider exposing the number of bins in the API in the future -- thanks for catching this!
Ok, I’ll let you know how that goes!
PR #25 now contains a suggestion -- let me know if that fixes your issue.
Changing the number of bins works, thanks! Could you comment on how to choose the appropriate number?
Reading through the consistency score demo it says "Correlation matrices depict the after fitting a linear model between behavior-aligned embeddings of two animals, one as the target one as the source (mean, n=10 runs)", but I don't see any shuffling / subsampling procedure in the code -- is it so?
Cheers
@drsax93 ,
An appropriate number of bins would be one that you could also use to plot a histogram of your data: there should be no empty bins (this is what caused your original error), but there should not be too few either (in the extreme case, a single bin would make the consistency always 100%). So ideally, for best results, find the largest number of bins that avoids the issue you saw above.
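A quick way to follow this advice is to search downward for the largest bin count whose histogram of the labels contains no empty bin. This is a NumPy-only sketch, not a CEBRA helper; `largest_nonempty_binning` is a hypothetical name introduced here for illustration:

```python
import numpy as np

def largest_nonempty_binning(labels, max_bins=200):
    """Return the largest number of bins such that a histogram of
    `labels` has no empty bins (hypothetical helper, not part of CEBRA)."""
    for n_bins in range(max_bins, 1, -1):
        counts, _ = np.histogram(labels, bins=n_bins)
        if np.all(counts > 0):
            return n_bins
    return 1  # a single bin is always non-empty

# Uniformly distributed position labels support a fine binning:
rng = np.random.default_rng(0)
position = rng.uniform(0.0, 1.0, size=5000)
print(largest_nonempty_binning(position))

# Only two distinct label values force a coarse binning:
print(largest_nonempty_binning(np.array([0.0, 1.0]), max_bins=10))  # → 2
```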
The runs are with respect to fitting 10 independent CEBRA models. This is something you have to provide as an input to that function, i.e. you would fit 10 models (in the simplest case, just running through a for loop), compute the embeddings, and pass the results to the function. Does that make sense?
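The loop described above can be sketched as follows. The CEBRA fitting calls are only indicated in a comment (the constructor arguments there are illustrative, not a confirmed API), and the pairwise consistency is computed here as the R² of a least-squares linear map between two embeddings, in the spirit of fitting a linear model between behavior-aligned embeddings; real code would pass the list of embeddings to CEBRA's consistency-scoring helper instead:

```python
import numpy as np

rng = np.random.default_rng(42)

# In practice each "run" is an independently fitted CEBRA model, roughly:
#   model = cebra.CEBRA(max_iterations=5000)      # illustrative arguments
#   embeddings.append(model.fit_transform(neural_data, position))
# Here we fake 10 runs as rotated, noisy copies of one shared latent.
latent = rng.normal(size=(1000, 3))
embeddings = []
for _ in range(10):
    rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # well-conditioned map
    embeddings.append(latent @ rotation + 0.1 * rng.normal(size=(1000, 3)))

def linear_consistency(src, tgt):
    """R^2 of a least-squares linear map predicting tgt from src."""
    coef, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    residual = tgt - src @ coef
    return 1.0 - residual.var() / tgt.var()

# All ordered pairs of runs, as in an n=10 consistency matrix:
scores = [linear_consistency(embeddings[i], embeddings[j])
          for i in range(10) for j in range(10) if i != j]
print(round(float(np.mean(scores)), 3))  # high, since all runs share one latent
```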
Yes, thanks!
I had interpreted the text as suggesting that the score was obtained on subsamples, but the way you explained it just now makes sense.
Is there an existing issue for this?
Bug description
Hello,
I am trying to compute the consistency score across different embeddings from hippocampal population activity that have been obtained using 2d tracking position as the auxiliary variable.
To compute the consistency score I have tried to use as labels either the linearised 2d position or another discrete labelling, but I get an error in `cebra_sklearn_helpers.align_embeddings` when quantising the embeddings with the new labels. I believe it might be due to the high number of bins (`n_bins`) used within the `_coarse_to_fine()` function. What do you think the issue may be?
Operating System
operating system: Ubuntu 20.04
CEBRA version
cebra version 0.2.0
Device type
gpu
Steps To Reproduce
Here is a snippet of the code
Relevant log output
Anything else?
The problematic `bin_index` varies depending on the discretisation of the position / labels.
Code of Conduct