
Possible broadcasting bug in version 2.1.1 #36

Open
CaseyMcGrath opened this issue Nov 4, 2022 · 5 comments
@CaseyMcGrath

I noticed that I'm getting a new broadcasting error message which I think coincides with the recent version update to PTMCMCSampler. I wrote a very simple toy model problem with only three parameters, and I do not get the following error when using version 2.0.0. However, when using 2.1.1, the following bit of code:

injection = [2, 5, 0.5]
burnin = 100
it = 10000
sampler.sample(np.asarray(injection), it, burn=burnin, thin=1, SCAMweight=1, AMweight=1, DEweight=1)

Now returns the following error message:


ValueError Traceback (most recent call last)
in
2 it = 10000
3
----> 4 sampler.sample(np.asarray(injection), it, burn=burnin, thin=1, SCAMweight=1, AMweight=1, DEweight=1) #covUpdate=500,

/usr/local/lib/python3.7/dist-packages/PTMCMCSampler/PTMCMCSampler.py in _updateDEbuffer(self, iter, burn)
756
757 self._DEbuffer = shift_array(self._DEbuffer, -len(self._AMbuffer)) # shift DEbuffer to the left
--> 758 self._DEbuffer[-len(self._AMbuffer) :] = self._AMbuffer # add new samples to the new empty spaces
759
760 # SCAM jump

ValueError: could not broadcast input array from shape (1000,3) into shape (100,3)
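The shape mismatch in the traceback can be reproduced outside the sampler. This is a minimal sketch, not PTMCMCSampler's code: it assumes the DE buffer is sized by `burn` (here 100) while the AM buffer accumulates `covUpdate` samples (default 1000) before the update fires, so the assignment in `_updateDEbuffer` tries to put 1000 rows into a 100-row slice.

```python
import numpy as np

ndim = 3
DEbuffer = np.zeros((100, ndim))   # sized by burn=100 (assumption)
AMbuffer = np.zeros((1000, ndim))  # sized by covUpdate=1000 default (assumption)

try:
    # A negative start index beyond the array length clips to the whole
    # array, so the left side is a (100, 3) view; the (1000, 3) right
    # side cannot broadcast into it.
    DEbuffer[-len(AMbuffer):] = AMbuffer
except ValueError as e:
    print(e)
```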

@jellis18
Collaborator

jellis18 commented Nov 5, 2022

I am not up to speed on recent changes. @AaronDJohnson, @paulthebaker any help here?

@AaronDJohnson
Collaborator

It looks like my DEbuffer change is not behaving properly when modifying the burn kwarg. I'll investigate.

@AaronDJohnson
Collaborator

The issue is that normally burn > covUpdate, and I had assumed this would always be the case: burn only sets the size of _DEbuffer and when the neff calculations start happening. For such a small sampling problem, 100 may well be a large enough buffer size (DEbuffer would still be much larger than the autocorrelation length, which will surely be quite small here).

The question is what should be done in this scenario. There are two options that I think are viable: thin the extra samples from AMbuffer so that they fit in the smaller DEbuffer, or require burn to be larger than covUpdate.
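The first option can be sketched as follows. This is a hypothetical illustration, not the library's code: the function name `update_de_buffer` is invented, and `np.roll` stands in for the `shift_array` helper seen in the traceback. The idea is to thin the AM buffer down to at most the DE buffer's length before copying, so the assignment always fits.

```python
import numpy as np

def update_de_buffer(DEbuffer, AMbuffer):
    """Hypothetical sketch: thin AMbuffer to fit, then shift and fill."""
    n = len(DEbuffer)
    if len(AMbuffer) > n:
        # Keep every step-th sample so at most n rows remain.
        step = int(np.ceil(len(AMbuffer) / n))
        AMbuffer = AMbuffer[::step][-n:]
    DEbuffer = np.roll(DEbuffer, -len(AMbuffer), axis=0)  # shift left
    DEbuffer[-len(AMbuffer):] = AMbuffer                  # fill freed slots
    return DEbuffer
```

With burn=100 and covUpdate=1000 this keeps every 10th AM sample, at the cost of discarding information the second option would simply forbid losing.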

@paulthebaker what do you think?

@CaseyMcGrath
Author

Hi @AaronDJohnson, I see, thanks for checking this! As I said, I was testing a very simple toy model just to verify I had set everything up correctly and that the sampler was working, so I used very low values for the burn-in and number of iterations to make it finish quickly. In the real-world problem I'm building up to, these will very likely be much larger and will probably naturally meet the burn > covUpdate criterion.

That said, I'll leave it to you to decide how best to fix this small bug, but one option is simply to note in the docstring that burn > covUpdate is required, or to add a check for this condition and raise an error when it is not met (along the lines of your second suggestion).

@jellis18
Collaborator

jellis18 commented Nov 8, 2022

I'd say just add a check and raise a ValueError if burn < covUpdate in the __init__. @CaseyMcGrath, you can make a PR with this change if you want.
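A guard along those lines might look like the sketch below. The function name `check_buffer_sizes` and its placement are assumptions for illustration; the actual parameter names come from the `sample()` call shown above, not from the real `__init__` signature.

```python
def check_buffer_sizes(burn, covUpdate):
    """Hypothetical validation: reject configurations where the DE
    buffer (sized by burn) cannot hold a full AM buffer (sized by
    covUpdate) of new samples."""
    if burn < covUpdate:
        raise ValueError(
            f"burn ({burn}) must be >= covUpdate ({covUpdate}) so the "
            "DE buffer can hold a full AM buffer of samples"
        )
```

Failing fast at construction time turns the opaque broadcasting error into an actionable message for users of small toy problems like the one reported here.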
