Replies: 4 comments
-
I reran this with more draws and 2 chains and got similar results.
-
Hi @hyang336, as an initial reaction to this: could you be a bit more specific about the parameters that you used to generate your simulated data here? (I assume the fit is to simulated data?) One thing that strikes me as off is the fact that the

Also (this is separate from the validity of my statements above): I don't think I fully understood how the racing particles utilize the confidence signals and how that maps onto the choice of GLM in your case.

Best,
-
Sidenote: this post might be better hosted under Discussions.
-
Hi Alex, thanks for your reply. I first chose some intercept and slope values for the GLM, and I also randomly generated 2000 trials of neural (confidence) data.
Then I used these to generate drift rates for the 4 particles, 2000 trials each, according to the GLM I mentioned:
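A minimal sketch of what these two steps might look like; the actual coefficient values aren't given above, so the numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 2000

# hypothetical confidence covariate, kept away from 0 and 1 so the logs stay finite
conf = rng.uniform(0.05, 0.95, size=n_trials)

# illustrative ground-truth coefficients (intercept, a, b) for each of the 4 particles;
# v is proportional to conf**a * (1 - conf)**b, which peaks at conf = a / (a + b)
coefs = np.array([
    [0.0, 0.0, 1.5],  # particle 1: monotonically decreasing in conf
    [0.0, 0.5, 1.0],  # particle 2: peaks at conf = 1/3
    [0.0, 1.0, 0.5],  # particle 3: peaks at conf = 2/3
    [0.0, 1.5, 0.0],  # particle 4: monotonically increasing in conf
])

# v = exp(intercept + a*log(conf) + b*log(1 - conf)), one column per particle
design = np.column_stack([np.ones(n_trials), np.log(conf), np.log(1 - conf)])
v = np.exp(design @ coefs.T)  # shape (n_trials, 4)
```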
I also removed trials where any of the v's fell outside [0, 2.5]. Then I used these v's and some other prespecified parameters to generate simulated data:
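Continuing the sketch above, the filtering and simulation step might look like this. The model string and the a and t values are assumptions (only z = 0 is stated in the text), and I'm assuming hssm.simulate_data accepts a row-per-trial theta matrix whose columns follow the model's parameter order:

```python
import numpy as np
import hssm

# v is the (n_trials, 4) drift-rate matrix from the previous sketch;
# keep only trials where every particle's drift rate lies inside [0, 2.5]
keep = (v > 0).all(axis=1) & (v < 2.5).all(axis=1)
v_kept = v[keep]

# trial-wise parameters [v0, v1, v2, v3, a, z, t]; a = 1.5 and t = 0.3 are
# placeholders, while z = 0 (second value in the repeated block) matches the
# stated ground truth
fixed = np.repeat([[1.5, 0.0, 0.3]], v_kept.shape[0], axis=0)
theta = np.column_stack([v_kept, fixed])

sim_data = hssm.simulate_data(model="race_no_bias_4", theta=theta, size=1)
```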
As you can see, the ground-truth z was set to 0 (the second value in the np.repeat() above), so in a sense that parameter was recovered correctly. I didn't fix z in the model-estimation step, but I did fix a to its ground-truth value. Do you think fixing z instead would lead to better performance?

I want particle one to reflect evidence of low confidence, particle two to reflect evidence of slightly higher confidence, and so on. Assuming we only have a 1-d array representing the confidence covariate across trials, particle one's drift rate can be modeled as having a monotonically decreasing relationship with the covariate (i.e., a negative slope), while for particle two the relationship needs to be nonmonotonic, because particle two accumulates evidence fastest when the covariate is not at its lowest value but at a slightly higher level. And I want one type of GLM (i.e., one set of parameters) to accommodate these complex relationships. Does this make sense? (A quick illustration follows below.)

PS. Is there a way to move this whole thread to Discussions?
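To make the shape argument concrete: with v = exp(c + a*log(x) + b*log(1 - x)) = exp(c) * x^a * (1 - x)^b, the drift rate is monotonically decreasing for a = 0, b > 0, monotonically increasing for a > 0, b = 0, and unimodal with a peak at x = a/(a + b) otherwise, so a single parametric family can cover all four particles. A small sketch with made-up values:

```python
import numpy as np

def drift(x, c, a, b):
    # v = exp(c + a*log(x) + b*log(1 - x)) = exp(c) * x**a * (1 - x)**b
    return np.exp(c + a * np.log(x) + b * np.log(1 - x))

x = np.linspace(0.05, 0.95, 5)
print(drift(x, 0.0, 0.0, 1.5))  # decreasing: low-confidence particle
print(drift(x, 0.0, 0.5, 1.0))  # nonmonotonic, peak at x = 1/3: next particle up
```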
-
Describe the bug
First of all, I am not sure if this is a bug or an intrinsic limitation of generalized linear models, PyMC, Bambi, or HSSM. Here is what I am trying to do:
I want to model a scenario in which several competing (racing) accumulators have drift rates that depend on a 1-d signal (e.g., confidence), such that the accumulators' drift rates reach their maxima at different values of the signal. For example, one accumulator may represent a confidence rating of 1 on a 5-point scale and thus should have its maximum drift rate when the 1-d confidence signal is at its lowest value, whereas another accumulator, representing 2 on the 5-point scale, will attain its maximum drift rate when the confidence signal is at an intermediate value.
To achieve this, I need a generalized linear model where the independent and dependent variables can have a monotonic or nonmonotonic relationship. Inspired by the beta distribution, I came up with the following model using one of the race_4 models in HSSM. For each accumulator, its drift rate v is a function of the sum of the log transform of the covariate and the log transform of one minus the covariate (the covariate takes values between 0 and 1), with an exponential link function. In other words, v = exp(intercept + a*log(cov) + b*log(1 - cov)). This is basically the log of the beta density with some terms rearranged.
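For reference, a sketch of how such a GLM might be specified in HSSM; the model string, column names, and stand-in data below are all assumptions, not the code from the stripped attachment:

```python
import numpy as np
import pandas as pd
import hssm

# stand-in data frame; in the real analysis rt/response come from the simulator
rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    "rt": rng.uniform(0.3, 2.0, n),
    "response": rng.integers(0, 4, n),
    "conf": rng.uniform(0.05, 0.95, n),  # hypothetical confidence covariate in (0, 1)
})

# precompute the two regressors so each formula stays linear in its terms
data["log_conf"] = np.log(data["conf"])
data["log_1m_conf"] = np.log(1 - data["conf"])

model = hssm.HSSM(
    data=data,
    model="race_no_bias_4",
    include=[
        {
            "name": "v0",
            "formula": "v0 ~ 1 + log_conf + log_1m_conf",
            "link": "log",  # inverse link is exp, i.e. v0 = exp(eta)
        },
        # ... analogous entries for v1, v2, v3
    ],
)
```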
Having selected a, b, and the intercept, I generated some v's and then some data from those v's. But when I attempted parameter recovery, even though the chains appeared to converge, the recovered parameters were way off. I suspect this may have something to do with the fact that the two independent variables are not really independent. If that is the case, is there an alternative approach to achieve what I intended?
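One quick way to probe that suspicion is to check how correlated the two regressors are over the sampled covariate range; a minimal sketch with a made-up covariate:

```python
import numpy as np

rng = np.random.default_rng(0)
conf = rng.uniform(0.05, 0.95, size=2000)  # hypothetical confidence covariate
x1, x2 = np.log(conf), np.log(1 - conf)

# a correlation near -1 would make the intercept, a, and b hard to identify jointly
print(np.corrcoef(x1, x2)[0, 1])
```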
See attached code and figure for more details.
HSSM version
0.1.5
To Reproduce
Screenshots
Additional context
This model was run on a Linux cluster with its own JAX installation (for bug-fix reasons).