Roadmap
Add examples for the following cases:
- Offsets #418.
- Potentials #384. Don't forget to explain what a potential is and where the name comes from (see the short sketch after this list).
- Beta family #369.
- Categorical family #436.
- Posterior predictive check #252. I think we already use it in some places, but it is not advertised enough.
- Piecewise regression #437.
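Since the potentials item asks for an explanation, here is a minimal PyMC sketch of what such an example could show (the penalty and the variable names are made up for illustration). `pm.Potential` adds an arbitrary term to the model's joint log-probability; the name is usually traced back to the potential (energy) functions of Markov random fields.

```python
import pymc3 as pm

with pm.Model() as model:
    x = pm.Normal("x", mu=0, sigma=1)
    # pm.Potential adds an arbitrary term to the joint log-probability.
    # Here it softly penalizes negative values of x, tilting the
    # posterior toward positive values without a hard constraint.
    pm.Potential("positivity", pm.math.switch(x < 0, -10 * x**2, 0))
    trace = pm.sample()
```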
The Bayes Rules! book is a good candidate for Bambi: it uses rstanarm quite a lot, and we have (almost?) all the features needed to reproduce the book.
This formula object will allow us to pass multiple formulas for different components of the model. We take the term distributional model from brms (this vignette), which says:
> [...] refer to a model, in which we can specify predictor terms for all parameters of the assumed response distribution.
In other words, we not only have predictors for the mean, but also predictors for other parameters such as the dispersion. For example, this enables heteroskedastic linear regression models.
Support for distributional models, i.e. the type of models just mentioned above.
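To make the heteroskedastic case concrete, here is a minimal PyMC sketch of such a model with simulated data (all names and numbers are made up for illustration):

```python
import numpy as np
import pymc3 as pm

# Simulated data where the noise grows with x (purely illustrative).
rng = np.random.default_rng(1234)
x = rng.uniform(0, 10, size=200)
y = 2 + 0.5 * x + rng.normal(scale=np.exp(-1 + 0.2 * x))

with pm.Model() as model:
    # Linear predictor for the mean.
    b0 = pm.Normal("b0", mu=0, sigma=10)
    b1 = pm.Normal("b1", mu=0, sigma=10)
    mu = b0 + b1 * x

    # A second linear predictor for the standard deviation,
    # passed through exp() to keep it positive.
    g0 = pm.Normal("g0", mu=0, sigma=10)
    g1 = pm.Normal("g1", mu=0, sigma=10)
    sigma = pm.math.exp(g0 + g1 * x)

    pm.Normal("y", mu=mu, sigma=sigma, observed=y)
    trace = pm.sample()
```

A multi-formula interface would essentially build this graph from a pair of formulas, one for the mean and one for the standard deviation.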
It would be really nice if we could move forward with what is in #365; at some point we're going to face it. For example, if we allow distributional models and the same predictor appears in the formulas for both the mean and the standard deviation, how do we let users specify a different prior for each case? One possible shape for this is sketched below.
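A purely hypothetical sketch of the question, assuming priors keep being passed as a dictionary. Only `bmb.Prior` is existing API; the nested `"sigma"` entry is invented here to illustrate the problem, not a proposal that exists anywhere.

```python
import bambi as bmb

# Hypothetical: nest per-term priors under the name of the response
# parameter they belong to. The "sigma" nesting is made up.
priors = {
    "x": bmb.Prior("Normal", mu=0, sigma=1),        # x in the mean formula
    "sigma": {
        "x": bmb.Prior("Normal", mu=0, sigma=0.1),  # x in the sigma formula
    },
}
```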
At the moment, both numerical and categorical predictors go into the same design matrix X and contribute to the model via pm.math.dot(X, beta). It would be more efficient to split this into a regular dot product for the numerical variables and a sparse multiplication for the categorical predictors, whose dummy-encoded columns are mostly zeros. There's still some research to do, though. The same idea applies to group-specific effects; highly sparse settings are even more likely there.
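A rough NumPy/SciPy illustration of the splitting idea with simulated data; inside a PyMC model the sparse product would go through the backend's sparse operations instead (e.g. theano.sparse):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(42)
n = 1000

# Dense block: three numerical predictors.
X_num = rng.normal(size=(n, 3))

# Sparse block: one dummy-encoded categorical predictor with 50 levels.
# Each row contains a single 1, so the matrix is almost entirely zeros.
levels = rng.integers(0, 50, size=n)
X_cat = sparse.csr_matrix(
    (np.ones(n), (np.arange(n), levels)), shape=(n, 50)
)

beta_num = rng.normal(size=3)
beta_cat = rng.normal(size=50)

# Instead of one big dense dot product over the concatenated matrix,
# combine a dense product for the numerical block with a sparse one
# for the categorical block.
eta = X_num @ beta_num + X_cat @ beta_cat
```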
Once we have a stable PyMC 4.0 release, we should move Bambi to work with that version.
Bayesian additive regression trees (BART) are often praised in the literature as a model that "generally works" with almost no tuning needed from the user (the comparison is often made against GPs), so it may be good to have an option in Bambi to run them. This should be done after we move Bambi to PyMC 4.x. Since BART offers a heuristic for variable importance, it would be nice to see how it behaves compared to projpred (projection predictive variable selection for generalized linear models).
- Revisit and expand tests in general
- Decrease our dependency on statsmodels
- Consolidate ArviZ integration
- Document new functionality
- Support new functionality (including LOO-related diagnostics)
- Add/improve examples (this is always an ongoing effort, but we can say this has been achieved compared to the previous stage)
- Revisit default priors, see #230
- Work on porting code from books:
  - Regression and Other Stories
  - Statistical Rethinking
- INLA support
- Allow "R-side" covariance structures and covariance priors in general (for varying effects too) #110
- Bambi fails when p > n #278
- Add example of posterior predictive sampling (and/or check) #252
- Add example of prior predictive sampling (and/or check) #251 (a minimal sketch of both checks follows below)
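For reference, the bare PyMC workflow these two examples would demonstrate looks roughly like this (simulated data; PyMC 3.x imports):

```python
import numpy as np
import pymc3 as pm
import arviz as az

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(scale=0.5, size=100)

with pm.Model() as model:
    b0 = pm.Normal("b0", mu=0, sigma=5)
    b1 = pm.Normal("b1", mu=0, sigma=5)
    sigma = pm.HalfNormal("sigma", sigma=1)
    pm.Normal("y", mu=b0 + b1 * x, sigma=sigma, observed=y)

    # Prior predictive check (#251): simulate data from the priors alone.
    prior_pred = pm.sample_prior_predictive()

    # Posterior predictive check (#252): simulate data after conditioning.
    trace = pm.sample()
    post_pred = pm.sample_posterior_predictive(trace)

    idata = az.from_pymc3(trace, prior=prior_pred,
                          posterior_predictive=post_pred)

az.plot_ppc(idata)                 # posterior predictive check
az.plot_ppc(idata, group="prior")  # prior predictive check
```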