
Add converter from Turing using both Chains and Model #133

Open · sethaxen wants to merge 36 commits into main

Conversation

sethaxen
Member

This PR is a working prototype of the Turing part of the proposal in #132. With it, we can compute the full InferenceData for the Turing example in the quickstart in a single call:

julia> idata = from_turing(
           turing_chns;
           model=param_mod,
           rng=rng,
           observed_data=(y=y,),
           dims=Dict("y" => ["school"], "σ" => ["school"], "θ" => ["school"]),
           coords=Dict("school" => schools),
       )
InferenceData with groups:
	> posterior
	> posterior_predictive
	> log_likelihood
	> sample_stats
	> prior
	> prior_predictive
	> sample_stats_prior
	> observed_data
	> constant_data
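For context, here is a minimal sketch of the kind of setup the call above assumes, loosely following the eight-schools example from the quickstart. The model body, data values, and sampler settings below are assumptions for illustration, not code from this PR; only the names referenced in the call above (y, σ, schools, param_mod, turing_chns, rng) are taken from it.

using Turing, Random

# Hypothetical eight-schools setup (assumed, loosely following the quickstart).
J = 8
y = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]          # observed school means
σ = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]         # known standard errors
schools = ["Choate", "Deerfield", "Phillips Andover", "Phillips Exeter",
           "Hotchkiss", "Lawrenceville", "St. Paul's", "Mt. Hermon"]

Turing.@model function school_model(J, σ, y)
    μ ~ Normal(0, 5)                       # population mean
    τ ~ truncated(Cauchy(0, 5), 0, Inf)    # population scale
    θ ~ filldist(Normal(μ, τ), J)          # per-school effects
    for i in 1:J
        y[i] ~ Normal(θ[i], σ[i])          # likelihood of observed means
    end
end

rng = Random.MersenneTwister(16)
param_mod = school_model(J, σ, y)
turing_chns = sample(rng, param_mod, NUTS(), 1_000)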

It's marked draft because:

  1. I'm not 100% convinced we should do this.
  2. The exact interface may change.
  3. I have not yet tested this on more complex models; do I make assumptions that may be invalid? (@torfjelde, do you see any issues there?)
  4. I need to add an option for the user to pass false for a group name if they don't want that group to be generated.
  5. It needs a test suite.

@torfjelde (Contributor) left a comment

This looks nice!

There are a couple of things that will break for certain models, but I provided some solutions that I'm pretty certain should work in essentially all cases :)

(Four inline review comments on src/turing.jl; all resolved and now outdated.)
@sethaxen
Member Author

Thanks, @torfjelde! I love that these changes both generalize and simplify the code quite a bit. Is it possible, given a DynamicPPL.Model, to determine which of its arguments are observed data (i.e. on the left-hand side of a tilde expression)? It'd be nice to extract that from the model instead of requiring the user to provide it.

@sethaxen marked this pull request as ready for review on May 20, 2021.
@torfjelde
Contributor

Is it possible, given a DynamicPPL.Model, to determine which of its arguments are observed data (i.e. on the left-hand side of a tilde expression)?

You could just call pointwise_loglikelihoods once, and then you have them as the keys :)

Actually, did you figure this out on your own? Looking at _compute_log_likelihood it looks like it!

@sethaxen
Member Author

You could just call pointwise_loglikelihoods once, and then you have them as the keys :)

Actually, did you figure this out on your own? Looking at _compute_log_likelihood it looks like it!

Oh, you know, I didn't realize that all I had to do was sample from the prior and then compute the log likelihood, and then I'd have the observed data names. It's a little awkward, but it works now! Thanks!
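For anyone following along, here is a rough sketch of the approach described above; the variable names are illustrative, not the PR's actual internals, and it assumes param_mod from the earlier example:

using Turing

# Sample parameters from the prior, then compute pointwise log likelihoods;
# the keys of the result are the observed variable names (e.g. "y[1]", "y[2]", ...).
prior_chns = sample(param_mod, Prior(), 100)
loglik = Turing.pointwise_loglikelihoods(param_mod, prior_chns)
observed_names = collect(keys(loglik))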
