Adjoint method to find the gradient of the Laplace approximation/mode #343
The rewrites in optimistix might be helpful here for understanding what is going on; also see the docs.
Some notes on how they use this in Stan.
An example of this for the fixed-point optimiser in JAX.
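To make the fixed-point reference concrete, here is a minimal sketch of adjoint (implicit) differentiation of a fixed point in JAX, in the style of the `fixed_point` example from the JAX autodiff cookbook. The names `fixed_point`, `f`, and the tolerances are illustrative, not from optimistix or any specific library; it assumes `f(params, x)` is a contraction near its fixed point.

```python
from functools import partial

import jax
import jax.numpy as jnp


def _iterate(step, x0, tol=1e-6, max_iter=100):
    # Run x <- step(x) until the update is small (or max_iter is hit).
    def cond(carry):
        i, x_prev, x = carry
        return (jnp.max(jnp.abs(x - x_prev)) > tol) & (i < max_iter)

    def body(carry):
        i, _, x = carry
        return i + 1, x, step(x)

    _, _, x_star = jax.lax.while_loop(cond, body, (0, x0, step(x0)))
    return x_star


@partial(jax.custom_vjp, nondiff_argnums=(0,))
def fixed_point(f, params, x0):
    # Forward solve: iterate x <- f(params, x) to the fixed point x*.
    return _iterate(lambda x: f(params, x), x0)


def fixed_point_fwd(f, params, x0):
    x_star = fixed_point(f, params, x0)
    return x_star, (params, x_star)


def fixed_point_bwd(f, res, v):
    # Adjoint solve: w satisfies w = v + (df/dx)^T w at x*, i.e.
    # (I - df/dx)^T w = v, itself computed by fixed-point iteration.
    params, x_star = res
    _, vjp_x = jax.vjp(lambda x: f(params, x), x_star)
    w = _iterate(lambda w: v + vjp_x(w)[0], v)
    # Pull the adjoint back through the params argument of f.
    _, vjp_p = jax.vjp(lambda p: f(p, x_star), params)
    return vjp_p(w)[0], jnp.zeros_like(x_star)  # no gradient w.r.t. x0


fixed_point.defvjp(fixed_point_fwd, fixed_point_bwd)
```

For example, with the Babylonian map `f(p, x) = 0.5 * (x + p / x)` the fixed point is `sqrt(p)`, and `jax.grad` of `fixed_point` with respect to `p` recovers `1 / (2 * sqrt(p))` without differentiating through the loop. The key point is that the backward pass costs one extra linear (adjoint) solve, independent of how many forward iterations were needed.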
This is part of INLA roadmap #340.
From the Stan paper:
I think the JAX implementation uses the tensor of derivatives, but I'm not 100% sure.
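For reference, the adjoint idea from the Stan approach can be sketched via the implicit function theorem (this is my summary, not a quote from the paper). If the mode $\hat\theta(\phi)$ of the Laplace approximation satisfies the stationarity condition $\nabla_\theta \ell(\hat\theta, \phi) = 0$, then differentiating through the mode gives

```latex
% Implicit differentiation of the mode \hat\theta(\phi):
% stationarity  \nabla_\theta \ell(\hat\theta(\phi), \phi) = 0
% implies, with H = \nabla^2_\theta \ell evaluated at \hat\theta,
\frac{d\hat\theta}{d\phi} = -H^{-1} \, \frac{\partial^2 \ell}{\partial\theta\,\partial\phi}.
% Adjoint form: for a scalar objective g(\hat\theta), solve H\lambda = \nabla_\theta g once, then
\nabla_\phi \, g(\hat\theta) = -\left(\frac{\partial^2 \ell}{\partial\phi\,\partial\theta}\right) \lambda .
```

The adjoint form avoids materialising the full Jacobian $d\hat\theta/d\phi$: one linear solve with the Hessian $H$ replaces a solve per component of $\phi$.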