Brainstorm about speeding up TJLF #4
Ah, okay, this is the stuff I don't know anything about. Each call to
If we can preallocate the workspace and call
The width depends on the type of mode. Above a certain
@bclyons12 there could certainly be some allocation savings if we preallocated the workspace. As you know, we just need to be careful, if we eventually want to multi-thread the calculation per k_y, that each thread uses its own work matrix.
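The per-thread workspace idea can be sketched as follows. This is an illustrative Python sketch, not TJLF's actual API (TJLF is a Julia code): `eigensolve_with_workspace` is a hypothetical name, and SciPy's `overwrite_a` stands in for whatever in-place LAPACK path the real implementation would use. The point is that each thread owns one reusable scratch matrix, so concurrent eigensolves per k_y never share or reallocate workspace.

```python
import threading

import numpy as np
from scipy.linalg import eig

# One preallocated work matrix per thread, so concurrent eigensolves
# (e.g. one per k_y) never share scratch memory.
_tls = threading.local()


def eigensolve_with_workspace(matrix):
    """Copy `matrix` into this thread's preallocated workspace and
    solve in place, avoiding a fresh allocation on every call."""
    work = getattr(_tls, "work", None)
    if work is None or work.shape != matrix.shape:
        # First call on this thread (or size changed): allocate once.
        _tls.work = work = np.empty_like(matrix)
    np.copyto(work, matrix)
    # overwrite_a=True lets LAPACK destroy the workspace copy in place
    # instead of making yet another internal copy.
    return eig(work, overwrite_a=True)
```

The same pattern in Julia would hold one work matrix per thread (e.g. indexed by `Threads.threadid()` or passed explicitly per task) and copy into it before each eigensolve.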
@tomneiser should we expect those widths to change significantly between iterations of the FluxMatcher? If not, we could happily pay the price of calculating the optimal width once, and then use those same widths for all subsequent FluxMatcher iterations.
I think this is a good idea. The widths don't change every iteration; we could re-evaluate them every ~30 iterations or so.
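The "recompute only every N iterations" scheme could be as simple as the following sketch (a hypothetical helper, assuming `compute_widths` stands in for the expensive eigensolve-based width scan; nothing here is TJLF's real interface):

```python
class WidthCache:
    """Recompute mode widths only every `refresh_every` FluxMatcher
    iterations and reuse the cached widths in between."""

    def __init__(self, compute_widths, refresh_every=30):
        self.compute_widths = compute_widths  # expensive width scan
        self.refresh_every = refresh_every
        self._iteration = 0
        self._widths = None

    def get(self, *args):
        # Refresh on the first call and then every `refresh_every` iterations.
        if self._widths is None or self._iteration % self.refresh_every == 0:
            self._widths = self.compute_widths(*args)
        self._iteration += 1
        return self._widths
```

With `refresh_every=30`, a run of 300 FluxMatcher iterations would pay for the full width scan only ~10 times instead of 300.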
For typical use cases each eigensolve tends to be fast (its cost grows with the number of species and the number of basis functions). The main issue is that TJLF does a lot of eigensolves (order ~200)!
Many of the eigensolves are used to find the width of the modes. Could such widths be determined for only a few ky's rather than for every one of them? For example, when the width is entered by the user, the same width is used for all ky's. Perhaps we could calculate only a few widths (e.g. at the lowest and highest ky) and interpolate between them?