Add parallel constraint/variable check to report_numerical_issues
#1385
Conversation
Not sure why all Andrew's commits from #1375 are showing up here...
Looks good. Thanks for doing this @Robbybp
Looks like this is breaking tests with new numerical warnings. I'll have to look into this.
@Robbybp Taking a quick look at the CSTR test, it is because the toolbox is now detecting near-parallel components (the outlet temperature and reaction rate in the case of the CSTR). I suspect some of these would fall under the category of "expected" (given that some of the test models are very simple). I did a quick check of setting the default tolerance for near-parallel components to 1e-6, which brought the number of failures down to around 14.
Tightening the tolerance to 1e-8 seems to help even more. This is backwards-incompatible, but I'm fine with tightening the tolerance here, as I think 1e-8 as a "distance from singularity" tolerance is more consistent with our other tolerances for large/small coefficients and residuals. |
@Robbybp That sounds fine to me too - I am not sure where the value of 1e-4 came from anyway (it was probably arbitrary). I don't think backward compatibility is a huge issue either, but we will need to update the relevant test cases to be "more parallel" (which should just mean changing the coefficient in the duplicate constraint).
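For reference, the near-parallel test being discussed can be sketched as a cosine-style comparison of constraint coefficient vectors. This is a simplified sketch under my own naming (`parallel_distance` is not the toolbox's actual function), but it illustrates why a duplicated constraint with a slightly perturbed coefficient lands far inside even the tightened 1e-8 tolerance:

```python
import numpy as np

def parallel_distance(u, v):
    """Return 1 - |cos(theta)| for coefficient vectors u and v.

    Values near 0 mean the vectors are nearly parallel."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two rows of a duplicated constraint differing in one coefficient,
# loosely modeled on the test constraints below.
row1 = [1.0, -1.0, 0.99999]
row2 = [1.0, -1.0, 1.00001]

d = parallel_distance(row1, row2)
print(d < 1e-4)  # flagged under the old default tolerance: True
print(d < 1e-8)  # still flagged under the tightened tolerance: True
```

The distance here is on the order of 1e-11, so tightening the tolerance from 1e-4 to 1e-8 still catches genuinely duplicated constraints while filtering out components that are only loosely aligned.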
- model.c1 = Constraint(expr=model.v1 == model.v2 - 0.99999 * model.v4)
- model.c2 = Constraint(expr=model.v1 + 1.00001 * model.v4 == 1e-8 * model.v3)
+ model.c1 = Constraint(expr=1e-8 * model.v1 == 1e-8 * model.v2 - 1e-8 * model.v4)
+ model.c2 = Constraint(expr=1e-8 * model.v1 + 1e-8 * model.v4 == model.v3)
  model.c3 = Constraint(
-     expr=1e8 * (model.v1 + model.v4) + 1e10 * model.v2 == 1e-6 * model.v3
+     expr=1e3 * (model.v1 + model.v4) + 1e3 * model.v2 == model.v3
  )
This test was pretty fragile. The coefficients on v3 are small relative to those on the other variables, so its dot product is vacuously parallel with anything. This didn't show up previously because we were relying on the absolute tolerance of 1e-4 to give it an empty sparsity pattern. Maybe this raises the question of what to do when one of the vectors we are testing is zero. I've removed the coefficients on v3 so that this isn't an issue for now.
Also, the 1e-8 ratio between small and large coefficients for v1 and v2 leads to relative "distances from parallel" that are very close to our new tolerance of 1e-8, so I've adjusted some coefficients to give us a little more buffer.
This is really because the relative tolerance should be comparing diff <= tolerance * unorm * vnorm, not diff <= tolerance * max(unorm, vnorm). The absolute tolerance can probably be phased out entirely.
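A toy comparison of the two denominators (a sketch, not the toolbox's implementation; `nearly_parallel` is my own name) shows why the `max(unorm, vnorm)` form is fragile: for a fixed angle, diff shrinks quadratically as both vectors are scaled down, while `tol * max(unorm, vnorm)` shrinks only linearly, so the verdict flips with scale. The `unorm * vnorm` form is scale-invariant:

```python
import numpy as np

def nearly_parallel(u, v, tol, denom):
    """diff measures departure from parallelism via the inner product."""
    unorm, vnorm = np.linalg.norm(u), np.linalg.norm(v)
    diff = unorm * vnorm - abs(np.dot(u, v))
    if denom == "max":
        return diff <= tol * max(unorm, vnorm)
    return diff <= tol * unorm * vnorm  # scale-invariant form

u0 = np.array([1.0, 0.0])
v0 = np.array([1.0, 1e-3])  # small but genuine angle between u0 and v0
for s in (1.0, 1e-4):
    u, v = s * u0, s * v0
    # "max" form flips to True at small scale; "prod" form stays False.
    print(s, nearly_parallel(u, v, 1e-8, "max"),
          nearly_parallel(u, v, 1e-8, "prod"))
```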
Maybe unorm * vnorm is the right denominator for the relative tolerance. I would still like to have an absolute tolerance, though, as I wouldn't want (1e-9, 2e-9) to be considered parallel to (1, 2). (Unless we want to consider a zero vector as "parallel to everything", which I think is debatable.)
I support filtering out rows/columns with norms that are effectively zero, because otherwise they'll just spam the log as being parallel to every other row. No need to use an absolute tolerance for the inner product, though; we can just choose to skip displaying the row/column if one of the norms is too small.
In the diagnostics toolbox, the tolerance for this filtering should be CONFIG.jacobian_small_value_warning, the same tolerance as used in display_variables_with_extreme_jacobians and display_constraints_with_extreme_jacobians to display a warning message. That way it's displayed (once) to the user there.
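That filtering step could look something like the following sketch (my own code, not the toolbox's; `small_tol` stands in for the `jacobian_small_value_warning` config value mentioned above):

```python
import numpy as np

def candidate_rows(jac, small_tol=1e-8):
    """Indices of Jacobian rows worth comparing for parallelism.

    Rows whose norm is effectively zero are skipped up front, instead of
    being reported as "parallel" to every other row in the log."""
    norms = np.linalg.norm(jac, axis=1)
    return [i for i, n in enumerate(norms) if n > small_tol]

jac = np.array([
    [1.0, -1.0, 0.0],
    [1e-12, 2e-12, 0.0],  # effectively zero: filtered out, not compared
    [2.0, -2.0, 0.0],
])
print(candidate_rows(jac))  # [0, 2]
```

The near-zero row would then be surfaced once through the extreme-Jacobian warning rather than repeatedly through the parallel-component report.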
- model.c1 = Constraint(expr=model.v1 == model.v2)
- model.c2 = Constraint(expr=model.v1 == 1e-8 * model.v3)
+ model.c1 = Constraint(expr=1e-2 * model.v1 == model.v2)
+ model.c2 = Constraint(expr=1e-2 * model.v1 == 1e-8 * model.v3)
  model.c3 = Constraint(expr=1e8 * model.v1 + 1e10 * model.v2 == 1e-6 * model.v3)
Similarly, I updated the coefficients on v1 to give some buffer around the relative tolerance. The small coefficients on v3 are also causing it to appear as "parallel to" the other variables, which again raises the question of how to handle zero vectors.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1385      +/-   ##
==========================================
- Coverage   77.63%   77.62%   -0.01%
==========================================
  Files         391      391
  Lines       64375    64387      +12
  Branches    14257    14260       +3
==========================================
+ Hits        49978    49982       +4
- Misses      11827    11834       +7
- Partials     2570     2571       +1

☔ View full report in Codecov by Sentry.
If we want to use a (in my opinion) better method for calculating parallel constraints, #1395 needs to be merged before this.
Fixes #1380
Summary/Motivation:
These checks are fast and precise, so they should go in the general high-level reporting method.
Changes proposed in this PR:
Add parallel constraint/variable checks to report_numerical_issues
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: