I see no use in the distinction between tests that "crashed" and tests that "failed". I only see unnecessary complexity. Am I wrong?
In particular, this comes up in relation to the "expected to fail" feature for tests. I notice that the documentation says "This option has no effect on tests which are not implemented or which have crashed." Why would this feature affect failures but not crashes? (For that matter, why would it not affect not-implemented tests? If the test is not implemented, don't mark it as expected to fail! But I digress....)
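To pin down the asymmetry, here is a minimal sketch of the documented behavior, in Python rather than Dylan, with names of my own invention. (Whether Testworks records an expected failure as a pass or as its own status, I'm not sure; the point is only which results the option touches.)

```python
def apply_expected_to_fail(status):
    # Statuses here are "passed", "failed", "crashed", "not implemented".
    # Per the docs, the option "has no effect on tests which are not
    # implemented or which have crashed", so only "failed" is remapped.
    if status == "failed":
        return "passed"        # the failure was the expected outcome
    return status              # crashes and unimplemented tests pass through

print(apply_expected_to_fail("crashed"))   # -> "crashed", unaffected
```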
In the summary output, failures and crashes are listed separately, and not next to each other, leaving it up to the user to do the addition, a minor annoyance:
```
testworks-run FAILED in 0.029578 seconds:
Ran 7 suites: 6 passed (85.714288%), 1 failed, 0 skipped, 0 not implemented, 0 crashed
Ran 45 tests: 44 passed (97.777776%), 0 failed, 0 skipped, 0 not implemented, 1 crashed
```
What does it mean for a suite to fail vs. crash? Unclear. There is one sentence in the doc that hints at it: "If setup-function signals an error the entire suite is skipped and marked as “crashed”." Is that the only way a suite is marked as crashed? Why not also mark a suite crashed if one of its tests crashed? Dunno.
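For comparison, the rule I would have expected, again as a rough Python sketch of my own rather than anything in Testworks:

```python
def suite_status(test_statuses):
    # One crashed test taints the whole suite; otherwise any failed
    # test fails it. (Skipped/not-implemented handling omitted.)
    if "crashed" in test_statuses:
        return "crashed"
    if "failed" in test_statuses:
        return "failed"
    return "passed"

print(suite_status(["passed", "failed", "crashed"]))   # -> "crashed"
```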
In my view we should remove the distinction. Simplify, simplify, simplify.
(But what to do when a suite setup function fails is an open question. I suggest we simply note the suite setup function failure in the results and let the tests themselves run as normal. They will probably fail due to the setup failure, providing an unmistakable signal.)
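Here is a rough Python sketch of that proposal (hypothetical behavior, not what Testworks does today); with the distinction removed, plain pass/fail statuses suffice:

```python
def run_suite(suite_name, setup, tests, results):
    try:
        setup()
    except Exception as err:
        # Note the setup failure in the results instead of skipping
        # the suite; deliberately no early return.
        results.append((suite_name + " setup", "failed", err))
    for test_name, test_fn in tests:
        try:
            test_fn()
            results.append((test_name, "passed", None))
        except Exception as err:
            # With the distinction removed, every escaping condition,
            # assertion or otherwise, is simply a failure.
            results.append((test_name, "failed", err))
```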
For me, knowing that a test signaled an unhandled condition is, at a visceral level, much more serious than knowing that an assertion failed, and I'm more likely to go fix those sooner. If #86 were implemented, that would change, of course.
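To spell out the distinction being defended here, a sketch of the usual mechanics (my reading of it in Python, not Testworks' actual source): a failure is an assertion the harness caught and recorded; a crash is anything else the test let escape.

```python
def run_one(test_fn):
    try:
        test_fn()
        return "passed"
    except AssertionError:
        return "failed"        # a checked expectation was not met
    except Exception:
        return "crashed"       # the test let some other condition escape

def bad_test():
    raise ValueError("boom")   # not an assertion

print(run_one(bad_test))       # -> "crashed"
```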
As long as the distinction remains, "failed" and "crashed" should at least be adjacent in the report output.
I'm going to close this bug. d8d9a07 addresses at least the display aspects, I think, since CRASHED will be displayed if any test in the suite crashed.