Intermittent test failures #588
This comment was marked as outdated.
I see this error:
I've seen the error in these tests:

I see this error:
I've seen the error in these tests:
I've also seen a similar error:
I've seen that error in these tests:

I see this error:
I've seen that error in these tests:
@matthew-white, does issue #499 look similar to yours?
Does the failure rate decrease if you increase the default Mocha test timeout?
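If raising the default helps, one way to do it is a `.mocharc.json` at the project root (a sketch; the repo may configure Mocha elsewhere, and 10000 ms is an arbitrary example value, up from Mocha's default of 2000 ms):

```json
{
  "timeout": 10000
}
```

This is equivalent to passing `--timeout 10000` on the `mocha` command line.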
This comment was marked as outdated.
I think they're similar insomuch as they're both failing at the same point. ladjs/supertest#352 has some related discussion. Maybe we could try updating SuperTest? Though it doesn't seem like the version we're on should be having this issue.
For some reason, I'm now seeing test failures at a much lower rate. I'm not seeing some of the errors that I did yesterday, though I'm still seeing others.
I've seen this test failure just once so far:
It's another SuperTest error, so it may be related. After this test failure, 190 other tests timed out.
I've seen this test failure just once so far:
I wrote a little Bash script to run the tests repeatedly:

```bash
#!/bin/bash -eo pipefail

# Run `make test` $count times, appending each run's summary (everything
# from the "passing" line onward) to testloop.txt.
testloop() {
  local count=${1:-10}
  touch testloop.txt
  local temp
  temp=$(mktemp)
  local i line
  for i in $(seq "$count"); do
    { make test || true; } | tee "$temp"
    line=$(grep -n -m 1 passing "$temp" | cut -d ':' -f 1)
    tail -n "+$line" "$temp" >> testloop.txt
  done
}

testloop 25  # or some other number
```

@alxndrsn, do you see any of these errors if you run the tests, say, 25 times? My current theory is that
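To get a quick tally from the collected output, something like this could work (a sketch: it assumes testloop.txt holds the tail of each run's Mocha output, with lines like `  1234 passing (2m)` and, on bad runs, `  2 failing`):

```shell
#!/bin/bash
# Sketch: summarize the runs collected in testloop.txt by the testloop
# function above. Assumes each run contributed a Mocha summary line
# containing " passing", plus a " failing" line when anything failed.

# Sample data so the snippet runs standalone; real runs would already
# have populated testloop.txt.
[ -f testloop.txt ] || printf '  10 passing (1s)\n  2 failing\n  11 passing (1s)\n' > testloop.txt

runs=$(grep -c ' passing' testloop.txt)
failed=$(grep -c ' failing' testloop.txt || true)
echo "runs: $runs, runs with failures: $failed"
```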
This comment was marked as outdated.
I've also seen that. I ran my Bash script on CircleCI. I'm continuing to see other errors, but only sometimes. One thing I tried was updating SuperTest to the latest version.
Here's a new one:
Seen with current
Adding
Modifying all explicit timeouts has been discussed previously at #532. Another option that works on my machine is doubling all the explicit timeouts. I've opened a PR for this at #847.
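As a starting point for reviewing (or doubling) those explicit timeouts, a grep can list them all; this is a sketch, and `test/` is an assumption about where the suite lives (the sample file just lets the snippet run standalone):

```shell
#!/bin/bash
# Sketch: list every explicit Mocha timeout under test/ so they can be
# reviewed or doubled in one pass. test/ is an assumption about where
# the suite lives; the sample spec file is hypothetical.
mkdir -p test
[ -f test/sample.spec.js ] || cat > test/sample.spec.js <<'EOF'
describe('example', function () {
  this.timeout(4000); // hypothetical explicit timeout
});
EOF

grep -rn "this.timeout(" test/
```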
There are failures of these tests in CI due to timeouts.

CircleCI occasionally:

* https://app.circleci.com/pipelines/github/getodk/central-backend/1741/workflows/7d9014c0-3358-42d0-ad81-32dc2a195ebb/jobs/2583

GitHub Actions frequently:

* https://github.com/alxndrsn/odk-central-backend/actions/runs/6048978821/job/16415366255#step:7:10856
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6049092424/job/16415712862#step:7:10849
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6049135803/job/16415849308#step:7:10842
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6049985595/job/16418307942#step:7:10862
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6049504238/job/16416917068#step:7:10879
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6087419828/job/16515902635#step:7:10869
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6087485126/job/16516120203#step:7:10890
* https://github.com/alxndrsn/odk-central-backend/actions/runs/6087623353/job/16517132146#step:7:10883

Ref: getodk#588
This comment was marked as resolved.
This is to prevent intermittent test failures: #588 (comment)
Seeing intermittent failures with auth(?) for OIDC tests, e.g.:
This comment was marked as resolved.
@alxndrsn, I feel like we're doing better with intermittent test failures than when I first filed this issue. Glancing over this issue, it's unclear to me which failures are still an issue and which have been resolved (e.g., by the update to SuperTest). What do you think about us closing this issue, then filing an individual issue for each new test failure we see in the future?
If there are any test failures listed here that you know are still an issue or that you don't want to miss, I'd be happy to file issues for them.
I think it's inevitable that this issue is stale, so I think we can just open new tickets for any failures we observe in future. |
This should make tests fail faster, and make failures easier to understand.

Previously: `Error: end of central directory record signature not found`
Now: `Error: expected 200 "OK", got 400 "Bad Request"`

Related:

* getodk#595
* getodk#588
* getodk#1052
Tests often pass locally for me, but there are sometimes one or more failures. The failures aren't consistent: I've seen a few different errors, and even when it's the same error, it's often a different test that fails. It sounds like @ktuite is seeing some of these same errors, so I thought I'd create an issue to track them. Rather than creating an issue for each error, I thought I'd create a single issue to track the different errors we've seen. (We can always spin one off into its own issue if it starts a discussion, but starting things here seems like it might work well.) Feel free to comment if you encounter an error not listed here, to 👍 an error that you've seen, or to add more information to any of the comments below.