
Pull Requests/Merge to master #306

Open
bartoszmajsak opened this issue May 26, 2015 · 16 comments

@bartoszmajsak
Contributor

Hi guys,

I understand we gave up on the rebase approach and now merge from GitHub. Fine with me.

I'm just wondering: why do we merge PRs that lead to a failing build? Such an approach risks introducing lots of issues to the whole project, because we are missing an opportunity to validate them.

I assume a PR is at least executed in the approver's local environment before hitting the magic button, isn't it?

@arun-gupta
Contributor

@aheusingfeld any idea why the Travis build is failing?

@arjantijms
Contributor

I'm just wondering: why do we merge PRs that lead to a failing build?

@bartoszmajsak

I think the criterion for failing the build is not the right one.

The build now fails when any test fails. But the tests point out incompatibilities/bugs in various servers, so almost by definition not all tests can pass.

For instance, no matter the commit, the build now ALWAYS fails because some early test in Batch fails. But if JBoss (which I think the tests are run against) itself has a bug here, then the test is still valid, and JBoss should fix it. Meanwhile, the build should not fail because of it.

For a PR, the build should probably only fail if the code doesn't compile.
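A compile-only gate is straightforward to express; a minimal sketch (nothing project-specific assumed):

```sh
# Compile main and test sources without executing any tests,
# so the build fails only when something genuinely doesn't compile.
mvn test-compile
```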

I assume a PR is at least executed in the approver's local environment before hitting the magic button, isn't it?

I do personally test every PR, but what do you want to see as a result? That's my question. A failing test does not mean the PR is invalid.

@arjantijms
Contributor

@arun-gupta

any idea why the Travis build is failing?

It seems it actually runs the tests and aborts on the first failure, then marks the entire build as failed. See my comments here as well: #303

@bartoszmajsak
Contributor Author

For instance, no matter the commit, the build now ALWAYS fails because some early test in Batch fails. But if JBoss (which I think the tests are run against) itself has a bug here, then the test is still valid, and JBoss should fix it.

Which essentially means that we (and people potentially interested in Java EE goodies) have no idea whether this code is valid unless we somehow manage to run the relevant parts.

I think the problem lies in the way the repository is structured, but also in how our builds are defined. I believe we can do way better.

At this point, all of the surrounding infrastructure serves no purpose, in my humble opinion.

@arjantijms
Contributor

Which essentially means that we (and people potentially interested in Java EE goodies) have no idea whether this code is valid unless we somehow manage to run the relevant parts.

No, the test is valid. Their server is not.

It points out to people that they can't use that feature on their specific server. To run all the tests without the run aborting on the very first failure, --fail-at-end can be used.
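For reference, a minimal sketch of that invocation (both forms are standard Maven options):

```sh
# Keep building all modules even when an earlier module has test failures,
# and report every failure at the end instead of aborting on the first one.
mvn verify --fail-at-end

# Short form of the same flag.
mvn verify -fae
```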

That said, we should check external and each other's commits to really make sure the tests are indeed valid. That is not always easy; I mean, who here knows enough about JASPIC to double-check that my JASPIC tests are valid? Again, a failing test does not indicate an invalid test.

I personally go to great lengths to check my tests, validate them with the JASPIC spec lead in case of doubt, and run them on multiple servers. In some cases a seemingly simple commit has taken weeks of testing the test itself.

I would like an easier way to run just the JPA tests or just the JASPIC tests, though. Right now the easiest way seems to be to comment out the unneeded modules in the main pom.xml file.
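One possible alternative to commenting modules out, assuming the module directories are named after the specs (e.g. `jpa`, `jaspic`):

```sh
# Build and test only the jpa module (-pl), plus anything it
# depends on within the reactor (-am), without touching pom.xml.
mvn verify -pl jpa -am
```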

@aheusingfeld
Contributor

any idea why the Travis build is failing?

It seems it actually runs the tests and aborts on the first failure, then marks the entire build as failed. See my comments here as well: #303

That's not totally correct. It seems that Travis builds cannot have more than 10,000 lines of output. If you take a look at all the builds, they all stop at that log limit, no matter which line happens to be the last. I'll ask the Travis folks about it.

Ah, and by the way:

I think the problem lies in the way the repository is structured, but also in how our builds are defined. I believe we can do way better.

I'm 100% with Bartosz on that one. I think it'd be much better if we created separate builds for the submodules; the builds would also be way faster.

@arjantijms
Contributor

It seems that Travis builds cannot have more than 10,000 lines of output. If you take a look at all the builds, they all stop at that log limit.

Okay, that's interesting. It looked like the abort happened on the first failure, but that could be a coincidence then.

@aheusingfeld
Contributor

Intermediate result: downloading the dependencies alone takes ~3 minutes! https://api.travis-ci.org/jobs/64117841/log.txt?deansi=true

Timeout (20 minutes) reached. Terminating "mvn -q --fail-at-end verify -Ptomee-embedded-arquillian"
The command "travis_wait mvn -q --fail-at-end verify -Ptomee-embedded-arquillian" exited with 1.

Our build was killed after 20 minutes, probably because of the 'quiet' flag. I will remove that flag again so Travis shows the real failure, and I'll also add the --fail-at-end option so that all tests are run. Nevertheless, we should think about splitting the build to at least get faster results.
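The adjusted Travis step could then look roughly like this (the profile name is taken from the log above; everything else is a sketch):

```yaml
# Sketch of the .travis.yml script line: no -q, so the real failure
# shows up in the log, and --fail-at-end so every module's tests run
# before the build is marked as failed.
script: travis_wait mvn --fail-at-end verify -Ptomee-embedded-arquillian
```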

@aheusingfeld
Contributor

There are really a lot of broken tests in the Travis build. Maybe one of you wants to take a closer look at them while I'm trying to get the build working? https://api.travis-ci.org/jobs/64335298/log.txt?deansi=true

@arjantijms
Contributor

One additional type of "failure" I see in the log is that some tests are run against TomEE, but TomEE is a Web Profile+ implementation and doesn't support all the technologies being tested.

For instance, JASPIC fails on TomEE, but TomEE doesn't support JASPIC.
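One hedged way to encode that in the build would be to let each server profile list only the modules that server actually implements (the module names here are assumptions):

```xml
<!-- Sketch: the TomEE profile simply omits modules for technologies
     TomEE does not support, such as the JASPIC samples. -->
<profile>
    <id>tomee-embedded-arquillian</id>
    <modules>
        <module>jpa</module>
        <module>batch</module>
        <!-- intentionally no jaspic module here -->
    </modules>
</profile>
```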

@aheusingfeld
Contributor

FYI, I split the Travis build by subfolder so the parts can theoretically run in parallel. I think the output very clearly highlights our current problem: the project is just too huge to build as one piece! If the build of a single subproject takes more than 6 minutes, how can we still speak of "fast feedback"?

@bartoszmajsak I guess splitting by subproject is a reasonable first step. We now need to see whether we can limit builds to only the subprojects where changes have been made. The obvious way would be to really split the GitHub project, but I don't like that too much, as it has (organizational) side effects.

If none of you minds, I'd like to focus on splitting only the build and try to filter the subprojects via Travis' explicit include feature. If that doesn't work, we can still think about plan B. Does that make sense?
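A sketch of what that split could look like in .travis.yml, using an environment-variable matrix so each subproject becomes its own parallel job (the module names are assumptions):

```yaml
# Each env entry expands into a separate Travis job, so one slow or
# broken module no longer blocks feedback for all the others.
env:
  - SAMPLE_MODULE=jpa
  - SAMPLE_MODULE=jaspic
  - SAMPLE_MODULE=batch
script: mvn --fail-at-end verify -pl $SAMPLE_MODULE -am
```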

@kubamarchwicki
Contributor

Splitting by subprojects makes perfect sense.

Though I have a different question: we are testing against wildfly-embedded-arquillian (8.2.0.Final). Not that I have anything against Wildfly, but why aren't the tests run against Glassfish (after all, the reference implementation)?

@aheusingfeld
Contributor

@kubamarchwicki forget what I said in the comment above. I tried different things locally, and I think that splitting into multiple subprojects, i.e. javaee7-jpa-samples, is the only right way to go.

Not that I have anything against Wildfly, but why aren't the tests run against Glassfish

I have no feelings about this. I doubt, though, that switching the server will have an impact on the OutOfMemoryError we currently have for the JPA build (https://travis-ci.org/javaee-samples/javaee7-samples/jobs/65640032) :-o
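If the OutOfMemoryError happens in the Maven JVM rather than in a forked test JVM, a first hedged step could be raising the heap in .travis.yml (the values here are guesses):

```yaml
# Give the Maven process more headroom; if the failing tests fork their
# own JVM, the surefire/failsafe argLine would need the same treatment.
env:
  global:
    - MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m"
```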

@kubamarchwicki
Contributor

Didn't know about the OOM with JPA :o

Maybe a different approach: have a Wildfly/Glassfish instance running and run the remote tests? For the sake of the build?

Splitting into multiple independent projects might be a way to go as well, but it complicates the build process (slightly). Unless, that is, we cover the basics and make sure the parent pom and some utility libs are distributed as Maven dependencies (even via jitpack.io), so that people can still focus on the functionality and have the infrastructure provided.
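For illustration, a sketch of what consuming such a shared parent via jitpack.io could look like; the coordinates follow jitpack's com.github.&lt;user&gt; convention and the version is a placeholder:

```xml
<!-- In a per-spec sample project: pull the shared dependency management
     in via import scope instead of using it as a literal <parent>,
     which sidesteps Maven's parent-resolution ordering. -->
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.github.javaee-samples</groupId>
            <artifactId>javaee7-samples-parent</artifactId>
            <version>1.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```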

What's your view?

@aheusingfeld
Contributor

I switched the build to glassfish (227c82f); let's see how it goes.

it complicates the build process (slightly)

We'll need a javaee7-samples-parent project that holds the parent pom.xml, with a property for each library's version and a proper <dependencyManagement> section. We can then publish this to Bintray or reference it via jitpack.io; that seems to be a matter of taste. Which efforts do you see apart from that?
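A rough sketch of such a parent (the coordinates and versions are placeholders, not the project's actual ones):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.javaee7</groupId>
    <artifactId>javaee7-samples-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <!-- One property per managed library version. -->
        <arquillian.version>1.1.8.Final</arquillian.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <!-- Import the Arquillian BOM so all sample modules inherit
                 consistent test-framework versions. -->
            <dependency>
                <groupId>org.jboss.arquillian</groupId>
                <artifactId>arquillian-bom</artifactId>
                <version>${arquillian.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
```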

@kubamarchwicki
Contributor

There are two util projects: utils for Batch, and generic test utils (the Arquillian config and a Parameterized test rule). After that we should be able to split off and distill a project per spec element.

Though that's quite a revolution :)
