provide an easy way to diagnose CI / CD pipeline failures #318

Open
jstrachan opened this issue Sep 21, 2017 · 2 comments

Comments

@jstrachan
Contributor

We've been finding that some builds fail inside a CI / CD pipeline but not on a user's laptop. So it'd be nice if folks had an easier way to diagnose build failures.

Ideally, long term we'd turn a build pod into a Che Workspace that folks could just open. Though there's the issue that build pods tend to be composed of several containers (maven + clients + jnlp) whereas Che workspaces tend to be a single docker image with all the CLI tools in it.

Possible ideas:

fork the pipeline to add a pause

We could provide a way to let folks fork a failing pipeline and add a dummy input step before the failing line; then re-run that pipeline; then let folks kubectl exec -it nameOfPod -c containerName bash so they can cd into the folder and run whatever commands they need, etc.

I wonder if we can improve on this somewhat? e.g. automatically generating the PR for the build failure with the input step added? Or automating the CLI commands to exec into the paused build pod, cd to the right folder and re-run the last failing command?
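Purely as a sketch of the idea (the stage contents and mvn commands are made up for illustration, not what any real fabric8 pipeline runs), the injected pause could look something like this in a scripted pipeline using the Kubernetes plugin's container step:

```groovy
// Hypothetical sketch: pause the pipeline just before the step that failed,
// keeping the build pod alive so someone can exec into it and poke around.
node {
    checkout scm
    container('maven') {
        sh 'mvn clean install -DskipTests'

        // Inserted pause: the pod stays up until someone clicks Proceed/Abort,
        // so folks can `kubectl exec -it <build-pod> -c maven bash` meanwhile.
        input message: 'Build paused for debugging - exec into the pod, then proceed'

        sh 'mvn test' // the step that was failing
    }
}
```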

Pause / Resume / Retry of pipelines

I think CloudBees has a pause/resume/retry feature, but it's not available in the OSS version AFAIK.

Create a Pod YAML for a build pod

We could create a Pod YAML by default, based on the Jenkins pipeline build pod, and add it to the Jenkins build as an artefact; then make the pod do the git clone on startup (via an init container) and then wait, so that folks could kubectl exec into it?

Here's an example init container that does a git clone on startup into a volume (which can be mounted into all the containers in the pod):
https://github.com/fabric8io/fabric8-platform/blob/master/apps/keycloak/src/main/fabric8/deploymentConfig.yml#L12-L29

That way, whether you work in the 'maven' container of the pod or the 'clients' container, the same git clone is visible.

Then to test out a build, folks could just apply this Pod YAML and kubectl exec into the pod. If we put the YAML at a canonical place in Jenkins we could then add a gofabric8 CLI command to run the latest build's Pod YAML, or something along those lines. A rough sketch of what such a generated Pod YAML could look like is below.
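As a minimal sketch (pod name, image names and the repo URL are all placeholders, not what Jenkins actually emits): an init container clones the repo into a shared emptyDir volume, and the main containers just sleep so they can be exec'd into.

```yaml
# Hypothetical example of a generated build-pod artefact; names, images and
# the repo URL are placeholders for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: debug-build-318
spec:
  volumes:
  - name: workspace
    emptyDir: {}
  initContainers:
  - name: git-clone
    image: alpine/git
    args: ["clone", "https://github.com/myorg/myrepo.git", "/workspace/source"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  containers:
  - name: maven
    image: maven:3.5-jdk-8
    command: ["sleep", "infinity"]      # just wait so folks can exec in
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  - name: clients
    image: fabric8/builder-clients      # placeholder for the clients image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
```

Then `kubectl apply -f build-pod.yml` followed by `kubectl exec -it debug-build-318 -c maven bash` and `cd /workspace/source` would give you the same clone in either container.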

@rawlingsj
Contributor

rawlingsj commented Sep 21, 2017

One issue could be that usually only admins have kubectl exec access, as anyone who does would be able to see the secrets mounted in the build pod.

@jstrachan
Contributor Author

Ah yeah, good point! I guess folks could always run the build pod on any kubernetes cluster? Maybe we remove the secrets used for git / docker / nexus pushes?

Or we only let folks run the build pod in their own tenant?
