We've been finding that some builds sometimes fail inside a CI/CD pipeline but not on a user's laptop, so it'd be nice if folks had an easier way to diagnose those build failures.
Ideally, long term we'd turn a build pod into a Che Workspace that folks could just open. The catch is that build pods tend to be composed of several containers (maven + clients + jnlp), whereas Che workspaces tend to be a single Docker image with all the CLI tools in it.
Possible ideas:
Fork the pipeline to add a pause
We could provide a way to let folks fork a failing pipeline, add a dummy input step just before the failing line, and re-run that pipeline; they could then kubectl exec -it nameOfPod -c containerName bash, cd into the workspace folder, and run whatever commands they need.
I wonder if we can improve on this somewhat? e.g. automatically generating the PR that adds the input step before the failing line? Or automating the CLI commands to shell into the paused build pod, cd to the right folder, and re-run the last failing command?
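As a rough sketch of the manual loop we'd be automating (the pod name, container name, workspace path and build command below are all hypothetical; the real values would come from the pipeline's pod template and the failing step):

```bash
# list the build pods to find the one paused on the dummy input step
kubectl get pods

# open a shell in the container that ran the failing step
# (pod and container names are made up for illustration)
kubectl exec -it myproject-build-42 -c maven -- bash

# inside the pod: cd to the checked-out source and re-run the last failing command
cd /home/jenkins/workspace/myproject   # path depends on how the agent is configured
mvn clean install
```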
Pause / Resume / Retry of pipelines
I think CloudBees has a pause/resume/retry feature, but AFAIK it's not available in the OSS version.
Create a Pod YAML for a build pod
We could generate a Pod YAML by default, based on the Jenkins pipeline build pods, and attach it to the Jenkins build as an artefact; then make the pod do the git clone on startup (via an init-container) and wait, so that folks could kubectl exec into it.
Here's an example init-container that does a git clone on startup into a volume (which can be mounted into all the containers in the pod):
https://github.com/fabric8io/fabric8-platform/blob/master/apps/keycloak/src/main/fabric8/deploymentConfig.yml#L12-L29
That way, whether you work in the 'maven' container of the pod or the 'clients' container, the same git clone is visible.
Then to try out a build, folks could just run this Pod YAML and kubectl exec into the pod. If we put the YAML at a canonical place in Jenkins, we could write a gofabric8 CLI command to run a command in the latest build's pod, or something along those lines.
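A minimal sketch of what such a generated Pod YAML could look like, applied straight from the shell; the image names, git URL and pod name are placeholders, and the real spec would be derived from the pipeline's pod template:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myproject-build-debug          # hypothetical name
spec:
  volumes:
  - name: source
    emptyDir: {}                       # shared checkout, visible to every container
  initContainers:
  - name: git-clone
    image: alpine/git                  # image entrypoint is `git`
    args: ["clone", "https://github.com/myorg/myproject.git", "/workspace"]
    volumeMounts:
    - name: source
      mountPath: /workspace
  containers:
  - name: maven
    image: maven:3-jdk-8
    command: ["sleep", "infinity"]     # just wait so folks can exec in
    workingDir: /workspace
    volumeMounts:
    - name: source
      mountPath: /workspace
  - name: clients
    image: fabric8/builder-clients     # placeholder for the CLI clients image
    command: ["sleep", "infinity"]
    workingDir: /workspace
    volumeMounts:
    - name: source
      mountPath: /workspace
EOF

# the same /workspace checkout is then visible from either container
kubectl exec -it myproject-build-debug -c maven -- bash
kubectl exec -it myproject-build-debug -c clients -- bash
```

The init-container keeps the clone out of the build images themselves, and the emptyDir volume means the maven and clients containers see exactly the same working tree.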
Ah yeah, good point! I guess folks could always run the build pod on any Kubernetes cluster? Maybe we remove the secrets used for the git / docker / nexus pushes?
Or we only let folks run the build pod in their own tenant?