Developer Experience with Spread #9
Cool, this seems great! An important and nice detail is that if the image already has content at the path where the host volume is mounted, it gets replaced by the contents of the host. So you can mount your code even while the image has the code baked in too. How can I help with this?
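A minimal illustration of that behavior (the image name and paths here are hypothetical):

```sh
# Mounting a host directory over a path that already exists in the image:
# the container sees the host's files, not whatever the image shipped at /app.
docker run --rm -v "$(pwd)/src:/app" myimage ls /app
```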
Overall sounds good. I do have some philosophical questions in regards to the developer/command workflow though. In redspread/spread#59, @mfburnett mentions the "build-push-deploy" paradigm, and I agree that being explicit with commands is important. From what I understand, even though the images are available to the shared daemon after building, does that mean they should automatically be pushed and deployed locally? It seems like a single responsibility principle violation. In this situation, I'd expect to follow a more explicit workflow, roughly as sketched below.
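Something along these lines, expressed with plain docker/kubectl commands purely for illustration (the registry address and names are placeholders, not actual spread commands):

```sh
docker build -t registry.example.com/myapp:dev .   # build: produce the image
docker push registry.example.com/myapp:dev         # push: publish it to a registry
kubectl apply -f k8s/                              # deploy: update the cluster objects
```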
@rata: How does this compare to your current workflows? I would love any feedback on how to make this as dev-friendly as possible, and if you are feeling ambitious, feel free to contribute features you think would help enhance the workflow. I think there could be a whole suite of these, implementing common Kube development tasks that are currently cumbersome.

@jsquirrelz: My thinking on this: something I've found gets tiresome in development with Kube is the cycle of building images, pushing images, and updating objects. Since the duration of building and pushing is variable (and can take a while), it leads to significant disruption in my development "flow". I also like not having to push because it allows offline work. Maybe we keep ...
@ethernetdan: I agree that a command named "build" should only build. But another command that "builds and deploys" should be there too, IMHO. I think that:
Then:
And I think this addresses the concerns of everyone so far. What do you think? Am I missing something?
@ethernetdan totally get you on the significant disruption to your development flow. I haven't used a local registry before, so I'm not familiar with the delay you'd experience pushing locally, but I'd imagine it'd be much faster than pushing to the public registry on Docker Hub or to a private one on another server? If not, pardon my ignorance. But like @rata said:
And pushing to a local registry should be faster still.

Generally I'm imagining an easy plug & play for different environments, where you deploy to development/staging/production (which could all have different registries/clusters defined in a config file). I agree there should be a single command that builds and deploys. I just can't decide whether changing that to build/push/deploy for local development is unnecessary extra work, or whether it introduces inconsistency in development practices when you need to push to staging/production. Nonetheless, a local registry isn't necessary to deploy locally, so my vote would be to not include one. But since it's really an isolated service, it could probably be incorporated pretty easily down the line if there was ever a true need for it. I'm down to open an issue and lead that investigation if you think it could help.

Overall I think @ethernetdan and @rata have described a solid development flow 👏 The key developer experiences I'm seeing are:
I do think one of the key points @rata made was:
Is old container/image cleanup currently supported?

P.S. When I want to start with a fresh docker daemon, I usually run a script like:
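Roughly this kind of thing (a sketch of that sort of "remove everything" script; the exact commands are assumed):

```sh
#!/usr/bin/env bash
# Stop and remove every container, then remove every image.
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -q)
```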
There has got to be a better way, right?
@jsquirrelz: Yes, with Kubernetes you can update a container and that will delete the old one and create a new one. No need for that script that kills everything, Kubernetes can handle this for us :)
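For example (the Deployment and image names here are hypothetical), updating the image on a Deployment makes Kubernetes tear down the old pods and start new ones on its own:

```sh
kubectl set image deployment/myapp myapp=myimage:v2   # swap in the new image
kubectl rollout status deployment/myapp               # watch old pods being replaced
```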
I agree here too. A ...

I think something like ...

So doing a ...
Kubernetes does container and image cleanup, so it should be okay. Though by default it only cleans up images once disk usage crosses a certain threshold.
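The behaviour being referred to is the kubelet's image garbage collection; the relevant flags look roughly like this (the threshold values shown are illustrative, not defaults confirmed in this thread):

```sh
# Image GC starts once disk usage passes the high threshold and removes unused
# images until usage falls back below the low threshold.
kubelet --image-gc-high-threshold=85 --image-gc-low-threshold=80
```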
Isn't building them enough, so we don't have to push them? And maybe just manually deploy them again? I feel like a command similar to ...

For all our k8s work at the moment we use bash scripts (for "building + pushing" and "deploying"), and then a Makefile to automate all of it.
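For context, a stripped-down sketch of that kind of build-and-push helper script (every name, registry, and path here is assumed):

```sh
#!/usr/bin/env bash
set -euo pipefail

IMAGE="registry.example.com/myapp:$(git rev-parse --short HEAD)"

docker build -t "$IMAGE" .                          # build + tag with the current commit
docker push "$IMAGE"                                # push to the team registry
kubectl set image deployment/myapp myapp="$IMAGE"   # roll the new image out
```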
@rata @zoidbergwill @jsquirrelz sorry for taking so long to respond
I definitely see value in consistency; it would be nice if deploying to a remote cluster followed a similar workflow to deploying locally. An unfortunate side effect of running a registry within localkube is that we would end up storing the image twice (registry + daemon). This might be something we could simplify with versioning (redspread/spread#122): instead of pushing images to other clusters, we'd push versioned references to images, leaving the actual transport up to Docker/Kubernetes.
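To make the double-storage point concrete, this is roughly what pushing to a registry running next to the daemon looks like (addresses and names are placeholders):

```sh
docker run -d -p 5000:5000 --name registry registry:2   # throwaway local registry
docker tag myimage:dev localhost:5000/myimage:dev
docker push localhost:5000/myimage:dev                  # the image now lives in the daemon AND the registry
```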
+1, I like that interface. Any preference between deploy and run? (or another name)
Should we just have users create Volume objects themselves, or is there some way we could better facilitate that?
Not sure what to do with this one; changing that many fields seems invasive but at the same time would bring convenience.
On Sun, Apr 03, 2016 at 02:57:25PM -0700, Dan Gillespie wrote:
(sorry, will reply to the rest tomorrow)
No, this is wrong. Using ":latest" on Kubernetes automatically sets the image pull policy to Always.
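A quick way to see this (the pod and image names are placeholders): create a pod from a ":latest" tag and check the defaulted pull policy.

```sh
kubectl run pullpolicy-demo --image=nginx:latest --restart=Never
kubectl get pod pullpolicy-demo -o jsonpath='{.spec.containers[0].imagePullPolicy}'
# prints: Always
```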
The primary goal of localkube is to provide a streamlined development experience for people working with Kubernetes. This involves abstracting the complexities of operating a Kubernetes cluster in a development context.
Here is the workflow I imagine for localkube:

Developing Docker images/Kubernetes objects
1. docker-machine to bring up a VM (non-Linux only)
2. kubectl creds are set up for you
3. docker build the image that you want to work with
4. spread build . to deploy to the cluster
5. spread build . each time you want to deploy changes
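Put together as commands, that workflow looks roughly like this (the VM and image names are placeholders; spread build . is the command from the list above):

```sh
docker-machine create --driver virtualbox dev-vm   # bring up the VM (non-Linux only)
eval "$(docker-machine env dev-vm)"                # point the docker CLI at it
docker build -t myimage:dev .                      # build the image being worked on
spread build .                                     # deploy to the cluster; rerun after each change
```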
Developing code
If you are developing code, I would follow the same process above but mount the code being changed using a HostPathVolumeSource. This way changes are immediately available within the container.
Linux users can simply mount the path storing the code. For OS X and Windows users, docker-machine mounts /Users and C:\Users respectively inside the root of the VM. For more information, see this page.
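A minimal sketch of that setup, assuming the project lives under /Users so it is visible inside the docker-machine VM; the pod name, image, and paths are hypothetical:

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
  - name: app
    image: myimage:dev
    volumeMounts:
    - name: code
      mountPath: /app              # host code shadows whatever the image has at /app
  volumes:
  - name: code
    hostPath:
      path: /Users/me/project      # must be under a path the VM mounts from the host
EOF
```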