Improve resource cleaning
joyrex2001 committed May 20, 2021
1 parent a6108ac commit 11e634d
Showing 8 changed files with 260 additions and 34 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -1,5 +1,5 @@
run:
go run main.go -v 2
go run main.go -pP -v 2

build:
CGO_ENABLED=0 go build -ldflags \
10 changes: 8 additions & 2 deletions README.md
@@ -2,7 +2,7 @@

Kubedock is an experimental implementation of the Docker API that will orchestrate containers into a Kubernetes cluster, rather than running them locally. The main driver for this project is to be able to run [testcontainers-java](https://www.testcontainers.org) enabled unit tests in k8s, without the requirement of running docker-in-docker within resource-heavy containers.

The current implementation is limited, but it is able to run containers that just expose ports, have resources copied to them, or mount volumes. Containers that 'just' expose ports, require logging, or have resources copied to them while running will probably work. Volume mounting is implemented by copying the local volume to the container; changes made by the container to this volume are not synced back. All data is considered ephemeral.
The current implementation is limited, but it is able to run containers that just expose ports, have resources copied to them, or mount volumes. Containers that 'just' expose ports, require logging, or have resources copied to them while running will probably work. Volume mounting is implemented by copying the local volume to the container; changes made by the container to this volume are not synced back. All data is considered ephemeral. If a container has network aliases configured, kubedock will create k8s services with the alias as the service name. However, if aliases are present, a port mapping should be configured as well (since a service requires a specific port mapping).

## Quick start

@@ -40,7 +40,13 @@ The below use-cases are mostly not working:

## Resource reaping

Kubedock will dynamically create deployments and services in the configured namespace. If kubedock is requested to delete a container, it will remove the deployment and related services. However, if, for example, a test fails and did not clean up the containers it started, these resources will remain in the namespace. To prevent unused deployments and services from lingering around, kubedock will automatically delete deployments and services that are older than 5 minutes (the default) if they are owned by the current process. Deployments and services that are not owned by the running process will be deleted after 10 minutes, provided they have the label `kubedock=true`.
### Automatic reaping

Kubedock will dynamically create deployments and services in the configured namespace. If kubedock is requested to delete a container, it will remove the deployment and related services. However, if, for example, a test fails and did not clean up the containers it started, these resources will remain in the namespace. To prevent unused deployments and services from lingering around, kubedock will automatically delete deployments and services that are older than 15 minutes (the default) if they are owned by the current process. Deployments and services that are not owned by the running process will be deleted after 30 minutes, provided they have the label `kubedock=true`.
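
The reap age can be tuned with the `--reapmax` (`-r`) flag shown in `cmd/root.go` below; for example, a possible invocation (assuming kubedock is started via `go run main.go`, as in the Makefile's `run` target):

```sh
# reap unused kubedock deployments and services after 30 minutes instead of the 15-minute default
go run main.go --reapmax 30m
```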

### Forced cleaning

The reaping of resources can also be enforced at startup and at exit. When kubedock is started with the `--prune-start` argument, it will delete all resources that have the `kubedock=true` label before starting the API server. If the `--prune-exit` argument is set, kubedock will delete all the resources it created in the running instance before exiting (identified by the `kubedock.id` label).
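
For example, both prune modes can be enabled with either the long flags or the combined short flags registered in `cmd/root.go` (assuming kubedock is started via `go run main.go`, as in the Makefile):

```sh
# prune leftover kubedock=true resources at startup and clean up this instance's resources on exit
go run main.go --prune-start --prune-exit
# the combined short form used in the Makefile's run target
go run main.go -pP
```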

# See also

6 changes: 5 additions & 1 deletion cmd/root.go
@@ -39,8 +39,10 @@ func init() {
rootCmd.PersistentFlags().StringP("namespace", "n", "default", "Namespace in which containers should be orchestrated")
rootCmd.PersistentFlags().String("initimage", config.Image, "Image to use as initcontainer for volume setup")
rootCmd.PersistentFlags().DurationP("timeout", "t", 1*time.Minute, "Container creating timeout")
rootCmd.PersistentFlags().DurationP("reapmax", "r", 5*time.Minute, "Reap all resources older than this time")
rootCmd.PersistentFlags().DurationP("reapmax", "r", 15*time.Minute, "Reap all resources older than this time")
rootCmd.PersistentFlags().StringP("verbosity", "v", "1", "Log verbosity level")
rootCmd.PersistentFlags().BoolP("prune-start", "P", false, "Prune all existing kubedock resources before starting")
rootCmd.PersistentFlags().BoolP("prune-exit", "p", false, "Prune all created resources on exit")

viper.BindPFlag("server.listen-addr", rootCmd.PersistentFlags().Lookup("listen-addr"))
viper.BindPFlag("server.socket", rootCmd.PersistentFlags().Lookup("socket"))
@@ -52,6 +54,8 @@ func init() {
viper.BindPFlag("kubernetes.timeout", rootCmd.PersistentFlags().Lookup("timeout"))
viper.BindPFlag("reaper.reapmax", rootCmd.PersistentFlags().Lookup("reapmax"))
viper.BindPFlag("verbosity", rootCmd.PersistentFlags().Lookup("verbosity"))
viper.BindPFlag("prune-start", rootCmd.PersistentFlags().Lookup("prune-start"))
viper.BindPFlag("prune-exit", rootCmd.PersistentFlags().Lookup("prune-exit"))

viper.BindEnv("server.listen-addr", "SERVER_LISTEN_ADDR")
viper.BindEnv("server.socket", "SERVER_SOCKET")
2 changes: 0 additions & 2 deletions go.sum
@@ -101,8 +101,6 @@ github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.7.1 h1:qC89GU3p8TvKWMAVhEpmpB2CIb1hnqt2UdKZaP93mS8=
github.com/gin-gonic/gin v1.7.1/go.mod h1:jD2toBW3GZUr5UMcdrwQA10I7RuaFOl/SGeDjXkfUtY=
github.com/gin-gonic/gin v1.7.2-0.20210519235755-e72e584d1aba h1:2jUZdpT0sXVBSeXbatd/CdRBaOK8YYFlnx5v5LBmOj4=
github.com/gin-gonic/gin v1.7.2-0.20210519235755-e72e584d1aba/go.mod h1:jD2toBW3GZUr5UMcdrwQA10I7RuaFOl/SGeDjXkfUtY=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
72 changes: 53 additions & 19 deletions internal/backend/delete.go
@@ -10,12 +10,28 @@ import (
"github.com/joyrex2001/kubedock/internal/model/types"
)

// DeleteAll will delete all resources that have the label kubedock=true
func (in *instance) DeleteAll() error {
if err := in.deleteServices("kubedock=true"); err != nil {
klog.Errorf("error deleting services: %s", err)
}
return in.deleteDeployments("kubedock=true")
}

// DeleteWithKubedockID will delete all resources that have the given kubedock.id
func (in *instance) DeleteWithKubedockID(id string) error {
if err := in.deleteServices("kubedock.id=" + id); err != nil {
klog.Errorf("error deleting services: %s", err)
}
return in.deleteDeployments("kubedock.id=" + id)
}

// DeleteContainer will delete given container object in kubernetes.
func (in *instance) DeleteContainer(tainr *types.Container) error {
if err := in.deleteServices(tainr.ShortID); err != nil {
if err := in.deleteServices("kubedock.containerid=" + tainr.ShortID); err != nil {
klog.Errorf("error deleting services: %s", err)
}
return in.cli.AppsV1().Deployments(in.namespace).Delete(context.TODO(), tainr.ShortID, metav1.DeleteOptions{})
return in.deleteDeployments("kubedock.containerid=" + tainr.ShortID)
}

// DeleteContainersOlderThan will delete containers that are orchestrated
@@ -28,14 +44,9 @@ func (in *instance) DeleteContainersOlderThan(keepmax time.Duration) error {
return err
}
for _, dep := range deps.Items {
if dep.ObjectMeta.DeletionTimestamp != nil {
klog.V(3).Infof("skipping deployment %v, already in deleting state", dep)
continue
}
old := metav1.NewTime(time.Now().Add(-keepmax))
if dep.ObjectMeta.CreationTimestamp.Before(&old) {
if in.isOlderThan(dep.ObjectMeta, keepmax) {
klog.V(3).Infof("deleting deployment: %s", dep.Name)
if err := in.deleteServices(dep.Name); err != nil {
if err := in.deleteServices("kubedock.containerid=" + dep.Name); err != nil {
klog.Errorf("error deleting services: %s", err)
}
if err := in.cli.AppsV1().Deployments(dep.Namespace).Delete(context.TODO(), dep.Name, metav1.DeleteOptions{}); err != nil {
@@ -56,12 +67,7 @@ func (in *instance) DeleteServicesOlderThan(keepmax time.Duration) error {
return err
}
for _, svc := range svcs.Items {
if svc.ObjectMeta.DeletionTimestamp != nil {
klog.V(3).Infof("skipping service %v, already in deleting state", svc)
continue
}
old := metav1.NewTime(time.Now().Add(-keepmax))
if svc.ObjectMeta.CreationTimestamp.Before(&old) {
if in.isOlderThan(svc.ObjectMeta, keepmax) {
klog.V(3).Infof("deleting service: %s", svc.Name)
if err := in.cli.CoreV1().Services(svc.Namespace).Delete(context.TODO(), svc.Name, metav1.DeleteOptions{}); err != nil {
return err
@@ -71,11 +77,22 @@
return nil
}

// deleteServices will delete k8s service resources which have the
// label kubedock with the given id as value.
func (in *instance) deleteServices(id string) error {
// isOlderThan will check if the given resource metadata has a creation
// timestamp older than the given keepmax duration.
func (in *instance) isOlderThan(met metav1.ObjectMeta, keepmax time.Duration) bool {
if met.DeletionTimestamp != nil {
klog.V(3).Infof("ignoring %v, already in deleting state", met)
return false
}
old := metav1.NewTime(time.Now().Add(-keepmax))
return met.CreationTimestamp.Before(&old)
}

// deleteServices will delete k8s service resources which match the
// given label selector.
func (in *instance) deleteServices(selector string) error {
svcs, err := in.cli.CoreV1().Services(in.namespace).List(context.TODO(), metav1.ListOptions{
LabelSelector: "kubedock.containerid=" + id,
LabelSelector: selector,
})
if err != nil {
return err
@@ -87,3 +104,20 @@ func (in *instance) deleteServices(id string) error {
}
return nil
}

// deleteDeployments will delete k8s deployment resources which match the
// given label selector.
func (in *instance) deleteDeployments(selector string) error {
deps, err := in.cli.AppsV1().Deployments(in.namespace).List(context.TODO(), metav1.ListOptions{
LabelSelector: selector,
})
if err != nil {
return err
}
for _, dep := range deps.Items {
if err := in.cli.AppsV1().Deployments(dep.Namespace).Delete(context.TODO(), dep.Name, metav1.DeleteOptions{}); err != nil {
return err
}
}
return nil
}
169 changes: 160 additions & 9 deletions internal/backend/delete_test.go
@@ -13,11 +13,11 @@ import (
"github.com/joyrex2001/kubedock/internal/model/types"
)

func TestDeleteContainer(t *testing.T) {
func TestDeleteContainerKubedockID(t *testing.T) {
tests := []struct {
in *types.Container
kub *instance
out bool
ins int
}{
{
kub: &instance{
@@ -27,20 +27,171 @@ func TestDeleteContainerKubedockID(t *testing.T) {
Name: "tb303",
Namespace: "default",
},
Status: appsv1.DeploymentStatus{
ReadyReplicas: 1,
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
ins: 1,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock.containerid": "tb303", "kubedock.id": "6502"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
ins: 1,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock.containerid": "tb303", "kubedock.id": "z80"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
out: false,
ins: 0,
},
}

for i, tst := range tests {
res := tst.kub.DeleteContainer(tst.in)
if (res != nil && !tst.out) || (res == nil && tst.out) {
t.Errorf("failed test %d - unexpected return value %s", i, res)
if err := tst.kub.DeleteWithKubedockID("z80"); err != nil {
t.Errorf("failed test %d - unexpected error %s", i, err)
}
deps, _ := tst.kub.cli.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
cnt := len(deps.Items)
if cnt != tst.ins {
t.Errorf("failed delete instances test %d - expected %d remaining deployments but got %d", i, tst.ins, cnt)
}
}
}

func TestDeleteContainers(t *testing.T) {
tests := []struct {
in *types.Container
kub *instance
cnt int
}{
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
cnt: 1,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock.containerid": "tb303", "kubedock.id": "6502"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
cnt: 0,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock.containerid": "tb303", "kubedock.id": "z80"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
cnt: 0,
},
}

for i, tst := range tests {
if err := tst.kub.DeleteContainer(tst.in); err != nil {
t.Errorf("failed test %d - unexpected error %s", i, err)
}
deps, _ := tst.kub.cli.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
cnt := len(deps.Items)
if cnt != tst.cnt {
t.Errorf("failed test %d - expected %d remaining deployments but got %d", i, tst.cnt, cnt)
}
}
}

func TestDeleteContainerKubedock(t *testing.T) {
tests := []struct {
in *types.Container
kub *instance
all int
}{
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
all: 1,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock": "true", "kubedock.id": "6502"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
all: 0,
},
{
kub: &instance{
namespace: "default",
cli: fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "tb303",
Namespace: "default",
Labels: map[string]string{"kubedock": "true", "kubedock.id": "z80"},
},
}),
},
in: &types.Container{ID: "rc752", ShortID: "tb303", Name: "f1spirit"},
all: 0,
},
}

for i, tst := range tests {
if err := tst.kub.DeleteAll(); err != nil {
t.Errorf("failed test %d - unexpected error %s", i, err)
}
deps, _ := tst.kub.cli.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
cnt := len(deps.Items)
if cnt != tst.all {
t.Errorf("failed delete all test %d - expected %d remaining deployments but got %d", i, tst.all, cnt)
}
}
}
@@ -81,7 +232,7 @@ func TestDeleteServices(t *testing.T) {
}

for i, tst := range tests {
if err := tst.kub.deleteServices(tst.id); err != nil {
if err := tst.kub.deleteServices("kubedock.containerid=" + tst.id); err != nil {
t.Errorf("failed test %d - unexpected error %s", i, err)
}
svcs, _ := tst.kub.cli.CoreV1().Services("default").List(context.TODO(), metav1.ListOptions{})
2 changes: 2 additions & 0 deletions internal/backend/main.go
@@ -14,6 +14,8 @@ import (
type Backend interface {
StartContainer(*types.Container) error
CreateServices(*types.Container) error
DeleteAll() error
DeleteWithKubedockID(string) error
DeleteContainer(*types.Container) error
DeleteContainersOlderThan(time.Duration) error
DeleteServicesOlderThan(time.Duration) error
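
A minimal sketch of how these new interface methods could back the prune flags — `DeleteAll` for `--prune-start` and `DeleteWithKubedockID` for `--prune-exit`. The helper below is hypothetical and not part of this commit; the actual wiring in kubedock may differ:

```go
package prune

import (
	"log"

	"github.com/joyrex2001/kubedock/internal/backend"
)

// pruneStartExit is a hypothetical helper illustrating the intended use of the
// new Backend methods: prune leftovers from earlier runs before serving the
// API, and clean up this instance's own resources on shutdown.
func pruneStartExit(kub backend.Backend, id string, pruneStart, pruneExit bool) {
	if pruneStart {
		// delete everything labelled kubedock=true, regardless of owner
		if err := kub.DeleteAll(); err != nil {
			log.Printf("prune on start failed: %s", err)
		}
	}
	if pruneExit {
		// delete only resources created by this instance (kubedock.id=<id>)
		if err := kub.DeleteWithKubedockID(id); err != nil {
			log.Printf("prune on exit failed: %s", err)
		}
	}
}
```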