k3s 1.18.8 pods stuck in CrashLoopBackOff #2158

Closed
vvanouytsel opened this issue Aug 24, 2020 · 18 comments

@vvanouytsel

vvanouytsel commented Aug 24, 2020

Environmental Info:
K3s Version:

[root@fox ~]# ./k3s -v
k3s version v1.18.8+k3s1 (6b595318)

Node(s) CPU architecture, OS, and Version:

[root@fox ~]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)
[root@fox ~]# uname -a
Linux fox 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:

1 server functioning as master and node

Describe the bug:

When running k3s server, all pods stay in CrashLoopBackOff.

[root@fox ~]# ./k3s kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS             RESTARTS   AGE
kube-system   local-path-provisioner-6d59f47c7-ff9bg   0/1     CrashLoopBackOff   15         51m
kube-system   helm-install-traefik-mg6lk               0/1     CrashLoopBackOff   13         51m
kube-system   coredns-7944c66d8d-zgkzs                 0/1     CrashLoopBackOff   15         51m
kube-system   metrics-server-7566d596c8-fks6z          0/1     CrashLoopBackOff   14         51m
[root@fox ~]# ./k3s kubectl logs -n kube-system coredns-7944c66d8d-zgkzs
failed to try resolving symlinks in path "/var/log/pods/kube-system_coredns-7944c66d8d-zgkzs_ba5156ce-3f67-4773-8335-d0119089f340/coredns/15.log": lstat /var/log/pods/kube-system_coredns-7944c66d8d-zgkzs_ba5156ce-3f67-4773-8335-d0119089f340/coredns/15.log: no such file or directory

The error message seems to be related to the fact that no '*.log' file is created, even though the directory does exist.

[root@fox ~]# ls -l /var/log/pods/kube-system_coredns-7944c66d8d-zgkzs_ba5156ce-3f67-4773-8335-d0119089f340/coredns/
total 0

Steps To Reproduce:

  • Install clean CentOS 7
  • Install k3s
$ wget https://github.com/rancher/k3s/releases/download/v1.18.8%2Bk3s1/k3s 
  • Run k3s
./k3s server
  • Verify status of pods
[root@fox ~]# ./k3s kubectl get pods -n kube-system 
NAME                                     READY   STATUS             RESTARTS   AGE
helm-install-traefik-mg6lk               0/1     CrashLoopBackOff   14         57m
metrics-server-7566d596c8-fks6z          0/1     CrashLoopBackOff   15         57m
local-path-provisioner-6d59f47c7-ff9bg   0/1     CrashLoopBackOff   16         57m
coredns-7944c66d8d-zgkzs                 0/1     CrashLoopBackOff   16         57m

Expected behavior:

I would expect the pods to be in a Running state.

Actual behavior:

The pods are in a CrashLoopBackOff state and are constantly restarting.
The log files of the pods show the following:

lstat /var/log/pods/kube-system_coredns-7944c66d8d-zgkzs_ba5156ce-3f67-4773-8335-d0119089f340/coredns/15.log: no such file or directory

After some retries the file is created and no error message is shown anymore; however, the pod is still in CrashLoopBackOff status.

# File has been created
[root@fox ~]# ls -l /var/log/pods/kube-system_coredns-7944c66d8d-zgkzs_ba5156ce-3f67-4773-8335-d0119089f340/coredns/
total 0
-rw-r-----. 1 root root 0 Aug 24 16:09 16.log
# No logs are shown
[root@fox ~]# ./k3s  kubectl logs -n kube-system coredns-7944c66d8d-zgkzs
# Pods are still in CrashLoopBackOff status
[root@fox ~]# ./k3s kubectl get pods -n kube-system 
NAME                                     READY   STATUS             RESTARTS   AGE
metrics-server-7566d596c8-fks6z          0/1     CrashLoopBackOff   16         60m
local-path-provisioner-6d59f47c7-ff9bg   0/1     CrashLoopBackOff   17         60m
coredns-7944c66d8d-zgkzs                 0/1     CrashLoopBackOff   17         60m
helm-install-traefik-mg6lk               0/1     CrashLoopBackOff   15         60m
# Killing the pod has no effect since the '*.log' file error will show up again.
[root@fox ~]# ./k3s kubectl delete pod coredns-7944c66d8d-zgkzs -n kube-system
pod "coredns-7944c66d8d-zgkzs" deleted

[root@fox ~]# ./k3s  kubectl logs -n kube-system coredns-7944c66d8d-56wsb
failed to try resolving symlinks in path "/var/log/pods/kube-system_coredns-7944c66d8d-56wsb_3f769173-b718-4cd7-9c04-095025f2276e/coredns/2.log": lstat /var/log/pods/kube-system_coredns-7944c66d8d-56wsb_3f769173-b718-4cd7-9c04-095025f2276e/coredns/2.log: no such file or directory
vvanouytsel changed the title from "k3s 1.18.8" to "k3s 1.18.8 pods stuck in CrashLoopBackOff" on Aug 24, 2020
@brandond
Member

brandond commented Aug 24, 2020

What do you see if you kubectl describe pod -n kube-system coredns - in particular, what sort of events do you have? Do you have any errors in journalctl -u k3s?

@vvanouytsel
Author

Below is the output when describing the coredns pod.

[root@fox ~]# ./k3s kubectl describe pod coredns-7944c66d8d-jggkr -n kube-system
Name:           coredns-7944c66d8d-jggkr
Namespace:      kube-system
Priority:       0
Node:           fox/192.168.0.233
Start Time:     Tue, 25 Aug 2020 08:23:34 +0200
Labels:         k8s-app=kube-dns
                pod-template-hash=7944c66d8d
Annotations:    <none>
Status:         Running
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/coredns-7944c66d8d
Containers:
  coredns:
    Container ID:  containerd://48d1f3f9b3745fb52ec56d36e7ac6c1645caa3dcbc716eadd4ee95b678d7b527
    Image:         rancher/coredns-coredns:1.6.9
    Image ID:      docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      sandbox container "65dbdba0f063f788f8b2d582928503a829efc5834200a74e3f3c22e5dff77862" is not running
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 01:00:00 +0100
      Finished:     Tue, 25 Aug 2020 08:24:07 +0200
    Ready:          False
    Restart Count:  3
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=10s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-dgg8g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-dgg8g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-dgg8g
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                 From               Message
  ----     ------          ----                ----               -------
  Normal   Scheduled       <unknown>           default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-jggkr to fox
  Warning  Failed          45s                 kubelet, fox       Error: failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/28443/ns/ipc\" caused \"lstat /proc/28443/ns/ipc: no such file or directory\"": unknown
  Normal   Pulled          44s (x2 over 45s)   kubelet, fox       Container image "rancher/coredns-coredns:1.6.9" already present on machine
  Normal   Created         44s (x2 over 45s)   kubelet, fox       Created container coredns
  Warning  Failed          44s                 kubelet, fox       Error: sandbox container "503b3a5522dab20234b681026bbb3f6e2142071a01fffc1518a85a6878cf2c04" is not running
  Normal   SandboxChanged  35s (x10 over 44s)  kubelet, fox       Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         35s (x9 over 43s)   kubelet, fox       Back-off restarting failed container

The logs of the k3s service show many messages like the following:

W0825 08:25:24.800481    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073: no such file or directory
W0825 08:25:24.800501    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073: no such file or directory
W0825 08:25:24.800525    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/b37f39d65e02a99f29b4f3119ab49d9e06fdf7a46b40afeb7f56a62d8c634073: no such file or directory
W0825 08:25:24.800543    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpuset/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.800565    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.800582    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.800603    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.801650    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.801682    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/e503c468d7b6d12d2a75bd85302863eb4dc2dfdfbe7ad3ae13cae4031f2006c0: no such file or directory
W0825 08:25:24.801732    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpuset/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:24.801754    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:24.801771    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:24.801791    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:24.801904    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:24.801932    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/besteffort/pod24b2bd47-47e0-49c6-9b10-7ce62c75a662/b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83: no such file or directory
W0825 08:25:25.698406    1540 pod_container_deletor.go:77] Container "b71b5d2378a3a0ae21c2b5caae2109ef7d1d88a7ea53cc841dd92cc10e32da83" not found in pod's containers
E0825 08:25:25.987789    1540 pod_workers.go:191] Error syncing pod 1ab9db4b-5a1e-42ce-ac82-86d9480963a2 ("metrics-server-7566d596c8-fks6z_kube-system(1ab9db4b-5a1e-42ce-ac82-86d9480963a2)"), skipping: failed to "StartContainer" for "metrics-server" with CrashLoopBackOff: "back-off 5m0s restarting failed container=metrics-server pod=metrics-server-7566d596c8-fks6z_kube-system(1ab9db4b-5a1e-42ce-ac82-86d9480963a2)"
E0825 08:25:26.266633    1540 pod_workers.go:191] Error syncing pod 8c727328-93e9-46fe-8184-3add823cf15d ("coredns-7944c66d8d-jggkr_kube-system(8c727328-93e9-46fe-8184-3add823cf15d)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 1m20s restarting failed container=coredns pod=coredns-7944c66d8d-jggkr_kube-system(8c727328-93e9-46fe-8184-3add823cf15d)"
E0825 08:25:26.270030    1540 pod_workers.go:191] Error syncing pod dd2fed4d-6c2a-46d4-b164-f7da25ad113d ("helm-install-traefik-mg6lk_kube-system(dd2fed4d-6c2a-46d4-b164-f7da25ad113d)"), skipping: failed to "StartContainer" for "helm" with CrashLoopBackOff: "back-off 5m0s restarting failed container=helm pod=helm-install-traefik-mg6lk_kube-system(dd2fed4d-6c2a-46d4-b164-f7da25ad113d)"
W0825 08:25:26.371296    1540 manager.go:1131] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4344d35c33bdbd5bc303057ce177cfe754493f9cdc2162f31ea19da8d05f3fe7 WatchSource:0}: task 4344d35c33bdbd5bc303057ce177cfe754493f9cdc2162f31ea19da8d05f3fe7 not found: not found
W0825 08:25:26.371354    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76: no such file or directory
W0825 08:25:26.371388    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76: no such file or directory
W0825 08:25:26.371407    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76: no such file or directory
W0825 08:25:26.371429    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76: no such file or directory
W0825 08:25:26.371453    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/besteffort/pod1ab9db4b-5a1e-42ce-ac82-86d9480963a2/4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76: no such file or directory
W0825 08:25:26.371548    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpuset/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371571    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371588    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371608    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371625    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371646    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/burstable/pod8c727328-93e9-46fe-8184-3add823cf15d/99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0: no such file or directory
W0825 08:25:26.371663    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpuset/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
W0825 08:25:26.371684    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
W0825 08:25:26.371700    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
W0825 08:25:26.372148    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
W0825 08:25:26.372171    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
W0825 08:25:26.372192    1540 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/kubepods/besteffort/poddd2fed4d-6c2a-46d4-b164-f7da25ad113d/b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469: no such file or directory
E0825 08:25:26.703457    1540 pod_workers.go:191] Error syncing pod 24b2bd47-47e0-49c6-9b10-7ce62c75a662 ("local-path-provisioner-6d59f47c7-ff9bg_kube-system(24b2bd47-47e0-49c6-9b10-7ce62c75a662)"), skipping: failed to "StartContainer" for "local-path-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=local-path-provisioner pod=local-path-provisioner-6d59f47c7-ff9bg_kube-system(24b2bd47-47e0-49c6-9b10-7ce62c75a662)"
W0825 08:25:26.709169    1540 pod_container_deletor.go:77] Container "99d2bbf0ae53616aaa28c411161bc2a402cc2645c1218a2e03eac4a6b58594a0" not found in pod's containers
W0825 08:25:26.733912    1540 pod_container_deletor.go:77] Container "b24c079faed0c1b2b1d2b0d2ee6a982cc80edb32a803135a7a1d0e72fc7f0469" not found in pod's containers
W0825 08:25:26.747738    1540 pod_container_deletor.go:77] Container "4ff98e73bbb8cb3c6c6de4331cc89b2d3f847d767a688810503aeca38e03fb76" not found in pod's containers
W0825 08:25:27.765191    1540 pod_container_deletor.go:77] Container "d618da3d4d1a09d243ccd4f19f948702c8265c46970ce6a2905fadb8da428a43" not found in pod's containers

All other pods have the same type of event:

  Warning  Failed          19s                kubelet, fox       Error: failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/23578/ns/ipc\" caused \"lstat /proc/23578/ns/ipc: no such file or directory\"": unknown

It seems that containerd is not able to create the container?
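
As a quick check (standard CentOS 7 tooling, a sketch rather than output from this system), it is worth confirming whether SELinux is enforcing and whether the runtime is hitting AVC denials:

getenforce                    # Enforcing / Permissive / Disabled
ausearch -m avc -ts recent    # recent AVC denials, if any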

@dweomer
Contributor

dweomer commented Aug 25, 2020

@vvanouytsel given your installation method, I suspect that you do not have the k3s-selinux policy installed. This is something that you would be guided through if you installed via the install.sh method:

# on a clean centos/7 vagrant box
[vagrant@localhost ~]$ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.18.8+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[ERROR]  Failed to find the k3s-selinux policy, please install:
    yum install -y container-selinux selinux-policy-base
    rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

So, either disable SELinux (sudo setenforce 0 # temporary, won't persist through a reboot) or install the selinux policy via:

sudo yum -y install container-selinux selinux-policy-base https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

@vvanouytsel
Author

I've manually installed the following packages.

$ sudo yum -y install container-selinux selinux-policy-base https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

I've deleted the coredns pod and it still throws the following error:
failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/3230/ns/ipc\" caused \"lstat /proc/3230/ns/ipc: no such file or directory\"": unknown

[root@fox ~]# ./k3s kubectl describe pod coredns-7944c66d8d-hbcz5 -n kube-system
Name:         coredns-7944c66d8d-hbcz5
Namespace:    kube-system
Priority:     0
Node:         fox/10.0.2.15
Start Time:   Wed, 26 Aug 2020 08:54:58 +0200
Labels:       k8s-app=kube-dns
              pod-template-hash=7944c66d8d
Annotations:  <none>
Status:       Running
IP:           10.42.0.19
IPs:
  IP:           10.42.0.19
Controlled By:  ReplicaSet/coredns-7944c66d8d
Containers:
  coredns:
    Container ID:  containerd://fdf7fc0c6d540d280cd2f7181263f0e61859324eaf7941f8036db8b44660ce9a
    Image:         rancher/coredns-coredns:1.6.9
    Image ID:      docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/3230/ns/ipc\" caused \"lstat /proc/3230/ns/ipc: no such file or directory\"": unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 01:00:00 +0100
      Finished:     Wed, 26 Aug 2020 09:00:14 +0200
    Ready:          False
    Restart Count:  6
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=10s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-dgg8g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-dgg8g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-dgg8g
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                      From               Message
  ----     ------          ----                     ----               -------
  Normal   Scheduled       <unknown>                default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-hbcz5 to fox
  Warning  Failed          8m46s                    kubelet, fox       Error: sandbox container "2ec9443b4e298c39baf9a87886ff49a537604f24af1735533c018bc93ded464e" is not running
  Normal   Pulled          8m45s (x2 over 8m46s)    kubelet, fox       Container image "rancher/coredns-coredns:1.6.9" already present on machine
  Normal   Created         8m45s (x2 over 8m46s)    kubelet, fox       Created container coredns
  Warning  Failed          8m45s                    kubelet, fox       Error: sandbox container "d079b26b00982c83cd455df99be3158c67eb93d9af6de67730fb95015abe03fd" is not running
  Normal   SandboxChanged  8m36s (x10 over 8m45s)   kubelet, fox       Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         3m46s (x142 over 8m44s)  kubelet, fox       Back-off restarting failed container

I've also tried the k3s install script on a clean CentOS 7.8 vagrant box, and that worked perfectly.
So I guess my next steps are to look through the k3s install script and verify what else it is doing.
For my use case I would like to simply drop the k3s binary using Ansible, instead of running the install script.

@vvanouytsel
Author

After some more testing I can confirm that everything works fine as long as I install the selinux packages before running k3s server.

$ sudo yum -y install container-selinux selinux-policy-base https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

On the broken system where I installed the selinux packages after I already started k3s, I had to do the following:

  • Manually delete all pods/deployments/statefulsets/jobs/
[root@localhost ~]# k3s kubectl delete deploy,job,statefulset,pod --all-namespaces --all
  • Restart k3s
  • Recreate all deployments/statefulsets/jobs/...

Containerd is now able to successfully start the containers.

[root@localhost ~]# k3s kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
local-path-provisioner-6d59f47c7-bgh8w   1/1     Running     0          3m8s
metrics-server-7566d596c8-msdcb          1/1     Running     0          3m8s
coredns-7944c66d8d-t2j8d                 1/1     Running     0          3m8s
helm-install-traefik-r8nqs               0/1     Completed   2          3m9s
svclb-traefik-qf59d                      2/2     Running     0          2m49s
traefik-758cd5fc85-fcttx                 1/1     Running     0          2m49s

@brandond
Member

If you'd installed from the script, it would have checked for this on install to ensure that you got the package in before starting k3s for the first time. It would have also dropped a k3s-killall.sh that you could have used to terminate the pods so that they could be recreated with the correct context. Although in this case, I suspect that just restarting the node (which would have recreated the pods) would have fixed your issue as well.
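
For a manual install that skips the script, a rough equivalent of that pre-flight check (a sketch, not the script's actual code) would be:

if [ "$(getenforce 2>/dev/null)" = "Enforcing" ] && ! rpm -q k3s-selinux >/dev/null 2>&1; then
    echo "SELinux is enforcing but the k3s-selinux policy is not installed" >&2
    exit 1
fi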

@rancher-max
Contributor

@vvanouytsel it looks like your issue has been solved then per above? If so, I will close this out.

FWIW, I recreated the problem you had and came to the same resolution as the workaround.

As @brandond mentioned, the simpler install steps would just be to install k3s from the script:

$ sudo yum -y install container-selinux selinux-policy-base https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
...
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.18.8+k3s1 sh -

If you run the install script before installing the SELinux policy, you'll get an error like:

[ERROR]  Failed to find the k3s-selinux policy, please install:
    yum install -y container-selinux selinux-policy-base
    rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

@vvanouytsel
Author

@brandond restarting the node does not solve the issue.
Restarting the pods also does not solve the issue.
Previously I thought that removing the parent resources (Deployments, Statefulsets, Jobs, Daemonsets, ...) and recreating them would fix the issue after the SELinux packages are installed, but this does not seem to be the case.

[root@localhost ~]# yum list installed | grep selinux
container-selinux.noarch           2:2.119.2-1.911c772.el7_8  @extras           
k3s-selinux.noarch                 0.1.1-rc1.el7              @/k3s-selinux-0.1.1-rc1.el7.noarch
libselinux.x86_64                  2.5-15.el7                 @anaconda         
libselinux-python.x86_64           2.5-15.el7                 @anaconda         
libselinux-utils.x86_64            2.5-15.el7                 @anaconda         
selinux-policy.noarch              3.13.1-266.el7_8.1         @updates          
selinux-policy-targeted.noarch     3.13.1-266.el7_8.1         @updates  

After restarting k3s, the pods still stay in CrashLoopBackOff.
Even a reboot did not solve the issue.

[root@localhost ~]# k3s kubectl describe pod  coredns-7944c66d8d-kz6c9  -n kube-system
Name:         coredns-7944c66d8d-kz6c9
Namespace:    kube-system
Priority:     0
Node:         localhost.localdomain/10.0.2.15
Start Time:   Thu, 27 Aug 2020 07:35:01 +0000
Labels:       k8s-app=kube-dns
              pod-template-hash=7944c66d8d
Annotations:  <none>
Status:       Running
IP:           10.42.0.184
IPs:
  IP:           10.42.0.184
Controlled By:  ReplicaSet/coredns-7944c66d8d
Containers:
  coredns:
    Container ID:  containerd://ca691db5d74e63e2991532b3b0fedf681ae638f55f5c6335435f34e937601e03
    Image:         rancher/coredns-coredns:1.6.9
    Image ID:      docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/8575/ns/ipc\" caused \"lstat /proc/8575/ns/ipc: no such file or directory\"": unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 00:00:00 +0000
      Finished:     Thu, 27 Aug 2020 07:36:14 +0000
    Ready:          False
    Restart Count:  3
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=10s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-shz7k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-shz7k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-shz7k
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                 From                            Message
  ----     ------          ----                ----                            -------
  Normal   Scheduled       <unknown>           default-scheduler               Successfully assigned kube-system/coredns-7944c66d8d-kz6c9 to localhost.localdomain
  Normal   Created         92s                 kubelet, localhost.localdomain  Created container coredns
  Warning  Failed          92s                 kubelet, localhost.localdomain  Error: failed to create containerd task: OCI runtime create failed: container_linux.go:341: creating new parent process caused "container_linux.go:1923: running lstat on namespace path \"/proc/1319/ns/ipc\" caused \"lstat /proc/1319/ns/ipc: no such file or directory\"": unknown
  Warning  Failed          92s                 kubelet, localhost.localdomain  Error: failed to get sandbox container task: no running task found: task e93898e9793b58185a060cb356958a5c2d25e8073a385c523e98c0c9360ac2e5 not found: not found
  Warning  BackOff         82s (x9 over 90s)   kubelet, localhost.localdomain  Back-off restarting failed container
  Normal   SandboxChanged  82s (x10 over 91s)  kubelet, localhost.localdomain  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          81s (x3 over 92s)   kubelet, localhost.localdomain  Container image "rancher/coredns-coredns:1.6.9" already present on machine

The logs of the k3s server process show the following:

W0827 07:37:41.541807    1041 pod_container_deletor.go:77] Container "6443cd5f38821e3b2ceaba1423d5d25c9cf21fb1ba6e12a05102b3e47b6a208e" not found in pod's containers
E0827 07:37:42.291292    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:42.565331    1041 pod_container_deletor.go:77] Container "bf55cb41babadbbd05bca069a56e6257251d5296e23e8631b332466b9b509c58" not found in pod's containers
W0827 07:37:42.990273    1041 manager.go:1131] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod758b7454-dfb6-40a5-b228-b9e1a90a7e46/b8141a779ac848b2f1d290cb208f4e5e6025ef7e9cfc119d1ade991284347a4d WatchSource:0}: task b8141a779ac848b2f1d290cb208f4e5e6025ef7e9cfc119d1ade991284347a4d not found: not found
E0827 07:37:43.305835    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:43.601735    1041 pod_container_deletor.go:77] Container "5f359ff4a23086ad3b1b47543571d6397e8f2a94f9468e4e1e87035cc158455d" not found in pod's containers
E0827 07:37:44.318715    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:44.494354    1041 manager.go:1131] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod758b7454-dfb6-40a5-b228-b9e1a90a7e46/6443cd5f38821e3b2ceaba1423d5d25c9cf21fb1ba6e12a05102b3e47b6a208e WatchSource:0}: task 6443cd5f38821e3b2ceaba1423d5d25c9cf21fb1ba6e12a05102b3e47b6a208e not found: not found
W0827 07:37:44.644697    1041 pod_container_deletor.go:77] Container "6381906248ec765455a9b723f03d972a170344c425110a62a39c8467fd6d4804" not found in pod's containers
E0827 07:37:45.398486    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:45.668889    1041 pod_container_deletor.go:77] Container "0e8d7207b07c82f2e39d41f45fbf9b363db20af14c4c7b82fd3f251e851855a6" not found in pod's containers
W0827 07:37:46.002715    1041 manager.go:1131] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod758b7454-dfb6-40a5-b228-b9e1a90a7e46/bf55cb41babadbbd05bca069a56e6257251d5296e23e8631b332466b9b509c58 WatchSource:0}: task bf55cb41babadbbd05bca069a56e6257251d5296e23e8631b332466b9b509c58 not found: not found
E0827 07:37:46.481327    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:46.710841    1041 pod_container_deletor.go:77] Container "2e32f3f59af7cbd8412481804aac423b4f4f89b6f35c9a08b9bfdc1ddfaec8a7" not found in pod's containers
E0827 07:37:47.492781    1041 pod_workers.go:191] Error syncing pod 758b7454-dfb6-40a5-b228-b9e1a90a7e46 ("coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 2m40s restarting failed container=coredns pod=coredns-7944c66d8d-kz6c9_kube-system(758b7454-dfb6-40a5-b228-b9e1a90a7e46)"
W0827 07:37:47.505856    1041 manager.go:1131] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod758b7454-dfb6-40a5-b228-b9e1a90a7e46/5f359ff4a23086ad3b1b47543571d6397e8f2a94f9468e4e1e87035cc158455d WatchSource:0}: task 5f359ff4a23086ad3b1b47543571d6397e8f2a94f9468e4e1e87035cc158455d not found: not found

@vvanouytsel
Author

vvanouytsel commented Aug 27, 2020

After installing the SELinux packages I ran restorecon.

restorecon -Rv /

When restarting k3s I found the following SELinux denials in the audit.log file.

[root@localhost ~]# tail -f /var/log/audit/audit.log  | grep avc
type=AVC msg=audit(1598524830.002:766834): avc:  denied  { read execute } for  pid=2434 comm="pause" path="/pause" dev="sda1" ino=67634416 scontext=system_u:system_r:container_t:s0:c541,c744 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
type=AVC msg=audit(1598524830.982:767095): avc:  denied  { read execute } for  pid=3563 comm="pause" path="/pause" dev="sda1" ino=67634416 scontext=system_u:system_r:container_t:s0:c450,c524 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
type=AVC msg=audit(1598524832.063:767364): avc:  denied  { read execute } for  pid=4691 comm="pause" path="/pause" dev="sda1" ino=67634416 scontext=system_u:system_r:container_t:s0:c462,c480 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
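
These raw AVC records can be fed through audit2why (shipped alongside audit2allow in policycoreutils-python) to get an explanation for each denial, for example:

ausearch -m avc -ts recent | audit2why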

@vvanouytsel
Author

When investigating the broken system further, I can see that containerd tries to start a container with image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e, which fails, so it retries.

# This list gets bigger and bigger
[root@localhost ~]# k3s ctr container list
CONTAINER                                                           IMAGE                                                                      RUNTIME                  
04ae86f757f2996c0fd682fbd6c7d0da3007e8237d18c2107a011348fba1b5be    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
053ea4e6f01270887437a9f121cecd878f3068bf88e051a1436557addfffbbb0    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
1f717698484633c585f7bee3a2747c6db0944ffe0197f8177167e7ea1e6bde48    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
22d7e511d252bd22088210b2dcfd55cef742b868df196321edb9773e22931f09    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
29e98b39e98bc0c8b6897948b9704e3d8986720ba3c7a4c421a8673b5dd9ee92    sha256:4e797b3234604c31f729cb63b6128b623e2f76e629d53ccb84d899de4e73f759    io.containerd.runc.v2    
2d673abe3c5b22492931e896d12eb9891a0d6ea6f9098a651e51358ba39d0d4f    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
3a771d947a5ebfc9a1a40fa514b3de63a2cdf110a735fd3cc5f9dd6ae5b5d47c    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
405a47aa4415269d606c90151eaf30e8e1df6eeed5389a88875260527935721a    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
4ca269034d9429fcb369cd41a6d0e0f0b066dd33e1b8920928630ec4389bf5f1    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
52752ba6db761c94a0e8781b1205505c91a20ebd24829e71e8efd09604327125    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
542db3877efb05e661e3b839730d3a7611c699cd38b3f79f30152ca9e4656c68    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
565286e4eab85d2d79a96d79694a1694172817b046ede891459badc1830a175f    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
59c35154a4401ae1032d5a73df1d58112386a64efaa504edcd2cbeae407ca6be    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
60f5a03cf4f929d904af87faf3f05843bbb3988f816435238835526281318526    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
7368e3384dc0e977ba45c10f991e1f1ac79c2638947f236357a1434ef0b7eca3    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
88694630e09c9f79edae338fa9ebfcca250a9287ef0bf760c4b6f8ba29500974    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
acd5df80ee3e5f6bd0eec05967e5c3d9ccf2d85d5b9d58035ea57a1e2d4dbdc6    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
c35de9e2af01c5b19a739f2114057bc7680358396e916acae94bf42694b59423    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
d0002983b72cc84787c254fe062cb379b3d4531f481d8dad9dedd6061d01af57    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
d01ed0ac4b35a9cfd628cb45a241958c5cf0f77ac900ae01d53cb9f6faef38da    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
da62bd025b86e38bc0f4db659dc480dd40d74fc1c6d284d00c448d1b2e221ab6    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
e4c72e5c0451562bc94aeb39436b30c1240deb2cadf6194797219d03fbcf43ee    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
e5a142d020f98f47d69d047e8b417badfdec7bb87f66c44642311e0748780c6a    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
e7e58105c9919aa056c1105f08b95d911564fcc75765dd7935ffc48e287dbdd2    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
ee6dd1996bd374140e5322343affcd54c6691cff51aef2957610004114412a2a    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
f6568a493a1d78eeec8fc2120c57ea457c090ebaa01bdda49feb8e65da243721    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2    
fc2395ec0c0e43181f16ac5a5c3a7ca73ac4c2244d434079746165a0dce7cad8    sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e    io.containerd.runc.v2 

We can see that the image maps to the docker.io/rancher/pause:3.1 image.

[root@localhost ~]# k3s ctr image list | grep sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e                                   application/vnd.docker.distribution.manifest.list.v2+json sha256:d22591b61e9c2b52aecbf07106d5db313c4f178e404d660b32517b18fcbf0144 318.9 KiB linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed
[root@localhost ~]# k3s crictl image ls  | grep da86e6b
docker.io/rancher/pause             3.1                 da86e6ba6ca19       327kB

When matching this with my previous comment, it seems that something SELinux-related is not yet set up correctly for that image.

type=AVC msg=audit(1598524830.002:766834): avc:  denied  { read execute } for  pid=2434 comm="pause" path="/pause" dev="sda1" ino=67634416 scontext=system_u:system_r:container_t:s0:c541,c744 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
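
The inode from the AVC record can also be used to locate the backing file on the host and confirm its label (assuming the default k3s data dir; this path is not from the output above):

find /var/lib/rancher/k3s -inum 67634416 -exec ls -Z {} +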

@vvanouytsel
Author

When looking through journalctl, we can also see that the SELinux error is related to /pause.
However, the /pause file is not present on the host filesystem, since it exists inside a container.

[root@localhost ~]# journalctl -b -0
....
Aug 27 11:41:21 localhost.localdomain kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Aug 27 11:41:21 localhost.localdomain setroubleshoot[4550]: SELinux is preventing /pause from 'read, execute' accesses on the file /pause. For complete SELinux messages run: sealert -l 00838093-401f-4614-b585-af900140deb6
Aug 27 11:41:21 localhost.localdomain python[4550]: SELinux is preventing /pause from 'read, execute' accesses on the file /pause.
                                                    
                                                    *****  Plugin restorecon (99.5 confidence) suggests   ************************
                                                    
                                                    If you want to fix the label. 
                                                    /pause default label should be etc_runtime_t.
                                                    Then you can run restorecon. The access attempt may have been stopped due to insufficient permissions to access a parent directory in which case try to change the following command accordingly.
                                                    Do
                                                    # /sbin/restorecon -v /pause
                                                    
                                                    *****  Plugin catchall (1.49 confidence) suggests   **************************
                                                    
                                                    If you believe that pause should be allowed read execute access on the pause file by default.
                                                    Then you should report this as a bug.
                                                    You can generate a local policy module to allow this access.
                                                    Do
                                                    allow this access for now by executing:
                                                    # ausearch -c 'pause' --raw | audit2allow -M my-pause
                                                    # semodule -i my-pause.pp

@vvanouytsel
Author

I was able to work around the SELinux problem related to the '/pause' file, which is used in the 'docker.io/rancher/pause' image, by running the following:

[root@localhost ~]# ausearch -c 'pause' --raw | audit2allow -M my-pause
[root@localhost ~]# semodule -i my-pause.pp

Although I am not sure why I had to do this manually after installing the required SELinux packages...

@vvanouytsel
Author

Just to make this issue complete: it seems that there were two events related to the pause command.

[root@localhost ~]# ausearch -c 'pause'
...
----
time->Thu Aug 27 13:03:41 2020
type=ANOM_ABEND msg=audit(1598533421.490:247073): auid=1000 uid=0 gid=0 ses=3 subj=system_u:system_r:container_t:s0:c185,c332 pid=13246 comm="pause" reason="memory violation" sig=11
----
time->Thu Aug 27 13:03:43 2020
type=PROCTITLE msg=audit(1598533423.175:247269): proctitle="(null)"
type=SYSCALL msg=audit(1598533423.175:247269): arch=c000003e syscall=59 success=no exit=-13 a0=c000167a20 a1=c0001514d0 a2=c0001651a0 a3=0 items=0 ppid=14081 pid=14101 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="pause" exe="/pause" subj=system_u:system_r:container_t:s0:c380,c863 key=(null)
type=AVC msg=audit(1598533423.175:247269): avc:  denied  { read execute } for  pid=14101 comm="pause" path="/pause" dev="sda1" ino=67634416 scontext=system_u:system_r:container_t:s0:c380,c863 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
----
...

@brandond
Member

As you've noticed, it's a fair bit of work to try to repair a system that's been brought up without the correct packages and policies available. It's even more work to try to do it without simply deleting everything and starting over.

@davidnuzik I suggest that we handle this as a documentation issue - the install script checks for this and prevents users from starting k3s without the selinux policies in place, but users who want to drop the binary directly without using the script (or RPM) should be responsible for ensuring that this is done themselves.

@vvanouytsel
Author

I am also able to reproduce SELinux issues when defining a custom 'data-dir' path.

[root@localhost ~]# yum list installed | grep selinux
container-selinux.noarch            2:2.119.2-1.911c772.el7_8  @extras          
k3s-selinux.noarch                  0.1.1-rc1.el7              @/k3s-selinux-0.1.1-rc1.el7.noarch
libselinux.x86_64                   2.5-15.el7                 @anaconda        
libselinux-python.x86_64            2.5-15.el7                 @anaconda        
libselinux-utils.x86_64             2.5-15.el7                 @anaconda        
selinux-policy.noarch               3.13.1-266.el7_8.1         @updates         
selinux-policy-targeted.noarch      3.13.1-266.el7_8.1         @updates  

[root@localhost ~]# mkdir -p /mnt/data/kubernetes/
[root@localhost ~]# ls -ldZ  /mnt/data/kubernetes/
drwxr-xr-x. root root unconfined_u:object_r:mnt_t:s0   /mnt/data/kubernetes/

[root@localhost ~]# k3s server --data-dir=/mnt/data/kubernetes/ --no-deploy=metrics-server,traefik,servicelb,local-storage

In journalctl, the following SELinux-related error message is shown:

Aug 28 07:54:26 localhost.localdomain setroubleshoot[4412]: SELinux is preventing /pause from 'read, execute' accesses on the file /pause. For complete SELinux messages run: sealert -l bf37dffe-0acb-4101-8cd7-1a23e9f7b560
[root@localhost ~]# sealert -l bf37dffe-0acb-4101-8cd7-1a23e9f7b560
SELinux is preventing /pause from 'read, execute' accesses on the file /pause.

*****  Plugin restorecon (99.5 confidence) suggests   ************************

If you want to fix the label. 
/pause default label should be etc_runtime_t.
Then you can run restorecon. The access attempt may have been stopped due to insufficient permissions to access a parent directory in which case try to change the following command accordingly.
Do
# /sbin/restorecon -v /pause

*****  Plugin catchall (1.49 confidence) suggests   **************************

If you believe that pause should be allowed read execute access on the pause file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'pause' --raw | audit2allow -M my-pause
# semodule -i my-pause.pp


Additional Information:
Source Context                system_u:system_r:container_t:s0:c34,c437
Target Context                unconfined_u:object_r:mnt_t:s0
Target Objects                /pause [ file ]
Source                        pause
Source Path                   /pause
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-266.el7_8.1.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 3.10.0-1127.el7.x86_64
                              #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64
Alert Count                   1
First Seen                    2020-08-28 07:54:25 UTC
Last Seen                     2020-08-28 07:54:25 UTC
Local ID                      bf37dffe-0acb-4101-8cd7-1a23e9f7b560

Raw Audit Messages
type=AVC msg=audit(1598601265.883:1160): avc:  denied  { read execute } for  pid=4922 comm="pause" path="/pause" dev="sda1" ino=33586703 scontext=system_u:system_r:container_t:s0:c34,c437 tcontext=unconfined_u:object_r:mnt_t:s0 tclass=file permissive=0


type=SYSCALL msg=audit(1598601265.883:1160): arch=x86_64 syscall=execve success=no exit=EACCES a0=c0000a1a50 a1=c00010eba0 a2=c0000971a0 a3=0 items=0 ppid=4902 pid=4922 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4 comm=pause exe=/pause subj=system_u:system_r:container_t:s0:c34,c437 key=(null)

Hash: pause,container_t,mnt_t,file,read,execute

Adding a custom module for all 'pause' SELinux alerts works around the issue.

[root@localhost ~]# ausearch -c 'pause' --raw | audit2allow -M my-pause
[root@localhost ~]# semodule -i my-pause.pp

After solving the 'pause' issue, the next SELinux-related alert pops up.

Aug 28 07:56:49 localhost.localdomain setroubleshoot[24296]: failed to retrieve rpm info for /etc/resolv.conf
Aug 28 07:56:49 localhost.localdomain setroubleshoot[24296]: SELinux is preventing /coredns from open access on the file /etc/resolv.conf. For complete SELinux messages run: sealert -l fe3663c0-08c4-48cd-a765-eecd4b455b19

Again, by creating a custom module you can work around the issue.

[root@localhost ~]# ausearch -c 'coredns' --raw | audit2allow -M my-coredns
[root@localhost ~]# semodule -i my-coredns.pp

After these two manual actions, k3s is running properly with a custom data directory.
This can also be replicated by using the k3s install script.
It seems that using the '--data-dir' k3s argument is not supported when SELinux is enforcing?

@brandond
Member

brandond commented Aug 28, 2020

The install script does not validate or take any action on the flags; they're passed through to the systemd/openrc service config as-is. Setting the correct context on the data-dir (whether default or custom) would definitely be part of the documentation for manual installs with selinux.
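
For a custom data-dir, one way to do that (a sketch, reusing the /mnt/data/kubernetes path from the comment above; semanage comes from policycoreutils-python) is an SELinux file-context equivalence rule that labels the custom path like the default one:

semanage fcontext -a -e /var/lib/rancher/k3s /mnt/data/kubernetes
restorecon -R -v /mnt/data/kubernetes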

@davidnuzik
Contributor

Documentation for how we'll recommend a k3s install on an SELinux-enforcing system will be handled here: #2058

The plan will be to support the yum repo installation method.

@brandond
Member

brandond commented Dec 4, 2020

I believe this should be covered in the documentation now.
