This tutorial covers the fundamental building blocks that make up Kubernetes. Understanding what these components are and how they are used is crucial to learning how to use the higher level objects and resources.
Namespaces are a logical cluster or environment. They are the primary method of partitioning a cluster or scoping access.
Objectives: Learn how to create and switch between Kubernetes Namespaces using kubectl.
NOTE: If you are coming from the CLI tutorial, you may have completed this already.
- List the current namespaces
$ kubectl get namespaces
- Create the `dev` namespace.
$ kubectl create namespace dev
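The namespace can also be created declaratively. A minimal sketch of the equivalent manifest is below; the filename `namespace-dev.yaml` is only illustrative and is not one of this tutorial's manifests.
apiVersion: v1
kind: Namespace
metadata:
  name: dev   # same namespace the imperative command above creates
$ kubectl apply -f namespace-dev.yaml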
- Create a new context called `kind-dev` within the `kind-kind` cluster as the `kind-kind` user, with the namespace set to `dev`.
$ kubectl config set-context kind-dev --cluster=kind-kind --user=kind-kind --namespace=dev
- Switch to the newly created context.
$ kubectl config use-context kind-dev
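To confirm the switch took effect, check the active context:
$ kubectl config current-context
It should return `kind-dev`.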
Summary: Namespaces function as the primary method of providing scoped names and access, and act as an umbrella for group-based resource restrictions. Creating and switching between them is quick and easy, but learning to use them is essential to general Kubernetes usage.
A pod is the atomic unit of Kubernetes. It is the smallest “unit of work” or “management resource” within the system and is the foundational building block of all Kubernetes Workloads.
Note: These exercises build off the previous Core tutorials. If you have not done so, complete those before continuing.
Objective: Examine both single and multi-container Pods, including viewing their attributes through the CLI and accessing their exposed services through the API Server proxy.
- Create a simple Pod called `pod-example` using the `nginx:stable-alpine` image and expose port `80`. Use the manifest `manifests/pod-example.yaml` or the yaml below.
manifests/pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
Command
$ kubectl create -f manifests/pod-example.yaml
- Use `kubectl` to describe the Pod and note the available information.
$ kubectl describe pod pod-example
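If you want the Pod's full object as stored by the API Server rather than the human-readable summary, the same information can be pulled as yaml:
$ kubectl get pod pod-example -o yaml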
- Use `kubectl proxy` to verify the web server running in the deployed Pod.
Command
$ kubectl proxy
URL
http://127.0.0.1:8001/api/v1/namespaces/dev/pods/pod-example/proxy/
The default "Welcome to nginx!" page should be visible.
- Using the same steps as above, create a new Pod called `multi-container-example` using the manifest `manifests/pod-multi-container-example.yaml`, or create a new one yourself with the yaml below.
manifests/pod-multi-container-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content
    image: alpine:latest
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo $(date)"<br />" >> /html/index.html;
          sleep 5;
        done
  volumes:
  - name: html
    emptyDir: {}
Command
$ kubectl create -f manifests/pod-multi-container-example.yaml
Note: `spec.containers` is an array, allowing you to use multiple containers within a Pod.
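When a Pod has more than one container, commands such as `exec` and `logs` need the `-c` flag to pick a specific container. For example, to peek at the file the `content` container is writing and at the `nginx` container's logs (a sketch, assuming the Pod from the manifest above):
$ kubectl exec multi-container-example -c content -- cat /html/index.html
$ kubectl logs multi-container-example -c nginx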
- Use the proxy to verify the web server running in the deployed Pod.
Command
$ kubectl proxy
URL
http://127.0.0.1:8001/api/v1/namespaces/dev/pods/multi-container-example/proxy/
There should be a repeating date-time-stamp.
Summary: Becoming familiar with creating and viewing the general aspects of a Pod is an important skill. While it is rare that one would manage Pods directly within Kubernetes, knowing how to view, access, and describe them is a common first step in troubleshooting a possible Pod failure.
Labels are key-value pairs that are used to identify, describe and group together related sets of objects or resources.
Selectors use labels to filter or select objects, and are used throughout Kubernetes.
Objective: Explore the methods of labeling objects and filtering them with both equality-based and set-based selectors.
- Label the Pod `pod-example` with `app=nginx` and `environment=dev` via `kubectl`.
$ kubectl label pod pod-example app=nginx environment=dev
- View the labels with `kubectl` by passing the `--show-labels` flag.
$ kubectl get pods --show-labels
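The `-L` (`--label-columns`) flag can also surface specific labels as their own columns, which is handy once `--show-labels` output gets noisy; for example:
$ kubectl get pods -L app,environment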
- Update the multi-container example manifest created previously with the labels `app=nginx` and `environment=prod`, then apply it via `kubectl`.
manifests/pod-multi-container-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-example
  labels:
    app: nginx
    environment: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content
    image: alpine:latest
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo $(date)"<br />" >> /html/index.html;
          sleep 5;
        done
  volumes:
  - name: html
    emptyDir: {}
Command
$ kubectl apply -f manifests/pod-multi-container-example.yaml
- View the added labels with `kubectl` by passing the `--show-labels` flag once again.
$ kubectl get pods --show-labels
- With the objects now labeled, use an equality-based selector targeting the `prod` environment.
$ kubectl get pods --selector environment=prod
- Do the same targeting the `nginx` app with the short version of the selector flag (`-l`).
$ kubectl get pods -l app=nginx
- Use a set-based selector to view all Pods where the `app` label is `nginx`, filtering out any that are in the `prod` environment.
$ kubectl get pods -l 'app in (nginx), environment notin (prod)'
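Labels can also be removed by suffixing the key with a dash. For example, if you wanted to strip the `environment` label from `pod-example` (not required for the rest of this tutorial):
$ kubectl label pod pod-example environment-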
Summary: Kubernetes makes heavy use of labels and selectors in nearly every aspect of the system. The usage of selectors may seem limited from the CLI, but the concept extends much further when used with higher-level resources and objects.
Services within Kubernetes are the unified method of accessing the exposed workloads of Pods. Unlike Pods, they are a durable resource: each Service is given a static, cluster-unique IP and provides simple load-balancing through kube-proxy.
Note: These exercises build off the previous Core tutorials. If you have not done so, complete those before continuing.
Objective: Create a `ClusterIP` Service and view the different ways it is accessible within the cluster.
- Create a `ClusterIP` Service called `clusterip` that targets Pods labeled with `app=nginx`, forwarding port `80`. Use either the yaml below or the manifest `manifests/service-clusterip.yaml`.
manifests/service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Command
$ kubectl create -f manifests/service-clusterip.yaml
- Describe the newly created Service. Note the `IP` and `Endpoints` fields.
$ kubectl describe service clusterip
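The Endpoints listed by describe are an object in their own right and can be inspected directly, which is useful when checking which Pod IPs a Service is currently selecting:
$ kubectl get endpoints clusterip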
- View the Service through `kubectl proxy` and refresh several times. It should serve up pages from both Pods.
Command
$ kubectl proxy
URL
http://127.0.0.1:8001/api/v1/namespaces/dev/services/clusterip/proxy/
- Lastly, verify that the generated DNS record has been created for the Service by using nslookup within the `pod-example` Pod that was provisioned in the Creating Pods exercise.
$ kubectl exec pod-example -- nslookup clusterip.dev.svc.cluster.local
It should return a valid response with the IP matching what was noted earlier when describing the Service.
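To cross-check the address returned by nslookup, the Service's cluster IP can also be pulled directly with jsonpath (a quick sketch); the two values should match:
$ kubectl get service clusterip -o jsonpath='{.spec.clusterIP}'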
Summary: The `ClusterIP` Service is the most commonly used Service within Kubernetes. Every `ClusterIP` Service is given a cluster-unique IP and DNS name that maps to one or more Pod `Endpoints`. It functions as the main method by which exposed Pod Services are consumed within a Kubernetes cluster.
Objective: Create a `NodePort`-based Service and explore how it is available both inside and outside the cluster.
- Create a `NodePort` Service called `nodeport` that targets Pods with the labels `app=nginx` and `environment=prod`, forwarding port `80` in cluster and port `32410` on the node itself. Use either the yaml below or the manifest `manifests/service-nodeport.yaml`.
manifests/service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  selector:
    app: nginx
    environment: prod
  ports:
  - nodePort: 32410
    protocol: TCP
    port: 80
    targetPort: 80
Command
$ kubectl create -f manifests/service-nodeport.yaml
- Describe the newly created Service. Note the Service still has an internal cluster `IP` and now additionally has a `NodePort`.
$ kubectl describe service nodeport
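If you just want the assigned node port by itself, for example to use in a script, it can be extracted with jsonpath (a quick sketch):
$ kubectl get service nodeport -o jsonpath='{.spec.ports[0].nodePort}'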
- Run the command below to get the kind cluster's IP address and NodePort, then visit the resulting address in a browser.
$ echo $(docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' kind-control-plane):32410
- Lastly, verify that the generated DNS record has been created for the Service by using nslookup within the `pod-example` Pod.
$ kubectl exec pod-example -- nslookup nodeport.dev.svc.cluster.local
It should return a valid response with the IP matching what was noted earlier when describing the Service.
Summary: The `NodePort` Service extends the `ClusterIP` Service, additionally exposing a port that is either statically defined, as above (port `32410`), or dynamically assigned from the range 30000-32767. This port is then exposed on every node within the cluster and proxies to the created Service.
Objective: Create a `LoadBalancer`-based Service, and learn how it extends both `ClusterIP` and `NodePort` to make a Service available outside the cluster.
Before you Begin
To use the Service type `LoadBalancer`, integration with an external IP provider is required. In most cases this is a cloud provider, which will likely already be integrated with your cluster. For bare-metal and on-prem deployments, this must be handled yourself. There are several tools and products available that can do this; for this example, the MetalLB provider will be used.
NOTE: We need to provide MetalLB with a range of IP addresses it controls. This range should be on the docker `kind` network.
$ docker network inspect -f '{{.IPAM.Config}}' kind
The output will contain a CIDR such as 172.18.0.0/16. The load balancer IP range should come from this subnet. For example, MetalLB can be configured to use 172.18.255.200 to 172.18.255.250 by creating the ConfigMap.
Edit the manifest `manifests/metalLB.yaml` and change the CIDR range on line 19 (`172.18.255.200-172.18.255.250`) to fit your requirements. Otherwise, go ahead and deploy it as-is.
$ kubectl create -f manifests/metalLB.yaml
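Before moving on, it is worth confirming the MetalLB Pods come up. Assuming the manifest installs into the conventional `metallb-system` namespace, the controller and speaker Pods should show as Running:
$ kubectl get pods -n metallb-system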
- Create a `LoadBalancer` Service called `loadbalancer` that targets Pods with the labels `app=nginx` and `environment=prod`, forwarding port `80`. Use either the yaml below or the manifest `manifests/service-loadbalancer.yaml`.
manifests/service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
    environment: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Command
$ kubectl create -f manifests/service-loadbalancer.yaml
- Describe the Service `loadbalancer`, and note that it retains the aspects of both the `ClusterIP` and `NodePort` Service types in addition to having a new attribute, `LoadBalancer Ingress`.
$ kubectl describe service loadbalancer
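The assigned external IP can also be pulled on its own with jsonpath, which is convenient for scripting or feeding straight into curl (a quick sketch):
$ kubectl get service loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'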
- Open a browser and visit the IP noted in the `LoadBalancer Ingress` field. It should map directly to the exposed Service.
- Finally, verify that the generated DNS record has been created for the Service by using nslookup within the `pod-example` Pod.
$ kubectl exec pod-example -- nslookup loadbalancer.dev.svc.cluster.local
It should return a valid response with the IP matching what was noted earlier when describing the Service.
Summary: `LoadBalancer` Services are the second most frequently used Service within Kubernetes, as they are the main method of directing external traffic into the Kubernetes cluster. They work with an external provider to map ingress traffic destined for the `LoadBalancer Ingress` IP to the cluster nodes on the exposed `NodePort`. These in turn direct traffic to the desired Pods.
Objective: Gain an understanding of the `ExternalName` Service and how it is used within a Kubernetes cluster.
- Create an `ExternalName` Service called `externalname` that points to `google.com`.
$ kubectl create service externalname externalname --external-name=google.com
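The same Service can be expressed declaratively. A minimal sketch of the equivalent manifest is below; it is illustrative only and not part of this tutorial's manifests directory.
apiVersion: v1
kind: Service
metadata:
  name: externalname
spec:
  type: ExternalName
  externalName: google.com   # DNS name the in-cluster CNAME record will point to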
- Describe the `externalname` Service. Note that it does NOT have an internal IP or other normal Service attributes.
$ kubectl describe service externalname
- Lastly, look at the DNS record generated for the Service by using nslookup within the `pod-example` Pod. It should return the IP of `google.com`.
$ kubectl exec pod-example -- nslookup externalname.dev.svc.cluster.local
Summary: `ExternalName` Services create a `CNAME` entry in the cluster DNS. This provides an avenue to use internal Service discovery methods to reference external entities.
To remove everything that was created in this tutorial, execute the following commands:
$ kubectl delete namespace dev
$ kubectl delete -f manifests/metalLB.yaml
$ kubectl config delete-context kind-dev
$ kubectl config use-context kind-kind