In the G025 guide you've seen how to create a K3s Kubernetes cluster with just one server node. This works fine and suits the constrained scenario set in this guide series. But if you want a more complete Kubernetes experience, you'll need to know how to set up two or more server nodes in your cluster.
In this supplementary guide I'll summarize what to add to, or do differently from, the procedures explained in the G025 guide, with the goal of creating a K3s cluster with two server nodes.
BEWARE!
You cannot convert a single-node cluster setup that uses the embedded SQLite database into a multi-server one. You'll have to do a clean new install of the K3s software, although of course you can reuse the same VMs you already have.
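If you're not sure which embedded datastore your current single-node cluster is using, a quick look at the K3s data directory usually tells you. This is only a sketch, assuming the default K3s data path (`/var/lib/rancher/k3s`) and the usual `state.db` file name used by the embedded SQLite datastore.

```bash
# Rough check of which embedded datastore a K3s server node is using.
# Assumes the default K3s data directory; paths may differ in custom setups.
if [ -d /var/lib/rancher/k3s/server/db/etcd ]; then
    echo "This node uses the embedded etcd datastore."
elif [ -f /var/lib/rancher/k3s/server/db/state.db ]; then
    echo "This node uses the embedded SQLite datastore (single-server only)."
else
    echo "No embedded datastore found; this node may use an external database."
fi
```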
The first step is rather obvious: create a new VM and configure it to be the second K3s server node. And by configure I mean the following.

1. Create the VM as you created the other ones (by link-cloning the `k3snodetpl` VM template; see the command-line sketch right after this list), but:
    - Give it the next `VM ID` number after the one assigned to the first K3s server node. So, if the first server has the ID `201`, assign the ID `202` to this new VM.
    - Follow the same convention for naming this VM, but change the number in the string. The first server is called `k3sserver01`, so this VM should be called `k3sserver02`.

2. Configure this new `k3sserver02` VM as you did with the first server node VM, although:
    - Assign to its network cards the next IPs in the range reserved for server nodes in your network configuration. If you were using the same IP ranges as in the G025 guide:
        - The net0 card should have `192.168.1.22`.
        - The net1 card should have `10.0.0.2`.
    - Change its hostname so it's unique and matches the name of the VM. So, if the VM is called `k3sserver02`, its hostname should also be `k3sserver02`.
    - Either import the configuration files for TFA and the SSH key-pair from `k3sserver01`, or generate new ones for the `mgrsys` user.
        - TFA: the `/home/mgrsys/.google_authenticator` file.
        - SSH key-pair: the entire `/home/mgrsys/.ssh` folder.
    - Either give the `mgrsys` user the same password as in the first server node or assign it a new one.
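If you prefer doing the cloning from the Proxmox VE shell rather than the web console, something like the following should work. This is just a sketch: the template's `VM ID` (`200` here) is an assumption, so adjust it to whatever ID your `k3snodetpl` template actually has.

```bash
# Linked-clone the k3snodetpl template into the new k3sserver02 VM.
# 200 is an assumed VM ID for the template; 202 is the new server's VM ID.
sudo qm clone 200 202 --name k3sserver02

# Review the new VM's configuration before adjusting it further.
sudo qm config 202
```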
You'll need to add a bunch of extra firewall rules to allow this second server node to work properly in your K3s cluster. So open your Proxmox VE web console and do the following.
1. Go to the `Datacenter > Firewall > Alias` page, and add a new alias for each of the IPs of your new VM.
    - Name `k3sserver02_net0`, IP `192.168.1.22`.
    - Name `k3sserver02_net1`, IP `10.0.0.2`.

2. Browse to `Datacenter > Firewall > IPSet`, and there:
    - Add the `k3sserver02_net0` alias to the `k3s_nodes_net0_ips` set.
    - Add the `k3sserver02_net1` alias to the `k3s_server_nodes_net1_ips` set.

3. Browse to `Datacenter > Firewall > Security Group`. On this page, add the following rules to the `k3s_srvrs_net1_in` group:
    - Rule 1: Type `in`, Action `ACCEPT`, Protocol `tcp`, Source `k3s_srvrs_net1_in`, Dest. port `2379,2380`, Comment `HA etcd server ports for K3s SERVER nodes`.
    - Rule 2: Type `in`, Action `ACCEPT`, Protocol `tcp`, Source `k3s_srvrs_net1_in`, Dest. port `6443`, Comment `K3s Kubernetes api server port open internally for K3s SERVER nodes`.
    - Rule 3: Type `in`, Action `ACCEPT`, Protocol `tcp`, Source `k3s_srvrs_net1_in`, Dest. port `10250`, Comment `Kubelet metrics port open internally for K3s SERVER nodes`.
    - Rule 4: Type `in`, Action `ACCEPT`, Protocol `udp`, Source `k3s_srvrs_net1_in`, Dest. port `8472`, Comment `Flannel VXLAN port open internally for K3s SERVER nodes`.

4. Open a shell terminal as `mgrsys` on your Proxmox VE host, then copy the firewall file of the first K3s server VM, giving the copy the `VM ID` of your second server VM.

    ```console
    $ cd /etc/pve/firewall/
    $ sudo cp 201.fw 202.fw
    ```

5. Modify the copy so the IPSET blocks point to the correct IP aliases for the `k3sserver02` node. (A quick way to verify the result follows this list.)

    ```
    [OPTIONS]
    ipfilter: 1
    enable: 1

    [IPSET ipfilter-net0] # Only allow specific IPs on net0
    k3sserver02_net0

    [IPSET ipfilter-net1] # Only allow specific IPs on net1
    k3sserver02_net1

    [RULES]
    GROUP k3s_srvrs_net1_in -i net1
    GROUP k3s_srvrs_net0_in -i net0
    ```
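After saving `202.fw`, you can ask Proxmox VE to parse the firewall configuration and check for problems. This is just a minimal sketch using the `pve-firewall` tool that ships with Proxmox VE; adapt it to your own workflow.

```console
# Confirm the firewall service is enabled and running after editing 202.fw.
$ sudo pve-firewall status

# Optionally, compile the ruleset to review what would actually be applied.
$ sudo pve-firewall compile
```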
The `/etc/rancher/k3s/config.yaml` file for the first server node (`k3sserver01`) is just slightly different from the one used in the single-server cluster scenario.
```yaml
# k3sserver01
cluster-domain: "deimos.cluster.io"
tls-san:
- "k3sserver01.deimos.cloud"
flannel-backend: host-gw
flannel-iface: "ens19"
bind-address: "0.0.0.0"
https-listen-port: 6443
advertise-address: "10.0.0.1"
advertise-port: 6443
node-ip: "10.0.0.1"
node-external-ip: "192.168.1.21"
node-taint:
- "k3s-controlplane=true:NoExecute"
log: "/var/log/k3s.log"
disable:
- metrics-server
- servicelb
protect-kernel-defaults: true
secrets-encryption: true
agent-token: "SomePassword"
cluster-init: true
```
This `config.yaml` file is essentially the same as it was in the G025 guide, but with just one extra parameter at the end.
- `cluster-init`: experimental argument. Using this option will initialize a new cluster using an embedded etcd data source.

BEWARE!
A K3s cluster with several server nodes won't work with just a SQLite data source. It needs a full database engine to run, such as etcd.
With the `config.yaml` file ready, execute the K3s installer.

```console
$ wget -qO - https://get.k3s.io | INSTALL_K3S_VERSION="v1.21.4+k3s1" sh -s - server
```
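Once the installer finishes on `k3sserver01`, you can check that the node really came up with the embedded etcd datastore. This is only a sketch, assuming the default K3s data directory and the `k3s` systemd service created by the installer.

```console
# The k3s service should be active right after the install.
$ sudo systemctl is-active k3s

# With cluster-init enabled, the embedded etcd data directory should exist.
$ sudo ls /var/lib/rancher/k3s/server/db/etcd
```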
The `/etc/rancher/k3s/config.yaml` file for the second server has few, but important, differences.
```yaml
# k3sserver02
cluster-domain: "deimos.cluster.io"
tls-san:
- "k3sserver02.deimos.cloud"
flannel-backend: host-gw
flannel-iface: "ens19"
bind-address: "0.0.0.0"
https-listen-port: 6443
advertise-address: "10.0.0.2"
advertise-port: 6443
node-ip: "10.0.0.2"
node-external-ip: "192.168.1.22"
node-taint:
- "k3s-controlplane=true:NoExecute"
log: "/var/log/k3s.log"
disable:
- metrics-server
- servicelb
protect-kernel-defaults: true
secrets-encryption: true
agent-token: "SamePasswordAsInTheFirstServer"
server: "https://10.0.0.1:6443"
token: "K10<sha256 sum of cluster CA certificate>::server:<password>"
```
There's no `cluster-init` option, the `agent-token` parameter is also present here, and two new parameters have been added:
- `agent-token`: this has to be exactly the same password as in the first server node.
- `server`: the address or URL of a server node in the cluster, in this case the secondary IP of the first server. Notice that you also need to specify the port, which in this case is the default `6443`.
- `token`: code for authenticating this server node in an already running cluster. The token is generated and saved within the first server node that starts said cluster, in the `/var/lib/rancher/k3s/server/token` file.
With the first server node up and running, let's get the server token you'll need to authorize the joining of any other server nodes into the cluster. Use the `cat` command to get it from the `/var/lib/rancher/k3s/server/token` file.
```console
$ sudo cat /var/lib/rancher/k3s/server/token
K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2
```
As it happens with agent tokens, you can distinguish three parts in a server token string:
- After the `K10` characters, you have the sha256 sum of the `server-ca.crt` file generated in this server node.
- The `server` string is the username.
- The remaining string after the `:` is the password for other server nodes.
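As a quick illustration, you can pull those three parts out of the token string with standard shell tools. The token value below is just the example shown above; replace it with your own.

```bash
# Split the example K3s server token into its three parts (illustration only).
TOKEN="K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2"

echo "CA cert sha256: $(echo "$TOKEN" | cut -d ':' -f 1 | sed 's/^K10//')"
echo "Username:       $(echo "$TOKEN" | cut -d ':' -f 3)"
echo "Password:       $(echo "$TOKEN" | cut -d ':' -f 4)"
```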
The procedure for your second K3s server node will be as follows.
1. Edit the `/etc/rancher/k3s/config.yaml` file and verify that all its values are correct, in particular the interface and IPs and both the `agent-token` and the `token`. Remember:
    - The `agent-token` must be the same password already set in the first server node.
    - The `token` value is stored within the first server node, in the `/var/lib/rancher/k3s/server/token` file.

2. With the `config.yaml` file properly set, launch the installation on your second server node.

    ```console
    $ wget -qO - https://get.k3s.io | INSTALL_K3S_VERSION="v1.21.4+k3s1" sh -s - server
    ```

3. In your first server node, execute the following `watch kubectl` command.

    ```console
    $ watch sudo kubectl get nodes -Ao wide
    ```

    Observe the output until you see the new server join the cluster and reach the `Ready` `STATUS`.

    ```console
    Every 2.0s: sudo kubectl get nodes -Ao wide                                  k3sserver01: Thu Jun 10 13:00:01 2021

    NAME          STATUS   ROLES                       AGE     VERSION        INTERNAL-IP   EXTERNAL-IP    OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
    k3sserver01   Ready    control-plane,etcd,master   14h     v1.21.4+k3s1   10.0.0.1      192.168.1.21   Debian GNU/Linux 11 (bullseye)   5.10.0-8-amd64   containerd://1.4.9-k3s1
    k3sserver02   Ready    control-plane,etcd,master   4m46s   v1.21.4+k3s1   10.0.0.2      192.168.1.22   Debian GNU/Linux 11 (bullseye)   5.10.0-8-amd64   containerd://1.4.9-k3s1
    ```

    Notice, in the `ROLES` column, the `etcd` role, which indicates that the server nodes are running the embedded etcd engine that comes with the K3s installation.

4. When the second server has joined the cluster, get back to your second server node. It's possible that the installer has gotten stuck and not returned the prompt. As you had to do in the installation of the first server node, just press `Ctrl+C` to return to the shell prompt.
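With both servers showing `Ready`, you may also want to confirm on `k3sserver02` that its K3s service is healthy. A minimal sketch, assuming the `k3s` systemd service created by the installer and the `/var/log/k3s.log` path set in the `config.yaml` above.

```console
# On k3sserver02: check the service and skim the log for errors.
$ sudo systemctl is-active k3s
$ sudo tail -n 20 /var/log/k3s.log
```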
The agent nodes are installed with exactly the same `config.yaml` file and command you already saw in the G025 guide. The only thing you might consider changing is making each of your agent nodes point to a different server node (the `server` parameter in their `config.yaml` file). Since the server nodes are always synchronized, it shouldn't matter which server node each agent is connected to.
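For instance, you could point one agent at `k3sserver01` and another at `k3sserver02`. The snippet below is only a sketch of the relevant `server` line in an agent's `/etc/rancher/k3s/config.yaml`, using the IPs from this guide; the rest of the agent configuration stays exactly as in the G025 guide.

```yaml
# Agent pointing to the first server node:
server: "https://10.0.0.1:6443"

# ...while another agent could point to the second server node instead:
# server: "https://10.0.0.2:6443"
```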
<< Previous (G907. Appendix 07) | +Table Of Contents+ | Next (G909. Appendix 09) >>