diff --git a/docs/blog/posts/2024-11-04-running-cockroachdb-on-openstack-flex.md b/docs/blog/posts/2024-11-04-running-cockroachdb-on-openstack-flex.md
new file mode 100644
index 0000000..3a08754
--- /dev/null
+++ b/docs/blog/posts/2024-11-04-running-cockroachdb-on-openstack-flex.md
@@ -0,0 +1,146 @@
+---
+date: 2024-11-04
+title: Running CockroachDB on OpenStack Flex
+authors:
+  - cloudnull
+description: >
+  Running CockroachDB on OpenStack Flex
+categories:
+  - Kubernetes
+  - Database
+---
+
+# Running CockroachDB on OpenStack Flex
+
+![CockroachDB](assets/images/2024-11-04/cockroachlabs-logo.png){ align=left }
+CockroachDB is a distributed SQL database, purpose built for the cloud, that provides consistency, fault tolerance, and scalability. In this guide, we will walk through deploying CockroachDB on a Kubernetes cluster running on OpenStack Flex. As operators, we will create a namespace, install the CockroachDB operator, and deploy a CockroachDB cluster. The intent of this guide is to provide a simple, functional example of how to deploy CockroachDB on an OpenStack Flex Kubernetes cluster.
+
+
+
+## Foundation
+
+This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster is running following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex).
+
+An assumption of this guide is that the Kubernetes cluster has a working storage provider which can be used to create `PersistentVolumeClaims`. If the environment does not have a working storage provider, one will need to be deployed before proceeding with this guide. In this guide, we will use Longhorn as our storage provider, which was deployed as part of the Talos on OpenStack Flex setup. Read more about the Longhorn setup used for this post [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_longhorn_on_openstack_flex).
+
+All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy CockroachDB to our Kubernetes cluster using `kubectl`.
+
+!!! note
+
+    The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node.
+
+### Prerequisites
+
+Before we begin, we need to ensure that we have the following prerequisites in place:
+
+- An OpenStack Flex project with a Kubernetes cluster
+- A working knowledge of Kubernetes
+- A working knowledge of Helm
+- A working knowledge of OpenStack Flex
+    - At least 180GiB of storage available to `PersistentVolumeClaims` (Longhorn)
+
+!!! note
+
+    This guide is using the CockroachDB Operator **2.15.1**, and the instructions may vary for other versions. Check the [CockroachDB documentation](https://www.cockroachlabs.com/whatsnew/) for the most up-to-date information on current releases.
+
+Create a new namespace.
+
+``` shell
+kubectl create namespace cockroach-operator-system
+```
+
+Set the namespace security policy to `privileged` so that the operator and database pods are not blocked by Pod Security admission.
+ +``` shell +kubectl label --overwrite namespace cockroach-operator-system \ + pod-security.kubernetes.io/enforce=privileged \ + pod-security.kubernetes.io/enforce-version=latest \ + pod-security.kubernetes.io/warn=privileged \ + pod-security.kubernetes.io/warn-version=latest \ + pod-security.kubernetes.io/audit=privileged \ + pod-security.kubernetes.io/audit-version=latest +``` + +## Install the CockroachDB Operator + +Deploying the CockroachDB operator involves installing the CRDs and the operator itself. + +### Deploy the CockroachDB CRDs + +``` shell +kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/install/crds.yaml +``` + +### Deploy the CockroachDB Operator + +``` shell +kubectl --namespace cockroach-operator-system apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/install/operator.yaml +``` + +``` shell +kubectl --namespace cockroach-operator-system get pods +``` + +!!! example "The output should look similar to the following" + + ``` shell + NAME READY STATUS RESTARTS AGE + cockroach-operator-manager-c8f97d954-5fwh4 1/1 Running 0 38s + ``` + +### Deploy the CockroachDB Cluster + +``` shell +kubectl --namespace cockroach-operator-system apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/examples/example.yaml +``` + +!!! note "About the example cluster" + + This is a quick and easy cluster environment which is suitable for a wide range of purposes. However, for production use, administrators should consider a more robust configuration by reviewing this file and [CockroachDB documentation](https://www.cockroachlabs.com/docs/stable/). + +#### Deploy the CockroachDB Client + +Deploying the CockroachDB client is simple. It requires the installation of the client pod and the client secret. + +``` shell +kubectl --namespace cockroach-operator-system create -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/examples/client-secure-operator.yaml +``` + +``` shell +kubectl --namespace cockroach-operator-system exec -it cockroachdb-client-secure \ + -- ./cockroach sql \ + --certs-dir=/cockroach/cockroach-certs \ + --host=cockroachdb-public +``` + +!!! example "The above command will dropped into the SQL shell" + + ``` shell + # Welcome to the CockroachDB SQL shell. + # All statements must be terminated by a semicolon. + # To exit, type: \q. + # + # Server version: CockroachDB CCL v24.2.3 (x86_64-pc-linux-gnu, built 2024/09/23 22:30:53, go1.22.5 X:nocoverageredesign) (same version as client) + # Cluster ID: 162f3cf8-2699-4c59-b58d-a43afb34497c + # + # Enter \? for a brief introduction. + # + root@cockroachdb-public:26257/defaultdb> + ``` + + Running a simple `show databases;` command should return the following output. + + ``` shell + database_name | owner | primary_region | secondary_region | regions | survival_goal + ----------------+-------+----------------+------------------+---------+---------------- + defaultdb | root | NULL | NULL | {} | NULL + postgres | root | NULL | NULL | {} | NULL + system | node | NULL | NULL | {} | NULL + (3 rows) + + Time: 6ms total (execution 5ms / network 0ms) + ``` + +## Conclusion + +In this guide, we have walked through deploying CockroachDB on an OpenStack Flex instance on a Kubernetes cluster running Talos. We have also deployed the CockroachDB client and connected to the CockroachDB cluster to verify the deployment. This guide is intended to provide a simple example of how to deploy CockroachDB on an OpenStack Flex instance. 
For more information on CockroachDB, please refer to the [CockroachDB documentation](https://www.cockroachlabs.com/docs).
diff --git a/docs/blog/posts/2024-11-04-running-crunchy-postgres-on-openstack-flex.md b/docs/blog/posts/2024-11-04-running-crunchy-postgres-on-openstack-flex.md
new file mode 100644
index 0000000..3caa975
--- /dev/null
+++ b/docs/blog/posts/2024-11-04-running-crunchy-postgres-on-openstack-flex.md
@@ -0,0 +1,150 @@
+---
+date: 2024-11-05
+title: Running Postgres Operator from Crunchy Data on OpenStack Flex
+authors:
+  - cloudnull
+description: >
+  Running Postgres Operator from Crunchy Data on OpenStack Flex
+categories:
+  - Kubernetes
+  - Database
+---
+
+# Running Crunchydata Postgres on OpenStack Flex
+
+![Crunchydata](assets/images/2024-11-05/crunchydata-logo.png){ align=left : style="max-width:125px" }
+
+Crunchy Data provides a Postgres Operator that simplifies the deployment and management of PostgreSQL clusters on Kubernetes. In this guide, we will walk through deploying the Postgres Operator from Crunchy Data on a Kubernetes cluster running on OpenStack Flex. As operators, we will install the operator with Helm and then use it to create a PostgreSQL cluster. The intent of this guide is to provide a simple, functional example of how to deploy the Postgres Operator from Crunchy Data on an OpenStack Flex Kubernetes cluster.
+
+
+
+## Foundation
+
+This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster is running following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex).
+
+An assumption of this guide is that the Kubernetes cluster has a working storage provider which can be used to create `PersistentVolumeClaims`. If the environment does not have a working storage provider, one will need to be deployed before proceeding with this guide. In this guide, we will use Longhorn as our storage provider, which was deployed as part of the Talos on OpenStack Flex setup. Read more about the Longhorn setup used for this post [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_longhorn_on_openstack_flex).
+
+All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy the Postgres Operator to our Kubernetes cluster using Helm.
+
+!!! note
+
+    The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node.
+
+### Prerequisites
+
+Before we begin, we need to ensure that we have the following prerequisites in place:
+
+- An OpenStack Flex project with a Kubernetes cluster
+- A working knowledge of Kubernetes
+- A working knowledge of Helm
+- A working knowledge of OpenStack Flex
+    - At least 1GiB of storage available to `PersistentVolumeClaims` (Longhorn)
+
+!!! note
+
+    This guide is using Crunchydata **5.7**, and the instructions may vary for other versions. Check the [Crunchydata documentation](https://access.crunchydata.com/documentation/postgres-operator/latest) for the most up-to-date information on current releases.
+
+## Create a New Namespace
+
+``` shell
+kubectl create namespace crunchy-operator-system
+```
+
+Set the namespace security policy to `privileged` so that the operator and database pods are not blocked by Pod Security admission.
+ +``` shell +kubectl label --overwrite namespace crunchy-operator-system \ + pod-security.kubernetes.io/enforce=privileged \ + pod-security.kubernetes.io/enforce-version=latest \ + pod-security.kubernetes.io/warn=privileged \ + pod-security.kubernetes.io/warn-version=latest \ + pod-security.kubernetes.io/audit=privileged \ + pod-security.kubernetes.io/audit-version=latest +``` + +## Install the Crunchdata Postgres Operator + +Before getting started, set a few environment variables that will be used throughout the guide. + +``` shell +export CRUNCHY_OPERATOR_NAMESPACE=crunchy-operator-system +export CRUNCHY_CLUSTER_NAMESPACE=crunchy-operator-system # This can be a different namespace +export CRUNCHY_CLUSTER_NAME=hippo +export CRUNCHY_DB_REPLICAS=3 +export CRUNCHY_DB_SIZE=1Gi +``` + +Retrieve the operator helm chart and change into the directory. + +``` shell +git clone https://github.com/CrunchyData/postgres-operator-examples +cd postgres-operator-examples +``` + +Install the operator helm chart. + +``` shell +helm upgrade --install --namespace ${CRUNCHY_OPERATOR_NAMESPACE} crunchy-operator helm/install +``` + +## Create a Crunchydata Postgres Cluster + +Create a helm overrides file for the database deployment. The file should contain the following information. Replace the `${CRUNCHY_DB_REPLICAS}`, `${CRUNCHY_CLUSTER_NAME}`, and `${CRUNCHY_DB_SIZE}` with the desired values for the deployment. + +!!! example "crunchy-db.yaml" + + ``` yaml + instanceReplicas: ${CRUNCHY_DB_REPLICAS} + name: ${CRUNCHY_CLUSTER_NAME} + instanceSize: ${CRUNCHY_DB_SIZE} + users: + - name: rhino + databases: + - zoo + options: 'NOSUPERUSER' + ``` + +Create a new secret for the user **rhino** + +!!! example "crunchy-rhino-secret.yaml" + + ``` yaml + apiVersion: v1 + kind: Secret + metadata: + name: ${CRUNCHY_CLUSTER_NAME}-pguser-rhino + labels: + postgres-operator.crunchydata.com/cluster: ${CRUNCHY_CLUSTER_NAME} + postgres-operator.crunchydata.com/pguser: rhino + stringData: + password: river + ``` + +``` shell +kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} apply -f crunchy-rhino-secret.yaml +``` + +Run the Deployment + +``` shell +helm upgrade --install --namespace ${CRUNCHY_CLUSTER_NAMESPACE} hippo helm/postgres \ + -f crunchy-db.yaml +``` + +!!! tip + + Track the state of the deployment with the following + + ``` shell + kubectl -n ${CRUNCHY_CLUSTER_NAMESPACE} get pods --selector=postgres-operator.crunchydata.com/cluster=${CRUNCHY_CLUSTER_NAME},postgres-operator.crunchydata.com/instance + ``` + +## Verify the Crunchydata Postgres Cluster + +``` shell +kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} get svc --selector=postgres-operator.crunchydata.com/cluster=${CRUNCHY_CLUSTER_NAME} +``` + +## Conclusion + +In this guide, we have deployed the Crunchydata Postgres Operator on an OpenStack Flex Kubernetes cluster. We have also created a new Postgres cluster using the operator. This guide is intended to provide a simple functional example of how to deploy the Crunchydata Postgres Operator on an OpenStack Flex Kubernetes cluster. For more information on the Crunchydata Postgres Operator, please refer to the [Crunchydata documentation](https://access.crunchydata.com/documentation/postgres-operator/latest). 
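As an optional final check, the connection details that the operator generates for the `rhino` user can be pulled from the cluster and used to log in with `psql` from the jump host. This is only a sketch: the `uri` secret key and the `postgres-operator.crunchydata.com/role=master` label follow the operator's v5 conventions, and it assumes `psql` is installed on the jump host.

``` shell
# Print the connection URI stored in the generated pguser secret
kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} get secret ${CRUNCHY_CLUSTER_NAME}-pguser-rhino \
  -o go-template='{{.data.uri | base64decode}}{{"\n"}}'

# Port-forward the current primary pod and connect as the rhino user
PRIMARY_POD=$(kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} get pod -o name \
  --selector=postgres-operator.crunchydata.com/cluster=${CRUNCHY_CLUSTER_NAME},postgres-operator.crunchydata.com/role=master)
kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} port-forward "${PRIMARY_POD}" 5432:5432 &
PGPASSWORD=river psql --host 127.0.0.1 --username rhino --dbname zoo -c 'SELECT version();'
```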
diff --git a/docs/blog/posts/2024-11-04-running-longhorn-on-openstack-flex.md b/docs/blog/posts/2024-11-04-running-longhorn-on-openstack-flex.md new file mode 100644 index 0000000..0eb4886 --- /dev/null +++ b/docs/blog/posts/2024-11-04-running-longhorn-on-openstack-flex.md @@ -0,0 +1,262 @@ +--- +date: 2024-11-04 +title: Running Longhorn on OpenStack Flex +authors: + - cloudnull +description: > + Running Longhorn on OpenStack Flex +categories: + - Kubernetes + - Storage +--- + +# Running Longhorn on OpenStack Flex + +![Longhorn logo](assets/images/2024-11-04/longhorn-logo.png){ align=left : style="max-width:300px" } + +Longhorn is a distributed block storage system for Kubernetes that is designed to be easy to deploy and manage. In this guide, we will walk through deploying Longhorn on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Longhorn software, and configure the service to run on the instance. This setup will allow us to access the Longhorn web interface and create new volumes, snapshots, and backups. The intent of this guide is to provide a simple example of how to deploy Longhorn on an OpenStack Flex instance. + + + +## Foundation + +This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster is running following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex). + +All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy Longhorn to our Kubernetes cluster using Helm. + +!!! note + + The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node. + +### Prerequisites + +Before we begin, we need to ensure that we have the following prerequisites in place: + +- An OpenStack Flex project with a Kubernetes cluster +- A working knowledge of Kubernetes +- A working knowledge of Helm +- A working knowledge of OpenStack Flex +- A working knowledge of Longhorn + +!!! note + + This guide is using Longhorn **1.7.2**, and the instructions may vary for other versions. Check the [Longhorn documentation](https://longhorn.io/docs/) for the most up-to-date information on current releases. + +## Deploying storage volumes + +Longhorn works by creating volumes that are attached to the Talos workers. These volumes are then used to store data for the applications running on the cluster. The first step in deploying Longhorn is to create a new volume that will be used to store data for the applications running on the cluster. + +### Creating new volumes + +Using the OpenStack CLI we can create new volumes by running the following commands + +``` shell +openstack --os-cloud default volume create --type Capacity --size 100 longhorn-0 +openstack --os-cloud default volume create --type Capacity --size 100 longhorn-1 +openstack --os-cloud default volume create --type Capacity --size 100 longhorn-2 +``` + +### Attaching volumes to the Talos workers + +Now that we have created a new volume, we can attach it to the Talos workers using the OpenStack CLI. This will allow the volume to be used by the applications running on the cluster. 
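Before attaching them, it can be worth confirming that all three volumes finished creating and are in the `available` state; a quick check using the volume names from the previous step:

``` shell
for vol in longhorn-0 longhorn-1 longhorn-2; do
  openstack --os-cloud default volume show ${vol} -f value -c name -c status
done
```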
+ +``` shell +openstack --os-cloud default server add volume talos-worker-0 longhorn-0 +openstack --os-cloud default server add volume talos-worker-1 longhorn-1 +openstack --os-cloud default server add volume talos-worker-2 longhorn-2 +``` + +## Prepare the Volumes for Longhorn + +Before we can deploy Longhorn, we need to prepare the volumes that we created in the previous step. To do this, we will need to format the volumes and mount them to the Talos workers. +Run a quick scan to pick up all the members within the cluster. + +``` shell +talosctl --talosconfig ./talosconfig get members +``` + +!!! example "The Output Will Look Like This" + + ``` shell + NODE NAMESPACE TYPE ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES + 10.0.0.208 cluster Member talos-control-plane-0 11 talos-control-plane-0.novalocal controlplane Talos (v1.8.2) ["10.0.0.208"] + 10.0.0.208 cluster Member talos-control-plane-1 4 talos-control-plane-1.novalocal controlplane Talos (v1.8.2) ["10.0.0.60"] + 10.0.0.208 cluster Member talos-control-plane-2 6 talos-control-plane-2.novalocal controlplane Talos (v1.8.2) ["10.0.0.152"] + 10.0.0.208 cluster Member talos-worker-0 9 talos-worker-0.novalocal worker Talos (v1.8.2) ["10.0.0.16"] + 10.0.0.208 cluster Member talos-worker-1 12 talos-worker-1.novalocal worker Talos (v1.8.2) ["10.0.0.249"] + 10.0.0.208 cluster Member talos-worker-2 12 talos-worker-2.novalocal worker Talos (v1.8.2) ["10.0.0.110"] + ``` + +Here we can see the nodes and their related addresses. We will use this information to connect to the nodes and prepare the volumes. + +Run a quick volume discovery to verify that our expected volume is connected to the node. + +``` shell +talosctl --talosconfig ./talosconfig get discoveredvolumes --nodes 10.0.0.16 +``` + +!!! example "The Output Will Look Like This" + + ``` shell + NODE NAMESPACE TYPE ID VERSION TYPE SIZE DISCOVERED LABEL PARTITIONLABEL + 10.0.0.16 runtime DiscoveredVolume loop2 1 disk 684 kB squashfs + 10.0.0.16 runtime DiscoveredVolume loop3 1 disk 2.6 MB squashfs + 10.0.0.16 runtime DiscoveredVolume loop4 1 disk 75 MB squashfs + 10.0.0.16 runtime DiscoveredVolume vda 2 disk 43 GB gpt + 10.0.0.16 runtime DiscoveredVolume vda1 1 partition 105 MB vfat EFI + 10.0.0.16 runtime DiscoveredVolume vda2 1 partition 1.0 MB BIOS + 10.0.0.16 runtime DiscoveredVolume vda3 1 partition 982 MB xfs BOOT BOOT + 10.0.0.16 runtime DiscoveredVolume vda4 1 partition 1.0 MB META + 10.0.0.16 runtime DiscoveredVolume vda5 2 partition 92 MB xfs STATE STATE + 10.0.0.16 runtime DiscoveredVolume vda6 2 partition 42 GB xfs EPHEMERAL EPHEMERAL + 10.0.0.16 runtime DiscoveredVolume vdb 1 disk 1.1 GB swap + 10.0.0.16 runtime DiscoveredVolume vdc 1 disk 11 GB + ``` + +!!! note + + The output will show all the volumes attached to the node. The volume we're looking for is **vdc** which is reporting an 11 GB size. + +Armed with the information, we can see that the volume we're looking for is attached to the node. We can now format and mount the volume. + +Create a patch file which will modify the Talos worker to mount the volume. + +!!! example "talos-longhorn-disk.yaml" + + ``` yaml + machine: + kubelet: + extraMounts: + - destination: /var/lib/longhorn + type: bind + source: /var/lib/longhorn + options: + - bind + - rshared + - rw + disks: + - device: /dev/vdc + partitions: + - mountpoint: /var/lib/longhorn + ``` + +Now update the Talos configuration to format the volume and mount it. 
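!!! note

    The machine config patch created above was saved as `talos-longhorn-disk.yaml`; make sure the file name passed to `--patch` in the command below matches the file that was actually written.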
+ +``` shell +talosctl --talosconfig ./talosconfig patch mc --patch @talos-longhorn-disk.json --nodes 10.0.0.16 +``` + +!!! tip + + If all nodes have the same disk layout and the same volume attached, use the `--nodes` flag with comma separated values to apply the patch to all nodes at once. + + ``` shell + --nodes 10.0.0.110,10.0.0.249,10.0.0.16 + ``` + +Once the patch has been applied the node will reboot, and the volume will be formatted and mounted. Validate the volume is mounted and formatted by rerunning the `discoveredvolumes` command. + +``` shell +talosctl --talosconfig ./talosconfig get discoveredvolumes --nodes 10.0.0.16 +``` + +!!! example "the output will looks like this" + + ``` shell + NODE NAMESPACE TYPE ID VERSION TYPE SIZE DISCOVERED LABEL PARTITIONLABEL + 10.0.0.16 runtime DiscoveredVolume loop2 1 disk 684 kB squashfs + 10.0.0.16 runtime DiscoveredVolume loop3 1 disk 2.6 MB squashfs + 10.0.0.16 runtime DiscoveredVolume loop4 1 disk 75 MB squashfs + 10.0.0.16 runtime DiscoveredVolume vda 1 disk 43 GB gpt + 10.0.0.16 runtime DiscoveredVolume vda1 1 partition 105 MB vfat EFI + 10.0.0.16 runtime DiscoveredVolume vda2 1 partition 1.0 MB BIOS + 10.0.0.16 runtime DiscoveredVolume vda3 1 partition 982 MB xfs BOOT BOOT + 10.0.0.16 runtime DiscoveredVolume vda4 1 partition 1.0 MB META + 10.0.0.16 runtime DiscoveredVolume vda5 1 partition 92 MB xfs STATE STATE + 10.0.0.16 runtime DiscoveredVolume vda6 1 partition 42 GB xfs EPHEMERAL EPHEMERAL + 10.0.0.16 runtime DiscoveredVolume vdb 1 disk 1.1 GB swap + 10.0.0.16 runtime DiscoveredVolume vdc 1 disk 11 GB gpt + 10.0.0.16 runtime DiscoveredVolume vdc1 3 partition 11 GB xfs + ``` + +## Deploying Longhorn + +With the workers situated, we can now deploy Longhorn to the Kubernetes cluster. To do this, we will use Helm to install the Longhorn chart. + +Add Longhorn to the Helm repos. + +``` shell +helm repo add longhorn https://charts.longhorn.io +``` + +Update the repos. + +``` shell +helm repo update +``` + +Create a new namespace for Longhorn. + +``` shell +kubectl create namespace longhorn-system +``` + +Set the longhorn-system namespace security policy. + +``` shell +kubectl label --overwrite namespace longhorn-system \ + pod-security.kubernetes.io/enforce=privileged \ + pod-security.kubernetes.io/enforce-version=latest \ + pod-security.kubernetes.io/warn=privileged \ + pod-security.kubernetes.io/warn-version=latest \ + pod-security.kubernetes.io/audit=privileged \ + pod-security.kubernetes.io/audit-version=latest +``` + +Install Longhorn. + +``` shell +helm upgrade --install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.7.2 +``` + +!!! tip + + For more information on all of the options that longhorn has to offer when deploying via helm, please refer to the [Longhorn documentation](https://longhorn.io/docs/1.7.2/advanced-resources/deploy/customizing-default-settings/#using-the-longhorn-deployment-yaml-file). + +The deployment will take a few minutes, watch the nodes to validate the deployment is ready. + +``` shell +kubectl -n longhorn-system get nodes.longhorn.io +``` + +!!! example "Healthy output will look like this" + + ``` shell + NAME READY ALLOWSCHEDULING SCHEDULABLE AGE + talos-worker-0 True true True 9m20s + talos-worker-1 True true True 9m20s + talos-worker-2 True true True 9m19s + ``` + +Validate functionality by creating the volume test deployment. 
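Before creating the test workload, it can be helpful to confirm that the Longhorn chart registered its default `longhorn` StorageClass, since the example `PersistentVolumeClaim` below requests it:

``` shell
kubectl get storageclass longhorn
```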
+ +``` shell +kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/examples/pod_with_pvc.yaml +``` + +Assuming everything is working, the pod will spawn and the volume attach. Validate the volume is attached by running the following command. + +``` shell +kubectl -n longhorn-system get volumes.longhorn.io +``` + +!!! example "Healthy output will look like this" + + ``` shell + NAME DATA ENGINE STATE ROBUSTNESS SCHEDULED SIZE NODE AGE + pvc-95225b70-90de-4b77-b55f-0d4f089e8a07 v1 detached unknown 2147483648 3s + ``` + +## Conclusion + +Longhorn provides a robust storage solution that is container native and open infrastructure ready. By following this guide, the latest release of Longhorn will have been successfully deployed a Talos cluster running within OpenStack Flex instances. The next steps would be to explore the Longhorn documentation and experiment with the various features that Longhorn has to offer. diff --git a/docs/blog/posts/2024-11-04-running-talos-on-openstack-flex.md b/docs/blog/posts/2024-11-04-running-talos-on-openstack-flex.md new file mode 100644 index 0000000..65a5e5b --- /dev/null +++ b/docs/blog/posts/2024-11-04-running-talos-on-openstack-flex.md @@ -0,0 +1,520 @@ +--- +date: 2024-11-04 +title: Running Talos on OpenStack Flex +authors: + - cloudnull +description: > + Running Talos on OpenStack Flex +categories: + - Operating System + - Image + - Server + - Kubernetes +--- + +# Running Talos on OpenStack Flex + +![talos-linux](assets/images/2024-11-04/talos-logo.png){ align=left } + +As developers, we're constantly seeking platforms that streamline our workflows and enhance the performance and reliability of our applications. Talos is a container optimized Linux distribution reimagined for distributed systems. Designed with minimalism and practicality in mind, Talos brings a host of features that are particularly advantageous for OpenStack environments. By stripping away unnecessary components, it embodies minimalism, reducing the attack surface and resource consumption. It comes secure by default, providing out-of-the-box secure configurations that alleviate the need for extensive hardening. + + + +Integrating Talos with OpenStack Flex brings significant benefits. The immutable and minimal nature of Talos ensures that all compute nodes in the OpenStack cluster are consistent, reducing the chances of unexpected behavior due to environmental differences and thus enhancing consistency and reliability. When OpenStack Flex and Talos are combined, it creates an optimal environment for developers. Talos's ephemeral and atomic nature makes scaling out compute resources in OpenStack Flex seamless and efficient, enhancing scalability. The combination ensures high availability and quick recovery from failures, as Talos's design simplifies node replacement and recovery, thereby improving resiliency. + +In essence, Talos offers more by providing less—less complexity, less overhead, and fewer security concerns. This minimalistic yet powerful approach enhances security, efficiency, resiliency, and consistency. For developers working with OpenStack and specifically OpenStack Flex, Talos presents a compelling operating system choice that aligns perfectly with the goals of modern open infrastructure native applications. + +## Creating a cluster via the CLI on OpenStack + +In this guide, we will create an HA Kubernetes cluster in OpenStack with 3 worker nodes. We will assume some existing familiarity with OpenStack. 
For more information on OpenStack specifics, please see the official OpenStack documentation. + +``` mermaid +flowchart TD + A(((Internet))) --> B@{ shape: hex, label: "Router" } + B --> TN[ Network] --> NS@{ shape: tag-rect, label: "Subnetwork" } + NC@{ shape: braces, label: "Optional: Floating IP/Port(s)" } --> C + NS --> C{Loadbalancer} + NS --> X[Jump 0] + JC@{ shape: braces, label: "Floating IP" } --> X + C -->D[Controller 0] + D <--> G[Worker 0] + D <--> H[Worker 1] + D <--> I[Worker 2] + C -->E[Controller 1] + E <--> G[Worker 0] + E <--> H[Worker 1] + E <--> I[Worker 2] + C -->F[Controller 2] + F <--> G[Worker 0] + F <--> H[Worker 1] + F <--> I[Worker 2] + X <--> D + X <--> E + X <--> F +``` + +### Environment Setup + +!!! note "This blog post was written with the following environment assumptions already existing" + + - Router: `tenant-router` + - Network: `tenant-net` + - Subnet: `tenant-subnet` + - Key Pair: `tenant-key` + + If the project isn't setup completly, checkout the [getting started guide](https://blog.rackspacecloud.com/blog/2024/06/18/getting_started_with_rackspace_openstack_flex). + + All of the **tenant** items used within this post can be replaced with individual values. + +The `openstack` client is assumed to be setup with a functional `clouds.yaml` file to interact with the cloud. This file will provide the necessary config to talk to with OpenStack Flex. Additional instructions on setting up the the OpenStack client can be found [here](https://docs.openstack.org/cli/quick-start.html). + +## Network Infrastructure + +The network setup will cover the creation of a router, network, subnet, load balancer, and ports. + +### Creating loadbalancer + +The OpenStack Flex Loadbalancer used for this environment is a Layer 4 TCP load balancer powered by the OVN loadbalancer solution. The Loadbalancer will be used to distribute traffic to the control plane nodes. + +!!! 
tip "Check the loadbalancer providers available within the environment" + + ``` shell + openstack --os-cloud default loadbalancer provider list + ``` + +Create load balancer, updating vip-subnet-id if necessary + +``` shell +openstack --os-cloud default loadbalancer create --provider ovn \ + --name talos-control-plane \ + --vip-subnet-id tenant-subnet +``` + +Store the load balancer ID for later use + +``` shell +LB_ID=$(openstack --os-cloud default loadbalancer show talos-control-plane -f value -c id) +``` + +Create listener + +``` shell +openstack --os-cloud default loadbalancer listener create --name talos-control-plane-listener \ + --protocol TCP \ + --protocol-port 6443 talos-control-plane +``` + +Create Pool + +``` shell +openstack --os-cloud default loadbalancer pool create --name talos-control-plane-pool \ + --lb-algorithm SOURCE_IP_PORT \ + --listener talos-control-plane-listener \ + --protocol TCP +``` + +Create health monitoring + +``` shell +openstack --os-cloud default loadbalancer healthmonitor create \ + --delay 5 \ + --max-retries 4 \ + --timeout 10 \ + --type TCP talos-control-plane-pool +``` + +Retrieve the VIP for the load balancer + +``` shell +export LB_PRIVATE_VIP=$(openstack --os-cloud default loadbalancer show talos-control-plane -f json | jq -r .vip_address) +``` + +## Create the Image + +First, download the OpenStack image from a [Talos Image Factory](https://factory.talos.dev/?arch=amd64&board=undefined&cmdline-set=true&extensions=-&extensions=siderolabs%2Fiscsi-tools&extensions=siderolabs%2Fqemu-guest-agent&extensions=siderolabs%2Futil-linux-tools&platform=openstack&secureboot=undefined&target=cloud&version=1.8.2). + +!!! example "At the time of this writing the latest image was 1.8.2" + + ``` shell + wget https://factory.talos.dev/image/88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b/v1.8.2/openstack-amd64.raw.xz + ``` + +The image comes pre-bundled with the following extensions + +| Extension | Description | +|-----------|-------------| +| `siderolabs/iscsi-tools` | iSCSI tools for Talos | +| `siderolabs/qemu-guest-agent` | QEMU Guest Agent for Talos | +| `siderolabs/util-linux-tools` | Util Linux tools for Talos | + +Once the image is downloaded, decompress the file. + +``` shell +xz --decompress -v openstack-amd64.raw.xz +``` + +After decompressing the file downloaded, the command will result in a raw image file named, `openstack-amd64.raw`. + +### Upload the Image + +Once the image is downloaded, upload it to OpenStack with the following command. + +``` shell +openstack --os-cloud default image create \ + --progress \ + --disk-format raw \ + --container-format bare \ + --file openstack-amd64.raw \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=talos \ + --property os_distro=talos \ + --property os_version=18.2 \ + --tag "siderolabs/iscsi-tools" \ + --tag "siderolabs/util-linux-tools" \ + --tag "siderolabs/qemu-guest-agent" \ + talos-18.2 +``` + +This command will prepare the image to run in a KVM environment, with UEFI firmware, and the Talos operating system. The image will be named `talos-18.2`. 
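Before moving on, the upload can be verified; once Glance has finished processing the file, the image should report an `active` status:

``` shell
openstack --os-cloud default image show talos-18.2 -f value -c status
```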
For a full overview of how we are constructing our image metadata, see our [OpenStack Image](https://docs.rackspacecloud.com/openstack-glance-images) documentation for more information. + +## Security Groups + +Security groups allow operators to control the traffic to and from an instances. We will create two security groups, one for the tenant and one for the Talos control plane. + +### Create a tenant security group + +Be sure that the tenant security group, `tenant-secgroup` is permiting SSH traffic, this will be important for the jump host. + +``` shell +openstack --os-cloud default security group create tenant-secgroup +``` + +Add an SSH rule to the security group, allowing traffic from anywhere. + +``` shell +openstack --os-cloud default security group rule create tenant-secgroup \ + --protocol tcp \ + --ingress \ + --remote-ip 0.0.0.0/0 \ + --dst-port 22 +``` + +### Create a Talos control plane security group + +Create a Talos security group, this `talos-secgroup` will be used to permit Talos control plane and kubernetes traffic within the cluster. + +``` shell +openstack --os-cloud default security group create talos-secgroup +``` + +The security group will be used to permit traffic to the control plane nodes. We will open the following ports: + +| Port | Direction | Value | +|------|-----------|-------| +| Talos control plane | Ingress | 50000 | +| Talos workers | Ingress | 50001 | +| Kubernetes API | Ingress | 6443 | + +The security group will be used to permit traffic to the control plane nodes. We will open the following ports: + +``` shell +openstack --os-cloud default security group rule create --ingress --protocol tcp --dst-port 6443 talos-secgroup +openstack --os-cloud default security group rule create --ingress --protocol tcp --dst-port 50000 talos-secgroup +openstack --os-cloud default security group rule create --ingress --protocol tcp --dst-port 50001 talos-secgroup +openstack --os-cloud default security group rule create --ingress --protocol tcp talos-secgroup +openstack --os-cloud default security group rule create --ingress --protocol udp talos-secgroup +``` + +Additional rules can be added as needed. Refer to the Talos Network Connectivity [documentation](https://www.talos.dev/v1.8/learn-more/talos-network-connectivity) for more information on additional ports and protocols that may be needed for the environment. + +## Network Ports + +Creating the network ports allows us to use IP addresses for the control plane and jump nodes in a deterministic way. These ports will have our security groups attached and will be used to associate floating IPs. + +``` shell +export JUMP_0=$(openstack --os-cloud default port create --security-group tenant-secgroup --security-group talos-secgroup --network tenant-net jump-0 -f json | jq -r '.fixed_ips[0].ip_address') +export CONTROLLER_0=$(openstack --os-cloud default port create --security-group talos-secgroup --network tenant-net talos-control-plane-0 -f json | jq -r '.fixed_ips[0].ip_address') +export CONTROLLER_1=$(openstack --os-cloud default port create --security-group talos-secgroup --network tenant-net talos-control-plane-1 -f json | jq -r '.fixed_ips[0].ip_address') +export CONTROLLER_2=$(openstack --os-cloud default port create --security-group talos-secgroup --network tenant-net talos-control-plane-2 -f json | jq -r '.fixed_ips[0].ip_address') +``` + +!!! note + + The jump-0 port has both the `tenant-secgroup` and `talos-secgroup` security groups. The control plane ports have only the `talos-secgroup` security group. 
+ + The above commands will store the port IP addresses in the variables `JUMP_0`, `CONTROLLER_0`, `CONTROLLER_1`, and `CONTROLLER_2`. These variables will be used in the next step. Validate the variables are defined and have the correct values by running `echo $JUMP_0 $CONTROLLER_0 $CONTROLLER_1 $CONTROLLER_2`. + +### Associate port’s private IPs to loadbalancer + +Create the loadbalancer members for each port IP. + +``` shell +openstack --os-cloud default loadbalancer member create --subnet-id tenant-subnet --address ${CONTROLLER_0} --protocol-port 6443 talos-control-plane-pool +openstack --os-cloud default loadbalancer member create --subnet-id tenant-subnet --address ${CONTROLLER_1} --protocol-port 6443 talos-control-plane-pool +openstack --os-cloud default loadbalancer member create --subnet-id tenant-subnet --address ${CONTROLLER_2} --protocol-port 6443 talos-control-plane-pool +``` + +### Associate floating IPs to the `jump-0` port + +Create a floating IP for the jump host. + +``` shell +openstack --os-cloud default floating ip create --port jump-0 PUBLICNET +``` + +Retrieve the floating IP for the jump host. + +``` shell +export JUMP_PUBLIC_VIP=$(openstack --os-cloud default floating ip list --fixed-ip-address $JUMP_0 -f json | jq -r '.[0]."Floating IP Address"') +``` + +### (Optional) Controller floating IPs + +This setup is making the assumption that the `talosctl` command will be executed from the jump host. If the `talosctl` command will be executed from outside the Jump host, floating IPs will be needed for the controller nodes. + +Create a floating IP for the load balancer. + +``` shell +openstack --os-cloud default floating ip create --port ovn-lb-vip-${LB_ID} PUBLICNET +``` + +Retrieve the VIP for the load balancer. + +``` shell +export LB_PUBLIC_VIP=$(openstack --os-cloud default floating ip list --fixed-ip-address ${LB_PRIVATE_VIP} -f json | jq -r '.[0]."Floating IP Address"') +``` + +``` shell +openstack --os-cloud default floating ip create --port talos-control-plane-0 PUBLICNET +openstack --os-cloud default floating ip create --port talos-control-plane-1 PUBLICNET +openstack --os-cloud default floating ip create --port talos-control-plane-2 PUBLICNET +``` + +## Build the Jump Host + +The jump host will be used to interact with the Talos cluster. We will use the floating IP we created earlier to access the jump host. + +``` shell +openstack --os-cloud default server create jump-0 --flavor gp.0.1.2 \ + --nic port-id=jump-0 \ + --image Debian-12 \ + --key-name tenant-key +``` + +Login to the jump host and install `talosctl`. + +``` shell +ssh debian@${JUMP_PUBLIC_VIP} 'curl -sL https://talos.dev/install | sh' +``` + +!!! tip + + See the Talos install [documentation](https://www.talos.dev/v1.8/talos-guides/install/talosctl/) for more information on installing `talosctl`. + +## Cluster Configuration + +With our networking deployed, and the jump host online fetch the IP for our OVN Loadbalancer. + +Generate the configuration. + +``` shell +# If a floating IP was created for the load balancer, use the LB_PUBLIC_VIP otherwise use the LB_PRIVATE_VIP +ssh debian@${JUMP_PUBLIC_VIP} "talosctl gen config talos-k8s-openstack https://${LB_PRIVATE_VIP}:6443" +``` + +Upon the completion of this command the local directory will contain a `talosconfig`, `controlplane.yaml`, `worker.yaml` files. This file will be used to interact with the Talos cluster. + +Retrieve these files from the jump host to the machine running the OpenStack CLI. 
+
+``` shell
+scp debian@${JUMP_PUBLIC_VIP}:talosconfig .
+scp debian@${JUMP_PUBLIC_VIP}:controlplane.yaml .
+scp debian@${JUMP_PUBLIC_VIP}:worker.yaml .
+```
+
+## Server Creation
+
+To build the Talos cluster, we will create the control plane nodes and worker nodes. We will use the `controlplane.yaml` and `worker.yaml` files we retrieved from the jump host.
+
+### Create control plane nodes
+
+The following command will create 3 control plane nodes. Adjust the number of control plane nodes by changing the `seq` range. The flavor used for the control plane nodes is `gp.0.2.4`, which will provide the controllers with 2 vCPUs and 4GB of RAM.
+
+!!! tip
+
+    Depending on where the command is executed from, the `controlplane.yaml` file may need to be retrieved from the jump host.
+
+``` shell
+for i in $(seq 0 1 2); do
+  openstack --os-cloud default server create \
+    talos-control-plane-$i \
+    --flavor gp.0.2.4 \
+    --nic port-id=talos-control-plane-$i \
+    --image talos-18.2 \
+    --key-name tenant-key \
+    --user-data controlplane.yaml
+done
+```
+
+### Create worker nodes
+
+The following command will create 3 worker nodes. Adjust the number of worker nodes by changing the `seq` range. The flavor used for the worker nodes is `gp.0.2.4`, which will provide the workers with 2 vCPUs and 4GB of RAM.
+
+!!! tip
+
+    Depending on where the command is executed from, the `worker.yaml` file may need to be retrieved from the jump host.
+
+``` shell
+for i in $(seq 0 1 2); do
+  openstack --os-cloud default server create \
+    talos-worker-$i \
+    --flavor gp.0.2.4 \
+    --network tenant-net \
+    --image talos-18.2 \
+    --key-name tenant-key \
+    --security-group talos-secgroup \
+    --user-data worker.yaml
+done
+```
+
+!!! note
+
+    Adding more workers later will follow this same pattern.
+
+## Talos Cluster
+
+At this point we'll have a fully constructed environment that looks like this:
+
+![OpenStack Topology](assets/images/2024-11-04/os-topology.png)
+
+The cluster will consist of 3 control plane nodes and 3 worker nodes. The control plane nodes will be behind the OVN Loadbalancer and the worker nodes will be on the `tenant-net` network. The jump host will be accessible via the floating IP we created earlier. The project is set up with security groups and ports to allow traffic to the control plane nodes and jump host and permit the environment to be rebuilt in a fully reproducible manner.
+
+With the cluster online, we can now interact with the cluster using `talosctl`.
+
+### Bootstrap Etcd
+
+It is now time to bootstrap the cluster. The following commands will be executed from the jump host.
+
+#### Set the endpoints and nodes
+
+Within the `talosconfig` file, set the endpoint and node.
+
+``` shell
+ssh debian@${JUMP_PUBLIC_VIP} "talosctl --talosconfig talosconfig config endpoint ${CONTROLLER_0}"
+ssh debian@${JUMP_PUBLIC_VIP} "talosctl --talosconfig talosconfig config node ${CONTROLLER_0}"
+```
+
+#### Bootstrap etcd
+
+Run the bootstrap command to start the cluster.
+
+``` shell
+ssh debian@${JUMP_PUBLIC_VIP} "talosctl --talosconfig talosconfig bootstrap"
+```
+
+The bootstrap command will take a few minutes to complete. Once the command has completed, check the status of the cluster.
+
+``` shell
+ssh debian@${JUMP_PUBLIC_VIP} "talosctl --talosconfig talosconfig service"
+```
+
+!!!
example "The output should look something like this" + + ``` shell + NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT + 10.0.0.208 apid Running OK 5m0s ago Health check successful + 10.0.0.208 containerd Running OK 5m4s ago Health check successful + 10.0.0.208 cri Running OK 5m0s ago Health check successful + 10.0.0.208 dashboard Running ? 5m2s ago Process Process(["/sbin/dashboard"]) started with PID 2169 + 10.0.0.208 etcd Running OK 3m34s ago Health check successful + 10.0.0.208 kubelet Running OK 4m53s ago Health check successful + 10.0.0.208 machined Running OK 5m4s ago Health check successful + 10.0.0.208 syslogd Running OK 5m3s ago Health check successful + 10.0.0.208 trustd Running OK 5m0s ago Health check successful + 10.0.0.208 udevd Running OK 5m3s ago Health check successful + ``` + +The cluster will also have all the membership information, which can be viewed with the following command. + +``` shell +talosctl --talosconfig ./talosconfig get members +``` + +!!! example "The output should look something like this" + + ``` shell + NODE NAMESPACE TYPE ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES + 10.0.0.208 cluster Member talos-control-plane-0 19 talos-control-plane-0.novalocal controlplane Talos (v1.8.2) ["10.0.0.208"] + 10.0.0.208 cluster Member talos-control-plane-1 15 talos-control-plane-1.novalocal controlplane Talos (v1.8.2) ["10.0.0.60"] + 10.0.0.208 cluster Member talos-control-plane-2 12 talos-control-plane-2.novalocal controlplane Talos (v1.8.2) ["10.0.0.152"] + 10.0.0.208 cluster Member talos-worker-0 6 talos-worker-0.novalocal worker Talos (v1.8.2) ["10.0.0.232"] + 10.0.0.208 cluster Member talos-worker-1 2 talos-worker-1.novalocal worker Talos (v1.8.2) ["10.0.0.39"] + 10.0.0.208 cluster Member talos-worker-2 5 talos-worker-2.novalocal worker Talos (v1.8.2) ["10.0.0.110"] + ``` + +## Setup the Kubernetes Config + +With the cluster bootstrapped, we can now setup the Kubernetes configuration and begin interacting with the environment. + +At this stage login to the jump host and install the `kubectl` binary. + +``` shell +ssh debian@${JUMP_PUBLIC_VIP} +``` + +### Install the kubectl binary + +Install the `kubectl` binary is optional, but an easy way to interact with Talos Kubernetes environment now that it is deployed. + +``` shell +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +``` + +Move the binary to a location in within the `$PATH`. + +``` shell +sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl +``` + +### Retrieve the admin kubeconfig + +At this point we can retrieve the admin kubeconfig by running + +``` shell +talosctl --talosconfig talosconfig kubeconfig ~/.kube/config +``` + +### Check the Kubernetes Nodes + +With the kubeconfig in place, check the nodes in the cluster. + +``` shell +kubectl get nodes +``` + +!!! example "The output should look something like this" + + ``` shell + NAME STATUS ROLES AGE VERSION + talos-control-plane-0 Ready control-plane 2m6s v1.31.2 + talos-control-plane-1 Ready control-plane 2m2s v1.31.2 + talos-control-plane-2 Ready control-plane 2m5s v1.31.2 + talos-worker-0 Ready 118s v1.31.2 + talos-worker-1 Ready 112s v1.31.2 + talos-worker-2 Ready 112s v1.31.2 + ``` + +Assuming the output of the command is matching expectations for the deployment, it is safe to assume the Talos cluster on OpenStack Flex is ready for work. + +## Conclusion + +To recap, this blog post has outlined the steps to deploy a Talos cluster on OpenStack Flex. 
The cluster consisted of three control plane nodes and three worker nodes. We created the necessary network infrastructure, security groups, and ports to support the cluster. We then built the jump host and retrieved the necessary configuration files to interact with the cluster. We bootstrapped the cluster using `talosctl`. Finally, we retrieved the admin kubeconfig and installed the `kubectl` binary to interact with the cluster.
+
+Talos is a powerful operating system that is well-suited for OpenStack Flex environments. By combining the minimalistic and secure nature of Talos with the flexibility and scalability of OpenStack Flex, operators and administrators can create a robust and reliable infrastructure for applications. With the steps outlined in this guide, admins can easily deploy Talos clusters on OpenStack Flex and take advantage of the benefits that both platforms have to offer. Whether running a small environment or a large production system, Talos and OpenStack Flex provide the tools needed to build and manage infrastructure effectively.
diff --git a/docs/blog/posts/2024-11-05-running-metallb-on-openstack-flex.md b/docs/blog/posts/2024-11-05-running-metallb-on-openstack-flex.md
new file mode 100644
index 0000000..b320607
--- /dev/null
+++ b/docs/blog/posts/2024-11-05-running-metallb-on-openstack-flex.md
@@ -0,0 +1,199 @@
+---
+date: 2024-11-05
+title: Running MetalLB on OpenStack Flex
+authors:
+  - cloudnull
+description: >
+  Running MetalLB on OpenStack Flex
+categories:
+  - Kubernetes
+  - Authentication
+---
+
+# Running MetalLB on OpenStack Flex
+
+![alt text](assets/images/2024-11-05/metallb-logo.png){ align=left : style="max-width:150px;background-color:rgb(28 144 243);" }
+
+MetalLB is a network load balancer implementation for Kubernetes clusters. It runs as a Kubernetes controller that watches for services of type `LoadBalancer` and assigns them addresses, which it announces using standard routing protocols. In this post we'll set up a set of allowed address pairs on the OpenStack Flex network to allow MetalLB to assign floating IPs to the load balancer service.
+
+
+
+### Environment Setup
+
+!!! note "This blog post was written with the following environment assumptions already existing"
+
+    - Network: `tenant-net`
+
+## Foundation
+
+This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster is running following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex).
+
+All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy MetalLB to our Kubernetes cluster using Helm.
+
+!!! note
+
+    The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node.
+
+### Prerequisites
+
+Before we begin, we need to ensure that we have the following prerequisites in place:
+
+- An OpenStack Flex project with a Kubernetes cluster
+- A working knowledge of Kubernetes
+
+## Create new allowed address pairs
+
+To allow MetalLB to assign floating IPs to the load balancer service, we need to create a set of allowed address pairs on the OpenStack Flex network.
The allowed address pairs will allow the MetalLB pods to assign floating IPs to the load balancer service. + +1. Create a new port. + +``` shell +METAL_LB_IP=$(openstack --os-cloud default port create --network tenant-net metallb-vip-0 -f json | jq -r '.fixed_ips[0].ip_address') +``` + +!!! note "A word about port security" + + The port security group will be set to the default security group of the network. If security group changes are needed, set the `--security-group` flag when running to the `port create` command. + +2. Associate the addressed assigned to the port as an allowed address pairs of our worker nodes. + +``` shell +WORKER_0_PORT=$(openstack --os-cloud default port list --server talos-worker-0 -c ID -f value) +openstack --os-cloud default port set --allowed-address ip-address=${METAL_LB_IP} ${WORKER_0_PORT} + +WORKER_1_PORT=$(openstack --os-cloud default port list --server talos-worker-1 -c ID -f value) +openstack --os-cloud default port set --allowed-address ip-address=${METAL_LB_IP} ${WORKER_1_PORT} + +WORKER_2_PORT=$(openstack --os-cloud default port list --server talos-worker-2 -c ID -f value) +openstack --os-cloud default port set --allowed-address ip-address=${METAL_LB_IP} ${WORKER_2_PORT} +``` + +!!! note + + The IP address ${METAL_LB_IP} is an example. The port create process will assign a free IP address from the supplied network. + Additionally it is possible to have multiple IP addresses in the allowed address pairs. Repeat the above steps for each IP address that will be used within MetalLB. + +## Create The MetalLB Namespace + +``` shell +kubectl create namespace metallb-system +``` + +Set the namespace security policy. + +``` shell +kubectl label --overwrite namespace metallb-system \ + pod-security.kubernetes.io/enforce=privileged \ + pod-security.kubernetes.io/enforce-version=latest \ + pod-security.kubernetes.io/warn=privileged \ + pod-security.kubernetes.io/warn-version=latest \ + pod-security.kubernetes.io/audit=privileged \ + pod-security.kubernetes.io/audit-version=latest +``` + +### Add the Teleport Helm Repository + +``` shell +helm repo add metallb https://metallb.github.io/metallb +``` + +Now update the repository: + +``` shell +helm repo update +``` + +Run the following command to install MetalLB + +``` shell +helm upgrade --install --namespace metallb-system metallb metallb/metallb +``` + +## Gather Node Information + +``` shell +kubectl get nodes -o wide +``` + +!!! example "The output should look like this" + + ``` shell + NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME + talos-control-plane-0 Ready control-plane 19h v1.31.2 10.0.0.208 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + talos-control-plane-1 Ready control-plane 19h v1.31.2 10.0.0.60 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + talos-control-plane-2 Ready control-plane 19h v1.31.2 10.0.0.152 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + talos-worker-0 Ready 19h v1.31.2 10.0.0.145 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + talos-worker-1 Ready 19h v1.31.2 10.0.0.235 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + talos-worker-2 Ready 19h v1.31.2 10.0.0.242 Talos (v1.8.2) 6.6.58-talos containerd://2.0.0-rc.6 + ``` + +MetalLB will be installed on the "worker" nodes in the Kubernetes cluster. In this example, `talos-worker-0`, `talos-worker-1`, and `talos-worker-2`. + +## Install MetalLB + +To install MetalLB, we will use the following manifest + +!!! 
example "talos-metallb.yaml" + + ``` yaml + --- + apiVersion: metallb.io/v1beta1 + kind: IPAddressPool + metadata: + name: openstack-external + namespace: metallb-system + spec: + addresses: + - ${METAL_LB_IP}/32 # The addresses listed here must match the same address in the allowed address pairs + autoAssign: true # Automatically assign an IP address from the pool to a service of type LoadBalancer + --- + apiVersion: metallb.io/v1beta1 + kind: L2Advertisement + metadata: + name: openstack-external-advertisement + namespace: metallb-system + spec: + ipAddressPools: + - openstack-external + nodeSelectors: + - matchLabels: + kubernetes.io/hostname: talos-worker-0 + - matchLabels: + kubernetes.io/hostname: talos-worker-1 + - matchLabels: + kubernetes.io/hostname: talos-worker-2 + ``` + +!!! note "about autoAssign" + + Setting `autoAssign` to `true` allows MetalLB to automatically assign an IP address from the pool to a `LoadBalancer` service. + + Setting the `autoAssign` field to `false` will require operators to manually assign IP space to services. To assign the IP space manually, set an annotation in the service object, `metallb.universe.tf/address-pool`, with the value of the IP address or the name of the pool where the IP address will come from. + + !!! example "Example of manual assignment" + + ``` yaml + annotations: + metallb.universe.tf/address-pool: openstack-external + ``` + +Deploy the MetalLB manifest + +``` shell +kubectl apply -f talos-metallb.yaml +``` + +## Validate MetalLB + +After deployment validate the MetalLB installation by running simple commands. + +``` shell +kubectl --namespace metallb-system get ipaddresspools.metallb.io +``` + +!!! example "The output should look like this" + + ``` shell + NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES + openstack-external true false ["${METAL_LB_IP}/32"] + ``` diff --git a/docs/blog/posts/2024-11-06-running-teleport-cluster-on-openstack-flex.md b/docs/blog/posts/2024-11-06-running-teleport-cluster-on-openstack-flex.md new file mode 100644 index 0000000..27c4b8b --- /dev/null +++ b/docs/blog/posts/2024-11-06-running-teleport-cluster-on-openstack-flex.md @@ -0,0 +1,459 @@ +--- +date: 2024-11-06 +title: Running Teleport Cluster on OpenStack Flex +authors: + - cloudnull +description: > + Running Teleport Cluster on OpenStack Flex +categories: + - Kubernetes + - Authentication +--- + +# Running Teleport Cluster on OpenStack Flex + +![alt text](assets/images/2024-11-06/teleport-logo.png){ align=left } + +Teleport is a modern security gateway for remotely accessing clusters of Linux servers via SSH or Kubernetes. In this guide, we will walk through deploying Teleport on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Teleport software, and configure the service to run on the instance. This setup will allow us to access the Teleport web interface and create new users and roles, and manage access to the instance. The intent of this guide is to provide a simple example of how to deploy Teleport on an OpenStack Flex instance. + + + +## Foundation + +This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster is running following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex). + +This guide also assumes that have metallb deployed on the Kubernetes cluster. 
If do not have metallb deployed, please refer to the [Running MetalLB on OpenStack Flex](https://blog.rackspacecloud.com/blog/2024/11/05/running-metallb-on-openstack-flex) guide. This guide will use the metallb service to route traffic to the Teleport service. + +All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy Teleport to our Kubernetes cluster using Helm. + +!!! note + + The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node. + +### Prerequisites + +Before we begin, we need to ensure that we have the following prerequisites in place: + +- An OpenStack Flex project with a Kubernetes cluster +- A working knowledge of Kubernetes +- A working knowledge of Helm +- A working knowledge of OpenStack Flex +- A working knowledge of PostgreSQL +- A working knowledge of Teleport + +!!! note + + This guide is using Teleport **16.4.6**, and the instructions may vary for other versions. Check the [Teleport documentation](https://goteleport.com/docs/upcoming-releases/) for the most up-to-date information on current releases. + +## Generate EC2 credentials + +The following credentials will be used to authenticate the Teleport service to the S3 API provided by OpenStack Flex Object Storage. + +``` shell +openstack --os-cloud default ec2 credentials create +``` + +!!! example "The output should look similar to the following" + + ``` shell + +------------+----------------------------------+ + | Field | Value | + +------------+----------------------------------+ + | access | ACCESS | + | links | {} | + | project_id | PROJECT_ID | + | secret | SECRET | + | trust_id | None | + | user_id | USER_ID | + +------------+----------------------------------+ + ``` + +Create an aws-config file. + +``` shell +cat > ~/aws-config < ~/aws-credentials <` additional address pairs may be needed for the worker nodes. Refer to the [Running MetalLB on OpenStack Flex](https://blog.rackspacecloud.com/blog/2024/11/05/running-metallb-on-openstack-flex) guide to add additional allowed address pairs. + +## Allowed Address Pairs + +If the [MetalLB](https://blog.rackspacecloud.com/blog/2024/11/05/running-metallb-on-openstack-flex) environment is already configured with an allowed address pair, the IP address or port may need to be adjusted to match the expected port security or security group settings used for Teleport. + +Validate the IP address port settings, and make appropriate changes. + +``` shell +# ip-address is using the value of EXTERNAL-IP +openstack --os-cloud default port list --fixed-ip ip-address=10.0.0.221 +``` + +!!! example + + ``` shell + +--------------------------------------+---------------+-------------------+---------------------------------------------------------------------------+--------+ + | ID | Name | MAC Address | Fixed IP Addresses | Status | + +--------------------------------------+---------------+-------------------+---------------------------------------------------------------------------+--------+ + | bb2b010f-e792-4def-9350-e7a3944daee3 | metallb-vip-0 | fa:16:3e:85:16:9e | ip_address='10.0.0.221', subnet_id='b4448aa6-bb7d-4e01-86c1-80e589d3fb92' | DOWN | + +--------------------------------------+---------------+-------------------+---------------------------------------------------------------------------+--------+ + ``` + +Run the show command to get the port details. 
+
+Run the show command to get the port details.
+
+``` shell
+openstack --os-cloud default port show bb2b010f-e792-4def-9350-e7a3944daee3
+```
+
+!!! tip "If port security is enabled and the security groups are not set appropriately, adjust the port's security groups"
+
+    ``` shell
+    openstack --os-cloud default port set --security-group teleport-secgroup bb2b010f-e792-4def-9350-e7a3944daee3
+    ```
+
+### Associate a Floating IP
+
+If the IP address is not assigned to the port, and the Teleport cluster will be accessed over the internet, associate a floating IP with the port.
+
+``` shell
+openstack --os-cloud default floating ip create --port bb2b010f-e792-4def-9350-e7a3944daee3 PUBLICNET
+```
+
+Retrieve the floating IP address and validate that it is associated with the port.
+
+``` shell
+PUBLIC_VIP=$(openstack --os-cloud default floating ip list --fixed-ip-address 10.0.0.221 -f value -c "Floating IP Address")
+```
+
+## DNS Setup
+
+A domain that will serve as the Teleport endpoint is needed. This guide assumes the record is `teleport.example.com`; however, this value should be replaced with an actual domain.
+
+> At this time, DNS will need to be managed outside of OpenStack Flex.
+
+!!! example "The DNS record should point to the public IP address"
+
+    `${PUBLIC_VIP}` is a placeholder for the public IP address defined in the previous step.
+
+    ``` txt
+    ;; A Records
+    teleport.example.com.    1  IN  A      ${PUBLIC_VIP}
+
+    ;; CNAME Records
+    *.teleport.example.com.  1  IN  CNAME  teleport.example.com.
+    ```
+
+## Access the Teleport Environment
+
+Log in to the Teleport cluster using the `tsh` client.
+
+``` shell
+tsh login --proxy=teleport.example.com --user=YourUser
+```
+
+In a browser, navigate to `https://teleport.example.com` to access the Teleport web interface.
+
+![teleport-web](assets/images/2024-11-06/teleport-web.png)
+
+## Conclusion
+
+In this guide, we have walked through deploying Teleport on an OpenStack Flex environment where Talos is running. We installed the Teleport software and configured the service to run on the cluster. This setup allows us to access the Teleport web interface, create new users and roles, and manage access to the environment. The intent of this guide is to provide a simple example of how to deploy Teleport on OpenStack Flex via Kubernetes and highlight the flexibility of the environment. For more information on Teleport and running Teleport Agents, please refer to the [Teleport documentation](https://goteleport.com/docs/enroll-resources/agents/join-services-to-your-cluster/join-services-to-your-cluster/).
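+
+As a quick follow-up to confirm the deployment end to end, the active session and the resources it can reach can be inspected once `tsh login` succeeds. The commands below are a minimal sketch and assume a `tsh` client from the 16.x series installed on the workstation used for login.
+
+``` shell
+# Show the current Teleport profile, logged-in user, and certificate validity
+tsh status
+
+# List the SSH nodes and Kubernetes clusters available to the logged-in user
+tsh ls
+tsh kube ls
+```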
diff --git a/docs/blog/posts/assets/images/2024-11-04/cockroachlabs-logo.png b/docs/blog/posts/assets/images/2024-11-04/cockroachlabs-logo.png new file mode 100644 index 0000000..883fab1 Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-04/cockroachlabs-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-04/longhorn-logo.png b/docs/blog/posts/assets/images/2024-11-04/longhorn-logo.png new file mode 100644 index 0000000..b23847d Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-04/longhorn-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-04/os-topology.png b/docs/blog/posts/assets/images/2024-11-04/os-topology.png new file mode 100644 index 0000000..307d13b Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-04/os-topology.png differ diff --git a/docs/blog/posts/assets/images/2024-11-04/talos-logo.png b/docs/blog/posts/assets/images/2024-11-04/talos-logo.png new file mode 100644 index 0000000..185baef Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-04/talos-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-05/crunchydata-logo.png b/docs/blog/posts/assets/images/2024-11-05/crunchydata-logo.png new file mode 100644 index 0000000..b1bebb9 Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-05/crunchydata-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-05/metallb-logo.png b/docs/blog/posts/assets/images/2024-11-05/metallb-logo.png new file mode 100644 index 0000000..1c262c7 Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-05/metallb-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-06/teleport-logo.png b/docs/blog/posts/assets/images/2024-11-06/teleport-logo.png new file mode 100644 index 0000000..f29ace7 Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-06/teleport-logo.png differ diff --git a/docs/blog/posts/assets/images/2024-11-06/teleport-web.png b/docs/blog/posts/assets/images/2024-11-06/teleport-web.png new file mode 100644 index 0000000..c3bd2f6 Binary files /dev/null and b/docs/blog/posts/assets/images/2024-11-06/teleport-web.png differ diff --git a/docs/overrides/stylesheets/adr.css b/docs/overrides/stylesheets/adr.css index 7fc4a9c..042546e 100644 --- a/docs/overrides/stylesheets/adr.css +++ b/docs/overrides/stylesheets/adr.css @@ -107,10 +107,10 @@ .md-content { flex-grow: 1; min-width: 0; - max-width: 1000px; + max-width: 1500px; } -@media only screen and (min-width: 1220px) { +@media only screen and (min-width: 1740px) { .md-main { flex-grow: 1; margin-left: auto; @@ -222,15 +222,6 @@ .md-nav__item .md-nav__link--active,.md-nav__item .md-nav__link--active code { color: #9e0000; } - .md-meta__link { - color: #9e0000; - } - .md-search-result__more>summary>div { - color: #eb0000; - font-size: .64rem; - padding: .75em .8rem; - transition: color .25s,background-color .25s - } } [data-md-color-accent=red] {