feat: a series of blog posts for openstack-flex

This change adds a new blog post for Talos, which I'll be using for content in other application-centric posts being developed for GA.

1. Updates the CSS for a better reading experience on long content.
2. Create a Talos cluster.
3. Create a storage cluster with Longhorn.
4. Create a Crunchy Postgres cluster.
5. Create a CockroachDB cluster.
6. Deploy MetalLB, covering allowed address pairs.
7. Create a highly available Teleport cluster.

All blog posts work together in the series to show how administrators can leverage openstack-flex.

Related-Issue: https://rackspace.atlassian.net/browse/OSPC-118
Signed-off-by: Kevin Carter <[email protected]>
Showing 15 changed files with 1,738 additions and 11 deletions.
146 changes: 146 additions & 0 deletions
docs/blog/posts/2024-11-04-running-cockroachdb-on-openstack-flex.md
@@ -0,0 +1,146 @@
---
date: 2024-11-04
title: Running CockroachDB on OpenStack Flex
authors:
  - cloudnull
description: >
  Running CockroachDB on OpenStack Flex
categories:
  - Kubernetes
  - Database
---

# Running CockroachDB on OpenStack Flex

![CockroachDB](assets/images/2024-11-04/cockroachlabs-logo.png){ align=left }
CockroachDB is a distributed SQL database purpose built for the cloud, providing consistency, fault tolerance, and scalability. In this guide, we will walk through deploying CockroachDB on an OpenStack Flex hosted Kubernetes cluster. As operators, we will need to create a namespace, install the CockroachDB operator, and configure the database to run on the cluster. The intent of this guide is to provide a simple, functional example of how to deploy CockroachDB on OpenStack Flex on Kubernetes.
<!-- more -->

## Foundation

This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster was deployed following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex).

This guide also assumes that the Kubernetes cluster has a working storage provider which can be used to create `PersistentVolumeClaims`. If the environment does not have a working storage provider, one will need to be deployed before proceeding with this guide. In this guide, we will use Longhorn as our storage provider, which was deployed as part of the Talos on OpenStack Flex setup. Read more about the Longhorn setup used for this post [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_longhorn_on_openstack_flex).
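To confirm that a usable storage class is present before continuing, a quick check can be run with `kubectl` (a minimal sketch; the expectation that Longhorn appears, typically as the default class, follows from the Longhorn guide above):

``` shell
# List the storage classes available to the cluster; Longhorn should be present.
kubectl get storageclass
```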
All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy CockroachDB to our Kubernetes cluster using `kubectl`.

!!! note

    The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node.
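For reference, reaching the Jump Host generally looks like the following (a sketch; the `debian` login user is an assumption based on the default Debian cloud image and is not part of the original guide):

``` shell
# Connect to the Jump Host over SSH using its public IP address.
ssh debian@${JUMP_PUBLIC_VIP}
```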
### Prerequisites

Before we begin, we need to ensure that we have the following prerequisites in place:

- An OpenStack Flex project with a Kubernetes cluster
- A working knowledge of Kubernetes
- A working knowledge of Helm
- A working knowledge of OpenStack Flex
- At least 180GiB of storage available to `PersistentVolumeClaims` (Longhorn)

!!! note

    This guide uses the CockroachDB Operator **v2.15.1**, which deploys CockroachDB **v24.2.3**, and the instructions may vary for other versions. Check the [CockroachDB documentation](https://www.cockroachlabs.com/whatsnew/) for the most up-to-date information on current releases.

Create a new namespace.
``` shell
kubectl create namespace cockroach-operator-system
```

Set the namespace security policy.

``` shell
kubectl label --overwrite namespace cockroach-operator-system \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/enforce-version=latest \
        pod-security.kubernetes.io/warn=privileged \
        pod-security.kubernetes.io/warn-version=latest \
        pod-security.kubernetes.io/audit=privileged \
        pod-security.kubernetes.io/audit-version=latest
```
## Install the CockroachDB Operator

Deploying the CockroachDB operator involves installing the CRDs and the operator itself.

### Deploy the CockroachDB CRDs

``` shell
kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/install/crds.yaml
```
### Deploy the CockroachDB Operator

``` shell
kubectl --namespace cockroach-operator-system apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/install/operator.yaml
```

Verify that the operator pod is running.

``` shell
kubectl --namespace cockroach-operator-system get pods
```

!!! example "The output should look similar to the following"

    ``` shell
    NAME                                         READY   STATUS    RESTARTS   AGE
    cockroach-operator-manager-c8f97d954-5fwh4   1/1     Running   0          38s
    ```
### Deploy the CockroachDB Cluster

``` shell
kubectl --namespace cockroach-operator-system apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/examples/example.yaml
```
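Before moving on, it can help to confirm that the database pods reach a `Running` state. A simple way to watch them come up (not part of the original steps) is:

``` shell
# Watch the CockroachDB pods; the example manifest creates a three node cluster.
kubectl --namespace cockroach-operator-system get pods --watch
```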
!!! note "About the example cluster"

    This is a quick and easy cluster environment which is suitable for a wide range of purposes. However, for production use, administrators should consider a more robust configuration by reviewing this file and the [CockroachDB documentation](https://www.cockroachlabs.com/docs/stable/).
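For reference, the fields most commonly tuned in that manifest look roughly like the following. This is a trimmed sketch based on the operator's example manifest; the values shown are illustrative, not recommendations, and the field names should be verified against the operator version being deployed.

``` yaml
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: cockroachdb
spec:
  nodes: 3                   # Number of CockroachDB pods in the cluster
  tlsEnabled: true           # Secure mode; required for the secure client used below
  image:
    name: cockroachdb/cockroach:v24.2.3
  dataStore:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "60Gi"  # 3 x 60Gi matches the 180GiB prerequisite above
  resources:
    requests:
      cpu: 2
      memory: 8Gi
    limits:
      cpu: 2
      memory: 8Gi
```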
#### Deploy the CockroachDB Client

Deploying the CockroachDB client is simple: it requires installing the client pod and the client secret.

``` shell
kubectl --namespace cockroach-operator-system create -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.15.1/examples/client-secure-operator.yaml
```

Once the client pod is running, connect to the cluster with the built-in SQL shell.

``` shell
kubectl --namespace cockroach-operator-system exec -it cockroachdb-client-secure \
        -- ./cockroach sql \
        --certs-dir=/cockroach/cockroach-certs \
        --host=cockroachdb-public
```
!!! example "The above command will drop you into the SQL shell"

    ``` shell
    # Welcome to the CockroachDB SQL shell.
    # All statements must be terminated by a semicolon.
    # To exit, type: \q.
    #
    # Server version: CockroachDB CCL v24.2.3 (x86_64-pc-linux-gnu, built 2024/09/23 22:30:53, go1.22.5 X:nocoverageredesign) (same version as client)
    # Cluster ID: 162f3cf8-2699-4c59-b58d-a43afb34497c
    #
    # Enter \? for a brief introduction.
    #
    root@cockroachdb-public:26257/defaultdb>
    ```
Running a simple `show databases;` command should return the following output.

``` shell
  database_name | owner | primary_region | secondary_region | regions | survival_goal
----------------+-------+----------------+------------------+---------+----------------
  defaultdb     | root  | NULL           | NULL             | {}      | NULL
  postgres      | root  | NULL           | NULL             | {}      | NULL
  system        | node  | NULL           | NULL             | {}      | NULL
(3 rows)

Time: 6ms total (execution 5ms / network 0ms)
```
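Beyond listing databases, a small write and read round trip is a quick way to confirm the cluster is accepting work. A minimal sketch, run from the same SQL shell (the `smoke` database and table names are purely illustrative):

``` sql
-- Create a throwaway database and table, write a row, read it back, then clean up.
CREATE DATABASE IF NOT EXISTS smoke;
CREATE TABLE IF NOT EXISTS smoke.ping (id INT PRIMARY KEY, note STRING);
INSERT INTO smoke.ping (id, note) VALUES (1, 'hello') ON CONFLICT (id) DO NOTHING;
SELECT * FROM smoke.ping;
DROP DATABASE smoke CASCADE;
```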
## Conclusion

In this guide, we have walked through deploying CockroachDB on OpenStack Flex, on a Kubernetes cluster running Talos. We have also deployed the CockroachDB client and connected to the cluster to verify the deployment. This guide is intended to provide a simple example of how to deploy CockroachDB on OpenStack Flex. For more information on CockroachDB, please refer to the [CockroachDB documentation](https://www.cockroachlabs.com/docs).
150 changes: 150 additions & 0 deletions
docs/blog/posts/2024-11-04-running-crunchy-postgres-on-openstack-flex.md
@@ -0,0 +1,150 @@
---
date: 2024-11-05
title: Running Postgres Operator from Crunchy Data on OpenStack Flex
authors:
  - cloudnull
description: >
  Running Postgres Operator from Crunchy Data on OpenStack Flex
categories:
  - Kubernetes
  - Database
---

# Running Crunchy Data Postgres on OpenStack Flex

![Crunchy Data](assets/images/2024-11-05/crunchydata-logo.png){ align=left style="max-width:125px" }

Crunchy Data provides a Postgres Operator that simplifies the deployment and management of PostgreSQL clusters on Kubernetes. In this guide, we will walk through deploying the Postgres Operator from Crunchy Data on an OpenStack Flex hosted Kubernetes cluster. As operators, we will need to create a namespace, install the Postgres Operator, and configure a PostgreSQL cluster to run on it. The intent of this guide is to provide a simple, functional example of how to deploy the Postgres Operator from Crunchy Data on OpenStack Flex on Kubernetes.
<!-- more -->

## Foundation

This guide assumes there is an operational Kubernetes cluster running on OpenStack Flex. To support this requirement, this guide will assume that the Kubernetes cluster was deployed following the Talos guide, which can be found [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_talos_on_openstack_flex).

This guide also assumes that the Kubernetes cluster has a working storage provider which can be used to create `PersistentVolumeClaims`. If the environment does not have a working storage provider, one will need to be deployed before proceeding with this guide. In this guide, we will use Longhorn as our storage provider, which was deployed as part of the Talos on OpenStack Flex setup. Read more about the Longhorn setup used for this post [here](https://blog.rackspacecloud.com/blog/2024/11/04/running_longhorn_on_openstack_flex).
All operations will start from our Jump Host, which is a Debian instance running on OpenStack Flex adjacent to the Talos cluster. The Jump Host will be used to deploy the Postgres Operator to our Kubernetes cluster using Helm.

!!! note

    The jump host referenced within this guide will use the following variable, `${JUMP_PUBLIC_VIP}`, which is assumed to contain the public IP address of the node.
### Prerequisites

Before we begin, we need to ensure that we have the following prerequisites in place:

- An OpenStack Flex project with a Kubernetes cluster
- A working knowledge of Kubernetes
- A working knowledge of Helm
- A working knowledge of OpenStack Flex
- At least 1GiB of storage available to `PersistentVolumeClaims` (Longhorn)

!!! note

    This guide uses the Crunchy Data Postgres Operator **5.7**, and the instructions may vary for other versions. Check the [Crunchy Data documentation](https://access.crunchydata.com/documentation/postgres-operator/latest) for the most up-to-date information on current releases.
## Create a New Namespace

``` shell
kubectl create namespace crunchy-operator-system
```

Set the namespace security policy.

``` shell
kubectl label --overwrite namespace crunchy-operator-system \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/enforce-version=latest \
        pod-security.kubernetes.io/warn=privileged \
        pod-security.kubernetes.io/warn-version=latest \
        pod-security.kubernetes.io/audit=privileged \
        pod-security.kubernetes.io/audit-version=latest
```
## Install the Crunchy Data Postgres Operator

Before getting started, set a few environment variables that will be used throughout the guide.

``` shell
export CRUNCHY_OPERATOR_NAMESPACE=crunchy-operator-system
export CRUNCHY_CLUSTER_NAMESPACE=crunchy-operator-system  # This can be a different namespace
export CRUNCHY_CLUSTER_NAME=hippo
export CRUNCHY_DB_REPLICAS=3
export CRUNCHY_DB_SIZE=1Gi
```
Retrieve the operator helm chart and change into the directory.

``` shell
git clone https://github.com/CrunchyData/postgres-operator-examples
cd postgres-operator-examples
```

Install the operator helm chart.

``` shell
helm upgrade --install --namespace ${CRUNCHY_OPERATOR_NAMESPACE} crunchy-operator helm/install
```
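Before creating a cluster, it is worth confirming that the operator pod has come up (a quick check, not part of the original steps):

``` shell
# The Postgres Operator pod should report a Running status before any clusters are created.
kubectl --namespace ${CRUNCHY_OPERATOR_NAMESPACE} get pods
```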
## Create a Crunchy Data Postgres Cluster

Create a helm overrides file for the database deployment. The file should contain the following information, replacing `${CRUNCHY_DB_REPLICAS}`, `${CRUNCHY_CLUSTER_NAME}`, and `${CRUNCHY_DB_SIZE}` with the desired values for the deployment.

!!! example "crunchy-db.yaml"

    ``` yaml
    instanceReplicas: ${CRUNCHY_DB_REPLICAS}
    name: ${CRUNCHY_CLUSTER_NAME}
    instanceSize: ${CRUNCHY_DB_SIZE}
    users:
      - name: rhino
        databases:
          - zoo
        options: 'NOSUPERUSER'
    ```
Create a new secret for the user **rhino**.
!!! example "crunchy-rhino-secret.yaml"

    ``` yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ${CRUNCHY_CLUSTER_NAME}-pguser-rhino
      labels:
        postgres-operator.crunchydata.com/cluster: ${CRUNCHY_CLUSTER_NAME}
        postgres-operator.crunchydata.com/pguser: rhino
    stringData:
      password: river
    ```
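Both `crunchy-db.yaml` and `crunchy-rhino-secret.yaml` reference the variables exported earlier, and neither `kubectl` nor Helm expands `${...}` placeholders on its own. If the files were saved with the placeholders intact, one way to render them in place is `envsubst` (a sketch, assuming GNU gettext is available on the Jump Host; substituting the values by hand works just as well):

``` shell
# Expand the ${...} placeholders using the exported environment variables.
envsubst < crunchy-db.yaml > crunchy-db.rendered.yaml && mv crunchy-db.rendered.yaml crunchy-db.yaml
envsubst < crunchy-rhino-secret.yaml > crunchy-rhino-secret.rendered.yaml && mv crunchy-rhino-secret.rendered.yaml crunchy-rhino-secret.yaml
```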
Apply the secret to the cluster.

``` shell
kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} apply -f crunchy-rhino-secret.yaml
```
Run the deployment.

``` shell
helm upgrade --install --namespace ${CRUNCHY_CLUSTER_NAMESPACE} hippo helm/postgres \
     -f crunchy-db.yaml
```

!!! tip

    Track the state of the deployment with the following command.

    ``` shell
    kubectl -n ${CRUNCHY_CLUSTER_NAMESPACE} get pods --selector=postgres-operator.crunchydata.com/cluster=${CRUNCHY_CLUSTER_NAME},postgres-operator.crunchydata.com/instance
    ```
## Verify the Crunchy Data Postgres Cluster

``` shell
kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} get svc --selector=postgres-operator.crunchydata.com/cluster=${CRUNCHY_CLUSTER_NAME}
```
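Beyond listing the services, the operator also publishes connection details for the **rhino** user in the `${CRUNCHY_CLUSTER_NAME}-pguser-rhino` secret once the cluster is reconciled. A minimal sketch for reading the generated connection URI (the `uri` key follows Crunchy Data's documented secret layout; confirm the keys against the secret in your cluster):

``` shell
# Decode the connection URI that the operator generates for the rhino user.
kubectl --namespace ${CRUNCHY_CLUSTER_NAMESPACE} get secret ${CRUNCHY_CLUSTER_NAME}-pguser-rhino \
        --output jsonpath='{.data.uri}' | base64 --decode
```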
## Conclusion

In this guide, we have deployed the Crunchy Data Postgres Operator on an OpenStack Flex Kubernetes cluster. We have also created a new Postgres cluster using the operator. This guide is intended to provide a simple, functional example of how to deploy the Crunchy Data Postgres Operator on an OpenStack Flex Kubernetes cluster. For more information on the Postgres Operator, please refer to the [Crunchy Data documentation](https://access.crunchydata.com/documentation/postgres-operator/latest).