Mc tutorial suggestions (kiali#689)
* Add more info about env to MC tutorial

* Remove also

* Further tweaks

* Attempt to fix links

* More link fixes
nrfox authored and hhovsepy committed Apr 5, 2024
1 parent 8a2bab4 commit 64f376d
Showing 7 changed files with 35 additions and 34 deletions.
12 changes: 6 additions & 6 deletions content/en/docs/Configuration/p8s-jaeger-grafana/jaeger.md
@@ -4,7 +4,6 @@ description: >
This page describes how to configure Jaeger for Kiali.
---


## Jaeger configuration

Jaeger is a _highly recommended_ service because [Kiali uses distributed
@@ -32,10 +31,10 @@ spec:
# Jaeger service name is "tracing" and is in the "telemetry" namespace.
# Make sure the URL you provide corresponds to the non-GRPC enabled endpoint
# if you set "use_grpc" to false.
in_cluster_url: 'http://tracing.telemetry:16685/jaeger'
in_cluster_url: "http://tracing.telemetry:16685/jaeger"
use_grpc: true
# Public facing URL of Jaeger
url: 'http://my-jaeger-host/jaeger'
url: "http://my-jaeger-host/jaeger"
```
Minimally, you must provide `spec.external_services.tracing.in_cluster_url` to
@@ -67,8 +66,9 @@ need to manage your Tempo instance.
The [official Grafana Tempo documentation](https://grafana.com/docs/tempo/latest/setup/tanka/)
explains how to deploy a Tempo instance using [Tanka](https://tanka.dev/). You
will need to tweak the settings from the default Tanka configuration to:
* Expose the Zipkin collector
* Expose the GRPC Jaeger Query port

- Expose the Zipkin collector
- Expose the GRPC Jaeger Query port (see the sketch below)
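
As a rough illustration, the end result is typically two in-cluster endpoints: a Service exposing the Zipkin receiver on the Tempo distributor, and a Service exposing the Jaeger gRPC query port on the query frontend. A hypothetical sketch (names, namespace, and selectors are assumptions that depend on how Tanka names your Tempo components):

```
# Hypothetical Services illustrating the two endpoints Istio and Kiali rely on.
# Names, namespace, and labels are assumptions; adjust them to your Tanka output.
apiVersion: v1
kind: Service
metadata:
  name: tempo-zipkin              # assumed name
  namespace: tempo                # assumed namespace
spec:
  selector:
    name: distributor             # assumed label on the Tempo distributor pods
  ports:
    - name: zipkin
      port: 9411                  # default Zipkin collector port
      targetPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: tempo-jaeger-query        # assumed name
  namespace: tempo
spec:
  selector:
    name: query-frontend          # assumed label on the Tempo query-frontend pods
  ports:
    - name: grpc-jaeger
      port: 16685                 # Jaeger gRPC query port that Kiali connects to
      targetPort: 16685
```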

When the Tempo instance is deployed with the needed configurations, you have to
set
@@ -131,4 +131,4 @@ the Jaeger API. You can point to the `16685` port to use GRPC or `16686` if not.
For the given example, the value would be
`http://tempo-ssm-query-frontend.tempo.svc.cluster.local:16685`.
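
Tying this back to the Kiali CR fields shown at the top of this page, the tracing section would then look roughly like this (a minimal sketch showing only the relevant fields):

```
spec:
  external_services:
    tracing:
      # Point Kiali at the GRPC-enabled Jaeger Query port of the Tempo query frontend
      in_cluster_url: "http://tempo-ssm-query-frontend.tempo.svc.cluster.local:16685"
      use_grpc: true
```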

There is a [related tutorial](https://kiali.io/docs/tutorials/tempo/02-kiali-tempo-integration/) with detailed instructions to setup Kiali and Grafana Tempo with the Operator.
There is a [related tutorial]({{< ref "/docs/tutorials/tempo/02-kiali-tempo-integration" >}}) with detailed instructions to set up Kiali and Grafana Tempo with the Operator.
14 changes: 8 additions & 6 deletions content/en/docs/Tutorials/multicluster/01-Introduction.md
@@ -4,12 +4,14 @@ description: "Observe the Travels application deployed in multiple clusters with
weight: 1
---

So far, we know how good Kiali can be to understand applications, their relationships with itself and also with external applications.

In the past, Kiali was installed just to observe one cluster with all the applications that conforms to it. Today, we are expanding its capabilities to also observe more than one cluster. The extra clusters are remotes, meaning that there is not a control plane on them, they only have user applications.
So far, we have seen how useful Kiali is for understanding applications and their relationships, both with each other and with external applications.

This topology is called [primary-remote](https://istio.io/latest/docs/setup/install/multicluster/primary-remote/) and it is very useful to spread applications into different clusters having just one primary cluster, which is where Istio and Kiali are installed.
In the previous tutorial, Kiali was set up to observe just a single cluster. Now, we will expand its capabilities to observe more than one cluster. The extra clusters are remotes, meaning that there is no control plane on them; they only have user applications.

This scenario is a good choice when as an application administrator or architect, you want to give a different set of clusters to different sets of developers and you also want that all these applications belong to the same mesh. This scenario is also very helpful to give applications high availability capabilities while keeping the observability together (we are referring to just applications in terms of high availability, for Istio, we might want to install a multi-primary deployment model, which is on the [roadmap](https://github.com/kiali/kiali/issues/5618) for the multicluster journey for Kiali).
This topology is called [primary-remote](https://istio.io/latest/docs/setup/install/multicluster/primary-remote/) and it is very useful for spreading applications across different clusters while keeping just one primary cluster, which is where Istio and Kiali are installed.

At first, we will install one cluster with Istio, then we will add a new cluster, the remote, and we will join it to the mesh and we will see how Kiali allows us to observe and manage both of them and their applications.
This scenario is a good choice when, as an application administrator or architect, you want to give different sets of clusters to different sets of developers while keeping all of their applications in the same mesh. It is also very helpful for giving applications high-availability capabilities while keeping observability in one place (high availability here refers only to the applications; for Istio itself we would want a multi-primary deployment model, which is on the [roadmap](https://github.com/kiali/kiali/issues/5618) for the multicluster journey for Kiali).

In this tutorial we will deploy Istio in a primary-remote topology. First, we will install the "east" cluster with Istio, then we will add the "west" remote cluster and join it to the mesh. Then we will see how Kiali allows us to observe and manage both clusters and their applications. Metrics will be aggregated into the "east" cluster using Prometheus federation, and a single Kiali will be deployed on the "east" cluster.

If you already have a primary-remote deployment, you can skip to [installing Kiali]({{< relref "./05-Install-Kiali.md" >}}).
14 changes: 7 additions & 7 deletions content/en/docs/Tutorials/multicluster/02-Prerequisites.md
@@ -6,17 +6,17 @@ weight: 2

This tutorial is a walkthrough guide to install everything. For this reason, we will need:

* minikube
* istioctl
* helm
- minikube
- istioctl
- helm

This tutorial was tested on:

* Minikube v1.30.1
* Istio v1.18.1
* Kiali v1.70
- Minikube v1.30.1
- Istio v1.18.1
- Kiali v1.70

Clusters are provided by minikube instances, but we can choose others instead, like OpenShift or just vanilla Kubernetes installations.
Clusters are provided by minikube instances, but this tutorial should work on any Kubernetes environment.

We will set up some environment variables for the following commands:
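
A minimal sketch of what that setup typically looks like (the cluster names and Istio path are assumptions; adjust them to your environment):

```
# Hypothetical values; adjust to your own environment.
export CLUSTER_EAST="east"                  # context/profile name of the primary cluster
export CLUSTER_WEST="west"                  # context/profile name of the remote cluster
export ISTIO_DIR="${HOME}/istio-1.18.1"     # path to the unpacked Istio distribution
```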

@@ -59,7 +59,7 @@ $ISTIO_DIR/samples/multicluster/gen-eastwest-gateway.sh \

## Prometheus federation

An important design decision for Kiali was to decide that it will continue consuming data from one Prometheus instance per all clusters. For this reason, Prometheus needs to be federated, meaning that all the remote’s metrics should be fetched by the main Prometheus.
Kiali requires unified metrics from a single Prometheus endpoint for all clusters, even in a multi-cluster environment. In this tutorial, we will federate the two Prometheus instances, meaning that all of the remote's metrics will be fetched by the main Prometheus.

We will configure east's Prometheus to fetch west's metrics:

@@ -73,4 +73,4 @@ curl -L -o prometheus.yaml https://raw.githubusercontent.com/kiali/kiali/master/
sed -i "s/WEST_PROMETHEUS_ADDRESS/$WEST_PROMETHEUS_ADDRESS/g" prometheus.yaml
kubectl --context=$CLUSTER_EAST apply -f prometheus.yaml -n istio-system
```
```
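
The downloaded prometheus.yaml is the authoritative configuration here; conceptually, the federation it sets up boils down to a scrape job along these lines (a sketch; the job name and match expression are assumptions):

```
scrape_configs:
  - job_name: "federate-west"                      # assumed name
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{__name__=~"istio_.*"}'                 # assumed selector for the Istio telemetry
    static_configs:
      - targets: ["WEST_PROMETHEUS_ADDRESS"]       # placeholder replaced by the sed command above
```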
4 changes: 2 additions & 2 deletions content/en/docs/Tutorials/multicluster/09-Configure-Kiali.md
@@ -4,7 +4,7 @@ description: "In this section we will add some configuration for Kiali to start
weight: 9
---

We will configure Kiali to access the remote cluster. This will require a secret (similar to the Istio secret) containing the credentials for Kiali to fetch information for the remote cluster:
We will configure Kiali to access the remote cluster. This will require a secret (similar to the Istio secret) containing the credentials for Kiali to fetch information from the remote cluster:

```
curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.sh
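# A hypothetical invocation (the flag names and cluster contexts are assumptions;
# run the script with --help to confirm the exact options): create the remote-cluster
# secret in the east cluster so that the Kiali instance there can query the west cluster.
chmod +x kiali-prepare-remote-cluster.sh
./kiali-prepare-remote-cluster.sh \
  --kiali-cluster-context $CLUSTER_EAST \
  --remote-cluster-context $CLUSTER_WEST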
@@ -19,7 +19,7 @@ Finally, upgrade the installation for Kiali to pick up the secret:
```
kubectl config use-context $CLUSTER_EAST
helm upgrade --install --namespace istio-system --set auth.strategy=anonymous --set deployment.logger.log_level=debug --set deployment.ingress.enabled=true --repo https://kiali.org/helm-charts kiali-server kiali-server
helm upgrade --install --namespace istio-system --set auth.strategy=anonymous --set deployment.logger.log_level=debug --set deployment.ingress.enabled=true --repo https://kiali.org/helm-charts kiali-server kiali-server
```

As a result, we can quickly see that a new namespace, the istio-system namespace from the west cluster, appears in the Overview:
4 changes: 2 additions & 2 deletions content/en/docs/Tutorials/multicluster/_index.md
@@ -1,10 +1,10 @@
---
title: Travels Demo, Now Multicluster
title: Travels Demo - Multicluster
description: Learn how to configure and use Kiali in an Istio multicluster scenario.
weight: 6
type: tutorial
---

This tutorial will demonstrate Kiali's capabilities for Istio multicluster, particularly for the primary-remote cluster model.

For more information, check our [documentation for multicluster]({{< relref "../../Features/multi-cluster" >}}).
For more information, check our [documentation for multicluster]({{< relref "../../Features/multi-cluster" >}}).
17 changes: 8 additions & 9 deletions content/en/docs/Tutorials/tempo/01-Introduction.md
@@ -6,24 +6,23 @@ weight: 1

### Introduction

Kiali uses [Jaeger](https://kiali.io/docs/configuration/p8s-jaeger-grafana/jaeger/) as a default distributed tracing backend. In this tutorial, we will replace it for [Grafana Tempo](https://grafana.com/docs/tempo/next/).
Kiali uses [Jaeger]({{< ref "/docs/Configuration/p8s-jaeger-grafana/jaeger" >}}) as the default distributed tracing backend. In this tutorial, we will replace it with [Grafana Tempo](https://grafana.com/docs/tempo/next/).

We will set up a local environment in minikube and install Kiali with Tempo as the distributed tracing backend. This is a simplified architecture diagram:

![Kiali Tempo Architecture](/images/tutorial/tempo/kiali-tempo.png "Kiali Tempo integration architecture")

* We will install Tempo with the Tempo Operator and enable Jaeger query frontend to be compatible with Kiali in order to query traces.
* We will setup Istio to send traces to the Tempo collector using the zipkin protocol. It is enabled by default from version 3.0 or higher of the Tempo Operator.
* We will install MinIO and setup it up as object store, S3 compatible.
- We will install Tempo with the Tempo Operator and enable the Jaeger query frontend so that Kiali can query traces (a sketch of the resulting TempoStack resource appears after this list).
- We will set up Istio to send traces to the Tempo collector using the Zipkin protocol, which is enabled by default in version 3.0 or higher of the Tempo Operator.
- We will install MinIO and set it up as an S3-compatible object store.
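
A minimal sketch of what such a TempoStack resource could look like, assuming a MinIO credentials secret named `minio` already exists (the resource name and namespace are also assumptions):

```
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: tempo                      # assumed name
  namespace: tempo                 # assumed namespace
spec:
  storage:
    secret:
      name: minio                  # assumed secret holding the MinIO/S3 credentials
      type: s3
  storageSize: 1Gi
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true              # exposes the Jaeger Query API that Kiali consumes
```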

### Environment

We use the following environment:

* Istio 1.18.1
* Kiali 1.72
* Minikube 1.30
* Tempo operator TempoStack v3.0
- Istio 1.18.1
- Kiali 1.72
- Minikube 1.30
- Tempo operator TempoStack v3.0

There are different installation methods for Grafana Tempo, but in this tutorial we will use the [Tempo operator](https://grafana.com/docs/tempo/latest/setup/operator/).
