2.11 refresh 11/14 #7236

Merged
merged 17 commits on Nov 14, 2024
18 changes: 11 additions & 7 deletions business_continuity/backup_restore/backup_hub_config.adoc
@@ -23,19 +23,23 @@ The passive hub cluster restores this data, except for the managed cluster activ
[#disaster-recovery]
== Disaster recovery

When the primary hub cluster fails, the administrator chooses a passive hub cluster to take over the managed clusters. In the following image, the administrator decides to use _Hub cluster N_ as the new primary hub cluster:
When the primary hub cluster fails, as a hub administrator, you can select a passive hub cluster to take over the managed clusters. In the following _Disaster recovery diagram_, see how you can use _Hub cluster N_ as the new primary hub cluster:

image:../images/disaster_recovery.png[Disaster recovery diagram]

_Hub cluster N_ restores the managed cluster activation data. At this point, the managed clusters connect with _Hub cluster N_. The administrator activates a backup on the new primary hub cluster, _Hub cluster N_, by creating a `BackupSchedule.cluster.open-cluster-management.io` resource, and storing the backups at the same storage location as the initial primary hub cluster.
_Hub cluster N_ restores the managed cluster activation data. The managed clusters connect to _Hub cluster N_. You activate a backup on the new primary hub cluster, _Hub cluster N_, by creating a `BackupSchedule.cluster.open-cluster-management.io` resource, and storing the backups at the same storage location as the initial primary hub cluster.
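
The following is a minimal sketch of such a `BackupSchedule` resource. The name, schedule, and retention values are illustrative assumptions; verify the field names against the backup and restore documentation for your version:

[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: BackupSchedule
metadata:
  name: schedule-acm                  # hypothetical name
  namespace: open-cluster-management-backup
spec:
  veleroSchedule: "0 */4 * * *"       # cron expression that controls how often backups run (assumption)
  veleroTtl: 120h                     # how long backups are kept before they expire (assumption)
----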

All other passive hub clusters now restore passive data using the backup data created by the new primary hub cluster. _Hub N_ is now the primary hub cluster, managing clusters and backing up data.
All other passive hub clusters now restore passive data by using the backup data that is created by the new primary hub cluster. _Hub N_ is now the primary hub cluster, managing clusters and backing up data.

*Notes:*
*Important:*

- Process 1 in the previous diagram is not automated because the administrator must decide if the primary hub cluster has failed and needs to be replaced, or if there is a network communication error between the hub cluster and the managed clusters. The administrator also decides which passive hub cluster becomes the primary hub cluster. The policy integration with :aap: jobs can help you automate this step by making a job run when the backup policy reports backup errors.

- Process 2 in the previous diagram is manual. If the administrator does not create backups from the new primary hub cluster, the administrator is notified by using the backups that are actively running as a cron job.
* The first process in the earlier _Disaster recovery diagram_ is not automated for the following reasons:
** You must decide if the primary hub cluster has failed and needs to be replaced, or if there is a network communication error between the hub cluster and the managed clusters.
** You must decide which passive hub cluster becomes the primary hub cluster. The policy integration with {aap} jobs can help you automate this step by making a job run when the backup policy reports backup errors.
* The second process in the earlier _Disaster recovery diagram_ is manual. If you did not create a backup schedule on the new primary hub cluster, the `backup-restore-enabled` policy shows a violation by using the `backup-schedule-cron-enabled` policy template. In this second process, you can do the following actions:
** Use the `backup-schedule-cron-enabled` policy template to validate if the new primary hub cluster has backups running as a cron job.
** Use the policy integration with `Ansible` and define an `Ansible` job that can run when the `backup-schedule-cron-enabled` policy template reports violations.
* For more details about the `backup-restore-enabled` policy templates, see xref:../backup_restore/backup_validate.adoc#backup-validation-using-a-policy[Validating your backup or restore configurations].

[#dr4hub-hub-config-resources]
== Additional resources
10 changes: 10 additions & 0 deletions clusters/discovery/enable_discovery.adoc
@@ -5,9 +5,11 @@ Automatically import supported clusters into your hub cluster with the `Discover

*Required access:* Cluster administrator

[#discovered-rosa-prereqs]
== Prerequisites

* Discovery is enabled by default. If you changed default settings, you need to enable Discovery.
* You must set up the {rosa} command line interface. See link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/rosa_cli/rosa-get-started-cli#rosa-get-started-cli[Getting started with the {rosa} CLI] documentation.

[#import-discovered-auto-rosa-hcp]
== Importing discovered {rosa} and hosted control plane clusters automatically
@@ -93,6 +95,14 @@ oc patch discoveredcluster <name> -n <namespace> --type='json' -p='[{"op": "repl
oc get managedcluster <name>
----

. To get your {rosa} cluster ID, run the following command from the {rosa} command line interface:

+
[source,bash]
----
rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.*'
----

For other Kubernetes providers, you must import these infrastructure provider `DiscoveredCluster` resources manually. Directly apply Kubernetes configurations to the other types of `DiscoveredCluster` resources. If you enable the `importAsManagedCluster` field in the `DiscoveredCluster` resource for these providers, the cluster is not imported because of the Discovery webhook.
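
For the supported {rosa} and hosted control plane cluster types described earlier, a `DiscoveredCluster` resource with automatic import enabled might look like the following hedged sketch. Only the fields that are relevant to automatic import are shown, and the API version and field layout are assumptions to verify against the CRD in your cluster:

[source,yaml]
----
apiVersion: discovery.open-cluster-management.io/v1
kind: DiscoveredCluster
metadata:
  name: <discovered-cluster-name>     # placeholder
  namespace: <discovery-namespace>    # placeholder
spec:
  displayName: <discovered-cluster-name>
  type: ROSA                          # assumed type value; automatic import applies to ROSA and hosted control plane clusters
  importAsManagedCluster: true        # triggers the automatic import
----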

[#add-resource-enable-discovery]
35 changes: 26 additions & 9 deletions clusters/release_notes/known_issues.adoc
@@ -147,6 +147,12 @@ If this problem occurs, it is typically on the following versions of {ocp-short}

To avoid this error, upgrade your {ocp-short} to version 4.8.18 or later, or 4.9.7 or later.

[#boot-discovery-auto-add-host]
=== Cannot use host inventory to boot with the discovery image and add hosts automatically
//2.12:ACM-14719

You cannot use a host inventory, or `InfraEnv` custom resource, to both boot with the discovery image and add hosts automatically. If you used your previous `InfraEnv` resource for the `BareMetalHost` resource, and you want to boot the image yourself, you can work around the issue by creating a new `InfraEnv` resource.
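
The new `InfraEnv` resource can be minimal. The following is a hedged sketch under the assumption that a pull secret named `pull-secret` already exists in the same namespace; adjust the names for your environment:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: manual-boot-infraenv          # hypothetical name for the new resource
  namespace: <hosts-namespace>        # placeholder
spec:
  pullSecretRef:
    name: pull-secret                 # assumed existing pull secret
  sshAuthorizedKey: <public-ssh-key>  # optional; enables SSH access to booted hosts for troubleshooting
----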

[#cluster-local-offline-reimport]
=== Local-cluster status offline after reimporting with a different name
//2.4:16977
@@ -445,20 +451,14 @@ Error querying resource logs:
Service unavailable
----

[#manageserviceaccount-addon-limitation]
=== Managed Service Account add-on limitations
//2.9:ACM-8586

The following are known issues and limitations for the `managed-serviceaccount` add-on:

[#installnamespace-field-limit]
==== _installNamespace_ field can only have one value
=== _installNamespace_ field can only have one value
//2.9:ACM-7523

When enabling the `managed-serviceaccount` add-on, the `installNamespace` field in the `ManagedClusterAddOn` resource must have `open-cluster-management-agent-addon` as the value. Other values are ignored. The `managed-serviceaccount` add-on agent is always deployed in the `open-cluster-management-agent-addon` namespace on the managed cluster.
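
For reference, a `ManagedClusterAddOn` resource that uses the only supported value might look like the following sketch. The namespace must match the managed cluster name, which is shown here as a placeholder:

[source,yaml]
----
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: <managed-cluster-name>   # namespace that matches the managed cluster name
spec:
  installNamespace: open-cluster-management-agent-addon  # only supported value; other values are ignored
----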

[#settings-limit-msa-agent]
==== _tolerations_ and _nodeSelector_ settings do not affect the _managed-serviceaccount_ agent
=== _tolerations_ and _nodeSelector_ settings do not affect the _managed-serviceaccount_ agent
//2.9:ACM-7523

The `tolerations` and `nodeSelector` settings configured on the `MultiClusterEngine` and `MultiClusterHub` resources do not affect the `managed-serviceaccount` agent deployed on the local cluster. The `managed-serviceaccount` add-on is not always required on the local cluster.
@@ -493,6 +493,23 @@ When you upgrade an {ocp-short} Dedicated cluster by using the `ClusterCurator`

You can specify a custom ingress domain by using the `ClusterDeployment` resource while installing a managed cluster, but the change is only applied after the installation by using the `SyncSet` resource. As a result, the `spec` field in the `clusterdeployment.yaml` file displays the custom ingress domain you specified, but the `status` still displays the default domain.
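
As an illustration, a custom ingress domain in the `ClusterDeployment` resource is typically expressed as in the following hedged sketch. Only the ingress-related fields are shown, and the field layout follows the Hive API as an assumption to verify for your version:

[source,yaml]
----
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: <cluster-name>                # placeholder
  namespace: <cluster-namespace>      # placeholder
spec:
  ingress:
  - name: default
    domain: apps.custom.example.com   # illustrative custom ingress domain
----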

[#install-sno-ocp-infra]
=== A {sno} cluster installation with the infrastructure operator for Red Hat OpenShift requires a matching {ocp-short} version

If you want to install a {sno} cluster with an {ocp} version before 4.16, your `InfraEnv` custom resource and your booted host must use the same {ocp-short} version that you are using to install the {sno} cluster. The installation fails if the versions do not match.

To work around the issue, edit your `InfraEnv` resource before you boot a host with the Discovery ISO, and include the following content:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
spec:
  osImageVersion: "4.15"
----

The `osImageVersion` field must match the {ocp} cluster version that you want to install.

[#hosted-control-plane-mce]
== Hosted control planes

@@ -564,4 +581,4 @@ As a workaround, in the Agent specification, delete the secret that the `Ignitio
=== IBM Z hosts restart in a loop
//2.11:MGMT-17103

In hosted control planes on the IBM Z platform, when you unbind the hosts from the cluster, the hosts restart in a loop and are not ready to be used. For a workaround for this issue, see xref:../../clusters/hosted_control_planes/destroy_hosted_cluster_x86bm_ibmz.adoc#destroy-hosted-cluster-x86bm-ibmz[Destroying a hosted cluster on x86 bare metal with IBM Z compute nodes].
2 changes: 2 additions & 0 deletions clusters/release_notes/whats_new.adoc
@@ -42,6 +42,8 @@ Learn about new features and enhancements for Cluster lifecycle with {mce-short}

- You can now use the {assist-install} to install a cluster in FIPS mode. See link:../../clusters/cluster_lifecycle/cim_enable.adoc#fips-install-cim[Installing a FIPS-enabled cluster by using the {assist-install}].

- The `local-cluster` is now imported automatically if you have both an `AgentServiceConfig` and a `ManagedCluster` custom resource with the necessary annotations.

//[#credential]
//== Credentials

19 changes: 7 additions & 12 deletions networking/submariner/subm_disconnected.adoc
@@ -6,21 +6,16 @@ Deploying Submariner on disconnected clusters can help with security concerns by
[#configuring-submariner-disconnected]
== Configuring Submariner on disconnected clusters

After following the steps outlined in link:../../install/install_disconnected.adoc#install-on-disconnected-networks[Install in disconnected network environments], you must configure Submariner during the installation to support deployment on disconnected clusters. See the following topics:
After completing the steps in link:../../install/install_disconnected.adoc#install-on-disconnected-networks[Install in disconnected network environments], configure Submariner during the installation to support deployment on disconnected clusters.

[#mirroring-images]
=== Mirroring images in the local registry
Complete the following steps:

Make sure to mirror the `Submariner Operator bundle` image in the local registry before deploying Submariner on disconnected clusters.
. Mirror the `Submariner Operator bundle` image in the local registry before you deploy Submariner on disconnected clusters.

[#customizing-catalogsource-names]
=== Customizing _catalogSource_ names
. Choose the Submariner Operator version that is compatible with your {acm-short} version. For instance, use `0.18.0` for {acm-short} version 2.11.

By default, `submariner-addon` searches for a `catalogSource` with the name `redhat-operators`. When using a `catalogSource` with a different name, you must update the value of the `SubmarinerConfig.Spec.subscriptionConfig.Source` parameter in the `SubmarinerConfig` associated with your managed cluster with the custom name of the `catalogSource`.
. Customize `catalogSource` names. By default, `submariner-addon` searches for a `catalogSource` with the name `redhat-operators`. When you use a `catalogSource` with a different name, you must update the value of the `SubmarinerConfig.Spec.subscriptionConfig.Source` parameter in the `SubmarinerConfig` resource that is associated with your managed cluster to the custom name of the `catalogSource`, as shown in the example after these steps.

[#enabling-airgappeddeployment-submarinerconfig]
=== Enabling _airGappedDeployment_ in _SubmarinerConfig_
. Enable `airGappedDeployment` in `SubmarinerConfig`. When installing `submariner-addon` on a managed cluster from the {acm} console, you can select the *Disconnected cluster* option so that Submariner does not make API queries to external servers.

When installing `submariner-addon` on a managed cluster from the {acm} console, you can select the *Disconnected cluster* option so that Submariner does not make API queries to external servers.

If you are installing Submariner by using the APIs, you must set the `airGappedDeployment` parameter to `true` in the `SubmarinerConfig` associated with your managed cluster.
*Note:* If you are installing Submariner by using the APIs, you must set the `airGappedDeployment` parameter to `true` in the `SubmarinerConfig` associated with your managed cluster.
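
The following hedged `SubmarinerConfig` sketch combines the custom `catalogSource` name and the `airGappedDeployment` setting from the previous steps. The API version and the namespace convention are assumptions to verify for your release:

[source,yaml]
----
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>   # namespace that matches the managed cluster name
spec:
  airGappedDeployment: true                # prevents Submariner from making API queries to external servers
  subscriptionConfig:
    source: <custom-catalog-source-name>   # custom catalogSource name; the default is redhat-operators
    sourceNamespace: openshift-marketplace
----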
6 changes: 6 additions & 0 deletions release_notes/known_issues_application.adoc
@@ -29,6 +29,12 @@ For more about deprecations and removals, see xref:../release_notes/deprecate_re

See the following known issues for the Application lifecycle component.

[#topology-displays-invalid-expression]
== Application topology displays invalid expression
//2.11:ACM-15077

When you use the `Exists` or `DoesNotExist` operators in the `Placement` resource, the application topology node details display the expressions as `#invalidExpr`. The display is incorrect; the expression remains valid and works in the `Placement` resource. To work around this issue, edit the expression inside the `Placement` resource YAML.
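
For context, the following hedged `Placement` sketch uses a `DoesNotExist` match expression of the kind that the topology view renders as `#invalidExpr`. The label key and resource names are illustrative assumptions:

[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: app-placement                 # hypothetical name
  namespace: <application-namespace>  # placeholder
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: environment            # illustrative label key
          operator: DoesNotExist      # displays as #invalidExpr in the topology view but still evaluates correctly
----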

[#app-argo-ocp-clusters]
== Argo CD pull model does not work on {ocp-short} 4.17 clusters
//2.11:ACM-14650