diff --git a/downstream/assemblies/aap-hardening/assembly-aap-security-enabling.adoc b/downstream/assemblies/aap-hardening/assembly-aap-security-enabling.adoc deleted file mode 100644 index 618334d88..000000000 --- a/downstream/assemblies/aap-hardening/assembly-aap-security-enabling.adoc +++ /dev/null @@ -1,20 +0,0 @@ -ifdef::context[:parent-context: {context}] - -[id="aap-security-enabling"] -= {PlatformNameShort} as a security enabling tool - -:context: aap-security-enabling - -[role="_abstract"] - - - -//// -Consider adding a link to future Builder docs here -[role="_additional-resources"] -.Additional resources -* A bulleted list of links to other material closely related to the contents of the concept module. -* Currently, modules cannot include xrefs, so you cannot include links to other content in your collection. If you need to link to another assembly, add the xref to the assembly that includes this module. -* For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. -* Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. -//// \ No newline at end of file diff --git a/downstream/assemblies/aap-hardening/assembly-aap-security-use-cases.adoc b/downstream/assemblies/aap-hardening/assembly-aap-security-use-cases.adoc new file mode 100644 index 000000000..d1079af97 --- /dev/null +++ b/downstream/assemblies/aap-hardening/assembly-aap-security-use-cases.adoc @@ -0,0 +1,37 @@ +ifdef::context[:parent-context: {context}] + +[id="aap-security-use-cases"] += {PlatformNameShort} security automation use cases + +:context: aap-security-enabling + +[role="_abstract"] + +{PlatformNameShort} provides organizations the opportunity to automate many of the manual tasks required to maintain a strong IT security posture. +Areas where security operations might be automated include security event response and remediation, routine security operations, compliance with security policies and regulations, and security hardening of IT infrastructure. + +include::aap-hardening/con-security-operations-center.adoc[leveloffset=+1] +include::aap-hardening/con-patch-automation-with-aap.adoc[leveloffset=+1] +include::aap-hardening/con-benefits-of-patch-automation.adoc[leveloffset=+2] +include::aap-hardening/con-patching-examples.adoc[leveloffset=+2] +include::aap-hardening/ref-keep-up-to-date.adoc[leveloffset=+3] +include::aap-hardening/ref-install-security-updates.adoc[leveloffset=+3] +include::aap-hardening/ref-specify-package-versions.adoc[leveloffset=+3] +include::aap-hardening/ref-complex-patching-scenarios.adoc[leveloffset=+2] + + + + + + + + +//// +Consider adding a link to future Builder docs here +[role="_additional-resources"] +.Additional resources +* A bulleted list of links to other material closely related to the contents of the concept module. +* Currently, modules cannot include xrefs, so you cannot include links to other content in your collection. If you need to link to another assembly, add the xref to the assembly that includes this module. +* For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. 
+* Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. +//// \ No newline at end of file diff --git a/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc b/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc index a2aa73696..c2bb33a6f 100644 --- a/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc +++ b/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc @@ -7,7 +7,8 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for the installation phase. As this guide specifically covers {PlatformNameShort} running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components. +This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for the installation phase. +As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} is covered where it affects the automation platform components. include::aap-hardening/con-planning-considerations.adoc[leveloffset=+1] include::aap-hardening/ref-architecture.adoc[leveloffset=+2] @@ -16,11 +17,11 @@ include::aap-hardening/con-dns-ntp-service-planning.adoc[leveloffset=+2] include::aap-hardening/ref-dns.adoc[leveloffset=+3] include::aap-hardening/ref-dns-load-balancing.adoc[leveloffset=+3] include::aap-hardening/ref-ntp.adoc[leveloffset=+3] -include::aap-hardening/con-user-authentication-planning.adoc[leveloffset=+2] -include::aap-hardening/ref-automation-controller-authentication.adoc[leveloffset=+3] -include::aap-hardening/ref-private-automation-hub-authentication.adoc[leveloffset=+3] +//include::aap-hardening/con-user-authentication-planning.adoc[leveloffset=+2] +include::aap-hardening/ref-aap-authentication.adoc[leveloffset=+3] +//include::aap-hardening/ref-private-automation-hub-authentication.adoc[leveloffset=+3] include::aap-hardening/con-credential-management-planning.adoc[leveloffset=+2] -include::aap-hardening/ref-automation-controller-operational-secrets.adoc[leveloffset=+3] +//include::aap-hardening/ref-automation-controller-operational-secrets.adoc[leveloffset=+3] include::aap-hardening/con-automation-use-secrets.adoc[leveloffset=+3] include::aap-hardening/con-logging-log-capture.adoc[leveloffset=+2] include::aap-hardening/ref-auditing-incident-detection.adoc[leveloffset=+2] @@ -28,23 +29,23 @@ include::aap-hardening/con-rhel-host-planning.adoc[leveloffset=+2] include::aap-hardening/con-aap-additional-software.adoc[leveloffset=+3] include::aap-hardening/con-installation.adoc[leveloffset=+1] include::aap-hardening/con-install-secure-host.adoc[leveloffset=+2] -include::aap-hardening/ref-security-variables-install-inventory.adoc[leveloffset=+2] +//include::aap-hardening/ref-security-variables-install-inventory.adoc[leveloffset=+2] include::aap-hardening/proc-install-user-pki.adoc[leveloffset=+2] include::aap-hardening/ref-sensitive-variables-install-inventory.adoc[leveloffset=+2] -include::aap-hardening/con-controller-stig-considerations.adoc[leveloffset=+2] 
-include::aap-hardening/proc-fapolicyd.adoc[leveloffset=+3] -include::aap-hardening/proc-file-systems-mounted-noexec.adoc[leveloffset=+3] -include::aap-hardening/proc-namespaces.adoc[leveloffset=+3] -include::aap-hardening/ref-sudo-nopasswd.adoc[leveloffset=+3] +//include::aap-hardening/con-controller-stig-considerations.adoc[leveloffset=+2] +//include::aap-hardening/proc-fapolicyd.adoc[leveloffset=+3] +//include::aap-hardening/proc-file-systems-mounted-noexec.adoc[leveloffset=+3] +//include::aap-hardening/proc-namespaces.adoc[leveloffset=+3] +//include::aap-hardening/ref-sudo-nopasswd.adoc[leveloffset=+3] include::aap-hardening/ref-initial-configuration.adoc[leveloffset=+1] include::aap-hardening/ref-infrastructure-as-code.adoc[leveloffset=+2] include::aap-hardening/con-controller-configuration.adoc[leveloffset=+2] -include::aap-hardening/proc-configure-centralized-logging.adoc[leveloffset=+3] -include::aap-hardening/proc-configure-external-authentication.adoc[leveloffset=+3] +//include::aap-hardening/proc-configure-centralized-logging.adoc[leveloffset=+3] +//include::aap-hardening/proc-configure-external-authentication.adoc[leveloffset=+3] include::aap-hardening/con-external-credential-vault.adoc[leveloffset=+3] include::aap-hardening/con-day-two-operations.adoc[leveloffset=+1] include::aap-hardening/con-rbac.adoc[leveloffset=+2] include::aap-hardening/ref-updates-upgrades.adoc[leveloffset=+2] -include::aap-hardening/proc-controller-stig-considerations.adoc[leveloffset=+3] +//include::aap-hardening/proc-controller-stig-considerations.adoc[leveloffset=+3] include::aap-hardening/proc-disaster-recovery-operations.adoc[leveloffset=+3] diff --git a/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc b/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc index fb0682377..47492f3dc 100644 --- a/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc +++ b/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc @@ -10,15 +10,29 @@ ifdef::context[:parent-context: {context}] This document provides guidance for improving the security posture (referred to as “hardening” throughout this guide) of your {PlatformName} deployment on {RHEL}. -Other deployment targets, such as OpenShift, are not currently within the scope of this guide. {PlatformNameShort} managed services available through cloud service provider marketplaces are also not within the scope of this guide. +The following are not currently within the scope of this guide: -This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day two operations. As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} will be covered where it affects the automation platform components. Additional considerations with regards to the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are provided for those organizations that integrate the DISA STIG as a part of their overall security strategy. +* Other deployment targets for {PlatformNameShort}, such as OpenShift. +* {PlatformNameShort} managed services available through cloud service provider marketplaces.
+* Additional considerations with regards to the _Defense Information Systems Agency_ (DISA) _Security Technical Implementation Guides_ (STIGs) [NOTE] ==== -These recommendations do not guarantee security or compliance of your deployment of {PlatformNameShort}. You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. +Hardening and compliance for {PlatformNameShort} 2.4 includes additional considerations with regards to the DISA STIGs, but this guidance does not apply to {PlatformNameShort} {PlatformVers}. +==== + +This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day 2 operations. +As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} will be covered where it affects the automation platform components. +//Additional considerations with regards to the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are provided for those organizations that integrate the DISA STIG as a part of their overall security strategy. + +[NOTE] +==== +These recommendations do not guarantee security or compliance of your deployment of {PlatformNameShort}. +You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. ==== include::aap-hardening/con-hardening-guide-audience.adoc[leveloffset=+1] include::aap-hardening/con-product-overview.adoc[leveloffset=+1] + +include::aap-hardening/con-deployment-methods.adoc[leveloffset=+2] include::aap-hardening/con-platform-components.adoc[leveloffset=+2] \ No newline at end of file diff --git a/downstream/images/workflow.png b/downstream/images/workflow.png new file mode 100644 index 000000000..c3ce21797 Binary files /dev/null and b/downstream/images/workflow.png differ diff --git a/downstream/modules/aap-hardening/.platform b/downstream/modules/aap-hardening/.platform new file mode 120000 index 000000000..1d58796b7 --- /dev/null +++ b/downstream/modules/aap-hardening/.platform @@ -0,0 +1 @@ +../platform \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-aap-additional-software.adoc b/downstream/modules/aap-hardening/con-aap-additional-software.adoc index a60ec7517..ba77677d2 100644 --- a/downstream/modules/aap-hardening/con-aap-additional-software.adoc +++ b/downstream/modules/aap-hardening/con-aap-additional-software.adoc @@ -7,6 +7,9 @@ [role="_abstract"] -When installing the {PlatformNameShort} components on {RHEL} servers, the {RHEL} servers should be dedicated to that use alone. Additional server capabilities should not be installed in addition to {PlatformNameShort}, as this is an unsupported configuration and may affect the security and performance of the {PlatformNameShort} software. +When installing the {PlatformNameShort} components on {RHEL} servers, the {RHEL} servers should be dedicated to that use alone. +Additional server capabilities must not be installed in addition to {PlatformNameShort}, as this is an unsupported configuration and may affect the security and performance of the {PlatformNameShort} software. 
-Similarly, when {PlatformNameShort} is deployed on a {RHEL} host, it installs software like the nginx web server, the Pulp software repository, and the PostgreSQL database server. This software should not be modified or used in a more generic fashion (for example, do not use nginx to server additional website content or PostgreSQL to host additional databases) as this is an unsupported configuration and may affect the security and performance of {PlatformNameShort}. The configuration of this software is managed by the {PlatformNameShort} installer, and any manual changes might be undone when performing upgrades. \ No newline at end of file +Similarly, when {PlatformNameShort} is deployed on a {RHEL} host, it installs software like the nginx web server, the Pulp software repository, and the PostgreSQL database server (unless a user-provided external database is used). +This software should not be modified or used in a more generic fashion (for example, do not use nginx to serve additional website content or PostgreSQL to host additional databases) as this is an unsupported configuration and may affect the security and performance of {PlatformNameShort}. +The configuration of this software is managed by the {PlatformNameShort} installer, and any manual changes might be undone when performing upgrades. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-benefits-of-patch-automation.adoc b/downstream/modules/aap-hardening/con-benefits-of-patch-automation.adoc new file mode 100644 index 000000000..d3c705187 --- /dev/null +++ b/downstream/modules/aap-hardening/con-benefits-of-patch-automation.adoc @@ -0,0 +1,10 @@ +[id="con-benefits-of-patch-automation"] + += Benefits of patch automation + +Automating the patching process provides a number of benefits: + +* Reduces error-prone manual effort. +* Decreases time to deploy patches at scale. +* Ensures consistency of patches across similar systems. Manual patching of similar systems can result in human error (forgetting one or more, patching using different versions) that impacts consistency. +* Enables orchestration of complex patching scenarios where an update might require taking a system snapshot before applying a patch, or might require additional configuration changes when the patch is applied. diff --git a/downstream/modules/aap-hardening/con-credential-management-planning.adoc b/downstream/modules/aap-hardening/con-credential-management-planning.adoc index 31667f2ea..ed3601a13 100644 --- a/downstream/modules/aap-hardening/con-credential-management-planning.adoc +++ b/downstream/modules/aap-hardening/con-credential-management-planning.adoc @@ -7,9 +7,10 @@ [role="_abstract"] -{ControllerNameStart} uses credentials to authenticate requests to jobs against machines, synchronize with inventory sources, and import project content from a version control system. {ControllerNameStart} manages three sets of secrets: +{PlatformName} uses credentials to authenticate requests to jobs against machines, synchronize with inventory sources, and import project content from a version control system. {ControllerNameStart} manages three sets of secrets: -* User passwords for *local automation controller users*. See the xref:con-user-authentication-planning_{context}[User Authentication Planning] section of this guide for additional details. +* User passwords for *local automation controller users*. +//See the xref:con-user-authentication-planning_{context}[User Authentication Planning] section of this guide for additional details.
* Secrets for automation controller *operational use* (database password, message bus password, and so on). * Secrets for *automation use* (SSH keys, cloud credentials, external password vault credentials, and so on). diff --git a/downstream/modules/aap-hardening/con-day-two-operations.adoc b/downstream/modules/aap-hardening/con-day-two-operations.adoc index f7e587847..bbb3efb31 100644 --- a/downstream/modules/aap-hardening/con-day-two-operations.adoc +++ b/downstream/modules/aap-hardening/con-day-two-operations.adoc @@ -7,4 +7,4 @@ [role="_abstract"] -Day 2 Operations include Cluster Health and Scaling Checks, including Host, Project, and environment level Sustainment. You should continually analyze configuration and security drift. \ No newline at end of file +Day 2 Operations include Cluster Health and Scaling Checks, including Host, Project, and environment level Sustainment. You must continually analyze configuration and security drift. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-deployment-methods.adoc b/downstream/modules/aap-hardening/con-deployment-methods.adoc new file mode 100644 index 000000000..16a703c53 --- /dev/null +++ b/downstream/modules/aap-hardening/con-deployment-methods.adoc @@ -0,0 +1,16 @@ +[id="con-deployment-methods"] + += {PlatformName} deployment methods + +There are three different installation methods for {PlatformNameShort}: + +* RPM-based on {RHEL} +* Container-based on {RHEL} +* Operator-based on {OCP} + +This document offers guidance on hardening {PlatformNameShort} when installed using either of the first two installation methods (RPM-based or container-based). +This document further recommends using the container-based installation method for new deployments, as the RPM-based installer will be deprecated in a future release. + +For further information, see link:{URLReleaseNotes}/aap-2.5-deprecated-features#aap-2.5-deprecated-features[Deprecated features]. + +Operator-based deployments are out of scope for this document. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-install-secure-host.adoc b/downstream/modules/aap-hardening/con-install-secure-host.adoc index fd23a45dc..4162a1121 100644 --- a/downstream/modules/aap-hardening/con-install-secure-host.adoc +++ b/downstream/modules/aap-hardening/con-install-secure-host.adoc @@ -7,8 +7,14 @@ [role="_abstract"] -The {PlatformNameShort} installer can be run from one of the infrastructure servers, such as an {ControllerName}, or from an external system that has SSH access to the {PlatformNameShort} infrastructure servers. The {PlatformNameShort} installer is also used not just for installation, but for subsequent day-two operations, such as backup and restore, as well as upgrades. This guide recommends performing installation and day-two operations from a dedicated external server, hereafter referred to as the installation host. Doing so eliminates the need to log in to one of the infrastructure servers to run these functions. The installation host must only be used for management of {PlatformNameShort} and must not run any other services or software. +The {PlatformNameShort} installer can be run from one of the infrastructure servers, such as an {ControllerName}, or from an external system that has SSH access to the {PlatformNameShort} infrastructure servers. +The {PlatformNameShort} installer is also used not just for installation, but for subsequent day-two operations, such as backup and restore, as well as upgrades. 
+This guide recommends performing installation and day-two operations from a dedicated external server, hereafter referred to as the installation host. +Doing so eliminates the need to log in to one of the infrastructure servers to run these functions. The installation host must only be used for management of {PlatformNameShort} and must not run any other services or software. -The installation host must be a {RHEL} server that has been installed and configured in accordance with link:{BaseURL}/red_hat_enterprise_linux/8/html/security_hardening/index[Security hardening for Red Hat Enterprise Linux] and any security profile requirements relevant to your organization (CIS, STIG, and so on). Obtain the {PlatformNameShort} installer as described in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#choosing_and_obtaining_a_red_hat_ansible_automation_platform_installer[Automation Platform Planning Guide], and create the installer inventory file as describe in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#proc-editing-installer-inventory-file_platform-install-scenario[Automation Platform Installation Guide]. This inventory file is used for upgrades, adding infrastructure components, and day-two operations by the installer, so preserve the file after installation for future operational use. +The installation host must be a {RHEL} server that has been installed and configured in accordance with link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/index[Security hardening for Red Hat Enterprise Linux] and any security profile requirements relevant to your organization (CIS, STIG, and so on). +Obtain the {PlatformNameShort} installer as described in the link:{URLPlanningGuide}/choosing_and_obtaining_a_red_hat_ansible_automation_platform_installer[Planning your installation], and create the installer inventory file as described in the link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-editing-installer-inventory-file_platform-install-scenario[Editing the Red Hat Ansible Automation Platform installer inventory file]. +This inventory file is used for upgrades, adding infrastructure components, and day-two operations by the installer, so preserve the file after installation for future operational use. -Access to the installation host must be restricted only to those personnel who are responsible for managing the {PlatformNameShort} infrastructure. Over time, it will contain sensitive information, such as the installer inventory (which contains the initial login credentials for {PlatformNameShort}), copies of user-provided PKI keys and certificates, backup files, and so on. The installation host must also be used for logging in to the {PlatformNameShort} infrastructure servers through SSH when necessary for infrastructure management and maintenance. +Access to the installation host must be restricted only to those personnel who are responsible for managing the {PlatformNameShort} infrastructure. +Over time, it will contain sensitive information, such as the installer inventory (which contains the initial login credentials for {PlatformNameShort}), copies of user-provided PKI keys and certificates, backup files, and so on. 
The installation host must also be used for logging in to the {PlatformNameShort} infrastructure servers through SSH when necessary for infrastructure management and maintenance. diff --git a/downstream/modules/aap-hardening/con-installation.adoc b/downstream/modules/aap-hardening/con-installation.adoc index 64589fd3c..ed4d5fcaa 100644 --- a/downstream/modules/aap-hardening/con-installation.adoc +++ b/downstream/modules/aap-hardening/con-installation.adoc @@ -7,4 +7,6 @@ [role="_abstract"] -There are installation-time decisions that affect the security posture of {PlatformNameShort}. The installation process includes setting a number of variables, some of which are relevant to the hardening of the {PlatformNameShort} infrastructure. Before installing {PlatformNameShort}, consider the guidance in the installation section of this guide. \ No newline at end of file +There are installation-time decisions that affect the security posture of {PlatformNameShort}. +The installation process includes setting a number of variables, some of which are relevant to the hardening of the {PlatformNameShort} infrastructure. +Before installing {PlatformNameShort}, consider the guidance in the installation section of this guide. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-logging-log-capture.adoc b/downstream/modules/aap-hardening/con-logging-log-capture.adoc index 230010399..a04821b20 100644 --- a/downstream/modules/aap-hardening/con-logging-log-capture.adoc +++ b/downstream/modules/aap-hardening/con-logging-log-capture.adoc @@ -7,8 +7,22 @@ [role="_abstract"] -Visibility and analytics is an important pillar of Enterprise Security and Zero Trust Architecture. Logging is key to capturing actions and auditing. You can manage logging and auditing by using the built-in audit support described in the link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/auditing-the-system_security-hardening[Auditing the system] section of the Security hardening for {RHEL} guide. Controller's built-in logging and activity stream support {ControllerName} logs all changes within {ControllerName} and automation logs for auditing purposes. More detailed information is available in the link:https://docs.ansible.com/automation-controller/latest/html/administration/logging.html[Logging and Aggregation] section of the {ControllerName} documentation. +Visibility and analytics is an important pillar of Enterprise Security and Zero Trust Architecture. +Logging is key to capturing actions and auditing. +You can manage logging and auditing by using the built-in audit support described in the link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/auditing-the-system_security-hardening[Auditing the system] section of the Security hardening for {RHEL} guide. +{ControllerNameStart}'s built-in logging and activity stream support logs all changes within {ControllerName} and automation logs for auditing purposes. +More detailed information is available in the link:{URLControllerAdminGuide}/assembly-controller-logging-aggregation[Logging and Aggregation] section of {TitleControllerAdminGuide}. -This guide recommends that you configure {PlatformNameShort} and the underlying {RHEL} systems to collect logging and auditing centrally, rather than reviewing it on the local system. {ControllerNameStart} must be configured to use external logging to compile log records from multiple components within the controller server.
The events occurring must be time-correlated to conduct accurate forensic analysis. This means that the controller server must be configured with an NTP server that is also used by the logging aggregator service, as well as the targets of the controller. The correlation must meet certain industry tolerance requirements. In other words, there might be a varying requirement that time stamps of different logged events must not differ by any amount greater than X seconds. This capability should be available in the external logging service. +This guide recommends that you configure {PlatformNameShort} and the underlying {RHEL} systems to collect logging and auditing centrally, rather than reviewing it on the local system. +{ControllerNameStart} must be configured to use external logging to compile log records from multiple components within the controller server. +The events occurring must be time-correlated to conduct accurate forensic analysis. +This means that the {ControllerName} server must be configured with an NTP server that is also used by the logging aggregator service, as well as the targets of {ControllerName}. +The correlation must meet certain industry tolerance requirements. +In other words, there might be a varying requirement that time stamps of different logged events must not differ by any amount greater than _x_ seconds. +This capability should be available in the external logging service. -Another critical capability of logging is the ability to use cryptography to protect the integrity of log tools. Log data includes all information (for example, log records, log settings, and log reports) needed to successfully log information system activity. It is common for attackers to replace the log tools or inject code into the existing tools to hide or erase system activity from the logs. To address this risk, log tools must be cryptographically signed so that you can identify when the log tools have been modified, manipulated, or replaced. For example, one way to validate that the log tool(s) have not been modified, manipulated or replaced is to use a checksum hash against the tool file(s). This ensures the integrity of the tool(s) has not been compromised. +Another critical capability of logging is the ability to use cryptography to protect the integrity of log tools. Log data includes all information (for example, log records, log settings, and log reports) needed to successfully log information system activity. +It is common for attackers to replace the log tools or inject code into the existing tools to hide or erase system activity from the logs. +To address this risk, log tools must be cryptographically signed so that you can identify when the log tools have been modified, manipulated, or replaced. +For example, one way to validate that the log tool(s) have not been modified, manipulated or replaced is to use a checksum hash against the tool file(s). +This ensures the integrity of the tool(s) has not been compromised.
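+For example, the following Ansible tasks are a minimal sketch of such a checksum comparison; the binary path and the `expected_rsyslogd_checksum` variable are illustrative placeholders for the log tools and known-good values used in your environment.
+
+[source,yaml]
+----
+# Sketch only: compare a log tool binary against a recorded known-good SHA-256 checksum.
+# The path and the expected_rsyslogd_checksum variable are illustrative placeholders.
+- name: Gather the current checksum of the rsyslog binary
+  ansible.builtin.stat:
+    path: /usr/sbin/rsyslogd
+    checksum_algorithm: sha256
+  register: rsyslogd_stat
+
+- name: Fail if the checksum does not match the recorded known-good value
+  ansible.builtin.assert:
+    that:
+      - rsyslogd_stat.stat.checksum == expected_rsyslogd_checksum
+    fail_msg: The rsyslog binary does not match its recorded checksum and may have been modified or replaced.
+----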
diff --git a/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc b/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc index ba5de6ccf..06944eb2f 100644 --- a/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc +++ b/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc @@ -3,20 +3,24 @@ [id="con-network-firewall-services_{context}"] -= Network, firewall, and network services planning for {PlatformNameShort} +//= Network, firewall, and network services planning for {PlatformNameShort} -[role="_abstract"] +//[role="_abstract"] -{PlatformNameShort} requires access to a network to integrate to external auxiliary services and to manage target environments and resources such as hosts, other network devices, applications, cloud services. The link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#ref-network-ports-protocols_planning[network ports and protocols] section of the {PlatformNameShort} planning guide describes how {PlatformNameShort} components interact on the network as well as which ports and protocols are used, as shown in the following diagram: +//{PlatformNameShort} requires access to a network to integrate to external auxiliary services and to manage target environments and resources such as hosts, other network devices, applications, cloud services. +//The link:{URLPlanningGuide}?ref-network-ports-protocols_planning[network ports and protocols] section of {TitlePlanningGuide} describes how {PlatformNameShort} components interact on the network as well as which ports and protocols are used, as shown in the following diagram: -.{PlatformNameShort} Network ports and protocols -image::aap-network-ports-protocols.png[Interaction of {PlatformNameShort} components on the network with information about the ports and protocols that are used.] +//.{PlatformNameShort} Network ports and protocols +//image::aap-network-ports-protocols.png[Interaction of {PlatformNameShort} components on the network with information about the ports and protocols that are used.] -When planning firewall or cloud network security group configurations related to {PlatformNameShort}, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#ref-network-ports-protocols_planning[Network ports and protocols] section of the {PlatformNameShort} Planning Guide to understand what network ports need to be opened on a firewall or security group. +When planning firewall or cloud network security group configurations related to {PlatformNameShort}, see the +'Network Ports' section of your chosen topology in link:{LinkTopologies} +//link:{URLPlanningGuide}?ref-network-ports-protocols_planning[network ports and protocols] section of {TitlePlanningGuide} +to understand what network ports need to be opened on a firewall or security group. -For more information on using a load balancer, and for outgoing traffic requirements for services compatible with {PlatformNameShort}. Consult the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6756251[What ports need to be opened in the firewall for {PlatformNameShort} 2 Services?]. 
For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as {HubNameMain}, {InsightsName}, {Galaxy}, the registry.redhat.io container image registry, and so on. +//For more information on using a load balancer, and for outgoing traffic requirements for services compatible with {PlatformNameShort}. Consult the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6756251[What ports need to be opened in the firewall for {PlatformNameShort} 2 Services?]. For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as {HubNameMain}, {InsightsName}, {Galaxy}, the registry.redhat.io container image registry, and so on. -For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as Red Hat {HubName}, {InsightsShort}, {Galaxy}, the registry.redhat.io container image registry, and so on. +For internet-connected systems, the link:{URLPlanningGuide}/ref-network-ports-protocols_planning[Networks and Protocols] section of {TitlePlanningGuide} defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as Red Hat {HubName}, {InsightsShort}, {Galaxy}, the registry.redhat.io container image registry, and so on. Restrict access to the ports used by the {PlatformNameShort} components to protected networks and clients. The following restrictions are highly recommended: diff --git a/downstream/modules/aap-hardening/con-patch-automation-with-aap.adoc b/downstream/modules/aap-hardening/con-patch-automation-with-aap.adoc new file mode 100644 index 000000000..8df0e5f71 --- /dev/null +++ b/downstream/modules/aap-hardening/con-patch-automation-with-aap.adoc @@ -0,0 +1,8 @@ +[id="con-patch-automation-with-aap"] + += Patch automation with {PlatformNameShort} + +Software patching is a fundamental activity of security and IT operations teams everywhere. +Keeping patches up to date is critical to remediating software vulnerabilities and meeting compliance requirements, but patching systems manually at scale can be time-consuming and error-prone. +Organizations should put thought into patch management strategies that meet their security, compliance, and business objectives, to prioritize the types of patches to apply (known exploits, critical or important vulnerabilities, optimizations, routine updates, new features, and so on) against the IT assets available across the enterprise. +Once policies and priorities have been defined and a patching plan is established, the manual tasks involved in patch management can be automated using {PlatformName} to improve patch deployment speed and accuracy, reduce human error, and limit downtime. diff --git a/downstream/modules/aap-hardening/con-patching-examples.adoc b/downstream/modules/aap-hardening/con-patching-examples.adoc new file mode 100644 index 000000000..0c36ed064 --- /dev/null +++ b/downstream/modules/aap-hardening/con-patching-examples.adoc @@ -0,0 +1,7 @@ +[id="con-patching-examples"] + += Patching examples + +The following playbooks are provided as patching examples, and should be modified to fit the target environment and tested thoroughly before being used in production. 
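+As a simple illustration, a playbook along the following lines updates every package on a group of hosts; the `rhel_servers` group name shown here is a placeholder for your own inventory.
+
+[source,yaml]
+----
+# Minimal patching sketch: update all packages on the hosts in a placeholder inventory group.
+- name: Apply the latest available package updates
+  hosts: rhel_servers
+  become: true
+  tasks:
+    - name: Update all packages to the latest available version
+      ansible.builtin.dnf:
+        name: '*'
+        state: latest
+----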
+These examples use the `ansible.builtin.dnf` module for managing packages on RHEL and other operating systems that use the `dnf` package manager. +Modules for patching other Linux operating systems, Microsoft Windows, and many network devices are also available. diff --git a/downstream/modules/aap-hardening/con-planning-considerations.adoc b/downstream/modules/aap-hardening/con-planning-considerations.adoc index f6f861746..fac3da912 100644 --- a/downstream/modules/aap-hardening/con-planning-considerations.adoc +++ b/downstream/modules/aap-hardening/con-planning-considerations.adoc @@ -7,16 +7,16 @@ [role="_abstract"] -When planning an {PlatformNameShort} installation, ensure that the following components are included: - -* Installer-manged components -** {ControllerNameStart} -** {EDAcontroller} -** {PrivateHubNameStart} -* PostgreSQL database (if not external) -** External services -** {InsightsName} -** {HubNameStart} -** `registry.redhat.io` (default {ExecEnvShort} container registry) - -See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements[system requirements] section of the _{PlatformName} Planning Guide_ for additional information. +{PlatformName} is composed of the following primary components: + + + +* {ControllerNameStart} +* Automation mesh +* {PrivateHubNameStart} +* {EDAcontroller} + +A PostgreSQL database is also provided, although a user-provided PostgreSQL database can be used as well. +Red Hat recommends that customers always deploy all components of {PlatformNameShort} so that all features and capabilities are available for use without the need to take further action. + +For further information, see link:{URLPlanningGuide}/aap_architecture[{PlatformName} Architecture]. diff --git a/downstream/modules/aap-hardening/con-platform-components.adoc b/downstream/modules/aap-hardening/con-platform-components.adoc index e3768d3ad..3ceee6053 100644 --- a/downstream/modules/aap-hardening/con-platform-components.adoc +++ b/downstream/modules/aap-hardening/con-platform-components.adoc @@ -7,8 +7,8 @@ [role="_abstract"] -{PlatformNameShort} is a modular platform that includes {ControllerName}, {HubName}, {EDAcontroller}, and {InsightsShort}. +{PlatformNameShort} is a modular platform composed of separate components that can be connected together, including {ControllerName}, {GatewayStart}, {HubName}, and {EDAcontroller}. [role="_additional-resources"] .Additional resources -For more information about the components provided within {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/ref-aap-components[Red Hat Ansible Automation Platform components] in the _Red Hat Ansible Automation Platform Planning Guide_. +For more information about the components provided within {PlatformNameShort}, see link:{URLPlanningGuide}/ref-aap-components[Red Hat Ansible Automation Platform components] in _Planning your installation_. diff --git a/downstream/modules/aap-hardening/con-product-overview.adoc b/downstream/modules/aap-hardening/con-product-overview.adoc index f3c42ba54..2db20855c 100644 --- a/downstream/modules/aap-hardening/con-product-overview.adoc +++ b/downstream/modules/aap-hardening/con-product-overview.adoc @@ -7,6 +7,11 @@ [role="_abstract"] -Ansible is an open source, command-line IT automation software application written in Python.
You can use {PlatformNameShort} to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible’s main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training. +Ansible is an open source, command-line IT automation software application written in Python. +You can use {PlatformNameShort} to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. +Ansible's main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training. -{PlatformNameShort} enhances the Ansible language with enterprise-class features, such as Role-Based Access Controls (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. With {PlatformNameShort} you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization. {PlatformNameShort} simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. \ No newline at end of file +{PlatformNameShort} enhances the Ansible language with enterprise-class features, such as _Role-Based Access Controls_ (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. +With {PlatformNameShort} you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization. +{PlatformNameShort} simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. +It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/con-rbac.adoc b/downstream/modules/aap-hardening/con-rbac.adoc index dee75d460..e56162af3 100644 --- a/downstream/modules/aap-hardening/con-rbac.adoc +++ b/downstream/modules/aap-hardening/con-rbac.adoc @@ -7,17 +7,21 @@ [role="_abstract"] -As an administrator, you can use the Role-Based Access Controls (RBAC) built into {ControllerName} to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow the controller to help you increase security and streamline management. +As an administrator, you can use the _Role-Based Access Controls_ (RBAC) built into the {GatewayStart} to delegate access to server inventories, organizations, and more. 
+Administrators can also centralize the management of various credentials, enabling end users to use a needed secret without ever exposing that secret to the end user. +RBAC controls allow {PlatformNameShort} to help you increase security and streamline management. -RBAC is the practice of granting roles to users or teams. RBACs are easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an “object” for which a specific capability is being set. +RBAC is the practice of granting roles to users or teams. +RBAC is easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an “object” for which a specific capability is being set. -There are a few main concepts that you should become familiar with regarding {ControllerName}'s RBAC design–roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with “descendant” roles. +There are a few main concepts that you should become familiar with regarding {PlatformNameShort}'s RBAC design–roles, resources, and users. +Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with “descendant” roles. A role is essentially a collection of capabilities. Users are granted access to these capabilities and the controller’s resources through the roles to which they are assigned or through roles inherited through the role hierarchy. Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles. -For further detail on Role Hierarchy, access inheritance, Built in Roles, permissions, personas, Role Creation, and so on see link:https://docs.ansible.com/automation-controller/latest/html/userguide/security.html#role-based-access-controls[Role-Based Access Controls]. +For further detail on Role Hierarchy, access inheritance, Built in Roles, permissions, personas, Role Creation, and so on see link:https://docs.ansible.com/automation-controller/latest/html/userguide/security.html#role-based-access-controls[Managing access with Role-Based access controls]. The following is an example of an organization with roles and resource permissions: @@ -26,6 +30,12 @@ image::aap_ref_arch_2.4.1.png[Reference architecture for an example of an organi User access is based on managing permissions to system objects (users, groups, namespaces) rather than by assigning permissions individually to specific users. You can assign permissions to the groups you create. You can then assign users to these groups. This means that each user in a group has the permissions assigned to that group. -Groups created in Automation Hub can range from system administrators responsible for governing internal collections, configuring user access, and repository management to groups with access to organize and upload internally developed content to Automation Hub. 
For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access#ref-permissions[{HubNameStart} permissions] for consistency. +Teams created in {HubName} can range from system administrators responsible for governing internal collections, configuring user access, and repository management to groups with access to organize and upload internally developed content to {HubName}. -View-only access can be enabled for further lockdown of the {PrivateHubName}. By enabling view-only access, you can grant access for users to view collections or namespaces on your {PrivateHubName} without the need for them to log in. View-only access allows you to share content with unauthorized users while restricting their ability to only view or download source code, without permissions to edit anything on your {PrivateHubName}. Enable view-only access for your {PrivateHubName} by editing the inventory file found on your {PlatformName} installer. +//TBD link to getting started with hub +//For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access#ref-permissions[{HubNameStart} permissions] for consistency. + +View-only access can be enabled for further lockdown of the {PrivateHubName}. +By enabling view-only access, you can grant access for users to view collections or namespaces on your {PrivateHubName} without the need for them to log in. +View-only access allows you to share content with unauthorized users while restricting their ability to only view or download source code, without permissions to edit anything on your {PrivateHubName}. +Enable view-only access for your {PrivateHubName} by editing the inventory file found on your {PlatformName} installer. diff --git a/downstream/modules/aap-hardening/con-rhel-host-planning.adoc b/downstream/modules/aap-hardening/con-rhel-host-planning.adoc index 52ba269f2..148a4f3d6 100644 --- a/downstream/modules/aap-hardening/con-rhel-host-planning.adoc +++ b/downstream/modules/aap-hardening/con-rhel-host-planning.adoc @@ -7,6 +7,10 @@ [role="_abstract"] -The security of {PlatformNameShort} relies in part on the configuration of the underlying {RHEL} servers. For this reason, the underlying {RHEL} hosts for each {PlatformNameShort} component must be installed and configured in accordance with the link:{BaseURL}/red_hat_enterprise_linux/8/html-single/security_hardening/index[Security hardening for {RHEL} 8] or link:{BaseURL}/red_hat_enterprise_linux/9/html-single/security_hardening/index[Security hardening for {RHEL} 9] (depending on which operating system will be used), as well as any security profile requirements (CIS, STIG, HIPAA, and so on) used by your organization. +The security of {PlatformNameShort} relies in part on the configuration of the underlying {RHEL} servers. +For this reason, the underlying {RHEL} hosts for each {PlatformNameShort} component must be installed and configured in accordance with the link:{BaseURL}/red_hat_enterprise_linux/8/html-single/security_hardening/index[Security hardening for {RHEL} 8] or link:{BaseURL}/red_hat_enterprise_linux/9/html-single/security_hardening/index[Security hardening for {RHEL} 9] (depending on which operating system will be used), as well as any security profile requirements (CIS, STIG, HIPAA, and so on) used by your organization. +For new deployments, this document recommends {RHEL} 9.
+When using the container-based installation method, {RHEL} 9 is required. -Note that applying certain security controls from the STIG or other security profiles may conflict with {PlatformNameShort} support requirements. Some examples are listed in the xref:con-controller-stig-considerations_{context}[{ControllerNameStart} STIG considerations] section, although this is not an exhaustive list. To maintain a supported configuration, be sure to discuss any such conflicts with your security auditors so the {PlatformNameShort} requirements are understood and approved. +//Note that applying certain security controls from the STIG or other security profiles may conflict with {PlatformNameShort} support requirements. +//Some examples are listed in the xref:con-controller-stig-considerations_{context}[{ControllerNameStart} STIG considerations] section, although this is not an exhaustive list. To maintain a supported configuration, be sure to discuss any such conflicts with your security auditors so the {PlatformNameShort} requirements are understood and approved. diff --git a/downstream/modules/aap-hardening/con-security-operations-center.adoc b/downstream/modules/aap-hardening/con-security-operations-center.adoc new file mode 100644 index 000000000..3cd095b9e --- /dev/null +++ b/downstream/modules/aap-hardening/con-security-operations-center.adoc @@ -0,0 +1,14 @@ +[id="con-security-operations-center"] + += {PlatformName} as part of a Security Operations Center + +Protecting your organization is a critical task. +Automating functions of your _Security Operations Center_ (SOC) can help you streamline security operations, response, and remediation activities at scale to reduce the risk and cost of breaches. +{PlatformName} can connect your security teams, tools, and processes for more successful automation adoption and use. +Learn how automation can help you safeguard your business and respond to growing security threats faster. + +link:https://www.redhat.com/en/resources/security-automation-ebook[Simplify your security operations center] provides an overview of the benefits to automating SOC operations, including such use cases as: + +* Investigation enrichment +* Threat hunting +* Incident response diff --git a/downstream/modules/aap-hardening/con-user-authentication-planning.adoc b/downstream/modules/aap-hardening/con-user-authentication-planning.adoc index cfaff9004..e0915d521 100644 --- a/downstream/modules/aap-hardening/con-user-authentication-planning.adoc +++ b/downstream/modules/aap-hardening/con-user-authentication-planning.adoc @@ -7,15 +7,14 @@ [role="_abstract"] -When planning for access to the {PlatformNameShort} user interface or API, be aware that user accounts can either be local or mapped to an external authentication source such as LDAP. This guide recommends that where possible, all primary user accounts should be mapped to an external authentication source. Using external account sources eliminates a source of error when working with permissions in this context and minimizes the amount of time devoted to maintaining a full set of users exclusively within {PlatformNameShort}. This includes accounts assigned to individual persons as well as for non-person entities such as service accounts used for external application integration. Reserve any local administrator accounts such as the default "admin" account for emergency access or "break glass" scenarios where the external authentication mechanism is not available. 
- - -[NOTE] -==== -The {EDAcontroller} does not currently support external authentication, only local accounts. -==== - -For user accounts on the {RHEL} servers that run the {PlatformNameShort} services, follow your organizational policies to determine if individual user accounts should be local or from an external authentication source. Only users who have a valid need to perform maintenance tasks on the {PlatformNameShort} components themselves should be granted access to the underlying {RHEL} servers, as the servers will have configuration files that contain sensitive information such as encryption keys and service passwords. Because these individuals must have privileged access to maintain {PlatformNameShort} services, minimizing the access to the underlying {RHEL} servers is critical. Do not grant sudo access to the root account or local {PlatformNameShort} service accounts (awx, pulp, postgres) to untrusted users. +When planning for access to the {PlatformNameShort} user interface or API, be aware that user accounts can either be local or mapped to an external authentication source such as LDAP. +This guide recommends that where possible, all primary user accounts should be mapped to an external authentication source. +Using external account sources eliminates a source of error when working with permissions in this context and minimizes the amount of time devoted to maintaining a full set of users exclusively within {PlatformNameShort}. This includes accounts assigned to individual persons as well as for non-person entities such as service accounts used for external application integration. +Reserve any local administrator accounts such as the default "admin" account for emergency access or "break glass" scenarios where the external authentication mechanism is not available. + +For user accounts on the {RHEL} servers that run the {PlatformNameShort} services, follow your organizational policies to determine if individual user accounts should be local or from an external authentication source. +Only users who have a valid need to perform maintenance tasks on the {PlatformNameShort} components themselves should be granted access to the underlying {RHEL} servers, as the servers will have configuration files that contain sensitive information such as encryption keys and service passwords. +Because these individuals must have privileged access to maintain {PlatformNameShort} services, minimizing the access to the underlying {RHEL} servers is critical. Do not grant sudo access to the root account or local {PlatformNameShort} service accounts (awx, pulp, postgres) to untrusted users. [NOTE] ==== diff --git a/downstream/modules/aap-hardening/platform b/downstream/modules/aap-hardening/platform new file mode 120000 index 000000000..01b1259b7 --- /dev/null +++ b/downstream/modules/aap-hardening/platform @@ -0,0 +1 @@ +../platform/ \ No newline at end of file diff --git a/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc b/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc index 577f2060e..a41f476c6 100644 --- a/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc +++ b/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc @@ -5,14 +5,18 @@ = Configure centralized logging -A critical capability of logging is the ability for the {ControllerName} to detect and take action to mitigate a failure, such as reaching storage capacity, which by default shuts down the controller. 
This guide recommends that the application server be part of a high availability system. When this is the case, {ControllerName} will take the following steps to mitigate failure: +A critical capability of logging is the ability for the {ControllerName} to detect and take action to mitigate a failure, such as reaching storage capacity, which by default shuts down {ControllerName}. +This guide recommends that the application server be part of a high availability system. +When this is the case, {ControllerName} will take the following steps to mitigate failure: * If the failure was caused by the lack of log record storage capacity, the application must continue generating log records if possible (automatically restarting the log service if necessary), overwriting the oldest log records in a first-in-first-out manner. -* If log records are sent to a centralized collection server and communication with this server is lost or the server fails, the application must queue log records locally until communication is restored or until the log records are retrieved manually. Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local log data with the collection server. +* If log records are sent to a centralized collection server and communication with this server is lost or the server fails, the application must queue log records locally until communication is restored or until the log records are retrieved manually. +Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local log data with the collection server. To verify the rsyslog configuration for each {ControllerName} host, complete the following steps for each {ControllerName}: -The administrator must check the rsyslog configuration for each {ControllerName} host to verify the log rollover against a organizationally defined log capture size. To do this, use the following steps, and correct using the configuration steps as required: +The administrator must check the rsyslog configuration for each {ControllerName} host to verify the log rollover against an organizationally defined log capture size. +To do this, use the following steps, and correct using the configuration steps as required: . Check the `LOG_AGGREGATOR_MAX_DISK_USAGE_GB` field in the {ControllerName} configuration. On the host, execute: + @@ -43,51 +47,55 @@ Note that this change will need to be made on each {ControllerName} in a load-ba All user session data must be logged to support troubleshooting, debugging and forensic analysis for visibility and analytics. Without this data from the controller’s web server, important auditing and analysis for event investigations will be lost. To verify that the system is configured to ensure that user session data is logged, use the following steps: -For each {ControllerName} host, navigate to console Settings >> System >> Miscellaneous System. +For each {ControllerName} host, from the navigation panel, select {MenuSetSystem}. . Click btn:[Edit]. . Set the following: -* Enable Activity Stream = On -* Enable Activity Stream for Inventory Sync = On -* Organization Admins Can Manage Users and Teams = Off -* All Users Visible to Organization Admins = On +* *Enable Activity Stream* = On +* *Enable Activity Stream for Inventory Sync* = On +* *Organization Admins Can Manage Users and Teams* = Off +* *All Users Visible to Organization Admins* = On .
Click btn:[Save] To set up logging to any of the aggregator types, read the documentation on link:https://docs.ansible.com/automation-controller/latest/html/administration/logging.html#logging-aggregator-services[supported log aggregators] and configure your log aggregator using the following steps: -. Navigate to {PlatformNameShort}. -. Click btn:[Settings]. -. Under the list of System options, select Logging settings. -. At the bottom of the Logging settings screen, click btn:[Edit]. -. Set the configurable options from the fields provided: -* Enable External Logging: Click the toggle button to btn:[ON] if you want to send logs to an external log aggregator. The UI requires the Logging Aggregator and Logging Aggregator Port fields to be filled in before this can be done. -* Logging Aggregator: Enter the hostname or IP address you want to send logs. -* Logging Aggregator Port: Specify the port for the aggregator if it requires one. -* Logging Aggregator Type: Select the aggregator service from the drop-down menu: -** Splunk -** Loggly -** Sumologic -** Elastic stack (formerly ELK stack) -* Logging Aggregator Username: Enter the username of the logging aggregator if required. -* Logging Aggregator Password/Token: Enter the password of the logging aggregator if required. -* Log System Tracking Facts Individually: Click the tooltip icon for additional information, whether or not you want to turn it on, or leave it off by default. -* Logging Aggregator Protocol: Select a connection type (protocol) to communicate with the log aggregator. Subsequent options vary depending on the selected protocol. -* Logging Aggregator Level Threshold: Select the level of severity you want the log handler to report. -* TCP Connection Timeout: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols. -* Enable/disable HTTPS certificate verification: Certificate verification is enabled by default for HTTPS log protocol. Click the toggle button to btn:[OFF] if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. -* Loggers to Send Data to the Log Aggregator Form: All four types of data are pre-populated by default. Click the tooltip icon next to the field for additional information on each data type. Delete the data types you do not want. -* Log Format For API 4XX Errors: Configure a specific error message. -. Click btn:[Save] to apply the settings or btn:[Cancel] to abandon the changes. -. To verify if your configuration is set up correctly, btn:[Save] first then click btn:[Test]. This sends a test log message to the log aggregator using the current logging configuration in the {ControllerName}. You should check to make sure this test message was received by your external log aggregator. - -A {ControllerName} account is automatically created for any user who logs in with an LDAP username and password. These users can automatically be placed into organizations as regular users or organization administrators. This means that logging should be turned on when LDAP integration is in use. You can enable logging messages for the SAML adapter the same way you can enable logging for LDAP. +include::platform/proc-controller-set-up-logging.adoc[leveloffset=+2] + +//. Navigate to {PlatformNameShort}. +//. Click btn:[Settings]. +//. Under the list of System options, select Logging settings. +//. At the bottom of the Logging settings screen, click btn:[Edit]. +//. 
Set the configurable options from the fields provided: +//* Enable External Logging: Click the toggle button to btn:[ON] if you want to send logs to an external log aggregator. The UI requires the Logging Aggregator and Logging Aggregator Port fields to be filled in before this can be done. +//* Logging Aggregator: Enter the hostname or IP address you want to send logs. +//* Logging Aggregator Port: Specify the port for the aggregator if it requires one. +//* Logging Aggregator Type: Select the aggregator service from the drop-down menu: +//** Splunk +//** Loggly +//** Sumologic +//** Elastic stack (formerly ELK stack) +//* Logging Aggregator Username: Enter the username of the logging aggregator if required. +//* Logging Aggregator Password/Token: Enter the password of the logging aggregator if required. +//* Log System Tracking Facts Individually: Click the tooltip icon for additional information, whether or not you want to turn it on, or leave it off by default. +//* Logging Aggregator Protocol: Select a connection type (protocol) to communicate with the log aggregator. Subsequent options vary depending on the selected protocol. +//* Logging Aggregator Level Threshold: Select the level of severity you want the log handler to report. +//* TCP Connection Timeout: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols. +//* Enable/disable HTTPS certificate verification: Certificate verification is enabled by default for HTTPS log protocol. Click the toggle button to btn:[OFF] if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. +//* Loggers to Send Data to the Log Aggregator Form: All four types of data are pre-populated by default. Click the tooltip icon next to the field for additional information on each data type. Delete the data types you do not want. +//* Log Format For API 4XX Errors: Configure a specific error message. +//. Click btn:[Save] to apply the settings or btn:[Cancel] to abandon the changes. +//. To verify if your configuration is set up correctly, btn:[Save] first then click btn:[Test]. This sends a test log message to the log aggregator using the current logging configuration in the {ControllerName}. You should check to make sure this test message was received by your external log aggregator. + +An {ControllerName} account is automatically created for any user who logs in with an LDAP username and password. These users can automatically be placed into organizations as regular users or organization administrators. +This means that logging must be turned on when LDAP integration is in use. +You can enable logging messages for the SAML adapter the same way you can enable logging for LDAP. The following steps enable the LDAP logging: To enable logging for LDAP, you must set the level to DEBUG in the Settings configuration window. -. Click btn:[Settings] from the left navigation pane and select Logging settings from the System list of options. -. Click btn:[Edit]. -. Set the Logging Aggregator Level Threshold field to Debug. +. From the navigation panel, select {MenuSetLogging}. +. On the *Logging settings* page, click btn:[Edit]. +. Set the *Logging Aggregator Level Threshold* field to Debug. . Click btn:[Save] to save your changes. 
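+
+If you manage {ControllerName} settings as configuration as code, the logging and activity stream options described above can also be applied with the `ansible.controller` (or upstream `awx.awx`) collection. The following is a minimal sketch only: it assumes the collection is installed, that connection details are supplied through environment variables such as `CONTROLLER_HOST` and `CONTROLLER_OAUTH_TOKEN`, and that `logserver.example.com` is a placeholder for your log aggregator. Verify the setting names against your installed version before use.
+
+----
+- name: Apply logging and activity stream hardening settings
+  hosts: localhost
+  connection: local
+  gather_facts: false
+
+  tasks:
+    - name: Enable the activity stream and bound local log storage
+      ansible.controller.settings:
+        settings:
+          ACTIVITY_STREAM_ENABLED: true
+          ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC: true
+          LOG_AGGREGATOR_ENABLED: true
+          LOG_AGGREGATOR_HOST: logserver.example.com  # placeholder aggregator host
+          LOG_AGGREGATOR_PORT: 514
+          LOG_AGGREGATOR_LEVEL: INFO
+          LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1
+----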
diff --git a/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc b/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc index 2ddb9a1f1..2211741af 100644 --- a/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc +++ b/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc @@ -7,7 +7,14 @@ [role="_abstract"] -As noted in the xref:con-user-authentication-planning_{context}[User authentication planning] section, external authentication is recommended for user access to the {ControllerName}. After you choose the authentication type that best suits your needs, navigate to {MenuAEAdminSettings} and select *Authentication* in the {ControllerName} UI, click on the relevant link for your authentication back-end, and follow the relevant instructions for link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[configuring the authentication] connection. +As noted in the xref:con-user-authentication-planning_{context}[User authentication planning] section, external authentication is recommended for user access to the {ControllerName}. +After you choose the authentication type that best suits your needs: + +. Navigate to {MenuAMAuthentication}. +. Click btn:[Create authentication]. +. Select the *Authentication type* you require from the menu. +. Click btn:[Next]. +. On the *Authentication details* page, click the relevant link for your authentication back-end, and follow the instructions for link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[configuring the authentication] connection. // [ddacosta] The following will need to be rewritten for the way this is configured in 2.5 When using LDAP for external authentication with the {ControllerName}, navigate to {MenuAEAdminSettings} and select *Authentication* and then select *LDAP settings* on the {ControllerName} and ensure that one of the following is configured: diff --git a/downstream/modules/aap-hardening/proc-install-user-pki.adoc b/downstream/modules/aap-hardening/proc-install-user-pki.adoc index 5e472e662..ce1cfbfc5 100644 --- a/downstream/modules/aap-hardening/proc-install-user-pki.adoc +++ b/downstream/modules/aap-hardening/proc-install-user-pki.adoc @@ -13,29 +13,39 @@ Use the following inventory variables to configure the infrastructure components .PKI certificate inventory variables |=== -| *Variable* | *Details* -| `custom_ca_cert` | The file name of the CA certificate located in the installer directory. +| *RPM Variable* | *Containerized Variable* | *Details* +| `custom_ca_cert` | | The file name of the CA certificate located in the installer directory. -| `web_server_ssl_cert` | The file name of the {ControllerName} PKI certificate located in the installer directory. +Define a custom CA certificate when you have leaf certificates created for each product and need the certificate trust to be established. -| `web_server_ssl_key` | The file name of the {ControllerName} PKI key located in the installer directory. +| `web_server_ssl_cert` | `controller_tls_cert` | The file name of the {ControllerName} PKI certificate located in the installer directory. -| `automationhub_ssl_cert` | The file name of the {PrivateHubName} PKI certificate located in the installer directory.
+| `web_server_ssl_key` | `controller_tls_key` | The file name of the {ControllerName} PKI key located in the installer directory. -| `automationhub_ssl_key` | The file name of the {PrivateHubName} PKI key located in the installer directory. +| `automationhub_ssl_cert` | `hub_tls_cert` | The file name of the {PrivateHubName} PKI certificate located in the installer directory. -| `postgres_ssl_cert` | The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. +| `automationhub_ssl_key` | `hub_tls_key` | The file name of the {PrivateHubName} PKI key located in the installer directory. -| `postgres_ssl_key` | The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. +| `postgres_ssl_cert` | `postgresql_tls_cert` | The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. -| `automationedacontroller_ssl_cert` | The file name of the {EDAcontroller} PKI certificate located in the installer directory. +| `postgres_ssl_key` | `postgresql_tls_key` | The file name of the database server PKI key located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. -| `automationedacontroller_ssl_key` | The file name of the {EDAcontroller} PKI key located in the installer directory. +| `automationedacontroller_ssl_cert` | `eda_tls_cert` | The file name of the {EDAcontroller} PKI certificate located in the installer directory. + +| `automationedacontroller_ssl_key` | `eda_tls_key` | The file name of the {EDAcontroller} PKI key located in the installer directory. |=== -When multiple {ControllerName} are deployed with a load balancer, the `web_server_ssl_cert` and `web_server_ssl_key` are shared by each controller. To prevent hostname mismatches, the certificate's Common Name (CN) must match the DNS FQDN used by the load balancer. This also applies when deploying multiple {PrivateHubName} and the `automationhub_ssl_cert` and `automationhub_ssl_key` variables. If your organizational policies require unique certificates for each service, each certificate requires a Subject Alt Name (SAN) that matches the DNS FQDN used for the load-balanced service. To install unique certificates and keys on each {ControllerName}, the certificate and key variables in the installation inventory file must be defined as per-host variables instead of in the `[all:vars]` section. For example: +When multiple {ControllerName} instances are deployed with a load balancer, the `web_server_ssl_cert` and `web_server_ssl_key` are shared by each {ControllerName}. +To prevent hostname mismatches, the certificate's Common Name (CN) must match the DNS FQDN used by the load balancer. +This also applies to the `automationhub_ssl_cert` and `automationhub_ssl_key` variables when deploying multiple {PrivateHubName} instances. +If your organizational policies require unique certificates for each service, each certificate requires a Subject Alt Name (SAN) that matches the DNS FQDN used for the load-balanced service.
+To install unique certificates and keys on each {ControllerName}, the certificate and key variables in the installation inventory file must be defined as per-host variables instead of in the `[all:vars]` section. +For example: ---- +[automationgateway] +gateway0.example.com + [automationcontroller] controller0.example.com web_server_ssl_cert=/path/to/cert0 web_server_ssl_key=/path/to/key0 controller1.example.com web_server_ssl_cert=/path/to/cert1 web_server_ssl_key=/path/to/key1 @@ -47,5 +57,8 @@ controller2.example.com web_server_ssl_cert=/path/to/cert2 web_server_ssl_key=/p hub0.example.com automationhub_ssl_cert=/path/to/cert0 automationhub_ssl_key=/path/to/key0 hub1.example.com automationhub_ssl_cert=/path/to/cert1 automationhub_ssl_key=/path/to/key1 hub2.example.com automationhub_ssl_cert=/path/to/cert2 automationhub_ssl_key=/path/to/key2 + +[automationedacontroller] +automationedacontroller0.example.com ---- diff --git a/downstream/modules/aap-hardening/ref-aap-authentication.adoc b/downstream/modules/aap-hardening/ref-aap-authentication.adoc new file mode 100644 index 000000000..5bd68e144 --- /dev/null +++ b/downstream/modules/aap-hardening/ref-aap-authentication.adoc @@ -0,0 +1,37 @@ +// Module included in the following assemblies: +// downstream/assemblies/assembly-hardening-aap.adoc + +[id="ref-aap-authentication_{context}"] + += {PlatformNameStart} authentication + +[role="_abstract"] + +{ControllerNameStart} currently supports the following external authentication mechanisms through the {GatewayStart} UI: + +* Azure Active Directory +* GitHub single sign-on +** GitHub +** GitHub Organization +** GitHub team +** GitHub enterprise +** GitHub enterprise organization +** GitHub enterprise team +* Google OAuth2 single sign-on +* LDAP +* RADIUS +* SAML +* TACACS+ +* Generic OIDC +* Keycloak + +Choose an authentication mechanism that adheres to your organization's authentication policies. +The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or non-secure network (for example, LDAPS or LDAP over TLS, or HTTPS for OAuth2 and SAML providers). + +For more information on authentication methods, see link:{URLCentralAuth}/gw-configure-authentication#gw-config-authentication-type[Configuring an authentication type]. + +In {GatewayStart}, any “system administrator” account can edit, change, and update any inventory or automation definition. +Restrict these account privileges to the minimum set of users possible for low-level {ControllerName} configuration and disaster recovery.
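+
+Before enabling LDAP authentication, you can confirm that the directory service actually offers TLS on the LDAPS port. The following is a minimal sketch only: it assumes the `community.crypto` collection is installed on the control node and uses `ldap.example.com` as a placeholder for your directory server.
+
+----
+- name: Verify that directory authentication traffic can be encrypted
+  hosts: localhost
+  gather_facts: false
+
+  tasks:
+    - name: Retrieve the certificate presented on the LDAPS port
+      community.crypto.get_certificate:
+        host: ldap.example.com  # placeholder directory server
+        port: 636               # standard LDAPS port
+      register: ldaps_cert
+
+    - name: Report the certificate expiry date for review
+      ansible.builtin.debug:
+        msg: "Certificate on port 636 expires {{ ldaps_cert.not_after }}"
+----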
+ + + diff --git a/downstream/modules/aap-hardening/ref-automation-controller-operational-secrets.adoc b/downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc similarity index 86% rename from downstream/modules/aap-hardening/ref-automation-controller-operational-secrets.adoc rename to downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc index 5361e5967..9c66ba908 100644 --- a/downstream/modules/aap-hardening/ref-automation-controller-operational-secrets.adoc +++ b/downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc @@ -1,15 +1,15 @@ // Module included in the following assemblies: // downstream/assemblies/assembly-hardening-aap.adoc -[id="ref-automation-controller-operational-secrets_{context}"] +[id="ref-aap-operational-secrets_{context}"] -= {ControllerNameStart} operational secrets += {PlatformNameShort} operational secrets [role="abstract"] -{ControllerNameStart} contains the following secrets used operationally: +{PlatformNameShort} contains the following secrets used operationally: -.{ControllerNameStart} operational secrets +.{PlatformNameShort} operational secrets |=== | *File* | *Details* | `/etc/tower/SECRET_KEY` | A secret key used for encrypting automation secrets in the database. If the `SECRET_KEY` changes or is unknown, no encrypted fields in the database will be accessible. diff --git a/downstream/modules/aap-hardening/ref-architecture.adoc b/downstream/modules/aap-hardening/ref-architecture.adoc index c17181439..e673fcd26 100644 --- a/downstream/modules/aap-hardening/ref-architecture.adoc +++ b/downstream/modules/aap-hardening/ref-architecture.adoc @@ -3,15 +3,24 @@ [id="ref-architecture_{context}"] -= {PlatformNameShort} reference architecture += {PlatformNameShort} deployment topologies [role="_abstract"] -For large-scale production environments with availability requirements, this guide recommends deploying the components described in section 2.1 of this guide using the instructions in the xref:ref-architecture_{context}[reference architecture] documentation for {PlatformName} on {RHEL}. While some variation may make sense for your specific technical requirements, following the reference architecture results in a supported production-ready environment. +Install {PlatformNameShort} {PlatformVers} based on one of the tested deployment topologies documented in link:{LinkTopologies}. +Enterprise organizations should use one of the enterprise topologies for production deployments to ensure the highest level of uptime, performance, and continued scalability. +Organizations or deployments that are resource-constrained can use a "growth" topology. +Review the Tested deployment models document to determine the topology that best suits your requirements. +The chosen topology includes planning information such as a topology diagram, the number of {RHEL} servers required, the network ports and protocols used by the deployment, and load balancer information for enterprise topologies. + +It is possible to install {PlatformNameShort} on different infrastructure topologies and with different environment configurations. +Red Hat does not fully test topologies outside of published deployment models.
+ +The following diagram shows a tested container enterprise topology: .Container enterprise topology -image::aap-ref-architecture-322.png[Reference architecture for an example setup of an {PlatformNameShort} deployment for large scale production environments] +image::cont-b-env-a.png[Infrastructure topology that Red Hat has tested and that customers can use when self-managing {PlatformNameShort}] -{EDAName} is a new feature of {PlatformNameShort} {PlatformVers} that was not available when the reference architecture detailed in Figure 1: Reference architecture overview was originally written. Currently, the supported configuration is a single {ControllerName}, single {HubName}, and single {EDAController} node with external (installer managed) database. For an organization interested in {EDAName}, the recommendation is to install according to the configuration documented in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario#ref-single-controller-hub-eda-with-managed-db[{PlatformNameShort} Installation Guide]. This document provides additional clarifications when {EDAName} specific hardening configuration is required. +//{EDAName} is a new feature of {PlatformNameShort} {PlatformVers} that was not available when the reference architecture detailed in Figure 1: Reference architecture overview was originally written. Currently, the supported configuration is a single {ControllerName}, single {HubName}, and single {EDAController} node with external (installer managed) database. For an organization interested in {EDAName}, the recommendation is to install according to the configuration documented in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario#ref-single-controller-hub-eda-with-managed-db[{PlatformNameShort} Installation Guide]. This document provides additional clarifications when {EDAName} specific hardening configuration is required. -For smaller production deployments where the full reference architecture may not be needed, this guide recommends deploying {PlatformNameShort} with a dedicated PostgreSQL database server whether managed by the installer or provided externally. +//For smaller production deployments where the full reference architecture may not be needed, this guide recommends deploying {PlatformNameShort} with a dedicated PostgreSQL database server whether managed by the installer or provided externally.
diff --git a/downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc b/downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc deleted file mode 100644 index be7e3266f..000000000 --- a/downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// downstream/assemblies/assembly-hardening-aap.adoc - -[id="ref-automation-controller-authentication_{context}"] - -= {ControllerNameStart} authentication - -[role="_abstract"] - -{ControllerNameStart} currently supports the following external authentication mechanisms: - -* Azure Activity Directory -* GitHub single sign-on -* Google OAuth2 single sign-in -* LDAP -* RADIUS -* SAML -* TACACS+ -* Generic OIDC - -Choose an authentication mechanism that adheres to your organization's authentication policies, and refer to the link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[Controller Configuration - Authentication] documentation to understand the prerequisites for the relevant authentication mechanism. The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or non-secure network (for example, LDAPS or LDAP over TLS, HTTPS for OAuth2 and SAML providers, etc.). - -In {ControllerName}, any “system administrator” account can edit, change, and update any inventory or automation definition. Restrict these account privileges to the minimum set of users possible for low-level {ControllerName} configuration and disaster recovery. - - - diff --git a/downstream/modules/aap-hardening/ref-complex-patching-scenarios.adoc b/downstream/modules/aap-hardening/ref-complex-patching-scenarios.adoc new file mode 100644 index 000000000..9c12df076 --- /dev/null +++ b/downstream/modules/aap-hardening/ref-complex-patching-scenarios.adoc @@ -0,0 +1,21 @@ +[id="ref-complex-patching-scenarios"] + += Complex patching scenarios + +In {PlatformNameShort}, multiple automation jobs can be chained together into workflows, which can be used to coordinate multiple steps in a complex patching scenario. + +The following example complex patching scenario demonstrates taking virtual machine snapshots, patching the virtual machines, and creating tickets when an error is encountered in the workflow. + +. Run a project sync to ensure the latest playbooks are available. In parallel, run an inventory sync to make sure the latest list of target hosts is available. +. Take a snapshot of each target host. +.. If the snapshot task fails, submit a ticket with the relevant information. +. Patch each of the target hosts. +.. If the patching task fails, restore the snapshot and submit a ticket with the relevant information. +. Delete each snapshot where the patching task was successful. + +The following workflow visualization shows how the components of the example complex patching scenario are executed: + +image::workflow.png[Workflow representation] + +.Additional resources +* For more information on workflows, see link:{URLControllerUserGuide}/controller-workflows[Workflows in automation controller].
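+
+The patching step in the preceding scenario can be an ordinary playbook run by one workflow node. The following is a minimal sketch only: the inventory group `target_hosts` is a placeholder, and the snapshot revert and ticket creation steps are intentionally left to separate workflow nodes on the failure path (for example, by using your hypervisor and ITSM collections).
+
+----
+- name: Patch target hosts (example workflow node)
+  hosts: target_hosts
+  become: true
+
+  tasks:
+    - name: Apply updates and surface failures to the workflow
+      block:
+        - name: Install all available RPM updates
+          ansible.builtin.dnf:
+            name: '*'
+            state: latest
+      rescue:
+        - name: Fail the host so the workflow runs the revert and ticketing nodes
+          ansible.builtin.fail:
+            msg: "Patching failed on {{ inventory_hostname }}; revert the snapshot and open a ticket."
+----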
\ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc b/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc index 7690611d7..e0b3ff793 100644 --- a/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc +++ b/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc @@ -7,22 +7,6 @@ [role="_abstract"] -When using a load balancer with {PlatformNameShort} as described in the reference architecture, an additional FQDN is needed for each load-balanced component ({ControllerName} and {PrivateHubName}). +When using a load balancer with {PlatformNameShort} as described in the deployment topology, an additional FQDN is needed for the load balancer. +For example, an FQDN such as `aap.example.com` might be used for the load balancer, which in turn directs traffic to each of the {GatewayStart} components defined in the installation inventory. -For example, if the following hosts are defined in the {PlatformNameShort} installer inventory file: - ------ -[automationcontroller] -controller0.example.com -controller1.example.com -controller2.example.com - -[automationhub] -hub0.example.com -hub1.example.com -hub2.example.com ------ - -Then the load balancer can use the FQDNs `controller.example.com` and `hub.example.com` for the user-facing name of these {PlatformNameShort} services. - -When a load balancer is used in front of the {PrivateHubName}, the installer must be aware of the load balancer FQDN. Before installing {PlatformNameShort}, in the installation inventory file set the `automationhub_main_url` variable to the FQDN of the load balancer. For example, to match the previous example, you would set the variable to `automationhub_main_url = hub.example.com`. diff --git a/downstream/modules/aap-hardening/ref-dns.adoc b/downstream/modules/aap-hardening/ref-dns.adoc index e256ae079..0392eb8a6 100644 --- a/downstream/modules/aap-hardening/ref-dns.adoc +++ b/downstream/modules/aap-hardening/ref-dns.adoc @@ -5,4 +5,5 @@ [role="_abstract"] -When installing {PlatformNameShort}, the installer script checks that certain infrastructure servers are defined with a Fully Qualified Domain Name (FQDN) in the installer inventory. This guide recommends that all {PlatformNameShort} infrastructure nodes have a valid FQDN defined in DNS which resolves to a routable IP address, and that these FQDNs be used in the installer inventory file. \ No newline at end of file +When installing {PlatformNameShort}, the installer script checks that certain infrastructure servers are defined with a _Fully Qualified Domain Name_ (FQDN) in the installer inventory. +This guide recommends that all {PlatformNameShort} infrastructure nodes have a valid FQDN defined in DNS which resolves to a routable IP address, and that these FQDNs be used in the installer inventory file. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc b/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc index 1b8f70012..7aab7c1b5 100644 --- a/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc +++ b/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc @@ -3,11 +3,13 @@ [id="ref-infrastructure-as-code_{context}"] -= Use infrastructure as code paradigm += Use a configuration as code paradigm [role="_abstract"] -The Red Hat Community of Practice has created a set of automation content available via collections to manage {PlatformNameShort} infrastructure and configuration as code.
This enables automation of the platform itself through Infrastructure as Code (IaC) or Configuration as Code (CaC). While many of the benefits of this approach are clear, there are critical security implications to consider. +The Red Hat Community of Practice has created a set of automation content available through collections to manage {PlatformNameShort} infrastructure and configuration as code. +This enables automation of the platform itself through _Configuration as Code_ (CaC). +While many of the benefits of this approach are clear, there are critical security implications to consider. The following Ansible content collections are available for managing {PlatformNa |=== | *Validated Collection* | *Collection Purpose* | `infra.aap_utilities` | Ansible content for automating day 1 and day 2 operations of {PlatformNameShort}, including installation, backup and restore, certificate management, and more. - +// this will change to infra.aap_configuration | `infra.controller_configuration` | A collection of roles to manage {ControllerName} components, including managing users and groups (RBAC), projects, job templates and workflows, credentials, and more. | `infra.ah_configuration` | Ansible content for interacting with {HubName}, including users and groups (RBAC), collection upload and management, collection approval, managing the {ExecEnvShort} image registry, and more. | `infra.ee_utilities` | A collection of roles for creating and managing {ExecEnvShort} images, or migrating from the older Tower virtualenvs to execution environments. + +| `infra.eda_configuration` | A collection of roles to manage {EDAcontroller} components, including projects, decision environments, and rulebook activations. |=== Many organizations use CI/CD platforms to configure pipelines or other methods to manage this type of infrastructure. However, using {PlatformNameShort} natively, a webhook can be configured to link a Git-based repository. In this way, Ansible can respond to git events to trigger Job Templates directly. This removes the need for external CI components from this overall process and thus reduces the attack surface. -These practices allow version control of all infrastructure and configuration. Apply Git best practices to ensure proper code quality inspection prior to being synchronized into {PlatformNameShort}. Relevant Git best practices include the following: +These practices enable version control of all infrastructure and configuration. +Apply Git best practices to ensure proper code quality inspection before content is synchronized into {PlatformNameShort}. Relevant Git best practices include the following: * Creating pull requests. * Ensuring that inspection tools are in place. * Ensuring that no plain text secrets are committed. * Ensuring that pre-commit hooks and any other policies are followed. -IaC also encourages using external vault systems which removes the need to store any sensitive data in the repository, or deal with having to individually vault files as needed. For more information on using external vault systems, see section xref:con-external-credential-vault_{context}[2.3.2.3 External credential vault considerations] within this guide. +CaC also encourages using external vault systems, which removes the need to store any sensitive data in the repository or to individually vault files as needed.
For more information on using external vault systems, see section xref:con-external-credential-vault_{context}[2.3.2.3 External credential vault considerations] within this guide. diff --git a/downstream/modules/aap-hardening/ref-initial-configuration.adoc b/downstream/modules/aap-hardening/ref-initial-configuration.adoc index 697b7fa70..9323348c1 100644 --- a/downstream/modules/aap-hardening/ref-initial-configuration.adoc +++ b/downstream/modules/aap-hardening/ref-initial-configuration.adoc @@ -7,9 +7,16 @@ [role="_abstract"] -Granting access to certain parts of the system exposes security vulnerabilities. Apply the following practices to help secure access: +Granting access to certain parts of the system exposes security vulnerabilities. +Apply the following practices to help secure access: -* Minimize access to system administrative accounts. There is a difference between the user interface (web interface) and access to the operating system that the {ControllerName} is running on. A system administrator or root user can access, edit, and disrupt any system application. Anyone with root access to the controller has the potential ability to decrypt those credentials, and so minimizing access to system administrative accounts is crucial for maintaining a secure system. -* Minimize local system access. {ControllerNameStart} should not require local user access except for administrative purposes. Non-administrator users should not have access to the controller system. -* Enforce separation of duties. Different components of automation may need to access a system at different levels. Use different keys or credentials for each component so that the effect of any one key or credential vulnerability is minimized. -* Restrict {ControllerName} to the minimum set of users possible for low-level controller configuration and disaster recovery only. In a controller context, any controller ‘system administrator’ or ‘superuser’ account can edit, change, and update any inventory or automation definition in the controller. \ No newline at end of file +* Minimize access to system administrative accounts. +There is a difference between the user interface (web interface) and access to the operating system that the {ControllerName} is running on. +A system administrator or root user can access, edit, and disrupt any system application. +Anyone with root access to the controller has the potential ability to decrypt those credentials, and so minimizing access to system administrative accounts is crucial for maintaining a secure system. +* Minimize local system access. {ControllerNameStart} should not require local user access except for administrative purposes. +Non-administrator users should not have access to the controller system. +* Enforce separation of duties. +Different components of automation may need to access a system at different levels. +Use different keys or credentials for each component so that the effect of any one key or credential vulnerability is minimized. +* Restrict {ControllerName} to the minimum set of users possible for low-level {ControllerName} configuration and disaster recovery only. In an {ControllerName} context, any {ControllerName} ‘system administrator’ or ‘superuser’ account can edit, change, and update any inventory or automation definition in {ControllerName}.
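+
+To help audit the last recommendation, the following minimal sketch lists which accounts currently hold the superuser flag by querying the {ControllerName} API. It is an example only: `https://controller.example.com` is a placeholder, the token is read from the `CONTROLLER_OAUTH_TOKEN` environment variable, and the API path can differ when {ControllerName} is accessed through the {GatewayStart}.
+
+----
+- name: Review which accounts hold the system administrator role
+  hosts: localhost
+  gather_facts: false
+
+  vars:
+    controller_host: https://controller.example.com  # placeholder controller URL
+
+  tasks:
+    - name: List superuser accounts through the controller API
+      ansible.builtin.uri:
+        url: "{{ controller_host }}/api/v2/users/?is_superuser=true"
+        headers:
+          Authorization: "Bearer {{ lookup('ansible.builtin.env', 'CONTROLLER_OAUTH_TOKEN') }}"
+        return_content: true
+      register: superusers
+
+    - name: Show the superuser account names for review
+      ansible.builtin.debug:
+        msg: "{{ superusers.json.results | map(attribute='username') | list }}"
+----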
\ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-install-security-updates.adoc b/downstream/modules/aap-hardening/ref-install-security-updates.adoc new file mode 100644 index 000000000..87f2e3a92 --- /dev/null +++ b/downstream/modules/aap-hardening/ref-install-security-updates.adoc @@ -0,0 +1,18 @@ +[id="ref-install-security-updates"] + += Installing security updates only + +For organizations with a policy requiring that all RPMs with security errata be kept up to date, the following playbook might be used in a regularly scheduled job template. + +---- +- name: Install all security-related RPM updates + hosts: target_hosts + become: true + + tasks: + - name: Install latest RPMs with security errata + ansible.builtin.dnf: + name: '*' + security: true + state: latest +---- \ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-keep-up-to-date.adoc b/downstream/modules/aap-hardening/ref-keep-up-to-date.adoc new file mode 100644 index 000000000..dcd1c6aeb --- /dev/null +++ b/downstream/modules/aap-hardening/ref-keep-up-to-date.adoc @@ -0,0 +1,18 @@ +[id="ref-keep-up-to-date"] + += Keeping everything up to date + +For some {RHEL} servers, such as a lab or other non-production systems, you might want to install all available patches on a regular cadence. +The following example playbook might be used in a job template that is scheduled to run weekly, and updates the system with all of the latest RPMs. + +---- +- name: Install all available RPM updates + hosts: target_hosts + become: true + + tasks: + - name: Install latest RPMs + ansible.builtin.dnf: + name: '*' + state: latest +---- \ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc b/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc index ee96dc49a..de18da3d9 100644 --- a/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc +++ b/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc @@ -7,29 +7,36 @@ [role="_abstract"] -The installation inventory file defines the architecture of the {PlatformNameShort} infrastructure, and provides a number of variables that can be used to modify the initial configuration of the infrastructure components. For more information on the installer inventory, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#proc-editing-installer-inventory-file_platform-install-scenario[Ansible Automation Platform Installation Guide]. +The installation inventory file defines the architecture of the {PlatformNameShort} infrastructure, and provides a number of variables that can be used to modify the initial configuration of the infrastructure components. For more information on the installer inventory, see link:{LinkInstallationGuide}. The following table lists a number of security-relevant variables and their recommended values for creating the installation inventory. .Security-relevant inventory variables +[cols="25%,25%,25%,25%",options="header"] |=== -| *Variable* | *Recommended Value* | *Details* -| `postgres_use_ssl` | true | The installer configures the installer-managed Postgres database to accept SSL-based connections when this variable is set.
+| *RPM Variable* | *Containerized Variable* | *Recommended Value* | *Details* +| `postgres_use_ssl` | `postgre_disable_tls` | true | Determines if the connection between {PlatformNameShort} and the PostgreSQL database should use SSL/TLS. -| `pg_sslmode` | verify-full | By default, when the controller connects to the database, it tries an encrypted connection, but it is not enforced. Setting this variable to "verify-full" requires a mutual TLS negotiation between the controller and the database. The `postgres_use_ssl` variable must also be set to "true" for this `pg_sslmode` to be effective. +The default for this variable is false, which means SSL/TLS is not used for PostgreSQL connections. + +When set to true, the platform connects to PostgreSQL by using SSL/TLS. + +| `pg_sslmode` | | verify-full | By default, when the controller connects to the database, it tries an encrypted connection, but it is not enforced. Setting this variable to "verify-full" requires a mutual TLS negotiation between the controller and the database. The `postgres_use_ssl` variable must also be set to "true" for this `pg_sslmode` to be effective. *NOTE*: If a third-party database is used instead of the installer-managed database, the third-party database must be set up independently to accept mTLS connections. -| `nginx_disable_https` | false | If set to "true", this variable disables HTTPS connections to the controller. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". +| `nginx_disable_https` | `controller_nginx_disable_https` | false | If set to "true", this variable disables HTTPS connections to the controller. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". -| `automationhub_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {PrivateHubName}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". +| `automationhub_disable_https` | `hub_nginx_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {PrivateHubName}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". -| `automationedacontroller_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {EDAcontroller}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". +| `automationedacontroller_disable_https` | `eda_nginx_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {EDAcontroller}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false". |=== In scenarios such as the reference architecture where a load balancer is used with multiple controllers or hubs, SSL client connections can be terminated at the load balancer or passed through to the individual {PlatformNameShort} servers.
If SSL is being terminated at the load balancer, this guide recommends that the traffic gets re-encrypted from the load balancer to the individual {PlatformNameShort} servers, to ensure that end-to-end encryption is in use. In this scenario, the `*_disable_https` variables listed in Table 2.3 would remain the default value of "false". [NOTE] ==== -This guide recommends using an external database in production environments, but for development and testing scenarios the database could be co-located on the {ControllerName}. Due to current PostgreSQL 13 limitations, setting `pg_sslmode = verify-full` when the database is co-located on the {ControllerName} results in an error validating the host name during TLS negotiation. Until this issue is resolved, an external database must be used to ensure mutual TLS authentication between the {ControllerName} and the database. +This guide recommends using an external database in production environments, but for development and testing scenarios the database could be co-located on {ControllerName}. +Due to current PostgreSQL 15 limitations, setting `pg_sslmode = verify-full` when the database is co-located on {ControllerName} results in an error validating the host name during TLS negotiation. +Until this issue is resolved, an external database must be used to ensure mutual TLS authentication between {ControllerName} and the database. ==== diff --git a/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc b/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc index fa6cdf009..6fe1a1a6c 100644 --- a/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc +++ b/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc @@ -12,18 +12,23 @@ The installation inventory file contains a number of sensitive variables, mainly * `cd /path/to/ansible-automation-platform-setup-bundle-2.5-1-x86_64` * `ansible-vault create vault.yml` -You will be prompted for a password to the new Ansible vault. Do not lose the vault password because it is required every time you need to access the vault file, including during day-two operations and performing backup procedures. You can secure the vault password by storing it in an encrypted password manager or in accordance with your organizational policy for storing passwords securely. +You are prompted for a password to the new Ansible vault. +Do not lose the vault password because it is required every time you need to access the vault file, including during day-two operations and performing backup procedures. +You can secure the vault password by storing it in an encrypted password manager or in accordance with your organizational policy for storing passwords securely. Add the sensitive variables to the vault, for example: +//Added containerized variables RPM/containerized: + ---- -admin_password: -pg_password: -automationhub_admin_password: -automationhub_pg_password: -automationhub_ldap_bind_password: -automationedacontroller_admin_password: -automationedacontroller_pg_password: +admin_password/controller_admin_password: +pg_password/controller_pg_password: +automationhub_admin_password/hub_admin_password: +automationhub_pg_password/hub_pg_password: +automationedacontroller_admin_password/eda_admin_password: +automationedacontroller_pg_password/eda_pg_password: +/gateway_admin_password: +/gateway_pg_password: ---- Make sure these variables are not also present in the installation inventory file. 
To use the new Ansible vault with the installer, run it with the command `./setup.sh -e @vault.yml -- --ask-vault-pass`. diff --git a/downstream/modules/aap-hardening/ref-specify-package-versions.adoc b/downstream/modules/aap-hardening/ref-specify-package-versions.adoc new file mode 100644 index 000000000..416d387aa --- /dev/null +++ b/downstream/modules/aap-hardening/ref-specify-package-versions.adoc @@ -0,0 +1,42 @@ +[id="ref-specify-package-versions"] + += Specifying package versions + +For production systems, a well-established configuration management practice is to deploy only known, tested combinations of software to ensure that systems are configured correctly and perform as expected. +This includes deploying only known versions of operating system software and patches to ensure that system updates do not introduce problems with production applications. + +[NOTE] +==== +The following example playbook installs a specific version of the `httpd` RPM and its dependencies when the target host uses the RHEL 9 operating system. +This playbook does not take action if the specified versions are already in place or if a different version of RHEL is installed. +==== +---- +- name: Install specific RPM versions + hosts: target_hosts + gather_facts: true + become: true + + vars: + httpd_packages_rhel9: + - httpd-2.4.53-11.el9_2.5 + - httpd-core-2.4.53-11.el9_2.5 + - httpd-filesystem-2.4.53-11.el9_2.5 + - httpd-tools-2.4.53-11.el9_2.5 + - mod_http2-1.15.19-4.el9_2.4 + - mod_lua-2.4.53-11.el9_2.5 + + tasks: + - name: Install httpd and dependencies + ansible.builtin.dnf: + name: '{{ httpd_packages_rhel9 }}' + state: present + allow_downgrade: true + when: + - ansible_distribution == "RedHat" + - ansible_distribution_major_version == "9" +---- + +[NOTE] +==== +By setting `allow_downgrade: true`, if a newer version of any defined package is installed on the system, it is downgraded to the specified version instead. +==== \ No newline at end of file diff --git a/downstream/modules/platform/proc-controller-set-up-logging.adoc b/downstream/modules/platform/proc-controller-set-up-logging.adoc index 0ca081244..c896f0d20 100644 --- a/downstream/modules/platform/proc-controller-set-up-logging.adoc +++ b/downstream/modules/platform/proc-controller-set-up-logging.adoc @@ -1,14 +1,17 @@ [id="proc-controller-set-up-logging"] +ifdef::controller-AG[] = Setting up logging Use the following procedure to set up logging to any of the aggregator types. - +endif::controller-AG[] .Procedure . From the navigation panel, select {MenuSetLogging}. . On the *Logging settings* page, click btn:[Edit]. +ifdef::controller-AG[] + image::logging-settings.png[Logging settings page] + +endif::controller-AG[] . You can configure the following options: * *Logging Aggregator*: Enter the hostname or IP address that you want to send logs to. @@ -21,10 +24,12 @@ However, TCP and UDP connections are determined by the hostname and port number Therefore, in the case of a TCP or UDP connection, supply the port in the specified field. If a URL is entered in the *Logging Aggregator* field instead, its hostname portion is extracted as the hostname. ==== ++ * *Logging Aggregator Type*: Click to select the aggregator service from the list: +ifdef::controller-AG[] + image:configure-controller-system-logging-types.png[Logging types] - +endif::controller-AG[] * *Logging Aggregator Username*: Enter the username of the logging aggregator if required. * *Logging Aggregator Password/Token*: Enter the password of the logging aggregator if required. 
* *Loggers to Send Data to the Log Aggregator Form*: All four types of data are pre-populated by default. @@ -45,17 +50,22 @@ Equivalent to the `rsyslogd queue.maxdiskspace` setting on the action (e.g. `omh It stores files in the directory specified by `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH`. * *File system location for rsyslogd disk persistence*: Location to persist logs that should be retried after an outage of the external log aggregator (defaults to `/var/lib/awx`). Equivalent to the `rsyslogd queue.spoolDirectory` setting. +ifdef::controller-AG[] * *Log Format For API 4XX Errors*: Configure a specific error message. For more information, see xref:proc-controller-api-4xx-error-config[API 4XX Error Configuration]. - +endif::controller-AG[] +ifdef::hardening[] +* *Log Format For API 4XX Errors*: Configure a specific error message. For more information, see link:{URLControllerAdminGuide}/assembly-controller-logging-aggregation#proc-controller-api-4xx-error-config[API 4XX Error Configuration]. +endif::hardening[] Set the following options: * *Log System Tracking Facts Individually*: Click the tooltip image:question_circle.png[Help,15,15] icon for additional information, such as whether or not you want to turn it on, or leave it off by default. . Review your entries for your chosen logging aggregation. +ifdef::controller-AG[] The following example is set up for Splunk: + image:configure-controller-system-logging-splunk-example.png[Splunk logging example] - +endif::controller-AG[] * *Enable External Logging*: Select this checkbox if you want to send logs to an external log aggregator. * *Enable/disable HTTPS certificate verification*: Certificate verification is enabled by default for the HTTPS log protocol. Select this checkbox if you want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. diff --git a/downstream/titles/aap-hardening/master.adoc b/downstream/titles/aap-hardening/master.adoc index 584183ec4..611624c52 100644 --- a/downstream/titles/aap-hardening/master.adoc +++ b/downstream/titles/aap-hardening/master.adoc @@ -3,6 +3,7 @@ :toclevels: 1 :experimental: +:hardening: include::attributes/attributes.adoc[] @@ -15,5 +16,5 @@ This guide provides recommended practices for various processes needed to instal include::{Boilerplate}[] include::aap-hardening/assembly-intro-to-aap-hardening.adoc[leveloffset=+1] include::aap-hardening/assembly-hardening-aap.adoc[leveloffset=+1] -// include::aap-hardening/assembly-aap-compliance.adoc[leveloffset=+1] -// include::aap-hardening/assembly-aap-security-enabling.adoc[leveloffset=+1] +//include::aap-hardening/assembly-aap-compliance.adoc[leveloffset=+1] +include::aap-hardening/assembly-aap-security-use-cases.adoc[leveloffset=+1] diff --git a/downstream/titles/aap-hardening/platform b/downstream/titles/aap-hardening/platform new file mode 120000 index 000000000..9a3ca429a --- /dev/null +++ b/downstream/titles/aap-hardening/platform @@ -0,0 +1 @@ +../../modules/platform/ \ No newline at end of file