ci: support cifs and azurite on pytest and e2e #2155
base: master
Conversation
longhorn/longhorn-9699 Signed-off-by: Chris <[email protected]>
Walkthrough

The changes in this pull request enhance the deployment and management of backup store servers by adding support for CIFS and Azurite alongside the existing options, NFS and Minio. Updates include modifications to documentation, Kubernetes configurations, and testing scripts to accommodate the new backup store types. New functions and error-handling improvements were also introduced to integrate these backup stores into the existing system.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (8)
pipelines/utilities/install_backupstores.sh (2)
4-4: Consider version pinning instead of using the master branch.

While the URL follows the established pattern, pointing to the master branch could lead to instability if the upstream YAML changes unexpectedly.
Consider using a specific release tag or commit hash in the URL for better stability:

```diff
- CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
+ CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/backupstores/cifs-backupstore.yaml"
```
6-8: Add error handling for kubectl operations.

The kubectl create command looks good, but consider adding error handling to ensure all resources are created successfully.
Consider adding error checking:

```diff
-kubectl create -f ${MINIO_BACKUPSTORE_URL} \
-  -f ${NFS_BACKUPSTORE_URL} \
-  -f ${CIFS_BACKUPSTORE_URL}
+if ! kubectl create -f ${MINIO_BACKUPSTORE_URL} \
+  -f ${NFS_BACKUPSTORE_URL} \
+  -f ${CIFS_BACKUPSTORE_URL}; then
+  echo "Failed to create backup stores"
+  return 1
+fi
```
manager/integration/README.md (1)
Line range hint 1-23: Add CIFS prerequisites to the requirements section

While the NFS prerequisites are documented (point 6 mentions `nfs-common`), there's no mention of CIFS client requirements. Please add a new requirement point specifying any necessary CIFS client packages or configurations needed on the nodes.
Example addition:

```diff
 6. Make sure `nfs-common` or equivalent has been installed on the node to allow the NFS client to work.
+7. Make sure `cifs-utils` or equivalent has been installed on the node to allow the CIFS client to work.
```
🧰 Tools
🪛 Markdownlint
20-20: Fenced code blocks should have a language specified (MD040, fenced-code-language)
26-26: Fenced code blocks should have a language specified (MD040, fenced-code-language)
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1)
Line range hint 1-3: Improve error handling strategy.

The script currently uses `set -e` but then disables it with `set +e` for the k3s installation. This mixed approach could lead to silent failures in critical setup steps. Consider implementing proper error handling throughout the script.
Consider this more robust approach:

```diff
 #!/bin/bash
-set -e
+set -euo pipefail
+
+# Error handling function
+handle_error() {
+  local exit_code=$?
+  local line_number=$1
+  echo "Error occurred in line ${line_number}, exit code: ${exit_code}"
+  exit $exit_code
+}
+
+trap 'handle_error ${LINENO}' ERR
```
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1)
8-8: Consider adding basic Samba configuration for CIFS.

Since this script is preparing the node for CIFS support, it might be beneficial to include basic Samba configuration to ensure it's ready for use.
Consider adding these configurations after the package installation (using `tee`, since `sudo cat > file` would apply the redirection without root privileges):

```bash
# Basic Samba configuration for the CIFS client
sudo tee /etc/samba/smb.conf > /dev/null <<EOF
[global]
workgroup = WORKGROUP
client min protocol = SMB2
client max protocol = SMB3
security = user
EOF

# Ensure the Samba service is enabled and started
sudo systemctl enable --now smb
```
e2e/README.md (1)
20-21: Consider enhancing CIFS-related documentation.

While the CIFS backupstore deployment command has been added, the documentation would benefit from:
- A brief explanation of CIFS and its purpose
- Any prerequisites or configuration needed for CIFS
- Updates to the environment variables section if CIFS requires specific configuration
Example addition:

```diff
 1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
+   - Note: CIFS (Common Internet File System) backup store provides Windows-compatible file sharing support.
+   - CIFS Prerequisites:
+     - Ensure SMB/CIFS server is accessible from the cluster
+     - Configure necessary credentials if required
```
manager/integration/tests/backupstore.py (1)
122-132: Add documentation and improve error handling.

The function implementation is correct but could benefit from the following improvements:
- Add a docstring explaining the function's purpose and parameters
- Add error handling for cases where no CIFS backupstore is found
- Validate the backupsettings array after the split
- Add comments explaining the expected format of the backupstore string
Here's a suggested improvement:

```diff
 def set_backupstore_cifs(client):
+    """Configure the backup store for CIFS.
+
+    Args:
+        client: The Longhorn client instance
+
+    Expected backupstore format: 'cifs://server/share$credential'
+
+    Raises:
+        ValueError: If no valid CIFS backupstore is found
+    """
     backupstores = get_backupstore_url()
     poll_interval = get_backupstore_poll_interval()
+    cifs_found = False
     for backupstore in backupstores:
         if is_backupTarget_cifs(backupstore):
             backupsettings = backupstore.split("$")
+            if len(backupsettings) != 2:
+                raise ValueError(f"Invalid CIFS backupstore format: {backupstore}")
             set_backupstore_url(client, backupsettings[0])
             set_backupstore_credential_secret(client, backupsettings[1])
             set_backupstore_poll_interval(client, poll_interval)
+            cifs_found = True
             break
+    if not cifs_found:
+        raise ValueError("No valid CIFS backupstore found in configuration")
```
manager/integration/tests/common.py (1)
3898-3900: Consider renaming function to follow Python naming conventions

The function name should use snake_case instead of camelCase to be consistent with PEP 8 Python naming conventions. Consider renaming to `is_backup_target_cifs`.
```diff
-def is_backupTarget_cifs(s):
+def is_backup_target_cifs(s):
     return s.startswith("cifs://")
```
The implementation is correct and follows the same pattern as other backup target validation functions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (12)
- e2e/README.md (1 hunks)
- manager/integration/README.md (1 hunks)
- manager/integration/deploy/test.yaml (1 hunks)
- manager/integration/tests/backupstore.py (3 hunks)
- manager/integration/tests/common.py (1 hunks)
- pipelines/gke/scripts/longhorn-setup.sh (3 hunks)
- pipelines/utilities/install_backupstores.sh (1 hunks)
- pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
- pipelines/utilities/run_longhorn_test.sh (2 hunks)
- test_framework/scripts/longhorn-setup.sh (3 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
🔇 Additional comments (16)
manager/integration/README.md (1)
22-23: LGTM: CIFS backupstore deployment added correctly

The addition of the CIFS backupstore deployment command is consistent with the existing structure and properly uses continuation characters.
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1)
Line range hint 27-44: Consider adding CIFS-specific infrastructure setup.

For complete CIFS support in the CI environment, consider adding the following (a sketch follows this list):
- Directory structure for CIFS mounts
- Network connectivity verification for CIFS ports (TCP 445)
- SELinux/AppArmor configurations if applicable
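As a rough sketch, a node-level preflight along these lines could be dropped into the provisioning script. The service DNS name comes from this PR's test manifest; the mount directory and the `nc`/`modprobe` probes are illustrative assumptions, not existing repo code:

```bash
#!/bin/bash
# Hypothetical CIFS preflight for a CI node; not part of this PR.

# Directory structure for CIFS mounts (path is an arbitrary example)
sudo mkdir -p /mnt/cifs-backupstore

# mount.cifs is required by the kernel CIFS client (provided by cifs-utils)
if ! command -v mount.cifs >/dev/null 2>&1; then
  echo "ERROR: mount.cifs not found; install cifs-utils" >&2
  exit 1
fi

# Check that the cifs kernel module can be loaded (dry run only)
modprobe -n cifs || echo "WARN: cifs kernel module unavailable" >&2

# SMB port reachability to the in-cluster backupstore service (TCP 445)
nc -z -w 5 longhorn-test-cifs-svc.default 445 \
  || echo "WARN: TCP 445 not reachable from this node" >&2
```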
Let's verify if other CI configuration files handle CIFS setup:
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1)
8-8: Verify CIFS mount capability in the CI environment.

Since this is for CI testing with CIFS, we should verify that the system can mount CIFS shares after setup.
✅ Verification successful
CIFS mount capability is properly set up in the CI environment
The verification shows that CIFS support is properly configured in the CI environment:
- The CIFS package (samba) is being installed in the provisioning script
- The CIFS backupstore is configured in the test environment with proper mount points and secrets
- The installation script includes CIFS backupstore deployment alongside other backup stores
- The test configuration includes CIFS backupstore URL in the environment variables for testing
The system has all the necessary components to mount and use CIFS shares during CI testing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify CIFS capabilities are properly set up
# Test 1: Check if required packages are installed
echo "Checking required packages..."
rg -l "BACKUP_STORE_TYPE.*CIFS" | while read -r file; do
echo "Found CIFS configuration in: $file"
done
# Test 2: Check for mount.cifs capability
if ! which mount.cifs >/dev/null 2>&1; then
echo "ERROR: mount.cifs not found"
exit 1
fi
# Test 3: Check for proper kernel module
if ! lsmod | grep -q '^cifs\s'; then
echo "WARNING: CIFS kernel module not loaded"
fi
Length of output: 210
Script:
#!/bin/bash
# Let's check if samba package is being installed in other similar provisioning scripts
echo "Checking for samba package installation across provisioning scripts..."
rg -l "zypper.*install.*samba" --type sh
# Check for CIFS related configurations or mount commands
echo -e "\nChecking for CIFS/SMB related configurations..."
rg "mount.*cifs|mount.*smb" --type sh
# Check for any CIFS/SMB related test cases
echo -e "\nChecking for CIFS/SMB related test files..."
fd -e test.js -e test.sh -e spec.js -e spec.sh | xargs rg "cifs|smb|samba" -l
Length of output: 1055
Script:
#!/bin/bash
# Let's examine the test files that mention CIFS/SMB to understand the testing requirements
echo "Checking test files content..."
cat manager/integration/tests/backupstore.py manager/integration/tests/common.py
# Also check the installation script to see how CIFS is being handled
echo -e "\nChecking installation script..."
cat pipelines/utilities/install_backupstores.sh
# Check the test deployment configuration
echo -e "\nChecking test deployment configuration..."
cat manager/integration/deploy/test.yaml
Length of output: 238188
manager/integration/deploy/test.yaml (3)
Line range hint 18-44: LGTM: Pod configuration is appropriate for testing purposes.
The pod specification includes necessary configurations for storage testing:
- Privileged containers for storage operations
- Required volume mounts
- Test reporting container
Also applies to: 46-106
45-45: Verify CIFS dependencies and configuration.

The CIFS backup store configuration requires several prerequisites:
- The service `longhorn-test-cifs-svc.default` must be deployed
- The `cifs-secret` must be created with appropriate credentials
- The CIFS URL format should be validated

Let's verify these dependencies:
✅ Verification successful

Let me gather more information about the CIFS backupstore configuration by checking the referenced deployment files.

CIFS backup store configuration is properly defined and supported

The verification shows that all CIFS dependencies are properly configured:
- The `longhorn-test-cifs-svc` service is defined in the CIFS backupstore deployment manifest with the correct namespace and ports (139, 445)
- The `cifs-secret` is created in both the `longhorn-system` and `default` namespaces with the required CIFS credentials
- The CIFS URL format `cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret` matches the deployment configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for required CIFS-related resources
echo "Checking for CIFS service definition..."
rg -g '*.yaml' -g '*.yml' 'longhorn-test-cifs-svc'
echo "Checking for CIFS secret creation..."
rg -g '*.yaml' -g '*.yml' -g '*.sh' 'cifs-secret'
echo "Checking for CIFS mount documentation or examples..."
rg -g '*.md' 'cifs|CIFS' -C 2
Length of output: 1997
Script:
#!/bin/bash
# Check the content of CIFS backupstore deployment file and related configurations
echo "Checking CIFS backupstore deployment file..."
curl -s https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml | cat -
echo -e "\nChecking for CIFS test setup in integration tests..."
rg -g '*.go' 'cifs|CIFS' -C 2
echo -e "\nChecking for CIFS secret creation in test setup..."
fd -e yaml -e yml -e sh setup test | xargs rg 'cifs-secret'
Length of output: 3241
Line range hint 1-17: Verify if cluster-admin permissions are necessary for testing.
While cluster-admin access might be required for comprehensive testing, consider creating a custom role with only the necessary permissions following the principle of least privilege. This would better reflect real-world deployment scenarios where users typically don't have cluster-admin access.
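For illustration only, a narrower grant might look like the following sketch; the verb and resource lists and the `longhorn-test-service-account` name are placeholders, not derived from the test suite's actual API usage:

```bash
# Hypothetical least-privilege alternative to cluster-admin (placeholders only)
kubectl create clusterrole longhorn-test-role \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=pods,persistentvolumes,persistentvolumeclaims,statefulsets

kubectl create clusterrolebinding longhorn-test-binding \
  --clusterrole=longhorn-test-role \
  --serviceaccount=default:longhorn-test-service-account
```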
✅ Verification successful
Cluster-admin role is justified for integration tests
Based on the codebase analysis, the cluster-admin role is necessary for the test service account. The test suite performs extensive operations requiring broad permissions:
- Direct Kubernetes API access across multiple resource types (Pods, PVs, PVCs, StatefulSets, etc.)
- Custom Resource operations for Longhorn resources
- Node-level operations and monitoring
- Cross-namespace operations for testing various scenarios
- Cluster-wide operations for backup, HA, and infrastructure testing
The test environment specifically requires these elevated permissions to validate Longhorn's functionality comprehensively. This is a standard practice for integration testing where the test suite needs to manipulate and verify cluster-wide resources.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Analyze the test code to identify required Kubernetes permissions
# Search for Kubernetes API calls in test files to understand permission requirements
rg -g '*.py' -B 2 -A 2 'api\..*\.create|api\..*\.delete|api\..*\.patch|api\..*\.get|api\..*\.list|api\..*\.watch'
Length of output: 115
Script:
#!/bin/bash
# Let's try a different approach to find Kubernetes API usage patterns
# Search for kubernetes imports and client usage in Python files
rg -g '*.py' -B 2 -A 2 'from kubernetes import|kubernetes\.client|core_v1_api|apps_v1_api|custom_objects_api'
# Also search for specific test files that might contain Kubernetes operations
fd -g '*test*.py' --exec grep -l 'kubernetes\|K8s\|k8s'
Length of output: 16445
e2e/README.md (1)
20-21: LGTM! Command structure is well-formatted.
The addition of the CIFS backupstore deployment maintains consistent formatting with the existing command structure.
pipelines/utilities/run_longhorn_e2e_test.sh (2)
3-3: Verify CIFS dependencies are documented.

The CIFS backup store configuration assumes the existence of:
- A Kubernetes service `longhorn-test-cifs-svc` in the `default` namespace
- A Kubernetes secret `cifs-secret` for authentication
Let's verify if these dependencies are documented:
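A minimal sketch of such a check, mirroring the `rg`-based verification scripts used elsewhere in this review (the search patterns are assumptions):

```bash
#!/bin/bash
# Search docs and manifests for the CIFS service and secret the script relies on
echo "Service references:"
rg -g '*.md' -g '*.yaml' 'longhorn-test-cifs-svc'

echo "Secret references:"
rg -g '*.md' -g '*.yaml' -g '*.sh' 'cifs-secret'
```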
81-82: Verify CIFS authentication for out-of-cluster testing.
While the CIFS backup store is configured, there don't appear to be any environment variables passed to the Docker container for CIFS authentication (unlike AWS credentials for S3).
Let's check if CIFS authentication is handled elsewhere:
Consider adding CIFS-specific environment variables to the Docker run command for authentication, similar to how AWS credentials are handled for S3:

```diff
 docker run --pull=always \
   --network=container:"${IMAGE_NAME}" \
   --name "${CONTAINER_NAME}" \
   -e LONGHORN_BACKUPSTORE="${LONGHORN_BACKUPSTORES}" \
+  -e CIFS_USERNAME="${CIFS_USERNAME}" \
+  -e CIFS_PASSWORD="${CIFS_PASSWORD}" \
   ...
```
pipelines/utilities/run_longhorn_test.sh (1)
27-29: Verify CIFS implementation across related files.

The CIFS implementation looks correct, but let's verify the complete setup:
The CIFS implementation looks correct, but let's verify the complete setup:
Also applies to: 113-115
✅ Verification successful
CIFS implementation is properly integrated across the codebase
The verification shows complete CIFS implementation:
- Test manifest correctly configures CIFS backupstore URL and credentials
- Integration tests have proper CIFS support:
  - `common.py` includes CIFS URL validation
  - `backupstore.py` implements CIFS backupstore configuration
- CIFS backupstore deployment is documented in both e2e and integration test READMEs
- The changes in `run_longhorn_test.sh` align with the existing CIFS infrastructure
The implementation is consistent and complete across all related components.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify CIFS implementation across related files
# Test 1: Check if CIFS is properly configured in the test manifest
echo "Checking test manifest for CIFS configuration..."
yq e '.spec.containers[0].env[] | select(.name == "LONGHORN_BACKUPSTORES")' manager/integration/deploy/test.yaml
# Test 2: Check if CIFS-related functions are implemented in test files
echo "Checking test files for CIFS support..."
rg -l "cifs" manager/integration/tests/
# Test 3: Check if CIFS is documented in README files
echo "Checking documentation for CIFS..."
rg -l "cifs" -g "README.md" .
Length of output: 652
Script:
#!/bin/bash
# Let's check the actual CIFS-related code in the test files and documentation
# Check CIFS implementation in common.py
echo "=== CIFS implementation in common.py ==="
rg -A 5 -B 5 "cifs" manager/integration/tests/common.py
# Check CIFS implementation in backupstore.py
echo "=== CIFS implementation in backupstore.py ==="
rg -A 5 -B 5 "cifs" manager/integration/tests/backupstore.py
# Check CIFS documentation content
echo "=== CIFS documentation in e2e/README.md ==="
rg -A 5 -B 5 "cifs" e2e/README.md
echo "=== CIFS documentation in manager/integration/README.md ==="
rg -A 5 -B 5 "cifs" manager/integration/README.md
# Let's also check the manifest file directly
echo "=== Content of test manifest file ==="
cat manager/integration/deploy/test.yaml
Length of output: 6196
pipelines/gke/scripts/longhorn-setup.sh (3)
144-147: LGTM: CIFS backupstore configuration added correctly.

The CIFS backupstore URL and its integration into the kubectl create command follow the established pattern, consistent with the existing NFS and MinIO configurations.
183-185: LGTM: CIFS support properly integrated in upgrade test configuration.
The CIFS case is correctly implemented, following the same pattern as S3 and NFS, with proper extraction and configuration of the CIFS backup store value.
144-147: Verify CIFS prerequisites in the cluster.

Consider adding checks to verify that CIFS support is properly enabled in the cluster before proceeding with the tests (a sketch follows this list). This could include:
- Verifying CIFS kernel modules are loaded
- Checking if required CIFS utilities are installed
- Validating CIFS mount capabilities
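A hedged sketch of what such a check could look like before the test run; it uses `kubectl debug` (available on Kubernetes 1.20+), and the busybox image and `/proc/filesystems` probe are illustrative choices:

```bash
#!/bin/bash
# Report CIFS filesystem support on every node via an ephemeral debug container.
# /proc/filesystems in the debug pod reflects the node's kernel, so this shows
# whether the cifs filesystem is currently registered (i.e., module loaded).
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo "Checking ${node}..."
  kubectl debug node/"${node}" --image=busybox -- \
    sh -c 'grep -q cifs /proc/filesystems && echo "cifs registered" || echo "cifs module not loaded"'
done
```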
manager/integration/tests/backupstore.py (2)
20-20: LGTM!
The import follows the established pattern for backup target type checks.
68-69: LGTM!
The CIFS condition follows the established pattern and correctly integrates with the existing backup store type handling.
test_framework/scripts/longhorn-setup.sh (1)
336-336: LGTM: CIFS backupstore integration
The CIFS backupstore URL and deployment integration is properly implemented alongside the existing NFS and S3 backup stores.
Also applies to: 338-339
```diff
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
```
Add cifs-utils package for complete CIFS support.

While adding `samba` is necessary, the `cifs-utils` package is also required for mounting CIFS shares. This package provides the essential `mount.cifs` utility.
Apply this diff:

```diff
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba cifs-utils
```
```diff
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
```
🛠️ Refactor suggestion
Add error handling and verification for package installation.

While adding the `samba` package is necessary for CIFS support, consider these improvements:
- Add error handling to ensure the package installation succeeds
- Verify the installation and required services
Consider this more robust implementation:

```diff
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
+# Install required packages
+PACKAGES="open-iscsi nfs-client cryptsetup device-mapper samba"
+if ! sudo zypper install -y $PACKAGES; then
+  echo "Failed to install required packages"
+  exit 1
+fi
+
+# Verify samba installation
+if ! rpm -q samba > /dev/null; then
+  echo "Samba package verification failed"
+  exit 1
+fi
```
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} |
💡 Codebase verification
The yq command updates the wrong environment variable

The script modifies `e2e/deploy/test.yaml`, which uses `LONGHORN_BACKUPSTORE`, but the CIFS modification is targeting the wrong index. The environment variable at index 1 is `LONGHORN_BACKUPSTORE` and should be selected by name rather than index for reliability.

Consider updating the yq command to:

```bash
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
```
🔗 Analysis chain
Verify environment variable ordering in test manifest.
The implementation assumes the backup store environment variable is at index 1 in the container spec. This implicit ordering dependency could be fragile.
Let's verify the environment variable structure:
Let me try a different approach to verify the environment variables structure in the test manifest.
This will help us:
- Find the correct test manifest file location
- Examine its content to verify the environment variable structure
- Determine if the index-based access in the yq command is reliable
Environment variable ordering is safe but variable name differs
The review concern about environment variable ordering can be resolved, but there's a different issue. In both test manifests, the backup store environment variable is indeed at index 1, but the variable names are different:
- `e2e/deploy/test.yaml` uses `LONGHORN_BACKUPSTORE`
- `manager/integration/deploy/test.yaml` uses `LONGHORN_BACKUPSTORES`
Let's verify which manifest file is actually being used in the script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the environment variables structure in the test manifest
echo "Checking environment variables in test manifest..."
yq e '.spec.containers[0].env' e2e/deploy/test.yaml
Length of output: 263
Script:
#!/bin/bash
# First, let's locate the test manifest file
fd -e yaml -e yml test.yaml
# Then let's examine its content focusing on environment variables
for file in $(fd -e yaml -e yml test.yaml); do
echo "=== Examining $file ==="
cat "$file"
done
Length of output: 7419
Script:
#!/bin/bash
# Check which manifest file is referenced in the script
rg "LONGHORN_TESTS_MANIFEST_FILE_PATH.*=.*test.yaml" pipelines/utilities/run_longhorn_e2e_test.sh
Length of output: 158
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} |
🛠️ Refactor suggestion
Consider extracting backup store logic to a shared function.

The CIFS implementation is identical in both `run_longhorn_test` and `run_longhorn_upgrade_test`. Consider extracting this logic into a shared function to improve maintainability and reduce duplication.
```diff
+get_backup_store_value() {
+  local manifest_file="$1"
+  local store_type="$2"
+  local value_index
+
+  case "${store_type}" in
+    "s3") value_index=1 ;;
+    "nfs") value_index=2 ;;
+    "cifs") value_index=3 ;;
+    *) echo "Error: Unknown backup store type: ${store_type}" >&2; exit 1 ;;
+  esac
+
+  local store_value=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "${manifest_file}" | awk -F ',' '{print $'${value_index}'}' | sed 's/ *//'`
+  if [[ -z "${store_value}" ]]; then
+    echo "Error: ${store_type} backup store value not found in env[1].value" >&2
+    exit 1
+  fi
+  echo "${store_value}"
+}
+
+set_backup_store_value() {
+  local manifest_file="$1"
+  local store_value="$2"
+  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${store_value}'"' "${manifest_file}"
+}

 run_longhorn_test(){
   # ... existing code ...
   if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+    set_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
   fi
   # ... rest of the function ...
 }

 run_longhorn_upgrade_test(){
   # ... existing code ...
   if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+    set_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
   fi
   # ... rest of the function ...
 }
```
Committable suggestion was skipped due to low confidence.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} |
🛠️ Refactor suggestion
Consider adding validation for the CIFS backup store value.

The implementation follows the existing pattern for S3 and NFS, but consider adding validation to ensure the extracted CIFS value exists and is valid. This would prevent silent failures if the third comma-separated value is missing or malformed.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+ if [[ -z "${BACKUP_STORE_FOR_TEST}" ]]; then
+ echo "Error: CIFS backup store value not found in env[1].value" >&2
+ exit 1
+ fi
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} |
🛠️ Refactor suggestion
Consider refactoring duplicated backup store handling logic.

The CIFS handling logic is duplicated between the upgrade test and regular test sections. Consider extracting this into a shared function to improve maintainability.
Example refactor:

```diff
+configure_backup_store() {
+  local manifest_file="$1"
+  local store_type="$2"
+
+  if [[ $store_type = "s3" ]]; then
+    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $1}' | sed 's/ *//'`
+  elif [[ $store_type = "nfs" ]]; then
+    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $2}' | sed 's/ *//'`
+  elif [[ $store_type = "cifs" ]]; then
+    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $3}' | sed 's/ *//'`
+  fi
+
+  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${manifest_file}
+}

 run_longhorn_upgrade_test(){
   # ... existing code ...
-  if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
-  elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
-  elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
-  fi
+  configure_backup_store "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}"
   # ... rest of the function ...
 }

 run_longhorn_tests(){
   # ... existing code ...
-  if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
-  elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
-  elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
-  fi
+  configure_backup_store "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}"
   # ... rest of the function ...
 }
```
Committable suggestion was skipped due to low confidence.
```python
def set_backupstore_cifs(client):
    backupstores = get_backupstore_url()
    poll_interval = get_backupstore_poll_interval()
    for backupstore in backupstores:
        if is_backupTarget_cifs(backupstore):
            backupsettings = backupstore.split("$")
            set_backupstore_url(client, backupsettings[0])
            set_backupstore_credential_secret(client, backupsettings[1])
            set_backupstore_poll_interval(client, poll_interval)
            break
```
💡 Codebase verification
Missing CIFS implementations confirmed for backup store operations
The codebase has several functions that handle backup operations with explicit implementations for S3 and NFS but raise NotImplementedError for other types. While CIFS support is partially implemented (detection and basic setup), the following functions need CIFS-specific implementations:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file_in_backupstore
- backupstore_delete_file_in_backupstore
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
Each of these functions currently has an if-elif block for S3 and NFS but falls through to raise NotImplementedError, which will break CIFS backup operations.
🔗 Analysis chain
Implement CIFS-specific functions for backup operations.
Several functions in the file handle different backup store types but lack CIFS implementations. This could lead to NotImplementedError when these functions are called with a CIFS backup store. Consider implementing CIFS-specific versions of the following functions:
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_delete_file
- Other similar functions that have S3 and NFS implementations
Let's verify the functions that need CIFS implementation:
Consider creating a backup store interface or abstract class to ensure consistent implementation across different backup store types.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find all functions that handle different backup store types
echo "Functions potentially needing CIFS implementation:"
rg "if is_backupTarget_s3.*elif is_backupTarget_nfs.*else.*raise NotImplementedError" -A 1
Length of output: 203
Script:
#!/bin/bash
# Let's search for functions that handle backup store types with a broader pattern
echo "Functions handling backup store types:"
rg "is_backupTarget_(s3|nfs|cifs)" -B 2 -A 5
# Also search for NotImplementedError to find potential gaps
echo -e "\nFunctions with NotImplementedError:"
rg "NotImplementedError.*backup" -B 2 -A 2
# Look for existing CIFS implementations
echo -e "\nExisting CIFS implementations:"
rg "is_backupTarget_cifs" -B 2 -A 2
Length of output: 19333
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} |
🛠️ Refactor suggestion
Consider extracting duplicated CIFS handling logic

The CIFS backup store type handling logic is duplicated in both `run_longhorn_upgrade_test()` and `run_longhorn_tests()` functions. Consider extracting this into a helper function to improve maintainability and reduce code duplication.
```diff
+get_backup_store_for_test() {
+  local manifest_file="$1"
+  local store_type="$2"
+  local store_index=1
+
+  case "$store_type" in
+    "s3") store_index=1 ;;
+    "nfs") store_index=2 ;;
+    "cifs") store_index=3 ;;
+  esac
+
+  yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "$manifest_file" | \
+    awk -F ',' "{print \$$store_index}" | sed 's/ *//'
+}
+
 run_longhorn_upgrade_test(){
   # ... existing code ...
   if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "s3")
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
   elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "nfs")
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
   elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "cifs")
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
   fi
```
This refactoring:
- Reduces code duplication
- Makes it easier to add new backup store types
- Centralizes the backup store extraction logic
- Improves maintainability
Also applies to: 458-460
This pull request is now in conflict. Could you fix it @chriscchien? 🙏
Force-pushed 06626a3 to 679b07c.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 14
🧹 Outside diff range and nitpick comments (9)
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (1)
30-30: Add newline at end of file.

It's recommended to maintain a newline at the end of shell scripts as per POSIX standards. This ensures proper behavior when concatenating or appending to the file.

```diff
 fi
+
```
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1)
Line range hint 1-3: Improve error handling strategy.

The current use of `set -e` could cause premature script termination during critical setup steps. A more robust approach would be to implement proper error handling for each critical section.

Consider this approach:

```diff
 #!/bin/bash
-set -e
+# Helper function for commands that need retries
+retry_command() {
+  local -r cmd="$1"
+  local -r description="$2"
+  local -r max_attempts=3
+  local -r sleep_time=5
+
+  for ((i=1; i<=max_attempts; i++)); do
+    if eval "$cmd"; then
+      return 0
+    fi
+    echo "Attempt $i/$max_attempts: $description failed. Retrying in ${sleep_time}s..."
+    sleep $sleep_time
+  done
+
+  echo "Error: $description failed after $max_attempts attempts"
+  return 1
+}

 # Replace critical commands with retry mechanism
-sudo systemctl restart guestregister
+retry_command "sudo systemctl restart guestregister" "guest registration"

-sudo zypper ref
-sudo zypper install -y -t pattern devel_basis
+retry_command "sudo zypper ref" "repository refresh"
+retry_command "sudo zypper install -y -t pattern devel_basis" "development pattern installation"
```

This approach:
- Removes the global
set -e
- Implements a retry mechanism for critical commands
- Provides better error visibility and recovery
- Maintains script execution even if non-critical commands fail
manager/integration/README.md (2)
19-19
: Fix typos and improve documentation clarity.
- Fix the typo "Azurtize" → "Azurite"
- Improve the grammar and formatting of the manual steps note
Apply this diff:
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurtize` server) for test purposes. +1. Deploy all backupstore servers (including `NFS` server, `Minio` as S3 server, `CIFS`, and `Azurite` server) for test purposes. - For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing). + For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.Also applies to: 21-21
24-26: Improve command block formatting and reliability.
- Add language specification to the code block
- Consider using versioned URLs or documenting the version requirement
- Improve command readability
Apply this diff:

````diff
-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````

Consider adding a note about the required Longhorn version compatibility for these manifests.
pipelines/utilities/install_backupstores.sh (1)
4-11: Fix typo in variable name and add error handling.

The variable name contains a typo: `AZURTIZE_BACKUPSTORE_URL` should be `AZURITE_BACKUPSTORE_URL`.
- AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" + AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" + + # Verify URLs are accessible + for url in "${MINIO_BACKUPSTORE_URL}" "${NFS_BACKUPSTORE_URL}" "${CIFS_BACKUPSTORE_URL}" "${AZURITE_BACKUPSTORE_URL}"; do + if ! curl --output /dev/null --silent --head --fail "$url"; then + echo "Error: Unable to access $url" + exit 1 + fi + done + kubectl create -f ${MINIO_BACKUPSTORE_URL} \ -f ${NFS_BACKUPSTORE_URL} \ -f ${CIFS_BACKUPSTORE_URL} \ - -f ${AZURTIZE_BACKUPSTORE_URL} + -f ${AZURITE_BACKUPSTORE_URL} || { + echo "Error: Failed to create backup stores" + exit 1 + }e2e/README.md (2)
17-24: Documentation improvements needed for backup store setup.
The Azurite setup link is an internal Confluence URL that may not be accessible to all users. Consider:
- Moving the setup instructions to a public documentation
- Adding the essential setup steps directly in this README
Add prerequisites for CIFS and Azurite setup:
- Required credentials or configurations
- Network requirements
- Storage requirements
Apply these improvements:
````diff
-1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.
+1. Deploy all backupstore servers for test purposes:
+   - NFS server
+   - Minio (S3 compatible)
+   - CIFS server (requires: <list prerequisites>)
+   - Azurite server (requires: <list prerequisites>)
-   For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   For Azurite setup:
+   1. <Step 1>
+   2. <Step 2>
+   3. For detailed instructions, see [Azurite Setup Guide](public-url-to-documentation)
 ```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
 ```
````
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...r Azurtize, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
19-20
: Fix grammar and markdown formatting.
- Grammar: "after manifest deployed" should be "after the manifest is deployed"
- The URL should be properly formatted as a markdown link
- Code block should specify the language
Apply these fixes:
- For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing). + For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing). -``` +```bash🧰 Tools
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...r Azurtize, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
pipelines/utilities/run_longhorn_test.sh (1)
27-32
: Add documentation for the backup store value format.The code assumes a specific comma-separated format in env[1].value but this format is not documented. Consider adding a comment explaining the expected format:
Add this comment before the if-elif block:
+ # env[1].value format: "<s3_value>,<nfs_value>,<cifs_value>,<azurite_value>" if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
test_framework/scripts/longhorn-setup.sh (1)
567-570: Remove dead code: commented-out block

Lines 567-570 contain code that is commented out. Unless needed for future reference, it's best to remove dead code to enhance readability.
Consider removing or explaining the necessity of this commented-out block.
🛑 Comments failed to post (14)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_server.sh.tpl (1)
8-10: ⚠️ Potential issue

Fix incomplete and potentially unsafe Azure CLI installation.
The current Azure CLI installation command is incomplete and needs improvement:
- The `curl` command only downloads but doesn't execute or save the script
- Missing error handling and installation verification
- No version pinning for better reproducibility
Apply this diff to implement a safer Azure CLI installation:
```diff
 # install azure cli
-curl -sL https://aka.ms/InstallAzureCLIDeb
+echo "Installing Azure CLI..."
+if ! command -v az &> /dev/null; then
+  curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+  if ! command -v az &> /dev/null; then
+    echo "Azure CLI installation failed"
+    exit 1
+  fi
+  echo "Azure CLI installation completed"
+fi
```

This implementation:
- Checks if Azure CLI is already installed
- Properly pipes the installation script to bash
- Verifies successful installation
- Includes error handling
pipelines/utilities/run_longhorn_test.sh (2)
30-32: ⚠️ Potential issue

Fix typo in backup store type name.
The backup store type "azurtize" appears to be a typo and should be "azurite" (Azure Storage Emulator).
Apply this diff to fix the typo:
```diff
- elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+ elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```
119-121: ⚠️ Potential issue

Fix typo in backup store type name.
The backup store type "azurtize" appears to be a typo and should be "azurite" (Azure Storage Emulator).
Apply this diff to fix the typo:
```diff
- elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+ elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```
pipelines/gke/scripts/longhorn-setup.sh (5)
150-150: ⚠️ Potential issue

Correct the function call to `setup_azurite_backup_store`.

The function call `setup_azuitize_backup_store` on line 150 has a typo. The correct function name should be `setup_azurite_backup_store`.
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.setup_azurite_backup_store
144-145:
⚠️ Potential issueCorrect the typo in variable name for Azurite backup store URL.
The variable
AZURTIZE_BACKUPSTORE_URL
appears to have a typo. It should beAZURITE_BACKUPSTORE_URL
to correctly reference the Azurite backup store.Apply this diff to fix the typo:
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" -AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" +AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
262-267:
⚠️ Potential issue

Ensure consistent spelling of 'Azurite' in test setup.

In the test setup, the backup store type and related variables have typos. Please correct `azurtize` to `azurite`.

Apply this diff to fix the typos:

```diff
 elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
   BACKUP_STORE_FOR_TEST=`...`
   yq e -i '...' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
   BACKUP_STORE_FOR_TEST=`...`
   yq e -i '...' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
```

Committable suggestion skipped: line range outside the PR's diff.
153-174:
⚠️ Potential issue

Fix typos in function name and variables in `setup_azurite_backup_store`.

The function `setup_azuitize_backup_store` contains typos in its name and within its body. Please correct all instances of `azuitize` and `azurtize` to `azurite`.

Apply these diffs to correct the typos:

```diff
-setup_azuitize_backup_store(){
+setup_azurite_backup_store(){

 # In line 156:
-until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
+until (kubectl get pods | grep 'longhorn-test-azurite' | grep 'Running'); do

 # In line 157:
-echo 'Waiting azurtize pod running'
+echo 'Waiting for Azurite pod to be running'

 # In line 165:
-nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 &
+nohup kubectl port-forward --address 0.0.0.0 service/azurite-service 20001:10000 &

 # In line 168:
-AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
+AZURITE_ENDPOINT=$(echo -n "http://$(kubectl get svc azurite-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)

 # In line 169:
-kubectl -n longhorn-system patch secret azblob-secret \
+kubectl -n longhorn-system patch secret azurite-secret \

 # In line 173:
-az storage container create -n longhorn-test-azurtize --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=..."
+az storage container create -n longhorn-test-azurite --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=..."
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
setup_azurite_backup_store(){
  RETRY=0
  MAX_RETRY=60
  until (kubectl get pods | grep 'longhorn-test-azurite' | grep 'Running'); do
    echo 'Waiting for Azurite pod to be running'
    sleep 5
    if [ $RETRY -eq $MAX_RETRY ]; then
      break
    fi
    RETRY=$((RETRY+1))
  done

  nohup kubectl port-forward --address 0.0.0.0 service/azurite-service 20001:10000 &
  sleep 5

  AZURITE_ENDPOINT=$(echo -n "http://$(kubectl get svc azurite-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azurite-secret \
    --type=json \
    -p="[{\"op\": \"replace\", \"path\": \"/data/AZBLOB_ENDPOINT\", \"value\": \"${AZURITE_ENDPOINT}\"}]"

  az storage container create -n longhorn-test-azurite --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;"
}
```
208-213:
⚠️ Potential issue

Correct the backup store type comparison for Azurite.

The comparison `$BACKUP_STORE_TYPE = "azurtize"` has a typo. It should be `"azurite"` to correctly match the backup store type.

Apply this diff to fix the typo:

```diff
 elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
   BACKUP_STORE_FOR_TEST=`...`
   yq e -i '...' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
   BACKUP_STORE_FOR_TEST=`...`
   yq e -i '...' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
```
test_framework/scripts/longhorn-setup.sh (6)
342-342:
⚠️ Potential issue

Typo in Function Call: Correct 'setup_azuitize_backup_store' to 'setup_azurite_backup_store'

At line 342, the function `setup_azuitize_backup_store` is called, but there seems to be a typo in the function name. It should be `setup_azurite_backup_store` to match the intended functionality.

Apply this diff to correct the function call:

```diff
-setup_azuitize_backup_store
+setup_azurite_backup_store
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
setup_azurite_backup_store
```
336-337:
⚠️ Potential issue

Typo in Variable Name: Correct 'AZURTIZE' to 'AZURITE'

The variable `AZURTIZE_BACKUPSTORE_URL` should be renamed to `AZURITE_BACKUPSTORE_URL` to match the correct spelling of Azurite.

Apply this diff to correct the variable name:

```diff
 CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
-AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
+AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
```
430-432:
⚠️ Potential issue

Typo in Backup Store Type: Correct 'azurtize' to 'azurite'

In the conditional statement at line 430, the backup store type is misspelled as `azurtize`. It should be `azurite` to correctly match the intended backup store.

Apply this diff to correct the backup store type:

```diff
 elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```

Committable suggestion skipped: line range outside the PR's diff.
487-492:

🛠️ Refactor suggestion / ⚠️ Potential issue

Consistent Handling of Backup Store Types in Tests

The conditional blocks handling different `BACKUP_STORE_TYPE` values in `run_longhorn_tests()` contain a typo in the backup store type `azurtize`. It should be `azurite`.

Correct the backup store type and consider refactoring to reduce code duplication. Apply this diff:

```diff
 elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```

To improve maintainability, consider extracting the duplicated logic for handling `BACKUP_STORE_FOR_TEST` into a helper function, similar to the prior suggestion.

Committable suggestion skipped: line range outside the PR's diff.
362-367:
⚠️ Potential issue

Security Concern: Avoid Hardcoding Credentials in Scripts

At line 366, the `az storage container create` command includes hardcoded credentials in the connection string. Storing sensitive information like account keys directly in scripts poses a security risk.

Consider the following actions:

- Retrieve credentials securely from environment variables or a secret management system.
- Replace hardcoded values with secure references.

For example:

```diff
-ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurtize --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${AZURITE_CONNECTION_STRING}'"
```

Ensure `AZURITE_CONNECTION_STRING` is securely provided.

Committable suggestion skipped: line range outside the PR's diff.
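A minimal sketch of that hedged alternative in Python, assuming the connection string is exported as an environment variable (`AZURITE_CONNECTION_STRING` is the hypothetical name suggested above, not something the PR defines):

```python
import os
import subprocess

# Assumes AZURITE_CONNECTION_STRING was injected from a secret store or a
# CI credential, rather than being hardcoded in the script.
conn_str = os.environ.get("AZURITE_CONNECTION_STRING")
if not conn_str:
    raise SystemExit("AZURITE_CONNECTION_STRING is not set")

# Create the test container via the Azure CLI without embedding the key.
subprocess.run(
    ["az", "storage", "container", "create",
     "-n", "longhorn-test-azurite",
     "--connection-string", conn_str],
    check=True,
)
```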
345-367:
⚠️ Potential issue

Typo in Function Name and Consistent Naming

The function `setup_azuitize_backup_store()` has a typo in the name. It should be `setup_azurite_backup_store()` to correctly reference Azurite.

Apply this diff to correct the function name:

```diff
-function setup_azuitize_backup_store(){
+function setup_azurite_backup_store(){
```

Additionally, review all occurrences of `azuitize` in the function and replace them with `azurite` for consistency.

Committable suggestion skipped: line range outside the PR's diff.
force-pushed from 6dd4978 to d12551d (Compare)
This ticket is ready for review, thanks. cc @longhorn/qa @longhorn/dev
There are lots of `azurtize` in this PR. Should it be `azurite`?
```shell
CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
# port forward and az container create need to be run on control node
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
```
Do you know why we need to ssh into the control plane to run the `az` command? Can it be directly run on the Jenkins agent like other `kubectl` commands to skip the `ssh` step?
It should be possible to perform the create remotely, but since we need to ssh into the control plane node to perform the port forward (the workaround to set up the Azurite backup store), using one ssh command to create the az storage container on the control plane node directly is more efficient. I'd love to hear your thoughts on this, thank you.
Isn't the port-forward workaround using a `local port`? Not sure, but I assume that means your local machine rather than the control plane node? So do you mean that if we port-forward on the local machine, the workaround will not work?
3. Forward azblob-service: forward local port 20001 traffic to port 10000 on the service
Additionally, the username `ec2-user` in `ec2-user@${CONTROL_PLANE_PUBLIC_IP}` limits the script to EC2 `sles` or `sle-micro` instances. It won't work on other distros like Rocky, RHEL, etc., or in other environments like a local VM or a Harvester HAL cluster.
Yes, I gave it a try: if the port forward runs on the Jenkins agent, the backup store cannot be accessed. As for the `local port` description, it actually means the Kubernetes node; I have updated the Confluence content.

And `azurite` is currently supported on SLES only in this PR; I can create another ticket to implement it on other distros.
```diff
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
```
Would you like to add `samba` to terraform scripts for other distros as well?
e2e/README.md
Outdated
```diff
-1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.
+
+   For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```
Could we have a test report for the CIFS and Azurite robot test cases run on our Jenkins?
Test case "Test Incremental Restore" passed on my jenkins with cifs
and azurite
, because our jenkins need to point to my branch to run the scripts, currently it can not have report on our jenkins.
```diff
@@ -64,6 +66,10 @@ def set_random_backupstore(request, client):
     elif request.param == "nfs":
         set_backupstore_nfs(client)
         mount_nfs_backupstore(client)
+    elif request.param == "cifs":
+        set_backupstore_cifs(client)
+    elif request.param == "azblob":
```
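To make the fixture's dispatch concrete, here is a stripped-down sketch of how `set_random_backupstore` branches on `request.param`; only the new branches are shown, and the `...` helper bodies are placeholders for the real implementations in `backupstore.py`:

```python
import pytest

def set_backupstore_cifs(client): ...     # real body lives in backupstore.py
def set_backupstore_azurite(client): ...  # real body lives in backupstore.py

@pytest.fixture(params=["cifs", "azblob"])
def set_random_backupstore(request, client):
    # pytest instantiates the fixture once per param, so every test using
    # it runs against each configured backup store type.
    if request.param == "cifs":
        set_backupstore_cifs(client)
    elif request.param == "azblob":
        set_backupstore_azurite(client)
```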
If I run other backup related test cases that manipulate the backup store, like `test_backup_volume_list`, `test_backup_block_deletion`, `test_backup_metadata_deletion`, `test_all_replica_restore_failure`, etc., will they pass? If not, should they be skipped?
This is exactly what I'm concerned about: #2155 (comment)
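If those cases do turn out to be unsupported, one hedged way to handle it is to mark the affected parametrizations so they skip cleanly rather than fail. A sketch, assuming the fixture param names from the diff above:

```python
import pytest

# Hypothetical sketch: parametrize the backup stores, marking the ones
# whose file-manipulation helpers are unimplemented so those cases skip.
BACKUPSTORES = [
    "s3",
    "nfs",
    pytest.param("cifs", marks=pytest.mark.skip(
        reason="backup store file manipulation not implemented")),
    pytest.param("azblob", marks=pytest.mark.skip(
        reason="backup store file manipulation not implemented")),
]
```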
e2e/README.md
Outdated
```diff
-1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.
+
+   For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```
@mantissahz @chriscchien Could this be resolved to eliminate the tedious manual steps?
There is a discussion in longhorn/longhorn#9340; currently the manual steps are the workaround to set up Azurite in the test environment. If the discussion reaches a further conclusion, I will follow up and modify the code to match the behavior.
cc @mantissahz
force-pushed from d12551d to 3c57f3b (Compare)
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (12)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)
4-4
: Verify package installation error handling.

The script should handle potential package installation failures gracefully, especially in CI environments where network issues might occur.
Consider adding error handling:
```diff
-apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+if ! apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`; then
+  echo "Failed to install required packages"
+  exit 1
+fi
```

test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)
7-7
: Consider adding package installation verification.

For improved robustness in CI environments, consider adding package installation verification.
```diff
-apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+
+# Verify critical packages are installed
+for pkg in nfs-common cryptsetup dmsetup samba; do
+  if ! dpkg -l | grep -q "^ii  $pkg "; then
+    echo "Failed to install $pkg"
+    exit 1
+  fi
+done
```

test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint 1-7: Consider adding SELinux configuration for CIFS.

Since this script configures SELinux and installs samba, it would be beneficial to add the necessary SELinux boolean settings for CIFS functionality.
Consider adding these configurations after the SELinux mode setting:
```diff
 elif [[ ${selinux_mode} == "permissive" ]]; then
   setenforce 0
 fi
+
+# Configure SELinux for CIFS if in enforcing mode
+if [[ ${selinux_mode} == "enforcing" ]] ; then
+  # Enable SELinux boolean for CIFS
+  setsebool -P samba_enable_home_dirs on
+  setsebool -P samba_share_t on
+fi
```

test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint 7-24: Consider adding system tuning parameters for CIFS performance.

While the script correctly sets up kernel modules and system parameters for various functionalities, it might benefit from additional CIFS-specific tuning parameters for optimal performance.
Consider adding these CIFS-related system parameters:
```diff
 touch /etc/modules-load.d/modules.conf
 cat > /etc/modules-load.d/modules.conf <<EOF
 uio
 uio_pci_generic
 vfio_pci
 nvme-tcp
 dm_crypt
+cifs
 EOF

 echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
 echo "vm.nr_hugepages=1024" >> /etc/sysctl.conf
+# CIFS performance tuning
+echo "net.ipv4.tcp_window_scaling = 1" >> /etc/sysctl.conf
+echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
+echo "net.core.wmem_max = 16777216" >> /etc/sysctl.conf
```

test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (2)
13-13
: Consider version pinning for reproducible builds.

The addition of the `samba` package is appropriate for CIFS support. However, consider pinning the version for better reproducibility across environments.

```diff
-sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba
+sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba-4.15.0
```
13-13
: Consider additional Samba configurations for robust CIFS support.

While the Samba package is installed, the script might benefit from additional configurations to ensure reliable CIFS operations, especially in SELinux enforcing mode:
- SELinux context settings for Samba
- Basic SMB client configurations
- Service enablement if server functionality is needed
Consider adding these configurations after the package installation:
```diff
 sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba
+# Configure SELinux for Samba
+if [[ ${selinux_mode} == "enforcing" ]] ; then
+  sudo setsebool -P samba_enable_home_dirs on
+  sudo setsebool -P samba_share_nfs on
+fi
+
+# Basic SMB client configurations
+sudo cat >> /etc/samba/smb.conf <<EOF
+[global]
+client min protocol = SMB2
+client max protocol = SMB3
+EOF
```

test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1)
13-13
: Consider adding security hardening for Samba.

Since CIFS/Samba can be a potential attack vector, consider implementing these security enhancements:
- Ensure SMBv1 is disabled
- Configure minimum SMB protocol version
- Set appropriate file permissions
Add these security configurations to strengthen Samba:
```diff
 sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper jq nmap-ncat samba
+# Disable SMBv1 and set minimum protocol version to SMBv2
+sudo tee -a /etc/samba/smb.conf > /dev/null <<EOF
+[global]
+  client min protocol = SMB2
+  server min protocol = SMB2
+  security = user
+EOF
```

manager/integration/README.md (2)
19-21
: Improve documentation formatting and clarity.
- The URL should be properly formatted as a markdown link.
- Consider adding more details about the manual steps required for Azurite setup.
Apply this diff to improve the formatting:
```diff
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurite` server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server, `Minio` as S3 server, `CIFS`, and `Azurite` server) for test purposes.
-   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   For Azurite, manual steps are required after deploying the manifest. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.
```

🧰 Tools
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
24-26
: Add code block language specification and improve command readability.

The kubectl command block should specify the shell language and be properly formatted for better readability.
Apply this diff to improve the formatting:
````diff
-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````

pipelines/utilities/install_backupstores.sh (1)
25-28
: Improve secret patching with validation.

The current implementation doesn't validate the service IP or secret patching success.
Consider adding validation:
```diff
-  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
-  kubectl -n longhorn-system patch secret azblob-secret \
-    --type=json \
-    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"
+  # Get service IP with validation
+  SERVICE_IP=$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}')
+  if [ -z "$SERVICE_IP" ]; then
+    echo "Error: Failed to get Azurite service IP"
+    exit 1
+  fi
+
+  # Patch secret with validation
+  AZBLOB_ENDPOINT=$(echo -n "http://${SERVICE_IP}:10000/" | base64)
+  if ! kubectl -n longhorn-system patch secret azblob-secret \
+    --type=json \
+    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"; then
+    echo "Error: Failed to patch azblob-secret"
+    exit 1
+  fi
```

e2e/README.md (2)
17-19
: Enhance documentation and consider automation.
- Format the Confluence URL as a proper markdown link.
- Consider automating the manual Azurite setup steps to improve CI/CD efficiency.
Apply this diff to fix the markdown formatting:
```diff
- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, there are some manual steps need to be done after manifest deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```

Would you like me to help create a script to automate the Azurite setup steps? This could eliminate the manual intervention currently required.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
22-24
: Add language specifier to the code block.

The deployment commands look good, but the code block should specify the language for better syntax highlighting.
Add `bash` as the language specifier to the code block:

````diff
-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
     -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4400e94cf45562e2770921746201a69d68df7d60 and 3c57f3b1454f3613d146da9a105b3c66170206d8.

📒 Files selected for processing (23)

* e2e/README.md (1 hunks)
* manager/integration/README.md (1 hunks)
* manager/integration/deploy/test.yaml (1 hunks)
* manager/integration/tests/backupstore.py (14 hunks)
* manager/integration/tests/common.py (1 hunks)
* pipelines/utilities/install_backupstores.sh (1 hunks)
* pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
* pipelines/utilities/run_longhorn_test.sh (2 hunks)
* test_framework/scripts/longhorn-setup.sh (3 hunks)
* test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (2 hunks)
* test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1 hunks)
* test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)

🚧 Files skipped from review as they are similar to previous changes (5)

* manager/integration/deploy/test.yaml
* pipelines/utilities/run_longhorn_e2e_test.sh
* pipelines/utilities/run_longhorn_test.sh
* test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
* test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl

🧰 Additional context used

🪛 Markdownlint

e2e/README.md: 19-19 Bare URL used (MD034, no-bare-urls); 20-20 Fenced code blocks should have a language specified (MD040, fenced-code-language)

manager/integration/README.md: 21-21 Bare URL used (MD034, no-bare-urls); 22-22 Fenced code blocks should have a language specified (MD040, fenced-code-language)

🔇 Additional comments (31)

test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)

4-4: LGTM! Consider additional CIFS configuration.

The addition of the `samba` package is correct and necessary for CIFS support. However, for complete CIFS functionality, you might want to consider adding basic CIFS configuration.

Let's verify if any CIFS configuration files are present in the repository:

test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (2)

17-17: LGTM: Addition of samba package for CIFS support

The addition of the samba package aligns with the PR objective to support CIFS as a backup store type.

17-17: Consider adding package installation verification and service configuration

While the samba package installation is correct, consider:
1. Verifying successful package installation
2. Configuring and starting the samba service if required for CIFS backup store functionality

Consider adding these verifications:

```diff
 sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools samba
+# Verify samba package installation
+if ! rpm -q samba > /dev/null; then
+  echo "Error: samba package installation failed"
+  exit 1
+fi
+
 sudo systemctl -q enable iscsid
 sudo systemctl start iscsid
+
+# Configure and start samba service if needed for CIFS backup store
+if [[ "${BACKUP_STORE_TYPE}" == "CIFS" ]]; then
+  sudo systemctl enable smb
+  sudo systemctl start smb
+fi
```
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (2)
7-7
: LGTM: Addition of samba package aligns with CIFS support requirements.

The addition of the `samba` package is appropriate for enabling CIFS support in the testing environment.
Line range hint 40-43: Verify K3s version compatibility with CIFS workloads.

The script installs K3s using a specified version. Ensure the chosen K3s version supports the CIFS-related workloads you plan to run.
✅ Verification successful
Let me gather more information about the CIFS workload requirements and version specifications.
Based on the gathered information, I can now provide the final response:
CIFS workload compatibility is already verified through extensive testing infrastructure
The codebase shows comprehensive CIFS support through Longhorn's backup store functionality, with:
- Dedicated CIFS backup store deployment and configuration
- Integration tests specifically covering CIFS scenarios
- E2E test infrastructure that validates CIFS workloads
The K3s version is properly managed through the `distro_version` variable, which is consistently used across different infrastructure deployments. The CIFS functionality is part of the standard test suite and doesn't require special K3s version considerations.
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check if the K3s version variable is defined and not empty # This helps ensure we're using a specific, tested version # Test: Check if INSTALL_K3S_VERSION is used in the template rg "INSTALL_K3S_VERSION.*distro_version" -A 2 -B 2 # Test: Look for any K3s version specifications in related config files fd -e yaml -e json -e tf | xargs rg "k3s.*version"Length of output: 2976
Script:
#!/bin/bash # Check for CIFS-related configurations and requirements rg -i "cifs|smb" -A 3 -B 3 # Look for kubernetes version variables and their values rg "k8s_distro_version|kubernetes.*version" --type terraform # Check for any storage class or volume definitions that might use CIFS fd -e yaml -e yml | xargs rg -l "storageClass|persistentVolume.*|csi"Length of output: 15455
test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (2)
5-5
: LGTM! Verify samba configuration.The addition of the
samba
package aligns with the PR objective to support CIFS backup stores. However, ensure that the samba configuration is properly handled elsewhere in the codebase.✅ Verification successful
LGTM! The samba package installation is correctly integrated.
The search results confirm that the
samba
package installation is part of a well-integrated CIFS backup store implementation:
- CIFS backup store is properly configured through
cifs-backupstore.yaml
manifest- The configuration is managed at the Kubernetes level using secrets (
cifs-secret
)- Integration tests include CIFS backup store testing alongside other storage backends
- No direct Samba configuration is needed as it's handled through Kubernetes/Longhorn's backup store implementation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for samba configuration files in the codebase # Expected: Find related configuration files for samba setup # Search for potential samba config files echo "Searching for samba configuration files..." rg -g '!{*.md,*.txt}' -i 'smb\.conf|samba.*\.conf' # Search for samba-related setup code echo "Searching for samba setup code..." rg -g '!{*.md,*.txt}' -i '(configure|setup).*samba'Length of output: 346
Script:
#!/bin/bash # Let's check if there are any other files in the codebase that might be configuring or using CIFS/SMB # This will help us understand the broader context of Samba usage # Search for CIFS/SMB related code echo "Searching for CIFS/SMB usage..." rg -i "cifs|smb" -A 3 -B 3 # Check for mount commands or configurations echo "Searching for mount configurations..." rg -i "mount.*cifs|mount.*smb" -A 3 -B 3 # Look for storage class or PV/PVC configurations that might use CIFS echo "Searching for Kubernetes storage configurations..." rg -g '*.{yaml,yml}' -i "cifs|smb"Length of output: 15468
5-5
: Consider security implications of samba installation.While adding the samba package is necessary for CIFS support, it increases the attack surface. Consider:
- Ensuring samba service is only started when needed
- Implementing proper firewall rules
- Applying security hardening configurations
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (2)
Line range hint
1-47
: Verify systemd service dependencies.Since CIFS support is being added, we should ensure that the RKE2 agent service starts after the SMB/CIFS services are ready.
#!/bin/bash # Description: Check if there are any existing systemd service dependencies # Expected: Find systemd unit files or service configurations # Search for systemd service configurations rg -A 5 "systemctl|\.service" # Search for specific RKE2 agent service configurations fd "rke2-agent.service"
4-4
: LGTM: Samba package installation for CIFS support.The addition of the
samba
package aligns with the PR objective to support CIFS in the CI environment.Let's verify if similar changes are needed for other distributions:
✅ Verification successful
Based on the search results, I can now verify the review comment. The search shows that
samba
package installation has been consistently added across different distributions:
- For Ubuntu (apt-based):
apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
- For SLES/SUSE (zypper-based):
sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
- For RHEL/CentOS/Oracle Linux (yum-based):
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
LGTM: Samba package installation is consistently handled across distributions
The addition of the
samba
package in the Ubuntu RKE2 agent provisioning script aligns with similar changes made in other distribution-specific provisioning scripts, ensuring consistent CIFS support across all supported platforms.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check if samba package installation is consistently handled across different distributions # Expected: Find other provisioning scripts that might need similar updates # Search for similar provisioning scripts fd provision_rke2_agent.sh.tpl # Search for package installation patterns in these files rg -A 5 "apt-get install|yum install|zypper install"Length of output: 117685
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint
1-55
: Verify error handling for CIFS-related operations.The script uses
set -e
for error handling, which is good. However, since we're adding CIFS support, we should ensure that any CIFS-related failures are properly handled and don't silently fail.Let's check for error handling patterns in similar scripts:
test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1)
13-13
: LGTM! Addition of samba package aligns with CIFS support requirements.The addition of the samba package is necessary for CIFS functionality and is correctly placed in the system package installation step.
Let's verify the package installation and service configuration:
Note: Consider adding samba service management (enable/start) if required by your CIFS implementation.
✅ Verification successful
LGTM! Samba package installation is consistent across agent nodes and aligns with CIFS support requirements.
The verification shows:
- Samba package is consistently added to agent node provisioning scripts across different OS distributions:
- RHEL:
yum install -y ... samba
- Ubuntu:
apt-get install -y ... samba
- SLES:
zypper install -y ... samba
- CentOS/RockyLinux:
yum/dnf install -y ... samba
- CIFS backupstore configuration exists in the test framework
- No explicit samba service management is required as CIFS functionality only needs the package installed
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify samba package installation and configuration in other scripts # Test 1: Check if samba configuration is handled in other scripts rg -A 5 'smb\.conf|samba\.conf' # Test 2: Check if samba service is managed in other scripts rg -A 5 'systemctl.*smb|systemctl.*samba' # Test 3: Check for any samba-related security configurations rg -A 5 'SELinux.*samba|selinux.*smb'Length of output: 117
Script:
#!/bin/bash # Let's verify if samba package is consistently added across all similar provisioning scripts # and check for any existing samba-related configurations # Check all provisioning scripts for package installations echo "=== Checking provisioning scripts for package installations ===" fd "provision.*\.(sh|tpl)" | xargs rg "yum install|apt-get install" # Check for any samba/CIFS related configurations or mount points echo -e "\n=== Checking for CIFS/samba configurations ===" rg -i "cifs|samba|smb" -A 5 # Check for any systemd service configurations echo -e "\n=== Checking systemd service configurations ===" fd ".*\.service" | xargs cat 2>/dev/null # Check for any security or SELinux related configurations echo -e "\n=== Checking security configurations ===" rg "selinux|apparmor|security" -A 5Length of output: 123821
test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (2)
5-5
: Consider security implications of Samba installation.While adding the
samba
package is necessary for CIFS support, consider:
- Ensuring Samba service doesn't auto-start unless explicitly needed
- Documenting any required Samba configurations for CIFS usage
- Adding security hardening guidelines if Samba service needs to run
Let's check if Samba service is being managed elsewhere:
#!/bin/bash # Description: Check for Samba service management # Test: Search for smb/samba service management in scripts # Expect: Find any explicit Samba service configurations # Search for Samba service management rg "systemctl.*smb|systemctl.*samba"
5-5
: LGTM! Addition of samba package aligns with CIFS support requirements.The addition of the
samba
package is correctly implemented and necessary for CIFS support.Let's verify similar changes across other provisioning scripts:
test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (2)
13-13
: Verify SELinux policies for CIFSSince this script configures SELinux, ensure that appropriate SELinux policies are in place for CIFS operations.
Run this script to check for SELinux-related CIFS configurations:
13-13
: LGTM: Samba package addition for CIFS supportThe addition of the samba package is correct and necessary for enabling CIFS support.
Let's verify if any additional CIFS-related packages or configurations might be needed:
test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1)
13-13
: Verify Samba configuration requirements.The addition of the
samba
package aligns with CIFS support requirements. However, ensure that any necessary Samba configurations (like SMB protocol version, security settings) are handled elsewhere in the deployment pipeline.Let's verify if there are any Samba configurations in the codebase:
✅ Verification successful
Samba package installation is appropriate and well-integrated
The addition of the
samba
package is verified to be correctly integrated with the existing CIFS backup store functionality. The codebase shows:
- CIFS backup store configuration is managed through Kubernetes manifests downloaded from the Longhorn repository
- CIFS endpoints are properly configured in test environments
- The backup store implementation handles authentication through secrets (
cifs-secret
)- No direct Samba configuration is needed in the agent nodes as they only need the client capabilities
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for Samba/CIFS configurations in the codebase # Expected: Find files containing Samba/CIFS configurations # Search for potential Samba config files fd -e conf -e cfg -e yaml -e yml | xargs rg -l 'smb|cifs|samba' # Search for Samba/CIFS related environment variables or configurations rg -g '!*.md' -i '(smb|cifs|samba).*=.*'Length of output: 609
Script:
#!/bin/bash # Let's examine the content of these files to understand the CIFS configurations # Check the test configuration echo "=== Checking test.yaml configuration ===" cat manager/integration/deploy/test.yaml # Check the CIFS backupstore setup echo -e "\n=== Checking longhorn-setup.sh ===" cat test_framework/scripts/longhorn-setup.sh | grep -A 5 -B 5 CIFS # Check the backup stores installation script echo -e "\n=== Checking install_backupstores.sh ===" cat pipelines/utilities/install_backupstores.sh | grep -A 5 -B 5 CIFS # Check the e2e test script echo -e "\n=== Checking run_longhorn_e2e_test.sh ===" cat pipelines/utilities/run_longhorn_e2e_test.sh | grep -A 5 -B 5 CIFSLength of output: 6543
Script:
#!/bin/bash # Let's check the CIFS backupstore configuration that's being downloaded # and any other Samba/CIFS related configurations # Search for any smb.conf or related configuration files fd -e conf | grep -i "smb\|samba" # Check for any security or mount related configurations rg -g '!*.md' "mount.*cifs|smbclient|security\s*=\s*|vers\s*=\s*" # Check if there are any other scripts that might configure Samba fd -e sh -e bash | xargs rg -l "mount.*cifs|smbclient|smb.conf"Length of output: 856
manager/integration/README.md (1)
19-26
: Verify the availability of new backup store manifests.The documentation references new YAML files for CIFS and Azurite backup stores. Let's verify their existence in the repository.
✅ Verification successful
Backup store manifests are available and accessible
The verification confirms that both
cifs-backupstore.yaml
andazurite-backupstore.yaml
are available at the specified URLs in the Longhorn repository, validating the documentation's accuracy.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the existence of new backup store manifests # Test: Check if the YAML files exist in the longhorn repository for store in cifs azurite; do echo "Checking ${store}-backupstore.yaml..." curl -s -f "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/${store}-backupstore.yaml" > /dev/null if [ $? -eq 0 ]; then echo "✓ ${store}-backupstore.yaml exists" else echo "✗ ${store}-backupstore.yaml not found" fi doneLength of output: 630
🧰 Tools
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
pipelines/utilities/install_backupstores.sh (2)
4-11
: LGTM! Clean integration of new backup stores.

The changes properly integrate CIFS and Azurite backup stores alongside existing MinIO and NFS stores, maintaining consistent URL patterns and deployment structure.
30-34
: ⚠️ Potential issue

Security and reliability concerns in Azurite setup.
Several issues need attention:
- Hardcoded credentials in the Azure connection string pose a security risk
- No cleanup of the port-forward process
- No validation of container creation success
- Using SSH with public IP might not be necessary (as noted in previous review)
Let's verify the necessity of SSH:
Suggested improvements:
- Move credentials to Kubernetes secrets
- Add proper process management
- Add error handling
- Consider removing SSH requirement
```diff
-  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
-  CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
-  # port forward and az container create need to be run on control node
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+  # Start port forwarding with proper process management
+  PORT_FORWARD_PID=""
+  cleanup_port_forward() {
+    if [ ! -z "$PORT_FORWARD_PID" ]; then
+      kill $PORT_FORWARD_PID
+    fi
+  }
+  trap cleanup_port_forward EXIT
+
+  # Get connection details from secret
+  AZURITE_CONN_STR=$(kubectl get secret azurite-credentials -o jsonpath='{.data.connection-string}' | base64 -d)
+
+  # Start port forwarding
+  kubectl port-forward service/azblob-service 20001:10000 &
+  PORT_FORWARD_PID=$!
+
+  # Wait for port-forward to be ready
+  sleep 5
+
+  # Create container with error handling
+  if ! az storage container create -n longhorn-test-azurite --connection-string "$AZURITE_CONN_STR"; then
+    echo "Failed to create Azurite container"
+    exit 1
+  fi
```

e2e/README.md (1)
17-24
: Provide test reports for CIFS and Azurite.

As requested in the previous review, please provide test reports from Jenkins for the CIFS and Azurite robot test cases to validate the implementation.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
manager/integration/tests/backupstore.py (6)
20-21
: LGTM: Import statements are correctly added.

The new imports for CIFS and Azurite backup targets are properly placed with other backup target imports.
69-72
: LGTM: Backup store type handling is properly extended.

The `set_random_backupstore` fixture correctly handles the new CIFS and Azure blob storage types.
125-135
: LGTM: CIFS backup store setup is implemented correctly.

The implementation follows the same pattern as other backup store types, properly handling URL and credentials.
137-146
: LGTM: Azurite backup store setup is implemented correctly.

The implementation follows the same pattern as other backup store types, properly handling URL and credentials.
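For readers without the diff at hand, the setup helpers referenced here follow the pattern of the existing S3/NFS ones: split the `$`-delimited URL and credential secret, then apply both to the backup target settings. A rough sketch, where the `client.set_backupstore_*` calls stand in for whatever the real test client exposes:

```python
def set_backupstore_cifs(client, backupstores):
    # backupstores: list parsed from LONGHORN_BACKUPSTORES; each CIFS entry
    # looks like "cifs://<server>/<share>$<credential-secret>".
    for backupstore in backupstores:
        if backupstore.startswith("cifs://"):
            url, secret = backupstore.split("$")
            client.set_backupstore_url(url)                    # assumed API
            client.set_backupstore_credential_secret(secret)   # assumed API
            break
```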
71-71
: Verify test coverage for CIFS operations.

Based on the comment from @yangchiu, we need to verify if backup-related test cases like `test_backup_volume_list`, `test_backup_block_deletion`, `test_backup_metadata_deletion`, and `test_all_replica_restore_failure`
While the basic setup for CIFS and Azurite is implemented, several critical backup operations are still missing implementations and currently skip the tests:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_delete_file
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
This matches the concerns raised in the past review comments. These operations are essential for backup functionality.
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
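Until those operations are implemented, the skip-instead-of-fail behavior described above can be captured in one guard reused by each of the listed functions. A hypothetical sketch; the URL prefixes match the `cifs://` and `azblob://` schemes used elsewhere in this PR:

```python
import pytest

def skip_backupstore_file_ops_if_unimplemented(backup_target_url):
    # The backupstore_* file helpers listed above currently have no CIFS
    # or Azurite implementation, so the calling test is skipped rather
    # than failing spuriously.
    if backup_target_url.startswith(("cifs://", "azblob://")):
        pytest.skip("file-level backup store operations are not "
                    "implemented for CIFS/Azurite")
```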
test_framework/scripts/longhorn-setup.sh (3)
336-337
: LGTM: Backup store URL definitions

The new URL variables for CIFS and Azurite backup stores follow the existing pattern and convention.
339-342
: LGTM: Backup store installation

The installation of CIFS and Azurite backup stores is properly integrated with the existing backup store deployment process.
427-432
: Duplicate code in CIFS handling logic

This issue was previously identified in the past review comments. The suggestion to extract the duplicated CIFS handling logic into a helper function remains valid.
Also applies to: 487-492
manager/integration/tests/common.py (2)
3898-3900
: LGTM!

The `is_backupTarget_cifs` function follows the established pattern for backup target validation functions and correctly checks for the CIFS URL prefix.
3902-3904
: LGTM!

The `is_backupTarget_azurite` function follows the established pattern for backup target validation functions and correctly checks for the Azure Blob Storage URL prefix.
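Given that description, the two predicates are presumably simple prefix checks, along these lines (a sketch consistent with the stated behavior and the `cifs://`/`azblob://` schemes used in this PR):

```python
def is_backupTarget_cifs(url):
    # CIFS backup targets are addressed as cifs://<server>/<share>
    return url.startswith("cifs://")

def is_backupTarget_azurite(url):
    # Azurite (Azure Blob emulator) targets use the azblob:// scheme
    return url.startswith("azblob://")
```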
```diff
@@ -14,7 +14,7 @@ sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /

 sudo yum update -y
 sudo yum group install -y "Development Tools"
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc samba
```
💡 Codebase verification
Missing cifs-utils package for CIFS mount support
The `samba` package installation is good, but for complete CIFS functionality, the `cifs-utils` package is also required, as it provides the essential mount.cifs utilities. Please add `cifs-utils` to the package installation line:

```shell
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc samba cifs-utils
```
🔗 Analysis chain
LGTM on samba package installation.
The addition of the `samba` package is correct and aligns with the PR objective to support CIFS in the testing environment.
Let's verify if any additional samba-related packages or configurations might be needed:
Let me gather more information about the CIFS configuration to ensure we have all necessary dependencies.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for common samba dependencies and configuration patterns in other files
# Test 1: Search for other samba-related packages in provisioning scripts
echo "Checking for samba package patterns in other provisioning scripts..."
rg -g '*/provision*.sh*' 'yum.*install.*samba'
# Test 2: Search for samba/cifs configuration patterns
echo "Checking for samba/cifs configuration patterns..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'smb\.conf|cifs'
Length of output: 1795
Script:
#!/bin/bash
# Check for any samba-related configuration files or mount utilities
echo "Checking for samba/cifs mount utilities and configurations..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'mount\.cifs|cifs-utils'
# Check if cifs-utils is installed alongside samba in any provisioning scripts
echo "Checking for cifs-utils package installation..."
fd -g '*provision*.sh*' -x grep -l 'cifs-utils'
# Look for any CIFS/SMB related environment variables or configurations
echo "Checking for CIFS/SMB related configurations..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'SMB_|CIFS_|smbclient'
Length of output: 1436
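To see why `mount.cifs` (from `cifs-utils`) matters, here is a hedged sketch of mounting the test share from Python; the share path matches the `cifs://longhorn-test-cifs-svc.default/backupstore` target referenced above, while the credentials and mount point are illustrative placeholders:

```python
import subprocess

# mount.cifs is provided by the cifs-utils package; samba alone does not
# ship the mount helper. Credentials and mount point are placeholders.
subprocess.run(
    ["mount", "-t", "cifs",
     "//longhorn-test-cifs-svc.default/backupstore",
     "/mnt/cifs-backupstore",
     "-o", "username=longhorn,password=example"],
    check=True,
)
```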
```diff
@@ -4,7 +4,7 @@ set -e
 set -x

 apt-get update
-apt-get install -y nfs-common cryptsetup dmsetup linux-modules-extra-`uname -r`
+apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
```
💡 Codebase verification
Additional CIFS-related package required for complete functionality
The codebase shows active usage of CIFS for backup stores and mounting operations. While `samba` is being added, the `cifs-utils` package is also required for full CIFS functionality, especially for mounting CIFS shares. Consider adding `cifs-utils` to the package installation list:

```shell
apt-get install -y nfs-common cryptsetup dmsetup samba cifs-utils linux-modules-extra-`uname -r`
```
Key evidence from codebase:
- CIFS backup store configuration: `cifs://longhorn-test-cifs-svc.default/backupstore`
- Multiple scripts handling CIFS operations in test and integration environments
🔗 Analysis chain
LGTM: Package installation aligns with CIFS support requirements.
The addition of the `samba` package is necessary for CIFS support and aligns with the PR objectives. The package is correctly added alongside other essential system packages.
Let's verify if any additional CIFS-related packages might be needed:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for common CIFS-related package dependencies in Ubuntu
# Expected: Should find references to cifs-utils or similar packages in other provisioning scripts
# Search for CIFS-related package installations in other provisioning scripts
rg -l 'cifs-utils|smbclient'
Length of output: 68
Script:
#!/bin/bash
# Let's check for any CIFS-related configurations or mount operations in the codebase
rg -A 3 'cifs|samba|smb' --type sh
# Also check for any documentation or configuration files mentioning CIFS setup
rg -A 3 'cifs|samba|smb' --type yaml --type md
Length of output: 6078
```diff
@@ -10,7 +10,7 @@ fi

 sudo yum update -y
 sudo yum group install -y "Development Tools"
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
```
🛠️ Refactor suggestion
Consider adding cifs-utils package
While samba provides the core functionality, consider adding cifs-utils package which provides additional tools for mounting and managing CIFS shares.
Apply this diff to add the cifs-utils package:
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba cifs-utils
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba cifs-utils
```
```bash
RETRY=0
MAX_RETRY=60
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
  echo 'Waiting azurite pod running'
  sleep 5
  if [ $RETRY -eq $MAX_RETRY ]; then
    break
  fi
  RETRY=$((RETRY+1))
done
```
🛠️ Refactor suggestion
Enhance retry mechanism with proper error handling.
The current retry implementation has several areas for improvement:
- No error handling when max retries are reached
- Magic numbers in retry constants
Consider this improved implementation:
```diff
-  RETRY=0
-  MAX_RETRY=60
+  readonly AZURITE_POD_WAIT_SECONDS=300
+  readonly AZURITE_CHECK_INTERVAL=5
+  end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
   until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
     echo 'Waiting azurite pod running'
-    sleep 5
-    if [ $RETRY -eq $MAX_RETRY ]; then
-      break
-    fi
-    RETRY=$((RETRY+1))
+    sleep ${AZURITE_CHECK_INTERVAL}
+    if [ $(date +%s) -gt $end_time ]; then
+      echo "Error: Timeout waiting for Azurite pod to be ready"
+      exit 1
+    fi
   done
```
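A kubectl-native alternative is also worth considering: `kubectl wait` owns both the polling and the timeout. The sketch below assumes the Azurite pod carries a selectable label such as `app=longhorn-test-azblob`, which is an assumption to verify against the manifest:

```bash
# Sketch: delegate polling and timeout handling to kubectl itself.
# The label selector is assumed; adjust it to what the Azurite deployment sets.
if ! kubectl wait pod -l app=longhorn-test-azblob \
    --for=condition=Ready --timeout=300s; then
  echo "Error: Timeout waiting for Azurite pod to be ready" >&2
  exit 1
fi
```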
```bash
setup_azuitize_backup_store(){
  RETRY=0
  MAX_RETRY=60
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep 5
    if [ $RETRY -eq $MAX_RETRY ]; then
      break
    fi
    RETRY=$((RETRY+1))
  done

  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
  # port forward and az container create need to be run on control node
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
}
```
Several improvements needed in Azurite setup function
The function has several areas that need attention:
- Magic numbers in retry logic
- Missing error handling for SSH commands
- Background port-forward process may persist
- Hardcoded storage account credentials in command
Suggested improvements:
```diff
 setup_azuitize_backup_store(){
+  # Maximum wait time for Azurite pod: 5 minutes
   RETRY=0
-  MAX_RETRY=60
+  MAX_RETRY=30
+  RETRY_INTERVAL=10
   until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
     echo 'Waiting azurite pod running'
-    sleep 5
+    sleep $RETRY_INTERVAL
     if [ $RETRY -eq $MAX_RETRY ]; then
+      echo "Error: Timeout waiting for Azurite pod"
       break
     fi
     RETRY=$((RETRY+1))
   done

+  # Verify pod is actually running
+  if ! kubectl get pods | grep 'longhorn-test-azblob' | grep -q 'Running'; then
+    echo "Error: Azurite pod failed to start"
+    return 1
+  fi

   AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
   kubectl -n longhorn-system patch secret azblob-secret \
     --type=json \
     -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

   CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
   CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
+
+  # Store port-forward PID for cleanup
+  PORT_FORWARD_PID_FILE="/tmp/azurite_port_forward.pid"
+
   # port forward and az container create need to be run on control node
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 & echo \$! > ${PORT_FORWARD_PID_FILE}"; then
+    echo "Error: Failed to start port forwarding"
+    return 1
+  fi
+
+  # Load connection string from secret or config to avoid hardcoding
+  STORAGE_CONN_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=\${AZURITE_ACCOUNT_KEY};BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;"
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${STORAGE_CONN_STRING}'"; then
+    echo "Error: Failed to create storage container"
+    # Cleanup port-forward
+    ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "kill \$(cat ${PORT_FORWARD_PID_FILE})"
+    return 1
+  fi
 }
```
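After the secret is patched, a quick sanity check (a sketch, not part of the suggested diff) can confirm the stored endpoint decodes to the in-cluster Azurite address before any backup test runs:

```bash
# Decode the patched value; it should look like http://<cluster-ip>:10000/
kubectl -n longhorn-system get secret azblob-secret \
  -o jsonpath='{.data.AZBLOB_ENDPOINT}' | base64 -d
```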
longhorn/longhorn-9699 Signed-off-by: Chris <[email protected]>
Force-pushed from 3c57f3b to 19842e9.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
manager/integration/README.md (2)
19-19: Improve documentation formatting and accessibility

- The bare URL should be properly formatted as a markdown link
- Consider making the documentation publicly accessible or providing alternative public documentation

Apply this diff to improve the formatting:

```diff
-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, additional manual steps are required after deploying the manifest. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.
```

Also applies to: 21-21
24-26: Improve command block formatting and readability

- Add a language specification to the code block
- Consider improving command readability with consistent line continuations

Apply this diff to improve the formatting:

````diff
-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
 ```
````

e2e/README.md (1)
20-20: Add language specifications to code blocks.

For better readability and syntax highlighting, specify the language for all code blocks. For example:

````diff
-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
````

🧰 Tools
🪛 Markdownlint
20: Fenced code blocks should have a language specified (MD040, fenced-code-language)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (23)

- e2e/README.md (1 hunks)
- manager/integration/README.md (1 hunks)
- manager/integration/deploy/test.yaml (1 hunks)
- manager/integration/tests/backupstore.py (14 hunks)
- manager/integration/tests/common.py (1 hunks)
- pipelines/utilities/install_backupstores.sh (1 hunks)
- pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
- pipelines/utilities/run_longhorn_test.sh (2 hunks)
- test_framework/scripts/longhorn-setup.sh (3 hunks)
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (2 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1 hunks)
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- manager/integration/deploy/test.yaml
- manager/integration/tests/backupstore.py
- pipelines/utilities/install_backupstores.sh
- pipelines/utilities/run_longhorn_e2e_test.sh
- pipelines/utilities/run_longhorn_test.sh
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
🧰 Additional context used
🪛 Markdownlint
e2e/README.md
- 19: Bare URL used (MD034, no-bare-urls)
- 20: Fenced code blocks should have a language specified (MD040, fenced-code-language)

manager/integration/README.md
- 21: Bare URL used (MD034, no-bare-urls)
- 22: Fenced code blocks should have a language specified (MD040, fenced-code-language)
🔇 Additional comments (11)
manager/integration/README.md (1)
24-26: Verify manifest URLs are accessible
Let's ensure all the manifest URLs are valid and accessible.
✅ Verification successful
All manifest URLs are accessible and valid
All the referenced manifest URLs in the README.md file are verified to be accessible:
- nfs-backupstore.yaml
- cifs-backupstore.yaml
- azurite-backupstore.yaml
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that all manifest URLs are accessible
echo "Checking manifest URLs..."
for manifest in minio-backupstore.yaml nfs-backupstore.yaml cifs-backupstore.yaml azurite-backupstore.yaml; do
  url="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/$manifest"
  if curl --output /dev/null --silent --head --fail "$url"; then
    echo "✓ $manifest is accessible"
  else
    echo "✗ $manifest is not accessible"
  fi
done
```
Length of output: 1876
e2e/README.md (3)
17-17: LGTM!
The description clearly indicates the addition of CIFS and Azurite backup store servers.
19-19: 🛠️ Refactor suggestion
Improve documentation and consider automation.
- Format the URL as a proper markdown link:
```diff
-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, there are some manual steps need to be done after manifest deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```
- Consider adding:
- Brief explanation of why manual steps are required
- Estimated time for manual setup
- Impact on automation pipelines
Note: This aligns with the previous review comment about eliminating manual steps. Consider automating this process to improve user experience and CI reliability.
🧰 Tools
🪛 Markdownlint
19: Bare URL used (MD034, no-bare-urls)
22-24: LGTM! The deployment commands are clear and well-structured.
The addition of CIFS and Azurite YAML files aligns with the PR objectives.
Let's verify the existence of the new YAML files:
✅ Verification successful
Let me verify the URLs directly since the local files weren't found.
All backup store YAML files are accessible and valid
The verification confirms that all three YAML files (nfs-backupstore.yaml, cifs-backupstore.yaml, and azurite-backupstore.yaml) are accessible at the specified URLs in the Longhorn repository, with HTTP 200 responses and valid content lengths. The deployment commands in the README are correct and functional.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the existence of backup store YAML files
# Expected: Both YAML files should exist in the repository
echo "Checking for CIFS and Azurite YAML files..."
for file in "cifs-backupstore.yaml" "azurite-backupstore.yaml"; do
  fd -t f "$file" || echo "Warning: $file not found"
done
```
Length of output: 293
Script:
```bash
#!/bin/bash
# Check if the YAML files are accessible at the specified URLs
for url in "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml" \
           "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" \
           "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"; do
  echo "Checking URL: $url"
  curl -s -f -I "$url" || echo "Warning: Failed to access $url"
done
```
Length of output: 4666
test_framework/scripts/longhorn-setup.sh (5)
336-337: LGTM: New backup store URLs added.
The new URLs for CIFS and Azurite backup stores follow the established pattern and correctly point to the master branch.
339-342: LGTM: Backup stores installation updated.
The changes correctly add CIFS and Azurite backup stores to the installation process, maintaining consistency with the existing pattern.
345-367: Reusing existing review comment.
The previous review comment about improvements needed in the Azurite setup function is still valid and applicable.
427-432: Reusing existing review comment.
The previous review comment about extracting duplicated CIFS handling logic is still valid and applicable to these changes.
487-492: Reusing existing review comment.
The previous review comment about extracting duplicated CIFS handling logic is still valid and applicable to these changes.
manager/integration/tests/common.py (2)
3898-3900: LGTM! Clean implementation for CIFS backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "cifs://" prefix.
3902-3904: LGTM! Clean implementation for Azurite backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "azblob://" prefix.
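For readers following the shell side of the pipeline, an equivalent prefix check might look like the sketch below; the helper name is hypothetical, not a function from this repo, and the example URL is illustrative:

```bash
# Hypothetical shell-side counterpart of the Python prefix checks.
backup_store_type_of() {
  case "$1" in
    s3://*)     echo "s3" ;;
    nfs://*)    echo "nfs" ;;
    cifs://*)   echo "cifs" ;;
    azblob://*) echo "azblob" ;;
    *)          echo "unknown" ;;
  esac
}

backup_store_type_of "cifs://longhorn-test-cifs-svc.default/backupstore"  # -> cifs
```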
Updated!

The title and description of this PR are still incorrect.
Which issue(s) this PR fixes:
Issue #9699
What this PR does / why we need it:
ci: support cifs and azurite on pytest and e2e; set `BACKUP_STORE_TYPE` on Jenkins to `cifs` or `azurite`.

Special notes for your reviewer:

Currently Azurite is only supported on the SLES pipeline; supporting other distros will need more time to implement.
Verified on a local Jenkins instance.
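For reviewers who want to reproduce this locally, the intended usage is roughly the sketch below; the exact entrypoint is an assumption, since the real wiring lives in the Jenkins job configuration:

```bash
# Illustrative only: the Jenkins job exports the variable before the test
# entrypoint runs; the script path here is an assumption.
export BACKUP_STORE_TYPE=cifs   # or: azurite
bash ./pipelines/utilities/run_longhorn_test.sh
```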
Additional documentation or context
N/A
Summary by CodeRabbit

Release Notes

- New Features
- Documentation
- Chores
  - Added the `samba` package in various provisioning scripts to support CIFS functionality.
  - Updated provisioning scripts to include `samba` installation.