
Tag listing does not respect proxy config #573

Closed
evanstoner opened this issue Aug 2, 2024 · 1 comment
evanstoner commented Aug 2, 2024

Important

This bug has already been triaged by CrowdStrike engineering and was fixed in #569. We'll use this issue to track a workaround until the fix is released.

What happens

In an environment that requires a proxy for egress, the operator is unable to automatically list the available image tags from the CrowdStrike registry, which it needs in order to determine what sensor image to deploy. To confirm this, check the logs of the manager container in the falcon-operator-controller-manager-XXX pod (falcon-operator namespace) for output similar to the following:

Unable to get http response to list container registry tags: Get \"https://registry.crowdstrike.com/v2/falcon-sensor/us-2/release/falcon-sensor/tags/list\": dial tcp 44.241.67.109:443: connect: connection timed out

This bug was introduced in #549 when working around a bug in Artifactory.
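The proxy configuration in question is the conventional set of egress environment variables. A quick illustration (the values below are hypothetical examples for a restricted-egress cluster, not real endpoints):

```shell
# Hypothetical proxy settings for a restricted-egress environment.
export HTTPS_PROXY="http://proxy.internal:3128"
export NO_PROXY=".cluster.local,.svc"

# Well-behaved HTTP clients (curl, Go's default http.Transport) consult
# these variables; the tag-listing client did not, so it dialed
# registry.crowdstrike.com directly and the TCP connect timed out.
env | grep -i proxy
```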

Workaround

This bug occurs when the operator is allowed to automatically select the container image from the CrowdStrike registry. To work around it, manually specify a container image and pull token instead. This workaround is fully supported because it uses existing configuration options that the operator provides.

Obtain CID, pull token, and image

Refer to the Falcon container pull script for more details.

  1. Set your FALCON_CLIENT_ID, FALCON_CLIENT_SECRET, and FALCON_CLOUD environment variables (you can use the same API client created for the operator).
  2. ./falcon-container-sensor-pull.sh --get-cid outputs your customer ID (CID).
  3. ./falcon-container-sensor-pull.sh --get-pull-token outputs a base64-encoded Docker config JSON.
  4. ./falcon-container-sensor-pull.sh -t falcon-sensor --get-image-path outputs the full name of the latest sensor image.
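The pull token from step 3 is base64-encoded Docker config JSON, and you can sanity-check it locally before embedding it in a Secret. A sketch, using a fabricated token for illustration (substitute the real `--get-pull-token` output):

```shell
# Fabricated example token; in practice, use the output of
# ./falcon-container-sensor-pull.sh --get-pull-token
PULL_TOKEN=$(printf '{"auths":{"registry.crowdstrike.com":{"auth":"dXNlcjpwYXNz"}}}' | base64 -w0)

# Decoding a valid token yields JSON with an "auths" entry for the registry.
DECODED=$(echo "$PULL_TOKEN" | base64 -d)
echo "$DECODED"
```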

Deploy the FalconNodeSensor

  1. If you already created a FalconNodeSensor, delete it. Wait until it is deleted (disappears from the console and oc get output).
  2. Create the falcon-system namespace.
  3. Create an image pull secret in falcon-system called falcon-manual-pull-secret, using the pull token output from step 3 of the previous section:
kind: Secret
apiVersion: v1
metadata:
  name: falcon-manual-pull-secret
  namespace: falcon-system
data:
  # TODO: copy from step 3
  .dockerconfigjson: OUTPUT_FROM_STEP_3
type: kubernetes.io/dockerconfigjson
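Equivalently, you can render the manifest with the token spliced in and pipe it to `oc apply -f -`. A sketch (the token below is a fabricated placeholder; use your real pull token):

```shell
# Fabricated placeholder token; substitute the real --get-pull-token output.
PULL_TOKEN="eyJhdXRocyI6e319"

# Render the Secret manifest. Note the token goes into data: as-is,
# because it is already base64-encoded — do not encode it a second time.
# In practice, pipe this to `oc apply -f -`.
MANIFEST=$(cat <<EOF
kind: Secret
apiVersion: v1
metadata:
  name: falcon-manual-pull-secret
  namespace: falcon-system
data:
  .dockerconfigjson: ${PULL_TOKEN}
type: kubernetes.io/dockerconfigjson
EOF
)
echo "$MANIFEST"
```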
  4. Create a FalconNodeSensor that references your CID from step 2 above, the new pull secret, and the image from step 4 (see the basic example below; you may need to customize it further for your environment):
apiVersion: falcon.crowdstrike.com/v1alpha1
kind: FalconNodeSensor
metadata:
  name: falcon-node-sensor
spec:
  falcon:
    apd: false
    tags:
      - daemonset
    trace: none
    # TODO: set your CID from step 2
    cid: YOUR_CID_FROM_STEP_2
  installNamespace: falcon-system
  node:
    # TODO: this is an example for US-2, you may need a different version tag or cloud region
    image: registry.crowdstrike.com/falcon-sensor/us-2/release/falcon-sensor:7.18.0-17106-1.falcon-linux.Release.US-2
    imagePullSecrets:
      # this was created in step 3 above
      - name: falcon-manual-pull-secret
    imagePullPolicy: Always
    backend: bpf
    terminationGracePeriod: 30
    disableCleanup: false
    tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        operator: Exists
    updateStrategy:
      type: RollingUpdate
  5. After a few moments, confirm that your FalconNodeSensor shows the Success condition and that falcon-node-sensor-XXX pods appear in the falcon-system namespace.
@evanstoner
Contributor Author

This has been resolved in v1.2.0: https://github.com/CrowdStrike/falcon-operator/releases/tag/v1.2.0
