Deploying the sensor using Docker Hub on Kubernetes

Prerequisites:

  • A Kubernetes cluster should be up and running (a quick check is sketched below this list)
  • Kubernetes nodes should be able to communicate with Docker Hub or your private registry
  • The container sensor image should be available in the private registry if you are installing from there
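As an optional sanity check, assuming kubectl is already configured against the target cluster, confirm that the cluster is reachable and the nodes are Ready before deploying the sensor:

kubectl get nodes -o wide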

Modify the cssensor-ds.yml file

Create a new yml file containing the following information and name it cssensor-ds.yml, or download the yml file directly from https://github.com/Qualys/cs_sensor
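For example, one way to obtain the file is to clone the GitHub repository (this sketch assumes git is installed on the machine where you prepare the deployment):

git clone https://github.com/Qualys/cs_sensor.git
cd cs_sensor
# cssensor-ds.yml is included in this repository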

The field alignment in the yml file is very important. Please make sure to honor the formatting provided in the below template.

kind: List
apiVersion: v1
items:
 - kind: Namespace
   apiVersion: v1
   metadata:
      name: qualys
 # Service Account
 - kind: ServiceAccount
   apiVersion: v1
   metadata:
     name: qualys-service-account
     namespace: qualys
 # Role for read/write/delete permission to qualys namespace
 - kind: Role
   # if k8s version is 1.17 and earlier then change apiVersion to "rbac.authorization.k8s.io/v1beta1"
   apiVersion: rbac.authorization.k8s.io/v1
   metadata:
     name: qualys-reader-role
     namespace: qualys
   rules:
   - apiGroups: ["","batch"]
     resources: ["pods","jobs"]
     verbs: ["get", "list", "watch", "create", "delete", "deletecollection"]
   - apiGroups: [""]
     resources: ["pods/status"]
     verbs: ["get"]
   - apiGroups: [""]
     resources: ["pods/attach", "pods/exec"]
     verbs: ["create"]
 - kind: ClusterRole
   # if k8s version is 1.17 and earlier then change apiVersion to "rbac.authorization.k8s.io/v1beta1"
   apiVersion: rbac.authorization.k8s.io/v1
   metadata:
     name: qualys-cluster-reader-role
   rules:
   - apiGroups: [""]
     resources: ["nodes", "pods/status", "replicationcontrollers/status", "nodes/status"]
     verbs: ["get"]
   - apiGroups: ["apps"]
     resources: ["replicasets/status", "daemonsets/status", "deployments/status", "statefulsets/status"]
     verbs: ["get"]
   - apiGroups: ["batch"]
     resources: ["jobs/status", "cronjobs/status"]
     verbs: ["get"]
 # RoleBinding to assign permissions in qualys-reader-role to qualys-service-account
 - kind: RoleBinding
   # if k8s version is 1.17 and earlier then change apiVersion to "rbac.authorization.k8s.io/v1beta1"
   apiVersion: rbac.authorization.k8s.io/v1
   metadata:
     name: qualys-reader-role-rb
     namespace: qualys
   subjects:
   - kind: ServiceAccount
     name: qualys-service-account
     namespace: qualys
   roleRef:
     kind: Role
     name: qualys-reader-role
     apiGroup: rbac.authorization.k8s.io
 - kind: ClusterRoleBinding
   # if k8s version is 1.17 and earlier then change apiVersion to "rbac.authorization.k8s.io/v1beta1"
   apiVersion: rbac.authorization.k8s.io/v1
   metadata:
     name: qualys-cluster-reader-rb
   subjects:
   - kind: ServiceAccount
     name: qualys-service-account
     namespace: qualys
   roleRef:
     kind: ClusterRole
     name: qualys-cluster-reader-role
     apiGroup: rbac.authorization.k8s.io
 # Qualys Container Sensor pod with
 - apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: qualys-container-sensor
     namespace: qualys
     labels:
       k8s-app: qualys-cs-sensor
   spec:
     selector:
       matchLabels:
         name: qualys-container-sensor
     updateStrategy:
         type: RollingUpdate
     template:
       metadata:
         labels:
           name: qualys-container-sensor
       spec:
         #tolerations:
         # this toleration is to have the daemonset runnable on master nodes
         # remove the comment if you want your masters to run the sensor pod
         #- key: node-role.kubernetes.io/master
         #  effect: NoSchedule
         serviceAccountName: qualys-service-account
         containers:
         - name: qualys-container-sensor
           image: qualys/qcs-sensor:latest
           imagePullPolicy: IfNotPresent
           resources:
             limits:
               cpu: "0.5" # Default CPU usage limit on each node for sensor.
           args: ["--k8s-mode"]
           env:
           - name: CUSTOMERID
             value: __customerId
           - name: ACTIVATIONID
             value: __activationId
           - name: POD_URL
             value:
           - name: QUALYS_SCANNING_CONTAINER_LAUNCH_TIMEOUT
             value: "10"
           # uncomment (and indent properly) below section if using Docker HTTP socket with TLS
           #- name: DOCKER_TLS_VERIFY
           #  value: "1"
           # uncomment (and indent properly) below section if proxy is required to connect Qualys Cloud
           #- name: qualys_https_proxy
           #  value: <proxy FQDN or Ip address>:<port#>
           - name: QUALYS_POD_NAME
             valueFrom:
               fieldRef:
                 fieldPath: metadata.name
           - name: QUALYS_POD_NAMESPACE
             valueFrom:
               fieldRef:
                 fieldPath: metadata.namespace
           volumeMounts:
           - mountPath: /var/run/docker.sock
             name: socket-volume
             readOnly: true
           - mountPath: /usr/local/qualys/qpa/data
             name: persistent-volume
           - mountPath: /usr/local/qualys/qpa/data/conf/agent-data
             name: agent-volume
           # uncomment (and indent properly) below section if proxy (with CA cert) required to connect Qualys Cloud
           #- mountPath: /etc/qualys/qpa/cert/custom-ca.crt
           #  name: proxy-cert-path
           # uncomment (and indent properly) below section if using Docker HTTP socket with TLS
           #- mountPath: /root/.docker
           #  name: tls-cert-path
           securityContext:
             allowPrivilegeEscalation: false
         volumes:
           - name: socket-volume
             hostPath:
               path: /var/run/docker.sock
               type: Socket
           - name: persistent-volume
             hostPath:
               path: /usr/local/qualys/sensor/data
               type: DirectoryOrCreate
           - name: agent-volume
             hostPath:
               path: /etc/qualys
               type: DirectoryOrCreate
           # uncomment (and indent properly) below section if proxy (with CA cert) required to connect Qualys Cloud
           #- name: proxy-cert-path
           #  hostPath:
           #    path: <proxy certificate path>
           #    type: File
           # uncomment (and indent properly) below section if using Docker HTTP socket with TLS
           #- name: tls-cert-path
           #  hostPath:
           #    path: <Path of directory of client certificates>
           #    type: Directory
         hostNetwork: true

 

The Qualys Container Sensor DaemonSet should be deployed in the 'qualys' namespace under a ServiceAccount with adequate permissions to communicate with the Kubernetes API Server. The Role, ClusterRole, RoleBinding, and ClusterRoleBinding are used to assign the necessary permissions to the ServiceAccount. If you already have the Qualys Container Sensor running in a namespace other than 'qualys', first uninstall it from that namespace and then deploy it fresh in the 'qualys' namespace.

You’ll need these permissions:

  • get, list, watch - to monitor the resources in the 'qualys' namespace
  • create, delete, deletecollection - to spawn containers for image vulnerability assessment in the 'qualys' namespace and then clean up after itself (a quick permission check is sketched below)
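As an optional check once the manifest is applied, you can confirm that the ServiceAccount actually received these permissions (each command should print "yes"):

kubectl auth can-i list pods --as=system:serviceaccount:qualys:qualys-service-account -n qualys
kubectl auth can-i create jobs --as=system:serviceaccount:qualys:qualys-service-account -n qualys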

Modify parameters in the yaml file

Copy the cssensor-ds.yml file to the Kubernetes cluster's master node, then modify it by providing values for the following parameters. In order for the yml file to work properly, ensure that you only update the parameters/sections specified below. Note that you can download the yml file directly from https://github.com/Qualys/cs_sensor

Uncomment the tolerations section under spec if you want the sensor DaemonSet to be deployed on master nodes.

spec:
  #tolerations:
  # this toleration is to have the daemonset runnable on master nodes
  # remove the comment if you want your masters to run the sensor pod
  #- key: node-role.kubernetes.io/master
  #  effect: NoSchedule

 

If you want to prioritize Qualys Sensor PODs:

Locate and uncomment the lines below in the .yml file, and change the PriorityClass value under kind: PriorityClass as per your requirement.

#- kind: PriorityClass
#  apiVersion: scheduling.k8s.io/v1
#  metadata:
#    name: qualys-priority-class
#  value: 0
#  preemptionPolicy: PreemptLowerPriority
#  description: Priority class for daemonset

 

Also, locate the priorityClassName setting shown below and uncomment it.

#priorityClassName: qualys-priority-class

 

In order for the yml file to work properly, ensure that you do not remove/comment out the sections mentioned below. Also ensure that the Docker Hub/private registry where the CS Sensor image is published is accessible from all Kubernetes nodes.

containers:
- name: qualys-container-sensor
  image: <CS Sensor image name in the private/docker hub registry>
  args: ["--k8s-mode"]

 

Make sure all nodes that will run the sensor pod have access to the private or Docker Hub registry where the sensor image is stored.
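For example, on a Docker-based node you could verify registry access with a manual pull of the sensor image (shown with the Docker Hub image name used in the template above; substitute your private registry path if that is where the image is published):

docker pull qualys/qcs-sensor:latest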

If you want to deploy the sensor for a CI/CD environment, provide the args value as:

args: ["--k8s-mode","--cicd-deployed-sensor"]

 

If you want to deploy a Registry Sensor, provide the args value as:

args: ["--k8s-mode","--registry-sensor"]

 

If you want to print logs on the console, provide "--enable-console-logs" as an additional value in args.

If you want to change the log level, provide "--log-level", "<a number between 0 and 5>" as an additional value in args. For example, if you want trace-level logs, provide:

args: ["--k8s-mode", "--log-level", "5"]

 

If you want to launch the sensor with a scan thread value other than the default 4, provide "--scan-thread-pool-size", "<number of threads>" as an additional value in args.

args: ["--k8s-mode", "--scan-thread-pool-size", "6"]

 

If you want to define the number of archived qpa.log files to be generated, provide "--log-filepurgecount", "<number of files>" as an additional value in args. The default, "--log-filepurgecount", "5", is applied via config. Note that there will always be a current qpa.log file in the log/ directory.

If you want to define the number of archived qpa.log files to be generated and the size per log file, provide "--log-filesize", "<file size>" (where "K" means kilobyte and "M" means megabyte) and "--log-filepurgecount", "<number of files>" as additional values in args. The defaults, "--log-filesize": "10M" and "--log-filepurgecount": "5", are applied via config.

args: ["--k8s-mode", "--log-filesize", "5M", "--log-filepurgecount", "4"]

 

If you want image scanning pods to be instantiated using the Kubernetes-native `kubectl run` command, provide "--use-kubectl" as an additional value in args. In this case, the sensor uses native Kubernetes facilities to launch image scans. When this argument is omitted, image containers are launched using docker run.

args: ["--k8s-mode", "--use-kubectl"]

 

If TLS authentication is enabled, specify the docker client certificate, client private key, and CA certificate names in the args:

args: ["--k8s-mode", "--tls-cacert","<file name of the CA certificate that was used to sign docker server certificate>", "--tls-cert", "<docker client certificate file name>", "--tls-key", "<docker client private key file name>"]

If any of the three files has a default name (ca.pem, cert.pem, and key.pem, respectively), the corresponding argument can be omitted.

 

If you want to mask environment variables for images and containers in sensor logs and in the Container Security UI, add the "--mask-env-variable" parameter to args:

args: ["--k8s-mode", "--mask-env-variable"]

 

If you want to disable image scans for General Sensor (supported for all Runtimes), add the "--disableImageScan" parameter to args:

args: ["--k8s-mode", "--disableImageScan"]

 

If you want to specify a scanning policy, add the "--scanning-policy" parameter to args. The available values for the scanning policy are: "DynamicScanningOnly", "StaticScanningOnly", and "DynamicWithStaticScanningAsFallback".

args: ["--k8s-mode", "--scanning-policy", "StaticScanningOnly"]

 

If you want to disable log4j vulnerability scanning on your container images, add the
"--disable-log4j-scanning" parameter to args:

args: ["--k8s-mode", "--disable-log4j-scanning"]

 

If you want to disable log4j static detection for dynamic/static image scans, add the
"--disable-log4j-static-detection" parameter to args:

args: ["--k8s-mode", "--disable-log4j-static-detection"]

 

If you want to optimize image scans for General Sensor, add the "--optimize-image-scans" parameter to args:

args: ["--k8s-mode", "--optimize-image-scans"]

 

If you want to enable secret detection for container images, add the "--perform-secret-detection" parameter to args:

args: ["--k8s-mode", "--container-runtime", "containerd", "--perform-secret-detection"]

 

If you want to enable malware detection for container images, add the "--perform-malware-detection" parameter to args:

args: ["--k8s-mode", "--registry-sensor", "--perform-malware-detection"]

 

If you want to limit the memory usage of the sensor container, add the "--limit-resource-usage" parameter to args:

args: ["--k8s-mode", "--limit-resource-usage"]

 

Under resources, specify the following:

resources:
  limits:
    cpu: "0.5" # Default CPU usage limit on each node for sensor.

You can edit the CPU usage. For example, to limit the CPU usage to 5%, set resources:limits:cpu: "0.05". This limits the CPU usage to 5% of one core on the host.

If there are multiple processors on a node, setting the resources:limits:cpu value applies the CPU limit to one core only. For example, if you have 4 CPUs on the system and you want to set the CPU limit to 20% of overall CPU capacity, then the CPU limit should be set to 0.8, i.e., 80% of one core, which equals 20% of total CPU capacity.
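For instance, a minimal sketch of that 4-CPU example (the value comes straight from the calculation above) would be:

resources:
  limits:
    cpu: "0.8" # 80% of one core = 20% of total capacity on a 4-CPU node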

To disable any CPU usage limit, set resources:limits:cpu value to 0.

Optionally, if you want to specify the memory resources for the Container Sensor, you can specify them under resources. Recommended values for the Container Sensor's memory requests and memory limits are:

resources:
  limits:
    cpu: "0.5" # Default CPU usage limit on each node for sensor
    memory: "500Mi"
  requests:
    memory: "300Mi"

 

If you want to specify the memory resources for SCA Scan, Secret Scan, or Malware Scan, you can specify them under resources. It is recommended to use the "--limit-resource-usage" argument while installing the sensor. Recommended values for the Container Sensor's memory requests and memory limits for the above-mentioned scan types are:

resources:
  limits:
    cpu: "0.5" # Default CPU usage limit on each node for sensor
    memory: "3.5Gi"
  requests:
    memory: "1.5Gi"

 

The above recommendation is based on an average image size of 1.5 GB; if your images are larger, increase the memory limit accordingly.

 

When either of the memory resource values (limits or requests) is specified for the Container Sensor and "--use-kubectl" is supplied in args, we automatically apply both memory requests and memory limits to image scanning containers. Default values are 200Mi and 700Mi, respectively.

Additionally, you could overwrite one or both values by specifying the following variables under env. In this example, the values were changed to 300Mi and 800Mi.

- name: QUALYS_SCANNING_CONTAINER_MEMORYREQUESTMB
  value: "300Mi"
- name: QUALYS_SCANNING_CONTAINER_MEMORYLIMITMB
  value: "800Mi"

 

Under env specify the following:

Activation ID (Required)

- name: ACTIVATIONID
  value: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

 

Customer ID (Required)

- name: CUSTOMERID
  value: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

 

Specify POD_URL when using the Docker Hub image. Otherwise, remove it.

- name: POD_URL
  value: <Specify POD URL>

 

Specify the scanning container launch timeout in minutes. If this env variable is not present, then 10 minutes is the default.

- name: QUALYS_SCANNING_CONTAINER_LAUNCH_TIMEOUT
  value: "10"

 

With each scan, we check the node status to see if the node is schedulable or not, and launch the scan only if the node is schedulable. If the node status indicates that the node is unschedulable, then we retry the scan after a default interval of 15 minutes. You can increase or decrease the time the sensor waits before retrying the scan by specifying a different scan retry interval in minutes.

- name: UNSCHEDULABLE_NODE_SCAN_RETRY_INTERVAL
  value: "30"

 

To enable TLS authentication, uncomment the following two lines.

- name: DOCKER_TLS_VERIFY
  value: "1"

 

To disable TLS, use DOCKER_TLS_VERIFY="" (empty string), or remove the variable, or keep it commented in the yml file.

- By enabling sensor communication with the docker daemon over TLS, customers can restrict the sensor's access to the docker socket by using a docker authorization plugin.
- When TLS authentication is enabled and DOCKER_HOST is not specified, the sensor automatically detects the FQDN of the worker node it is running on and sets the DOCKER_HOST environment variable inside the sensor to <FQDN>:<2376>, where 2376 is the default TLS TCP port for the docker daemon.

You can set DOCKER_HOST yourself to 127.0.0.1:<port#> or localhost:<port#> by adding:

- name: DOCKER_HOST
  value: "<loopback IPv4 address or hostname>:<port#>"

 

Please make sure that the FQDN, hostname, or IPv4 address set in DOCKER_HOST matches the CN or Subject Alternative Name in the docker server certificate on each worker node.
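As an illustrative sketch only (the hostname below is a placeholder, not a value from this guide), the two env entries together might look like:

- name: DOCKER_TLS_VERIFY
  value: "1"
- name: DOCKER_HOST
  value: "worker-node-01.example.com:2376" # placeholder; must match the CN/SAN in that node's docker server certificate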

Uncomment and indent properly the proxy information, or keep it as is if not required:

- For the communication between the sensor and the backend (Container Management Service):

#- name: qualys_https_proxy
#  value: <proxy FQDN or Ip address>:<port#>

 

- For a registry sensor version 1.21.0 or later used for a public registry:

#- name: https_proxy
#  value: <proxy FQDN or Ip address>:<port#>

 

Uncomment tls-cert-path under volumes if TLS authentication needs to be enabled and provide directory path for client certificates, or keep it as is if not required:

#- name: tls-cert-path
#  hostPath:
#    path: <Path of directory of client certificates>
#    type: Directory

 

Uncomment proxy-cert-path under volumes, or keep it as is if not required:

#- name: proxy-cert-path
#  hostPath:
#    path: /root/cert/proxy-certificate.crt
#    type: File

 

Activation ID and Customer ID are required. Use the Activation ID and Customer ID from your subscription. To get them, log in to the Container Security UI, go to Configurations > Sensors, click Download, and then click any sensor type. The installation command on the Installation Instructions screen contains your Activation ID and Customer ID. The Activation ID is like a password; do not share it.

If you are using an https proxy, ensure that all Kubernetes nodes have a valid certificate file for the sensor to communicate with the Container Management Server.

If you are not using a proxy and you have kept the above-mentioned parts commented, you can keep the following part in volumeMounts commented as well:

#- mountPath: /etc/qualys/qpa/cert/custom-ca.crt
#  name: proxy-cert-path

 

If you are not using TLS and you have kept the above-mentioned parts commented, you can keep the following part in volumeMounts commented as well:

#- mountPath: /root/.docker
#  name: tls-cert-path

Remove hostNetwork: true when IP identification is not necessary

Remove/comment out the hostNetwork: true option to follow the security best practice of least privilege, when host IP address identification is not necessary.

hostNetwork: true

 

The hostNetwork: true option provides the sensor the ability to detect the Docker host’s IP address. The user can remove/comment out the hostNetwork: true line in the yaml file, however a drawback is that the UI will show the docker bridge network IP address as the host IP address. To aid with host identification, the hostname appears in the UI so customers can identify the Docker host by hostname. We also recommend that the hostname be unique within the environment.

Deploying the Container Sensor DaemonSet

Once you have created the cssensor-ds.yml file, run the following command on the Kubernetes master to create the DaemonSet:

kubectl create -f cssensor-ds.yml
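To confirm the sensor pods come up, you can optionally check the DaemonSet and its pods in the 'qualys' namespace:

kubectl get ds -n qualys
kubectl get pods -n qualys -o wide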

Removing the Container Sensor DaemonSet

If you need to uninstall the Qualys Container Sensor, run the following command on the Kubernetes master:

kubectl delete -f cssensor-ds.yml

Upgrading the Container Sensor DaemonSet

Perform the following steps on the Kubernetes master to update the Container Sensor DaemonSet to a new version.

Ensure that the Container Sensor DaemonSet is running in the Kubernetes environment.

  1. Get the name of the Container Sensor DaemonSet
    kubectl get ds -n qualys
  2. Update the image for the DaemonSet to use the new sensor image (a filled-in example follows these steps)
    kubectl set image ds/<daemonset-name> -n qualys <container-name>=<container-new-image>
  3. Monitor the rollout status - this may take a while depending on the number of nodes
    kubectl rollout status daemonset <daemonset-name> -n qualys
  4. If something goes wrong during the rollout, roll back the DaemonSet to use the previous image
    kubectl rollout undo daemonset <daemonset-name> -n qualys
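For illustration only, here is step 2 filled in with the DaemonSet and container names defined in cssensor-ds.yml above (the image tag is a placeholder; use the version you are upgrading to):

kubectl set image ds/qualys-container-sensor -n qualys qualys-container-sensor=qualys/qcs-sensor:<new-version>
kubectl rollout status daemonset qualys-container-sensor -n qualys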