Deploy in Kubernetes - Docker Runtime

This section assumes you have the sensor image; see Obtain the Container Sensor Image.

Integrate the Container Sensor into the DaemonSet like other application containers; the DaemonSet ensures there is always a sensor deployed on each Docker host. This information is applicable to Amazon Elastic Container Service for Kubernetes (Amazon EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS).

Perform the following steps to create a DaemonSet that deploys the Qualys sensor in Kubernetes.

Ensure that the Container Sensor has read and write access to the persistent storage and the docker daemon socket.

Modify the cssensor-ds.yml file

Below is the Kubernetes DaemonSet deployment template that can be used to deploy the Qualys Container Sensor. For your convenience, this template is available in QualysContainerSensor.tar.xz as the cssensor-ds.yml file. Note that you can download the yaml file directly from https://github.com/Qualys/cs_sensor

The field alignment in the yaml file is very important. Please make sure to honor the formatting provided in the template.

kind: List

apiVersion: v1

items:

 - kind: Namespace

   apiVersion: v1

   metadata:

     name: qualys

 # Service Account

 - kind: ServiceAccount

   apiVersion: v1

   metadata:

     name: qualys-service-account

     namespace: qualys

 # Role for read/write/delete permission to qualys namespace

 - kind: Role

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-reader-role

     namespace: qualys

   rules:

   - apiGroups: ["","batch"]

     resources: ["pods","jobs"]

     verbs: ["get", "list", "watch", "create", "delete", "deletecollection"]

   - apiGroups: [""]

     resources: ["pods/status"]

     verbs: ["get"]

   - apiGroups: [""]

     resources: ["pods/attach", "pods/exec"]

     verbs: ["create"]

 - kind: ClusterRole

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-cluster-reader-role

   rules:

   - apiGroups: [""]

     resources: ["nodes", "pods/status", "replicationcontrollers/status", "nodes/status"]

     verbs: ["get"]

   - apiGroups: ["apps"]

     resources: ["replicasets/status", "daemonsets/status", "deployments/status", "statefulsets/status"]

     verbs: ["get"]

   - apiGroups: ["batch"]

     resources: ["jobs/status", "cronjobs/status"]

     verbs: ["get"]

 # RoleBinding to assign permissions in qualys-reader-role to qualys-service-account

 - kind: RoleBinding

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-reader-role-rb

     namespace: qualys

   subjects:

   - kind: ServiceAccount

     name: qualys-service-account

     namespace: qualys

   roleRef:

     kind: Role

     name: qualys-reader-role

     apiGroup: rbac.authorization.k8s.io

 - kind: ClusterRoleBinding

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-cluster-reader-rb

   subjects:

   - kind: ServiceAccount

     name: qualys-service-account

     namespace: qualys

   roleRef:

     kind: ClusterRole

     name: qualys-cluster-reader-role

     apiGroup: rbac.authorization.k8s.io

 # Qualys Container Sensor pod

 - apiVersion: apps/v1

   kind: DaemonSet

   metadata:

     name: qualys-container-sensor

     namespace: qualys

     labels:

       k8s-app: qualys-cs-sensor

   spec:

     selector:

       matchLabels:

         name: qualys-container-sensor

     updateStrategy:

         type: RollingUpdate

     template:

       metadata:

         labels:

           name: qualys-container-sensor

       spec:

         #tolerations:

         # this toleration is to have the daemonset runnable on master nodes

         # uncomment it if you want your masters to run the sensor pod

         #- key: node-role.kubernetes.io/master

         #  effect: NoSchedule

         serviceAccountName: qualys-service-account

         containers:

         - name: qualys-container-sensor

           image: qualys/qcs-sensor:latest

           imagePullPolicy : IfNotPresent

           resources:

             limits:

               cpu: "0.5" # Default CPU usage limit on each node for sensor.

           args: ["--k8s-mode", "--storage-driver-type", "overlay2"]

           env:

           - name: CUSTOMERID

             value: __customerId

           - name: ACTIVATIONID

             value: __activationId

           - name: POD_URL

             value:

           - name: QUALYS_SCANNING_CONTAINER_LAUNCH_TIMEOUT

             value: "10"

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- name: DOCKER_TLS_VERIFY

           #  value: "1"

# uncomment(and indent properly) below section if proxy is required to connect Qualys Cloud

           #- name: qualys_https_proxy

           #  value: <proxy FQDN or Ip address>:<port#>

           - name: QUALYS_POD_NAME

             valueFrom:

               fieldRef:

                 fieldPath: metadata.name

           - name: QUALYS_POD_NAMESPACE

             valueFrom:

               fieldRef:

                 fieldPath: metadata.namespace

           volumeMounts:

           - mountPath: /var/run/docker.sock

             name: socket-volume

             readOnly: true

           - mountPath: /usr/local/qualys/qpa/data

             name: persistent-volume

           - mountPath: /usr/local/qualys/qpa/data/conf/agent-data

             name: agent-volume

# uncomment(and indent properly) below section if proxy(with CA cert) required to connect Qualys Cloud

           #- mountPath: /etc/qualys/qpa/cert/custom-ca.crt

           #  name: proxy-cert-path

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- mountPath: /root/.docker

           #  name: tls-cert-path

           securityContext:

             allowPrivilegeEscalation: false

         volumes:

           - name: socket-volume

             hostPath:

               path: /var/run/docker.sock

               type: Socket

           - name: persistent-volume

             hostPath:

               path: /usr/local/qualys/sensor/data

               type: DirectoryOrCreate

           - name: agent-volume

             hostPath:

               path: /etc/qualys

               type: DirectoryOrCreate

# uncomment(and indent properly) below section if proxy(with CA cert) required to connect Qualys Cloud

           #- name: proxy-cert-path

           #  hostPath:

           #    path: <proxy certificate path>

           #    type: File

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- name: tls-cert-path

           #  hostPath:

           #    path: <Path of directory of client certificates>

           #    type: Directory

         hostNetwork: true

 

The Qualys Container Sensor DaemonSet should be deployed in the 'qualys' namespace with a ServiceAccount that has adequate permissions to communicate with the Kubernetes API Server. The Role, ClusterRole, RoleBinding, and ClusterRoleBinding are used to assign the necessary permissions to the ServiceAccount. If you already have the Qualys Container Sensor running in a namespace other than 'qualys', first uninstall it from that namespace and then deploy it fresh in the 'qualys' namespace.

You’ll need these permissions:

  • get, list, watch - to monitor the resources in the 'qualys' namespace.
  • create, delete, deletecollection - to spawn containers for image vulnerability assessment in the 'qualys' namespace and then clean up after itself.

Modify Parameters in the yaml File

Copy the cssensor-ds.yml file to the Kubernetes cluster's master node, then modify it by providing values for the following parameters. For the yaml file to work properly, ensure that you only update the parameters/sections specified below. Note that you can download the yaml file directly from https://github.com/Qualys/cs_sensor

Uncomment the tolerations section under spec if you want the sensor DaemonSet to be deployed on master nodes.

spec:

#tolerations:

# this toleration is to have the daemonset runnable on master nodes

# uncomment it if you want your masters to run the sensor pod

#- key: node-role.kubernetes.io/master

#  effect: NoSchedule

 

containers:

- name: qualys-container-sensor
  image: <CS Sensor image name in the private/docker hub registry>
  args: ["--k8s-mode"]

Make sure all nodes that will be running the sensor pod have access to the private or Docker Hub registry where the sensor image is stored.
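For reference, the tolerations block from the template looks like this once the comment markers are removed (indentation as used in the template's spec section):

```yaml
spec:
  tolerations:
  # this toleration lets the daemonset run on master nodes
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```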

If you want to deploy the sensor in a CI/CD environment, provide the args value as:

args: ["--k8s-mode","--cicd-deployed-sensor"]

 

If you want to deploy a Registry Sensor, provide the args value as:

args: ["--k8s-mode","--registry-sensor"]

 

If you want to print logs on the console, provide "--enable-console-logs" as an additional value in args.

If you want to change the log level, provide "--log-level", "<a number between 0 and 5>" as an additional value in args. For example, if you want trace-level logs, provide:

args: ["--k8s-mode", "--log-level", "5"]

 

If you want to launch the sensor with a scan thread pool size other than the default of 4, provide
"--scan-thread-pool-size", "<number of threads>" as an additional value in args.

args: ["--k8s-mode", "--scan-thread-pool-size", "6"]

 

If you want to define the number of archived qpa.log files to be generated, provide
"--log-filepurgecount", "<digit>" as an additional value in args. The default, "--log-filepurgecount", "5", is applied via config. Note that there will always be a current qpa.log file in the log/ directory.

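For example, to keep four archived log files:

```yaml
args: ["--k8s-mode", "--log-filepurgecount", "4"]
```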
 

If you want to define the number of archived qpa.log files to be generated and the size per log file, provide "--log-filesize", "<digit><K/M>", where "K" means kilobyte and "M" means megabyte, and "--log-filepurgecount", "<digit>" as additional values in args. The defaults,
"--log-filesize": "10M" and "--log-filepurgecount": "5", are applied via config.

args: ["--k8s-mode", "--log-filesize", "5M", "--log-filepurgecount", "4"]

 

If you want image scanning pods to be instantiated using the Kubernetes-native `kubectl run` command, provide "--use-kubectl" as an additional value in args. In this case the sensor uses native Kubernetes facilities to launch image scans. When this argument is omitted, image containers are launched using `docker run`.

args: ["--k8s-mode", "--use-kubectl"]

 

If TLS authentication is enabled, specify the docker client certificate, client private key, and CA certificate names in the args:

args: ["--k8s-mode", "--tls-cacert","<file name of the CA certificate that was used to sign docker server certificate>", "--tls-cert", "<docker client certificate file name>", "--tls-key", "<docker client private key file name>"]

If any of the three files has a default name (ca.pem, cert.pem, and key.pem, respectively), the corresponding argument can be omitted.
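For example, with non-default file names (the names below are illustrative placeholders, not files shipped with the sensor):

```yaml
# file names are illustrative placeholders
args: ["--k8s-mode", "--tls-cacert", "docker-ca.pem", "--tls-cert", "docker-client-cert.pem", "--tls-key", "docker-client-key.pem"]
```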

If you want to mask environment variables for images and containers in sensor logs and in the Container Security UI, add the "--mask-env-variable" parameter to args:

args: ["--k8s-mode", "--mask-env-variable"]

 

If you want to disable image scans for General Sensor, add the "--disableImageScan" parameter to args:

args: ["--k8s-mode", "--disableImageScan"]

 

If you want to disable container scans, add the "--disableContainerScan" parameter to args:

args: ["--k8s-mode", "--disableContainerScan"]

 

If you want to specify a scanning policy, add the "--scanning-policy" parameter to args. The available values for the scanning policy are: "DynamicScanningOnly", "StaticScanningOnly", and "DynamicWithStaticScanningAsFallback".

args: ["--k8s-mode", "--scanning-policy", "StaticScanningOnly"]

 

If you want to disable log4j vulnerability scanning on your container images, add the
"--disable-log4j-scanning" parameter to args:

args: ["--k8s-mode", "--disable-log4j-scanning"]

 

If you want to disable log4j static detection for dynamic/static image scans, add the
"--disable-log4j-static-detection" parameter to args:

args: ["--k8s-mode", "--disable-log4j-static-detection"]

 

If you want to optimize image scans for the General Sensor, add the "--optimize-image-scans" parameter to args:

args: ["--k8s-mode", "--optimize-image-scans"]

 

If you want to assign tags to a sensor or sensor profile, add the "--tag-sensor-profile" parameter to args:

args: ["--k8s-mode", "--tag-sensor-profile"]

For more information, refer to Important Points Related to Sensor Tagging.

 

If you have the SCA scanning feature, then you can enable SCA scanning for container images by adding the "--perform-sca-scan" parameter to args:

args: ["--k8s-mode", "--perform-sca-scan"]

 

By default, SCA scans run in online mode. You can choose to disable Internet access for the SCA scan and run the scan in offline mode. Note - We recommend you run the SCA scan in online mode. Quality of software package enumeration for Java substantially degrades when the SCA scan is run in offline mode. The remote maven repository may need to be consulted for an accurate package detection. This can affect accuracy of the vulnerability posture of the image.

args: ["--k8s-mode", "--perform-sca-scan", "--disallow-internet-access-for-sca"]

 

The default SCA scan command timeout is 5 minutes (300 seconds). You can overwrite the default timeout with a new value specified in seconds. For example, you may need to increase the SCA scan timeout when scanning large container images to ensure the SCA scan has time to finish.

The --sca-scan-timeout-in-seconds=600 parameter also applies to secret and malware detection.

args: ["--k8s-mode", "--perform-sca-scan", "--sca-scan-timeout-in-seconds=600"]

 

By default, SBOM generation is enabled during an SCA scan. If you want to disable SBOM generation for container images, add the "--disable-features=SBOM" parameter to args.

args: ["--k8s-mode", "--perform-sca-scan", "--disable-features=SBOM"]

 

If you want to enable secret detection for container images, add the "--perform-secret-detection" parameter to args. Note that secret detection is supported only on CICD and registry sensors.

args: ["--k8s-mode", "--container-runtime", "containerd", "--perform-secret-detection"]

 

If you want to enable malware detection for container images, add the "--perform-malware-detection" parameter to args. Note that malware detection is supported only on the registry sensor.

args: ["--k8s-mode", "--registry-sensor", "--perform-malware-detection"]

 

Under resources specify the following:

resources:
  limits:
    cpu: "0.5" # Default CPU usage limit on each node for sensor.

 

For example, to limit the CPU usage to 5%, set resources:limits:cpu: "0.05". This limits the CPU usage to 5% of one core on the host.

If there are multiple processors on a node, the resources:limits:cpu value applies the CPU limit to one core only. For example, if you have 4 CPUs on the system and you want to set the CPU limit to 20% of overall CPU capacity, set the limit to 0.8, i.e., 80% of one core, which equals 20% of total CPU capacity.

To disable any CPU usage limit, set resources:limits:cpu value to 0.
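For example, following the note above, the cap can be removed like this:

```yaml
resources:
  limits:
    cpu: "0" # disables the CPU usage limit, per the note above
```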

Optionally, if you want to specify the memory resources for Container Sensor, you can specify it under resources. Recommended values for the Container Sensor’s memory requests and memory limits are:

  resources:
    limits:
      cpu: "0.5" # Default CPU usage limit on each node for sensor
      memory: "500Mi"
    requests:
      memory: "300Mi"

 

To change the default CPU limit for image scanning pods, specify the following variables under env.
In this example, the values were changed to 300m and 150m. The default value for QUALYS_SCANNING_CONTAINER_CPULIMIT is '200m' and for QUALYS_SCANNING_CONTAINER_CPUREQUEST is '100m'.

 - name: QUALYS_SCANNING_CONTAINER_CPULIMIT
   value: "300m"

 - name: QUALYS_SCANNING_CONTAINER_CPUREQUEST
   value: "150m"

 

When either of the memory resource values (limits or requests) is specified for Container Sensor and "--use-kubectl" is supplied in args, we automatically apply both memory requests and memory limits to image scanning containers. Default values are 200Mi and 700Mi, respectively.

Additionally, you could overwrite one or both values by specifying the following variables under env. In this example, the values were changed to 300Mi and 800Mi.

  - name: QUALYS_SCANNING_CONTAINER_MEMORYREQUESTMB

    value: "300Mi"

  - name: QUALYS_SCANNING_CONTAINER_MEMORYLIMITMB

    value: "800Mi"

 

Under env specify the following:

Activation ID (Required)

- name: ACTIVATIONID

  value: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

 

Customer ID (Required)

- name: CUSTOMERID

  value: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

 

Specify POD_URL when using docker hub image. Otherwise, remove it.

- name: POD_URL

  value: <Specify POD URL>

 

Specify the scanning container launch timeout in minutes. If this env variable is not present, then 10 minutes is the default.

- name: QUALYS_SCANNING_CONTAINER_LAUNCH_TIMEOUT

  value: "10"

 

To enable TLS authentication, uncomment the following two lines.

   - name: DOCKER_TLS_VERIFY

     value: "1"

To disable TLS, use DOCKER_TLS_VERIFY="" (empty string), remove it, or keep it commented in the yml file.

By enabling sensor communication with the docker daemon over TLS, customers can restrict the sensor's access to the docker socket by using a docker authorization plugin.

 

When TLS authentication is enabled and DOCKER_HOST is not specified, the sensor automatically detects the FQDN of the worker node it is running on and sets the DOCKER_HOST environment variable inside the sensor to <FQDN>:2376, where 2376 is the default TLS TCP port for the docker daemon.
You can set DOCKER_HOST yourself to 127.0.0.1:<port#> or localhost:<port#> by adding:

  - name: DOCKER_HOST

    value: "<loopback IPv4 address or hostname>:<port#>"

 

Note: Make sure that the FQDN, hostname, or IPv4 address set in DOCKER_HOST matches the CN or Subject Alternative Name in the docker server certificate on each worker node.
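For example, pointing the sensor at a daemon on the loopback interface using the default TLS port mentioned above:

```yaml
  - name: DOCKER_HOST
    value: "127.0.0.1:2376"
```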

Uncomment tls-cert-path under volumes if TLS authentication needs to be enabled and provide the directory path for client certificates, or keep it as is if not required:

#- name: tls-cert-path
#  hostPath:
#    path: <Path of directory of client certificates>
#    type: Directory

 

With each scan, we check the node status to see whether the node is schedulable, and launch the scan only if it is. If the node status indicates that the node is unschedulable, we retry the scan after a default interval of 15 minutes. You can increase or decrease the time the sensor waits before retrying the scan by specifying a different scan retry interval in minutes.

- name: UNSCHEDULABLE_NODE_SCAN_RETRY_INTERVAL

  value: "30"

 

Uncomment the below part under volumeMounts as well if you are using TLS. Otherwise, keep it commented out.

#- mountPath: /root/.docker
#  name: tls-cert-path

 

Uncomment and properly indent the proxy information, or keep it as is if not required:

- For the communication between the sensor and the backend (Container Management Service):

#- name: qualys_https_proxy
#  value: <proxy FQDN or Ip address>:<port#>

 

- For a registry sensor version 1.21.0 or later used for a public registry: 

#- name: https_proxy
#  value: <proxy FQDN or Ip address>:<port#>

 

Uncomment proxy-cert-path under volumes, or keep it as is if not required:

#- name: proxy-cert-path
#  hostPath:
#    path: /root/cert/proxy-certificate.crt
#    type: File

 

Activation ID and Customer ID are required. Use the Activation ID and Customer ID from your subscription. To get them, log in to the Qualys Enterprise TruRisk™ Platform, go to Container Security > Configurations > Sensors, click Download, and then click any sensor type. The installation command on the Installation Instructions screen contains your Activation ID and Customer ID. The Activation ID is like a password; do not share it.

If you are using an https proxy, ensure that all Kubernetes nodes have a valid certificate file for the sensor to communicate with the Container Management Server.

If you are not using a proxy and you have kept the above-mentioned parts commented, you can keep the following part under volumeMounts commented as well:

#- mountPath: /etc/qualys/qpa/cert/custom-ca.crt
#  name: proxy-cert-path

 

Once you have modified the cssensor-ds.yml file, run the following command on the Kubernetes master node to create a DaemonSet:

kubectl create -f cssensor-ds.yml
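After creating the DaemonSet, you can confirm that a sensor pod is running on each node with standard kubectl commands (a generic check, not a Qualys-specific requirement):

```shell
# list the daemonset in the qualys namespace
kubectl get daemonset -n qualys
# list the sensor pods and the nodes they are running on
kubectl get pods -n qualys -o wide
```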

 

If you need to uninstall the Qualys Container Sensor, run the following command on the Kubernetes master node:

kubectl delete -f cssensor-ds.yml

 

The persistent storage will need to be removed manually on each worker node.
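For example, assuming the default hostPath from the template, the sensor's persistent data can be removed on a worker node with:

```shell
# removes the sensor's persistent data (default hostPath in the template)
sudo rm -rf /usr/local/qualys/sensor/data
```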

Using Persistent Volume Claims

You can use a PersistentVolumeClaim (PVC) to request storage of a specific size from the Persistent Volume you have specified.

Modify the cssensor-ds_pv_pvc.yml file

Below is the cssensor-ds_pv_pvc.yml file. For your convenience, it can be extracted from QualysContainerSensor.tar.xz. Note that you can download the yaml file directly from https://github.com/Qualys/cs_sensor

The field alignment in the yaml file is very important. Please make sure to honor the formatting provided in the template.

kind: List

apiVersion: v1

items:

 - kind: Namespace

   apiVersion: v1

   metadata:

     name: qualys

 - kind: PersistentVolume

   apiVersion: v1

   metadata:

     name: qualys-sensor-pv-volume

     labels:

       type: local

   spec:

     storageClassName: manual

     capacity:

       storage: 5Gi

     accessModes:

       - ReadWriteOnce

     hostPath:

       path: "/mnt/data/"

 - kind: PersistentVolumeClaim

   apiVersion: v1

   metadata:

     name: qualys-sensor-pv-claim

     namespace: qualys

   spec:

     storageClassName: manual

     accessModes:

       - ReadWriteOnce

     resources:

       requests:

         storage: 1Gi

 # Service Account

 - kind: ServiceAccount

   apiVersion: v1

   metadata:

     name: qualys-service-account

     namespace: qualys

 # Role for read/write/delete permission to qualys namespace

 - kind: Role

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-reader-role

     namespace: qualys

   rules:

   - apiGroups: ["","batch"]

     resources: ["pods","jobs"]

     verbs: ["get", "list", "watch", "create", "delete", "deletecollection"]

   - apiGroups: [""]

     resources: ["pods/status"]

     verbs: ["get"]

   - apiGroups: [""]

     resources: ["pods/attach", "pods/exec"]

     verbs: ["create"]

 - kind: ClusterRole

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-cluster-reader-role

   rules:

   - apiGroups: [""]

     resources: ["nodes", "pods/status", "replicationcontrollers/status", "nodes/status"]

     verbs: ["get"]

   - apiGroups: ["apps"]

     resources: ["replicasets/status", "daemonsets/status", "deployments/status", "statefulsets/status"]

     verbs: ["get"]

   - apiGroups: ["batch"]

     resources: ["jobs/status", "cronjobs/status"]

     verbs: ["get"]

 # RoleBinding to assign permissions in qualys-reader-role to qualys-service-account

 - kind: RoleBinding

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-reader-role-rb

     namespace: qualys

   subjects:

   - kind: ServiceAccount

     name: qualys-service-account

     namespace: qualys

   roleRef:

     kind: Role

     name: qualys-reader-role

     apiGroup: rbac.authorization.k8s.io

 - kind: ClusterRoleBinding

   # if k8s version is 1.17 or earlier, change apiVersion to "rbac.authorization.k8s.io/v1beta1"

   apiVersion: rbac.authorization.k8s.io/v1

   metadata:

     name: qualys-cluster-reader-rb

   subjects:

   - kind: ServiceAccount

     name: qualys-service-account

     namespace: qualys

   roleRef:

     kind: ClusterRole

     name: qualys-cluster-reader-role

     apiGroup: rbac.authorization.k8s.io

 # Qualys Container Sensor pod

 - apiVersion: apps/v1

   kind: DaemonSet

   metadata:

     name: qualys-container-sensor

     namespace: qualys

     labels:

       k8s-app: qualys-cs-sensor

   spec:

     selector:

       matchLabels:

         name: qualys-container-sensor

     updateStrategy:

         type: RollingUpdate

     template:

       metadata:

         labels:

           name: qualys-container-sensor

       spec:

         #tolerations:

         # this toleration is to have the daemonset runnable on master nodes

         # uncomment it if you want your masters to run the sensor pod

         #- key: node-role.kubernetes.io/master

         #  effect: NoSchedule

         serviceAccountName: qualys-service-account

         containers:

         - name: qualys-container-sensor

           image: qualys/qcs-sensor:latest

           imagePullPolicy : IfNotPresent

           resources:

             limits:

               cpu: "0.5" # Default CPU usage limit on each node for sensor.

           args: ["--k8s-mode"]

           env:

           - name: CUSTOMERID

             value: __customerId

           - name: ACTIVATIONID

             value: __activationId

           - name: POD_URL

             value:

           - name: QUALYS_SCANNING_CONTAINER_LAUNCH_TIMEOUT

             value: "10"

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- name: DOCKER_TLS_VERIFY

           #  value: "1"

# uncomment(and indent properly) below section if proxy is required to connect Qualys Cloud

           #- name: qualys_https_proxy

           #  value: <proxy FQDN or Ip address>:<port#>

           - name: QUALYS_POD_NAME

             valueFrom:

               fieldRef:

                 fieldPath: metadata.name

           - name: QUALYS_POD_NAMESPACE

             valueFrom:

               fieldRef:

                 fieldPath: metadata.namespace

           volumeMounts:

           - mountPath: /var/run/docker.sock

             name: socket-volume

             readOnly: true

           - mountPath: /usr/local/qualys/qpa/data

             name: persistent-volume

           - mountPath: /usr/local/qualys/qpa/data/conf/agent-data

             name: agent-volume

# uncomment(and indent properly) below section if proxy(with CA cert) required to connect Qualys Cloud

           #- mountPath: /etc/qualys/qpa/cert/custom-ca.crt

           #  name: proxy-cert-path

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- mountPath: /root/.docker

           #  name: tls-cert-path

           securityContext:

             allowPrivilegeEscalation: false

         volumes:

           - name: socket-volume

             hostPath:

               path: /var/run/docker.sock

               type: Socket

           - name: persistent-volume

             persistentVolumeClaim:

               claimName: qualys-sensor-pv-claim

           - name: agent-volume

             hostPath:

               path: /etc/qualys

               type: DirectoryOrCreate

# uncomment(and indent properly) below section if proxy(with CA cert) required to connect Qualys Cloud

           #- name: proxy-cert-path

           #  hostPath:

           #    path: <proxy certificate path>

           #    type: File

# uncomment(and indent properly) below section if using Docker HTTP socket with TLS

           #- name: tls-cert-path

           #  hostPath:

           #    path: <Path of directory of client certificates>

           #    type: Directory

         hostNetwork: true

Modify parameters in the yaml file

Modify the cssensor-ds_pv_pvc.yml file to provide values for the following parameters. In order for the yml file to work properly, ensure that you do not remove/comment the respective sections mentioned below.

You can download the yml file directly from https://github.com/Qualys/cs_sensor

 - kind: PersistentVolume

   apiVersion: v1

   metadata:

     name: qualys-sensor-pv-volume

     labels:

       type: local

   spec:

     storageClassName: manual

     capacity:

       storage: 5Gi

     accessModes:

       - ReadWriteOnce

     hostPath:

       path: "/mnt/data/"

 - kind: PersistentVolumeClaim

   apiVersion: v1

   metadata:

     name: qualys-sensor-pv-claim

     namespace: qualys

   spec:

     storageClassName: manual

     accessModes:

       - ReadWriteOnce

     resources:

       requests:

         storage: 1Gi

 

Here a PVC of 1Gi is made on a Persistent Volume of 5Gi. See the Kubernetes documentation for the list of supported Persistent Volume types.

Add the name of the PVC under volumes:

 - name: persistent-volume
   persistentVolumeClaim:
     claimName: qualys-sensor-pv-claim

Launch Sensor without Persistent Storage

You can run the sensor without using persistent storage on the host. In this case data is not stored on the host, but in the /usr/local/qualys/qpa/data folder inside the sensor container.

To launch the sensor without persistent storage, modify the cssensor-ds.yml file and provide '--sensor-without-persistent-storage' as an additional value in args.

args: ["--k8s-mode","--sensor-without-persistent-storage"]

 

It is recommended to use the "--enable-console-logs" option along with "--sensor-without-persistent-storage" to preserve the logs.
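Combining the two options:

```yaml
args: ["--k8s-mode", "--sensor-without-persistent-storage", "--enable-console-logs"]
```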

Under volumeMounts remove/comment the persistent-volume section.

volumeMounts:

- mountPath: /usr/local/qualys/qpa/data

  name: persistent-volume

 

Under volumes remove/comment the persistent-volume section.

volumes:

- name: persistent-volume
  hostPath:
    path: /usr/local/qualys/sensor/data
    type: DirectoryOrCreate