Onboard CDR for AWS (New)
With TotalCloud 2.10.0, CDR is now integrated with Qualys Network Passive Sensor to bring in additional features and improved performance.
As part of this integration, we have introduced changes to the CDR setup for AWS. All the legacy CDR users can continue to manage their deployments from the "AWS (Legacy)" tab. This tab will not be visible to new CDR users.
The new onboarding instructions are explained below.
Prerequisites
Things you need to get started.
- You should create a deployment on the Threat Scanners tab. Refer to Deploy Threat Scanners.
- Install Terraform to create and manage the Qualys environment on the AWS infrastructure. To install Terraform, go to Terraform downloads, navigate to Linux > Amazon Linux, copy the commands to your terminal, and run them.
- Ensure your subscription has Network Passive Sensor (NPS) enabled. Contact your Technical Account Manager (TAM) to enable it.
Prerequisites in AWS
- Ensure that a VPC and private subnets exist in the Service AWS account.
- The CDR appliance must be deployed in private subnets, and those private subnets must have outbound internet access. Verify outbound internet access before deploying the CDR appliance.
- Ensure that the security group attached to the CDR appliance allows the following connections.
- If a firewall controls outgoing internet connections, it must also allow the following connections.
| # | Destination / Source | Direction | Port |
|---|---|---|---|
| 1 | nac-le-service.qg3.apps.qualys.com | Outbound | tcp/443 |
| 2 | qualysguard.qg3.apps.qualys.com | Outbound | tcp/443 |
| 3 | distribution.qg3.apps.qualys.com | Outbound | tcp/443 |
| 4 | 169.254.169.254 | Outbound | tcp/80 |
| 5 | GWLB/NLB IP | Inbound | tcp/8080 |
| 6 | Any AWS mirroring traffic (VXLAN) | Inbound | udp/4789 |
| 7 | GWLB IP (GENEVE) | Inbound | udp/6081 |
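For reference, the following is a minimal Terraform sketch (not the Qualys-provided module) of a security group that reflects the table above. The resource name, VPC ID, and CIDR ranges are placeholders; tighten them to your VPC and load balancer subnets. Note that FQDN-based allow-listing for rows 1 to 4 happens on your egress firewall or proxy, since security groups cannot match hostnames.

```hcl
# Illustrative only; the Qualys Terraform bundle creates the real security group.
resource "aws_security_group" "cdr_appliance" {
  name   = "qualys-cdr-appliance"     # placeholder name
  vpc_id = "vpc-0123456789abcdef0"    # placeholder: Service account VPC

  # Rows 1-4: outbound HTTPS/HTTP. Security groups cannot filter by FQDN,
  # so hostname allow-listing is done on the egress firewall or proxy.
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Row 5: health checks and forwarded traffic from the GWLB or NLB
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]     # placeholder: load balancer subnets
  }

  # Row 6: VXLAN-encapsulated mirrored traffic from workloads
  ingress {
    from_port   = 4789
    to_port     = 4789
    protocol    = "udp"
    cidr_blocks = ["10.0.0.0/16"]     # placeholder: mirrored source range
  }

  # Row 7: GENEVE-encapsulated traffic from the GWLB
  ingress {
    from_port   = 6081
    to_port     = 6081
    protocol    = "udp"
    cidr_blocks = ["10.0.0.0/16"]     # placeholder: GWLB subnets
  }
}
```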
Get Started
Connect Qualys to your AWS account(s) to protect your cloud with Qualys Agentless Runtime Cloud Security powered by Deep Learning AI. Contact your TAM to proceed with connecting your accounts. After you connect your AWS account with your Qualys account, the CDR page displays instructions you must follow before CDR can provide accurate, deep visibility into your workloads.
Deploy Threat Scanners to get your CDR Key. Follow the steps below to proceed with the rest of the configurations.
Deploy Qualys CDR in AWS
Deploy the scripts using AWS Cloud Shell and get started with CDR easily.
Qualys CDR supports three modes of deployment.
1. Standalone
2. High-availability auto-scaling mode with AWS Network Load Balancer
3. High-availability auto-scaling mode with AWS Gateway Load Balancer
CDR monitors your network via VPC Traffic Mirroring. The AWS regions that support traffic mirroring are listed at the link below. You need this information when configuring CDR for your network.
VPC Traffic Mirroring available in AWS regions
The following AWS instance types are supported for CDR Appliance EC2.
- m5a.2xlarge
- r5a.2xlarge (default)
- r5.2xlarge
Standalone CDR deployment
You can deploy Qualys CDR as a standalone Amazon EC2 virtual machine.
- In standalone mode, the Qualys CDR stack consists of one EC2 instance along with the required security group and IAM role.
- AWS VPC Traffic Mirroring supports a maximum of ten mirror sources per EC2 instance configured as a mirroring target.
The following diagram depicts how the deployment looks after the scripts are deployed.
1. Go to your AWS Console and navigate to CloudShell.
2. Download qualys_aws_cdr_terraform.zip and upload it to CloudShell.
3. Extract qualys_aws_cdr_terraform.zip.
4. Modify the terraform.tfvars file to manage deployment variables. Terraform variables are explained in detail in the README.md file.
Qualys has shared the latest AMI for your deployment region with your registered AWS account. You can find the latest Qualys CDR AMI here as well.
5. In the deployment wizard, specify the correct AMI ID for the region (a hedged lookup sketch appears after these steps). Once the terraform.tfvars file is set, Terraform is ready to deploy CDR in your security account's VPC.
6. Run the following commands to deploy the module to each AWS subscription as needed.
- terraform init
- terraform apply --auto-approve
7. After terraform apply runs successfully and the application registers with Qualys, a CloudFormation stack should be created in the AWS account. This stack should show the resources and outputs needed to set up a traffic mirror session.
Refer to the Traffic Mirroring - Standalone/NLB deployment section to set up traffic mirroring for this deployment.
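If you prefer to look up the AMI ID mentioned in step 5 with Terraform instead of the console, a data source similar to the sketch below can help. This is only an illustration: the owner account ID and name pattern are placeholders, so confirm the actual AMI ID that Qualys shared for your region.

```hcl
# Hypothetical lookup of the Qualys CDR AMI shared with this AWS account.
data "aws_ami" "qualys_cdr" {
  most_recent = true
  owners      = ["111111111111"]   # placeholder: the Qualys AWS account that shared the AMI

  filter {
    name   = "name"
    values = ["*cdr*"]             # placeholder name pattern
  }
}

output "qualys_cdr_ami_id" {
  value = data.aws_ami.qualys_cdr.id
}
```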
Network Load Balancer - High Availability Autoscaling Mode Deployment
For a Qualys high-availability deployment, Qualys virtual appliances should be deployed within VPCs with private subnets spread across several availability zones (AZs).
- Follow steps 1-3 from the Standalone section, then navigate to the cdr/nlb directory of the extracted bundle.
- Depending on the number of availability zones you want for high availability, go to either the 2az or 3az folder.
- Modify the terraform.tfvars file here to manage deployment variables. Terraform variables are explained in detail in the README.md file.
Qualys has publicly released the latest AMI. You can find the latest Qualys CDR AMI here as well.
- In the deployment wizard, specify the correct AMI ID for the region. Once the terraform.tfvars file is set, Terraform is ready to deploy CDR in your security account's VPC.
- Run the following commands to deploy the module to each AWS subscription as needed.
- terraform init
- terraform apply --auto-approve
- After terraform apply runs successfully, a CloudFormation stack should be created in the AWS account. This stack should show the resources and outputs needed to set up a traffic mirror session.
Refer to the Traffic Mirroring - Standalone/NLB deployment section to set up traffic mirroring for this deployment.
Gateway Load Balancer - High Availability Autoscaling Mode Deployment
Qualys CDR also supports deployment with Gateway Load Balancer (GWLB). With a GWLB CDR deployment, you get the high availability, network load balancing, and auto-scaling of the Network Load Balancer (NLB) deployment, plus a single traffic mirror session for the VPCs of all your accounts. Setting up a GWLB saves you significant time compared to configuring traffic mirroring separately on each of your networks.
The following diagram illustrates the deployment once the scripts are deployed.
- Follow steps 1-3 from the Standalone section to deploy in autoscaling mode, then navigate to the cdr/web directory of the extracted bundle.
- Depending on the number of availability zones you want for high availability, go to either the 2az or 3az folder.
- Modify the terraform.tfvars file here to manage deployment variables. Terraform variables are explained in detail in the README.md file.
Qualys has publicly released the latest AMI. You can find the latest Qualys CDR AMI here as well.
- In the deployment wizard, specify the correct AMI ID for the region. Once the terraform.tfvars file is set, Terraform is ready to deploy CDR in your security account's VPC.
- Run the following commands to deploy the module to each AWS subscription as needed.
- terraform init
- terraform apply --auto-approve
- After terraform apply runs successfully, a CloudFormation stack should be created in the AWS account. This stack should show the resources and outputs needed to set up a traffic mirror session.
Refer to the Traffic Mirroring - GWLB deployment section to set up traffic mirroring for this deployment.
Deploy Traffic Mirroring on Workloads
VPC Traffic Mirroring supports elastic network interfaces (ENIs) attached to EC2 and EKS instances as sources. You can find the full list of instance types that support VPC Traffic Mirroring and other considerations here.
As destinations, VPC Traffic Mirroring supports the following target types.
- ENI
- Network Load Balancer
- Gateway Load Balancer
AWS offers a serverless application for automating traffic mirroring based on VPCs, subnets, or tags as input.
You can launch the application using the traffic mirroring Terraform module, which Qualys provides as an easy-to-use CloudFormation template in the qualys_aws_cdr_terraform.zip downloaded for Standalone/NLB/GWLB deployments.
By specifying the VPCs or subnets to monitor, the serverless application sets up traffic mirroring sessions on existing instances or future instances in the selected VPCs or subnets.
Also, you can specify instance tags so that the serverless application mirrors traffic across instances with matching tags (existing or future).
The Terraform traffic mirroring module simplifies this deployment by using AWS's built-in features.
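To make the automation more concrete, the sketch below shows the standard AWS provider resources behind a single traffic mirror session: a target, an accept-all filter, and the session that ties a workload ENI to that target. It is illustrative only and the ENI IDs are placeholders; the Qualys traffic mirroring module and serverless application create the equivalent resources for you as matching workloads appear.

```hcl
# Illustrative sketch of one traffic mirror session; ENI IDs are placeholders.
resource "aws_ec2_traffic_mirror_target" "cdr" {
  description          = "Qualys CDR appliance ENI (NLB ARN or GWLB endpoint for HA modes)"
  network_interface_id = "eni-0123456789abcdef0"   # placeholder: CDR appliance ENI
}

resource "aws_ec2_traffic_mirror_filter" "all" {
  description = "Mirror all traffic"
}

resource "aws_ec2_traffic_mirror_filter_rule" "ingress_all" {
  traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.all.id
  traffic_direction        = "ingress"
  rule_number              = 1
  rule_action              = "accept"
  source_cidr_block        = "0.0.0.0/0"
  destination_cidr_block   = "0.0.0.0/0"
}

resource "aws_ec2_traffic_mirror_filter_rule" "egress_all" {
  traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.all.id
  traffic_direction        = "egress"
  rule_number              = 1
  rule_action              = "accept"
  source_cidr_block        = "0.0.0.0/0"
  destination_cidr_block   = "0.0.0.0/0"
}

resource "aws_ec2_traffic_mirror_session" "workload" {
  description              = "Mirror workload traffic to Qualys CDR"
  network_interface_id     = "eni-0fedcba9876543210"   # placeholder: workload ENI (source)
  traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.all.id
  traffic_mirror_target_id = aws_ec2_traffic_mirror_target.cdr.id
  session_number           = 1
}
```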
Traffic Mirroring - Standalone/NLB deployment
For a standalone deployment, the traffic mirroring target is the CDR appliance ENI, whereas for an NLB deployment it is the Network Load Balancer.
1. To launch the traffic mirroring stack, go to CloudShell and browse to the unified directory of the extracted bundle.
The traffic mirroring stack should be deployed in the traffic source account.
2. Modify the terraform.tfvars file here to manage deployment variables. Terraform variables are explained in detail in the README.md file (a hedged sketch of these values appears at the end of this procedure).
- Flow log-specific variables can be skipped; set EnableFlowLog to false.
- Specify the list of VPCs on which to enable traffic mirroring, comma separated if there is more than one. This step is optional.
- Specify the list of subnets on which to enable traffic mirroring, comma separated if there is more than one. This step is optional.
- Specify the AWS region where you want to deploy the traffic mirroring stack.
- Obtain TrafficMirroringTargetId from the output section of the CDR deployment stack.
3. Run the following commands to deploy the module to each AWS subscription as needed.
- terraform init
- terraform apply --auto-approve
4. After terraform apply runs successfully and creates the stack, the stack adds resources that listen for workload activation events, along with functions that update the traffic mirror session.
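As a hedged illustration of the values described in step 2, a terraform.tfvars for the unified stack might look like the sketch below. Only EnableFlowLog and TrafficMirroringTargetId are named in this guide; the remaining variable names are hypothetical, so match them against README.md and terraform.example.tfvars in the bundle.

```hcl
# Hypothetical terraform.tfvars sketch; variable names other than
# EnableFlowLog and TrafficMirroringTargetId are illustrative.
EnableFlowLog            = false                       # skip flow-log specific variables
VpcIds                   = ["vpc-0123456789abcdef0"]   # hypothetical: VPCs to mirror (optional)
SubnetIds                = []                          # hypothetical: subnets to mirror (optional)
Region                   = "us-east-1"                 # hypothetical: region for the stack
TrafficMirroringTargetId = "tmt-0123456789abcdef0"     # from the CDR deployment stack outputs
```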
Traffic Mirroring - GWLB deployment
The traffic_mirror stack provided by Qualys creates a traffic mirror session for each workload, along with a common traffic mirror filter and traffic mirror target.
For a GWLB deployment, the traffic mirroring target is the VPC endpoint.
We recommend that you provide subnets in every AZ where your application workloads are running.
This creates a VPC endpoint in each of those subnets, so a VPC endpoint (VPCE) is available in every AZ and mirrored traffic does not cross AZ boundaries.
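The sketch below shows what those per-AZ endpoints look like in plain Terraform terms: one Gateway Load Balancer endpoint per monitored subnet, each pointing at the Qualys VPC endpoint service. The service name, VPC ID, and subnet IDs are placeholders; the traffic_mirror stack creates the real endpoints for you.

```hcl
# Illustrative only; the Qualys traffic_mirror stack creates these endpoints.
resource "aws_vpc_endpoint" "qualys_cdr_gwlbe" {
  # One GWLB endpoint per subnet/AZ, so mirrored traffic stays inside its AZ.
  for_each = toset([
    "subnet-0aaaaaaaaaaaaaaa0",   # placeholder: AZ-a workload subnet
    "subnet-0bbbbbbbbbbbbbbb0",   # placeholder: AZ-b workload subnet
  ])

  vpc_id            = "vpc-0123456789abcdef0"                              # placeholder: monitored VPC
  service_name      = "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789"   # placeholder: QualysVpcEndpointServiceName output
  vpc_endpoint_type = "GatewayLoadBalancer"
  subnet_ids        = [each.value]
}
```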
1. To launch the traffic mirroring stack, go to CloudShell and browse to the cdr/gwlb/traffic_mirror directory of the extracted bundle.
The traffic mirroring stack should be deployed in the traffic source account.
2. Modify the terraform.tfvars file here to manage deployment variables. Terraform variables are explained in detail in the README.md file.
- QualysVpcEndpointServiceName: Specify the Qualys VPC Endpoint Service Name. Obtain this from the output section of the GWLB CDR stack.
- EndpointSet: Specify the VPC and subnet pairs where traffic will be monitored. The stack creates VPC endpoints in those VPCs; one endpoint is created per VPC per subnet (AZ), effectively covering all AZs in the VPC.
- TrafficMirroringSet: Specify the VPC and subnet pairs where application workloads are running and for which traffic mirror sessions will be created.
Refer to terraform.example.tfvars for more details; a hedged sketch of these values also appears at the end of this procedure.
3. Run the following commands to deploy the module to each AWS subscription as needed.
- terraform init
- terraform apply --auto-approve
4. After terraform apply runs successfully and creates the stack, the stack adds resources that listen for workload activation events, along with functions that update the traffic mirror session.
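For reference, a terraform.tfvars for this stack might resemble the sketch below. The variable names come from this guide, but the value structure and IDs are assumptions; confirm the exact format against terraform.example.tfvars in the bundle.

```hcl
# Hypothetical terraform.tfvars sketch; confirm the structure against
# terraform.example.tfvars before use.
QualysVpcEndpointServiceName = "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789"  # from the GWLB CDR stack outputs

# VPC/subnet pairs in which GWLB endpoints are created (monitored traffic)
EndpointSet = [
  {
    vpc_id     = "vpc-0123456789abcdef0"                                    # placeholder
    subnet_ids = ["subnet-0aaaaaaaaaaaaaaa0", "subnet-0bbbbbbbbbbbbbbb0"]   # one per AZ
  }
]

# VPC/subnet pairs whose workloads get traffic mirror sessions
TrafficMirroringSet = [
  {
    vpc_id     = "vpc-0123456789abcdef0"                                    # placeholder
    subnet_ids = ["subnet-0aaaaaaaaaaaaaaa0", "subnet-0bbbbbbbbbbbbbbb0"]   # placeholder
  }
]
```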