
Mitigation for Business Continuity and Resilience

A business can often determine how to recover and protect a particular application much more easily than it can determine how quickly that application needs to be recovered. The latter question is resolved by setting a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO) for each application: the RTO states how long the application can remain down before it must be running again, and the RPO states how much data loss, measured in time, is acceptable.

How you mitigate your risk in this situation depends on the criticality of the workload being considered. For instance, in the case of a ransomware attack, highly critical workloads may need multi-site backups or air-gapped backups (isolated from network connectivity) that can't be corrupted. Non-critical workloads may be backed up to cold storage, such as Amazon S3 Glacier archives or S3 buckets with multi-region replication, to be restored after more critical workloads have been attended to. You can refer to Figure 3.1 for the type of backup and restoration system that would be needed, based on how long it would take to restore the system.

Figure 3.1: Disaster recovery options
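
To illustrate the cold storage option for non-critical workloads, the following is a minimal sketch of an S3 lifecycle rule that transitions objects to S3 Glacier after 30 days; the bucket name, the backups/ prefix, and the 30-day window are placeholder assumptions you would adapt to your own retention needs:

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration '{
        "Rules": [
            {
                "ID": "archive-noncritical-backups",
                "Filter": { "Prefix": "backups/" },
                "Status": "Enabled",
                "Transitions": [
                    { "Days": 30, "StorageClass": "GLACIER" }
                ]
            }
        ]
    }'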

Before any event occurs, you want your workloads categorized by the level of RTO/RPO attention they need. With AWS, you can use a tag on items such as EC2 instances or databases to designate criticality. The criticality tag could have a value ranging from low, meaning the workload would most likely be backed up once or twice a day, to very high, meaning it would need a corresponding real-time backup and extra monitoring attached to it.
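
For example, applying such a tag to an instance takes a single CLI call; the instance ID below is a placeholder, and the tag key and value simply follow the low-to-very-high scheme described above:

aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=criticality,Value=very-high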

You have just learned how to protect and restore your systems and data in the event of an outage. Next, you will learn about a type of attack that happens when someone (or something) gains access to your system and prefers not to be discovered – detection evasion.

Detection Evasion

When you enable CloudTrail on your AWS account or organization, you capture every API call made from the AWS Management Console, the Command-Line Interface (CLI), or some other programmatic method, such as a Software Development Kit (SDK). A bad actor who has gained unauthorized access to your account may first try to disable CloudTrail logging so that their actions are not captured, making it more difficult to determine what events took place once you finally realize that a breach has occurred.
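
If you suspect this has happened, CloudTrail's event history can be queried for the relevant management events; the following is a minimal example that searches for recent StopLogging calls (the event name shown is just one of several you might look for):

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=StopLogging \
    --max-results 10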

Not only are CloudTrail logs essential for the forensic work afterward of uncovering the who, what, and when of an event, but, when delivered to CloudWatch Logs and combined with other services such as Amazon EventBridge and Simple Notification Service (SNS), they can also be a crucial part of a proactive alerting strategy.
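
As a sketch of that proactive alerting, the following assumes an existing SNS topic (the rule name and topic ARN are placeholders) and creates an EventBridge rule that matches StopLogging and DeleteTrail calls recorded by CloudTrail, publishing each match to the topic:

aws events put-rule \
    --name alert-on-cloudtrail-tampering \
    --event-pattern '{
        "source": ["aws.cloudtrail"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["cloudtrail.amazonaws.com"],
            "eventName": ["StopLogging", "DeleteTrail"]
        }
    }'

aws events put-targets \
    --rule alert-on-cloudtrail-tampering \
    --targets Id=1,Arn=arn:aws:sns:us-east-1:111122223333:security-alerts

Note that the SNS topic's access policy must also allow EventBridge to publish to it for the notification to be delivered.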

Mitigation for Detection Evasion

One of the most effective ways to prevent tampering with your logging is to use a Service Control Policy (SCP). If you have multiple accounts, this policy can be pushed down from the top of the AWS organization through the different accounts and organizational units so that it is enforced on all accounts:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail"
            ],
            "Resource": "*",
            "Effect": "Deny"
        }
    ]
}
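
If you manage policies from the CLI rather than the console, a sketch such as the following could register the SCP above (saved locally as a JSON file) and attach it to an organizational unit; the policy name, file name, policy ID, and target ID are all placeholders:

aws organizations create-policy \
    --name DenyCloudTrailTampering \
    --type SERVICE_CONTROL_POLICY \
    --description "Deny stopping or deleting CloudTrail trails" \
    --content file://deny-cloudtrail-tampering.json

aws organizations attach-policy \
    --policy-id p-examplepolicyid \
    --target-id ou-abcd-11111111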

Note

Chapter 14, Working with Access Policies, deals with SCPs in depth.

A second mitigation technique, besides removing the ability to turn off the CloudTrail service, is enabling CloudTrail log file integrity validation.

This can be accomplished from the console by choosing Yes for the Enable log file validation option when you create a new trail or update an existing one.

You can also enable log file integrity validation using the AWS CLI, with the following command:

aws cloudtrail update-trail --name cloudtrail-trail-name --enable-log-file-validation
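
Once validation has been enabled and digest files begin to be delivered, you can later confirm that the log files have not been modified or deleted; the trail ARN, account ID, and start time below are placeholders:

aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:111122223333:trail/cloudtrail-trail-name \
    --start-time 2024-01-01T00:00:00Z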