Cloud denialism is on the wane, but the most persistent excuses enterprises give for avoiding public cloud services remain loss of control, security and visibility. These concerns have been amply addressed and debunked, both by the cloud providers themselves and by independent analysts, and as we pointed out over a year ago, the “folded arms gang” of cloud resistors is shrinking as the services prove their value and integrity. But IT lives by the Cold War adage “trust, but verify,” and no organization should blindly deploy applications on a cloud service without a complete monitoring and auditing program. The cloud, however, requires rethinking traditional procedures since, unlike on-premise data centers, users don’t run the physical infrastructure. Fortunately, AWS has you covered.
Like every other administration function on AWS, when it comes to security monitoring and auditing, there’s a service (actually, several) for that, complete with APIs, scriptable command line interfaces (CLIs) and management consoles, all of which make them supremely automatable and extensible. As we’ll see, automating AWS security monitoring and auditing isn’t hard when you know the right tools.
The foundation of every security audit or forensic analysis is a log trail of activity. All major AWS services include logging features, but as this AWS white paper describes, for security purposes the most important items to log, collect and analyze include:
- CloudTrail management activity: CloudTrail records all AWS API calls, making it very useful for monitoring access to the management console, CLI usage and programmatic access to other AWS services.
- CloudFront access: CloudFront is the AWS CDN for Web content and it can be configured to log detailed information about every user request. This could lead to information overload, but is useful for certain content.
- RDS databases: RDS logs console, CLI and API activity, including items such as query errors and performance metrics.
- S3 server access and bucket policies: S3 can record changes to bucket and object policies and details of every access request, including requester, bucket name, request time, action taken, response status, and error code, if any. It can also log object expiration and scheduled removal.
CloudTrail provides the key input for security audits since it records all administrator activity, such as changing policies on an S3 bucket, starting and stopping EC2 instances and changing user groups or roles.
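As a sketch of what an audit script might extract from such a trail, the snippet below flattens a CloudTrail record (the JSON structure CloudTrail delivers to S3) into the who/what/when/where fields an auditor typically starts from. The sample record is hypothetical; the field names follow the documented CloudTrail record format.

```python
import json

def summarize_event(record):
    """Reduce a CloudTrail record to the fields most audits start from."""
    identity = record.get("userIdentity", {})
    return {
        "when": record.get("eventTime"),
        "who": identity.get("userName") or identity.get("arn"),
        "what": record.get("eventName"),
        "service": record.get("eventSource"),
        "where": record.get("sourceIPAddress"),
    }

# Hypothetical CloudTrail record, trimmed to the fields used above.
sample = {
    "eventTime": "2016-03-01T12:00:00Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutBucketPolicy",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"userName": "alice",
                     "arn": "arn:aws:iam::111122223333:user/alice"},
}

print(json.dumps(summarize_event(sample), indent=2))
```

In practice you would feed this function the records pulled from the CloudTrail S3 bucket or the CloudTrail API rather than a hand-built dict.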
From Events to Configurations
Logging provides a detailed record of all admin activity, but it’s also useful to have a comprehensive summary and history of your AWS resources and configurations. Again, there’s a service for that: AWS Config provides a detailed inventory of EC2 instances, their configurations and associated block (EBS) and network (VPC) resources. Config records changes and can send notifications via SNS (Simple Notification Service). Much like a version control system, Config can display the state of your AWS infrastructure at any point in time.
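Conceptually, a Config change record is a before/after pair of configuration snapshots. The sketch below illustrates the diffing idea with plain dicts standing in for Config’s configuration items (the attribute names here are assumptions for illustration, not Config’s actual schema):

```python
def diff_config(before, after):
    """Return {key: (old, new)} for every attribute that changed."""
    changed = {}
    for key in set(before) | set(after):
        if before.get(key) != after.get(key):
            changed[key] = (before.get(key), after.get(key))
    return changed

# Hypothetical snapshots of a security group's configuration.
before = {"groupName": "db-sg", "ingressPorts": [3306]}
after  = {"groupName": "db-sg", "ingressPorts": [3306, 22]}

print(diff_config(before, after))  # {'ingressPorts': ([3306], [3306, 22])}
```

Seeing port 22 appear on a database security group is exactly the kind of change an auditor would want flagged.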
Combining Config with CloudTrail and logs from other AWS services allows auditors to correlate configuration changes, such as access policies for an instance or storage bucket, with specific events, including details like the username, source IP and other actions that happened around the same time. The following example illustrates how Config and CloudTrail combine in the forensic analysis of AWS systems.
| Finding | Follow-up questions |
| --- | --- |
| Configuration report shows the wrong security policies for a particular database. | When did the DB policy change? Who made the change? What specifically happened (APIs used, via Web console or CLI/API)? |
| How has the new security policy affected relationships with dependent resources? | Were changes made to related services at about the same time? If so, who, what, where? |
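The walk-through above boils down to a time-window join between a Config change and CloudTrail events. A minimal sketch, using hypothetical event dicts with ISO-8601 timestamps:

```python
from datetime import datetime, timedelta

def events_near(change_time, events, window_minutes=15):
    """Return the events within +/- window_minutes of a Config change."""
    t = datetime.strptime(change_time, "%Y-%m-%dT%H:%M:%SZ")
    window = timedelta(minutes=window_minutes)
    hits = []
    for e in events:
        et = datetime.strptime(e["eventTime"], "%Y-%m-%dT%H:%M:%SZ")
        if abs(et - t) <= window:
            hits.append(e)
    return hits

# Hypothetical CloudTrail events; only the first is near the change.
trail = [
    {"eventTime": "2016-03-01T12:03:00Z",
     "eventName": "AuthorizeSecurityGroupIngress", "user": "alice"},
    {"eventTime": "2016-03-01T09:00:00Z",
     "eventName": "StartInstances", "user": "bob"},
]

print(events_near("2016-03-01T12:00:00Z", trail))
```

The surviving events answer the “who, what, where” questions in the table: the user, the API called and (in a full record) the source IP.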
In sum, AWS Config does four things:
- aggregates configuration and change management records
- provides AWS resource inventory
- records configuration history
- triggers configuration change notifications
Using AWS logs: monitoring, alerts, reports
Collecting all the relevant data isn’t enough; you need a way to automatically monitor, measure, act on and visualize it. That’s where CloudWatch comes in; it’s the monitoring and reporting engine for AWS resources and log files. Like all AWS services, CloudWatch is programmable via an API/SDK and CLI, and it can be used both to trigger real-time alerts, such as resource utilization over a set threshold, and to chart historical metrics, like CPU utilization. Indeed, since CloudTrail and other logs can feed CloudWatch, you can track CloudTrail events alongside those from the operating system, applications, or other AWS services that are sent to CloudWatch logs.
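To make the alarm model concrete: a CloudWatch-style alarm fires when a metric breaches a threshold for some number of consecutive evaluation periods. The function below mimics that logic locally for illustration; it is a hedged sketch of the concept, not the CloudWatch API.

```python
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed threshold."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

# Hypothetical CPU-utilization samples (percent), oldest first.
cpu = [42.0, 55.3, 91.2, 93.8, 95.1]

print(alarm_state(cpu, threshold=90, periods=3))  # ALARM
```

In a real deployment you would configure the equivalent rule in CloudWatch itself and let SNS deliver the notification, rather than evaluating metrics by hand.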
Although AWS security and event monitoring tools are quite different from those used on premise, the system design strategy is the same: aggregate log data into a single repository, use software to monitor, flag anomalies, measure and chart metrics and aid in forensic, post hoc analysis. CloudTrail, CloudWatch and the logging capabilities of each AWS service form the data input, S3 is typically used for persistent storage and CloudWatch and third-party software do the data analysis.
AWS + Third-Party Software: Better together
Although CloudTrail provides a good set of basic features, it can’t match the sophistication of dedicated log and operational analysis software. Popular products from Alert Logic (Log Manager), Logentries, Loggly and Splunk are available through the AWS Marketplace and mirror the features of their on-premise counterparts. These are deployed via a plug-in service running on AWS. For example, the Splunk Add-on collects events, alerts, performance metrics, configuration snapshots, and billing information from CloudWatch, CloudTrail, and Config, along with generic log data stored in S3 buckets. The service then feeds Splunk Enterprise, which can be deployed as a self-managed service on AWS using a Splunk-supplied AMI or as SaaS from Splunk.
Although AWS and Marketplace third parties provide an ample toolchest for building an automated cloud monitoring and auditing system, putting one together still requires some effort and expertise. The AWS documentation, white papers and re:Invent presentations provide ample information on the details. Organizations that don’t have the skills or time for a DIY project should look for a managed service provider like 2nd Watch or Datapipe that can both design and operate complex AWS infrastructure.
By exploiting AWS’s inherent management policies, its secure infrastructure and the many logging and analysis services available, IT leaders may find themselves agreeing with the CTO of NASA’s JPL, who said he believes its systems can be more secure in the AWS cloud than in NASA’s own data centers.