The transition of corporate infrastructure into software-defined private clouds has transformed IT automation from an aspirational goal into an existential imperative. The move from treating servers, storage and network gear as carefully managed individual units to treating them as interchangeable resources (the familiar ‘pets to cattle’ metaphor) has spawned the DevOps movement and a host of related automation tools that allow infrastructure to be managed as code rather than as unique physical instances. Indeed, public cloud services like AWS, Azure and Google, each running millions of systems, would be impossible without extreme automation. Yet using the same set of tools across several sources of infrastructure is challenging: each cloud platform has its own management console and interfaces, and most organizations now use both internal systems and public IaaS.
Amazon encapsulates and exposes its automation tools in APIs, and although it offers several management and application orchestration services like [CloudFormation] and [OpsWorks], these work only within AWS. One of the most popular DevOps automation packages is Chef, an open source project that has gone commercial. Over [half the Fortune 500] use the supported version of Chef on their internal infrastructure, and the company has 750 customers in all. Many of these organizations also use AWS, so they need a way to integrate infrastructure automation between the two. Since Chef is also available on the major IaaS platforms, it is an ideal solution.
Chef Deployment Decision: On-Premises or In the Cloud
A Chef deployment encompasses three elements with an optional fourth:
- Chef server (control hub for one or more application environments)
- Workstations (used to develop configuration recipes)
- Nodes (the systems running a particular application)
- Chef Analytics (optional: a monitoring and reporting system that logs, audits and reports on Chef server activity)
Organizations that have already made the DevOps transition to infrastructure as code will most likely have all four Chef elements installed on-premises. For them, the goal is adding AWS nodes to an existing workload pool. In contrast, those just starting out with infrastructure automation will need a Chef server. There are three options:
- self-managed using a [pre-packaged download] and private server
- self-managed on AWS using either an [AMI from the AWS Marketplace] or a manual install of open source Chef onto EC2
- SaaS using the [hosted Chef service]
We’ll focus on the pure AWS solution, where all elements (Chef server, analytics, workstations, nodes) are EC2 instances, although the developer workstations could just as easily run standalone on developers’ own PCs. The basic workflow for controlling cloud resources is straightforward: developers use pre-packaged cookbooks and custom code to build configuration recipes, which are uploaded to the Chef server; the server then directs the Chef client to deploy and configure cloud-resident nodes.
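To make the recipe step concrete, here is a minimal sketch of what a cookbook's default recipe might look like. The cookbook structure, template name and package choice are illustrative, not from any particular deployment:

```ruby
# recipes/default.rb — minimal illustrative Chef recipe.
# On each chef-client run, the node converges to this declared state.

# Install nginx from the platform's package manager.
package 'nginx'

# Render a site config from a template shipped in the cookbook
# (default-site.erb is a hypothetical template file).
template '/etc/nginx/sites-available/default' do
  source 'default-site.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'
end

# Make sure the service starts at boot and is running now.
service 'nginx' do
  action [:enable, :start]
end
```

The key point is that the recipe declares desired state rather than imperative steps; the Chef client on each node does the work of converging to it.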
Organizations with an existing Chef deployment can access and control EC2 nodes in a couple of ways. The best option, particularly for those with multiple cloud workloads, perhaps spread across different availability zones, and a commensurately deeper understanding of AWS, is a VPC connected to the data center over a private, encrypted link. With a VPC, the EC2 nodes sit on a private subnet, so the Chef server can reach them just like any other internal server.
Another option is to access EC2 instances over SSH using the Chef Knife CLI. Knife can manage nodes, cookbooks and recipes, user roles, Chef client installations [and much more]. Controlling EC2 instances requires installing the knife-ec2 plugin on Chef workstations and opening an SSH port in your AWS configuration ([step-by-step details here]). Once configured, developers can start, stop and list EC2 instances, configure and run new instances as Chef nodes, and apply Chef recipes to one or more nodes.
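Once the plugin is installed and AWS credentials are configured, the day-to-day commands look roughly like the sketch below. The AMI ID, key pair, instance ID and run list are placeholders, and exact flag names can vary between knife-ec2 versions:

```shell
# Install the knife-ec2 plugin on a Chef workstation.
chef gem install knife-ec2

# List EC2 instances visible to your credentials.
knife ec2 server list --region us-east-1

# Launch a new instance and bootstrap it as a Chef node,
# applying a (hypothetical) run list on first converge.
knife ec2 server create \
  --image ami-0abcdef1234567890 \
  --flavor t2.micro \
  --region us-east-1 \
  --ssh-key my-keypair \
  --run-list 'recipe[webserver]'

# Delete the node and terminate its instance when done.
knife ec2 server delete i-0123456789abcdef0 --purge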
Running Chef Server on AWS
Cloud natives that only want to control AWS workloads can stop right here, since they probably don’t even need a Chef server. AWS includes OpsWorks, an application management service based on Chef and fully compatible with Chef recipes, as a standard feature, meaning you can apply Chef recipes to any EC2 instance. However, it doesn’t provide the flexibility of hosted or self-managed Chef to control resources across clouds; for that, you need to run Chef server itself.
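For those going the OpsWorks route, stack creation can be scripted from the AWS CLI. A hedged sketch follows; the stack name is arbitrary and the two role ARNs are placeholders for the service role and instance profile created in your own account:

```shell
# Sketch: create an OpsWorks stack whose instances can run
# Chef recipes (ARNs below are account-specific placeholders).
aws opsworks create-stack \
  --name "web-stack" \
  --stack-region us-east-1 \
  --service-role-arn arn:aws:iam::123456789012:role/aws-opsworks-service-role \
  --default-instance-profile-arn arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role
```

Layers added to the stack can then reference custom Chef recipes for their setup, configure and deploy lifecycle events.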
The most convenient option is a [pre-packaged AMI] from the AWS Marketplace that takes care of the porting and installation details and comes as a fully supported service. Of course, convenience and support come at a price, in this case about a 25% markup over the base EC2 rate for the Chef server instance. Alternatively, you can [download] open source Chef and install it on your choice of Ubuntu or Red Hat servers (you’ll need to stand these up as well, but it’s easy to [import an existing VM image] using the AWS CLI).
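The self-managed install on an Ubuntu EC2 instance follows the standard Chef server pattern, sketched below. The package version, user names and passwords are illustrative; check Chef's downloads page for the current release:

```shell
# Download and install the Chef server package on Ubuntu
# (version and URL shown are illustrative, not current).
wget https://packages.chef.io/files/stable/chef-server/12.19.31/ubuntu/16.04/chef-server-core_12.19.31-1_amd64.deb
sudo dpkg -i chef-server-core_*.deb

# Run the initial configuration, which sets up all services.
sudo chef-server-ctl reconfigure

# Create an admin user and an organization; this also generates
# the keys that workstations use to authenticate.
sudo chef-server-ctl user-create admin Admin User admin@example.com 'P@ssw0rd!' \
  --filename admin.pem
sudo chef-server-ctl org-create myorg 'My Organization' \
  --association_user admin --filename myorg-validator.pem
```

The generated `.pem` files are then copied to workstations and referenced from the knife configuration so Knife can talk to the new server.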
Running Chef server on AWS allows managing individual EC2 instances or machine clusters and can exploit existing cookbook recipes to manage other AWS resources, including Security Groups, Elastic Load Balancers (ELB) and Elastic Block Store (EBS) volumes. It’s even possible to [integrate Chef with CloudFormation] to manage and update autoscaling groups.
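As a hedged sketch of that idea, a recipe built on the community `aws` cookbook might create and attach an EBS volume to the node it runs on and register the node with an existing load balancer. The resource names follow that cookbook, but the volume size, device path, load balancer name and credential handling here are assumptions:

```ruby
# Hypothetical recipe using the community 'aws' cookbook.

# Create a 50 GiB EBS volume in this node's availability zone
# and attach it as /dev/xvdf.
aws_ebs_volume 'data_volume' do
  size              50
  device            '/dev/xvdf'
  availability_zone node['ec2']['placement_availability_zone']
  action [:create, :attach]
end

# Register this node with an existing ELB named 'frontend-lb'.
aws_elastic_lb 'frontend' do
  name   'frontend-lb'
  action :register
end
```

Because the recipe runs on the node itself, autoscaled instances can attach their own storage and join the load balancer as they come online.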
Integrating Chef with AWS is relatively easy and extends Chef’s powerful capabilities into the cloud. Of course, Chef isn’t the only configuration management tool, so organizations embarking on an infrastructure automation strategy would be wise to evaluate other options like [Ansible], [Puppet] and [SaltStack]. Each works with all the major IaaS vendors and can provide a common platform for consistent application/system configuration, deployment and lifecycle management.