Monthly Archives: July 2015

Composite Applications and Next-Gen APM: A Fusion of Dev And Ops

By | July 22, 2015

Applications used to be so simple: some self-contained code linked with a few system libraries that accessed local data: a distinct bundle where everything required was on a single machine. Client-server software complicated things a bit, but the demarcation was clean and the hub-spoke design pattern straightforward. No longer. Apps are now composite mashups, often mixing custom and packaged code, accessing both public and private data sources and using APIs to access multiple Web services that each run on highly distributed cloud infrastructure. Gone are the boundaries between an application, its associated infrastructure and data. Of course, this fusion of apps and infrastructure is transparent and irrelevant to users, but to developers, release managers, business owners and IT, it’s a nightmare of complexity that makes monitoring, troubleshooting and maintaining application performance frustratingly difficult. Indeed, as I detail in this column, the situation requires a new approach that mixes traditional application performance monitoring (APM) and IT operational intelligence (OI).


Maintaining application performance is a looming problem since as I wrote last year, applications are central to business success. As CA’s CEO put it at the time, “applications now define a business’ relationship with its customers and fuel the productivity of its employees. We now live in a world where customers are no longer just loyal to the brand or product or service. Instead, they are loyal to the complete experience a brand delivers. And that experience is delivered by software.” No one cares if the code on their smartphone is working flawlessly if an application can’t access the data sources or remote cloud services required to view information or complete a transaction. Furthermore, the composite nature of modern apps means there’s a wider array of stakeholders in application performance that most importantly includes customers. What’s the first thing many Gmail or Google Drive users do when they can’t access the service? Hit the Google Apps Status Dashboard. This means APM is no longer just the domain of developers.

Source: Splunk


As the lines between applications and infrastructure have blurred, the worlds of APM and operational intelligence have collided, and a couple of obvious strategies have emerged for meeting the needs of next-gen APM. Given the dynamic nature of the problem and the technology, it’s unclear how the specifics will evolve; however, it’s obvious that organizations need a new, comprehensive approach to APM that reflects the same blending of development and operations that spawned the DevOps movement.

Read the rest of the column where I explain how next-gen APM looks a lot like this and represents the next big data opportunity.



Smart Consolidation: Make Branch Offices Server Free with WAN Appliances

By | July 20, 2015

A version of this article, with complete coverage of branch office WAN appliances, use cases and recommendations, is available in this Channel Partners report.

With the pressure on IT to deliver new services without bigger budgets, organizations large and small are under pressure to cut costs and increase efficiency while simultaneously improving application performance and security. It’s a tall order, but an effective strategy borrows a page from the cloud provider playbook by consolidating data center operations to very few locations. The plan exploits economies of scale and new technology like Moore’s Law price/performance improvements in hardware, new infrastructure management software and steady increases in WAN bandwidth. Moving servers, and more importantly data, out of branch offices can lower costs, particularly for operations and support, and improve disaster readiness, data resilience and information security.

Branch offices have a diverse employee population with equally varied needs. Source: Forrester


Yet the strategy hinges on the proper WAN design and associated remote office/branch office (ROBO) infrastructure: inadequately provision remote users and they can easily turn into unproductive second-class citizens. Fortunately, there are many technological tools available to IT partners that can stitch a distributed organization into a digital business run on consolidated IT infrastructure. In earlier reports we outlined new options for WAN connectivity and cloud-based WAN services [links] that provide key elements of the network design. Here we focus on the ROBO equipment and options for turnkey, remotely manageable systems that require little more from local staffers than the ability to plug in a couple cables.

Start with the WAN

As I detailed in an earlier report, there are a plethora of services suitable for stitching together a distributed enterprise. “Rather than settling for expensive T-1 links, where it makes sense, network managers can opt for a business broadband service and VPN gateway providing 30 times the throughput at one-third the cost. Or why not ditch the VPN entirely and use a cloud service to build an enterprise WAN over any Internet connection, including employee smartphones using hotel Wi-Fi?” In sum: Get the fastest, least expensive WAN connection you can for each site, use a backup service that needn’t be of the same caliber (even wireless), but that’s sufficient in a pinch, and optimize the daylights out of both.
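The "30 times the throughput at one-third the cost" claim quoted above is easy to sanity-check with back-of-the-envelope arithmetic. The prices below are illustrative assumptions, not actual carrier quotes:

```python
# Back-of-the-envelope comparison of cost per Mbps for a legacy T-1 link
# versus business broadband. Monthly prices are illustrative assumptions.
T1_MBPS, T1_MONTHLY = 1.5, 300.0                  # ~1.5 Mbps for ~$300/month
BROADBAND_MBPS, BROADBAND_MONTHLY = 50.0, 100.0   # ~50 Mbps for ~$100/month

def cost_per_mbps(mbps, monthly):
    """Monthly cost per megabit of throughput."""
    return monthly / mbps

t1 = cost_per_mbps(T1_MBPS, T1_MONTHLY)
bb = cost_per_mbps(BROADBAND_MBPS, BROADBAND_MONTHLY)
print(f"T-1: ${t1:.0f}/Mbps, broadband: ${bb:.0f}/Mbps")
print(f"Throughput ratio: {BROADBAND_MBPS / T1_MBPS:.0f}x, "
      f"cost ratio: {BROADBAND_MONTHLY / T1_MONTHLY:.2f}x")
```

With these assumed prices, broadband delivers roughly 33 times the bandwidth at a third of the monthly cost, in line with the figures quoted in the column.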

WAN Optimization appliances enable LAN-like performance.  Source: Cisco


Provisioning the right pipes is just the start. WAN optimization software ensures that they are used efficiently, squeezing every bit of usable performance out of each megabit. It does this by using various techniques to minimize traffic, reduce latency, prioritize real-time and mission-critical applications, optimize chatty protocols and accelerate applications that weren’t designed for WAN links. As the report details, as WAN optimization appliances have matured, they have taken on features like link load balancing, VPN termination and file/Web caching that were previously implemented in other appliances, making them single-box solutions for many branch office situations. For those needing some local horsepower, converged appliances incorporate more powerful CPUs and a hypervisor.
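Of the traffic-minimization techniques mentioned, compression is the easiest to illustrate. This standard-library sketch is a gross simplification (real appliances add byte-level deduplication, caching and protocol-specific optimizations), but it shows why chatty, repetitive protocol traffic is such fertile ground:

```python
import zlib

def wan_savings(payload: bytes, level: int = 6) -> float:
    """Fraction of WAN bandwidth saved by compressing a payload in flight."""
    compressed = zlib.compress(payload, level)
    return 1.0 - len(compressed) / len(payload)

# Chatty, repetitive protocol traffic compresses extremely well.
chatty = b"GET /api/status HTTP/1.1\r\nHost: branch.example.com\r\n\r\n" * 200
print(f"Bandwidth saved on repetitive traffic: {wan_savings(chatty):.0%}")
```

Already-compressed payloads (video, encrypted streams) see little or no benefit, which is why appliances classify traffic before deciding how to optimize it.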

The report details several business recommendations:

  • Updating and hardening WAN links can eliminate performance bottlenecks and network downtime.
  • Optimizing WAN traffic increases the network’s usable capacity, performance and reliability.
  • Provisioning ROBO locations requires the proper hardware for accessing remote data, plus centralized data storage.

Video stream splitting allows an appliance to serve multiple copies of the same content from a single network stream. Source: Riverbed


Security’s Silver Bullet Is Virtualization: Bromium, Microsoft, VMware Show How

By | July 15, 2015

Virtualization has long been used to wring efficiency out of over-sized, under-used systems, but isolating applications and operating systems from the underlying hardware also produces immense flexibility that cloud services like AWS, Azure and Google Cloud exploit to deliver infrastructure on demand. Now virtualization has become instrumental to solving the most vexing and serious problem facing IT providers and users: security. The trend toward virtualization-enhanced security arguably started last year when VMware updated its network virtualization product, NSX, to support micro-segmentation, stressing its security applications and advantages. But the use of virtualization to create precise zones of protection isn’t limited to the network, as Microsoft and Bromium recently demonstrated in announcing support for the latter’s micro-virtualization technology in Windows 10.

The essence of virtualization-enhanced security is the ability to arbitrarily shrink the OS and network attack surface of an application to the point that it is completely isolated from everything else on a system. I covered this last year with the NSX rollout. As I detail in this column, unlike traditional VMs that run with OS-level granularity, Bromium has developed a microvisor, a lightweight, highly secure hypervisor that automatically creates a new micro-VM for every task on a system, whether a browser tab, media stream, Word document or cloud file share. In that sense, micro-VMs resemble Docker containers, but unlike software-based application isolation, they exploit hardware security features like Intel VT to protect the underlying OS, network stack and peripherals.


Source: Bromium

Source: Microsoft


The Microsoft announcement ensures that Bromium can be easily and seamlessly integrated with Windows 10 clients and management systems. Bromium co-founder Simon Crosby is encouraged that Microsoft is adopting a security architecture in which virtualization is a key element, and hearing him describe it, Windows 10 plus Bromium will be the most secure, bulletproof client to date. Yet Microsoft had already embraced virtualization as a security tactic: Windows 10 and upcoming server releases incorporate a hardware-enforced system sandbox called Virtual Secure Mode (VSM) to protect key parts of the OS, including security tokens and OS boot code, from attack.

Source: VMware


Read on to understand the beauty of micro-virtualization, whether applied to a software task or a network segment, namely its software-enabled granularity, and how network micro-segmentation fits into the picture. Indeed, the combination of system and network micro-virtualization techniques may have created the Goldilocks Zone: an ideal mix of application isolation, situational awareness and hardware-reinforced security.

Bimodal IT Explained: It Doesn’t Imply Bipolar Organizations, but the Path to IT Transformation

By | July 13, 2015

Never underestimate a buzzword’s power to frame the discussion. As I recently discussed, the term bimodal IT has captured the imagination and polemical energy of technology commentators and like many IT discussions in the age of 140-character commentary, it often degenerates into polarized, all-or-nothing positions. Using a variant of the classic reductio ad absurdum strategy, critiques of bimodal IT characterize it as a path to bipolar IT, a separate, but unequal partitioning. In the dynamic mode 2 corner we have hip, swashbuckling cloud gurus mashing together exciting new applications out of myriad cloud services, while in the dingy mode 1 corner we have the conservative old guard serving out their golden years by tending to legacy systems that (ideally) hum along until both they and their caretakers, like all good soldiers, just fade away. While it makes good rhetoric, this isn’t the way successful IT organizations navigate foundational change agents like the cloud.


As the column details, the bimodal model helps to focus IT attention on three areas of transformation:

  1. New ways of designing, developing and deploying applications and services using cloud services, distributed, fault-tolerant system architecture and agile, DevOps methodologies.
  2. Business applications amenable to new architectures, iterative feature development and fast release cycles.
  3. IT skills required to work within the agile/cloud environment and successfully implement the resulting applications and services.

Bimodal partitioning allows IT to locally optimize placement of people, processes and investments for services with very different requirements.

Indeed, the bimodal path to cloud and IT transformation is a reflection of business enlightenment, namely the realization that building and operating data centers is not what most companies are in business for.

But the story of bimodal’s bridge to the cloud continues…here.


PaaS Product Survey: Focus on Features and Ecosystem, Not Cost

By | July 1, 2015

A previous version of this article appeared on Tech Target SearchCloudApplications as Comparing features, tiers and pricing of PaaS vendors

Cost isn’t the most important factor when evaluating cloud application platforms, but you must understand the cost models.

Cloud services are supposed to make things easier for IT, application developers, business buyers and software users, and they largely live up to that ideal. However, when a service doesn’t fit into a standard category like compute resources or storage space, or as products themselves get more complex, nuanced and customized, cloud services become harder to evaluate and compare. Such is the state of enterprise PaaS, and just as we found when analyzing the costs of IaaS app deployments, there are no simple answers.

At least the IaaS market enjoys relatively standard units of consumption — virtual servers of various sizes, object storage, databases, network transfers — typically priced by the hour or month. In contrast, PaaS offerings are much more varied in features offered, customization options and vendor pricing models. Vendors seem to understand the potential for confusion, with most gravitating toward SaaS-like tiered services with monthly pricing, but as we’ll see, even these can have capacity-based pricing options. Yet choosing the best PaaS is seldom purely a matter of price shopping; instead it requires analyzing your required app development features, estimating the size of the deployment and finding a development environment that fits your development team’s skill sets, work style and long-term strategy, since porting PaaS applications between platforms isn’t easy.

PaaS promises ‘just add water’ convenience by providing a pre-built, managed app dev framework and runtime infrastructure. Indeed, PaaS has elements of both SaaS (for the development environment, toolchain, project and source code management) and IaaS (for the deployment infrastructure), with the resulting hybridization of pricing models: service tiers billed monthly (SaaS) and resources consumed billed hourly (IaaS).
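The hybridization of pricing models described above can be sketched as two estimator functions, one per billing style. The rates below are hypothetical placeholders, not any vendor’s actual price list:

```python
def tiered_monthly_cost(users: int, per_user_month: float) -> float:
    """SaaS-style tier: a flat per-user/month subscription."""
    return users * per_user_month

def metered_monthly_cost(instances: int, hourly_rate: float,
                         db_monthly: float = 0.0, hours: int = 730) -> float:
    """IaaS-style metering: instances billed by the hour (≈730 hours/month),
    plus fixed-price add-ons such as a managed database."""
    return instances * hourly_rate * hours + db_monthly

# Hypothetical examples in the spirit of the figures cited below:
print(tiered_monthly_cost(users=20, per_user_month=25.0))    # 20 seats
print(metered_monthly_cost(instances=10, hourly_rate=0.10))  # 10 instances
```

The practical consequence: tiered pricing is predictable but insensitive to load, while metered pricing tracks usage but requires estimating instance counts and hours up front.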

PaaS Vendor Scorecard

The easiest way of understanding PaaS pricing is by looking at the service features, tiers and pricing models of the major PaaS vendors. As the following list illustrates, PaaS is a diverse market composed of pure-play products like CloudFoundry, development platforms bolted on to SaaS applications (Salesforce 1) and IaaS services augmented by app dev features like AWS CodeDeploy, CloudFormation and Elastic Beanstalk.

  • Salesforce 1: Salesforce actually has two significant PaaS offerings — Force.com, designed for internal apps for employees, and Heroku for external customer apps — bundled under the Salesforce 1 platform. Force.com provides tools tailored for both business users needing a point-and-click UI to automate workflows and build dashboards and developers using its Java-like Apex language, and includes a cloud dev/test/deploy runtime environment and central management console. The tiered pricing model is based upon the number of custom app objects available to each user/developer and runs from $25/user/month (10 objects) to $150 (2,000 objects). Salesforce doesn’t publish pricing of Salesforce 1 bundles that include Heroku, but does offer standalone instances priced according to the number of compute containers and associated database. For example, 10 medium compute dynos and a 64GB premium database run about $900/month.


  • Microsoft Azure: Azure App Service is Microsoft’s new (released in late March) PaaS bundle that includes features designed for Web and mobile apps, middleware (what Microsoft calls Logic Apps) and API management. App Service has a tiered, monthly subscription pricing model that includes a ‘free’ level for up to 10 apps on shared infrastructure where the only charge is for outbound data. Enterprise apps will generally want the Basic, Standard or Premium tiers that use dedicated servers. Pricing is by number of servers per tier level per month, plus bandwidth. For example, 10 standard one-core servers plus 100GB of traffic run about $750/month. Azure, like AWS, is an umbrella for all of Microsoft’s cloud service offerings, meaning Web or mobile apps built with App Service can be scaled using IaaS services like additional Web and Worker servers, SQL databases and storage resources (object, block, files, etc.).
  • Google App Engine: Like Azure, App Engine has a pricing model based on the usage of compute instances, storage and outbound traffic; however, just like Google Cloud IaaS, the model is more granular, with instances and high-speed cache memory billed by the hour and SQL databases in chunks of 100K I/O operations. A further complication is the choice of app frontend instance types spanning a 4:1 range of capacity based on allocated memory and virtual CPU performance, with a concomitant 5- to 20-cent-per-hour price differential. You can’t criticize Google for embracing choice and customization, but that means developers must have a thorough understanding of application resource requirements before using its pricing calculator. For example, 10 instances using a 2GB memcache pool, 1TB of storage and 50GB of outgoing network traffic run $135/month.

  • IBM Bluemix: Like Google App Engine, Bluemix charges for app runtime resource usage by the hour; however, IBM has simplified the model by combining compute and memory into a single GB-hour metric. Optional services are either charged a fixed monthly price or metered. Again, developers must do their homework before using IBM’s app price estimator, since Bluemix has more options than a Chinese menu.
  • The Rest: Many other PaaS vendors use a simplified variant of the instance/hour pricing model with predefined instance sizes. For example, Red Hat OpenShift segments instances by the number of virtual CPUs and associated memory allocation. Pivotal simplifies the Cloud Foundry model even further with four-tiered monthly pricing based on the memory per app instance. CenturyLink App Fog has perhaps the simplest approach, with five plans designed for a maximum number of apps and delineated by the total amount of app memory, databases and storage.

Sidebar: Important PaaS Vendors

The following is a list of key PaaS vendors and products:

  • Salesforce: Salesforce 1; Force.com, Heroku
  • Microsoft Azure: App Service, Cloud Services
  • AWS: various application development, deployment and management services
  • Google: App Engine
  • IBM: Bluemix
  • Red Hat: OpenShift
  • SAP: Hana Cloud Platform AppServices
  • VMware/Pivotal: CloudFoundry
  • ActiveState: Stackato
  • CenturyLink: App Fog
  • Engine Yard
  • Mendix: App Platform
  • Progress: Pacific suite


Selecting a PaaS is more like selecting a house than buying a TV: there are a plethora of options, inconsistent features between services and various, often complex pricing models, and you’ll be living with the decision for a long time. Here are a few tips:

  • Developers in organizations already using AWS or Azure should start there. Business users wanting to exploit data on Salesforce and other SaaS platforms should try Force.com.
  • Those starting from scratch who know exactly what they want and expect (or hope) to rapidly scale should study Google App Engine. Others that are just investigating, not yet fully committed to PaaS, and want an easy way to do real-world testing should try a pure-play PaaS like App Fog, Cloud Foundry or OpenShift, all of which have free or very low-cost entry-level service tiers.

How to Use CloudTrail to Guard AWS Applications

By | July 1, 2015

A previous version of this article appeared on TechTarget SearchAWS as Police your public cloud with AWS CloudTrail

CloudTrail is a powerful tool for monitoring and auditing AWS deployments, but because it is a relatively new service, introduced in late 2013, many AWS users may not be aware of its capabilities and potential. As we summarized in this article on AWS logging tools, “CloudTrail records all AWS API calls, making it useful for monitoring access to the management console, CLI usage and programmatic access to other Amazon services. CloudTrail also provides key input for security audits, recording all administrator activity such as policy changes on an S3 bucket, starts and stops on Amazon Elastic Compute Cloud (EC2) instances and changes to user groups or roles.” CloudTrail records data to an S3 bucket in JSON format to facilitate parsing, filtering and data analysis; it can trigger alerts via the Simple Notification Service (SNS), can be accessed by custom applications using APIs and can feed other logging and operational analysis systems like AWS CloudWatch, Alert Logic, Loggly and Splunk. As we discussed in our earlier article, having a detailed record of API calls is useful for troubleshooting, security forensics and policy compliance audits. Here’s how to get started.

CloudTrail Workflow

Like all AWS services, CloudTrail is set up and configured using the Web management console or command line interface (CLI). Configuring the service primarily entails specifying an S3 bucket to store the logs (by default the UI will create a new bucket, or you can select an existing one) and a couple of options. Once enabled, CloudTrail starts recording events, which can be viewed via the management console and programmatically queried using the LookupEvents API. You can optionally create an SNS topic that receives notifications when a new log file has arrived.
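The same setup can be scripted with the boto3 SDK. This is a minimal, hedged sketch: the trail and bucket names are hypothetical, AWS credentials and a region must already be configured, and the bucket needs a policy granting CloudTrail write access (the console handles that automatically; the API does not):

```python
def trail_settings(name: str, bucket: str, sns_topic: str = None) -> dict:
    """Build the keyword arguments for CloudTrail's create_trail call."""
    settings = {"Name": name, "S3BucketName": bucket}
    if sns_topic:
        settings["SnsTopicName"] = sns_topic  # notify on each new log file
    return settings

def enable_trail(name="management-events", bucket="my-cloudtrail-logs"):
    """Create a trail and start recording (names are hypothetical)."""
    import boto3  # requires configured AWS credentials and region
    client = boto3.client("cloudtrail")
    client.create_trail(**trail_settings(name, bucket))
    client.start_logging(Name=name)  # recording begins only after this call

# enable_trail()  # uncomment to create and start the trail in your account
```

Note that `create_trail` alone does not begin recording; the explicit `start_logging` call is what turns the trail on.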


CloudTrail stores log files in a gzip archive using a standard, hierarchical naming scheme organized by day, making it easy to pull entries for specific time periods or individual entries. Log entries can be retrieved using any S3 access method: console, CLI or API. As mentioned, entries are written in JSON format to simplify post-processing, or can be viewed directly in the browser via an add-on extension like JSON View. JSON format also allows third-party log analysis tools to aggregate, parse and analyze CloudTrail data.
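Post-processing a downloaded log file needs nothing beyond the standard library. A minimal sketch (the `Records`/`eventName`/`eventSource` field names follow the CloudTrail record format; the sample data is fabricated for illustration):

```python
import gzip
import json

def events_by_name(log_bytes: bytes, name: str) -> list:
    """Filter records in a gzipped CloudTrail log file by eventName."""
    records = json.loads(gzip.decompress(log_bytes))["Records"]
    return [r for r in records if r.get("eventName") == name]

# Simulate a downloaded log file (normally fetched from the S3 bucket):
sample = {"Records": [
    {"eventName": "StopInstances", "eventSource": "ec2.amazonaws.com"},
    {"eventName": "CreateBucket", "eventSource": "s3.amazonaws.com"},
]}
log_file = gzip.compress(json.dumps(sample).encode())
print(events_by_name(log_file, "StopInstances"))
```

The same pattern (decompress, parse, filter) is what commercial log analysis tools automate at scale.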


Configuring CloudTrail with SNS allows users to subscribe to a particular log and be notified whenever it is updated; however, topic subscriptions are still managed through the SNS console or API. Since some log files can be quite active, be sure to heed this tip from the CloudTrail documentation:

“Because CloudTrail sends a notification each time a log file is written to the Amazon S3 bucket, an account that’s very active can generate a large number of notifications. If you subscribe using email or SMS, you can end up receiving a large volume of messages. We recommend that you subscribe using Amazon Simple Queue Service (Amazon SQS), which lets you handle notifications programmatically.”

Permissions and Access Controls

Access to logs and other resources CloudTrail uses (SNS topics, S3 buckets, message queues, etc.) is managed through the AWS Identity and Access Management (IAM) system. IAM allows complete control over who can create, configure or delete CloudTrail entries, start and stop logging, and access the buckets that contain log information. More details about IAM are available in this SearchAWS article; however, the IAM policy generator provides an easy interface for creating and editing CloudTrail permissions, including templates for full and read-only access. As per IAM best practices, it’s wise to first create IAM groups like Administrators and Viewers and then add users to the appropriate group. You can also create custom policies using the IAM JSON syntax for special situations, for example to read CloudTrail logs and objects in the associated S3 bucket but not create, update or delete them.
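As a hedged illustration of that last custom-policy scenario, a read-only policy might look like the following. The statement grants only describe/read actions; anything not listed (creating trails, deleting objects, stopping logging) is implicitly denied. The wildcard `Resource` is for brevity; in practice you would scope it to your trail and bucket ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:GetTrailStatus",
        "cloudtrail:LookupEvents",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
```

Attach a policy like this to a Viewers group rather than to individual users, per the best practice above.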

Using CloudTrail with Other Services

CloudTrail’s standard log format and API mean it can feed third-party log analysis tools or a custom-developed application. One example from the AWS Security blog illustrates how to use CloudTrail, AWS Lambda (an event-triggered compute service) and SNS to generate email notifications when certain APIs in your AWS infrastructure are used. In this scenario, Lambda watches the CloudTrail S3 bucket and triggers an SNS notification when specified APIs are logged. SNS then sends a message to every topic subscriber via email, SMS or mobile push.
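That pipeline can be sketched as a short Lambda function. The filtering logic is plain Python; the boto3 wiring, the watched API names and the topic ARN are illustrative assumptions, not the AWS Security blog’s exact code:

```python
import gzip
import json

# Example watch list: APIs that could disable or tamper with auditing.
WATCHED = {"DeleteTrail", "StopLogging", "PutUserPolicy"}

def watched_events(log_bytes: bytes, watched=WATCHED) -> list:
    """Return CloudTrail records whose eventName is on the watch list."""
    records = json.loads(gzip.decompress(log_bytes))["Records"]
    return [r for r in records if r.get("eventName") in watched]

def handler(event, context):
    """Lambda entry point: read the new log object, alert via SNS on hits."""
    import boto3  # available in the Lambda runtime
    s3, sns = boto3.client("s3"), boto3.client("sns")
    rec = event["Records"][0]["s3"]  # S3 put-event shape
    obj = s3.get_object(Bucket=rec["bucket"]["name"], Key=rec["object"]["key"])
    hits = watched_events(obj["Body"].read())
    if hits:
        sns.publish(TopicArn="arn:aws:sns:...:api-alerts",  # placeholder ARN
                    Message=json.dumps(hits, default=str))
```

Keeping the filter separate from the handler makes the interesting logic testable without an AWS account.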

Popular log management and analysis products can also consume CloudTrail logs, combining them with data from other AWS services like Config or OpsWorks and on-premises infrastructure to produce comprehensive usage and security reports. Tracking changes across services and infrastructure allows a product like DataDog to correlate change events with performance metrics to help identify the cause of any degradation or highlight the source of any security incidents.

CloudTrail works with every major AWS service, with support for new products regularly being added (the full list is available here). The only charge for using CloudTrail is the S3 storage, which AWS estimates to be less than $3 per account for most customers. Given how easy it is to set up and the availability of free, open source log analysis software like Graylog2, using CloudTrail on your AWS infrastructure is a no-brainer.

AWS CloudTrail Splunk for Managed Services

Using Splunk to analyze CloudTrail and other AWS log data.

For more information on using Splunk to analyze AWS data see this datasheet.