Monthly Archives: November 2016

A Closer Look At Google Cloud: Don’t Dismiss It

By | November 11, 2016

Google Cloud services have evolved significantly over the past year, with an eye toward the enterprise. But there’s still room for improvement in 2017.

The popularity of Amazon Web Services' public cloud makes it easy to overlook other large, competitive infrastructure as a service options, such as Google Cloud Platform. Most people know Google's cloud offerings through its online productivity software, Google Apps, recently rebranded as G Suite. But the Google Cloud Platform (GCP) services make Google a serious cloud competitor, with an infrastructure as a service option, Google Compute Engine, and a platform as a service option, Google App Engine.

As I describe in this article, while the cloud provider made a series of steps in 2016 aimed at broadening its enterprise appeal, there is still work to be done in terms of integrating Google Cloud Platform services with on-premises legacy workloads.


GCP services

GCP includes a mix of core infrastructure services like virtual machines and storage combined with management, network and application services. Here’s a summary of quite an extensive menu:

Compute: Compute Engine (like AWS EC2), App Engine (a managed PaaS for Web apps and mobile backends, similar to Azure App Service) and Container Engine (Docker container images with cluster management and automation using Kubernetes)

Storage: Cloud Storage (object storage like AWS S3)

Networking: Cloud DNS and Interconnect (high-bandwidth, low-latency private connections through Google POPs; similar to AWS Direct Connect)

Databases: Cloud SQL (managed MySQL, similar to AWS RDS and Aurora), Cloud Datastore (NoSQL like AWS DynamoDB) and Cloud Bigtable (distributed, big data NoSQL, comparable to running Apache HBase on AWS)

Like all IaaS providers, Google layers higher-level services on top of these basic infrastructure building blocks. These include:

App notification: Cloud Pub/Sub (an asynchronous message queue)

Identity management and security: Cloud IAM (user, group and security policy management), Resource Manager (hierarchical control over service bundles and projects), Security Scanner (scans App Engine applications for common web vulnerabilities; similar to AWS Inspector)

Big data analytics: Cloud Dataflow (batch, stream and ETL processing), Dataproc (Hadoop and Spark), Datalab (visualization) and BigQuery (serverless data warehouse for SQL analytics)

Machine learning: model-driven algorithms using TensorFlow, image analysis, speech recognition and natural language processing and translation

Management and Automation: Stackdriver (monitoring, logging and diagnostics), Trace (performance and bottleneck analysis), Deployment Manager (template-based service automation), Cloud Shell (CLI access), Cloud Console (central management GUI) and various service and billing APIs.
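To make the asynchronous messaging model behind Cloud Pub/Sub concrete, here is a minimal in-memory sketch of the publish/subscribe pattern it provides as a managed service. This is an illustration of the pattern only, not the Google Cloud client API: publishers push messages to a named topic, and every subscription on that topic receives its own copy, decoupling senders from receivers.

```python
from collections import defaultdict
from queue import Queue

class Broker:
    """Toy in-memory stand-in for a pub/sub service like Cloud Pub/Sub."""

    def __init__(self):
        # Maps each topic name to the queues of its subscriptions.
        self._subscriptions = defaultdict(list)

    def subscribe(self, topic):
        """Create a new subscription on a topic; returns its message queue."""
        q = Queue()
        self._subscriptions[topic].append(q)
        return q

    def publish(self, topic, message):
        """Deliver a copy of the message to every subscription on the topic."""
        for q in self._subscriptions[topic]:
            q.put(message)

broker = Broker()
billing = broker.subscribe("orders")   # two independent consumers
audit = broker.subscribe("orders")
broker.publish("orders", {"id": 1, "total": 9.99})
print(billing.get())  # each subscriber receives its own copy
print(audit.get())
```

In the real service, the broker is Google's managed infrastructure and subscribers pull (or receive pushes) over the network, but the fan-out semantics are the same.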

As I point out, Google differentiates its cloud service in ways that can save savvy users a lot of money without degrading application performance. I quote one online business that switched from AWS to GCP after a detailed technical comparison that found significant cost savings and performance improvements by moving to Google Cloud.

Follow the Google Cloud Platform blog and you’ll see new services and enhancements every week, but over the past year under the leadership of former VMware CEO Diane Greene, GCP has made a decided push for enterprise customers by beefing up monitoring, logging, automation, identity management and networking features. GCP is also a leader in application containerization, making a technology Google itself has long used to streamline deployments and improve infrastructure efficiency available to public cloud users.


Although customers can run Linux and Windows applications in a VM, GCP doesn't make it easy to integrate with legacy, on-premises virtualization management platforms like VMware or Microsoft System Center, and it isn't designed for cross-cloud environments. As Gartner notes, GCP is designed to allow organizations to “run like Google” and “has a comprehensive vision for, and extensive experience with, how cloud-native applications are developed and managed through the life cycle.”

Google’s cloud vision and expertise, along with tight integration between its infrastructure and application platform services, makes GCP ideal for cloud-native applications, particularly those using big data analytics or machine learning and targeting a broad audience of consumers and business partners, not just internal employees. GCP’s comprehensive set of container and automation features mean it is also an excellent platform for organizations that have adopted DevOps and CI/CD (continuous integration and delivery) processes and microservice-based application architectures. Conversely, GCP is a poor choice for cloud laggards and organizations looking for a place to offload legacy virtual infrastructure and applications.


How-to: Exporting Data from AWS and Avoiding Hotel California

By | November 10, 2016

It’s easy to migrate data to the cloud, but not as easy to get data out. AWS has eased some lock-in concerns with data export and transfer techniques.

Cloud data lock-in is a perennial concern of IT execs who fear that once they move applications and data to an infrastructure as a service provider, technical constraints will make it hard to switch vendors later. In several surveys, IT pros have ranked lock-in among the top inhibitors to cloud adoption, and fears of a vendor such as AWS obstructing data and resource migration have kept many enterprises from taking full advantage of cloud services.

And as I detail in this article, these concerns aren't unjustified: cloud providers make it easy to deploy cloud services, but migrating elsewhere is invariably an afterthought. The worry is that data migration becomes a one-way street, easy to get data in, but requiring far more effort to get it out.

As the leading cloud provider, but also one focused on solving its customers' problems, AWS is aware of these fears and has addressed some of the objections. Although its marketing and documentation naturally emphasize inbound migration of workloads, it hasn't ignored the need for bi-directional data movement and export.


There are several ways to efficiently move large amounts of data to and from AWS, including physically shipping disks (the cloud version of sneakernet), on-premises storage gateway appliances and private network connections. This article outlines some of those techniques, including the AWS Storage Gateway, Direct Connect and AWS Snowball, to make sure AWS doesn't turn into a roach motel for your data. Although AWS has enough data export options to prevent strict lock-in, egress fees still disincentivize large-scale export, so the article concludes with recommendations to minimize the pain and cost of data migration.
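The choice between network export and shipping a Snowball comes down to simple arithmetic: how long the transfer takes over your link, and what the egress bill looks like. The sketch below works through that back-of-the-envelope math. The $0.09/GB rate and 80% link utilization are illustrative assumptions only; real AWS egress pricing is tiered and changes over time, so check current price sheets before deciding.

```python
def transfer_days(data_tb, link_mbps, utilization=0.8):
    """Days needed to move data_tb terabytes over a link_mbps connection,
    assuming the given average sustained utilization."""
    bits = data_tb * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400

def egress_cost(data_tb, usd_per_gb=0.09):
    """Rough egress bill; the flat per-GB rate is an illustrative
    assumption, not current AWS pricing."""
    return data_tb * 1000 * usd_per_gb

# Exporting 100 TB over a 1 Gbps link at 80% utilization:
print(round(transfer_days(100, 1000), 1))   # ~11.6 days on the wire
print(round(egress_cost(100), 2))           # about $9,000 at the assumed rate
```

At these numbers, a multi-week network export of a large archive starts to look worse than a shipped appliance, which is exactly the trade-off Snowball targets.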