Monthly Archives: February 2016

Azure In Your Data Center: Understanding And Positioning Azure Stack

By | February 29, 2016

AWS is undeniably the big kahuna in cloud services, but Microsoft has emerged as a strong number two with the Azure IaaS/PaaS combination, which it is now extending into private clouds via Azure Stack, a packaged version of Azure services that can be deployed in private data centers. The goal is to deliver a completely consistent hybrid cloud platform regardless of where organizations choose to deploy their applications. In this column, targeted at IT channel partners, VARs, service providers and their SMB customers, I outline the basics of Azure Stack, the hybrid cloud vision and customer demand fueling it, where it fits and the business opportunities.

As I previously discussed when Azure Stack was unveiled,

Microsoft’s strategy to continue this growth is an all-in bet on hybrid cloud, which it sees addressing business concerns like infrastructure, application and transaction latency, data sovereignty and control regulations, and needs for customized infrastructure that can’t be met in a public, shared-services environment. Its solution, via Azure Stack, is to bring Azure inside the enterprise. Azure Stack provides full private-public cloud compatibility for developers and ISVs (common APIs and automation tools), business analysts (consistent design patterns and service elements) and IT (the same management console, admin constructs and access controls). This is possible because Azure Stack uses the same code base as public Azure: the internal differences primarily occur at the hardware level to account for idiosyncrasies of Azure’s much larger scale and occasional use of one-off, non-commercial hardware.


Since public Azure is designed for warehouse-scale data centers and is typically deployed on clusters of 20 racks with 1,000 nodes, scaling the design down to a size usable by most organizations is challenging. Although the Azure Stack preview (aka POC) will run on a single beefy machine, the general release will require a minimum deployment of four nodes. This focus on hardware underscores the minimum level of cloud capacity an organization needs before Azure Stack makes sense. Organizations not running at least 200 simultaneous VMs and prepared to spend over $100K on new hardware need not apply. For partners this means Azure Stack must be positioned for large and medium enterprises: SMBs are best served either evolving to all public cloud or using an MSP for private workloads. That said, Azure Stack should be a compelling option for large enough Microsoft shops, particularly those already using or experimenting with Azure public cloud.

Perhaps a more intriguing opportunity for partners, VARs and SPs is targeting specific markets, whether industry verticals, underserved geographies or niche use cases, with tailored services built on Azure Stack as the foundation for their own set of cloud services. See the column for details.

In sum, Azure Stack holds promise for customers that have firmly committed to a true hybrid-cloud architecture, understand what that entails, and want the flexibility to easily spread workloads across both private and public infrastructure. That said, it will be most appealing to those that have built their private infrastructures on Windows Server, Microsoft System Center and Hyper-V, not vSphere shops.

Customers already using Azure IaaS and PaaS will love Azure Stack since there’s no learning curve, and applications built for public Azure can seamlessly move to a private cloud without change.

Hyperconverged Appliances Are The Next Front For Control Of Enterprise Data Centers

By | February 21, 2016

A version of this column was originally published on Diginomica as EMC-VMware, Nutanix Battle For Enterprise Virtualization Platform Supremacy


Vendors and analysts regularly wax enthusiastic about the cloud, and for good reason when you consider the phenomenal growth rates at AWS and Microsoft Azure. Yet most enterprise workloads still run on internal systems. Indeed, spending on public cloud remains but a tiny fraction of a $3+ trillion IT market. While cloud infrastructure may represent the future of IT, with a mere $7 billion spent on “true” private cloud, i.e. internal systems with self-service provisioning of shared, metered compute, network, storage and application services, it seems obvious that when 77% of recent survey respondents claim they have an internal cloud, what they’re really talking about is a virtual server farm most likely running VMware ESXi, not anything that remotely resembles AWS. That’s why the latest product announcements from the EMC-VMware VCE consortium and pesky hyperconverged competitor Nutanix are so significant: they represent the battle for customers and mindshare as enterprises build the foundation for next-generation VM infrastructure.

Nutanix arguably invented a market and product category with its scale-out, hyperconverged appliances combining compute, storage, hypervisor and virtualization management software in a convenient, integrated package. While Nutanix has plenty of smaller competitors, it eventually attracted the attention of enterprise virtualization’s big kahuna when VMware introduced the EVO:RAIL product line in 2014, joined later by Federation partner EMC with its VSPEX Blue products last winter. Both promised the same appliance-like simplicity, density and software integration as Nutanix, with the advantage of using the same management stack and vendor support structure many large enterprises were already comfortable with. It is a classic contrast between disruptive innovator and established incumbent. Where Nutanix had the lead on features, flexibility and depth of product line, VMware/EMC had the benefits of early access to native VMware software enhancements, established business relationships and a large, global sales force.


Where both stumbled was pricing. With typical prices for a 4-node box well into six figures, neither was particularly attractive to enterprises already loaded with racks of Dell or HP servers, and both were off limits to smaller organizations. Furthermore, because the platforms were relatively new, the evolution of hyperconverged products’ performance, features and scalability was unknown.

In separate, almost simultaneous, but almost certainly not coincidental announcements, VCE and Nutanix tackled each of these shortcomings. Nutanix led with a major software update that it claims “delivers up to 4x performance improvement for any workload with no additional hardware or software license” via 25 new software features. Although typical improvements are more like 2-3x, it’s still an impressive achievement, particularly when you consider that a two-box, 8-node cluster of the all-flash NX-9000 product can now deliver up to a million IOPS, a figure once the domain of 7-figure all-flash disk arrays.

Source: Nutanix App Mobility Datasheet

The more impressive half of the Nutanix announcement, particularly for enterprises using public clouds or multiple Nutanix sites for production workloads, is news that the Acropolis App Mobility Fabric can switch workloads from vSphere to the native Acropolis hypervisor “in minutes with minimal disruption and risk”, a feature that allows automatic failover and DR from a site running vSphere to another that doesn’t. It’s significant since such hypervisor agnosticism not only threatens VMware’s hegemony by simplifying workload migration from native vSphere to Nutanix clusters, but conceivably means the ability to migrate even complex VMware-based applications to AWS or Google Cloud using software from Nutanix partner Ravello Systems. Ravello software allows nesting one hypervisor, in this case ESXi or the Nutanix Community Edition, inside another, for example Xen or KVM running on AWS or Google Cloud. However, by first migrating on-premises ESXi instances to the Nutanix hypervisor, organizations avoid the need to install VMware software or buy licenses before spinning up VMs on the public cloud, saving time and money. Ravello estimates it can provision workloads on AWS or Google using the Nutanix hypervisor “for less than $1 per hour.”

VCE VxRail Details

Where Nutanix addressed the price/performance shortcomings of hyperconverged systems by improving the denominator, EMC-VMware, aka VCE, squarely targets the numerator. In a briefing, VCE President and former head of EMC’s Systems Engineering Chad Sakac said the company has learned that the hyperconverged market is quite sensitive to entry-level pricing and that its existing systems (e.g. vBlocks, VxRACK, EVO:RAIL) were much too large, expensive and inflexible. In rebooting its hyperconverged line, Sakac said VCE focused on four parameters:

  • Pricing: By starting small and scaling up, VxRail at $60K is less than half the price of entry EVO:RAIL products at $140-200K.
  • Product: Optimized around flash storage (although available in both all-flash or hybrid configurations), VxRail adds storage features enterprises demand like deduplication, compression, erasure coding, remote replication and cloud object storage.
  • Positioning: Forget the Swiss Army Knife approach and instead focus small hyperconverged systems on SMBs, departments within large enterprises and distributed, scale out workloads and not replacing large, integrated converged systems like vBlocks for traditional large enterprise workloads. As Sakac puts it, “Anyone who says that one converged or hyper-converged offer can cover every use case is (IMO) as high as a kite, or suffering from ‘single product delusion’.”
  • Packaging: Appliance simplicity with a single sales and support channel for the entire hardware and software stack.


The result is VxRail, a hyperconverged appliance optimized for vSphere users comfortable using server-side storage (in the form of the just-refreshed VSAN 6.2) instead of a central SAN array. Using the same 4-node, 2U configuration popularized by Nutanix, VxRail packs up to 20 cores, 512GB RAM and 10TB of hybrid (flash cache) storage into each node, with the ability to scale up to 16 boxes (64 nodes) per managed cluster.
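To put those specs in perspective, here’s some back-of-the-envelope math (a sketch using the per-node maximums cited above; actual orderable configurations will vary):

```python
# Rough VxRail cluster capacity at the published per-node maximums.
# Assumes the top-end configuration cited above; real configs vary by model.
CORES_PER_NODE = 20
RAM_GB_PER_NODE = 512
STORAGE_TB_PER_NODE = 10
NODES_PER_BOX = 4
MAX_BOXES = 16

nodes = NODES_PER_BOX * MAX_BOXES              # 64 nodes per managed cluster
print(nodes * CORES_PER_NODE, "cores")         # 1,280 cores
print(nodes * RAM_GB_PER_NODE / 1024, "TB RAM")   # 32 TB of RAM
print(nodes * STORAGE_TB_PER_NODE, "TB storage")  # 640 TB of hybrid storage
```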

My Take

Read between the lines of the VxRail PR blitz and it’s clear EMC and VMware have Nutanix, among others, in their sights. Indeed, the slide deck used to brief analysts on the product takes a shot at someone that sure resembles Nutanix: a chart of the “most widely deployed hyper-converged solution in the market” shows VMware passing the “#2 HCI Vendor” last year and touts 200% YoY revenue growth in Q4’15.


Source: VCE

Nutanix and VCE are chasing a multi-billion dollar opportunity. According to IDC, the market for hyperconverged systems, which it defines as products that “collapse core storage and compute functionality into a single, highly virtualized solution”, is growing at over 150% annually with a $1.1B run rate. Sakac says that VCE exited 2015 with a $3+B run rate, although the bulk of its business is still in converged (integrated) infrastructure using external storage arrays. Nutanix booked $241M last fiscal year, running at about $350M annually as of its October 2015 quarter, meaning it had about 25% of the hyperconverged market using IDC’s totals. The remaining 75%, plus triple-digit growth, leaves plenty for VCE, Nutanix and others to fight over.

The significance of the competition over hyperconverged is less about the hardware sales and more about the platform. VCE wants to serve VMware customers and keep them bound to its virtualization-cum-cloud stack. While Nutanix addresses those same VMware users, it is less concerned about keeping them locked into a specific cloud stack as long as they continue using the Nutanix platform: whether the workload runs in vSphere/vCloud, Xen, KVM (OpenStack), or potentially even Azure Stack is less material. It will be interesting to watch how customers react and whether they choose to optimize hardware around an existing, familiar software platform or prefer hardware flexible enough to host their cloud platform of choice.

Making IT Safe For Innovation Using A Bimodal Organization

By | February 20, 2016

A version of this article first appeared on TechTarget SearchCIO as BiModal IT Opens Up Opportunities For Innovation


By establishing a framework for IT strategies, BiModal IT can help organizations act more like a startup and tackle some high-risk, high-reward digital business moonshots

It’s easy to think of BiModal IT as little more than a way to segregate vastly different parts of an IT organization into two mutually exclusive sections: Mode 1 is for the legacy, ‘keep the lights on’ activities while Mode 2 is where the exciting, new cloud work happens. As I discuss in an upcoming SearchCIO Webinar, this is a simplistic view on several levels: it understates the amount of innovation and service improvement that needs to happen in Mode 1 business-critical systems, creates a false dichotomy concerning cloud usage within IT and romanticizes the nature of Mode 2 work. The Webinar dispels these and other myths about Bimodal by emphasizing that most Mode 1 applications aren’t on life support and should be rearchitected for the cloud to improve efficiency, scalability and resilience, all of which leaves many places for important, rewarding work. Still, Mode 2 is where IT’s experimentation and risk taking should occur and that’s our focus here: the Mode 2 opportunities for IT rejuvenation, modernization and innovation.

Mode 2: The Place for Experimentation

The impetus for the BiModal structure is to create space within a traditionally conservative, methodical, risk-averse and yes, bureaucratic IT organization for some disruptive, experimental, startup-like activities. However, the goal isn’t to attract a bunch of scruffy, young, 10x developers that can code mobile apps and cloud backends while binge-watching Netflix, although that might be a beneficial side effect. Instead, Mode 2 provides the structure, or lack thereof, for IT and developers to learn, inculcate and perfect the behaviors and technologies required to attack fast-changing prospects for digital business through new applications and services.

The following are the key areas of Mode 2 development and experimentation:

  • Cloud Application and Infrastructure Design: All of IT is becoming service-oriented, virtualized and cloud-like; however, Mode 2 is where organizations have the freedom to start with a blank slate, unencumbered by legacy requirements, and build cloud-native applications. This is a far cry from deploying conventional, monolithic code on some AWS instances with the data sitting in S3; rather, it requires designing systems around high-level platform services and making them more granular, distributed, scalable and stateless.
    Popular cloud services including AWS, Azure and Google keep adding to their portfolio of application services with PaaS (platform-as-a-service) features like mobile backends that include data synchronization, user profile management and database integration, or advanced data services like predictive analytics using Hadoop/MapReduce, programmable data pipelines and real-time stream processing. These platform services obviate the need for DIY implementation of complex software; however, using them properly requires thinking about application design in a different way, like decomposing tasks into microservices instead of macro systems, and using stateless, RESTful APIs instead of persistent network connections (a minimal sketch appears below this list).
  • Agile Development Processes and DevOps Collaboration: Today’s application lifecycle is measured in weeks, not years, meaning neither customers nor employees have the patience for a lengthy software development process. Organizations that are too slow to capitalize on an emerging digital business opportunity lose out to competitors that move quickly. But this requires using agile development processes, fostering close cooperation between developers and IT operations, heavily instrumenting applications to measure performance, feature usage and errors, and employing continuous delivery processes that facilitate a steady stream of bug fixes and feature enhancements.
    These result in significant cultural, governance and process changes that would disrupt a well-functioning Mode 1 organization, but are perfect for Mode 2 experimentation. Indeed, Mode 2 is the perfect place to break down silos within IT and between IT and business units by building multidisciplinary, project-focused teams that are unencumbered by sclerotic bureaucracy and empowered to make quick decisions.
  • Digital Business Platforms: Mode 2 is a place to incorporate entirely new application platforms. Today’s most compelling clients are mobile, yet mobile development is sufficiently different that the learning curve is best tackled on new projects that don’t require integrating legacy code or achieving mission critical reliability. The same could be said for other projects requiring entirely new platforms like IoT and big data. By combining smart devices with backend data services and compelling client applications, IoT promises to unleash new revenue-producing and cost-saving services. Likewise, data-driven analytics and dashboards improve executive decision making and expose trends, preferences and behaviors that can increase customer satisfaction and inspire new products and services.

Source: Gartner
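To make the contrast concrete, here is a minimal Python sketch of that stateless, RESTful microservice style, with Flask standing in for whatever framework a team actually uses; the resource names are illustrative, and a real service would keep state in a shared backing store rather than in process memory:

```python
# Minimal sketch of a stateless RESTful microservice (illustrative names).
# No session state lives in the web tier, so any number of identical
# instances can sit behind a load balancer.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a shared backing service (e.g. DynamoDB, Redis); in a real
# deployment this would be a client for that service, not a local dict.
ORDERS = {}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

@app.route("/orders/<order_id>", methods=["PUT"])
def put_order(order_id):
    # Idempotent PUT: the full resource travels with each request,
    # so nothing needs to be remembered between calls.
    ORDERS[order_id] = request.get_json()
    return jsonify(ORDERS[order_id]), 200

if __name__ == "__main__":
    app.run(port=8080)
```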

Ideas for new business are fraught with risk, with unknown prospects of success. The key to exploiting them is to minimize the cost of failure (more accurately, of learning) by attacking opportunities quickly, iterating aggressively as new information comes in and being unafraid to kill a failing project and move on. Mode 2 IT provides a structure where this can happen without threatening the mission critical IT activities that keep a business running.

Replacing your Data Center with IaaS: Opportune Scenarios

By | February 16, 2016

A version of this article originally appeared on TechTarget SearchDataCenter as These IaaS examples show data centers can share the load


You might not be ready to go all in on AWS like GE and Netflix, but it’s both feasible and advisable to start moving some applications online

The emergence of rentable, online software and infrastructure, aka The Cloud, has fueled one of the most polarizing debates in IT history. Fortunately, we’ve moved past the black-versus-white phase, as the stark intransigence of corporate cloud deniers has waned while the smug assertiveness of cloud triumphalists has matured, leaving the cloud discussion much more nuanced. Of course, there are still cases, like cash-starved, tech-savvy startups, where IaaS is the only option. Conversely, there are situations, like legacy financial systems or applications with highly regulated and sensitive data controls, that are best left on internal infrastructure.

Enterprise applications in every organization invariably have a wide variety of characteristics. Some have variable and unpredictable workloads, others are new and built using a cloud-friendly, scale-out architecture, while still others have become a commodity that’s widely (and less expensively) available as a SaaS product. All three categories are great candidates for the cloud. Conversely, most organizations also rely on a plethora of largely static legacy applications, often in maintenance mode and sometimes highly customized, that run business-critical systems where the risk-reward of changing a stable, working implementation is far too high to consider a cloud deployment. The trick is figuring out which is which and determining where best to replace your internal data center with IaaS. Few organizations will want to emulate GE’s aggressiveness in reducing its data center footprint by 90% by moving to AWS, but there are almost certainly systems in every IT facility that are best run on IaaS.

Source: SuperNAP

Bimodal and the Legacy Versus Greenfield Dichotomy

IaaS can be appropriate for both legacy and greenfield applications, but generally the latter group makes a more natural target given the ability to design from scratch with the cloud in mind. This means using a distributed, loosely-coupled microservices architecture that exploits not only basic IaaS compute and storage, but higher-level services like load balancing, auto scaling, content caching, Hadoop/MapReduce data processing, Spark or equivalent data analytics, mobile backends and others. Whether or not you agree with the concept of Bimodal IT, new applications should be built borrowing concepts from mobile app startups by using Agile development methodologies, multidisciplinary specialists and rapid release/update cycles. Due to the ease and low cost of deployment, along with the ability to rapidly add new IaaS capacity and services, these projects should start and likely remain in a public cloud.

Although legacy infrastructure is often best left alone, that doesn’t mean there aren’t situations where IaaS can add value, whether via increased flexibility, reduced CapEx upgrade expenditures, better redundancy and availability or new capabilities previously unavailable on legacy systems. These will generally entail hybrid architectures where legacy infrastructure is augmented with IaaS; however, in cases where the application is relatively simple or tightly integrated, with few dependencies on internal data sources, a ‘forklift upgrade’ to IaaS is feasible and perhaps preferable.

Sample Scenarios

There’s no prescriptive formula for when and where to use IaaS, but the following scenarios illustrate situations where moving private data center infrastructure to the cloud can pay dividends:

  • Scaling an Application or Service Used by External Customers: Two key benefits of IaaS are easy, rapid scalability and global distribution. Both can be invaluable for smaller companies trying to enhance and scale an existing customer-facing application delivered out of an internal data center. For example, a regional online travel agent wanted to grow into new, emerging markets in Asia and the Middle East, but its transaction processing application lived in a New Zealand data center. Since the application was built on SQL Server, it was relatively easy to ‘lift and shift’ to Azure. Moving to and properly scaling cloud infrastructure tripled performance, with an additional 50% gain projected from future upgrades, and allowed tapping new customers from nearby regional cloud locations.
  • Consumer-facing Marketing Content: Every company has a website, but for consumer products manufacturers and retailers these are often the most important way to reach customers and influence buying decisions. Today’s sites must be dynamic, engaging and snappy no matter where the customer is located: qualities that make them good candidates for IaaS deployment. Many use popular content management systems (CMS) like WordPress, Joomla or Drupal along with custom code, all of which work well on cloud infrastructure, are easily deployed using prebuilt packages or recipes and easily scaled. The value of IaaS becomes apparent when running a marketing or sales campaign that is too successful. It’s next to impossible to scale internal systems fast enough to meet unexpected demand, whereas IaaS apps can be quickly redeployed to larger instances or new regions, or accelerated with CDN and other services. For example, the AWS content and media serving reference architecture uses its CDN (CloudFront), distributed DNS (Route 53) and S3 object store to accelerate distribution of static content and distribute the workload.
  • Disaster Recovery, Backup Sites: A classic use of IaaS is for DR and backup locations. Indeed, in many cases the cloud can provide better capability than that currently available. Smaller organizations might not have a secondary location at all, in which case IaaS could save the company in a cataclysmic event. Other times, the secondary site is outfitted with old, surplus equipment that’s under-sized, seldom tested and end-of-life, on the theory that something is better than nothing. Here, IaaS can provide on-demand capacity equivalent to that in the main data center with no CapEx (see the sketch following this list). Even large enterprises with multiple regional data centers can exploit IaaS to provide a secondary location in each region and avoid costly, performance-sapping failovers to distant facilities.
  • Test and Development Infrastructure: The other archetypal use of IaaS is for test and development. R&D and even IT engineering organizations often have sizable investments in test hardware to accommodate code builds, beta and stress testing and product staging. All of these can be replaced and improved upon by IaaS, which can allow each developer to have a private, virtual sandbox of servers, storage and networks without the need for CapEx investment or system managers.
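To illustrate how little machinery the DR scenario requires, here is a sketch using AWS’s boto3 Python SDK; the AMI ID, instance type and counts are placeholders for values that would come from pre-staged, regularly tested machine images:

```python
# Sketch: spin up standby capacity on demand during a failover or drill.
# The AMI ID, instance type and counts are placeholders; in practice these
# come from pre-built, regularly tested images of production servers.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder image of a production server
    InstanceType="m4.large",
    MinCount=4,               # match the capacity of the primary site
    MaxCount=4,
)
for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```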


Recommendations, Caveats

Moving infrastructure and applications to IaaS is a multivariate decision: it’s not just a question of cost, which is often the least important factor. Organizations must consider the complexity of the move, including disruption to existing operations, the need (if any) for application modifications and IT’s willingness and capacity to learn and incorporate new management portals and processes. Conversely, organizations shouldn’t underestimate the unique benefits of IaaS, including easy, rapid and low-cost scalability, offloading of routine system administration (updates, patches), CapEx avoidance and usage-based pricing models. It’s a nuanced problem, but we feel most organizations will find many cases where an IaaS move is far superior to the status quo.

Building App UIs: A Tool Walkthrough

By | February 15, 2016

A version of this column originally appeared on TechTarget SearchCloudApplications as The stark contrast between UI design software and manual coding


What can you expect when using higher-level UI development and prototyping software versus manual coding? Think of the difference between Visio and a pencil. A walk through the UI development lifecycle

The first (and often last) impression for client applications, whether on Windows, the Web or a mobile device, is the user interface. Yet designing an effective one is far more complex than tying some actions to a button or menu pick. That might have worked in the era of Visual Basic, but not for today’s Web and mobile app UIs. Fortunately, the nexus of cloud backend services, application frameworks and UI prototyping tools gives app developers more time to focus on designing the user experience by providing efficient ways of translating ideas into contextual, visual expressions. The design process can easily get complex, but the essential steps involve planning an application’s various screens, defining the relationships and navigation flows between them, picking a visual design pattern (tabbed, modal, etc.) and designing the elements of each screen.

The exact workflow will vary depending on the type of project, but upon choosing a fundamental application architecture or design pattern, the process starts by prototyping the UI as a wireframe diagram using a graphical editor before translating design elements to code. The code conversion process can be either manual and laborious, if starting from scratch, or assisted and iterative when using an application framework with templates such as Bootstrap, Foundation or Apache Cordova (PhoneGap) for HTML5, CSS and Javascript apps. Our goal here isn’t to provide a step-by-step getting started guide for a particular workflow and platform, but an overview of the toolchain used to create attractive, engaging apps.

Choose a Design Pattern and Backend Services, Then Prototype UI

Most UIs, whether for PC, Web or mobile apps, still use the model-view-controller (MVC) design pattern originally introduced way back in the 1980s with the Smalltalk language. Condensing a CompSci textbook’s worth of material into a paragraph: the three main components of an MVC app are the model, which manages data and app behavior; the view, which handles the display of information; and the controller, which captures user input and sends commands to the model. When using cloud backends, particularly PaaS services like the Azure Mobile App and Web Services or AWS Mobile Hub, the model runs in the cloud while the client UI handles user input and display, using APIs to remotely execute application logic, store data and user state, authenticate users and pull information from remote databases. Such a cloud-based backend design obviates the need to build server applications and infrastructure, allowing developers to focus on the client code and UI.
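As a compressed illustration of that split, consider the following Python sketch, where the model fronts a hypothetical cloud API (the URL and resource names are invented for the example):

```python
# Compressed MVC sketch with a cloud-hosted model (hypothetical endpoint).
# The model proxies a remote API, the view renders, and the controller
# wires user input to model commands, mirroring the split described above.
import requests

API = "https://api.example.com/v1"   # hypothetical cloud backend

class TaskModel:                      # model: data and app behavior (remote)
    def list_tasks(self):
        return requests.get(API + "/tasks").json()

    def add_task(self, title):
        return requests.post(API + "/tasks", json={"title": title}).json()

class TaskView:                       # view: display of information
    def render(self, tasks):
        for t in tasks:
            print("[{}] {}".format("x" if t.get("done") else " ", t["title"]))

class TaskController:                 # controller: capture input, drive model
    def __init__(self):
        self.model, self.view = TaskModel(), TaskView()

    def handle_add(self, title):
        self.model.add_task(title)
        self.view.render(self.model.list_tasks())
```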

Mobile and Web apps typically use pages or screens as the UI paradigm with app logic and controls to allow navigation between them and to trigger backend processing. Here, each screen represents a logical portion of app functionality that contains information, controls and navigation elements. The first step in UI design is planning the navigation flow using something like a flow chart to illustrate screens and transitions.


Source: Keynotopia

Detailing each screen is where wireframes come in. Wireframes are simple graphical blueprints of the screen design, including the text and image layout, controls and other graphical elements. Although wireframes can still be drawn on the back of a napkin, most developers now prefer using a graphical editor. Sophisticated software like Photoshop is often used by professional UI designers, but DIY developers should stick with a vector drawing tool like Visio (Windows), Omnigraffle (Mac), or even slide software like PowerPoint or Keynote. Indeed, Microsoft has a Visual Studio add-in providing PowerPoint integration. One nice trick is using the hyperlink feature in slide software to indicate navigation between pages. As app designer Luke Wroblewski points out, presentation software is great for showing an app’s narrative flow.

“Because Keynote is a presentation tool, it’s suited for designing Web apps where you need to show process flows and interactions. And, the animation tools make it easy to show how things like transitions and rich interactions might actually work. Combine all these and you’ve got a tool that allows you to quickly communicate how something will look and work!”

Source: Android developer documentation.

Translating Drawings to Code

Translating wireframe mockups and navigation flows to actual code, whether HTML5 and Javascript (e.g. AngularJS) or Java and Objective-C for native mobile device SDKs, is typically a manual process of using interface templates that implement the various UI features of your prototype. Although there are tools with integrated code generators like App Builder, Kony Studio, Dreamweaver and others, these can be expensive (in both money and time, given their own learning curves) and limiting (they only support certain types of apps and languages, plus your design may not fit anything in their template library).

A more common approach is to use code templates from an application framework. For example, the Bootstrap framework for Web apps has an extensive template library with a wide variety of layouts. Choose one that best resembles a particular wireframe and you’ll have all the code, including base HTML, CSS, fonts and Javascript, for a complex page that’s ready to tweak into the exact form required. The coding itself is made even easier by using an IDE like Eclipse, Visual Studio, Webstorm/IntelliJ IDEA, Xcode and others that include a syntax-aware editor with code hinting and completion, real-time code inspection and error correction, hierarchical project, file and class navigation and integration with code repositories (e.g. GitHub) and build tools.

Wrapping Up: Testing the UI and Linking Backend Services

Debugging and refining the UI requires an emulation environment, either a browser with development tools like the Chrome Developer tools for browser-based apps (along with the Ripple Emulator for mobile Web emulation) or a mobile OS emulator for native iOS and Android app testing. From there, it’s a matter of repeating the standard code-build-test-debug cycle until you’re satisfied with the results.


The details of wiring in backend cloud services are specific to the PaaS of choice. For example, when using the Azure App Service, it starts by using Azure’s management portal to create the necessary services (e.g. database, server, etc.) and then downloading a customized code bundle for your platform of choice. If building an iOS app, you’ll get an Xcode project with all the code and modules preconfigured to connect to your backend and ready to run in the Xcode iOS simulator. The sample code provides the necessary hooks to the relevant Azure services: platform-specific service calls that can easily be integrated into the client UI.

The combination of graphical UI prototyping software, code frameworks, intelligent, framework-aware IDEs and cloud backend app services has greatly simplified the process of building elegant, innovative and effective Web and mobile apps. These apps exploit the nexus of powerful client GUIs and cloud services to enable clients lightweight enough to work on a smartphone with application logic sophisticated enough to tackle real world problems.


The many disciplines involved in user experience design

Source: envis precisely on visual.ly

An Analysis of iPhone “Error 53”: Poorly Implemented Protection of a Secure System

By | February 14, 2016

This article originally appeared in Diginomica as iPhone Error 53 – a study in bungled user experience, but great security


Apple is one of the most polarizing tech companies around, attracting both loyal supporters and equally strident critics whenever it does something remotely newsworthy. The latest dustup concerns an ambiguous, but apparently fatal, error that some iPhone users report when trying to upgrade to the latest system version, iOS 9. According to a report in the Guardian publicizing the phenomenon,

“The issue appears to affect handsets where the home button, which has touch ID fingerprint recognition built-in, has been repaired by a ‘non-official’ company or individual. It has also reportedly affected customers whose phone has been damaged but who have been able to carry on using it without the need for a repair.”

Upon installing iOS 9, these users faced a wholly nondescript message reading, “The iPhone ‘iPhone’ could not be restored. An unknown error occurred (53).” Worse yet, there’s no easy way to get past it: the phone is seemingly bricked, along with any unique, unbacked-up data.


Why Would Apple Intentionally Brick a Phone?

Of course, there’s much more to the story, and the details are traceable to the iPhone’s sophisticated hardware-based security. Indeed, this is a case where Apple can be praised for doing the right, and perhaps only reasonable, thing in the worst possible way. Although Apple hasn’t confirmed causality, it turns out the error typically (always?) occurs on phones where the Touch ID home button has been replaced with an aftermarket, non-Apple-authorized facsimile. This may seem like an arbitrarily punitive response by a greedy company looking to maximize repair revenues, but when one considers the security function of Touch ID, it’s entirely logical and a virtual requirement for Apple to assure the integrity of the hardware-based biometric security system that forms the foundation of trust for its Apple Pay mobile payment platform.

Understanding why requires looking at the details of Touch ID’s implementation. The home button scanner takes extremely high-resolution pictures of a fingerprint, including “minor variations in ridge direction caused by pores and edge structures”. As Apple describes,

“It then creates a mathematical representation of your fingerprint and compares this to your enrolled fingerprint data to identify a match and unlock your device. Touch ID will incrementally add new sections of your fingerprint to your enrolled fingerprint data to improve matching accuracy over time.”

Here is where the iPhone’s hardware security kicks in. Instead of storing this mathematical representation of your fingerprint (which to us sounds like a cryptographic hash) online in iCloud like a password, Apple uses dedicated memory, called the Secure Enclave, built into each iPhone A-Series SoC.

“Touch ID doesn’t store any images of your fingerprint. It stores only a mathematical representation of your fingerprint. It isn’t possible for someone to reverse engineer your actual fingerprint image from this mathematical representation. The chip in your device also includes an advanced security architecture called the Secure Enclave which was developed to protect passcode and fingerprint data. Fingerprint data is encrypted and protected with a key available only to the Secure Enclave. Fingerprint data is used only by the Secure Enclave to verify that your fingerprint matches the enrolled fingerprint data. The Secure Enclave is walled off from the rest of the chip and the rest of iOS. Therefore, iOS and other apps never access your fingerprint data, it’s never stored on Apple servers, and it’s never backed up to iCloud or anywhere else. Only Touch ID uses it, and it can’t be used to match against other fingerprint databases.”

Source: Unmitigated Risk (https://unmitigatedrisk.com/?p=389)

This explains why Apple effectively bans third-party fingerprint scanners on the iPhone. There’s nothing but Apple’s iOS bootloader preventing a rogue home button with embedded firmware from executing a Man-in-the-Middle (MitM) attack by creating a copy of the fingerprint representation before passing it on to the Secure Enclave. Of course, the attackers would need to reverse engineer Apple’s hash function (“mathematical representation”), no doubt a daunting task, but with enough trial and error (remember, the Secure Enclave will have the valid copy of the hash output) it’s conceivable. Having the digital version of one’s print would allow unlocking all kinds of things on the phone, including Apple Pay.

Mobile Payments: A Matter of Trust

Perhaps the most compelling feature of Apple Pay is the fact that it doesn’t store, nor use your actual credit or debit card numbers when making a transaction. According to Apple,

“When you add your card, a unique Device Account Number is assigned, encrypted, and securely stored in the Secure Element … When you make a purchase, the Device Account Number, along with a transaction-specific dynamic security code, is used to process your payment. So your actual credit or debit card numbers are never shared by Apple with merchants or transmitted with payment. And unlike credit cards, on iPhone and iPad every payment requires Touch ID or a passcode, and Apple Watch must be unlocked — so only you can make payments from your device.”

Should a rogue Touch ID sensor be able to replicate the digital fingerprint model (hash), it could allow attackers to compromise the entire Apple Pay reservoir of device account numbers and create transactions unbeknownst to the iPhone owner. Since mobile e-comm sites and apps are now integrating Apple Pay into their checkout process, it would be relatively easy to remotely monetize compromised accounts without getting near an NFC PoS terminal. In this context, an Apple representative’s statement to the Guardian sounds much less capricious,

“When iPhone is serviced by an authorized Apple service provider or Apple retail store for changes that affect the touch ID sensor, the pairing [between device and sensor] is re-validated. This check ensures the device and the iOS features related to touch ID remain secure. Without this unique pairing, a malicious touch ID sensor could be substituted, thereby gaining access to the secure enclave. When iOS detects that the pairing fails, touch ID, including Apple Pay, is disabled so the device remains secure.”

My Take

Apple Pay, used by an estimated 10-20% of users with capable devices and supported by millions of stores, is North America’s most successful mobile payment platform. Yet adoption has been slow compared to other Apple services due in part to people’s unfamiliarity with and resulting distrust of the technology. Aside from convenience, the fact that the system is far more secure than traditional payment methods is undoubtedly a key factor for many early adopters. Their trust in Apple’s security would be instantly undone if the Touch ID-Apple Pay system were compromised by rogue third-party hardware, damage that would jeopardize its roll out in China and other large markets.

We applaud Apple for doing the right thing to protect its security technology, but must chastise both its utter lack of communication about the necessity of authorized repairs for the Touch ID button assembly and the equally opaque error message presented to users should they use a third-party component. This is a classic case of nailing the product design but bungling the user experience, and it presents a teachable moment for other organizations implementing sophisticated technology: fail gracefully when users do the unexpected and don’t leave them in the dark when the unusual invariably happens.

Understanding IoT in AWS: A Primer

By | February 13, 2016

A version of this article originally appeared in TechTarget SearchAWS as AWS IoT platform connects devices to cloud services


Amazon wants to be the hub for sensor data, whether from industrial instrumentation or personal gadgets. A look at the AWS IoT platform

Millions upon millions of intelligent devices streaming information and waiting for commands pose the type of data and device management problem that seems tailor-made for the cloud. It’s hard to imagine many organizations having the scale of systems and communications infrastructure required to build a real-world IoT backend capable of handling the volume of messages and processing the resulting data in real time; consider that a modern aircraft engine might have 5,000 sensors generating gigabytes of data per second. Indeed, it’s an opportunity that’s not lost on the biggest IaaS providers, as both Amazon AWS and Microsoft Azure introduced IoT services in the past six months. We surveyed the new IoT options in a previous article, so this time we’ll take a closer look at the AWS IoT platform.

Announced at re:Invent 2015, AWS IoT is a suite of services designed to manage intelligent devices, whether industrial sensors or consumer wearables, and connect them to the broader AWS ecosystem, where the captured information stream can feed databases, trigger other AWS services and respond to commands from external applications.


The platform has five major components plus an SDK with libraries to connect, authenticate and register devices to the IoT portal. These are:

  • Device Gateway: A publish/subscribe message broker that facilitates secure, one-to-one and one-to-many communications between devices and AWS. It supports both HTTP, via a RESTful API, and MQTT. The latter is an OASIS standard designed as a lightweight, publish-subscribe protocol that is preferable for IoT devices due to its small code footprint, speed and low resource utilization. According to one set of tests, MQTT is much faster and more efficient, with less network overhead than HTTP, uses far less power (important for battery-powered devices) to transmit messages or maintain a connection and provides more reliable message delivery and retention. The gateway allows clients, both IoT devices and mobile apps, to receive command and control signals from the cloud and is capable of supporting billions of devices (a minimal publish sketch follows this list).
  • Authentication and Authorization: AWS IoT features strong authentication, incorporates fine-grained, policy-based authorization and uses secure communication channels. Each device needs a credential, typically an X.509 certificate or AWS key, to access the gateway message broker, and has a unique identity used to manage individual and group permissions within the system. Like other AWS services, IoT operates on the principle of least privilege, meaning IoT clients can only execute operations if specifically granted permission. All traffic to and from the service is encrypted over TLS with support for most major cipher suites.


  • Device Registry: The Registry is like an identity management system for devices, where they check in, are given a unique identifier and store metadata such as device attributes and capabilities. Typical metadata might include the type of data a particular sensor provides, e.g. temperature, pressure, position, the units, e.g. Fahrenheit, Celsius, psi, the manufacturer, firmware version and serial number. AWS doesn’t charge for using the Registry and metadata doesn’t expire as long as an entry is accessed or updated at least once every 7 years.
  • Device Shadows: Shadows are virtual representations of a device, recorded as JSON documents, that live in the cloud and are available whether a device is connected or not. They include data such as device state (both desired and reported), device metadata (e.g. sensor types), a client token (a unique ID), a document version (incremented every time the shadow information is updated) and a timestamp of the last message to AWS. The desired state is typically updated by IoT apps used to manage or control devices, while the reported state is data sent from the device. Applications interact with the Shadow, not the actual device, which enables proper operation whether the device is connected or not; an important consideration given the intermittent nature of IoT connectivity.
  • Rules Engine: The brains of AWS IoT, the Rules Engine is how IoT applications gather and process data and execute instructions. Like other data pipelines, it parses and analyzes incoming messages and triggers actions on other AWS services, including Lambda, Kinesis, S3, Machine Learning and DynamoDB, based on predefined criteria. It can also communicate with external devices or apps using Lambda, Kinesis and SNS (Simple Notification Service). The Rules Engine uses an SQL-like syntax (e.g. SELECT * FROM ‘things/sensors’ WHERE sensor = ‘temperature’) with functions for string manipulation, math operators, context-based helper functions, crypto support and metadata lookup (UUID, timestamp, etc.). Rules can also trigger the execution of Java, Node.js or Python code in AWS Lambda, allowing for the execution of arbitrarily complex operations.
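To make the flow concrete, here is a sketch of a device publishing a sensor reading to the gateway using the open-source paho-mqtt Python client; the endpoint and certificate paths are placeholders for account-specific values, and the topic matches the Rules Engine example above:

```python
# Sketch: publish a sensor reading to the AWS IoT device gateway over MQTT.
# The endpoint hostname and certificate paths are placeholders for the
# account-specific values issued when a device is registered.
import json
import ssl
import paho.mqtt.publish as publish

publish.single(
    topic="things/sensors",
    payload=json.dumps({"sensor": "temperature", "value": 72.4, "unit": "F"}),
    hostname="XXXXXXXX.iot.us-east-1.amazonaws.com",  # account-specific endpoint
    port=8883,                                        # MQTT over TLS
    tls={
        "ca_certs": "root-CA.crt",          # Amazon root CA certificate
        "certfile": "device.pem.crt",       # per-device X.509 certificate
        "keyfile": "device.private.key",    # device private key
        "tls_version": ssl.PROTOCOL_TLSv1_2,
    },
)
```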

Examples and Getting Started

AWS has 10 hardware partners, including Broadcom, Intel, Qualcomm and TI, with IoT Starter Kits that support the AWS SDK. These include microcontroller development boards, sensors and actuators and a copy of the SDK. Another option is the AWS IoT Button, a variant of the company’s Dash Button that can be used to trigger IoT workflows without writing device-specific embedded code. For example, a button press could launch a Lambda job that connects to Twilio and sends a text message to Dominos ordering your favorite pizza.
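As a sketch of how little code such a workflow needs, here is a hypothetical Lambda handler in Python that reacts to a button press by publishing a notification through SNS; the event field and topic ARN are assumptions, since the actual payload depends on how the button’s rule is configured:

```python
# Hypothetical Lambda handler for an IoT button press that sends a
# notification via SNS. The event field name and the topic ARN are
# assumptions, not confirmed details of the IoT Button payload.
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pizza-orders"  # placeholder

def lambda_handler(event, context):
    click = event.get("clickType", "SINGLE")   # assumed field name
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="IoT button pressed",
        Message="Button click ({}) received; place the usual order.".format(click),
    )
    return {"status": "ok"}
```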

AWS IoT was released to general availability in December and is available in four regions (two US, EU and APAC). The price is $5 per million messages (up to a 512-byte block of data) published to or delivered by the service; thus, a 900-byte payload counts as two messages. For example, if an organization has 100 sensors, each updating data every minute, that’s 4.32 million messages per month. If the Rules Engine sends each sensor reading to an external metering device and records it in a DynamoDB table, that’s another 4.32 million external and internal (within AWS) message deliveries each. Since messages within AWS are free, the total is 8.64 million billable messages for the month, or $43.20 (8.64 × $5). Note that the AWS Free Tier includes 250,000 IoT messages, so developers can do a lot of prototyping without incurring any charges.
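The arithmetic is easy to verify (a quick Python rendering of the example above):

```python
# Reproduce the pricing example: 100 sensors, one message per minute, with
# each reading also forwarded to an external device and written to DynamoDB.
PRICE_PER_MILLION = 5.00              # $5 per million messages (<= 512 bytes)
sensors, per_day = 100, 60 * 24

inbound = sensors * per_day * 30      # 4.32M messages published per month
external = sensors * per_day * 30     # 4.32M delivered to the external device
internal = sensors * per_day * 30     # DynamoDB writes: within AWS, so free

billable = inbound + external         # 8.64M billable messages
print("${:.2f}".format(billable / 1e6 * PRICE_PER_MILLION))  # $43.20
```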

Other innovative applications are showcased by the winners of the AWS IoT Mega Contest, such as this voice-controlled drone using an Amazon Echo and Raspberry Pi.


AWS IoT is a remarkable suite of services paired with an SDK supporting a variety of popular IoT hardware platforms. Since it’s hard to see most organizations duplicating anything of its sophistication and scale, we hope this overview inspires IT pros and developers to familiarize themselves with the details, dream up some creative business applications for cloud-aware intelligent devices and give it a try.

 

Behind The Scenes At Super Bowl 50: Impressive Network Technology

By | February 6, 2016

Levi’s Stadium, where the Broncos and Panthers will tangle in the Golden State’s golden anniversary Super Bowl, bills itself as the most technologically advanced stadium in the world. How could it be otherwise for the newest home to an NFL team right in the heart of Silicon Valley? Sitting just a mile from corporate sponsor Intel’s headquarters, with hundreds of other tech giants nearby, it’s only natural that the stadium is probably the best connected facility of its kind in the world. As I wrote last year, Levi’s is like a small data center masquerading as a football stadium, blanketed with wireless coverage for both fans and employees, including those all-important people on the sidelines.


For Super Bowl fans, all that technology means never missing a play and getting the same isolation shots and replays on the stadium smartphone app as viewers at home see on their 60-inch flat screens. Why bother squinting at the Jumbotron when you’re already holding an HD screen? Long cognizant of the ubiquity of HDTVs and 5-inch smartphones, the NFL’s goal isn’t to compete with broadcast media, but to give fans the best of both worlds: an exciting, unique live experience without sacrificing the intimacy and analysis of a TV production.

Read on for a complete look at how the fans attending Super Bowl 50 will stay connected and the technology behind the scenes that ensures they won’t be looking at spinning wait cursors when they’re trying to share sights from the game on Instagram.


In sum, Super Bowl 50 promises to be the most connected, app-ready game in history, but making it work requires a lot of technology and engineering. If you are lucky enough to have tickets to the game, take a moment to give thanks to all the engineers, app developers and technicians who made your 5-bar connections possible. You’ll have plenty of time for gratitude as you’re waiting for the train, bus or car taking you back to the hotel.