Monthly Archives: December 2015

Nutanix IPO Analysis: Suspect Timing, Significant Risks, But Thriving Business

By | December 30, 2015

Coming a couple of days before Christmas, the timing of Nutanix's IPO filing, a document that's undoubtedly been in preparation for months, was more than a little suspect. What was the company trying to hide while everyone else was distracted by holiday festivities? As I detail in this column, a thorough analysis doesn't uncover any smoking guns or even smouldering embers, but it does show a company growing very fast, burning lots of cash, facing some serious systemic market risks and enjoying plenty of opportunities.


Nutanix displays its hardware at VMworld 2015 | Source: author

The highlights of any S-1 are the business details and financial data. Nutanix reveals that it has 2,100 customers including 226 Global 2000 enterprises such as the U.S. DoD, Best Buy, Jabil Circuit, Kellogg, Nasdaq, Nordstrom and Toyota of North America. Nutanix believes this is but "a small portion of our potential end-customer base" that it can grow by increasing spending on sales and marketing, exploiting its network of channel partners and focusing on international sales, particularly to large, Global 2000 enterprises. Indeed, the S-1 shows a huge increase in sales and marketing expense, which now accounts for 2.7% of sales, up more than tenfold in absolute terms and from 2% of sales just two years ago.


One of the most revealing parts of an IPO filing is where companies must disclose the risks to future success. Much of this is generic, legal boilerplate that applies to any business and situation, however Nutanix does confess some unique potential roadblocks. Of note is the pending Dell-EMC merger given that Dell is a key outlet for Nutanix products. I detail this and other challenges facing Nutanix in the full column.

Concerning issues I didn't cover, but that are noted in this annotated copy of the Nutanix S-1, include:

  • The dual-class share structure in which existing executives and pre-IPO investors are granted preferential Class B shares with 10-times the voting weight of the Class A shares offered to the public. This obviously greatly disadvantages public investors and renders shareholder meetings and governance effectively meaningless since insiders will control the vast majority of votes. Although this structure isn't uncommon among tech firms, that doesn't make it fair or wise. As activist investor Carl Icahn is fond of pointing out (see this commencement address for the perfect distillation of his thoughts), too many corporations end up being mismanaged through such crony capitalism since maintaining power, perks and prestige becomes more important than business and stock performance.
  • Nutanix's non-product-related expenses, i.e. R&D, sales and overhead, have been rising almost as fast as revenue. While rising expenses are expected when companies are in growth mode, Nutanix must exercise much better expense control if it hopes to reach breakeven any time soon. The following are 3-year CAGRs:
    • [Chart: Nutanix expense growth, 3-year CAGRs]
  • Given the cost of its hardware, the focus on Global 2000 customers is logical, however these are the same companies that already have heavy investments in traditional server and storage infrastructure. The key benefit to Nutanix's converged approach is simplicity, an attribute that SMBs with limited IT expertise particularly value. Yet Nutanix is often priced out of these deals. Relying on channel partners to sell and support the product provides Nutanix an excellent entry point to these businesses, however without a greater variety of cost-appropriate products and cluster configurations, those same partners, which as the S-1 notes have non-exclusive agreements, will gravitate to other vendors' more competitively priced alternatives or even cloud services.

IPO Not A Given

Given ongoing consolidation in the server and storage business, it's conceivable that Nutanix never makes it to IPO, however the Nutanix IPO/bidding war will be one of 2016's notable tech and financial events, and watching the process unfold is something I'm looking forward to in the coming months.

 

AWS Security Management: In Need of Automation

By | December 27, 2015

A version of this article originally appeared on TechTarget SearchAWS as Rely on cloud security policy — not tools — to protect AWS


Managing security policies and incidents on IaaS can be complex and challenging. Here’s what vendors are doing about it

Once enterprises move workloads to cloud infrastructure, they soon realize that the tools for enforcing security policy and managing incident response are inadequate. Configuration can be very confusing, with important details often spread across different management screens, resulting in complicated, multi-step processes to build consistent policies across the cloud service stack. Although IaaS and its fully abstracted services bring several security benefits, including the ability to micro-segment networks and services with application-specific firewalls and granular access controls, central visibility and management of all resources, and hardened infrastructure designed and operated by experts, using it securely still requires ample planning, some new management processes and learning new tools.

The granularity of cloud resources, in which user privileges and resource access controls can be specified with incredible precision, is a mixed blessing. Although it allows much finer-grained definition and auditing of security policies, the resulting complexity means cloud security is often poorly implemented, leaving unintended gaps and backdoors. Although not necessarily thinking about the operational details, cloud users remain concerned about security in general. For example, a survey by the Information Security Community on LinkedIn found the biggest perceived cloud security threats to be unauthorized access, hijacking of accounts or services, malicious insiders and insecure APIs. The same survey finds the most popular suggestion for closing the cloud security gap is for cloud services to provide the ability to set and enforce consistent security policies across clouds. Although not specified, we presume this applies across both public and private clouds.
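To illustrate that granularity, here is a minimal sketch, using Python and boto3, of an IAM policy granting one group read-only access to a single S3 prefix; the group name, bucket and policy name are hypothetical. The same precision that makes least-privilege policies like this possible is also what makes it easy to leave an unintended gap once dozens of them interact.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege policy: read-only access to one S3 prefix.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/finance/*",
            ],
        }
    ],
}

# Create the managed policy, then attach it to a (hypothetical) IAM group.
policy = iam.create_policy(
    PolicyName="FinanceReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_group_policy(
    GroupName="finance-analysts",
    PolicyArn=policy["Policy"]["Arn"],
)
```

Multiply this by hundreds of users, roles and resources and the appeal of automated policy checking becomes obvious.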

All of these needs and deficiencies can be addressed by existing cloud security management tools when properly configured, however it’s too easy to make mistakes. The good news is that cloud vendors see the problems and are addressing them with new services that promise to centralize, automate and simplify cloud security management.


Survey by the Information Security Community on LinkedIn

Rundown of New Cloud Security Services

The virtual, ephemeral nature of cloud services is both a boon to security and a source of management headaches. It's a benefit since it allows easily inserting security services and control points between every layer of the infrastructure. But the ease with which cloud instances can be deployed, moved and destroyed also makes it exceedingly difficult to keep track of the security policies and configuration applied to each one. This problem of management complexity and security compliance received significant attention at the major events AWS and Microsoft held this fall to unveil new features and educate customers.

AWS

At this year's re:Invent, AWS announced two new security services and enhancements to a third. Although one product was a straightforward Web application firewall (WAF) — useful, but hardly groundbreaking — the other two squarely tackled the problem of overly complex security administration. These complement the existing AWS Trusted Advisor service that analyzes an environment to identify ways to improve performance, security and reliability and reduce cost.


Key AWS Security Tools | Source: AWS

  • Amazon Inspector provides automated security compliance auditing by comparing the configuration of server instances, networks and storage against a knowledge base of hundreds of rules, looking for violations of best practices and standards like PCI DSS. These include things like allowing remote root logins, unpatched software with known vulnerabilities or leaving network ports unnecessarily open. Inspector generates a prioritized report of each violation and suggested remediation steps. According to the product announcement, "The initial launch of Inspector will include the following sets of rules: Common Vulnerabilities and Exposures, Network Security Best Practices, Authentication Best Practices, Operating System Security Best Practices, Application Security Best Practices, PCI DSS 3.0 Assessment."
  • AWS Config Rules is an enhancement to the Config service we mentioned in a previous article on AWS security auditing that adds templates and guidelines, using a mix of pre-built AWS best practices and a user's custom rules, to flag errors in provisioning and configuring resources (a minimal sketch of a managed rule follows this list). The service continuously monitors the environment to ensure resources remain compliant. Example rules include mandating that volumes be encrypted, proper tagging of all EC2 instances or that CloudTrail (logging of API calls) be enabled on all resources.
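As a rough illustration of how such a rule is expressed, the sketch below uses Python and boto3 to enable the AWS-managed ENCRYPTED_VOLUMES rule against EBS volumes. It assumes the Config recorder is already running in the account and that the current boto3 API shape applies to the preview service; the rule name is just an example.

```python
import boto3

config = boto3.client("config")

# Enable the AWS-managed rule that flags unencrypted EBS volumes.
# Assumes the AWS Config recorder is already set up in this account/region.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-must-be-encrypted",
        "Description": "Flag any EBS volume that is not encrypted.",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {
            "Owner": "AWS",                      # AWS-managed rule
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
    }
)

# Later, pull the compliance summary for the rule.
result = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["ebs-volumes-must-be-encrypted"]
)
print(result["ComplianceByConfigRules"])
```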

Both Inspector and Config Rules are still in preview release, which limits deployment size and regions, with no indication of when they might be generally available.


AWS Config Rules, Resources Supported | Source: AWS

Azure

One of the major announcements out of Microsoft’s Azurecon event was the Azure Security Center, a service that consolidates security management and monitoring under a single portal. For example, admins can quickly see if VM images and configurations are up to date, configured according to predefined standards or Microsoft guidelines and running necessary security software. From the same portal admins can also check on network and database settings like ensuring that virtual networks are members of the correct security groups and have properly set ACLs, or whether SQL databases are encrypted.

Security Center also draws upon threat intelligence data Microsoft collects from all Azure deployments and notifies customers of unusual or threatening activity. For example, Microsoft has built a reputation database of known bad sites such as those that are part of botnet control networks. As an Azure blog post puts it, "The IP address of those bad actors is then used to help detect attacks against other customers. Azure Security Center can also analyze outbound traffic and leverages threat intelligence sourced from the Microsoft Digital Crimes Unit to detect when resources are communicating with malicious IP addresses like command and control centers. It can also alert you to suspicious actions on Virtual Machines that indicate an attack is in progress."

Security Center is Microsoft’s platform for connecting third-party security products like next-generation firewalls, vulnerability monitors (IDS/IPS) and others from Azure’s ecosystem of service partners. Consolidating built-in and third-party security products under one umbrella simplifies both deployment and ongoing management.

Like the new AWS services, Security Center is currently a preview release and not ready for production workloads.

Google Cloud

Although not as ambitious as its competitors' new services, Google has recently automated a key security task, vulnerability scanning, at least for its PaaS App Engine customers. According to documentation for the new Security Scanner, "It crawls your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible." Security Scanner can detect the following vulnerabilities: XSS (cross-site scripting), Flash injection, mixed content (fetching unencrypted HTTP content on an HTTPS page) and usage of insecure JavaScript libraries.

Action Items

Cloud security management remains a challenge given the ability to deploy vast numbers of many different types of virtual resources. Yet recent announcements show that the major cloud services recognize the resulting complexity and are responding with better tools. AWS users should sign up for both the Inspector and Config Rules previews and build test environments up to the limitations of the respective beta programs. Since both AWS services rely on tags, users should be vigilant in categorizing resources with a consistent schema that maps to meaningful categories like business unit, primary owner, application, security level, stack tier, etc.
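As a starting point for such a schema, here is a small sketch in Python/boto3 that applies a consistent set of tags to a list of EC2 instance IDs. The tag keys, values and instance ID are only illustrations of the categories described above, not an AWS requirement; adapt them to your own organization.

```python
import boto3

ec2 = boto3.client("ec2")

# Example tag schema: adjust keys and values to your own organization.
TAG_SCHEMA = [
    {"Key": "BusinessUnit",  "Value": "retail"},
    {"Key": "Owner",         "Value": "jane.doe@example.com"},
    {"Key": "Application",   "Value": "order-pipeline"},
    {"Key": "SecurityLevel", "Value": "confidential"},
    {"Key": "StackTier",     "Value": "web"},
]

def tag_instances(instance_ids):
    """Apply the standard tag set to the given EC2 instances."""
    ec2.create_tags(Resources=instance_ids, Tags=TAG_SCHEMA)

# Hypothetical usage with an instance ID from your environment.
tag_instances(["i-0123456789abcdef0"])
```

Applying the same schema from a script, rather than by hand in the console, is the simplest way to keep tags consistent enough for Config Rules and Inspector to rely on them.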

Likewise, Azure customers should become familiar with Security Center by viewing the online video training and tutorials and setting up some test resources to get hands-on experience with the new features.

What’s Next In Tech? Some Obvious Changes, Plus The Ultimately Most Significant Unknowables

By | December 23, 2015

This column previously appeared in Forbes.


 

The annual deluge of year-end predictions is as predictable as the winter solstice and about as dispiriting. Ironically for such a dynamic industry, those in the tech world are among the most predictable. That's because most technology predictions extrapolate existing trends. Granted, most 'experts' will throw in a couple of outliers, the equivalent of a Hail Mary pass that no one will remember if incorrect but will be endlessly flogged for self-promotion if it pans out; these, however, are just hedges against the main bets.

Most technology prediction pieces are so trite, formulaic and unoriginal that, like listicles and ad-infested slide shows, it’s a wonder they still attract readers. But click magnets they are and like most click magnets, yearly prediction lists are a waste of time. The reason is perfectly encapsulated in that famous Donald Rumsfeld quote about the state of wartime intelligence (emphasis added),

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

For technology, I would amend the last sentence to close with "the important ones." Prediction lists invariably focus on the easy stuff, the things we know we know, by just offering logical extrapolations of existing technology. Yet it's that last category of unknown unknowns that ultimately proves to be the most significant for technology. For example, who predicted the nexus of mobile apps, location-aware smartphones, sophisticated backend software and a ready workforce of underemployed people would yield ride-sharing services like Uber and Lyft and the resulting disruption of the taxi and, soon, the delivery business? Or who thought that a low-level NSA contractor would release a trove of top secret documents detailing expansive government surveillance programs and end up galvanizing the tech world towards the pervasive use of encrypted communications and data storage?

These two examples illustrate characteristics of events that prove the most significant in the tech world. The sharing economy typifies what can happen when the maturation of many different technology threads enables a creative business to fuse them into an innovative new product or service. Independently, each component is evolving on a relatively predictable path, however they become revolutionary when crystallized into something entirely new and unforeseen. Indeed, going further back in time, the smartphone itself embodies this creative fusion, combining processing, telephony, networking, memory and high-resolution touch-sensitive displays into a handheld package. That such a pocketable device could displace the PC for an ever-increasing number of activities, a displacement that has upended an entire industry, was laughable a dozen years ago when a 'smart' phone was a glorified PDA.

The second, Snowden example illustrates an exogenous shock that catalyzes a flurry of technology activity and innovation in response. Historically, these have often come in times of war, for example the Manhattan Project in World War II or the DARPA-funded Internet during the Cold War. However, sometimes it's when a 'known unknown' suddenly becomes known. For example, everyone knew that AWS was the dominant cloud platform, but until this year, when Amazon began breaking out AWS revenues and profits, no one outside the company knew how big or how successful it was. Indeed, the conventional wisdom had it that AWS was a money-losing proposition that existed primarily to defray the costs of Amazon's own sizable infrastructure needs. When Amazon finally broke out AWS in its earnings reports and it became apparent how big, profitable and fast-growing AWS actually is, the news shocked the industry and likely fueled both panic and competitive juices among other cloud services and hardware vendors.

As you're reading yet another prediction piece stating the obvious, like IoT will be a key business strategy for increased revenue and efficiency, VR will make its way to enterprise applications, cyber security will shift from detection to prevention or predictive analytics will reshape how companies develop and market products, just remember that in hindsight, almost none of these will prove to be the most significant and disruptive to enterprise IT. One unexpected hack of an IoT system that destroys a big company's manufacturing line, or predictive software that runs amok and automatically starves someone's supply chain during a key shopping season, costing millions of dollars and leaving customers unhappy, can do more to shape technology's future than all the sage advice from high-paid research firms and consultants.

Here’s wishing everyone a skeptical new year.

AWS Operations Management: Improvements Needed

By | December 17, 2015

A version of this article originally appeared on TechTarget SearchAWS as Where AWS monitoring tools fall short


 AWS has a rich set of management APIs, automation tools and a central management console, but it can’t yet provide end-to-end performance and troubleshooting data

AWS has an overwhelming list of services, but piecing together a multi-tier application design and then monitoring and managing the result can feel like ordering off a Chinese menu. There are so many choices, many with similar or overlapping features, that finding the best solution, or even a workable one, is an arduous task. The complex mix of services only complicates system management, particularly when you layer on the fact that most enterprise AWS applications don't exist in isolation, with their entire lifecycle spent in the cloud. Instead, they pull data from internal and third-party sources and target many different user groups and platforms, whether field service reps on tablets or B2B information exchanges with business partners. The resulting heterogeneous mix of services, networks and data sources makes comprehensive system and application management almost impossible.

While AWS services provide a good idea of what's happening in the cloud, they can't measure the big picture of end-to-end performance and reliability, which are ultimately the only parameters enterprises really care about. Furthermore, AWS management services are designed for use via its own console, not the system management platforms enterprises already have deployed, adding yet another tool for already overworked admins to learn and monitor. In sum, it means AWS has some major holes, or, for those of a more optimistic bent, significant opportunities for improvement to its operations management portfolio.

The holes/opportunities are apparent when you consider the complexity of trying to get a complete view of an application’s performance, never mind troubleshooting any anomalies, for a reasonably elaborate enterprise app. For example, at the last re:Invent conference, Coursera discussed the data flow and ETL processes for its AWS-based data warehouse. It’s a system that pulls data from 15 sources including client events, external databases and third parties into a pipeline consisting of EC2 instances, S3 storage and EMR (Hadoop) processing that ends up in a multi-TB Redshift warehouse that combines it with even more data from internal business intelligence applications to power recommendations, search and other Coursera data products.

Even a simpler example, running SharePoint on AWS, shows the challenges of managing composite applications consisting of many different server and storage systems. The AWS SharePoint reference architecture includes no less than six AWS servers and two databases spread across two subnets, with both VPC (to an internal data center) and public Internet connections. Imagine trying to manage the performance of an internal Excel application that pulls data from an internal database and AWS-resident SharePoint repository, crunches the data and writes a report back out to another SharePoint share. Each AWS SharePoint server could be operating fine, but bottlenecks and resource contention at any point in the processing/communication chain could cause the application to fail.


AWS SharePoint reference design. Source: AWS

Trying to monitor, much less guarantee, end-to-end transaction performance, or worse yet, find and fix problems when something goes wrong, isn't something existing AWS tools are designed to do. Yet this is precisely what enterprises need. Indeed, the problem is so intractable that Bernd Herzog, founder and CEO of OpsDataStore, claims, "The bottom line is that today end to end service quality assurance in the public cloud is impossible." Herzog founded OpsDataStore to solve this problem, however the problem is sufficiently diverse and demanding that the company doesn't intend to tackle it alone. Instead, the company is building a data platform that it hopes will support an ecosystem of point products spanning infrastructure, application performance, security, automation and financial management.


Operations data collection architecture. Source: OpsDataStore

Ops Management Roadmap

Examining typical enterprise cloud deployment scenarios illustrates the challenges and opportunities for improvement to AWS's operations management capability. AWS currently relies on third-party marketplace suppliers like AppDynamics, New Relic or Splunk for more extensive monitoring and troubleshooting features, leaving the market open for multi-cloud management specialists like RightScale, Scalr, SevOne and Skeddly that augment or outright replace the AWS console with SaaS. Indeed, a SevOne post on monitoring public cloud infrastructure provides a good list upon which to build some recommendations for the AWS product roadmap.

Enterprise admins need to:

  • Monitor cloud and on-premise infrastructure from a single platform. The task for AWS: provide better integration to popular enterprise management software from the likes of CA, IBM, Microsoft and VMware.
  • Track both cloud and on-prem resource consumption, trend usage over time and trigger alerts on spikes or anomalies (a minimal alerting sketch follows this list). The task for AWS: augment existing cloud-only capabilities by tying usage to users, projects and budgets. The AWS service must also tie into enterprise account and billing systems. Tying usage to projects will entail more thorough use of resource tags. AWS needs to make these easier to set up and use.
  • Measure the End User Application Experience end-to-end across the entire application stack. The task for AWS: develop, acquire or more seamlessly integrate tools for end-to-end system performance monitoring. Performance management features must also tie into troubleshooting software like log and configuration analysis tools. AWS has a piece of this with the new AWS Config Rules, but it needs much more.
  • Integrate Performance Metrics, Data Flows and System/Device Logs into an aggregated view of the entire infrastructure, what Splunk calls end-to-end Operational Intelligence and the goal of OpsDataStore and other next-generation, cloud-centric management software firms. The task for AWS: again involves integration of cloud data with existing enterprise management systems to create a single version of the truth.
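For the usage-trending and alerting item above, here is a minimal sketch, using Python and boto3, of the kind of spike alert enterprises typically want today; the instance ID, threshold and SNS topic are hypothetical placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when a (hypothetical) instance averages >80% CPU for 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-tier-cpu-spike",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute data points
    EvaluationPeriods=3,        # 3 consecutive periods = 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```

This still covers only the cloud side; tying such alerts to on-premises monitoring, projects and budgets is exactly the integration gap the list above describes.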

Cloud Services And The Demise Of Storage Arrays, But Intel Won’t Own Server Side Storage

By | December 16, 2015

Enterprises large and small have been flocking to cloud services and it's sent the server business into a tailspin. Storage vendors are next to feel the disruptive pain. Recent market estimates show that storage industry growth has moved to ODMs and niche vendors targeting hyperscale cloud services. Indeed, most of the incremental storage, which is going into hyperscale cloud data centers, is being provided by converged, scale-out systems. In the context of cloud services, this means they're using distributed, virtualized storage on top of commodity servers; i.e. JBOD disks paired with cloud-native software.


Data Source: IDC | Chart: Author

The implications for top-tier storage vendors are obvious: declining sales and squeezed profit margins. However, the ramifications for the rest of the hardware supply chain could be equally profound, particularly for Intel. After much delay, ARM server SoCs are finally here and cloud-scale storage systems offer one of their best inroads to the data center. Here's why and what it means to vendors and hardware buyers.


Cloud-scale storage systems offer one of the best inroads for ARM servers in the data center. By coupling many small cores with hardware modules for things like data compression, parity (RAID) calculations, SSL acceleration (crypto calculations) and integrated storage and network controllers, ARM-based SoCs can be for storage servers what custom mobile chips like Apple’s A-series are to mobile devices: a tailored, all-in-one processing engine.

Read on for more on who’s affected and what to watch for in 2016.

Picking the Best iPad Pro Keyboard: Apple’s Smart Keyboard Not the Smartest Choice

By | December 5, 2015

Unless you recently came into an inheritance, or that special someone has been extraordinarily good this year, the iPad Pro is a bit pricey for most holiday gift lists. However, for those Apple fans in your life who have already upsized to the Pro, there are plenty of accessories that make nice gifts. For business users and students, none is more important than a keyboard, and although Apple released the specially designed Smart Keyboard for the iPad Pro, it remains hard to get, meaning a Plan B is essential with Christmas just three weeks away. Not to worry, since there are several good alternatives that best Apple's product on several fronts.


In my experience, there's no better iPad Pro-sized portable keyboard than the Zagg Messenger Universal. I've used both the Zagg and Apple keyboards for the last couple weeks and unless you value form over function (in this case thinness versus features), the Zagg is a better choice. Here's my full discussion of why. In summary, unlike the Smart Keyboard, the Messenger doesn't double as a cover, but it does fold into a compact, self-sealing case that's just slightly thicker than the iPad itself. Beyond that, the Zagg has several things to commend it:

  • A full row of iOS shortcut keys, including one I haven’t seen before to display the stack of running apps (equivalent to double-tapping the iPad home button).
  • Better key action: With a deeper profile, the Zagg uses a more traditional key mechanism that offers a longer key stroke and better tactile feel.
  • Works in portrait or landscape: Like all tablet keyboards, the Zagg folds into a stand, however since it's wireless and not tied to a mechanical connector, the iPad Pro works in either portrait or landscape. Note that due to the Pro's size, it's a bit top-heavy in portrait, so you have to be careful when using it on your lap in this orientation.
  • Good battery life: Zagg claims three months between charges, but I haven’t had it long enough to verify; I’m still on my first charge.

Read on for the details…


Zagg Messenger Universal turns the iPad Pro into a laptop.

Powering IoT with Cloud Backends: New Front in the Platform Wars

By | December 2, 2015

A version of this article originally appeared as IoT cloud services market spurs products from cloud giants on SearchCloudApplications at TechTarget.


IoT Developers Can Learn From Mobile Apps and Exploit Cloud Services for IoT Backends

The past year has seen IoT evolve from IT buzzword to strategic business imperative as a steady drumbeat of big business projects and vendor product announcements have legitimized the concept of connected devices. IoT was one of 2015's top trend predictions that technology analysts got right, although it was already a phenomenon with significant momentum. There are about 10 billion connected devices now in use and various forecasts project that number to double or even quintuple by 2020. This translates into at least a billion dollars in annual revenue for companies active in the IoT industry with a total economic impact rivaling that of the German economy by 2025. Even should these estimates prove wildly optimistic, companies and IT developers can't ignore business applications that promise new sources of revenue, higher customer satisfaction and greater efficiency by incorporating intelligent, connected devices into products, services and business processes.

[Chart: IoT connected-device forecasts]

Consumer products like wearables, connected appliances and smart home controllers have generated most of the IoT buzz, but its more important, profit- and revenue-enhancing applications come from adding sensors, intelligence and connectivity to equipment. The combination of smart sensors, cheap, battery-powered processors and storage, and ubiquitous wireless networks yields a bonanza of new information that can be transformed into business insight.

[Chart: IoT revenue forecast]

Indeed, 'things' are only half of the IoT story since device 'intelligence' is a relative term: they mostly collect and distribute data about local conditions, with little ability to process that data themselves. Thus IoT is equally a big data problem since the whole point of connecting intelligent devices is to gather and share data, information that once aggregated and analyzed can spot trends, detect problems, flag anomalies and modify actions. Yet IoT isn't your typical big data system since it involves thousands, if not millions, of data sources scattered across myriad remote networks that combined can generate enormous amounts of data.

Cisco estimates that connected devices will create 507.5 zettabytes (a zettabyte is a billion terabytes) of data per year by 2019. Although most of this raw data, like machine telemetry or device logs, will never make it to a data center, it still implies gigabytes, if not terabytes, per year per device flowing into some sort of IoT analysis system. The question is where? What can handle IoT data volumes, from millions of connections, where the data flow can be highly variable and episodic, and process the data into useful information? Hyperscale cloud services are a natural fit.

Cloud and IoT: Central Intelligence for Distributed Data

We agree with IDC’s forecast that within five years, “more than 90% of all IoT data will be hosted on service provider platforms as cloud computing reduces the complexity of supporting IoT ‘Data Blending’.” IDC also projects that “the growing importance of analytics in IoT services will ensure that hyperscale data centers are a major component of most IoT service offerings;” that is, IoT will fuel cloud growth.

We already have an example from the smartphone world. Mobile app developers needing backend processing, data aggregation and state management for millions, if not billions (in the case of Facebook), of connected clients recognize the value of cloud backends and have fueled the rise of MBaaS (mobile backend-as-a-service) products. IoT is following a similar path, although this time cloud providers are ahead of the developers. The last few months have seen a spate of IoT service announcements from the cloud giants as each seeks to build an ecosystem that can win developers and capture the fast-growing market for enterprise IoT projects. GE was prominently featured at October's re:Invent conference discussing its use of AWS to replace traditional data center workloads and as an IoT data processing engine.


Azure IoT Communication Architecture | Source: Microsoft

AWS and Other Cloud Majors Fighting for Business

That cloud services have gotten IoT religion is evidenced by recent product introductions. AWS launched an ambitious IoT service to manage intelligent things (connected devices and physical objects) that includes an object abstraction layer (Shadows), an object registry, message brokers and a rules engine that can trigger other AWS services. Not to be outdone, and in a preemptive strike the week before, Microsoft released an IoT Suite that, like Amazon's, is designed to capture, integrate, analyze and report the information from myriad devices with the cloud acting as the focal point for data aggregation and processing. Google also has an IoT message for its cloud services, however it's not a cohesive product and requires a DIY approach, stitching together existing services like BigQuery, Cloud Pub/Sub (message bus) and Firebase (MBaaS) into a streaming data backend.
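To make the device-to-cloud path concrete, here is a minimal sketch, using Python and boto3, of a sensor reading being published to the AWS IoT message broker; the topic name and payload fields are hypothetical, and a real device would typically authenticate with its own X.509 certificate over MQTT rather than use SDK credentials.

```python
import json
import boto3

# The 'iot-data' client talks to the AWS IoT message broker.
iot_data = boto3.client("iot-data")

# Hypothetical telemetry reading from a factory sensor.
reading = {
    "deviceId": "press-line-7-temp",
    "timestamp": "2015-12-02T14:26:35Z",
    "temperatureC": 71.4,
}

# Publish to a topic; an IoT rule can match this topic and forward the
# payload to Kinesis, DynamoDB, Lambda or another AWS service.
iot_data.publish(
    topic="factory/press-line-7/telemetry",
    qos=1,
    payload=json.dumps(reading),
)
```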


Google Cloud Real-time Streaming | Source: Google

Besides handling IoT data analysis, the other key requirement cloud IoT services must address is security. Here cloud services are ideal due to their proven ability to scale, and if there's one thing IoT requires, with millions of devices and terabytes of data, it's scale. It's a multifaceted problem that includes device and user authentication, security credentials management, incident detection, alerting and auditing, and even threat prevention and mitigation. A promising strategy uses the cloud backend as a security hub/controller to control connections and enforce policies for IoT device communication. Microsoft implements what it calls service-assisted communication through the Azure IoT Hub, and the AWS IoT security model takes a similar approach.


AWS IoT Security Architecture | Source: AWS

Quick Tips

Intelligent devices generating reams of data are coming whether enterprises want them or not. Industrial and IT products will increasingly provide much richer telemetry about their state of operation, usage and anomalies, however without an IoT data collection and analysis strategy, organizations will end up wasting that information. We offer the following suggestions.

  • Investigate the IoT features of existing data center, manufacturing and facilities equipment and select a few areas in which better understanding of operating conditions might eliminate service calls, prevent or mitigate equipment problems or provide deeper understanding of user behavior.
  • Organizations developing hardware products should make IoT data collection and communication part of the design. Look at reference architectures from component manufacturers like Intel, Marvell, MediaTek and others.
  • Exploit cloud services for the IoT data aggregation and processing backend. Although AWS and Azure are leading the way and have beta services available today, others are sure to follow.
  • Build the IoT software architecture on three pillars (a minimal streaming-ingestion sketch follows this list):
    • data streaming, collection and management
    • big data analysis
    • security, looking at the full spectrum of authentication, credential and monitoring features
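As one hedged example of the first pillar, the sketch below uses Python and boto3 to push device readings into an Amazon Kinesis stream, from which the analysis tier can consume them. The stream name and record format are hypothetical, and Azure Event Hubs or Google Cloud Pub/Sub would fill the same role on the other clouds.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def ingest_reading(device_id, payload):
    """Push one device reading into the (hypothetical) telemetry stream."""
    record = {"deviceId": device_id}
    record.update(payload)
    kinesis.put_record(
        StreamName="iot-telemetry",
        Data=json.dumps(record),
        # Partitioning by device keeps each device's readings ordered.
        PartitionKey=device_id,
    )

# Hypothetical usage
ingest_reading("press-line-7-temp", {"temperatureC": 71.4, "ts": "2015-12-02T14:30:00Z"})
```

Downstream, the same stream can feed a big data service for analysis and a monitoring pipeline for the security pillar, keeping all three pillars on one data path.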

IoT is still a new and dynamic field, meaning projects must start small, adapt and iterate quickly, and include user and business unit feedback early and often, since the goal is improved operations, greater efficiency and new sources of revenue. Look for problems that could be easily fixed with better information, but that don't require a major new hardware design (unless of course, you're in the hardware business and starting a new design cycle). Using cloud services eliminates a major roadblock, namely backend infrastructure deployment and management, from the project and will reduce the time between idea and results.