Monthly Archives: January 2016

In Azure Stack, Microsoft Sees the Foundation for Enterprise Clouds

By | January 31, 2016

The following post originally appeared in Diginomica as Microsoft tilts at enterprise cloud domination with Azure Stack 


There was an old rule about Microsoft software: never use it until version 3. The generalized axiom is that it takes Microsoft a while to get things right; however, once the company sets its mind to something, you can expect big results. Every rule has an exception (mobile, in this case), but Microsoft's history with PC UIs, the Internet, search, Apple software and most recently the cloud shows that it's seldom first to market, yet capable of stunning turnarounds once it acknowledges and confronts an existential threat. Microsoft's current obsession is the cloud and the concomitant new infrastructure delivery and software development models that both threaten its traditional packaged software business and open opportunities to dominate the still-nascent era of SaaS subscriptions, rental infrastructure and retooled, omni-virtualized, service-oriented data centers. Azure is Microsoft's answer to AWS for public cloud services, but with the just-released Azure Stack, it's bringing the same capabilities to internal IT.

Azure and Azure Stack demonstrate Microsoft's total commitment to hybrid public/private clouds and together mark its most significant differentiator from its biggest cloud competitor, AWS. Like VMware with vCloud/vCloud Air, Microsoft aims for complete compatibility between private and public Azure instances; however, unlike vCloud, Azure includes a growing list of application-level PaaS services and, more importantly, enough current and potential customers from its huge Windows Server installed base to pose a realistic threat to Amazon's dominance. Most estimates put Azure as a strong number two in the IaaS market with between 30% and 60% of AWS's market share, albeit with a faster growth rate. According to Microsoft's figures (taken with an appropriate amount of skepticism, but plausible), 80% of the Fortune 500 use Azure in some capacity, the service adds over 90,000 subscriptions per month and it currently hosts over 1.4 million SQL databases. In the company's October FY16 Q1 earnings report, Azure revenue doubled year over year, a rate twice that of AWS. Both services topped that to start 2016: while Amazon reported a nearly 70% increase in AWS revenue, Microsoft again doubled it with an astonishing 140% increase in Azure revenue.

Microsoft's strategy to continue that growth is an all-in bet on hybrid cloud, which it sees addressing business concerns like infrastructure, application and transaction latency; data sovereignty and regulatory control; and the need for customized infrastructure that can't be met in a public, shared-services environment. Its solution, via Azure Stack, is to bring Azure inside the enterprise. Azure Stack provides full private-public cloud compatibility for developers and ISVs (common APIs and automation tools), business analysts (consistent design patterns and service elements) and IT (the same management console, admin constructs and access controls). This is possible because Azure Stack uses the same code base as public Azure: the internal differences occur primarily at the hardware level, to account for idiosyncrasies of Azure's much larger scale and its occasional use of one-off, non-commercial hardware.

Azure public vs. private service comparison. Source: Microsoft

Although we've heard hybrid cloud stories before, at a private briefing with a small group of analysts and journalists I was struck by how comprehensive Microsoft's Azure vision actually is, namely the ability to run a complete Azure PaaS in a private data center. Indeed, an important customer segment for Azure Stack isn't enterprises at all, but service providers that can customize a base Azure cloud for specific industries and application categories, or provide hosting in underserved parts of the world where Microsoft doesn't have a regional presence. The tight public-private integration was driven home during hands-on labs using the existing Azure management portal, where Azure Stack running on a local (very powerful) workstation looked like just another cloud region when it came time to deploy services. Everything, whether JSON service templates on GitHub or PowerShell automation scripts, worked the same regardless of the target Azure infrastructure.
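
To make that concrete, here is a minimal sketch (in Python, against the Azure Resource Manager REST API) of what "same template, different target" looks like in practice. The resource names, Azure Stack endpoint and api-version are hypothetical placeholders; treat it as an illustration of the pattern, not a verified deployment script.

```python
# Sketch: deploying the same ARM template to public Azure or to an Azure Stack
# instance by swapping only the management endpoint. The endpoint URLs, names
# and api-version below are illustrative placeholders, not verified values.
import requests

def deploy_template(endpoint, token, subscription_id, resource_group,
                    deployment_name, template, parameters):
    """PUT an ARM deployment; the call shape is identical for both targets."""
    url = (f"{endpoint}/subscriptions/{subscription_id}"
           f"/resourcegroups/{resource_group}"
           f"/providers/Microsoft.Resources/deployments/{deployment_name}")
    body = {"properties": {"template": template,      # same JSON template from GitHub
                           "parameters": parameters,
                           "mode": "Incremental"}}
    resp = requests.put(url,
                        params={"api-version": "2015-11-01"},  # illustrative version
                        headers={"Authorization": f"Bearer {token}"},
                        json=body)
    resp.raise_for_status()
    return resp.json()

# The only per-target differences are the management endpoint and the token
# authority used to obtain the bearer token:
PUBLIC_AZURE = "https://management.azure.com"
AZURE_STACK = "https://management.local.azurestack.external"  # hypothetical PoC endpoint
```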

Still Work in Progress

Although eminently usable, Azure Stack is far from production-worthy: this week's announcement is just the first Proof of Concept (PoC) preview release and contains a small subset of public Azure's menu of services. While Microsoft hasn't publicly detailed the features included in Azure Stack, slides presented at the private briefing indicated that few of Azure's PaaS services made the initial cut, the notable exception being the Web Apps service. Azure's Mobile and IoT services will have to wait, and some requiring sizable infrastructure, like HDInsight and Machine Learning, may never be included with Azure Stack.

Azure Stack architecture. Source: Microsoft

Azure Stack might run on a single machine, but developers shouldn't expect to install it on their PC: the PoC hardware requirements are significant. You'll need at least a 12-core machine with 96 GB of RAM and four disks, and even then expect to wait. Our lab machines had over twice this capacity and still bogged down on certain operations. A minimal Azure Stack deployment will require four 16- to 32-core servers with at least 128 GB of RAM and multiple disks. Like VMware with EVO:RAIL, Microsoft expects most deployments to use purpose-built hardware pre-integrated with the Azure Stack software, although a DIY option is possible for those willing to hew closely to an as-yet-unpublished hardware compatibility list.

My Take

Azure Stack represents the logical endpoint of a hybrid cloud strategy: the same technology stack available for rent as a shared service or for sale as a private cloud. While Microsoft isn't uniquely qualified to "bring the full power of a true hybrid cloud platform" to market, as Mike Neil, CVP of Enterprise Cloud for Microsoft, put it in announcing the preview, I would agree that it's one of a handful of companies that can. Indeed, given Microsoft's presence in enterprise data centers and its expertise running one of the world's largest suites of cloud services, it's certainly the best positioned.

Azure Stack solidifies Microsoft as the safe, familiar enterprise cloud alternative to AWS. Don't expect Azure to unseat AWS as the king of cloud anytime soon; however, its mix of familiar services (AD, SQL Server), developer support (Visual Studio, .NET) and seamless hybrid infrastructure integration means Azure will appeal to a much broader enterprise audience. Microsoft's Azure strategy presents a much more significant threat to the two other oft-mentioned hybrid cloud alternatives: OpenStack and vCloud. Although both will retain adherents in particular niches, like academia and HPC (OpenStack) or organizations already committed to VMware's vision (vCloud), neither can foreseeably deliver the same gamut of platform services across public and private clouds to a customer base already comfortable with many of the management and development tools.

Key to assessing how fast Azure picks up momentum will be the pace of adoption by service providers and OEM partners, not enterprises. If Dell, HP, Lenovo and others flood the market with a variety of Azure-integrated products, perhaps in conjunction with a branded shared service; if ISPs build regional Azure clouds for emerging markets and industry verticals; and if trusted channel partners push Azure to their mid-market customers, then forget VMware or IBM: even Amazon will take notice.

Azure management portal.

4G Isn’t Just For Phones: LTE As A Backup Network

By | January 30, 2016

LTE availability and performance make it a viable option for branch office and retail WAN redundancy: some options and considerations

This column summarizes a report done for Channel Partners available here (with registration). 


Reliable network connectivity is such a fundamental requirement for the digital business and mobile lifestyle that most people consider it on par with power and water: a critical utility. For individuals, network downtime is arguably an inconvenience (your Facebook feed will be waiting for you when the cable modem comes back online), but for business it's a matter of money. When the network goes down, sales and critical business processes grind to a halt. Although data center and campus networks are hardened with multiple WAN connections, multi-WAN load balancing and link failover, branch offices, retail stores and temporary locations like construction sites, drilling rigs and convention center booths are almost always dependent on a single connection.

Today, there's an easy way to boost remote site availability with an independent, redundant circuit using wireless LTE. Availability is virtually universal in North America, with 5-bar coverage and usage constantly increasing. According to the Global mobile Suppliers Association (GSA), there are one billion LTE subscribers worldwide, growing at a 29% CAGR over the next five years, while the most recent Ericsson Mobility Report pegs current North American LTE penetration at 40%, expected to hit 90% by 2020.

From Ericsson Mobility Report, June 2015.

Indeed, LTE provides more than enough speed to act as a remote site's backup network. There are several ways to seamlessly integrate LTE into a site's WAN using existing equipment, and don't worry: you needn't ask employees to tether their PCs to a smartphone. The most common is support for plugging a USB modem like the AT&T Beam or Verizon MiFi into a WAN router or UTM appliance, but those aren't the only options.

Typical backup WAN using LTE, courtesy Cisco.

In general, for branch locations without a secondary WAN link, we recommend that organizations strongly consider adding LTE to the mix. Download the full report (your reward for making it this far) for details. We also advise investigating LTE, in conjunction with a VPN for maximum security, as a way to isolate internal networks from public-facing systems, at least for low-bandwidth applications like PoS transactions. Hardware and LTE service options abound, so there's no reason to leave remote employees and customers exposed to the vagaries of a single broadband carrier.

Cutting Costs in AWS: Look Beyond the Obvious to Tame Your AWS Budget

By | January 25, 2016

A version of the following article appeared on TechTarget SearchAWS as Four principles of AWS cost management


The cost calculus of moving applications from on-premises infrastructure to cloud services like AWS is akin to going from a fixed-price, all-you-can-eat buffet to an à la carte cafeteria. Gone is the incentive to consume as much as possible in order to maximize ROI (or food per dollar). Instead, the goal is to use (or eat) exactly what you need, but no more. It's the classic economic trade-off between fixed and variable costs.

AWS has embraced pay-as-you-go as a defining feature, but it's just one of four basic principles defining Amazon's pricing philosophy, the others being:

  • pay less per unit by using more
  • pay less when you reserve
  • pay less when AWS itself grows

Although users have no direct control over the last tenet (collectively, they're doing a great job of growing Amazon's business), effectively exploiting the first three is key to getting the most value from AWS.

Start with the Basics

The obvious way to cut AWS costs is by using less. Unfortunately, that advice is about as useful as saying the way to investment riches is to buy low and sell high: the devil's in the details. Indeed, the tendency is to overconsume, since the abstract, ephemeral nature of AWS, with resources that can be instantly and programmatically instantiated, gives rise to a phenomenon that has long plagued internal virtual infrastructure: VM, or in the case of AWS, resource sprawl. Much like the U.S. legal code, where laws and regulations once added never go away, even long after they're obsolete, there's a tendency on AWS for machine instances and storage buckets to proliferate and persist long after they're no longer needed. However, unlike internal systems, where zombie VMs merely waste system resources, on AWS the meter is always running, meaning these dormant instances keep racking up the bill.

The key to reducing AWS usage is assiduously monitoring and auditing usage, looking for orphaned, underutilized or over-specced resources. Although monitoring and inventorying virtual infrastructure is a niche targeted by a number of third-party products, for AWS users the easiest (and cheapest) place to start is Trusted Advisor, a software wizard that automatically checks one's AWS portfolio and provides advice in four areas: cost optimization, security, fault tolerance and performance.
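
That audit is also easy to script yourself. As a minimal boto3 sketch (assuming AWS credentials and a default region are already configured), the following flags two of the most common orphans: unattached EBS volumes and Elastic IPs that aren't associated with anything.

```python
# Minimal audit sketch: list unattached EBS volumes and unassociated Elastic IPs,
# two common sources of silent AWS spend. Assumes AWS credentials and a region
# are already configured (environment variables, ~/.aws/config, or an IAM role).
import boto3

ec2 = boto3.client("ec2")

# EBS volumes in the 'available' state are not attached to any instance.
orphan_volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in orphan_volumes:
    print(f"Unattached volume {vol['VolumeId']}: {vol['Size']} GiB")

# Elastic IPs with no association still incur a charge while idle.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr and "InstanceId" not in addr:
        print(f"Unassociated Elastic IP {addr['PublicIp']}")
```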

Among its cost optimization features, Trusted Advisor can find EC2 instances with low CPU utilization, idle load balancers and RDS databases, underutilized EBS volumes and orphaned IP addresses. It also checks historical EC2 usage to identify candidates with steady workloads appropriate for reserved instances, which is key to exploiting another of AWS’s pricing strategies: lower rates for pre-committed, contracted usage.
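
The low-utilization check is straightforward to approximate directly from CloudWatch metrics as well. The sketch below flags running instances whose average CPU over the last two weeks falls under a threshold; the 10% cutoff and 14-day window are arbitrary choices for illustration, not AWS recommendations.

```python
# Sketch: flag EC2 instances whose average CPUUtilization over the last 14 days
# is below an arbitrary threshold. Assumes credentials and region are configured.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)
CPU_THRESHOLD = 10.0  # percent; arbitrary cutoff for this example

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints:
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD:
                print(f"{instance_id}: avg CPU {avg_cpu:.1f}% -- candidate to downsize or stop")
```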

Pay Now and Save Later

Like any contractual subscription with tiered pricing, whether a cell phone data plan or a cloud sync-and-share storage service, reserved instances (RIs) require understanding one's usage and buying just enough to cover the need without either overspending or hitting a usage cap. On AWS, this means a hybrid deployment strategy can often save money, with RIs used for baseline, steady-state workloads and on-demand or spot instances covering usage spikes. Spot instances, which often go for 10-15% of the cost of on-demand equivalents, are particularly appropriate for batch jobs or time-insensitive workloads. Note that RIs aren't limited to EC2 but are also available for databases (RDS, DynamoDB, ElastiCache) and CDN (CloudFront).
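
As a rough illustration of putting a batch job on spot capacity, the boto3 sketch below submits a one-time spot request; the AMI ID, key pair, instance type and maximum bid are placeholders to replace with values appropriate to your account and region.

```python
# Sketch: request a one-time spot instance for a batch or time-insensitive job.
# The AMI ID, key pair, instance type and max price below are placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.request_spot_instances(
    SpotPrice="0.03",             # maximum hourly bid in USD (placeholder)
    InstanceCount=1,
    Type="one-time",              # release capacity when the job finishes
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m4.large",
        "KeyName": "batch-worker",            # placeholder key pair
    },
)

for req in response["SpotInstanceRequests"]:
    print("Spot request submitted:", req["SpotInstanceRequestId"])
```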

RIs can be purchased three ways: with monthly installments (no upfront), partial upfront or all upfront payment. Note, however, that the price difference between partial and all upfront isn't that great, so a hybrid upfront/monthly payment schedule typically provides the best balance between capital commitment and price discount. For example, recent pricing for m4.large EC2 instances showed just a 1% difference between the partial and all upfront discounts for a one-year term.
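
The comparison is easy to run yourself: reduce each payment option to an effective hourly rate over the term. The figures below are made up for illustration, not actual AWS prices.

```python
# Back-of-the-envelope comparison of RI payment options, using made-up prices.
# Substitute real numbers from the AWS pricing pages for your instance type.
HOURS_PER_YEAR = 24 * 365

options = {
    # option: (upfront payment, hourly rate) -- hypothetical one-year figures
    "no upfront":      (0.0,   0.070),
    "partial upfront": (300.0, 0.035),
    "all upfront":     (600.0, 0.0),
}

for name, (upfront, hourly) in options.items():
    effective = upfront / HOURS_PER_YEAR + hourly
    print(f"{name:16s} effective rate: ${effective:.4f}/hour")
```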

AWS Reserved Instance Pricing

Although RIs do lock buyers into a contractual one- or three-year commitment, you still have some flexibility. RIs can be redeployed to other apps, moved to another zone within the same region and (for Linux/Unix instances) even resized, all without charge. Buyers who find they no longer need their RIs can also resell them in the AWS Marketplace.

Cost-effective Cloud Architecture

Designing cloud-savvy infrastructure is the most effective way to minimize AWS costs over the long haul. Given the variety of AWS services, there is generally more than one way to achieve the same thing, and the most obvious solution, deploying another EC2 instance or S3 bucket, isn't always the best given the rich variety of higher-level AWS services. Cost optimization techniques include:

  • Using S3 and CloudFront for content caching to offload the handling of static content in applications like WordPress from EC2.
  • Using load balancing (ELB) and auto-scaling to reduce the average number of EC2 instances by only bursting capacity when needed.
  • Using AWS managed services instead of self-managed equivalents on generic EC2 instances. Examples include replacing RabbitMQ with SQS (message queuing; see the sketch after this list), Exchange or Sendmail with Simple Email Service, a NoSQL cluster with DynamoDB, Memcached with ElastiCache and a media encoder like HandBrake with the Elastic Transcoder service.
  • Properly sizing EC2 instances by selecting the right mix of CPU power and memory from the dozens of EC2 instance types.
  • Using Reduced Redundancy Storage (RRS) and Glacier for derived copies (non-originals) of data, log files, archives and anything that doesn’t require 100% uptime. Note however that Glacier should only be used for truly archival data since recovery takes hours, not minutes.
  • Scaling out databases with read-only replicas on ElastiCache instead of new RDS instances.
  • Using the AWS free tier for small, short-term dev/test projects and only moving to a paid tier if and when the application goes to production and demonstrates significant customer demand.
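
As a small illustration of the managed-service swap mentioned above, here is roughly what the producer and consumer sides of a job queue look like on SQS with boto3; the queue name and message payload are hypothetical.

```python
# Sketch: the producer/consumer core of a job queue on SQS instead of a
# self-managed broker such as RabbitMQ. Queue name and payload are illustrative.
import json
import boto3

sqs = boto3.resource("sqs")
queue = sqs.create_queue(QueueName="thumbnail-jobs")  # idempotent for an existing queue

# Producer: enqueue a work item.
queue.send_message(MessageBody=json.dumps({"image": "s3://my-bucket/cat.jpg"}))

# Consumer: long-poll for work, process it, then delete the message.
for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=1):
    job = json.loads(message.body)
    print("processing", job["image"])
    message.delete()  # acknowledge so the message isn't redelivered
```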

Conclusion

No one wants to waste money on AWS, so cost management and wise usage are imperative. However, it's important to remember that for most organizations running in the cloud isn't really a cost minimization exercise; it's about increasing IT and developer agility and innovation. Still, by designing infrastructure for AWS, using a mix of reserved, on-demand and spot instances of the right size, and assiduously pruning orphaned or underutilized resources, organizations can invariably cut their AWS bill.

Privacy and Security Tensions Illustrate a Clash of Cultures Between DC and SV: Authoritarian vs Libertarian

By | January 19, 2016

This article originally appeared in Diginomica as Authoritarian versus libertarian – intractable privacy and security concerns between the Valley and Washington?


Ever since FBI Director James Comey decried the use of unbreakable encryption as a national security threat, one that enables terrorists and other suspected malefactors to "go dark", the Internet has been ablaze with criticism over the prospect of government backdoors to secure communications and data. Given recent events, whether the attacks in Paris and San Bernardino or a recent high-level meeting between White House, Department of Justice (DoJ) and technology executives in Silicon Valley, it's clear the issue won't be going away anytime soon. Indeed, the rhetoric and tension between Washington and Silicon Valley escalated in December when Comey testified that tech companies must rethink their "business models" as they relate to customer data and privacy, and Tim Cook reiterated his stance that pitting privacy against national security is a false dilemma.

At its core, the division stems from seemingly irreconcilable differences between the centralized, command-and-control, authoritarian worldview that defines the national security state and the risk-taking, libertarian, ecumenical ethos of the Internet and the technology companies that enable and profit from it. That the most visible and unyielding spokesman for Valley values is the reserved, neatly coifed CEO of the world's most profitable company, and not a strident, shaggy computer scientist spouting a GNU Manifesto, shows both the breadth and significance of the dispute and how deeply the forces of globalism and transnationalism have permeated the high-tech community.

Apple CEO Tim Cook

Richard Stallman: Founder of the GNU Project and Free Software Foundation

Even should the three-letter agencies get everything on their wish list, it would change little for smartphone users in other countries or those savvy enough to sideload an 'unauthorized' app from outside the curated walls of the App Store and Play Store. Comey himself admits as much, testifying that "I think there's no way we solve this entire problem. … The sophisticated user could still find a way." Instead, the federal security apparatus is banking on user inertia by getting Apple, Google, Microsoft and others to change defaults and settings to be snoop-friendly: in effect, to change the "business model", or more accurately the corporate values, that now include guarding customers' privacy.

Comey first made the case in a statement to the Senate Judiciary Committee last July, and although he never uses the inflammatory phrase "back door", that's the logical result of his line of argument. Indeed, he directly targets the notion that users have complete and sole control over their encryption keys (emphasis added):

“In recent months, however, we have on a new scale seen mainstream products and services designed in a way that gives users sole control over access to their data. As a result, law enforcement is sometimes unable to recover the content of electronic communications from the technology provider even in response to a court order or duly-authorized warrant issued by a federal judge. For example, many communications services now encrypt certain communications by default, with the key necessary to decrypt the communications solely in the hands of the end user. This applies both when the data is “in motion” over electronic networks, or “at rest” on an electronic device. If the communications provider is served with a warrant seeking those communications, the provider cannot provide the data because it has designed the technology such that it cannot be accessed by any third party.

Not Just Encryption, But Expression

FBI Director James Comey

Although the encryption debate has dominated online discussion, the government's other appeal to the Brahmins of tech concerns freedom of expression, namely the use of social networks as megaphones for "extremism". Its goals are manifest in the White House task force on Countering Violent Extremism (CVE), an effort to "prevent violent extremists and their supporters from radicalizing, recruiting, or inspiring individuals or groups in the United States and abroad to commit acts of violence." Last week, those efforts spread to Silicon Valley as the group convened a summit with tech execs, including Cook, to discuss cooperative ways "to make it even harder for terrorists or criminals to find refuge in cyberspace." Although the meeting was closed, the public agenda focused on uncontroversial topics, like making it easier for countervailing voices, particularly in ISIS-controlled areas, to develop and distribute content, as well as stickier issues, like making it easier for law enforcement to identify and stop terrorists using online media. However, a leaked preparatory briefing paper distributed to participants made clear that hotter topics like encryption and data-driven profiling were also on the docket.

The encryption talking points echoed those Comey has been making for months; however, the detection and prevention topics were new. Some of the ideas in the briefing notes will certainly disturb civil libertarians and are sure to give many tech companies pause. For example:

“Are there technologies that could make it harder for terrorists to use the internet to mobilize, facilitate, and operationalize? Or easier for us to find them when they do? What are the potential downsides or unintended consequences we should be aware of when considering these kinds of technology-based approaches to counter terrorism?”

“Some have suggested that a measurement of level of radicalization could provide insights to measure levels of radicalization to violence. While it is unclear whether radicalization is measurable or could be measured, such a measurement would be extremely useful to help shape and target counter-messaging and efforts focused on countering violent extremism.”

“There is a shortage of compelling credible alternative content; and this content is often not as effectively produced or distributed as pro-ISIL content and lacks the sensational quality that can capture the media’s attention. … We invite the private sector to consider ways to increase the availability alternative content. Beyond the tech sector, we have heard from other private sector actors, including advertising executives, who are interested in helping develop and amplify compelling counter-ISIL content.”

Cook’s response was swift and firm. According to The Intercept, citing informants briefed on details of the task force meeting, “Cook lashed out at the high-level delegation of Obama administration officials who came calling on tech leaders in San Jose last week.” Cook reportedly reiterated his call for the White House to cease efforts to get tech companies like Apple to install encryption back doors.

NSA's Bluffdale, Utah Data Center

My Take

The idea of encryption back doors is like a con job that never gets old; it just comes back around every couple of decades to snare a new generation of suckers. Today's controversy is a rerun of the debates in the '90s over the NSA's proposed Clipper chip, a government-sponsored crypto accelerator with an NSA master key. It was roundly rejected then, when people rightly pointed out that any sanctioned back door is also a juicy vulnerability that will inevitably be reverse-engineered and exploited by hackers, cyber criminals and nation states. The idea is even less viable and enforceable today, when every smartphone has plenty of horsepower to perform local encryption and algorithms are both more sophisticated and freely available.

The notion of countering extremism by encouraging opposing voices from within the ranks of the disaffected and radicalizable is both banal and uncontroversial; however, data mining online content, trolling for proto-extremists, sounds a lot like the NSA's reviled and borderline unconstitutional PRISM metadata collection program. Although the profiling technology used by Google and Facebook to target and sell advertising might be useful in identifying inciteful, extremist content, it's one thing when a company uses it for commercial activity and quite another when the government uses it for criminal profiling. Further, we doubt the willingness of companies to share proprietary technology of significant commercial value with the government, or to assist in deploying it on cloud servers (like those at the NSA's Bluffdale, Utah Data Center) that are presumably vacuuming communication traffic from vast swaths of the Internet.

Painless, Bulletproof Wi-Fi Ideal For SMBs: I Find Cisco, Ruckus Deliver The Goods

By | January 18, 2016

It has long been clear that wired Ethernet's days as an endpoint connection are numbered, and while most homes long ago ditched switches and CAT5 cables for an all-in-one Wi-Fi router, building a business-class WLAN still takes planning and skill. WLAN configuration and management remains something of a black art, one that often leaves the IT generalists who typically operate SMB infrastructure overwhelmed.

As I detail in this column, that's changing, as exemplified by new products from Cisco and Ruckus that deliver on the promise of 10-minute provisioning and feature beautifully designed Web management interfaces exposing a plethora of network data and configuration settings. I share my experiences with both products and explain how SMBs, or organizations with remote locations lacking IT personnel, have plenty of great choices for building a bulletproof WLAN without the setup pain and management frustration typical of traditional enterprise network gear.

The products have a similarly simple setup process requiring just a few steps:

  1. Power up the AP (using power over Ethernet for a single cable to a switch is preferred)
  2. Connect to a default and conspicuously-named SSID (“CiscoAirProvision” for the Aironet and “ConfigureMe-xxxxx” for the Ruckus)
  3. Connect to the management portal on a default IP address (typically the gateway address on the temporary wireless network you’ve just connected to)
  4. Run a setup wizard (which will require few if any changes, other than names and addresses, from the default configuration)
  5. Save the configuration and reboot the AP

The real magic of both systems comes when expanding the network by adding APs, since these are automatically discovered by the master and seeded with its configuration. APs aggregate usage stats back to the master, which populates consolidated reports on a management dashboard and automatically syncs configuration changes. Should the master ever drop offline, another AP assumes the role. Once set up, as the full column explains, the management interfaces are both easy to use and beautifully designed.

Cisco and Ruckus aren’t the only vendors seizing the opportunity to make enterprise WLAN setup and administration easier. Aruba (now part of HP) has a line of Instant APs and Aerohive has its controllerless, Cooperative Control design (although Aerohive uses a cloud service, not a local master AP, for system management). This new generation of plug-and-play products means SMBs and organizations with remote locations that lack IT staff have no excuse for not deploying bulletproof, enterprise-class wireless.

Intel CES Keynote Is Long On Disjointed Ideas, But Missing A Big Trend: Autos

By | January 11, 2016

Intel CEO Brian Krzanich had the leadoff spot in the lineup of big-name CES 2016 keynotes and used the opportunity to highlight Intel's work with just about everything except PCs. However, absent from Krzanich's presentation was any mention of car tech, which is notable since the transformation of cars into self-driving, mobile entertainment centers is a major theme this year. No fewer than nine automakers and dozens of suppliers will occupy 200,000 square feet of show space, up 25% over last year, to show off their products. In this column I posit that Intel could be repeating the mistakes of omission that cost it in mobile.

Ignoring car tech in a high-profile keynote, even as Intel's competitors were making major announcements, was a mistake and casts doubt on the company's strategy in this important, emerging market. One wonders whether Intel is repeating the mistakes it made in mobile and letting another big market slip into the hands of competitors like NVIDIA and Qualcomm. NVIDIA's next-generation car computer, DRIVE PX 2, was particularly impressive: it combines four GPUs capable of trillions of the neural network calculations used in the deep learning algorithms crucial to autonomous driving with the ability to process data from dozens of sensors, including cameras, lidar (laser-based distance sensors), radar and ultrasonic, in a package the size of a lunchbox.

NVIDIA DRIVE PX 2 module

I detail other notable car tech from CES in the column and question Intel's commitment to a market ripe for technological disruption. Given the vast market potential, with a record 17.5 million vehicles sold last year and the average age of cars on the road at generational highs, one would expect Intel to be fighting to reproduce its PC platform dominance in such a large, untapped market. Although I'm disappointed that Krzanich didn't use his CES megaphone to espouse Intel's car tech vision, it's possible the company has other venues in mind for a major announcement, given that it does have an automotive group and product line. It will be interesting to see if there's an Intel Inside logo on future autonomous vehicles or whether, as with the smartphone, the company misses another emerging market.

Consumer Technology Market In Decline: CES Searching For The Next Big Hit

By | January 11, 2016

This article originally appeared on Diginomica as CES Market update: Still searching for the Next Big Thing


Like the swallows returning to San Juan Capistrano, New Year's revelers have barely cleaned out their rooms before hordes of tech execs, entrepreneurs, marketeers, journalists, PR flacks, technophiles and gawkers descend upon Las Vegas for the annual gala of gadgetry that is CES. Most years, it's a testament to the wisdom of Macbeth, full of overwrought product announcements for things of little ultimate significance; however, there are always a few major themes worth watching that measure the pulse of what the event's organizing body, the Consumer Technology Association (CTA), now terms the Global Technology Market. Perhaps the most accurate barometer of the CES ecosystem is the annual technology spending forecast and analysis provided by the CTA's Senior Director of Market Research, Steve Koenig. His presentation on Monday was full of mostly discouraging data, interspersed with bright spots that seemed designed to temper the stark numbers with rays of hope.

The backdrop to CES is a world where technology spending and unit sales are in steady decline. According to the CTA's latest forecast, global technology spending will drop 2% this year while unit sales are essentially flat. Looking at the longer trend, since 2011 both units and revenues have been declining just over 1% annually. Extrapolate this out a couple of years and total tech revenue will be about $90 billion less than in 2011. The culprit is declining average selling prices (ASPs) across the board: down 7% for smartphones, 4% for laptops and 2% for TVs in the last year.

It’s a Mobile (and More Diverse) World

The overriding theme of CTA's data is how mobile has taken over the industry; however, the spread of technology from developed countries to emerging markets has notably crimped technology spending. The situation is similar to the effect globalization and the ready availability of low-skill workers have had on wages: as mobile technology spreads to low-wage emerging-market countries, the pressure to reduce prices is extreme. The CTA estimates that emerging markets will account for 71% of smartphone units this year, up from 40% in 2010. Indeed, across all tech spending, the revenue split between developed and emerging markets has reached near parity: 51% versus 49%.

The expanding smartphone market has been a boon to sales volumes, which have increased over 5-fold since 2010; however, ASPs have been slashed by over a third. Indeed, the long-term rate of price decline is over 7%. But at least smartphones keep selling in greater numbers. Pity the poor tablet market, where both units and prices are in free-fall. From 2014's peak, the number of tablets sold this year will be down 21%. From 2010, when the iPad created the tablet market with prices well over $500, ASPs are expected to be just $228 this year, a 17% annual decline over five years.

Clearly the smartphone and tablet market trends are linked. As the former spreads throughout the world, it's often one's only computing and communication device, which has driven the market to larger screen sizes. Phablets, in turn, have obviated the need for tablets in many situations, with a resulting race to the bottom as everyone but Apple turns tablets into cheap secondary devices for kids, schools and occasional media consumption.

Rays of Hope

In a separate presentation, CTA's Chief Economist Shawn Dubravac offered his trends to watch for 2016. The backdrop is five megatrends shaping the tech industry:

  • ubiquitous computing: epitomized by smartphones
  • cheap digital storage: flash following Moore’s Law to exponential improvements in capacity/price
  • connectivity: wireless everywhere with LTE penetration exceeding 50% in most of the world
  • proliferation of digital devices: wearables, smart appliances, IoT
  • ‘sensor’ization of tech: the proliferation of low-cost sensors beyond smartphones to everyday appliances and special-purpose gadgets

Three themes emerge in 2016 according to Dubravac:

  • ambient sensing: things like baby monitors, diet/food sensors, environmental
  • aggregated learning or predictive customization: think Netflix recommendations or the Nest thermostat learning your behavior and preferences
  • maturing of six nascent ecosystems:
    • VR: From Oculus Rift on the high-end to Google Cardboard at the bottom, CTA estimates it as a $540M market growing over 4x this year.
    • 4K TV: With $10.7B in revenue, 4K now accounts for 21% of the TV market
    • Wearables: Split between smart watches with $3.7B in sales and fitness trackers at $1.3B; if Fitbit's Blaze is any indication, expect to see a broader spectrum of health and fitness data.
    • Drones: Now almost a $1B market, with 2.9M units sold and growing over 100% this year
    • 3D printers: Still relatively tiny at $152M in revenue, like drones these seem destined to be a niche consumer product, but hugely important to manufacturers
    • Smart homes: Another billion-dollar business, but it’s unclear whether it develops into a standalone market or the technology gets absorbed into existing devices (appliances, HVAC equipment, lighting).

A final trend that Dubravac lumps under the aggregated learning category, but which really deserves special mention, is autonomous vehicle technology. With NVIDIA announcing a virtual supercomputer to analyze sensor data and power machine learning algorithms, Ford's CEO saying the industry will put Level 4, fully autonomous cars on the market by 2020, and Volvo partnering with Microsoft to allow voice control of the car's entertainment and control systems using a Band 2 fitness tracker, car technology is becoming a major catalyst of tech innovation and spending. Although initial applications target consumers, the implications of autonomous vehicles for the transportation and logistics business are profound, as evidenced by GM's $500M investment in Lyft.

Although the macro environment for technology spending is ugly and CES itself is again littered with tchotchkes that are soon forgotten, it does demonstrate the ongoing osmosis of technology throughout every industry and object. Some of these trends will remain niches, however things like autonomous vehicles, smart sensors/IoT (particularly applied to industrial settings) and machine learning/predictive analytics will reshape industries and disrupt unprepared companies throughout the rest of this decade.