Monthly Archives: June 2015

Google Isn’t So Much Waging War on Apps as Making Them Irrelevant

By | June 24, 2015

The browser, the quintessential and supremely flexible Web application that can morph from video player to newspaper to photo gallery with the simple change of a URL, has been usurped in the mobile era by the app. These single-purpose walled gardens create an irresistibly convenient alternative that is optimized for the small screen and limited navigational space of a smartphone. Much like TV created popular new entertainment genres like sitcoms and miniseries, distinct from movies and designed to both mitigate the medium’s limitations and exploit its advantages, the mobile app has surpassed the browser to become the window to the Internet. But what the app giveth in the form of convenience, a slick interface and focused features, it also taketh away in home screen clutter, lack of searchability and silos of information and functionality.

Source: Nielsen

As I detail in this column, the typical smartphone user uses about 27 apps per month, yet spends most of their time in just a handful.

Apps tend to Balkanize the Internet, sacrificing browser universality for convenience, specialization and a slick UI. I have previously argued the benefits of mobile apps, primarily because of the superior user experience; however, the intervening years of app proliferation have revealed some hidden costs. In contrast to the ease, universality and accuracy of Web search, it’s impossible to find information outside an app’s moat of topics, nor is it always obvious which app holds what you need. Yes, searching within the app is likely to produce relevant results for a given topic, but changing topics means switching apps. Even selecting the best app for a particular set of needs is usually a trial-and-error proposition.

Source: Agawi

But what if the mobile app user experience could be delivered with browser convenience, state management, resource footprint and searchability? Perhaps a combination of OS-supported app streaming and predictive, Google Now-like notifications could obviate the need for a hodgepodge of locally-installed apps. Read on for details about how Google will likely be using a combination of app streaming (from a recent technology acquisition) and predictive notifications to reduce our reliance on dozens of ghettoized apps targeting various niche categories. As one developer puts it, “This is the beginning of the end for apps as destinations. Why open the app when you don’t need to? Let’s take this a step further.”

For Hospitality Business, Mobility is Mandatory

By | June 16, 2015

For hospitality executives and their partners, failing to build a first-class mobile experience could be a mistake of existential proportions. In this business, providing a home away from home has always been the goal. And that requires a solid wireless network foundation to support customer requirements and new mobile apps. In this report (registration required) I detail how hospitality businesses and their IT partners can meet the mobile challenge.

In the hospitality business, providing a home away from home has always been the quintessential goal. Make guests happy and comfortable and they will become loyal customers and unpaid spokespeople who spread your brand while extolling your service. In today’s tech-centric lifestyle, where a person’s day literally begins and ends with smartphone in hand, the hospitality industry’s vision of creating the “third place” between home and office, where guests feel so familiar and comfortable they want to return, requires more than down pillows and a flat screen. Today’s transitory home must be built for our mobile device lifestyle. High-quality wireless Internet is table stakes: hospitality customers increasingly expect a mobile app for reservations, check-in, property directions, payment and (soon) even room access. Sadly, both survey data and our personal experience say that many properties fail on the basics, offering mediocre (or worse) Wi-Fi service while charging for the privilege. But in the mobile era, this is about as big a turnoff as cockroaches scurrying through the bathroom.

Source: Forrester Research

Hospitality executives must develop a mobile mindset that guides a strategic redesign of business processes and guest amenities as seen through the lens of mobile-centric customers. However, it’s impossible to execute some of the most innovative ideas percolating through the hospitality braintrust (things like smartphone room access, mobile payments and other app-based services) without solid network infrastructure. This starts with reliable, secure, high-performance and free (yes, free) guest Wi-Fi access and ends with a mobile app that reduces the friction between a guest and your business processes for things like reservations, room upgrades, services and billing. It’s imperative that hospitality executives and their partners build a first-class mobile experience. Not doing so could be a mistake of existential proportions. Here’s why and how to meet the mobile challenge.


Register and download the report for details on building first-class Wi-Fi and engaging mobile apps.

Hybrid Cloud, Bimodal IT? Buzzwords Reflect New Ways of Building Services

By | June 16, 2015

IT analysts are like fashion designers, always searching for something new, even if that means recycling age-old concepts in new terminology. Yet the creative linguistics can often mask more important truths about effectively building and operating IT services. Like successful fashionistas, IT leaders need to stay on top of the latest trends, if only to be prepared when the CEO comes back from an industry conference full of questions. As I discuss in this column, one of the hottest buzzwords circulating through the industry is Bimodal IT, another Gartner creation like the Hype Cycle and Magic Quadrant. As Gartner defines it, Bimodal IT is an organizational model that segments services into two categories based on application requirements, maturity and criticality. “Mode 1 is traditional, emphasizing scalability, efficiency, safety and accuracy. Mode 2 is nonsequential, emphasizing agility and speed.” Seems logical enough and hardly controversial, but also not new. Yet it offers a lesson in how cloud services and engineering practices can improve enterprise IT.


Some things are better left alone.

Whether you call it legacy versus emergent systems, brownfield versus greenfield deployments or sustaining versus disruptive technologies, the dichotomy between old and new, or maintenance and development, has been around since the dawn of IT. Each category has always required a different set of investment, management and governance techniques. The difference now is the pace at which new products are developed and refined, and a concomitant decrease in the useful half-life of mature services.

As I point out, the bifurcation of IT into fast and slow lanes is tied to the DevOps and Agile Development philosophies responsible for most mobile apps and cloud services, concepts that are beginning to revolutionize IT. The strategy is to optimize legacy systems for reliability, stability and security while giving emergent IT projects creative space to rapidly innovate, iterate and yes, fail if necessary.


While stratifying IT development and operations in this manner makes sense, the bimodal model doesn’t fully address the process of transitioning successful new products into core enterprise processes and services. But as the column highlights, the mutability of virtualized cloud services is key to the solution, in the form of parallel virtual infrastructure and continuous delivery processes. The beauty of using cloud services is their ethereal nature, where arbitrarily complex infrastructure can be easily created, tweaked, scaled and disposed of.
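To make that concrete, here is a minimal sketch, using Python and the AWS boto3 SDK, of standing up a disposable parallel environment for a continuous-delivery test run and tearing it down afterward; the stack name and template file are hypothetical placeholders, not anything from the column.

```python
# Illustrative sketch: create a disposable, parallel copy of an
# environment from a CloudFormation template, run tests against it,
# then dispose of it. Stack name and template file are hypothetical.
import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

# Read the (hypothetical) template describing the full environment
with open('app-environment.template') as f:
    template_body = f.read()

# Stand up a complete, parallel copy of the environment
cfn.create_stack(StackName='app-release-candidate',
                 TemplateBody=template_body)
cfn.get_waiter('stack_create_complete').wait(
    StackName='app-release-candidate')

# ... run the continuous-delivery test suite against the new stack ...

# Tear the whole environment down once testing is done
cfn.delete_stack(StackName='app-release-candidate')
```

The point of the sketch is the lifecycle, not the particulars: the entire candidate environment is a parameter that can be created, validated and discarded without ever touching the Mode 1 systems running alongside it.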

I will have more to say on this topic in the coming months, but a point to remember is that the benefits of cloud services aren’t always, or even primarily, centered on cost and efficiency. When used creatively, they can enable rich, highly adaptive services and more efficient IT processes.

Intel Sees Future of FPGA-Accelerated Hardware, But It Requires New Software

By | June 9, 2015

With last week’s big Altera acquisition, Intel made an expensive bet on a future of data center hardware that uses significantly more customized designs than today’s monolithic racks of commodity x86 servers. As I wrote at the time, “The only justification for Intel’s move can be its perception of a secular technology shift from commodity processors to custom hardware purpose-built for specific applications,” since the financial numbers didn’t justify such an exorbitant price. The market apparently agreed: Intel’s stock price has lagged the S&P 500 by more than 6% in the intervening week. Markets focus on the short term, whereas this deal is decidedly part of a long-term strategy; however, incorporating the FPGA hardware is actually the easy part.

In justifying the acquisition, Intel CEO Brian Krzanich highlighted the potential for dramatic performance improvements by integrating Altera FPGAs with Xeon processors, and there are many proven cases, notably in scientific computing, where executing application code on customized hardware yields astounding improvements. The problem is that FPGAs and GPUs are more difficult to program, requiring specialized code using device-specific APIs and an understanding of the underlying peculiarities of the FPGA or GPU hardware. It will be impossible for Intel to realize Krzanich’s prediction that FPGAs will be part of one-third of all cloud servers by 2020 without eliminating the software development hurdles between conventional application code and its execution on non-traditional hardware. What Krzanich didn’t highlight is the significant technical progress being made on this front. For an overview of promising solutions, ranging from high-level languages and APIs to new software layers that perform real-time code translation, see the full column.


As the column concludes, with Altera soon to be part of Intel’s hardware arsenal, expect the company to focus on solving the problems of using FPGAs to accelerate a wide variety of applications. In fact, Intel and Altera recently co-sponsored the Heterogeneous Architecture Research Platform (HARP) research program through the ACM “to spur research in programming tools, operating systems, and innovative applications for accelerator-based computing systems.” As the column highlights, startups like Bitfusion could make attractive acquisitions for their hardware accelerator abstraction software, but Intel has also shown, through its work on Hadoop, Cloudera, OpenStack and the Linux kernel, that it understands how to foster organic software development in support of its hardware. The race to simplify application development in a world of CPU accelerators will be interesting to watch.

AWS Logging Tools Simplify Automated Security Monitoring

By | June 3, 2015

A version of this article appeared on SearchAWS as “AWS logging tools provide extra security.”

Cloud denialism is on the wane, but the most persistent excuses enterprises give for avoiding public cloud services remain loss of control, security and visibility. These issues have been amply addressed and debunked, both by the cloud services themselves and by independent analysts, and as we pointed out over a year ago, the “folded arms gang” of cloud resistors is shrinking as the services prove their value and integrity. But IT lives by the Cold War adage “trust, but verify,” and no organization should blindly deploy applications on a cloud service without a complete monitoring and auditing program. However, the cloud requires rethinking traditional procedures since, unlike in on-premises data centers, users don’t run the physical infrastructure. Fortunately, AWS has you covered.

Like every other administrative function on AWS, when it comes to security monitoring and auditing, there’s a service (actually, several) for that, complete with APIs, scriptable command line interfaces (CLIs) and management consoles, all of which make these services supremely automatable and extensible. As we’ll see, automating AWS security monitoring and auditing isn’t hard when you know the right tools.

The foundation of every security audit or forensic analysis is a log trail of activity. All major AWS services include logging features, but as this AWS white paper describes, for security purposes the most important items to log, collect and analyze include:

  • CloudTrail management activity: CloudTrail records all AWS API calls, making it very useful for monitoring access to the management console, CLI usage and programmatic access to other AWS services.
  • CloudFront access: CloudFront is the AWS CDN for Web content, and it can be configured to log detailed information about every user request. This can lead to information overload, but is useful for certain content.
  • RDS databases: RDS logs console, CLI and API activity, including things like query errors and performance.
  • S3 server access and bucket policies: S3 can record changes to bucket and object policies and details of every access request, including requester, bucket name, request time, action taken, response status and error code, if any. It can also log object expiration and scheduled removal.

CloudTrail provides the key input for security audits since it records all administrator activity, such as changing policies on an S3 bucket, starting and stopping EC2 instances and changing user groups or roles.
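As a quick illustration of that scriptability, here is a minimal Python sketch using the boto3 SDK to pull a week of CloudTrail events attributed to a single user; the username and time window are hypothetical placeholders.

```python
# Minimal sketch: query CloudTrail for recent API activity by one user.
# The username and time window are hypothetical placeholders.
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

response = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'Username',
                       'AttributeValue': 'alice'}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow())

for event in response['Events']:
    # Each event records who did what, and when
    print(event['EventTime'], event['EventName'], event['Username'])
```

The same lookup can be filtered by event name or resource type instead of username, which makes it easy to script recurring audit checks.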

From Events to Configurations

Logging provides a detailed record of all admin activity, but it’s nice to have a comprehensive summary and history of your AWS resources and configurations. Again, there’s a service for that: AWS Config provides a detailed inventory of EC2 instances, configurations and associated block (EBS) and network (VPC) resources. Config records changes and can send notifications via SNS (Simple Notification Service). Much like a version control system, Config can display the state of AWS infrastructure at any point in time.

Combining Config with CloudTrail and logs from other AWS services allows auditors to correlate configuration changes, such as access policies for an instance or storage bucket, with specific events, including details like the username, source IP and other actions that happened around the same time. The following example illustrates how Config and CloudTrail combine in the forensic analysis of AWS systems.

AWS Config: A configuration report shows the wrong security policies for a particular database.
CloudTrail: When did the DB policy change? Who made the change? What specifically happened (which APIs were used, via the Web console or CLI)?

AWS Config: How has the new security policy affected relationships with dependent resources?
CloudTrail: Were changes made to related services at about the same time? If so: who, what and where?

In sum, AWS Config does four things:

  • aggregates configuration and change management records
  • provides AWS resource inventory
  • records configuration history
  • triggers configuration change notifications

Source: AWS
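To give a flavor of the Config API, here is a minimal boto3 sketch that retrieves the recorded configuration history of a single EC2 instance; the instance ID is a hypothetical placeholder.

```python
# Minimal sketch: retrieve the recorded configuration history of one
# EC2 instance from AWS Config. The instance ID is a placeholder.
import boto3

config = boto3.client('config', region_name='us-east-1')

history = config.get_resource_config_history(
    resourceType='AWS::EC2::Instance',
    resourceId='i-0123456789')

for item in history['configurationItems']:
    # Each item is a point-in-time snapshot of the resource's state
    print(item['configurationItemCaptureTime'],
          item['configurationItemStatus'])
```

Matching the capture times from Config against CloudTrail event times is exactly the correlation step described in the table above.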

Using AWS logs: monitoring, alerts, reports

Collecting all the relevant data isn’t enough; you need a way to automatically monitor, measure, act on and visualize it. That’s where CloudWatch comes in: it’s the monitoring and reporting engine for AWS resources and log files. Like all AWS services, CloudWatch is programmable via an API/SDK and CLI and can be used both to trigger real-time alerts, such as when resource utilization crosses a set threshold, and to chart historical metrics, like CPU utilization. Indeed, since CloudTrail and other logs can feed CloudWatch, you can track CloudTrail events alongside those from the operating system, applications or other AWS services that are sent to CloudWatch Logs.

Source: AWS
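As a hedged sketch of how these pieces fit together, the following boto3 snippet turns a CloudWatch Logs metric filter on CloudTrail events into an alarm; the log group name and SNS topic ARN are hypothetical placeholders. The filter pattern matches console logins made without MFA, a common audit check.

```python
# Minimal sketch: flag console logins made without MFA by turning a
# CloudWatch Logs metric filter into an alarm. The log group name and
# SNS topic ARN are hypothetical placeholders.
import boto3

logs = boto3.client('logs', region_name='us-east-1')
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Count matching CloudTrail events in the log group fed by CloudTrail
logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='ConsoleLoginsWithoutMFA',
    filterPattern='{ $.eventName = "ConsoleLogin" && '
                  '$.additionalEventData.MFAUsed != "Yes" }',
    metricTransformations=[{'metricName': 'ConsoleLoginsWithoutMFA',
                            'metricNamespace': 'SecurityMetrics',
                            'metricValue': '1'}])

# Raise an alarm (delivered via SNS) whenever the metric is nonzero
cloudwatch.put_metric_alarm(
    AlarmName='console-login-without-mfa',
    Namespace='SecurityMetrics',
    MetricName='ConsoleLoginsWithoutMFA',
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:security-alerts'])
```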

Although AWS security and event monitoring tools are quite different from those used on premises, the system design strategy is the same: aggregate log data into a single repository; use software to monitor, flag anomalies, measure and chart metrics; and aid forensic, post hoc analysis. CloudTrail, CloudWatch and the logging capabilities of each AWS service form the data input, S3 is typically used for persistent storage, and CloudWatch and third-party software do the data analysis.


Source: AWS
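As a sketch of the storage side of that design, this snippet pulls CloudTrail’s gzipped JSON log files out of an S3 bucket for post hoc analysis; the bucket name and prefix are hypothetical placeholders (real CloudTrail object keys also include account ID, region and date).

```python
# Minimal sketch: read CloudTrail log files (gzipped JSON) that have
# been delivered to an S3 bucket. Bucket name and prefix are
# hypothetical; real keys include account ID, region and date.
import boto3
import gzip
import json

s3 = boto3.client('s3', region_name='us-east-1')

listing = s3.list_objects(Bucket='my-log-archive', Prefix='AWSLogs/')

for obj in listing.get('Contents', []):
    body = s3.get_object(Bucket='my-log-archive',
                         Key=obj['Key'])['Body'].read()
    # Each file holds a JSON document with a 'Records' array of events
    records = json.loads(gzip.decompress(body).decode('utf-8'))['Records']
    for record in records:
        identity = record.get('userIdentity', {})
        print(record['eventTime'], record['eventName'],
              identity.get('userName', '-'))
```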

AWS + Third-Party Software: Better together

Although CloudTrail provides a good set of basic features, it can’t match the sophistication of dedicated log and operational analysis software. Popular products from Alert Logic (Log Manager), Logentries, Loggly and Splunk are available through the AWS Marketplace and mirror the features of their on-premises counterparts. These are deployed via a plug-in service running on AWS. For example, the Splunk Add-on collects events, alerts, performance metrics, configuration snapshots, and billing information from CloudWatch, CloudTrail, and Config, along with generic log data stored in S3 buckets. The service then feeds Splunk Enterprise, which can be deployed as a self-managed service on AWS using a Splunk-supplied AMI or as SaaS from Splunk.

Source: http://harish11g.blogspot.com/

Source: http://harish11g.blogspot.com/

Although AWS and Marketplace third parties provide an ample toolchest for building an automated cloud monitoring and auditing system, putting one together still requires some effort and expertise. The AWS documentation, white papers and re:Invent presentations provide ample information on the details. Organizations that don’t have the skills or time for a DIY project should look for a managed service provider like 2nd Watch or Datapipe that can both design and operate complex AWS infrastructure.

By exploiting AWS’s inherent management policies, its secure infrastructure and the many logging and analysis services available, IT leaders will find themselves agreeing with the CTO of NASA’s JPL, who said he believes JPL can be more secure in the AWS cloud than in NASA’s own data centers.

 

Intel’s Big Bet: Will Altera Assure Its Dominance Or Waste Its Money?

By | June 3, 2015

Intel isn’t a company known for being a spendthrift, but it just spent more on Altera than the combined total of all its previous acquisitions. Intel isn’t known for placing big bets on anything other than its own R&D and fabs. Its acquisition strategy has been more like Apple’s: surgical buys and acqui-hires designed to fill defined technology gaps. Unlike HP, Intel isn’t known for spending big money on new markets. As I write in this column, that history is one reason Intel’s $16.7 billion buyout of Altera, more than twice the size of Intel’s next largest deal ($7.6B for McAfee, which also went against the grain in size and market), is so significant. The deal is very strategic, since Altera technology is a great fit with Intel’s rapidly growing data center business unit. Whether Altera is worth the heavy price is debatable, but in this deal Intel takes a page from Facebook’s playbook, specifically the WhatsApp and Oculus acquisitions, by making a big bet on the future that is based more on vision and faith than on products and financials.

Source: Intel

As I detail in the column, it’s very hard to justify the Altera deal based on the numbers, which translate to 30 times Altera’s operating income and 8 times sales. This deal isn’t about financial engineering, but data center engineering. As I wrote last fall, Intel Is Already Inside Your Data Center, But Wants a Bigger, Better Spot, and again after Intel’s last earnings report, the company has aggressively moved beyond being a mere supplier of commodity server CPUs and motherboards and is building a complete portfolio designed for software-defined cloud data centers. The only justification for Intel’s move can be its perception of a secular technology shift from commodity processors to custom hardware purpose-built for specific applications.

Source: Intel

The column walks through Intel’s rationale and the secular technology trends affecting its data center strategy, but the ultimate wisdom of the Altera acquisition will depend on how accurately Krzanich and his leadership team have assessed hardware trends for cloud data centers and connected devices.

Source: Altera

Indeed, this deal may be an expensive admission that the days of counting on Moore’s Law process scaling to regularly yield significant performance improvements from general-purpose CPUs are over. Rather, the path to faster applications entails customizing the hardware to application-specific algorithms, something that is difficult, costly and time-consuming with custom-designed silicon, but eminently feasible with FPGAs. I am looking forward to seeing how Intel incorporates Altera technology and adjusts its long-term strategy, and I will explain what it all means here on MarkoInsights as events transpire.