Monthly Archives: December 2018

In 2019, DevOps is about execution, not cultural assimilation

December 31, 2018

2019 marks a decade since the term DevOps first entered the lexicon, and ever since, developers, operations teams and IT executives have struggled both to define it precisely and, more importantly, to apply it to their organizations. Like most new ideas, DevOps went through a hype cycle in which evangelists exuberantly touted the concept as the solution to a host of IT woes. After failing to live up to exaggerated promises, DevOps slipped into seeming irrelevance as cynics trashed it for failing to be IT’s promised silver bullet. We have now entered a DevOps renaissance in which a deeper understanding of its concepts and a mature set of IT practices and tools have combined to turn DevOps principles into a reality of improved productivity, innovation and agility.

In many organizations, the cultural work to get developers and operations teams to accept and internalize new ways of working and collaborating is complete. Indeed, an interesting metric of DevOps maturity comes from DevOps Research and Assessment (DORA) and its State of DevOps 2018 report in which it categorizes organizations based on their software delivery performance as defined by:

  • Deployment frequency: from multiple deploys per day to once per month
  • Lead time for changes: from less than an hour to between one and six months
  • Time to restore service: from less than one hour to between a week and a month
  • Change failure rate: from near 0 to 46-60 percent

Using these metrics, DORA categorizes organizations on a scale from DevOps laggards to elites (think online service providers and cloud-native startups). Compared to the DevOps laggards, the elites:

  • Deploy code 46 times more frequently
  • Go from code commit to deployment 2,555 times faster
  • Have one-seventh the change failure rate
  • Recover from incidents 2,604 times faster
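
To make these performance bands concrete, here is a minimal sketch of how an organization's metrics might be sorted into tiers. The thresholds are simplified placeholders of my own, not DORA's official cut-offs.

```python
# Illustrative only: a rough classifier inspired by the DORA-style metrics above.
# The thresholds are simplified placeholders, not DORA's official cut-offs.

def classify_delivery_performance(deploys_per_month, lead_time_hours,
                                  restore_hours, change_failure_rate):
    """Return a rough performance band from four DORA-style metrics."""
    if (deploys_per_month >= 30            # multiple deploys per day
            and lead_time_hours <= 1       # commit-to-deploy in under an hour
            and restore_hours <= 1         # restore service in under an hour
            and change_failure_rate <= 0.15):
        return "elite"
    if deploys_per_month >= 4 and lead_time_hours <= 24 * 7:
        return "high"
    if deploys_per_month >= 1:
        return "medium"
    return "low"

print(classify_delivery_performance(60, 0.5, 0.5, 0.05))         # elite
print(classify_delivery_performance(0.5, 24 * 90, 24 * 14, 0.5))  # low
```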

A bigger question is what form that automation will take, which leads me to the trends to watch in 2019 (call them predictions if you like, but since I can’t make quantitative forecasts, I prefer the term “trends”).

  • Container usage continues to explode and displace VMs for more enterprise workloads as the critical pieces of container infrastructure, notably the image format, runtime engine and cluster orchestrator are now standardized.
  • Cloud container services (CaaS) like AWS EKS, Azure AKS and Google Cloud GKE become a preferred destination for new container infrastructure as more organizations see the value in outsourcing infrastructure management to a service provider.
  • Teams building container-native microservice applications discover the power and efficiency of the service mesh as usage of Istio, Linkerd and Envoy takes off. Cloud service mesh products like the newly introduced AWS App Mesh, Azure Service Fabric Mesh and Google Cloud Managed Istio will be particularly popular given their convenience and ability to tie into other cloud services.
  • Serverless functions like Lambda and Azure Functions spread beyond the cloud-native cognoscenti as enterprise developers embrace them as an integration layer for composite applications (a minimal sketch follows this list).
  • Organizations using automated CI/CD will turn to multi-cloud PaaS like Cloud Foundry and OpenShift. A significant advantage of PaaS stacks is their encapsulation of best practices and sophisticated automation tools into a system that is easy for DevOps organizations to implement and use.
  • DevOps teams will use open source for an increasing share of their application and automation script code base.
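
On the serverless point above, here is a minimal sketch of a Lambda function acting as an integration layer between two systems. The event shape, table name and field names are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal sketch of an AWS Lambda handler acting as an integration layer:
# it receives an order event (e.g. via API Gateway), normalizes it and
# persists it for a downstream system. Table and field names are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")  # hypothetical table name


def handler(event, context):
    body = json.loads(event.get("body", "{}"))
    item = {
        "order_id": body["id"],
        "status": body.get("status", "received"),
    }
    orders.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["order_id"]})}
```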

For details about each trend and some other supporting data, see my TechTarget article titled “IT organizations anticipate DevOps evolution in 2019 — DevOps shifts around like ice cubes skate across water: quickly and always toward its outer limits.” The coming year should provide many examples of organizations significantly improving developer efficiency and reducing application cycle times via DevOps automation and by shifting to higher levels of abstraction using PaaS.

What to expect from AWS in 2019

December 31, 2018

Let’s get the disclaimer out of the way up front. Making predictions about the behavior of a company that innovates as rapidly as AWS is simultaneously safe and risky: safe because AWS introduces so many new products and services that there’s a good chance some of the predictions will come to pass, risky because you’re anticipating things from a position of relative ignorance, unaware of all the internal projects and research that might manifest themselves in the coming months. Thus, consider these reasonably informed guesses (as informed as possible from the outside), not calculated projections.

Despite having vigorous competition from some of the biggest names in tech, AWS continued its dominance of the cloud services market in 2018 and that won’t change in the new year. With more market share than the next four largest cloud providers combined, AWS still acts like a hungry startup, introducing dozens of new products and enhancements at re:Invent 2018. None were more significant than its moves in two areas: enterprise hybrid cloud and custom-built hardware tailored to its needs. Look for AWS to redouble efforts in both areas in 2019. A few particular items are worth watching for as we embark on a new year:

  • Look for AWS to expand the use of Graviton (ARM-based) and Inferentia (machine learning model execution, i.e. inference) processors beyond their initial use in EC2 instances and SageMaker (Inferentia). I expect to see Graviton variants with more cores and memory deployed in native services to reduce costs and improve performance. Candidates would include Lambda, DynamoDB, CloudFront, its developer services (CodeDeploy/Commit/Build/Pipeline) and business applications (WorkDocs/Mail/Chime). Given AWS’s secrecy, we may never hear about such deployments or the extent of its use of alternative processors in services where the processor sits behind a service layer and is not directly exposed to the customer.
  • Outposts, its service for on-premises implementations of AWS services, is said to use “fully managed and configurable compute and storage racks built with AWS-designed hardware,” which includes its Nitro security and network hardware. While this sounds like an AWS-labeled hardware product, don’t be surprised if AWS reaches an agreement with Dell to provide Dell-branded hardware for Outposts options using VMware as the cloud software stack (recall that Outposts comes in two options: one for VMware, one providing native AWS services). If so, it could strain Dell’s longtime relationship with Microsoft, which has its own hybrid cloud offering in Azure Stack, including a Dell-built Azure Stack rack.
  • A perennial criticism of AWS (and other cloud providers) is that its pricing model, particularly for infrastructure services, is far too complicated. Indeed, an extensive 2018 guide to AWS cost management software illustrated the need for software help in navigating and optimizing the many options. Given such feedback, and now that AWS has an on-premises enterprise option in Outposts, look for it to introduce a simplified purchasing model. One option would be bulk purchases of service credits that are automatically applied to whatever the customer chooses to use, with unused credits rolling over month to month like carry-over minutes on wireless plans (a toy illustration follows this list). While AWS won’t eliminate the micro measurement of service usage, it could hide it behind a bulk line item and handle the messy usage statistics and billing adjustments internally.
  • Given the high-profile spat between Oracle and AWS over the latter’s use of Oracle databases internally and AWS’s accelerated timeline for migration to Aurora, expect AWS to push its database migration service more aggressively as a way both to win more enterprise business and to stick it to a vocal critic and competitor.
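
To make the hypothetical purchasing model above concrete, here is a toy calculation of bulk credits netted against metered usage, with unused credits rolling over month to month. Nothing here reflects an actual AWS billing mechanism.

```python
# Toy illustration of the hypothetical bulk-credit model described above:
# a fixed monthly credit purchase, metered usage netted against it, and any
# unused balance rolling over to the next month. Not an actual AWS feature.

def apply_credits(monthly_purchase, usage_by_month):
    balance = 0.0
    statements = []
    for month, usage in usage_by_month:
        balance += monthly_purchase          # credits purchased this month
        overage = max(0.0, usage - balance)  # usage not covered by credits
        balance = max(0.0, balance - usage)  # unused credits roll over
        statements.append((month, usage, overage, balance))
    return statements

for month, usage, overage, rollover in apply_credits(
        1000.0, [("Jan", 800.0), ("Feb", 1300.0), ("Mar", 950.0)]):
    print(f"{month}: usage={usage:.0f} overage={overage:.0f} rollover={rollover:.0f}")
```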

AWS will undoubtedly surprise us many times throughout the year, but its pursuit of enterprise customers, which is behind the recent hybrid infrastructure offerings, and flexing of its technological muscle through custom hardware and native AWS services like Aurora and SageMaker, will be two major themes to watch.

2018 in review: Building the technological foundation for mainstream AI applications

December 30, 2018

2018 was also a milestone year in AI as the technology community writ large developed a more complete, nuanced understanding of its benefits and limitations while innovations laid the foundation for future applications. From part 2 of my Diginomica year in review.

Part one of my year in review centered on 2018 developments in enterprise cloud, notably how mainstream adoption of cloud infrastructure has shifted the emphasis from kicking the tires to the operational complexities of integrating cloud services with existing IT systems and networks. In part two, the focus is on how AI-related technologies are migrating from research labs and niche scenarios to broader applications in healthcare, the enterprise and even IT itself.

Disentangling hype from reality shows that most ‘AI’ looks more like statistics than intelligence

The renaissance of AI, rescued from the ash heap of symbolic reasoning and expert systems by the rise of deep learning algorithms, has fueled no end of hyperbolic predictions and dystopian narratives. Much of the exaggeration stems from the moniker itself: despite the many impressive achievements of today’s incarnation of AI, it bears a closer resemblance to advanced statistics than to cognitive intelligence. As I discussed in this article, there’s a growing backlash and active debate among academics as to whether machine and deep learning are ‘intelligent’ at all or merely clever ways of analyzing the massive troves of data now available.

Medicine proves to be a rich target for AI-enhancement

The biggest advances in millennial-generation AI have come via deep learning: recursive algorithms modeled after human neural networks that are particularly adept at pattern matching and image analysis. Although early deep learning demonstrations typically involved tagging simple objects from the type of quotidian photos that get shared on social media, more significant uses come from the fields of physical surveillance, aerial and satellite mapping and medical imaging.

Read more at the link below.

Content retrieved from: https://diginomica.com/2018/12/20/2018-when-vendors-built-the-foundations-for-ai-applications-while-enterprises-looked-for-useful-applications/.

2018 in review: Cloud expertise becomes an IT necessity

December 30, 2018

From my Diginomica column looking back on highlights from the world of enterprise cloud computing.

2018 in review, part 1 – a year in which cloud competency became an enterprise IT requirement as attention moved on to managing complexity.

Five observations on a significant year for cloud competency…

(1) Mainstreaming public cloud

The year began with fresh evidence of enterprises treating public cloud services as legitimate alternatives to private infrastructure, a view only reinforced by later events: record attendance at AWS re:Invent, VMware and AWS expanding a partnership that lets legacy systems run on AWS infrastructure, and Oracle making a last-ditch push to establish cloud bona fides and forestall a mass migration from its applications. Nevertheless, we’re still in the early stages of enterprise cloud adoption. As I wrote at the time, …

To read more, follow the link below.

Content retrieved from: https://diginomica.com/2018/12/19/2018-the-year-of-cloud-competency-as-an-enterprise-it-must-have/.