Monthly Archives: October 2016

Using Azure For Backup and Disaster Recovery: Built-In Features Make It Easy

October 6, 2016

Backup and redundancy continue to be two of the most common usage scenarios for public cloud — and the reason is simple economics. Renting or building space in a secondary facility for backup is expensive, particularly when there’s a thriving, competitive industry devoted to providing rentable IT infrastructure.

Due to its integration with Windows Server and its flexible licensing models for Windows workloads, Azure is a common cloud service for Microsoft-centric organizations. As I outline in this article, built-in services like Azure Site Recovery can simplify cloud backup, disaster recovery and business continuity processes for organizations already using Azure and familiar with its portfolio.


Microsoft rolled out Azure Site Recovery two years ago. The service automates the replication of data and virtualized applications to back up private Windows infrastructure and, most importantly, provides that same application orchestration to the Azure public cloud. As I detail here, Azure Site Recovery provides six primary features for backup as a service and disaster recovery as a service (DRaaS).
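
For readers who want to experiment, here's a minimal sketch of creating the Recovery Services vault that anchors an ASR deployment. It assumes the azure-mgmt-recoveryservices Python package; all the resource names are placeholders, and the exact model classes vary by SDK version, so treat them as assumptions and check the SDK documentation.

```python
# Minimal sketch: create the Recovery Services vault that anchors an Azure
# Site Recovery deployment. Assumes the azure-mgmt-recoveryservices package;
# all names and credentials below are placeholders.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Sku, SkuName, Vault, VaultProperties

credentials = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<password>', tenant='<tenant-id>')
client = RecoveryServicesClient(credentials, '<subscription-id>')

# Vaults live in a resource group; replication policies and protected
# items are configured against the vault afterward.
vault = client.vaults.create_or_update(
    'dr-resource-group',   # hypothetical resource group
    'contoso-asr-vault',   # hypothetical vault name
    Vault(location='eastus',
          sku=Sku(name=SkuName.standard),
          properties=VaultProperties()))
print(vault.id)
```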

See the article for specific cloud DR planning recommendations, including areas where ASR can provide redundancy and DR protection for non-Microsoft applications such as Oracle and VMware. Services like Azure Site Recovery simplify the cloud DR process and are great for larger organizations with cloud expertise. Smaller organizations, however, should also consider software-as-a-service DR products, such as HotLink, Infrascale and Zerto.


Using Business Process Management Methodology To Deliver IT-as-a-Service

October 6, 2016

In many organizations, the IT department was an evolutionary outgrowth of increased dependence on technology. This meant IT itself was often organized around technologies, not business processes or salable products. The resulting IT silos, marred by intra-departmental finger-pointing and information hoarding, create inefficient operations, poor IT service delivery and slow responsiveness to new service requests. Indeed, IT is increasingly at odds with companies that require speed in order to become successful digital businesses that blend the online and offline worlds. Consequently, the enterprise IT service delivery model is under pressure to show value and responsiveness without exploding the budget.

In this article, I outline how a business process management structure for IT service delivery can bridge the gap between IT and the rest of the business, allowing IT leaders to sit at the table for important decisions. Originally developed for manufacturing, logistics, finance and sales, BPM is also useful for IT. Since BPM focuses on outcomes, not tasks, it helps IT organizations align services, investments and customer-facing processes with business requirements and strategies. Today’s breadth and depth of IT operational data, log analysis tools and big data analytics provide IT leaders with fact-based insights into business use of services and resources, service reliability, security, costs and the utilization of existing IT assets. This information is vital to measure how IT meets business needs, while optimizing IT efficiency and spending.


While some may question the implicit assumption that running IT as a service, and using BPM to improve its service delivery model with structured, repeatable processes, is the best way to run IT, there's plenty of evidence that this is how successful businesses selling infrastructure, software and other managed IT services actually work. The article details the stages of organizational maturity IT must pass through to evolve from delivering IT as a hodgepodge of individual technologies to managing it as a value-producing service. It includes recommendations for successfully handling the BPM transition, including the need to secure buy-in from both executive leadership and the business, since IT services must be clearly aligned with business goals, needs and strategies.


3D Graphics as a Service: A Look at AWS Lumberyard

October 2, 2016

As AWS continues to evolve, it is moving from an infrastructure as a service provider to a platform for application development and runtime infrastructure. While many AWS back-end services are tailored to the enterprise, others, such as Mobile Hub and Device Farm, have commercial uses; Amazon Lumberyard is the newest among them. On the surface, Lumberyard is a 3D gaming platform, but when you dig into its different levels, the service could hold an Easter egg or two for enterprise IT. This article takes a deeper dive into its features.

Amazon Lumberyard is a development platform and back-end engine for standalone and connected multiplayer 3D games, with features that simplify scene and character creation, 3D modeling, image rendering, object motion, light physics, and audio and gameplay scripting. Similar to other game engines, such as Unity, Unreal, Double Helix and CryEngine, Lumberyard reduces the overhead and complexity of writing low-level rendering and physics code in DirectX, OpenGL and XNA. The service can also simplify the creation of interactive, multiplayer games using features that integrate them with Twitch, the live game-streaming platform and social network that Amazon acquired in 2014.

As I detail, Lumberyard includes an integrated development environment (IDE) with editors for source code, characters, animations, particle effects and the game interface. The IDE simplifies the development of indoor and outdoor game environments using a Photoshop-like canvas. Developers can use Lumberyard features, such as the graphical editor, to design the entire game environment, including creating levels, objects, terrain, lighting, animations and layers, with 3D navigation controls familiar to gamers. From these GUIs, Lumberyard generates C++ code that can be modified in its native editor or an external IDE.

Although Lumberyard is still in beta, expect to see enhancements, along with case studies showing how it can be used for enterprise applications, at AWS re:Invent, which starts November 28.

Functions As A Service: Comparing Offerings From AWS and Azure 

October 2, 2016

Despite their advantages, cloud services such as Amazon Web Services and Microsoft Azure still come with overhead. Whether it's virtual servers, object storage buckets or SQL databases, you must provision resources before using them. While that seems like an obvious requirement, it adds cost, time and friction to the cloud experience. But what if the system could automatically provision services and execute jobs in response to events, such as a message from another application? That's the principle behind serverless applications, also referred to as cloud functions.

Serverless computing offers a way to more easily deploy and manage complex enterprise apps. In this article, I detail two popular options, AWS Lambda and Azure Functions, including how they differ in terms of pricing, triggers and containers. These services, sometimes called functions as a service, along with similar offerings from Google Cloud and IBM Bluemix, can change how developers approach application design for event-driven needs, since functions execute only in response to defined triggers, such as changes in storage containers, activity on a message queue or access from an exposed HTTP API. Furthermore, triggering events aren't confined to activity on AWS or Azure, but can come from a third-party or on-premises system.
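
To make the trigger model concrete, here's a minimal sketch of an AWS Lambda handler in Python that fires when an object lands in an S3 bucket. The processing logic is hypothetical, and the S3-to-Lambda event mapping itself is configured on the bucket and function in AWS, not in the code.

```python
# Minimal sketch of a Lambda handler triggered by S3 object-created events.
import boto3

s3 = boto3.client('s3')  # created outside the handler so warm invocations reuse it

def handler(event, context):
    # An S3 event can batch multiple records; process each one.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        obj = s3.get_object(Bucket=bucket, Key=key)
        print('New object s3://{}/{} ({} bytes)'.format(
            bucket, key, obj['ContentLength']))
    return {'processed': len(event['Records'])}
```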

Azure Functions respond to triggers with actions that can output to other Azure services or data stores.
Functions have become a standard feature of the major infrastructure as a service providers, but they’re a new and rapidly evolving category. Watch for developments as providers add features, language support and integration with development environments and continuous delivery tools. One area of change will likely be the configuration dashboards, which can be confusing, given the number of event sources and function mappings. See the rest of the article for details on specific features and how Lambda and Azure Functions compare, including a pricing example.
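
As a back-of-the-envelope illustration of how function pricing works, the sketch below applies AWS's published Lambda rates ($0.20 per million requests and $0.00001667 per GB-second, after a monthly free tier of 1 million requests and 400,000 GB-seconds) to a hypothetical workload; see the article for the full pricing comparison.

```python
# Rough monthly Lambda cost for a hypothetical workload: 3 million
# invocations, 512 MB of memory, 1-second average duration.
invocations = 3_000_000
memory_gb = 512 / 1024.0   # Lambda bills memory in GB-seconds
duration_s = 1.0

request_rate = 0.20 / 1_000_000   # $ per request after the free tier
compute_rate = 0.00001667         # $ per GB-second after the free tier
free_requests, free_gb_seconds = 1_000_000, 400_000

gb_seconds = invocations * duration_s * memory_gb
request_cost = max(invocations - free_requests, 0) * request_rate
compute_cost = max(gb_seconds - free_gb_seconds, 0) * compute_rate
print('requests ${:.2f} + compute ${:.2f} = ${:.2f}/month'.format(
    request_cost, compute_cost, request_cost + compute_cost))
# -> requests $0.40 + compute $18.34 = $18.74/month
```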

Modernizing Business Applications, Part 1: Overview and Rationale

October 1, 2016

In IT, each generational transition has called for modernizing and redesigning applications, business processes and IT infrastructure to exploit new capabilities and efficiencies. This occurred when PCs and LANs usurped the mainframe and drove client-server computing, eliminating expensive hardware and the problem of data scarcity. It happened again when the internet and WANs disrupted client-server computing, and again when cloud computing gained popularity. The need for application modernization is thus a regular, if not entirely predictable, occurrence in IT. App modernization isn't carried out as a fashion statement, a status symbol or a way to keep up with nimble tech startups, but for cold, hard business reasons. Regardless of the era, the benefits of a periodic app overhaul include better performance, more features, greater usability and higher reliability. But while the need to modernize is obvious, it's unique to each business, and choosing a modernization approach can be difficult because there are several options. In part one of this five-part series, I explain the technological and business catalysts driving the need to modernize business applications and survey common architectures and techniques.

All of these business reasons for application modernization apply to the current cycle. However, moving to the cloud, whether public or private, also provides much greater application scalability, deployment flexibility, responsiveness for today's mobile users and more efficient use of IT resources. The third platform is an excellent model for understanding modern application design. It takes its name as the successor to the earlier mainframe and client-server frameworks, and its impetus is the nexus of four primary technologies: mobile devices, social networks, cloud services and big data analytics. Together, these change nearly everything about applications: their features, UIs and internal instrumentation; how they are designed, developed and deployed; even the application lifecycle and update frequency.

Read on for a discussion of using cloud services to modernize legacy applications. Future installments will cover two popular modernization techniques, PaaS stacks and containers, first outlining the basics and then surveying the various product and service offerings for each.