Monthly Archives: March 2016

Ransomware Doesn’t Just Target Noobs: The Enterprise Implications of Cyber-Hijacking

By | March 18, 2016

This column initially appeared on Diginomica as Digital Hijackers: The Rising Threat of Ransomware to Business


Imagine you are the CEO of a hospital and come in one day to find staff in a panic because they can’t use critical systems like CT scanners, lab-test and emergency-room equipment, or access pharmacy records. That’s precisely the nightmare scenario that faced executives at Hollywood (California) Presbyterian Medical Center last month as most of the hospital’s backend computer systems, including email, were shut down for more than a week. The cause wasn’t a software bug or admin error, but a targeted attack that hijacked and encrypted data and executables until the hospital paid a ransom. Hospital operations ground to a crawl as staff went back to paper records, fax transmissions and phone calls. Despite assistance from the FBI and LAPD, executives didn’t see a viable, timely solution other than paying up to the tune of 40 Bitcoins ($17,000) to unlock their machines. Sadly, the reputational damage and operational costs were undoubtedly far higher.


The hospital’s predicament is becoming more common as ransomware moves from the criminal fringes to become a potentially disruptive scourge on business operations. According to a new report (PDF) by the Institute for Critical Infrastructure Technology (ICIT), a Washington, DC cyber security think tank, ransomware will become more common as previously dormant infiltrations are activated and weaponized. As the ICIT authors put it, “‘To Pay or Not to Pay,’ will be the question fueling heated debate in boardrooms across the Nation and abroad.”

Ransomware is a cyber version of kidnapping, with the same motive: money. It works like a virus that secretly encrypts files; victims don’t get the decryption key until they pay the ransom. It’s as if, instead of stealing your car, a thief took the car keys and locked them in a safe left in your garage. You don’t get the combination to the safe, or the use of your car, until you pay up.

Like all malware, ransomware exploits have become more sophisticated, borrowing APT (advanced persistent threat) techniques such as the ability to subvert signature-based security checks or scans typically designed to detect unusual system activity and data exfiltration. According to the ICIT report,

“As of 2016, ransomware is mutating again to be more vicious and less predictable than in the past. This transition may be the result of adoption by more knowledgeable and ruthless adversaries, such as Advanced Persistent Threat groups.”

As the attacks have become more advanced, and correspondingly more expensive to develop, they have also become more costly for victims, with an average ransom of about $300 per infected host. What is an extortionate annoyance to someone trying to get their family photo library back becomes a significant business expense, both in the ransom itself and in the indirect costs of operational disruption and cleanup, when a data center full of systems is affected.

Although ransomware usually targets Windows machines, the ICIT report warns that,

“Unlike traditional malware actors, ransomware criminals can achieve some profit from targeting any system: mobile devices, personal computers, industrial control systems, refrigerators, portable hard drives, etc. The majority of these devices are not secured in the slightest against a ransomware threat.”

Indeed, we recently saw the first report of Mac-based ransomware.


Ransomware’s growing sophistication takes several forms:

  • Malware that targets zero-day, undisclosed and unpatched vulnerabilities.
  • Distribution and ransom demands that incorporate social engineering, prior surveillance and self-propagation to spread throughout a network.
  • Strong, asymmetric, in-memory encryption that is effectively impossible to break (see the Apple-FBI case, for example) and that leaves no trace of unique session keys on the device.
  • The use of multiple anonymizing technologies, such as Tor, proxy servers and crypto-currencies for payment like Bitcoin, Litecoin (LTC) and Dogecoin (DOGE), to hide the attacker’s identity and thwart tracking.

The vast majority of victims follow traditional law enforcement advice not to pay ransom, although the FBI has now reversed course for the most advanced attacks, conceding that victims often have no other recourse. Estimates are that only 1-3% of victims pay; however, because it is so easy to target millions of systems, ransomware is profitable even at low rates of successful infiltration and payment. According to FBI figures cited in the ICIT report, one exploit netted over $18 million between 2014 and 2015. But as the hospital incident highlights, these numbers will surely increase as more businesses are hit with viral ransomware that can grind their entire operations to a halt. Indeed, the hospital’s experience is hardly unique: many law enforcement agencies have themselves been victims of ransomware, and they often ignore their own advice and pay.
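The economics are easy to sanity-check with a bit of arithmetic. The sketch below uses purely illustrative assumptions for campaign size and infection rate (only the rough 1-3% payment rate and ~$300 average ransom come from the figures above), but it shows how a campaign that fails almost everywhere can still clear hundreds of thousands of dollars:

```python
# Back-of-envelope ransomware campaign economics (illustrative assumptions,
# not figures from the ICIT report except where noted in the text above).
targets = 2_000_000        # assumed phishing emails / exploit attempts sent
infection_rate = 0.05      # assumed fraction of targets actually infected
payment_rate = 0.02        # roughly the 1-3% of victims who pay
avg_ransom = 300           # approximate average ransom per host (USD)

victims_who_pay = targets * infection_rate * payment_rate
revenue = victims_who_pay * avg_ransom
print(f"{victims_who_pay:,.0f} paying victims -> ${revenue:,.0f} in ransom")
# -> 2,000 paying victims -> $600,000 in ransom
```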

With ransomware adopting the stealth distribution techniques pioneered by APTs and botnets, it becomes a greater threat to large enterprises, even those with detailed backup and DR processes. The ICIT report details the implications,

“Modern crypto ransomware maps networks, enumerates drives, and spreads onto as many systems as it can before it activates. As a result, numerous systems, including the backup and redundancy systems, may be infected. Not even a large organization can ignore half their systems going offline. The organization will have to react through remediation, surrender, or allowing the loss of the data. Many organizations cannot survive the loss of essential data for an extended period. Without adequate backups, business continuity may be impossible and customers or end users may be affected.”

My Take: Vigilance, but not Panic

Ransomware is just the latest in a long line of disruptive and potentially expensive cyber threats, but it’s most alarming for the brazen and direct way it monetizes an attack. Still, as it becomes harder and less efficient to cash in on stolen identities or information espionage, expect to see more cyber crime take on a ransom element. For example, the Target credit card breach cost the company and affected banks hundreds of millions of dollars in mitigation expenses, while the perpetrators netted an estimated $54 million by selling the information on the black market. Imagine that instead of skimming card numbers, the thieves had crypto-locked Target’s entire PoS and transaction processing infrastructure just days before Black Friday. Might the company be willing to part with $50 or even $100 million to quickly restore operations rather than risk millions of lost transactions and angry (perhaps former) customers over that busy shopping weekend?

The ominous potential of ransomware serves as a reminder that organizations must strengthen their cyber security strategy and preparedness. While having advanced network, system and data security technologies with layered defenses is important, it’s not enough, nor even the most important element. Indeed, ransomware, like many exploits, primarily relies on human weakness and naïveté to gain a toehold. Thus, regular training in basic security hygiene and cyber threat awareness is more valuable and cost-effective than buying expensive new security equipment and software. As the ICIT report puts it (emphasis added),

“The vast majority of breaches and cyber security incidents are directly correlated to the innocuous or malicious actions of personnel. Malicious emails are the favored attack vector of ransomware and other malware alike. Employees should be trained to recognize a malicious link or attachment. There is no justifiable reason that most organizations cannot reduce their personnel’s malicious link click rate below 15 percent. A single employee is all it takes for the entire network to be compromised.”

Of course, organizations with thorough and frequently validated backup and recovery plans have an advantage in that they can try to restore data from archives rather than submit to ransom.

One key to limiting the spread of ransomware is the use of virtualized security functions (NFV) and micro-segmentation of virtual data center networks. By placing security controls on the host and strictly limiting communications between hosts and applications through explicit security policies, NFV should contain, or at least slow, the spread of all types of malware.
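To make the idea concrete, here is a minimal sketch of the default-deny, explicit allow-list logic that micro-segmentation enforces. The workload names and flows are hypothetical, and real NFV/SDN platforms express this in their own policy languages and enforce it at the hypervisor or virtual switch rather than in application code:

```python
# Minimal sketch of default-deny micro-segmentation policy evaluation.
# Workload names, ports and policies below are hypothetical examples.
from typing import NamedTuple

class Flow(NamedTuple):
    src: str    # source workload/tier
    dst: str    # destination workload/tier
    port: int   # destination port

# Explicit allow-list: any flow not listed is dropped (default deny),
# which is what keeps an infected host from reaching its neighbors.
ALLOWED_FLOWS = {
    Flow("web-tier", "app-tier", 8443),
    Flow("app-tier", "db-tier", 5432),
    Flow("backup-agent", "backup-server", 10514),
}

def is_permitted(flow: Flow) -> bool:
    return flow in ALLOWED_FLOWS

# A compromised web server trying to reach the backup server over SMB,
# typical ransomware lateral movement, is blocked by default.
print(is_permitted(Flow("web-tier", "backup-server", 445)))  # False
print(is_permitted(Flow("app-tier", "db-tier", 5432)))       # True
```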

On the client, the use of application sandboxing techniques such as Bromium’s microvisor can nip ransomware at the source by preventing access to the local file system, applications and the OS network stack.

Despite the myriad available security technologies that can reduce the risk of ransomware and other cyber exploits, we agree with ICIT that,

“Ultimately, personnel are the strongest and the weakest link in organizational security. If they make a mistake, then the organization has made a mistake. If they fail, the organization has failed.”

Critics Oversimplify Bimodal IT: Here’s How It Can Transform IT

By | March 17, 2016

The following article was originally published on Diginomica as Bimodal IT is only harmful when oversimplified


I’ve been asked and written about bimodal IT a fair bit over the last few months and have come to develop a nuanced view of the concept that doesn’t comport with the dire warnings summarized in an earlier Diginomica column, Gartner’s bimodal IT considered harmful. Although the concept can be polarizing, I believe much of the blowback originates from assumptions made due to an unfortunate choice of name, reflexive distaste for analyst buzzwords and, particularly, for the term’s originator, the analyst firm so many love to hate. A common construction takes bimodal to mean bipolar, with IT segregated into two separate but unequal entities: Mode 1, where all the stuffy IT old-timers live out their days caring for decaying databases and moldering mainframes, versus Mode 2, where all the cool kids play with the latest toys and work unshackled from IT bureaucracy and processes. If that’s your view, bimodal is a recipe for disaster: a warring, dysfunctional IT organization.


If one subscribes to the us-versus-them characterization, I fully agree with ActiveState’s CEO, cited in the earlier column, that “one can expect many companies to experience huge conflict as the two camps engage in pitched battles for influence, resources, and power.” However, I contend that any IT executive who implements bimodal as a caste system with implied winners and losers misses the point and is guilty of gross mismanagement. Indeed, as I wrote last summer, requirements-based segmentation has been going on since the dawn of IT,

“Whether you call it legacy versus emergent systems, Brownfield versus Greenfield deployments or sustaining versus disruptive technologies, the dichotomy between old and new or maintenance and development has been around since the dawn of IT. Each category has always required a different set of investment, management and governance techniques. The difference now is the pace at which new products are developed and refined and a concomitant decrease in useful half-life of mature services.”

A key problem in much of the bimodal debate is that it overemphasizes the importance of Mode 2 development and minimizes the innovation, rejuvenation and reinvestment required in Mode 1 systems to maintain competitiveness in the era of digital business. As I put it here, the reductionist view of bimodal IT

understates the amount of innovation and service improvement that needs to happen in Mode 1, business critical systems, creates a false dichotomy concerning cloud usage within IT and romanticizes the nature of Mode 2 work.”

The flaw is in assuming that Mode 1 systems are on life-support instead of in need of some life-saving surgery and facelifts. Indeed, I completely subscribe to the view expressed in the previously-cited quote from analyst Jason Bloomberg,

“What many organizations are finding is that for digital transformation to be successful, it must be end-to-end — with customers at one end and systems of record at the other. Traditional IT, of course, remains responsible for those systems of record.”

Yet the need for IT transformation doesn’t invalidate the bimodal concept; it underscores the need to do things differently, since the pace of change, degree of risk and tolerance for mistakes in these Mode 1 “systems of record” cannot be so high that it jeopardizes critical business operations. These require stable, reliable, highly available applications built on mature and well-tested systems; however, they can’t be static and moribund. A key tenet of bimodal IT is what Gartner calls renovating the core. To me, this means bolstering mission-critical infrastructure and applications with new technologies like distributed, scale-out, microservices-based designs, virtualized, containerized cloud stacks and public cloud services, whether for bursting, DR or implementing new features. It’s important work that requires innovative engineering and IT resources, but with the deliberation, security and risk control required of mission-critical business services.

In contrast, Mode 2 is the place for IT experimentation and risk-taking. As I wrote earlier,

“Mode 2 provides the structure, or lack thereof, for IT and developers to learn, inculcate and perfect the behaviors and technologies required to attack fast-changing prospects for digital business through new applications and services.”


Here the emphasis is on new products and services in often dynamic markets with unknown odds of success. Mode 2 provides a structure to perfect processes in agile development, continuous delivery and rapid, data-driven customer feedback for both application developers and infrastructure architects. In this wave of IT innovation, whether you call it digital business, 3rd Platform or just today’s competitive reality, both the business opportunities and customer tastes are uncertain, fickle and often fleeting. The goal in Mode 2 is to maximize IT’s ability to create, adapt and react while minimizing the cost of failure.

Another way the bimodal debate sometimes mischaracterizes the desired organizational structure is by erecting walls between the two parts of IT. Again, this is understandable given the name and a superficial look at the concept; however, it’s a misreading of bimodal in my view. Instead, I see successful Mode 2 behaviors migrating throughout IT over time, an osmosis that brings increased dynamism and adaptability to all of IT.

My Take

Perhaps Phil Wainewright is correct that enterprises trying an all-or-nothing IT transformation will find it easier than feared, but count me skeptical, particularly for large organizations with hundreds of legacy systems feeding dozens of business-critical processes. To borrow my earlier metaphor, bimodal IT is old wine in new bottles; however, unlike the skunk works of old, Mode 2 can’t be isolated from the rest of IT, just insulated from the innovation-sapping, risk-averse bureaucracy. Done right, bimodal IT should inject fresh thinking and faster, more efficient processes without fracturing the organization into new-versus-old, us-versus-them tribes, with Mode 2 playing the role of catalyst in digital business transformation.

The Apple-FBI Imbroglio Offers Lessons For Enterprise IT

By | March 17, 2016

The following article was originally published on Diginomica as Apple-FBI Impasse: A Teachable Moment For Enterprise IT


When technology and public policy collide, it invariably creates waves; however, in the case of iPhone security versus FBI evidence collection, it’s more like a tsunami. The technical and legal details of the FBI’s case against Apple, and the inevitable back-and-forth that will likely only be resolved by Congress or the Supreme Court, are nuanced and deserving of a thorough public discussion. Indeed, the case is the latest example of a clash of cultures I characterized as the authoritarian (DC) versus the libertarian (SV). Yet a key fact buried in the details about this incident suggests the entire fiasco is the result of carelessness by the San Bernardino County IT department, which owned and issued the phone to the employee-turned-terrorist. Yes, a precedent-setting chain of events could have been prevented with proper IT oversight.

Even though it was an employer-supplied device, the iPhone in question was unmanaged, meaning the County IT department had no way of monitoring the employee’s usage, controlling access to applications or resetting the passcode. This sorry incident is a painful reminder of the importance of proper mobile device governance including the use of enterprise mobility management (EMM) software. Although EMM is often cited as a prerequisite for BYOD programs where organizations need control over sensitive data on an employee’s personal phone, the San Bernardino case shows that it’s just as necessary on employer-provided devices because you never know when an employee might lose a phone, forget a passcode, contract some malware or, regrettably, go postal.

The sad irony here is that the County IT department already uses one of the most popular EMM suites, from MobileIron, on some of its devices. A spokesman said the County requires some, but not all, employees to install the software, but didn’t know why this particular department chose not to. He went on to say that the County might review this policy in light of events. I should hope so. Here’s why.

Start with the basic facts of this case. When the FBI seized the terrorist’s iPhone 5C (no Touch ID on this device), it was locked. iOS includes a feature that increasingly delays the time between incorrect passcode entries after the fifth try and can erase the phone after the 10th failed attempt, which makes guessing the passcode by brute force infeasible, since the FBI has to assume the latter ‘nuclear option’ is enabled. Lacking direct access to the data, according to a County statement,


“A logical next step was to obtain access to iCloud backups for the phone in order to obtain evidence related to the investigation in the days following the attack. The FBI worked with San Bernardino County to reset the iCloud password on December 6th, as the County owned the account and was able to reset the password in order to provide immediate access to the iCloud backup data.”

Unfortunately, officials then learned that the last backup was six weeks old, hence the fight over direct access to the phone’s data via an Apple-assisted unlock. Here’s how EMM could have prevented this. The foundation of EMM is device management, namely the ability to provision software and remotely view, control and wipe the device. This includes forcing a data backup, which the FBI could have recovered via the newly assigned iCloud password (or from the County’s servers if IT had its own backup system), and resetting the passcode. Had the County actually deployed the software on every device, it could have bypassed any need for Apple’s involvement, as MobileIron’s VP of Strategy, Ojas Rege, describes,

“If an employee forgets the passcode, he or she calls the company’s IT department for help. If the device is using MobileIron, the IT department can, after confirming the employee’s identity, send a command to the device to clear the passcode. The employee can then set a new passcode.

“Note that even when the passcode is cleared, only the person holding the phone can see all the data that is on that phone — the company’s IT department cannot. In other words, the IT department cannot get remote access to the data on the phone simply by unlocking the phone. The phone must also be physically present. This protects the employee’s privacy.”

Since the FBI has the phone, it would set the new passcode and have full access to the device. End of story. Of course, that’s water under the bridge since, as Rege explains,

“San Bernardino County cannot use MobileIron to unlock the shooter’s phone because it is too late to install MobileIron once the device is locked. So now neither the County IT department nor Apple can clear the passcode.”
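For illustration only, the sketch below captures the generic shape of the workflow Rege describes: verify the caller’s identity, queue a management command, and let the already-enrolled device execute it at its next check-in. It is not MobileIron’s API; every name and function here is hypothetical.

```python
# Hypothetical sketch of an EMM passcode-clear workflow (not MobileIron's API).
import uuid

COMMAND_QUEUE = {}   # device_id -> list of pending management commands

def verify_identity(employee_id: str, challenge_answer: str) -> bool:
    # Stand-in for a real help-desk identity check (HR lookup, call-back, etc.).
    return challenge_answer == "expected-answer"

def request_passcode_clear(employee_id: str, device_id: str, answer: str) -> str:
    if not verify_identity(employee_id, answer):
        raise PermissionError("identity check failed")
    command = {"id": str(uuid.uuid4()), "type": "ClearPasscode"}
    COMMAND_QUEUE.setdefault(device_id, []).append(command)
    return command["id"]

def device_check_in(device_id: str) -> None:
    # The device must have been enrolled *before* it was locked; it pulls and
    # executes pending commands, so IT never sees the data stored on it.
    for cmd in COMMAND_QUEUE.pop(device_id, []):
        if cmd["type"] == "ClearPasscode":
            print(f"{device_id}: passcode cleared; user may now set a new one")

request_passcode_clear("emp-123", "iphone-5c-001", "expected-answer")
device_check_in("iphone-5c-001")
```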

Ironically, the County, or any other organization using Apple devices, didn’t even need to deploy expensive EMM software to perform basic tasks like remote device configuration, app installation, passcode reset, lock or wipe. Apple includes these features in OS X Server, which runs on any Mac, even a MacBook, and costs a whopping $20. Of course, most organizations won’t want to replace their Windows file sharing, Exchange servers and backup systems with services running on OS X, but the device management features alone are worth setting aside an old Mac and $20.

My Take

As Apple points out in defense of its actions, “The passcode lock and requirement for manual entry of the passcode are at the heart of the safeguards we have built into iOS.” We agree that weakening this technology is unwise and, if done, very unlikely to be confined to this case; it would also invite exploitation by cyber criminals. Yet for employer-provided phones, weakening the passcode is entirely unnecessary, since EMM provides the means to securely reset it and/or remotely back up or wipe the device. In most circumstances, this allows employees to regain access to their own device; however, in situations like the San Bernardino case, EMM gives law enforcement unfettered access to any device properly held as part of an investigation.

Of course, EMM isn’t a universal law enforcement tool, since it does no good for personally owned devices. However, the Apple-FBI standoff provides a teachable moment for IT departments: employees’ mobile devices do contain valuable data, and some of them will become inaccessible. You’d better have a plan in place to recover the data.

Azure PaaS: A Closer Look At Microsoft’s Application Platform

By | March 17, 2016

The following article was originally published on Diginomica as Azure Is More Than Cloud Infrastructure: A Look At Microsoft’s Application Platform


Despite its seeming maturity and inclusion in the general lexicon, the cloud remains maddeningly hard to define, much less circumscribe. Like its atmospheric analog, cloud services are dynamic, indistinct and ephemeral, and although accepted definitions and product categories exist, they don’t capture the subtle distinctions between various services. One area where the lines are especially fuzzy is the difference between infrastructure and platform services. It’s an important distinction, since IaaS is becoming a fungible commodity while higher-level application services are the way cloud vendors produce customer stickiness, if not outright lock-in. Thus, it’s not surprising that the two most popular cloud services, AWS and Microsoft Azure, are building contrasting versions of a cloud-based infrastructure and service substructure for enterprise applications. Yet they’re coming at it from completely different directions: AWS building from IaaS up the platform stack, Azure starting with platform services only to later expose basic IaaS capabilities.

Our need for clarity means that most people still consider AWS to be IaaS even though its set of services now includes machine learning analytical models, an IoT SDK and backend services, media transcoders and all manner of databases and data analytics systems. Conversely, Microsoft Azure was originally developed as a cloud platform for Windows application developers, complete with .NET and SQL services and app synchronization using Microsoft Live. Azure has since evolved to include standard IaaS compute, storage and network services to compete with AWS; however, it’s most valuable as a PaaS for applications.

Azure’s essence, or soul, became apparent to me after attending a Microsoft workshop unveiling the Azure Stack private cloud (see my coverage here). Microsoft doesn’t position Azure as a way to replace IT infrastructure, but rather as a framework for rethinking enterprise architecture and application design through the lens of targeted, cloud-based (often micro-) services. Understanding what this means and entails requires looking past the individual virtualized components to the overall Azure platform.

First, one must abandon traditional mental models of IT. As Phil Wainewright pointed out here last year, “The IT industry seems compelled to reinterpret cloud computing as a remake of how things have always been done in the past. And yet it never works out that way.” Exploiting Azure, whether as a public, private or hybrid service, demands unlearning old habits of client-server or mainframe design, with their monolithic applications and big-bang software releases, and adopting a cloud-first approach implemented using an agile, DevOps methodology that facilitates faster response to digital business plans.

Cloud Native Apps

Azure, like AWS and Google Cloud, is built for cloud-native applications, which differ from traditional IT systems by being inherently distributed, resilient and scalable, where APIs to microservices and asynchronous message buses replace persistent connections to network sockets and file shares. Azure consists of a growing portfolio of application, data, analytics, messaging, monitoring, security and developer services that can be integrated like building blocks into arbitrarily complex applications. Yet like IaaS, these higher-level components are managed by Microsoft, provisioned and priced as no-contract, usage-based services, consumed and interconnected using APIs and the Azure Service Fabric, and deployable on either shared (public Azure) or private (Azure Stack) infrastructure. Unlike AWS or other virtualization platforms, Azure was designed up front as a PaaS. As Mike Schutz, Microsoft’s General Manager, Cloud Platform Product Marketing, told Stuart Lauchlan last year, “Microsoft is taking and managing that underlying stack from the app down instead of from the operating system down. So from an operating standpoint, we believe we have a more mature offering in Platform-as-a-Service and because of the footprint that we have with developers today.”
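As a rough illustration of that shift, the sketch below uses only the Python standard library to mimic the pattern: identical, stateless workers pull jobs from an asynchronous queue, so capacity scales by adding workers rather than by holding persistent connections. Mapping the queue to a managed Azure service such as Service Bus or Storage Queues, and the workers to an autoscaled compute service, is my assumption for illustration, not a prescription from Microsoft:

```python
# Conceptual sketch of queue-driven, stateless workers (standard library only).
import json
import queue
import threading

message_bus = queue.Queue()   # stand-in for a managed cloud queue service

def worker(worker_id: int) -> None:
    while True:
        msg = message_bus.get()        # blocks until a message arrives
        if msg is None:                # sentinel: shut this worker down
            break
        order = json.loads(msg)
        print(f"worker {worker_id} processed order {order['id']}")
        message_bus.task_done()

# Any number of identical, stateless workers can be added or removed at will.
workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in workers:
    t.start()

for i in range(10):                    # a producer drops messages on the bus
    message_bus.put(json.dumps({"id": i}))

message_bus.join()                     # wait until every message is processed
for _ in workers:
    message_bus.put(None)              # stop the workers
for t in workers:
    t.join()
```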


Designing for Azure

Microsoft’s paradigm for Azure apps is the Cloud Platform Integration Framework (CPIF), a set of design patterns that provide a lightweight structure for solving a variety of business problems using distributed, often stateless cloud services. There are 24 application patterns for such things as loading data on demand from a cached data store or building a load-leveling message queue, along with general guidance for architectural decisions covering 10 concepts, including autoscaling, data caching, replication and synchronization, and application instrumentation. Each pattern contains the details application architects and developers need when designing for Azure, including the Azure services required to use the pattern, architectural diagrams, dependencies and interfaces.
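To show what one of these patterns looks like in practice, here is a minimal sketch of loading data on demand from a cached data store, often called cache-aside. The cache and database below are stand-in dictionaries and functions; mapping them to a managed Redis cache and SQL Database on Azure is a plausible assumption, not something prescribed by CPIF:

```python
# Minimal cache-aside sketch: check the cache, fall back to the data store,
# then populate the cache for subsequent reads. Stand-ins only; a real app
# would use managed cache and database services.
import time

CACHE_TTL_SECONDS = 60
_cache = {}   # key -> (value, expiry_timestamp)

def query_database(key: str) -> str:
    # Placeholder for a (comparatively slow) call to the system of record.
    return f"value-for-{key}"

def get_value(key: str) -> str:
    entry = _cache.get(key)
    if entry and entry[1] > time.time():      # cache hit, still fresh
        return entry[0]
    value = query_database(key)               # cache miss: load on demand
    _cache[key] = (value, time.time() + CACHE_TTL_SECONDS)
    return value

print(get_value("customer:42"))   # first call goes to the database
print(get_value("customer:42"))   # second call is served from the cache
```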

The keys to exploiting Azure PaaS are understanding the range of services available and creatively stitching them together to solve application problems. For example, traditional web apps should be stateless and autoscaling, with static content cached on a CDN; unlike on typical IaaS, however, most of these details are handled automatically by the Azure Web Apps service. Likewise, given the wide range of storage options in Azure, including object, blob, relational database, column store and cache, it’s important to identify the best fit for the job. Many of Microsoft’s application patterns deal with the subtleties of managing communications, queues, user identity or telemetry in a distributed system. Again, these are much easier to solve by using an available Azure service than by incorporating the code into the core application.

My Take: The Meaning for Business

Azure PaaS, like other cloud application platforms, represents a new way of designing, operating and funding IT and business services. While it promises to improve the performance, scalability and reliability of new product delivery, Azure and other PaaS offerings clearly pose an acute risk of lock-in. Of course, this is nothing new: application services are the reason Windows Server and IBM mainframes became ubiquitous and persistent within most organizations. As companies rearchitect IT and move to the cloud, executives must carefully consider the long-term implications of their application platform decisions. The choice of Azure, Salesforce, AWS, Cloud Foundry or another cloud platform will have as much to do with business relationships and prior experience as with technical merit.

The lessons from the consumerization of IT and the rise of cloud services are that users, whether individuals or business units, demand rapid, continuous innovation of digital products, reliable, predictable performance and integration with existing systems and data sources. Azure PaaS is a viable choice for meeting these objectives, but not necessarily the best one for every organization, and then only for those willing to learn and adopt new ways of system design.