Monthly Archives: August 2015

A Guide to AWS Identity Management and Policies

By | August 29, 2015

Portions of this article appeared in the TechTarget SearchAWS E-Handbook: Your Role in AWS Security.


AWS security is built upon a powerful identity and access management (IAM) service with a rich set of features befitting an enterprise platform. Yet the IAM management console, nestled within an overflowing AWS service dashboard, is deceptively simple, belying the complexity of the myriad account, credential and security policy options within AWS. IAM shares many features with server operating systems, like users, groups, passwords and permissions; however, it adds others, like conditions, roles, access keys and credential rotation, that are less common. Further complicating the picture, AWS uses identities and groups for several purposes: access to AWS management features via the Web console or command line interface (CLI), programmatic access to AWS services via APIs, and network access and traffic policy when using private networks on a virtual private cloud (VPC).

AWS hews to established identity management practice, where security policy starts at the most atomic layer: individual user identities with unique credentials that are mapped to a set of usage permissions. However, as the number of users and associated sets of permissions grows, minimizing management complexity requires making molecules out of those atoms. Central to efficient and consistent identity and access management is the ability to define a common set of permissions for particular tasks and job requirements by placing users with common needs into a single pool with a consistent set of policies for everyone in the group. Indeed, users and groups are the foundation of an effective and secure AWS access policy framework.

User Management Basics

AWS access control starts with user identity, where the prime directive is that every individual should have a unique user ID and credentials. These map to a set of permissions that is empty by default. In other words, creating a new AWS user doesn't allow them to do anything. In AWS, user IDs have up to four types of security credentials:

  • Password: used for Web access to the AWS management console
  • Access keys (an access key ID and secret key): used for CLI or API access to AWS services
  • Multi-factor authentication (MFA): associates the ID with a physical (USB token) or software (Google Authenticator) device generating one-time passwords for extra security
  • SSH key pairs: used to secure code management traffic between private repositories hosted on AWS CodeCommit and local Git clients

AWS is designed around the security strategy of granting least privilege, since by default users can do nothing, even if they have their own access keys. Administrators should build user permissions from the bottom up: only those required to do the job should be added. In practice, this means users that never need to access the management console, like developers, shouldn't have passwords, and those that never programmatically access AWS services, like service admins, shouldn't have access keys. The principle here is that it's always easier and more secure to add permissions when necessary than to take them away after they are misused.
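
To make this bottom-up approach concrete, here is a minimal boto3 sketch of creating an API-only developer identity: it gets access keys but no console password and, until a policy or group grants it something, can do nothing. The user name is a hypothetical placeholder, not anything from the article.

```python
# Minimal sketch: an API-only developer identity with access keys but no
# console password. "ci-deployer" is a hypothetical user name.
import boto3

iam = boto3.client("iam")

# A brand-new user has no permissions and can do nothing until a policy
# or group membership grants it something.
iam.create_user(UserName="ci-deployer")

# Issue access keys for CLI/API use; deliberately skip create_login_profile()
# so this identity has no password and cannot sign in to the console.
key = iam.create_access_key(UserName="ci-deployer")["AccessKey"]
print("Access key ID:", key["AccessKeyId"])  # store the secret key securely
```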

Permissions, Policies and Groups

IAM permissions define access to AWS resources and can apply to individual users or to the resources themselves. For example, a user might have read permission to an S3 bucket that allows access to the bucket's contents, but no ability to add, delete or change them. Another S3 bucket used for test and development might have resource permissions that grant full access to any user coming from a specific range of IP addresses or from EC2 instances in a certain availability zone. Individual IAM permissions can be aggregated into a policy.
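
To illustrate the distinction, here is a hedged sketch of the two policy shapes just described: an identity policy granting a user read-only access to one bucket, and a bucket (resource) policy granting full access from a given IP range. The bucket names and CIDR block are hypothetical examples.

```python
# Sketch of the two policy shapes: identity vs. resource permissions.
# Bucket names and the IP range are hypothetical examples.
import json

# Identity policy: attached to a user or group, grants read-only access
# to a single reporting bucket.
read_only_user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-reports",
                     "arn:aws:s3:::example-reports/*"],
    }],
}

# Resource policy: attached to the bucket itself, grants full access to any
# principal calling from the test/dev office IP range.
test_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-testdev",
                     "arn:aws:s3:::example-testdev/*"],
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

print(json.dumps(read_only_user_policy, indent=2))
print(json.dumps(test_bucket_policy, indent=2))
```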

User vs. resource permissions

IAM policies are sets of logical statements, written in JSON syntax, that describe an arbitrarily complex set of permissions. For example, a policy could allow users from a business partner (teaser: defined using groups) to drop files only in a specific portion of a larger corporate file-sharing bucket, or grant a user full access to EC2 reports but only read-only access to AWS account usage.

AWS policies can be arbitrarily complex; however, most situations are covered by the policy templates built into IAM. Before discussing groups, which apply policies to many users at once, a big word of warning: permissions don't apply to the root user, i.e. the identity used to establish an AWS account. Like the superuser in Linux, AWS root has unfettered access to everything, which is the reason the IAM setup wizard nags administrators to delete (or never create in the first place) root access keys and to secure the root password with MFA.

Groups: A Proxy for User Tasks

Groups are merely sets of users with the same AWS access requirements, i.e. permissions, and are a handy shortcut for assigning and changing permissions for large subsets of one's AWS user population. Groups also make it simple to reassign a user's set of permissions when they change job responsibilities.

Groups should be based on business function and job requirements; indeed, it's best to define permissions and groups as a unit and assign users as necessary. Start by defining the needs of each constituency using AWS and then map them to a set of AWS permissions. Use the default set of AWS access policies as templates and double-check your IAM configuration using AWS Trusted Advisor.
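
In boto3 terms, that group-first workflow might look like the sketch below: create the group, attach one of AWS's built-in managed policies as the template, then add users. The group and user names are hypothetical; the policy ARN is AWS's stock EC2 read-only managed policy.

```python
# Sketch: build the group first, attach an AWS managed (template) policy,
# then drop users in. Group and user names are hypothetical.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="ec2-observers")

# AWS managed policy used as a starting template; swap in a custom policy later.
iam.attach_group_policy(
    GroupName="ec2-observers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)

# Membership, not per-user grants, now determines what each person can do.
iam.add_user_to_group(GroupName="ec2-observers", UserName="ci-deployer")
```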

Default IAM policies

Group permissions can also have a set of conditions that allow granular control over certain actions and logically take the form of "if-then-else" statements. The goal of conditional permissions is to allow legitimate work while minimizing accidents, particularly with restricted, administrative activities. For example, a conditional permission could require that access to a certain resource or to the management console come from a specific IP address range and only when using MFA. Conditional policy logic can get complicated and easily lead to unintended consequences, so it's wise to use the AWS Policy Simulator, documented here, to test the effects of policy changes before deployment.
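
As a rough illustration, the sketch below defines a condition that requires both a corporate IP range and MFA for EC2 stop/terminate actions, then checks it with the policy simulator API (simulate_custom_policy) before rollout; the actions and IP ranges are illustrative assumptions.

```python
# Sketch: a conditional policy requiring a corporate IP range plus MFA for
# EC2 stop/terminate, dry-run with the IAM policy simulator API.
# The actions and IP ranges are illustrative assumptions.
import json
import boto3

iam = boto3.client("iam")

conditional_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
        "Resource": "*",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "198.51.100.0/24"},
            "Bool": {"aws:MultiFactorAuthPresent": "true"},
        },
    }],
}

# Simulate a terminate call coming from outside the allowed range, which
# should evaluate to an implicit deny.
result = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(conditional_policy)],
    ActionNames=["ec2:TerminateInstances"],
    ContextEntries=[
        {"ContextKeyName": "aws:SourceIp",
         "ContextKeyValues": ["192.0.2.15"], "ContextKeyType": "ip"},
        {"ContextKeyName": "aws:MultiFactorAuthPresent",
         "ContextKeyValues": ["true"], "ContextKeyType": "boolean"},
    ],
)
for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], "->", evaluation["EvalDecision"])
```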


Group policies can be very granular, controlling access to specific EC2 instances; however, this is usually overkill that adds administrative overhead. A better option is to create groups for user access to specific applications, or better yet, application tiers, such as access to front ends and ELB (load balancers), app logic and database layers. However, when defining permissions for service requests, say an EC2 instance calling other AWS services, it's best to use roles, not groups, as this IAM FAQ explains:

Q: What is the difference between an IAM role and an IAM user?

An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM role does not have any credentials and cannot make direct requests to AWS services. IAM roles are meant to be assumed by authorized entities, such as IAM users, applications, or an AWS service like EC2.
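To see that difference in practice, here is a hedged sketch of creating a role that EC2 instances can assume to read from S3, instead of baking a user's long-term keys into the instance; the role name and the attached managed policy are illustrative choices.

```python
# Sketch: a role that EC2 instances assume to call S3, rather than embedding
# a user's long-term access keys in the instance. Names are hypothetical.
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="app-s3-reader",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="app-s3-reader",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# The instance profile is the wrapper used to launch EC2 instances with the role.
iam.create_instance_profile(InstanceProfileName="app-s3-reader")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-reader",
                                 RoleName="app-s3-reader")
```

Applications running on an instance launched with this profile pick up temporary credentials from the instance metadata service, which is exactly the no-long-term-credentials behavior the FAQ describes.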

Group policies can also apply to AWS virtual private cloud (VPC) network security to control access to specific subnets, server ports and API access. We don’t have room for the details, but here groups serve the same function as they do in network firewall policies and ACLs.

Fine-tuning policies using JSON statements

Identity Integrity

Core to group policies is the integrity of individual user credentials. Thus, it's wise to enforce a strong password policy that covers strength, expiration and reuse, and to regularly rotate access credentials. AWS admins should use credential reports to identify the use of access keys, flag those that are dormant and remove unused keys. Key rotation requires a few steps, detailed here, to avoid accidentally disabling application access; a scripted sketch follows the list below.

  1. Create a second access key in addition to the one in use.
  2. Update all your applications to use the new access key and validate that the applications are working.
  3. Change the state of the previous access key to inactive.
  4. Validate that your applications are still working as expected.
  5. Delete the inactive access key.
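
A minimal boto3 sketch of that sequence, for a single hypothetical user, might look like the following; in practice you would pause between steps to redeploy and validate applications.

```python
# Sketch of the five rotation steps above for a hypothetical user; pause
# between steps to redeploy applications and confirm they still work.
import boto3

iam = boto3.client("iam")
user = "ci-deployer"  # hypothetical user name

# 1. Create a second key alongside the one in use.
new_key = iam.create_access_key(UserName=user)["AccessKey"]

# 2. Update applications to use new_key["AccessKeyId"] and
#    new_key["SecretAccessKey"], then validate them (out of band).

# 3. Deactivate the old key(s) rather than deleting them immediately.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=user,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")

# 4. Validate applications again; an inactive key is easy to re-enable.
# 5. Only then delete it:
#    iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)
```

Keeping the old key inactive rather than deleting it immediately gives you a quick rollback path if an application was missed in step 2.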

AWS admins should also use CloudTrail to log user activity, including API calls and logins. For more detail, see these SearchAWS articles on CloudTrail and logging tools.
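
As a small example of putting those logs to work, the hedged sketch below uses CloudTrail's lookup_events API to pull recent console-login events; the seven-day window is an arbitrary choice.

```python
# Sketch: pull recent console-login events from CloudTrail for review.
# The seven-day lookback window is an arbitrary illustrative choice.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```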

 

Book Review – Geekonomics: The Real Cost of Insecure Software

By | August 27, 2015

If you’ve ever wondered why the first thing you must do upon booting a new PC, with its pristine copy of Microsoft’s latest and greatest, is spend the next few hours loading extraneous security software – anti-virus, spyware protection, firewall, spam filter – David Rice has a theory for you. As he expounds at great length in his first book, Geekonomics, our methods of developing software are crude and error prone, with the industry valuing speed and features over quality and security.

While the book’s subtitle, “The Real Cost of Insecure Software”, suggests an emphasis on the software security holes that regularly make news, the vista of poor software quality Rice describes extends far beyond incidents involving hackers and identity thieves. Whereas many software flaws are merely an inconvenience, some of Rice’s most egregious examples are tragic, such as the time software controlling an X-ray machine designed to treat cancerous tumors malfunctioned and, lacking the hardware failsafes used on prior models, delivered massive doses of radiation that killed six patients.

Major Themes

Modern software is among the most complex creations of mankind, and Rice begins his work by outlining the myriad ways sophisticated programming code is pervading almost every aspect of modern life – from mobile phones to airliners, it “cuts across almost every aspect of global, national, social, and economic function.” Yet unlike materials such as cement (an analogy he uses frequently) or steel that form the foundations of our physical infrastructure, software is infested with design and implementation defects that Rice contends are easily preventable.

The reasons for software’s poor quality are legion. Rice decries the industry’s economic incentives that reward speed and functionality over reliability and security – a condition ironically summed up by former Apple executive Guy Kawasaki in one of his rules for success during the go-go dot-com days: “Don’t Worry, Be Crappy.” Aggravating the problem is a software sales model that Rice describes as shifting responsibility for product maintenance and upkeep from manufacturers to end users, and that relies upon a licensing (rather than outright ownership) model enabling vendors to dictate favorable terms of purchase.

Key Concepts

  • Software is one of the core “construction materials” of modern civilization and has crept into almost all aspects of 21st Century life
  • Despite its importance to society, software’s means of production, acquisition and maintenance are flawed, leading to unacceptable product quality, reliability and security
  • Examples of shoddy software are legion, with deleterious effects resulting in substantial financial and, in some cases, human costs
  • While the software ecosystem is currently broken, it’s not irreparable; legal, engineering and professional reforms are available that can bring software production up to normative standards for similar products having wide scale societal impact

Rice cites a legal framework that holds manufacturers responsible for neither software defects nor the resulting damages as reinforcing this lack of accountability. Finally, Rice laments the lack of rigorous software engineering standards and practitioner licensing. This litany of problems leads to a sense of despair and distrust among users, sustaining low expectations of software quality.

The majority of the book is dedicated to explicating the structural problems that create a ‘fast and loose’ environment for software production; however, Rice concludes with a faint (or perhaps just feigned) bit of optimism by offering some potential solutions. He recounts each of the major problem areas and suggests ways of filling gaps in the current state of affairs. For example, to address a legal system that allows software vendors to escape responsibility for errors or security holes in their products, Rice suggests legislation or class action lawsuits aimed at applying legal theories of liability and negligence just as they pertain to car or drug manufacturers. In order to bring a higher degree of professionalism, accountability and standardization to workers in the software industry, he recommends that states or professional bodies such as the ACM or IEEE develop licensing standards and requirements for software engineers similar to those imposed on civil engineers, doctors or lawyers.

Evaluation and Conclusion

Despite the title, the only thing geeky about Rice’s book is the object of his wrath – software. The book seldom strays into technical minutiae. It’s really a public policy treatise about the role of software in modern life and how our public institutions should apply policies and remedies used in other realms to software development, sales and ownership. While IT managers can certainly benefit from Rice’s detailed explanation of the causes and effects of shoddy software, the book is a must-read for legislators, legal scholars and public policy wonks searching for ways to lift software to the standards of excellence and safety required of any of civilization’s critical infrastructure.


Bibliography: Geekonomics: The Real Cost of Insecure Software

  • Author:  David Rice
  • Publisher: Addison-Wesley Professional (November 28, 2007)
  • Price: $29.99 (list)
  • ISBN-10: 0-321-47789-8
  • ISBN-13: 978-0-321-47789-7
  • Hardcover: 362 pages
  • Amazon | Google Books

Data Analytics Meets Farming in Precision Agriculture: A Recipe For Cloud Services

By | August 25, 2015

Agriculture marches to its own version of Moore’s Law, with crop productivity steadily increasing for decades. While past improvements were the result of better plant hybrids, fertilization and production equipment, information technology will be the key to sustaining and perhaps accelerating agricultural productivity. Precision agriculture, a set of data collection, analysis and prediction technologies that looks like something out of Google rather than John Deere, gathers detailed information about growing and crop conditions and feeds it into complex models designed to provide actionable recommendations that improve yields and reduce costs. A complex problem that combines sensor technology, data collection, crop modeling and predictive analytics, the computational elements of precision agriculture are ideal for cloud deployment. I explain why in this column.

Historical Corn Grain Yields for Indiana and the U.S. | Source: Corny News Network (Purdue University)

The field has already attracted the attention of big companies like IBM, which has researchers working on agricultural weather forecasts, models and simulations to improve farm decisions, and Accenture, along with a host of startups as profiled in this Forbes column. Yet farming is a hands-on activity and many of the measurements that feed precision agriculture models require instruments and implementation expertise that small farmers don’t possess.

Source: Accenture

Here’s a look at the connected tractor and some of the many data sources used by today’s farmers. The scenario is not unlike many IoT designs for industrial equipment and infrastructure and makes for interesting networking and data collection challenges.

Source: Prof. dr. ir. Josse De Baerdemaeker, Department of Biosystems, Division MeBioS, KU Leuven, Belgium

The full column has details, but although it’s still relatively small, one estimate shows the precision agriculture market growing at over 13% per year, hitting $3.7 billion by 2018, with the rate in emerging markets expected to exceed 25%. According to an investment bank report on precision agriculture, “The entire industry is realizing that a key value driver in the development of precision agriculture is data — collecting it, analyzing it, and using it.” Although data collection will remain a local problem, shared cloud services can accelerate the analysis and lower the barriers for farmers needing actionable intelligence. Precision agriculture will be an exciting field to monitor for both technological advancements and investment opportunities.

 

Google Fi Frees Mobile Users from Carrier Lock-In, Onerous Pricing Models

By | August 18, 2015

Lost in the alphabet soup that is Google’s new holding company is one of its many experiments in changing the business rules for network services while nudging the technology in a direction that ultimately benefits consumers. No, not Fiber, although it’s a fantastic broadband disruptor for those who can get it, but Project Fi, Google’s mobile service. After arriving with great fanfare this spring, Fi has slipped under the radar because of Google’s slow, by-invitation rollout, its very limited phone support (one!) and the fact that Google hasn’t publicized the service like its sexier Google X experiments such as Glass, Loon and self-driving cars. Indeed, the individual pieces that make Fi appealing aren’t unique, but collectively they are trendsetting and could herald welcome changes in how we all use and pay for phones and wireless service.


In this column, I explain the technical side of Fi as an MVNO and VoWiFi service, but also how it changes the billing and pricing model in ways that provide consumers with much more flexibility and transparency. The Fi meta-network is a great idea and well implemented, but not without limitations, which I detail. Fi also supports only a single device, the Nexus 6: a great phone, but not for everyone. Although its technical features are commendable, far more important is what Fi does to the mobile business model. By decoupling the user from a particular carrier, eliminating contract lock-ins and the associated phone subsidies, and adopting a form of usage-based pricing, Fi turns the smartphone into just another consumer appliance: more like a PC than a condo.

Nexus 6 vs. iPhone 6 Plus | Source: author

Collectively, these changes don’t necessarily mean lower bills, but they do provide much greater transparency regarding one’s overall mobile costs and usage, which Econ 101 says should drive more rational purchase decisions. By providing usage flexibility for both the network and the data plan, billing clarity and service freedom, Fi is a success regardless of Google’s future plans. Although there’s always a risk in using any Google experiment, I couldn’t be happier with Fi and strongly hope Google grows the service with new devices, including iPhones, and more trusted WiFi locations.

Micron Has a Rosy Future, But Faces Bumpy Product Transition

By | August 18, 2015

Micron Technology provided its view of the memory market, technology trends and company strategy at an analyst conference where executives stressed the company’s R&D investments and product roadmaps designed to exploit changing customer requirements and new memory applications. The company has a thorough understanding of the memory business, a compelling vision and a strong product roadmap; however, its near-term results remain shackled to a secularly declining PC market and cut-throat flash pricing.

While the company painted a rosy picture of future prospects, investors are more concerned with the here and now, where ASPs (average selling prices) are falling and the largest market for DRAM, PCs, is “challenging”. Micron’s lackluster Q3 financial results, which one Forbes contributor described as “dismal”, triggered a 30% decline in the company’s stock in the intervening weeks.


In the full post I analyze Micron’s strategy, the business drivers behind it and specific product directions, including:

  • DRAM customers demanding high-performance memory: Micron’s answer is to ramp its next-generation 20nm fabrication process and migrate to faster, more efficient DDR4 products.
  • NAND buyers in both the mobile and data center markets can’t get enough capacity: Here, Micron has partnered with Intel on a 3D NAND process that DeBoer says can scale for the next decade and enable SSDs of 10TB and up.
  • New types of memory products: The recently announced 3D XPoint memory, another joint development with Intel, promises the speed and durability of DRAM with the density and nonvolatility of NAND. At IDF, Intel demonstrated a 3D XPoint PCIe SSD that provides five times the I/O throughput of its high-end NAND SSD. Micron has also developed the Hybrid Memory Cube, a multichip package combining a stack of traditional DRAM chips with a unique logic layer that can provide 15 times the I/O bandwidth of a standard DDR3 module while using 70% less power.


The column details the three growing markets Micron is targeting through an expanding product portfolio: mobile devices, automotive and data centers. For example, Micron’s data center strategy includes elements of all three umbrella strategies DeBoer outlined: higher-density DDR4 DRAM and NAND, flash storage products like SSDs, and PCIe cards, plus emerging high-speed memory (HMC, 3D XPoint) for new applications.


The problem for Micron and its investors is that it must first navigate the transition from relying on a secularly declining market, PCs, where margins are nonexistent, to a much more diverse set of products with exciting, but unproven (and unknowable), potential. Micron has a compelling product and technology roadmap, helped in no small part by its strategic partner, Intel, and isn’t content to react to events dictated by competitors like Samsung or large buyers like Apple. Although Micron has a sound strategy, that doesn’t mean it’s a great investment until it demonstrates some success with its new product initiatives.


 

Networks For The Next Generation: OpenDaylight Hits Critical Mass

By | August 3, 2015

An overarching trend sweeping the industries that combine to create IT infrastructure is the embrace of open, inter-company collaboration on core technology. There have been plenty of examples of corporate affection for open source of late; however, the trend was on full display at the recent OpenDaylight Summit, where network hardware vendors, component suppliers, telecom companies, software developers and online service providers came together to plot the future of software-defined networks (SDN) and cloud-provisioned services. The OpenDaylight project, under the auspices of the Linux Foundation and sponsored by 50 (and growing) organizations, exists to catalyze and direct projects that address network plumbing. By shepherding the strategy and development of full-stack SDN, from physical switches to virtual network appliances, OpenDaylight hopes to do for network infrastructure and services what server virtualization and automation have done for cloud services and business applications.


My reactions after immersing myself in the OpenDaylight ethos for three days include both shock at how far the project has come in two short years and awe at the degree to which major forces across various network business constituencies, from service providers (and big equipment buyers) like AT&T, Baidu and Tencent, to vendors like Brocade, Cisco and NEC, to users like the Large Hadron Collider, have rallied around the project, its technology and strategic direction. See my complete analysis in this column, but the takeaway for network designers, application developers and technology executives is that the future of SDN and virtual network services passes through the portal of OpenDaylight.


When behemoths like AT&T make a technology a part of their strategy, it sends a powerful signal that OpenDaylight isn’t just a Trojan Horse to maintain the proprietary dominance of a few equipment vendors or an academic exercise for a few tenured professors and exuberant volunteers. Read on to see why, but business and IT executives and their technical leaders ignore OpenDaylight at their peril.


For highlights of presentations I attended at the OpenDaylight Summit, see this photo album.

Disclosure: The Linux Foundation paid for my travel expenses to the Summit; however, I have no current or prior commercial relationships with them.