Don’t just worry about data security on hotspots. Hotspot providers collecting metadata pose a bigger threat to privacy
Public hotspots have become the water fountains and restrooms of the digital age: amenities expected of every public venue. But as I wrote in a previous column, they can be a privacy and security minefield, where the combination of few controls, little oversight and careless users makes them a target-rich environment for data thieves. Yet hackers setting up rogue APs and copying every Web page, email, photo and shopping cart you see online pose less risk of data loss than more routine, and entirely legal, forms of data mining.
There’s another, more insidious means of privacy invasion while on public networks: metadata tracking. Perhaps the easiest and most common method uses DNS, the Internet’s address book, to record your every move. I have the details, plus advice on protecting your privacy in this column.
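The reason DNS is such an easy tracking vector is that a classic DNS query carries every hostname you visit across the wire in cleartext, directly to whoever operates the resolver. A minimal sketch (the hostname is hypothetical, and the packet layout follows the standard RFC 1035 format) shows just how visible that is:

```python
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a minimal plaintext DNS query (RFC 1035) for an A record."""
    # Header: transaction ID, flags (recursion desired), 1 question, 0 answers/authority/additional
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("mybank.example.com")
# Every label of the site you are visiting sits in the packet, unencrypted,
# for the hotspot's resolver (or anyone sniffing port 53) to log:
assert b"mybank" in packet and b"example" in packet
```

This is why the resolver a hotspot hands you via DHCP sees your full browsing itinerary even when every site you visit uses HTTPS; encrypted alternatives such as DNS over HTTPS or DNS over TLS close exactly this gap.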
Protecting your data from prying eyes while on the move is important and easier than ever. Here’s why.
An axiom among network security pros is that you should treat public Wi-Fi hotspots like the cyber equivalent of public bathrooms: a convenience we all use, but only with the requisite hygiene. You wouldn’t share personal items like a toothbrush or razor with others at an office, gym or airport restroom, but too often people broadcast personal information that could be disastrous in the wrong hands over wireless networks where intercepting data is easier than many realize. In addition, users on public hotspots leave breadcrumbs documenting their every move on the Internet for anyone, including the hotspot operator, to mine for valuable, and privacy-compromising, insights; a topic I’ll cover in more depth in my next column.
We all know that personal data leaks like a sieve on the Internet writ large, whether through Google’s collection of search history, Facebook’s aggregation of login credentials and activity tracking (using cookies and social plug-ins) on sites far and wide, or other ad networks that track our every move. However, the risk is acute out in the wild, in the world of public hotel, airport, cafe and convention center Wi-Fi. While Google and Facebook collect data to profile users and tailor ads and other promotions to them, at least their customers (i.e. essentially all of us) generally know what we’re signing up for in the bargain. Out in the wilds of public hotspots, there are no rules.
I explain the risks and what to do about them in this column.
Security threat and response is a vicious circle of escalating (and increasingly cagey) attacks and sophisticated (and increasingly costly) defenses. The latest generation of malware includes deviously creative evasive techniques crafted to exploit ambiguities in the Internet’s underlying technology, flaws in network software stacks, and limitations of security appliances.
One example operates at the network-protocol level to bypass firewalls and intrusion-prevention systems by hiding malicious traffic within abnormal, but still compliant, TCP/IP packets. Another category works entirely within common applications using normal rules for web traffic. They don’t so much trick network security software as bypass it using HTML5 and embedded scripts to distribute malicious payloads. In this report, I discuss these techniques, how IT teams can test their level of exposure, and how to detect and block attacks using advanced packet normalization.
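To make the first category concrete, here is a toy sketch (not any vendor's actual normalizer, and it assumes contiguous segment coverage) of why overlapping TCP segments defeat naive inspection: if a security appliance reassembles overlaps with one policy and the end host with another, the two see different payloads. Advanced packet normalization resolves this by forcing a single, consistent policy before inspection:

```python
def normalize_overlaps(segments, policy="first"):
    """Reassemble TCP segment payloads, resolving byte overlaps with one policy.

    segments: list of (sequence_number, payload_bytes) tuples.
    policy:   "first" keeps the first byte seen at each offset;
              "last" lets later segments overwrite earlier ones.
    """
    stream = {}
    for seq, data in segments:
        for i, byte in enumerate(data):
            offset = seq + i
            if policy == "first":
                stream.setdefault(offset, byte)  # keep original byte
            else:
                stream[offset] = byte            # overwrite with retransmission
    return bytes(stream[k] for k in sorted(stream))

# An evader sends overlapping segments so that an IPS using one overlap
# policy and the target host using the other see different data:
segs = [(0, b"GET /index"), (4, b"/evil.bin ")]
print(normalize_overlaps(segs, "first"))  # what a "first-wins" inspector sees
print(normalize_overlaps(segs, "last"))   # what a "last-wins" endpoint sees
```

Running this, the "first" policy yields `GET /indexbin ` while the "last" policy yields `GET /evil.bin `: same packets, two different payloads, which is exactly the ambiguity a normalizer must eliminate.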
The term “cloud” is by now so overused that most people lump it in with the rest of the marketing buzzword pantheon, alongside terms like “solution,” “leverage” or “ROI.” You’re probably thinking the cloud is what Google and Amazon do; how could it possibly be relevant to your business? Well, it is (what Google does), it will be (important to every business), and VMware wants to be the company that delivers it. It’s called the software defined data center (SDDC), and for CEO Pat Gelsinger it’s increasingly a part of discussions with his C-level customers.
In part one of this exclusive Q&A after his Interop keynote, Gelsinger shares details about VMware’s SDDC strategy and how it’s evolved from servers to networks and storage.
VMware CEO Pat Gelsinger used his Interop keynote to lay out a four-pronged strategy for building what the company has coined the software defined data center, adding details and customer testimonials to a concept he initially described last year. Besides the firm’s ubiquitous server virtualization products, which abstract compute resources into customizable chunks that can be reconfigured and reallocated as needed to suit specific application needs, the strategy includes virtual networks, using NSX technology released last fall; storage virtualization, provided by the recently shipping and Interop award-winning VSAN product; and software to centrally and programmatically automate configuration, deployment and management of the entire infrastructure stack.
It’s an expansive vision that could put VMware in the middle of enterprise IT application and service design, construction and delivery. Read more in this column.
The nexus of mobile, social, cloud, and big data is radically reshaping the business world, and nowhere is the upheaval more dramatic than in areas with high customer engagement: retailing, financial and communications services, travel, customer service and technical support. Indeed, the world of customer service has been turned upside down. Forget about waiting on hold for 20 minutes and explaining your problem three times to three different people. Today, customers have been conditioned by instant, everywhere access to information and resources to expect answers now and in context, namely answers and information that reflect the customer’s profile and prior interactions. It’s called the omnichannel customer experience and I outline the basics here.
The rapid adoption by consumers of new types of technology (e.g. smartphones, social media) has led to the consumerization of customer experience. This change in how customers interact with businesses impacts more than just the customer service department. It can be seen in everything from client devices and applications to IT infrastructure and application provisioning. Collectively, the proliferation of mobile, social and cloud-based technology has permanently altered consumer behavior, notably in how people learn about, evaluate and buy things, how they share opinions about products, retailers and service providers with an ever-widening circle of online friends and the ways they expect to access documentation and support.
This white paper (registration required) provides the full details.
Complex Financial Relationships and Often Competing Alliances Make the Company Harder to Value and Susceptible to Product Overlap and Infighting
A key theme at this week’s EMC World conference, the company’s annual event showcasing its business strategy, technology innovations and new products, was what the company calls the federation: its corporate structure of four quasi-independent business units and associated product lines. This column analyzes the company’s strategy, structure and business valuation. It may come as a surprise to those not closely following the IT business that still associate EMC with the big iron storage arrays that made it famous, but the company’s ambitious acquisition activity over the past decade means it is now a major player in five key areas of IT: storage, server and network virtualization (VMware), cyber security (RSA), cloud software platforms (Pivotal) and application software (Documentum with associated content management add-ons and Syncplicity).
Yet EMC goes further than most in providing a substantial degree of autonomy to its business units. The column dives into EMC’s strategy, its fascinating business structure and how trying to apply U.S.-style federalism to a large enterprise has drawbacks.
Note to CIOs: ‘Cloud First’ Isn’t Just a Suggestion
Public cloud services are the biggest change driver in enterprise IT since the rise of the Internet two decades ago. The most recent InformationWeek Cloud Security and Risk Survey finds that 38% of respondents already use public infrastructure-, platform-, and software-as-a-service products, with an additional 9% planning to within the next 12 months. The IW 2014 Private Cloud Survey shows that, among private cloud adopters using or planning to adopt public cloud services, 72% use or plan to use a hybrid model. Private enterprises are clearly taking public cloud mainstream.
But what about government agencies? Yes, their needs and constraints are often quite different from those of commercial enterprises, but ignoring cloud’s efficiencies is a nonstarter. This report details why, where and how government IT managers can move to the cloud.
I analyze the use of cloud services by government IT, including the requirements, executive initiatives and service qualifications, and auditing and procurement programs that make government cloud adoption unlike that in the private sector. Can government agencies be as aggressive as their commercial counterparts in moving infrastructure and applications to shared cloud services? Are the constraints on and requirements for government systems similar to those of heavily regulated industries such as finance and health care? What can government IT execs learn from these industries, and private enterprise more broadly, about cloud service adoption and how best to balance public and private in a hybrid cloud architecture? Is sharing private clouds among several agencies, or using a federal cloud brokerage, a viable option? Building a multitenant private cloud within the friendly confines of an internal government datacenter could be a win-win.
The report examines these questions and provides government CIOs and IT leaders with the information they should know as they migrate government IT to the cloud.
SDN, big data, and scale-out storage architectures have increased the complexity of datacenter convergence projects since InformationWeek last surveyed its readership on the topic. Hybrid clouds are on the horizon for 65%, but to get there, IT must sort out everything from server architectures (internal vs. network storage, blades vs. rack mount) to storage protocols (FC vs. iSCSI vs. NAS) to network management and administration (including SDN) to virtualization and cloud platforms (OpenStack, CloudStack, vCloud).
We decided to explore adoption of datacenter technologies that support convergence and how willing IT is to entertain proprietary specs versus waiting for standards bodies to ward off lock-in. Some key findings (full report available here):
- 19% of the 214 respondents to our 2014 Datacenter Convergence Survey say they are not looking to converge. The top reasons: no perceived business advantage and other projects having a higher priority, both cited by 32%.
- 73% of those respondents with data convergence plans say reducing costs is the top driver for adopting technologies that support convergence. The No. 2 response, building a private cloud, was selected by 30%.
- 68% say deploying an FCoE and/or iSCSI SAN allowed them (21%) or will allow them (47%) to eliminate Fibre Channel.
- 22% will devote more than 20% of their fiscal-year 2014 budgets to achieving datacenter convergence, virtualization, and private cloud; 7% will devote more than 30%.
- 20% have consolidated personnel with networking, storage, and server skill sets into one integrated unit versus 30% with separate teams. In our 2012 poll, 28% had consolidated.
In this report, I analyze the survey findings, discuss cloud specs to watch, recommend six places to standardize now, and provide four steps to further convergence goals.
This series on SDN products concludes with a look at Big Switch’s updated SDN strategy, VMware NSX, IBM’s hybrid approach, and Avaya’s focus on virtual network services.
My last post in this series on vendors’ SDN strategies looked at SDN products from Juniper, Dell, Brocade, and Alcatel-Lucent/Nuage. In this final post of the series, I examine how Big Switch, Avaya, IBM, and VMware approach software-defined networking.