If you are responsible for IT spending, both CapEx and OpEx, you should read this article. Changes to the IT landscape, with the potential to shake things up as much as virtualization and cloud computing have, are on the way if not already here. It's called hyper-convergence, and it will have profound ramifications for how IT, on the infrastructure side, is managed and staffed.
About twenty years ago you had servers running in the computer room, and each server had its own hard drive that stored that server’s data. This is our starting point, which was really the status quo for quite some time, from the early 1980s until around 2000.
Then along came server virtualization, which allowed multiple 'virtual servers' to run on one physical server. The virtualization software is generically called a 'hypervisor', and this market was and remains dominated by VMware, with old-line competitors like Microsoft (Hyper-V) and upstarts like Nutanix (Acropolis) nipping at its heels. And, along with this virtualization came storage devices capable of handling the massive amount of storage from multiple physical and virtual machines, called SANs and NASs.
Mostly known by their acronyms, SAN stands for "storage area network" and NAS for "network-attached storage". In other words, these devices run on their own network, the network that accesses data storage, as opposed to the user-facing networks such as a LAN or WAN. Servers had to access these different networks separately, from separate switch ports, because they were not the same type of network; they ran different 'protocols'. The storage networks typically ran Fibre Channel over optical cabling, and they were faster in large part because Fibre Channel assumes a lossless network: it avoids much of the loss-recovery overhead built into the Ethernet networks that are the backbone of a LAN or WAN, which serve the user community.
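The practical difference between the two is the unit of access: a NAS serves whole files over the network, while a SAN serves raw disk blocks that the connected server formats with its own file system. A toy sketch of that distinction (pure Python, no real storage hardware; the class and method names are invented for illustration, while real systems speak NFS/SMB for NAS and FC/iSCSI for SAN):

```python
# Toy illustration of file-level (NAS) vs block-level (SAN) storage.
# All names here are invented for the sketch.

class ToyNAS:
    """File-level: clients ask for a path, the NAS returns file contents."""
    def __init__(self):
        self.files = {}

    def write_file(self, path, data):
        self.files[path] = data

    def read_file(self, path):
        return self.files[path]

class ToySAN:
    """Block-level: clients read/write fixed-size blocks by number;
    the server's own file system decides what the blocks mean."""
    BLOCK_SIZE = 512

    def __init__(self, num_blocks):
        self.disk = bytearray(num_blocks * self.BLOCK_SIZE)

    def write_block(self, n, data):
        assert len(data) == self.BLOCK_SIZE
        off = n * self.BLOCK_SIZE
        self.disk[off:off + self.BLOCK_SIZE] = data

    def read_block(self, n):
        off = n * self.BLOCK_SIZE
        return bytes(self.disk[off:off + self.BLOCK_SIZE])

nas = ToyNAS()
nas.write_file("/reports/q3.txt", b"quarterly numbers")

san = ToySAN(num_blocks=8)
san.write_block(2, b"\x01" * ToySAN.BLOCK_SIZE)

print(nas.read_file("/reports/q3.txt"))   # the NAS understands paths
print(san.read_block(2)[:4])              # the SAN only understands blocks
```

The point of the sketch is that the NAS knows what a file is, while the SAN hands back anonymous blocks and leaves their meaning to the server.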
This was revolutionary and cutting-edge up until about 2011, and is why there are so many fewer servers in computer rooms. This saves power and real estate, plus it's a lot easier to manage multiple virtual servers than multiple physical servers. Then the industry evolved again. First we had converged systems, and now we have hyper-converged systems, which, as the names imply, are converged back into one system again. However, despite how it looks at first glance, we're definitely not back to mainframes again; it's actually quite different and revolutionary. Plus, "the cloud", which is the ultimate 'virtual' system, is becoming more and more a part of businesses' IT landscape.
If you read about 'converged' systems on the Internet, it sounds like just a prepackaged solution from one vendor, which has a lot of advantages but isn't exactly revolutionary. OK, so the servers, storage devices, and network devices all run in a nice, prepackaged stack that fits neatly in a rack. Why is it called converged? What's converged about it? The answer is surprising. It's the least glamorous components, the networking devices, that are the heart of a converged system. In the HP world, where converged systems originated, the convergence happens in networking devices called "Virtual Connects". These devices converge, or combine, access to the local area network and the storage area network, using virtualization at the networking level as well as what's called FCoE, or Fibre Channel over Ethernet.
Storage networks tend to use Fibre Channel, and local area networks use Ethernet. Ethernet tolerates congestion and dropped packets, leaving higher-level protocols to detect and resend lost data; Fibre Channel assumes a lossless network and does not handle dropped frames well. The way these new networking devices work is that traffic heading for the storage area network can be sent over Ethernet because each Fibre Channel frame is wrapped in an Ethernet 'envelope'; once unwrapped, it is native Fibre Channel that the storage devices can accept. (This requires 'lossless' Ethernet extensions, so the wrapped storage frames are never dropped along the way.)
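The 'envelope' is an ordinary Ethernet frame whose EtherType field (0x8906, the value registered for FCoE) tells switches that the payload is a Fibre Channel frame. A simplified sketch in Python; the real FC-BB-5 encapsulation also carries a version field, SOF/EOF delimiters, and padding, all omitted here:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType value registered for FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a Fibre Channel frame in a (simplified) Ethernet envelope.

    Real FCoE framing (FC-BB-5) adds a version byte, reserved fields,
    and SOF/EOF delimiters around the FC frame; this sketch keeps only
    the 14-byte Ethernet header so the idea is visible.
    """
    header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return header + fc_frame

def is_fcoe(ethernet_frame: bytes) -> bool:
    """A switch classifies the frame by its EtherType field (bytes 12-13)."""
    (ethertype,) = struct.unpack("!H", ethernet_frame[12:14])
    return ethertype == FCOE_ETHERTYPE

frame = encapsulate(b"\xff" * 6, b"\xaa" * 6, b"<fibre channel frame bytes>")
print(is_fcoe(frame))  # True: the switch forwards it toward storage
```

The design point is that the storage traffic rides the same wire as LAN traffic, distinguished only by that EtherType, which is what lets one converged device replace separate LAN and SAN switch ports.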
This ability to carry Fibre Channel traffic inside Ethernet frames allows virtualization at the 'Virtual Connect' level, and hence now everything but storage is virtualized. The servers are virtualized, the network devices are virtualized, and the storage devices (the SANs) function in this virtual world. Therefore, each converged system can be managed virtually: not quite a "single pane of glass", but getting close.
For the record, some other advantages to converged systems are:
1. Fixes and enhancements are kept up to date, with patches applied as needed
2. The IT staff can be generalists rather than specialists in the various components, like servers, networking, and storage. This could very well mean a smaller staff overall.
3. Easier to manage
4. Easier to resolve issues (one vendor)
The cloud has been defined on the Internet plenty of times; the problem is that the definition keeps changing, due to evolving technology plus "evolving definitions", as more players enter the market and want to be able to say they operate "in the cloud". At this point it seems like a clean definition of the cloud is "a completely software-controlled computer room", more commonly known as a software-defined data center (SDDC), the pieces of which can reside anywhere: they can be in the AWS public cloud, scattered around the globe, or they can be in a private cloud housed down the hall or in someone else's computer room. Or, in the increasingly popular 'hybrid' arrangement, the cloud can be in both your computer room and the public cloud. This is where hyper-convergence comes in.
Hyper-convergence takes it one step further, to what seems the limit. It's the culmination of virtualization: all devices, no matter where they are or what they do (except for things like a UPS), are managed on one console, from "one pane of glass". You truly have a completely software-managed computer room rather than one where physical devices need to be managed individually. And cloud storage or compute resources can easily be incorporated as another device; on the console they appear as just one more option. This is why hyper-convergence is one of the hottest topics these days.
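"One pane of glass" essentially means a uniform management interface: every resource, whether a local VM, a SAN volume, or a public-cloud instance, is registered with the same console and driven through the same operations. A toy sketch of that abstraction (all names invented for illustration; real consoles do this through vendor APIs):

```python
# Toy model of a "single pane of glass": one console, one interface,
# many kinds of resources behind it. All names invented for the sketch.

class Resource:
    def __init__(self, name, location):
        self.name, self.location = name, location
        self.running = False

    def start(self):
        self.running = True

class LocalVM(Resource): pass
class SANVolume(Resource): pass
class CloudInstance(Resource): pass  # the cloud is just another device

class Console:
    """One console managing every resource through the same interface."""
    def __init__(self):
        self.inventory = []

    def register(self, resource):
        self.inventory.append(resource)

    def start_all(self):
        for r in self.inventory:
            r.start()

    def status(self):
        return [(r.name, r.location, r.running) for r in self.inventory]

console = Console()
console.register(LocalVM("erp-app-01", "computer room"))
console.register(SANVolume("erp-data", "computer room"))
console.register(CloudInstance("burst-worker", "public cloud"))
console.start_all()
for name, location, running in console.status():
    print(name, location, running)
```

The key design choice the sketch mirrors is that the console never cares where a resource physically lives; location becomes just another attribute, which is exactly what makes hybrid cloud feel like "one more option" on screen.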
The current pace-setter in hyper-convergence is a start-up called Nutanix, co-founded by former Google engineer Mohit Aron, who helped lead development of GFS (the Google File System). From one easy-to-use console an IT person can create or upgrade virtual machines in minutes, assign storage from any device on the network to any virtual server, create virtual storage area network connections, and access cloud resources if and when needed, all from "one pane of glass". There's even built-in AI (artificial intelligence) to tell you when something is wrong or needs to be addressed. In fact, Nutanix interfaces directly with Salesforce.com to immediately create trouble tickets as required.
Nutanix works with either major hypervisor, VMware's ESXi or Microsoft's Hyper-V, but, much to the dismay of both companies, it also now has its own hypervisor, Acropolis. Right now Acropolis doesn't provide anything more than Hyper-V or ESXi, but Nutanix didn't develop it for nothing, so it probably will soon. Hyper-convergence and Nutanix are behind many of the mergers and partnerships among competitors taking place right now. Don't believe it? Read this article from Business Insider. There's a war going on in the hyper-convergence world.
Personally, I wouldn't do anything major in infrastructure right now without checking out hyper-convergence. Entry prices are surprisingly low, so even SMB-class companies can hop on the bandwagon.
Versatile Consulting Services, backed by the top-shelf engineering and design services of Versatile Communications, Inc., offers CIO Advisory services, project financial analysis, software RFP and analysis, project management and implementations, security analysis and program development, and affordable packages for moving to Office 365, developing a cloud roadmap, and more. The author can be contacted at
or by calling 508-597-2857 (Direct) or 978-758-8516 (cell).
Copyright Crow Hill Associates, LLC; which is solely responsible for the contents of this article