Doing “IT” The Hard Way, or Why You Should Be Hyper Converged

So, to counterbalance my other piece from this week on some of the challenges the Hyper Converged Infrastructure space faces, I thought I would argue the flip side of the coin: if you are not looking to take a Hyper Converged First approach to your infrastructure needs, you are throwing away your money.

Doing “IT” The Hard Way

[Image: the Legacy IT Stack]

Let’s face it, a Virtualization First strategy tends to be the dominant one for most companies today, unless they have a very specific kind of workload for which virtualization is not a good fit. Whether it is VMware, KVM, Hyper-V, Proxmox, or CoreOS/Docker, the ability to create flexible, portable workloads in the data center is the de facto standard for application deployment and management. So why do we continue to build IT infrastructure that fits the old siloed models of delivery? If we are virtualizing everything in the server application space, why are we not doing the same for the infrastructure itself?

As I said in my previous post, Hyper Convergence brings simplicity. There is nothing simple about the Legacy IT Stack; quite the contrary. If you look at the image above, every one of those hardware pieces is simply a commodity x86 server running Linux. Each has its own management console, power and cooling, support contract, and training requirement, and none of these components has an inherent awareness of the others. So as we look to provide virtualized workloads to our end users, why are we not also looking to virtualize the infrastructure along with them? This is the promise of Hyper Convergence: it reduces the high cost and high complexity of a legacy IT model designed for the previous era of computing, and it offers an easy, repeatable, and scalable delivery model without the added complexity of a disaggregated infrastructure stack.

Competing Approaches To The Modern Data Center

For the heavily virtualized customer base, those in the 70–100% virtualized range, or for customers looking to address that specific part of their infrastructure, two approaches will tend to dominate the decision-making process going forward, and the choice between them hinges very much on the number of workloads to be deployed. I interviewed Chad Sakac at VMware Partner Exchange last week, around the introduction of VSPEX:Blue, for Cisco’s Engineers Unplugged, and he laid out how EMC looks at its customer base. It falls very much in line with the thinking behind all of the various Hyper Converged players: 1,000 VMs and below, Hyper Converged; 1,000 VMs and above, Rack Converged.

Now, the Hyper Converged vendors will tell you there is no actual upper limit to their capabilities, but I do believe this “number of VMs” approach to infrastructure makes sense because of the diversity of workloads organizations deploy once they reach a certain VM density. Unless we are talking strictly about VDI, organizations with over 1,000 virtual machines have, in my experience, a much greater diversity of applications, which would lead them to look at engineered Rack Scale solutions instead of Hyper Converged. And yes, one of these days I will go into much more depth about Rack Scale, just not today.
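
To make that rule of thumb concrete, here is a minimal, hypothetical sketch in Python. The 1,000-VM line and the VDI carve-out come straight from the discussion above; the function name and parameters are my own invention, and a real sizing decision obviously involves far more than a VM count.

    def suggest_form_factor(vm_count, vdi_only=False):
        """Rough rule of thumb only: below ~1,000 VMs (or a dedicated VDI
        farm), Hyper Converged nodes are usually the simpler fit; above
        that, workload diversity tends to push buyers toward Rack Scale."""
        if vdi_only or vm_count <= 1000:
            return "hyper converged"
        return "rack converged"

    print(suggest_form_factor(400))                  # hyper converged
    print(suggest_form_factor(2500))                 # rack converged
    print(suggest_form_factor(2500, vdi_only=True))  # hyper converged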

Expanding Opportunities and Use Cases

[Image: land and expand]

Having been part of an initial go-to-market sales team in the early stages of the Hyper Converged marketplace, I was afforded a unique opportunity to see what early successes looked like, as well as the dynamics at play as VARs start to adopt a Hyper Converged play as part of their product portfolios. After the first six months of evangelizing and bringing the message to the masses (I presented to over 500 customers in a single year), it became clear that some of our initial thoughts on where early success would materialize were misplaced, while other areas where we had not initially focused turned out to be amazingly successful. As a customer takes on an initial project, perhaps a QA or dev cluster, the ease of use, simplicity of deployment, and all-in-one nature of Hyper Converged Infrastructure afford the flexibility and agility to provision resources much faster than their prior legacy stack solutions did. That in turn led to more business and an expansion of the number of units within the organization. What mattered most in many instances was to observe the systems operating in house, gain the appropriate level of trust in the system and the vendor, and finally reach widespread adoption and acceptance. It is very much like a virus in how it can penetrate the skin of the organization, spread, and, in many instances, take over entirely.

Destroy All Silos

One of the final items I wanted to touch on with regard to HCI is a benefit that I don’t think gets enough focus: the opportunity to flatten the IT workspace and reduce the silo effect that is a direct result of the Legacy IT Stack. After 15 years working in the end-user space, and as a member of various silos myself, I came to loathe them, primarily because of the inefficiencies they injected into day-to-day operations, the constant turf battles, and the prolonged, drawn-out impact they had on achieving any form of technical agility. The Legacy Stack and its individual components are the prime reason we have these silos in the first place, so I can see the adoption of Hyper Convergence being a first step toward their removal.

Now, I’m fully aware that for certain organizations the complexity of their operations lends itself well to a silo system, but I submit that for that type of organization, Hyper Convergence is more than likely not going to be a good initial fit. I also understand that subject matter experts, and team members with a primary focus on one technological area, will always be part of the IT space. Still, cross-functional teams should be a primary goal for smaller organizations, and their ultimate benefit is a workforce with a far stronger cooperative group dynamic. The days of being “the storage guy” or “the server guy” are fast approaching their end, and in many respects they are already over for many groups. Once again, the simplicity of the Hyper Converged approach takes complexity out of the foundation of IT infrastructure, to the benefit of the entire team.

Hyperactive Growth

My prior piece on the “hype” around Hyper Convergence generated a lot of discussion, and that was the main intent of writing it. I had always planned to provide a piece on the opposite side of the spectrum to even things out. In a very short time, Hyper Convergence has gone from a new concept with a handful of startups participating to a hyper-growth mode, where the major OEMs are now scrambling to bring their own solutions to market to make up for lost ground. It is a very exciting thing to witness, and I for one enjoy being able to comment on and discuss it. As always, commentary is welcome. Cheers for now.

 


2 Responses to Doing “IT” The Hard Way, or Why You Should Be Hyper Converged

  1. 1. Most IT environments have VMs of different sizes. Some consume a lot of disk and CPU; others are idle. Thus VMs on a converged appliance don’t exhaust disk and CPU simultaneously, so another appliance needs to be added to scale while the original appliance is not fully utilized. It is better to manage discrete resources that are tuned for their specific needs.
    2. Storage requires in-memory data structures for address-space management and metadata, and it requires processor and network resources for replication and recovery. When these resources are co-located on one appliance with application workloads, they contend with one another, and organizations may only get a fraction of the available performance.
    3. In data centers, storage and compute have different hardware refresh cycles and are refreshed on demand (storage is often purchased up front and consumed over time). Why deploy a converged architecture that requires refresh and replacement in equal proportion?
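
    A back-of-the-envelope illustration of point 1, with an entirely hypothetical node spec and workload (a minimal Python sketch, not anyone’s actual sizing data):

        import math

        # Hypothetical appliance spec: each node ships CPU and disk together.
        NODE_VCPUS = 48
        NODE_TB = 20

        def nodes_needed(required_vcpus, required_tb):
            """The scarcer dimension drives the node count; the other is stranded."""
            return max(math.ceil(required_vcpus / NODE_VCPUS),
                       math.ceil(required_tb / NODE_TB))

        # A disk-heavy estate: only 60 vCPUs of demand, but 200 TB of data.
        nodes = nodes_needed(60, 200)         # 10 nodes, driven by disk alone
        stranded = nodes * NODE_VCPUS - 60    # 420 vCPUs bought but sitting idle
        print(nodes, stranded)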

  2. I’d agree with Jonathon for those companies that are deploying HCI (also called Software Defined Storage by some, to borrow another acronym and “next new” thing) as an appliance. The promise of HCI is to deliver a web-scale model to ordinary IT infrastructure: now I don’t outgrow my storage controller and have to forklift it for something new; I just add another node. Things like EVO Rails have made the scaling factor lumpy, and how lumpy depends completely on how many nodes are right for you – the lumps are most noticeable and objectionable at the low end of the spectrum. In the end, though, it is still all about the applications, which is why organizations give any money at all to IT.
