Doing “IT” The Hard Way, or Why You Should Be Hyper Converged

So to counterbalance my other piece from this week on some of the challenges that the Hyper Converged Infrastructure space faces, I thought I would argue the flip side of the coin: if you are not looking to take a Hyper Converged First approach to your infrastructure needs, you are throwing away your money.

Doing “IT” The Hard Way

Let's face it, Virtualization First tends to be the dominant strategy for most companies today unless they have a very specific kind of workload for which virtualization is not a good fit. Be it VMware, KVM, Hyper-V, Proxmox, or CoreOS/Docker, the ability to create flexible, portable workloads in the data center is the de facto standard for application deployment and management. So why are we continuing to build IT infrastructure that fits the old siloed models of delivery? If we are virtualizing everything in the server application space, why are we not doing the same for the infrastructure itself?

As I said in my previous post, Hyper Convergence brings simplicity. There is nothing simple about the Legacy IT Stack; quite to the contrary, if you look at the image above, every one of those hardware pieces is simply a commodity x86 server running Linux. Each has its own management console, power/cooling, support contract, and training requirement. None of these components has an inherent awareness of the others. So as we look to provide virtualized workloads to our end users, why are we not also looking to virtualize the infrastructure along with them? This is the promise of Hyper Convergence. It reduces the high cost and high complexity of a legacy IT model that was designed for the previous era of computing, and it offers an easy, repeatable, and scalable delivery model without the increased complexity of a disaggregated infrastructure stack.

Competing Approaches To The Modern Data Center

For the heavily virtualized customer base, those in the 70-100% virtualized range, or for customers looking to address that specific infrastructure, there are two approaches that will come to dominate the decision-making process in the future, and those approaches hinge very much on the number of workloads to be deployed. I interviewed Chad Sakac at VMware Partner Exchange last week around the introduction of VSPEX:Blue for Cisco's Engineers Unplugged, and he laid out how EMC looks at their customer base. It falls very much in line with the thinking behind all of the various Hyper Converged players: 1,000 VMs and below, Hyper Converged; 1,000 VMs and above, Rack Converged.

Now, the Hyper Converged vendors will tell you there is no actual upper limit to their capabilities, but I believe this "number of VMs" approach to infrastructure makes sense because of the diversity of workloads organizations deploy once they reach a certain VM density. Unless we are talking strictly about VDI, organizations with over 1,000 virtual machines in my experience have a much greater diversity of applications, which would cause them to look at the engineered Rack Scale solutions instead of Hyper Converged. And yes, one of these days I will go into much more depth about Rack Scale, just not today.

Expanding Opportunities and Use Cases


Having been part of an initial go-to-market sales team in the early stages of the Hyper Converged marketplace, I've been afforded a unique opportunity to see what early successes looked like, as well as what kind of dynamics are at play as VARs start to adopt a Hyper Converged play as part of their product portfolio. After the first 6 months of evangelizing and bringing the message to the masses (I presented to over 500 customers in a single year), it became clear that some of our initial thoughts on where early success would materialize were misplaced, while other areas where we had not initially focused turned out to be amazingly successful. As a customer looked at an initial project, perhaps a QA or Dev cluster, the ease of use, simplicity of deployment, and all-in-one nature of Hyper Converged Infrastructure afforded them the flexibility and agility to provision resources much faster than their prior legacy stack solutions. That in turn led to more business, and an expansion of the number of units within the org. What mattered most in many instances was to observe the systems operating in house, gain the appropriate level of trust in the system and the vendor, and finally gain widespread adoption and acceptance. It's very much like a virus in how it can penetrate the skin of the organization, spread, and in many instances take over entirely.

Destroy All Silos

One of the final items I wanted to touch on in regard to HCI is a benefit that I don't think gets enough focus: the opportunity to flatten the IT workspace and reduce the silo effect that is a direct result of the Legacy IT Stack. After 15 years working in the end user space, and as a member of various silos myself, I came to loathe them, primarily because of the inefficiencies they injected into day-to-day operations, the constant turf battles, and the prolonged, drawn-out impact they had on achieving any form of technical agility. The Legacy Stack and its individual components are the prime reason we have these silos in the first place, so I can see the adoption of Hyper Convergence being a first step toward their removal.

Now, I'm fully aware that for certain organizations the complexity of their operations lends itself well to a silo system, but I submit that for that type of organization, Hyper Convergence is more than likely not going to be a good initial fit. I also understand that subject matter experts, or team members with a primary focus on one technological aspect, will always be part of the IT space. Still, this doesn't mean that cross-functional teams should not be a primary goal for smaller organizations; their ultimate benefit is a workforce with a far stronger cooperative group dynamic. The days of being "the storage guy" or "the server guy" are fast approaching their end, and in many respects they are already over for many groups. Once again, the simplicity of the Hyper Converged approach takes complexity out of the foundation of IT infrastructure, to the benefit of the entire team.

Hyperactive Growth

My prior piece on the "hype" around Hyper Converged generated a lot of discussion, and that was the main intent of writing it. I had always planned to provide a piece on the opposite side of the spectrum to even things out. In a very short time period, Hyper Convergence has gone from a new concept with a handful of startup companies participating to a hyper growth mode, where the major OEMs are now scrambling to bring their own unique solutions to market to make up for lost ground. It's an exciting thing to witness, and I for one enjoy being able to comment on and discuss it. As always, commentary is welcome. Cheers for now.


Posted in Enterprise Tech, HyperConvergence, Nutanix, Rack Scale, SimpliVity | 2 Comments

DevOpolis Edition #1: Hello World

So with the number of VMware vExperts topping 1000, it was becoming a little unwieldy to continue to scan through a thousand blogs and curate a vExpert Weekly Flipboard magazine, so alas, I let it wither and die on the vine. If anyone else in the community has the time to carry on with the concept, please let me know.

Now I find myself working in a different area and focus, much less standard virtualization and much more Cloud/DevOps oriented. I read a lot and am always trying to educate myself on the various platforms and products relevant to this space, and as such I'm creating a new Flipboard magazine called DevOpolis that will be dedicated to a host of technologies, but will primarily focus on DevOps, containerization, automation, Cloud Computing, OpenStack, OpenCompute, and other Open Source efforts. So for your reading pleasure, please check out DevOpolis Edition #1: Hello World.

As usual, feedback is very welcome. If you would like to have your blog added to my list, just shoot me a message on Twitter @Bacon_Is_King. Hope you enjoy.

Posted in Cloudy, DevOpolis, DevOps, Enterprise Tech | Leave a comment

With Hyper Converged: Don’t Believe the Hype???

It's about time for a contrarian viewpoint around Hyper Convergence, and what better place to put it forth than from one of its prime cheerleaders (me). With EMC entering the Hyper Converged marketplace with VSPEX:Blue, and the impending releases of the major VMware OEM partners and their EVO:Rail solutions, I thought I would take a hard look at the state of the Hyper Converged market 5 years on and try to give a fair analysis of where it's being successful and where it's not.

Disclaimer: I have no proprietary information to share here; all of this comes from my observations of public statements made by various customers and vendors in this space.


Nearly all of the Hyper Converged vendors will pitch that the TAM (Total Addressable Market) is huge. The three claims from the analyst community that I have seen are Gartner: $6 Billion by 2014, IDC: $17 Billion by 2016, and Forrester: $40 Billion by 2018. That's a pretty big stretch and an amazing growth rate if it turns out to be true. The reality, on the other hand, is something totally different. Let's take the recent IDC analysis of the Hyper Converged market. This graph has Nutanix clearly in the lead, a point that I find to be 100% valid. It also has them with roughly 5X the sales of their nearest competitors, SimpliVity and Scale Computing, which I would also say is valid. Let's face it, they have a good head start on everyone else in the marketplace, having had product shipping since roughly 2011. Nutanix is claiming a run rate of around $300 Million.

Do you see the rest of the players in this space accounting for the other $5.4 Billion that Gartner claimed would be the TAM for 2014? No; at best you can put the rest of the market's combined run rate at roughly $500 Million per year. Now, for an emerging market that's fairly impressive, but let's put it into a different context: $500 Million is roughly the size of the entire Fibre Channel HBA market worldwide. For all the claims that Hyper Convergence is taking over the data center, one has to ask how that is so if these companies, the pioneers in the space, are not generating sales that would make a dent in the multi-billion dollar server/storage market, let alone eclipse a legacy storage transport market?

Now, to be fully fair, I'm pretty sure that the analyst firms are bundling the standard Converged Infrastructure players into this market space as well. The devil's always in the details, and if we were to include VCE/FlexPod/Exadata/PureSystems etc. in the mix, that $6 Billion number looks a lot more plausible than my estimated $500 Million. VCE hit a run rate of $2 Billion this year per their claims. For those playing at home, that's 4X the entire Hyper Converged market today.

So Easy A Caveman Could Do It.

Hyper Converged vendors are first and foremost selling simplicity. Nearly all of their sales pitches discuss how simple it is to deploy, manage, and configure (sounds like a VMware course title), and from what I've seen they deliver on that promise. I took the EVO:Rail challenge at VMworld Europe and knocked it out in 15:06, a full 6 seconds past VMware's 15-minute claim, including reading the document and doing error correction in the data entry portion. So what's the problem with that? Well, all of the Hyper Converged systems are channel sales driven; that means VARs (Value Added Resellers) are your go-to-market vehicle that bolsters your sales team. None of these companies are taking the business direct; even EMC is saying that VSPEX:Blue will be 100% channel.

Riddle me this, if I’m a VAR, and I already get slim margins on Hardware Sales (anywhere from 3-7%) where do I get to make the bulk of my money? That’s right, Services. What happens to all those billable hours that I used to charge for installing and designing the “legacy infrastructure” for my customers as their “Trusted Advisor” when I sell them a Hyper Converged system that they can have up and running in 15 minutes on their own?
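To make the math concrete, here's a back-of-envelope sketch; every number in it (deal size, margin rate, billable hours, hourly rate) is hypothetical, chosen only to illustrate the shape of the problem rather than any vendor's actual economics:

```python
# Back-of-envelope VAR deal economics (all numbers hypothetical).
# A reseller's take on a deal is thin hardware margin plus the
# services hours billed for design and installation.

def var_revenue(hardware_price, margin_rate, billable_hours, hourly_rate):
    """Total reseller revenue on a single deal."""
    return hardware_price * margin_rate + billable_hours * hourly_rate

# Legacy stack: $150k of gear at 5% margin, plus ~40 hours of services.
legacy = var_revenue(150_000, 0.05, 40, 200)   # 7,500 + 8,000 = 15,500

# Hyper converged: same gear price, but a 15-minute install leaves
# almost nothing billable.
hyper = var_revenue(150_000, 0.05, 1, 200)     # 7,500 + 200 = 7,700

print(f"legacy deal: ${legacy:,.0f}  hyper converged deal: ${hyper:,.0f}")
```

The hardware margin line is identical in both cases; it's the services line that collapses, which is exactly the revenue the VAR is asking how to replace.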

This is a question I've posed to many people, and no one seems to have a compelling answer so far. I know there are huge margins in the Hyper Converged space. These products are not cheap, but the OEMs are the ones making the bulk of the money here, not the channel partners who are selling it. Sure, there will be SPIFs and incentives to push product, that's the nature of the beast, but what about those sweet, sweet billable hours that are so much gravy ladled over the IT sales process? I'm sure there is a good answer out there somewhere, but so far I've not heard a compelling one, and in my own experience one of the first questions that arises during the initial conversation with VARs I was working to recruit was: "How do I make up for this lost revenue?" Good question. Anyone have a good answer?

The Mis-Match Game

The last point I wanted to touch on was what I call the "Mis-Match Game": how do the Hyper Converged players address the mismatch between storage and compute ratios that invariably appears with an all-in-one approach? How do customers still find a way to leverage their investments in the legacy stack that they have depended on for the last two decades? Some of the vendors in this space attempt to do this by opening up the storage functionality to external compute, but that only lets you scale compute resources. The challenge ends up being: what if I want just a little bit of storage, or a whole bunch of compute? Certainly for some IT shops it will not be a major concern to scale by one additional node, but what if that scalable unit is 4, as it currently is with EVO:Rail? Invariably there is a mismatch, and customers end up buying more than they truly need, when as an industry we have been pitching "right sizing" of environments since the dawn of time. This can be a fairly hefty financial impact for a target market that is currently geared towards the SMB/mid-market customer, whose IT budgets are fairly tight to begin with.
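The effect of the scaling unit is easy to sketch. The node counts below are hypothetical; the 4-node increment is the EVO:Rail figure mentioned above:

```python
import math

def purchase_plan(nodes_needed, scaling_unit):
    """Nodes you must buy when the smallest purchasable increment
    is `scaling_unit`, plus how many of them you didn't need."""
    bought = math.ceil(nodes_needed / scaling_unit) * scaling_unit
    excess = bought - nodes_needed
    return bought, excess

# Needing 5 nodes with a 1-node increment strands nothing...
print(purchase_plan(5, 1))   # (5, 0)
# ...but with a 4-node increment you buy 8 and strand 3.
print(purchase_plan(5, 4))   # (8, 3)
```

That stranded capacity is exactly the "buying more than you need" problem, and it stings most when the increment is large relative to the size of the environment.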

This poses a significant challenge to customers and vendors alike, and to be honest, I find some of the arguments against this point to be very weak. What the customer makes up for in reduced complexity, they lose in terms of a very rigid infrastructure model that is not well adjusted to unforeseen change or mutation. I’ll tell you that speaking from my own experiences, the holy grail of an “All-in-One Datacenter” sounds great on paper to many customers, but it doesn’t always end up being so elegant in practice. As this space continues to mature, it will be interesting to see how the current crop of vendors attempt to address this point.

So which way do we go with Hyper Convergence?

None of the above is terminal for the Hyper Converged space. To the contrary, this space didn't even exist 4 years ago, it's starting to build an attractive run rate of business, and there is significant validation of the Hyper Converged concept in the entry of EMC, HP, Dell, etc., who are always looking to increase their bottom lines.

Let's be real here, it takes time to change 20+ years of how IT has done business, and it won't happen overnight. I simply offer these examples as checkpoints against much of the hype that I see every day. And while it's easy to get caught up in the echo chamber of the influencer community in which I belong and play, customers tend to have a lot more riding on their infrastructure bets, namely their jobs. So they are going to cast a far more skeptical eye on these solutions, and in turn will be voting with their wallets.

Coming soon: Moving Beyond Hyper Converged, Rack Scale.


Posted in HyperConvergence, Nutanix, SimpliVity, Virtualization | 11 Comments

2014: Validation of Hyper Convergence – 2015 Hyper Converged Goes Mainstream

2014: Validation of Hyper Convergence

Looking back on 2014, I saw it as the year that Hyper Convergence came into its own as a legitimate data center technology. It's rare that I run into a customer in my day-to-day discussions who is not evaluating one of the current Hyper Converged vendors, or looking to adopt or develop a strategy for how Hyper Converged can become part of their IT infrastructure to some extent. I chalk this up to three main factors that continue to be the major challenges IT faces today: complexity, data growth, and scalability. The Hyper Converged systems today aim to address those three areas, but they are also seeking to become the standard building block for most virtualized workloads.

Delivering Simplicity

The Hyper Convergence promise is simplicity. That's the first and foremost value proposition provided to the end user. The ability to plug in, turn on, and deploy workloads in a time frame that surpasses the traditional multi-vendor reference architecture deployment model is a strong motivating factor for customers who are looking at the Hyper Converged appliance-based model. This is especially true if they have a strong understanding of their workload needs and are looking at a fairly static growth pattern. That is one of the key winning aspects of a Hyper Converged solution over engineered Converged Infrastructure plays like vBlock and FlexPod. Compared to the first generation of Converged Infrastructure systems, the design aspects of Hyper Converged systems are often a driving factor in adoption as well. You need 5,000 VDI seats? Simple: here is a 400-desktop appliance, go do the math. To some that may seem like an oversimplification, but that is indeed the approach many Hyper Converged vendors take when engaging with customers, and at times it's hard to argue against it. The ability to design around pre-determined chunks of compute and storage, i.e. the Lego data center approach, is another strong factor in favor of Hyper Convergence. The complexity of making all of the interwoven pieces of the legacy stack work together is abstracted away, removing complexity from the design. All of this points to the ability to deliver simplicity.
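And the "go do the math" exercise really is that small. Using the 5,000 seats and 400 desktops per appliance from the example above, the whole sizing design collapses to a round-up:

```python
import math

def appliances_needed(total_seats, seats_per_appliance):
    """The Lego-block sizing exercise: round up to whole appliances."""
    return math.ceil(total_seats / seats_per_appliance)

# 5,000 VDI seats on 400-desktop appliances:
print(appliances_needed(5000, 400))   # 13
```

Thirteen identical boxes, and the design conversation is over; that is the pitch in a nutshell, for better or worse.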

A Jack of All Trades



The "one size fits all" approach to the data center is not without its pitfalls. Sure, Hyper Converged solutions are simple to design, easy to deploy, and well suited to a majority of virtualized workloads, but I'd hazard a guess that most data centers are not simple homogeneous environments, and that is why, in the first two years of their arrival, the solutions have targeted the specific use cases of ROBO, QA/Test/Dev, and VDI. Much like the way virtualization made its inroads into nearly every IT shop on the planet, this new way of providing infrastructure is going after the low-hanging fruit, yet I see adoption and acceptance occurring at a more rapid pace than what VMware faced from 2004-2009. One could ask: if the appliance-based Hyper Converged solutions address all of our data center needs, then why are we not seeing them become the de facto infrastructure platform adopted today?

Well, this is where I get to throw out "it's complicated," because in theory Hyper Convergence can address almost any workload that is virtualized today. Still, the entrants in the space are young, and that means the focus will first and foremost be on the lower risk spaces of ROBO, Test/Dev, and VDI. This isn't necessarily a limitation of its capabilities, but more a realization that risk aversion tends to rule the day when it comes to production infrastructure. I do not expect this to be the case for 2015, especially since VMware is enabling so many OEM groups to become market participants without having to create much of anything other than the vessel of delivery.

Prediction Time: 2015 Hyper Convergence Goes Mainstream

On the last episode of In Tech We Trust, we discussed Hyper Convergence in detail as part of the 2014 Trends, and I offered some of my thoughts on where I see the space going. To elaborate a bit further, I would expect a few things to happen this year.

I'd look for Nutanix to IPO this year. Hitting the 5 year mark, with nearly 3 of those with a shipping product, bodes well for them, along with their published run rate of business hitting the $200M mark and a $2 Billion valuation. Stronger expansion into Europe and APJ, coupled with the reseller agreement with Dell, are also beneficial. I've also seen a toning down of some of the more caustic rhetoric from some of their team members, which tends to be a sign that the adults have taken a stronger leadership role. All of this points to a move to go public. I personally believe they should have launched their IPO prior to VMware announcing EVO:Rail, but the case can still be made that the momentum of VMware's EVO partners going into the market, and a general shift toward standard virtualized compute adopting more of the "Software Defined" aspects, will offer numerous reasons for the street to see investment value. Does this mean I think Nutanix is the clear winner in the Hyper Converged space? No, it does not, but for now they have the strongest mind-share, and a diverse enough offering to give them top billing.

SimpliVity should be looking towards a very large D round of funding that will facilitate further growth. While they do not get the kind of marketing and community buzz that Nutanix does, SimpliVity has a very robust platform and significant intellectual property in their portfolio. Their partnering with Cisco is a much smarter move in my view than the Nutanix/Dell relationship (more on that in another post), and will allow them to leverage the large UCS installation base and strong Cisco partner network. What remains to be seen is how much integration between the two platforms will be done, as there is no direct UCS integration right now, though I believe it would be prudent for them to work on that quickly. The original platform based on Dell worked to launch the product, but I see the Cisco relationship being the one that brings them a larger measure of success and awareness. I'd also look for them to announce secondary hypervisor support in 2015, most likely Hyper-V, which has good traction in the SLED/FED space (an area I see SimpliVity having much success in).

EVO:Rail will be great for VMware, but not so great for the OEMs that have latched onto it. For a 1.0 product that had the added benefit of being able to take a wait-and-see approach to the Hyper Converged market, I was a bit disappointed in what will eventually hit the street for customers. EVO:Rail is going to be a very expensive solution, even at the lower end of the OEM spectrum with manufacturers like Super Micro in the group. While not all the OEMs have pricing out, I've seen $180k as a starting point, which makes it one of the pricier solutions based on the number of VMs or virtual desktops currently supported. Yet even as I say this, it will be hard to beat the marketing muscle and customer reach that VMware can bring to bear when coupled with the major OEM partners. Once again, VMware has dictated direction to the OEMs, this time the server vendors. In the past it was directed at the storage companies with VAAI, but now there is greater profitability in pushing more of the "Software Defined" Data Center into customers' hands and holding a dominant position as the paid hypervisor of choice. EVO:Rail as it's constituted today is not a very solid Hyper Converged platform, but the fact that HP, Dell, HDS, SM, and others now have a "good enough" entrant to compete with Nutanix and SimpliVity (even at the cost of their own home-grown/partner solutions) is a big enough deal to make it the solution that SimpliVity and Nutanix will have to compete against, alongside the traditional status quo of legacy rack solutions. I plan to expand on this in greater depth in its own blog post soon.

That leaves the great unknown: the EMC Federation, and what comes from EMC/VCE for their entry into the Hyper Converged space. I think it's easy to see that they will use EVO:Rail as one vehicle, but I'd also look for some kind of new solution that leverages the ScaleIO technology. I'd go one further and say that their larger focus will be on the Rack Scale technologies.

Closing Thoughts

For a majority of virtualized workloads, and for customers looking to heavily virtualize, Hyper Convergence is a good fit. The "simplicity" message resonates strongly, as does the TCO argument, which in some cases can be a 3X reduction across the board. True, there are caveats about the maturity of the vendors involved, but if we see more VC money pour into the established and up-and-coming players, along with an IPO by one of the founding groups, many of those fears will subside. Most notably, though, the validation of the Hyper Converged approach by VMware entering the market with EVO:Rail, and the rapid adoption and partnering by the major server OEMs, point to this being a truly disruptive technology transition that is here to stay.

Cloud dynamics are going to play a factor in this market as well, as we see companies adopt a Hybrid Cloud approach or move into Public First or adopt a Full Private approach. Yet that battle space is still being identified, and looks to be more relevant to a specific subset of very large Enterprise customers.

2015 will be the year that many companies start to look at, or at least entertain, a "Hyper Converged First" approach for specific workloads and use cases. What remains to be seen is what impact Rack Scale and Hybrid Cloud will have on the designs (that will be a new set of blog posts all to themselves). And as a last note, the changes with technologies such as OpenStack, Docker, etc. will play a big part in how our next generation data centers are designed, and that is where I can see a robust "cloud ready" appliance based on the Hyper Converged model or Rack Scale approach becoming the de facto standard for infrastructure deployments.

I’ll have a lot more to say about this space as the year goes on. I’m working on several deeper dives into the Rack Scale side of things and hope to be able to provide some content soon. Thanks for reading.

Posted in Enterprise Tech, HyperConvergence, Rack Scale, Virtualization | 7 Comments

SNLDD Episode #10: Oh the Humanity

It's been a while since we have had a chance to get the gang together for a good SNLDD episode. While there is no real specific topic for this week's show, we will be playing a round or two of Cards Against Humanity, which will probably be terrifying, funny, and most certainly politically incorrect. At some point I'm going to build my own set of technology-related decks, but since this is a last minute thing like everything else we do, we will be playing with the current DevOps deck (click for a preview), and through the power of the inter-web we can play online via Pretend You're Xyzzy.

As with all things SNLDD expect technical errors, glitches, and confusion. Link to the Google+ broadcast will be forthcoming.

Link to tonight's episode to watch live.

Posted in SNLDD | 1 Comment

Hyper converged is just the tip, let me show you my rack

To say that the Hyper Converged appliance marketplace is "so hot" right now is more or less the understatement of the year. Nutanix, SimpliVity, Scale Computing, Nimboxx, Pivot3, EVO:Rail, and now Stratoscale (who just secured a $32M B round without even having a reference customer that I can find) are all vying for a piece of a very large pie.


Estimates place the Hyper Converged market at around $10-20 Billion (with a B) of TAM (Total Addressable Market), and frankly that may be just the tip. Given the amount of technology that can be condensed (and displaced) into a small appliance form factor, those data center dollars start to add up awfully fast. Reducing the annual spend on multiple point-solution appliances with a small data center building block that does all of it is appealing, and can significantly reduce TCO. Taken together, the disruption will continue as more organizations adopt a building block approach for a good portion of their virtualized workloads, and it makes sense that Hyper Converged offerings will be looked at as the key platform of adoption. Hyper Converged players are packaging the capabilities of multiple appliances into a single box. Storage arrays, storage networks, servers, backup appliances, cloud gateways, and WAN accelerators are the main focal points for takeout, but replication and backup software can be targeted as well, depending on how robust the platform is in terms of convergence. All told, the standard Hyper Converged system will replace around 4-8 appliances and/or software systems, consolidating them into a single appliance-based model that scales resource pools in a predictable manner. This is why, in a few short years, we have seen a space that didn't exist ramp up to be one of the hottest technologies available. But buzz and press are one thing; are customers buying into this model at the pace being claimed?

Don’t Believe The Hype

So let's slow down a bit and step back from all the hype. Hyper Convergence makes sense for a lot of organizations. It certainly has a natural appeal to the virtualized data center, and I see a very large install base of potential customers that would look to leverage Hyper Converged systems as their main "unit of infrastructure" (ooh, I should trademark that one). Other use cases for larger Enterprise customers are ROBO/DR targets and Dev/Test. The prime targets for a Hyper Converged play revolve around IT orgs where a handful of people wear all the hats, the IT generalist crowd, where infrastructure decisions are made by small groups that control the decision-making process. Alas, this is not where the bulk of IT dollar spend exists today. While that space has the sheer numbers of customers, its spending does not begin to approach what the big players go through in a year. There is a group of around 200 or so global companies with annual IT budgets in excess of $1 Billion, for which the Hyper Converged systems would simply be looked upon as toys.

With Nutanix claiming $200M in sales it's easy to think that they have "arrived," but when you see individual deals for specific infrastructure projects that are twice that size for a single customer, it's hard to fully buy that the data center of the future will be of the Hyper Converged model. In that space risk aversion rules the day, and the maturity of Hyper Converged systems simply is not there yet.

Let Me Show You My Rack

I've purposely left off bare metal workloads, and how Rack Scale addresses them while Hyper Converged cannot; alas, that is a post for another day. So while there is great benefit to Hyper Convergence, there are alternatives at play that address those 200 customers where the smaller appliance-based systems cannot, and now we are starting to see an emergence of Rack Scale infrastructure offerings. I think many people would place vBlock and FlexPod into this space, but in my mind those are still multi-vendor reference architectures / rack architectures that rely too heavily on multiple vendors to integrate, design, and deploy. The coming wave in rack scale architecture will come in a form factor that deconstructs the traditional multiple-appliance infrastructure approach into one that is more modular in nature. I recall reading "Rack Endgame" by Stephen Foskett a while back, as well as the follow-on article "Cisco's Trojan Horse"; both illustrate some of the potential that Rack Scale architecture can achieve, and both offer a first step toward understanding where rack-based convergence is moving. Look at the UCS M-Series modular servers as well as HP's Moonshot program as the first two major efforts to embrace this space and bring products to market. Both are looking to productize the concepts being developed by the Open Compute Project and Intel's Rack Scale Architecture. Not to be left out, I see EMC making a full effort to enter this space and address their current favorite customer, the one that wishes to utilize Hybrid Cloud in their organization. This is where the EVO:Rack based systems will come into play, not only for EMC, but for other large vendors as well if they choose to build systems around that specific architecture.

This entire post materialized in about ten minutes after reading that Stratoscale article, born of a deep-seated need to get these thoughts out of my brain, so I’ll leave today’s post at that. As always, there will be more to come.

Posted in Enterprise Tech, HyperConvergence, Rack Scale, Storage & Virtualization

Shameless Plug: VMworld Europe Tech Talk

For those of you who may have missed it live, I gave a brief tech talk on the state of the Converged Infrastructure marketplace, diving a bit into what I am calling the 3rd Wave of Convergence. Give it a watch and please shoot me any feedback. I have several things in the works in this space, as I’ve been trying to coalesce my last two years working in it into a more cohesive viewpoint. There are serious shifts coming in the data center, and hyper convergence is driving much of it, but not all. More thoughts to come.

Posted in HyperConvergence, Storage & Virtualization, VMworld

The Federation Is Now Complete

Before reading further, read this post from Chad Sakac. Now allow me to talk out my ass and inject wild speculation, prefaced by the fact that I’m speaking on my own behalf.

First: I think when most people read that post they will walk away with a good understanding of why Cisco reduced its share in VCE, and why EMC has essentially taken ownership of the group. For me, this is the realization of a strategy put into place several years ago with the movement of specific key leaders (Pat Gelsinger moved from EMC to VMware, Paul Maritz left VMware for Pivotal, etc.), and with the absorption of VCE the Federation is now complete.

Second: the obvious conclusion is that the EMC Federation’s strategic vision has been to become the complete IT provider for its customer base, and for the Federation to be complete, EMC II needs to move beyond being a storage vendor. The push will be primarily focused on Converged Infrastructure / IaaS / Hybrid Cloud offerings, as I believe they see those spaces as the direction IT is moving. Furthermore, a $2B run rate is significant, and EMC was taking in 70% of that revenue; why not capture more as the ecosystem shifts into the hyper converged and rack converged spaces? Why not be the end-to-end solution provider? You have all the pieces of the puzzle to build nearly any environment for any customer, so capitalize on that ability.

Third: EMC now has a Converged Infrastructure sales force that is battle tested and will be leveraged across the entire EMC Federation to focus on this new strategic shift. I have always viewed VCE as the official VAR (Value Added Reseller) for VMware, EMC, and Cisco, one that in turn chose strategic private VAR partners to leverage for deliverables. It’s far easier (and cheaper) to use a certified partner to do the grunt work and let your highly specialized sales force focus on the largest and most challenging deployments; it also allows you to grow much faster than you could on your own. This is why so many startups work the channel model in the beginning: it lets you reach a much larger commercial space. With the addition of the VCE team, the EMC Federation has a 2,000-strong, highly specialized converged infrastructure sales force that can move seamlessly between standard vBlock Converged Infrastructure and their future EVO:Rail/Rack based solutions. Converged Infrastructure sales can be highly nuanced; there tends to be a longer sales cycle and a deeper level of customer integration, customization, and education that must be imparted during the process. I think the EMC Federation realized this and has capitalized by bringing the VCE team on board.

Fourth: much of this strategic focus is born out of the general acceptance of Cloud services (public/private/hybrid), with the market reaching a level of maturity that reduces the risk of making a shift of this size and scope. VMware embracing OpenStack starts to look a lot more reasonable if you buy into the larger EMC Federation viewpoint that Hybrid Cloud is the future of IT. The relatively quick adoption of the DevOps mindset within IT organizations is the driving factor here, and the Hybrid model makes the most sense in my view. I think this is why Chad bolded the following: “the API and abstraction focus along with management, orchestration, automation.  It is NOT about the hypervisor, server, network, or storage themselves.” That is the foundational step that must be taken to move toward the agility that Hybrid Cloud can offer larger organizations. In essence it lays the groundwork for what I see as the holy grail for a lot of companies (and this is where Pivotal comes into the equation): the ability to make business-critical strategic decisions based on the information currently at your disposal, in near real time. This is why you see companies like GE investing in Pivotal and declaring that they are going to move all of their IT operations to the Cloud. It’s also why I think many people misunderstand the role of Pivotal in the larger EMC Federation strategy: the “big data” and analytics side of the equation is something other vendors do not have in house and must source externally. Once again, we come back to the EMC Federation looking to become a one-stop shop for IT.

So where does it all go from here? I just laid out some highly speculative thoughts above, and I will be curious to see how competitors in the space react. Obviously I see Cisco as having adopted a similar mindset but a different strategic approach, one that is more geared toward the public cloud in many respects.


Posted in DevOps, Enterprise Tech, HyperConvergence, Storage & Virtualization

SNLDD Episode #9: Can’t We All Just Get Along?

The storage wars of the mid/late 2000s were fairly legendary, at least from my outsider’s viewpoint. There was a period from around 2008-2011 when things went pretty hot and heavy, with the corporate bloggers/evangelists of the various factions (EMC, NetApp, HP, HDS, IBM) and even “independent” bloggers throwing in their lot and trying to muddy the waters. What made it entertaining from an outsider’s view was the concentrated effort to FUD up a blogger’s comments section with “anonymous” posts. The various warriors would inject themselves into discussions, and flame wars, trolling, and FUD would be the order of battle. Then, oddly, a period of quiet settled over the major players. I’m not saying it went away completely; you still tend to see it from the old guard whose business is being eaten away by the new upstarts, and as a means of generating buzz for an upstart’s products at the expense of a legacy vendor’s pitfalls or missteps.

Those of us on the vendor side of technology are accustomed to a certain level of open battle between vendors. FUD, for all its misery, is still a tried and true tactic, and nowhere more so than in Enterprise Storage. Competitive “battle cards” are released to the troops and tend to be the first-line weapons deployed by sales teams. Next come the landmines set in RFPs, and then the nuclear option: massive cost reductions. To me these are part and parcel of technical sales, and while they may seem dirty to the outsider, in the end competition benefits the consumer.

The recent slate of storage startups has brought the FUD wars back front and center, except now I’ve seen the focus turn more personal in scope. Sure, startups have always punched well above their weight and pushed the “David vs. Goliath” messaging, and frankly I can see a logical reason to move in that direction. For some, though, everything is personal. Everything is a chance to stomp the other guy or throw dirt. The ability to simply agree to disagree does not exist, and at its worst the attacks become personal in nature and completely unrelated to anything technical. There also tends to be a rash of fake Twitter accounts, aka “sock puppets,” used to astroturf against competitors. I find this childish, petty, and generally the sign of an insecure individual. Frankly, I’d like to see those accounts stopped and their owners exposed.

So this week on SNLDD we are going to discuss a few things:

  • What’s considered “fair game” in competition
  • Why go “personal” and is there any benefit to doing so
  • What crosses the line
  • Sockpuppets, fake accounts, and general douchebaggery
  • Can’t we all just get along?


Posted in FUD Wars, Social Media, Storage, Tech Marketing

In Tech We Trust

So I’ve been quiet on the blog as of late, for several reasons. The first few weeks at Cisco have been very busy, and I’ve spent considerable time getting ingrained in the Cisco culture and learning how very large organizations work. Let’s just say the transition from a company of 350 people to one of 74,000 takes a bit of adjusting. That said, I have a ton of stuff to write about and will soon be sitting down to commit fingers to keyboard.

Now, I have become involved in a new podcast venture called In Tech We Trust, with Nigel Poulton, Hans De Leenheer, Marc Farley, and Rick Vanover. It’s a group show where we discuss technical trends, tech news, and various technology banter.

This week’s episode just dropped so go give it a listen as we discuss:

  • Should EMC sell off VMware
  • Would an HP + EMC mega-merger have been a good thing
  • The value of R&D in big companies
  • Google’s new mega data center planned for The Netherlands
  • NetApp’s secret launch of FlashRay
  • OpenStack

I’m really excited about this, as I’ve enjoyed listening to many podcasts over the years, and it’s somewhat surreal to now be on a show with some of the people I’ve spent the last few years listening to. Being part of this show with some of the smartest people I know is an honor.

Follow Us on Twitter: @InTech_WeTrust

Posted in InTechWeTrust, Podcasts