Finding the sweet spot with ESXi 5 Licensing

As most of you know, VMware decided to change their licensing model for vSphere 5 just prior to VMworld 2011 this year. The results were less than positive. I think the initial thread in the VMTN forums was close to 133k views by the time VMware decided to change the initial scheme and raise the entitlements across the board.

A little history:

If you’re not familiar with the new licensing entitlement breakdown, the revised edition is here.

I think customers were right to be alarmed at the initial low entitlements per socket. The documentation that accompanied the original entitlements assumed consolidation ratios of around 5:1, which may have been the case in 2006, but today many customers are achieving ratios of 35:1 and higher depending on the workload being virtualized. In my organization I see roughly 30:1 consolidation per host, and I expect to reach 45:1 with additional per-host memory upgrades in the near future. The increased entitlement went a long way toward quelling the uprising that resulted.

As an FYI, Rynardt Spies of VirtualVCP.com (@rynardtspies) has an excellent vSphere licensing calculator that was very helpful in determining pricing.

Birth of a Virtual Environment:

For the record, I have learned a great deal in the last two years when it comes to virtualization and host sizing. My initial VMware hosts were dual-socket, dual-core IBM x3650s with 12GB of RAM running ESXi 3.5. I had migrated off of Microsoft Virtual Server 2005 and onto “free” ESXi 3.5 as a proof-of-concept exercise to prove to my manager that VMware was going to be the better virtualization play, as well as to show that I could do the work myself without having to engage outside consultants. My organization is very cost-conscious and it is difficult to get budget for many of the IT-based projects that we take on. Despite the huge ROI we could achieve with virtualization, there was serious pushback from upper IT management and a great deal of skepticism and fear.

Currently I manage a small virtualization environment and we are primarily an IBM shop. I have three production data centers, separated out by host type and location. Our first true production cluster was two IBM x3850 M2 quad-socket, quad-core systems with 128GB of RAM each. We use 8Gb FC for datastore connectivity and 10Gb Ethernet for the LAN. At the time I purchased these systems, they were the largest x86 machines in our environment, eclipsed only by our Power5 and iSeries.

As we started P2V’ing more and more systems, and as requests for new servers came in, the need for additional hosts to support the ever-growing environment became clear. At that time the new Nehalem-based chipset was being finalized, and we opted to wait until the IBM X5 series of servers was released before we purchased new hosts. A second production cluster was introduced with two-socket, 8-core x3690 X5 servers with 128GB of RAM each. We were very pleased with the performance of the new X5 servers, and noticed that they were far more efficient than our older four-socket xSeries servers.

As with many virtualized environments, we are memory constrained, to the point that deploying additional VMs will have to wait until something changes. The graph above was the impetus for this post.

Now, when I graph memory utilization per socket, the real issue becomes exposed.

 

As you can see, some servers perform better than others when it comes to memory utilization per socket. For me, the idea is to get as much bang for your buck as you can. The x3690 systems are currently utilizing 46GB of memory per socket, versus 13GB for the four-socket x3850s. Part of this is simply the higher socket count, but when you look at CPU utilization as well, the four-socket systems ultimately prove less efficient and far more costly than the two-socket systems that outperform them.

Now, given the changes to the VMware licensing, my goal is to utilize the full memory entitlement per socket on my ESXi hosts. I want to be able to deploy enough VMs per host to eat up the entire 192GB vRAM entitlement for a two-socket host, and still allow for N+1 redundancy in the cluster, without incurring a vTax cost. Thus, the sweet spot (the arithmetic is sanity-checked in the sketch below the list):

  • 4 ESXi hosts with 256GB of RAM each
  • 192GB active on each system takes full advantage of Enterprise Plus licensing
  • The spare 64GB of capacity on each host provides for N+1 across 4 hosts
  • 4 hosts provide for approximately 192 standard servers at 4GB of RAM each
  • A consolidation ratio of 48:1 (best case, and not likely with real-world workloads)
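
The arithmetic behind those bullets is simple enough to sanity-check in a few lines. Here is a minimal sketch in Python; the 96GB-per-socket Enterprise Plus entitlement, the 256GB hosts, and the 4GB "standard server" size are simply the figures assumed in this post, not output from any VMware tool:

    # Sanity check of the sweet-spot bullets above.
    # Assumed figures: 96GB vRAM per Enterprise Plus socket (revised scheme),
    # 2 sockets per host, 256GB of physical RAM per host, 4GB per standard VM.
    HOSTS = 4
    SOCKETS_PER_HOST = 2
    RAM_PER_HOST_GB = 256
    VRAM_PER_SOCKET_GB = 96
    STANDARD_VM_GB = 4

    licensed_vram = HOSTS * SOCKETS_PER_HOST * VRAM_PER_SOCKET_GB  # 768GB for the cluster
    active_per_host = licensed_vram // HOSTS                       # 192GB active per host
    spare_per_host = RAM_PER_HOST_GB - active_per_host             # 64GB headroom per host
    surviving_ram = (HOSTS - 1) * RAM_PER_HOST_GB                  # 768GB left if one host fails
    standard_vms = licensed_vram // STANDARD_VM_GB                 # 192 VMs at 4GB each
    consolidation = standard_vms // HOSTS                          # 48 VMs per host

    # N+1 with zero excess: the three survivors exactly cover the licensed vRAM.
    assert surviving_ram == licensed_vram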

 

Arriving at this conclusion:

I’m no Excel wizard, but I’ve tried my best to determine exactly where the price point is for our organization when it comes to getting the most out of our existing hosts and planning for future host purchases.

Here I took four different host/memory configurations to see at what point I reach the sweet spot where I can support N+1 while utilizing the full memory entitlement for each socket, without a tremendous amount of surplus memory being wasted. The idea is to have zero excess memory while utilizing the full licensed amount during a host failure.

 

I had to choose 512GB of RAM per host as my upper limit because that is the max for the X5 servers without going to an external memory tray.

For my IBM systems, the prices reflect what I’ve paid in the past and what I’ve been quoted for future purchases, so your prices will of course vary. This is by no means 100% accurate, but it does give a good indication of what I would expect to pay given our current environment.

The red line is where N+1 comes into effect. A three-host cluster will need 384GB of RAM per host in order to utilize the full licensed vRAM per socket and sustain the loss of a single host. A four-host cluster needs 256GB, as does a five-host cluster. It’s not until we reach six hosts with 384GB of RAM per machine that we get the biggest impact, with the ability to lose half the cluster and still have enough capacity in the remaining systems to run all of the licensed virtual machines. Still, that’s a lot of unused physical memory.
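
For anyone who wants to reproduce that red line without my spreadsheet, here is a minimal sketch of the same exercise. It assumes the 96GB-per-socket entitlement, two sockets per host, and per-host RAM options of 128/256/384/512GB (512GB being the X5 ceiling mentioned above); the exact configurations in my spreadsheet may differ:

    # Smallest per-host RAM size that lets the N-1 surviving hosts hold the
    # cluster's full licensed vRAM (the N+1 "red line").
    # Assumptions: 96GB vRAM per socket, 2 sockets per host, and the RAM sizes below.
    VRAM_PER_SOCKET_GB = 96
    SOCKETS_PER_HOST = 2
    RAM_SIZES_GB = [128, 256, 384, 512]   # 512GB = X5 max without the external memory tray

    def n_plus_1_config(hosts):
        licensed_vram = hosts * SOCKETS_PER_HOST * VRAM_PER_SOCKET_GB
        needed_per_host = licensed_vram / (hosts - 1)   # the survivors must carry it all
        for ram in RAM_SIZES_GB:
            if ram >= needed_per_host:
                surplus = (hosts - 1) * ram - licensed_vram
                return ram, surplus
        return None, None

    for hosts in (3, 4, 5):
        ram, surplus = n_plus_1_config(hosts)
        print(f"{hosts} hosts -> {ram}GB per host, {surplus}GB unused during a host failure")

    # 3 hosts -> 384GB per host, 192GB unused
    # 4 hosts -> 256GB per host,   0GB unused  (the sweet spot)
    # 5 hosts -> 256GB per host,  64GB unused
    #
    # The six-host case in the text is a different criterion (losing half the cluster):
    # 6 hosts x 192GB of licensed vRAM = 1152GB, and 3 surviving hosts x 384GB = 1152GB.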

For me, the sweet spot will be four hosts at 256GB of RAM each. That is exactly the amount of RAM needed to leave no excess memory. If I go to five hosts, I will have 64GB of spare memory not utilized by actual VMs, but four hosts leaves me with zero. This is an exercise in being efficient with memory: I want to support the loss of a host for a short period of time while still allowing all VMs to run. I know some shops have different requirements, but for me, running all of my hosts at 75% memory utilization, which equates to the amount licensed, is a good fit. That 25% of headroom is simply buying me the +1 without the cost of buying and licensing an entire extra host.

So if I have four hosts with 256GB of RAM each, and each host is running at its licensed vRAM maximum of 192GB, then even with one host down the cluster will still have enough physical memory capacity available to support the full licensed vRAM entitlement. That is, unless I don’t understand what I’ve been led to believe so far with regard to the licensing scheme for vSphere 5.

Now, I could be totally wrong on all of this and have missed some glaringly obvious flaw, so if you see something that I am missing, please let me know. I have Excel spreadsheets that back most of this up; I’m afraid they might not make much sense to anyone but me, but I will post them soon.

 


4 Responses to Finding the sweet spot with ESXi 5 Licensing

  1. nate says:

    Hey there!

    My first reaction to this: wow, it is sad that VMware licensing has come to this, huh! I mean, this stuff was supposed to be simple. It honestly reminds me of a co-worker of mine back in 2003 running Excel spreadsheets and Visio diagrams to figure out how best to configure an EMC storage system.

    It’s not entirely fair to compare the original 4-socket systems that are based on ~2007 technology (from what I can see, Intel 7300 CPUs) to the latest and greatest Intel tech from today. Four-socket systems can add some value in reducing the “islands” of capacity, though they conversely increase the amount of things that go foobar when/if one of them goes down.

    My current VM project, which we just got the hardware for (it won’t be installed for another month), is based on the 2-socket HP DL385 G7 loaded with 24x 8GB DIMMs, 4x 10GbE and 2x 4Gb FC each. From the quote I see here, the systems run just under $12k each, with 8 of them + 16 copies of vSphere Ent+ and 2 copies of vCenter (the 2nd copy was basically free) coming in at $157k after discount. That is for 1.5TB of RAM and 192x 2.3GHz CPU cores. This is HP of course, and Opterons, not IBM and Intel.

    Ironically my first choice was the IBM x3755 M3 (I think that’s it), a quad-socket 2U Opteron box with 16GB DIMMs. After about a month of sitting on the quotes, the boss ended up asking if it was OK to go the DL385 route and I said yeah, that’s fine. HP is the only one that has systems that exploit the full 12 DIMMs/socket config on the Opteron, though you do get slower memory speeds if you populate all 12 slots.

    I originally liked (and still do) the IBM quad-socket box because it’s half the physical size of a DL585, and it does support IBM’s Chipkill technology, which is critical for any sort of VMware deployment these days (that or HP Advanced ECC).

    Dell by contrast has a similar box but has no advanced ECC functionality on their AMD platforms, only on their Intel ones (and only on the more recent Intel CPUs which added support for it, though it is crappy compared to IBM’s or HP’s memory technology). The main thing I didn’t like about the IBM box was that it had 3 (?!) power supplies, which is *really* difficult to plan for with most data centers; they suggested all 3 power supplies be in use for full redundancy with my configuration. I had someone run the power calculator for me (since I don’t have Excel, which it required if I remember right), and my peak power usage was something like 80W below the maximum rating of the PSU. So in theory I could have gotten by with 1 PSU running the box. I was going to hook all 3 PSUs up, but most data center facilities only provide A/B circuits, so 2/3rds of the PSUs would be on one UPS/generator feed. I’d be fully protected if a PSU went out, or if the “other” PDU went out, but if the main PDU went out things may have gotten dicey (assuming peak load, which it was probably never going to hit).

    For us the 192GB of memory was entirely coincidental, given that 24x 8GB memory chips = 192GB. The 8GB chips were chosen since they were cheaper.

    The box you have there with the dual-socket, 32-DIMM layout looks quite nice, though I’m not sure how much of a premium is associated with that latest-generation Intel proc. From the looks of your costs above it seems like a high premium? I could be reading it wrong though (I read it as roughly $32k/host + ~$8k for vSphere?).

    Your math, while I honestly had a hard time following all of it, looks good, with the caveat that the vRAM tax (from what I recall) is based on provisioned capacity, not used capacity. So if your VMs benefit a lot from transparent page sharing and/or memory compression, there’s a lot more grey area as far as the ideal configuration goes; same goes for “right-sizing” VMs. But if you have good usage data from your existing workloads then you’re probably fine.

    Your consolidation ratios are extremely impressive though; are a lot of your VMs Windows hosts? I have noticed massive TPS benefits on Windows VMs but near zero on Linux, for whatever reason. It could be because Windows likes to keep a lot of memory free and use the page file, where Linux is the opposite.

    Another suggestion I would put forward: don’t be afraid to keep your older boxes on vSphere 4.1 if it makes more sense. The old rust buckets from 2007 are not efficient for vSphere 5 licensing, but you can still keep a cluster of them around for less critical work, provided it still makes sense to keep them running from a power/cooling/space standpoint (which I suspect it probably does).

    It wasn’t that long ago (well, a bit over a year now) that I was trying to weigh the benefits of 8x quad-socket HP blades with 32x 8GB DIMMs each (2TB for one chassis), vs taking the dive and splurging on the 16GB DIMMs (which were like 3-4x more expensive at the time, maybe more). Then I thought, if I needed more memory (since I knew I would be memory constrained), I would have to add a 2nd chassis, and blades, and VMware licensing, and power and cooling. So it was a no-brainer (at the time) to just cough up the extra dough for the 16GB DIMMs, since it was going to be much cheaper anyway. How times have changed with vSphere 5.

    I am still contemplating whether or not to build this new cluster of mine on vSphere 5. There’s nothing in 5 that I need, and 4.1 is probably more mature, though if I go to 5 right off the bat I don’t need to worry about upgrade pains later. Who knows, by the time “later” comes I may be working at another company anyway 🙂

    Are you using the IBM 10G BNT switches or something else for networking?

    Since those new IBM boxes you have support 6/8/10-core procs, how many cores per proc did you settle on? 10 to maximize vSphere licensing? 6 to minimize costs, given you’re not CPU bound?

    How are you booting the systems? internal disk? USB? boot from SAN?

    I wonder how IBM feels about the vRAM tax given they spent (I assume) a lot of effort developing those memory extender systems.

    Very nice graphics too 🙂

  2. admin says:

    Hey Nate, thanks for the comment!

    The VMmark scores for the x3690 X5 boxes outpaced our x3850 M2s by 10 points with half the sockets. That alone kind of sealed the deal for me when it came to picking up new hosts for our data center.

    Our four-socket boxes were kind of forced on us; I wanted to go the two-socket x3690 X5 route from the get-go, but we could not wait for the new line to come out when we finally got budget for production-level ESX hosts. The NUMA architecture on the x3690 X5 boxes allows for some very powerful ESXi hosts, with a max of 256GB of RAM dedicated to each socket. So a 2U, 16-core Nehalem box with 512GB of RAM is pretty attractive from a density standpoint. I could really load these suckers up with a lot of VMs, and then I’d have to worry more about disk I/O than memory or CPU.

    The x3690 X5 comes with 2 or 4 PSUs. We use 4, but I’m sure I could get by with 2 since I have no internal drives. Our APC system is pretty good with power management. I simply wish we had a dedicated generator instead of having to rely on batteries (that’s a whole different discussion for later).

    I probably should have noted that for us the pure cost of a physical box is closer to $22k (that’s with redundant 8Gb FC cards and 10Gb NICs). We run ESXi Embedded, which runs off an internal USB fob, and there are no HDs in the systems. The price comparisons in my example also include support, as well as additional monitoring/backup software costs and the cost of Windows Server Datacenter Edition for the Windows VMs that will reside on the host. This way we can put any number of Windows VMs on the box and the Datacenter license will cover them. It’s licensed per socket, so for now it’s highly economical for us.

    Red Hat used to have a 10-VM SKU that would get you 10 VMs running RHEL 5 at a pretty steep discount. They unfortunately did away with that, and now we have to license each system, be it virtual or physical.

    We have mostly Windows boxes, so that’s why my consolidation ratios are so high. Out of, say, 150 VMs, I have 30 RHEL 5 and a few CentOS boxes. RHEL eats all the memory you can give it, where Windows won’t unless it needs it.

    I’m entertaining boot from SAN in the future, but I’m honestly not sure if I need it, and I’m always leery of relying solely on one system, regardless of the levels of redundancy built in.

    I’m still on the fence about vSphere 5. I like Storage DRS, but honestly I’m not sure how much I would trust it. I like to stay current on the ESXi side of the house, so I will probably upgrade later this year when we pick up a few more hosts, build a new vCenter on 2K8 R2, and rebuild the entire environment from scratch to start fresh. As much of a pain as that may be, I think that I’ve learned enough over the last two years to really fine-tune the environment to perform even better than it is now.

  3. admin says:

    One more thing that I think a lot of people don’t realize about 10Gb Ethernet adapters: according to VMware, the ESX/ESXi supported maximums for 10Gb/1Gb Ethernet combinations are below.

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020808

    For 1500 MTU configurations:

    4x 10G ports OR
    16x 1G ports OR
    2x 10G ports + 8x 1G ports

    For jumbo frame (MTU up to 9000 bytes) configurations:

    4x 10G ports (only if the number of cores in the system is more than 8) OR
    12x 1G ports OR
    A combination of 2x 10G ports and 4x 1G ports
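
    To make those limits a little easier to apply, here is a rough sketch that checks a proposed NIC mix against the combinations quoted above. This is just my own summary of the KB text, not anything from VMware, and it treats each listed combination as an upper bound on both port counts:

        # Supported 10G/1G port combinations per host, keyed by MTU, as quoted
        # above from KB 1020808. Treating each combo as an upper bound on both
        # port counts is my own reading of the KB.
        SUPPORTED = {
            1500: [(4, 0), (0, 16), (2, 8)],   # (10G ports, 1G ports)
            9000: [(4, 0), (0, 12), (2, 4)],   # 4x 10G at MTU 9000 also requires >8 cores
        }

        def nic_mix_ok(mtu, ten_gig_ports, one_gig_ports):
            return any(ten_gig_ports <= t and one_gig_ports <= o
                       for t, o in SUPPORTED[mtu])

        print(nic_mix_ok(1500, 2, 8))   # True
        print(nic_mix_ok(9000, 2, 8))   # False: only 4x 1G alongside 2x 10G with jumbo frames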

  4. Pingback: Thankfully the RAID saved us. » Blog Archive » #vTAX is dead, long live #vTax
