Marketing masquerading as “Tech Journalism”, or how not to write a stealth press release

Let me preface this by saying that what follows may come across as snarky, even malicious, but that’s not the intent. I’m using this as a simple illustration of a trend in technical marketing that bothers me to no end: the press release that masquerades as a legitimate piece of tech reporting.

Case in point: this piece by Mike Vizard at IT Business Edge. There is so much wrong with it that I felt compelled to address it.

One of the first storage vendors to add specific support for the vStorage APIs for Array Integration (VAAI) in VMware vSphere is Tintri, a provider of storage appliances that combine Flash memory and SATA drives in a way that is specifically optimized for the VMware file system.

Interesting. A company founded in 2008, which didn’t even come out of stealth until March 2011, is somehow one of the first storage vendors to add support for VAAI, which was first referenced by EMC Japan in June 2010 and announced by VMware with the release of vSphere 4.1 in July 2010. What’s that, almost a year before Tintri even shipped its first box?

And what of being one of the “first” to offer VAAI support? Given that Tintri still doesn’t (it’s coming this summer), I’d say that claim is a little off. In reality, EMC, 3PAR, and NetApp were the first to really offer VAAI support on their arrays. IBM followed up quickly on its XIV and Storwize platforms, and Compellent, HP, and the rest soon followed. The list was initially pretty small, but look at the HCL today: it’s growing, with most of the enterprise players supporting the first three primitives, and most supporting all five with the release of vSphere 5.

Trust me, tech journalists aren’t the only ones who take a press release and regurgitate it as a researched article with “facts”; the political press has done this for years. It’s sloppy and lazy, and frankly we deserve better.

For the record, I love what Tintri is doing and wish I could get one of their rigs to play with. This is by no means an attack on them; there are some scary-smart people on that team. I spent almost half an hour with Dr. Harty at VMworld going over the product and getting some real insight into what Tintri wants to do with their systems. Still, it does them no benefit to make claims that are simply not correct.

 

Posted in Storage, Storage & Virtualization, Tech Marketing, VMWare | 1 Comment

Hyper-V vs. VMware: a response to Nate at TechOpsGuys

I read this post by Nate over at TechOpsGuys about Hyper-V vs. VMware. It’s a good read, and Nate raises several salient points about the challenges VMware will face as Hyper-V approaches feature parity.

This was my response to his post. I’m thinking of digging a lot deeper into this, but I wanted to go all stream of consciousness and get it out while it was still fresh.

I think it’s sometimes easy to let the environment you manage color your perception of how others deploy and manage technology as a whole. I know that when I write about certain aspects of technology, I lean heavily on the assets I manage on a daily basis; it’s my familiarity with those systems that allows me to speak about them with authority. I sometimes take for granted that other IT shops do things differently, or may see my approach as something that won’t work for them. It’s very easy to generalize and jump to conclusions.

I actually had a boss who would tell me that “cost is no issue” for every project I worked on. I knew full well that cost was the overriding issue, so I would architect three scenarios for each project: the “cost is no issue” solution, the “what I think we really need to meet the project’s needs” solution, and the budget solution. Invariably, management opted for the budget solution and I’d have to make do with it. Whether cost is an issue really depends on the corporate culture at a given company. Some companies still love to hate their IT departments, and will do their best to make do with as little as possible and take the risk that they will survive a real disaster. Others will throw money at IT and waste it as they cluelessly look for a solution to a perceived problem. I know of a company that bought an XIV system to test a workload, then pulled all the drives and shredded them after the test due to the sensitivity of the data and their internal policies. That’s close to $800K down the drain, the annual IT budget for a lot of shops.

I can’t make a comparison of Hyper-V 3 to VMware until it’s actually released. I refuse to take marketecture as fact, especially in the case of Microsoft products. I most certainly will not deploy any MS product until at least SP1 has come out. Those burns refuse to heal.

We are 80/20 Windows/Linux, so the memory-management benefits within VMware work well for us. Linux workloads fare less well; that may improve given VMware’s Linux proficiency, but will it with Hyper-V? I doubt it. We have achieved densities of around 35:1 with our existing host infrastructure, and I don’t know of any Hyper-V shops currently getting that level with production-class workloads.

I’ve yet to see a third party I trust provide a cost comparison between Hyper-V and VMware. Those of us with MS ELAs can get significant discounts, but then there are all the extra costs involved. Sure, the hypervisor is free, but you want to manage it? Well, that’s where it’s going to cost you plenty.

You’re right that VMware is expensive, and it’s going to get even more expensive as time goes on. The vSphere 5 licensing fiasco may have left people’s minds as time has gone by, but I tend to see it as VMware adopting aspects of the Oracle pricing model. Oracle can charge what it does because it knows it has many of its customers over a barrel. VMware is approaching that threshold with a fair number of large installs. Sure, the SMB market may have the flexibility to move to Hyper-V, but shops with 1,000+ VMs and all the infrastructure built out to support them will be hard pressed to make the move. This means you will see shops keeping their older, and cheaper, versions of VMware unless there is a true benefit to upgrading.

If you look at the features being crafted into each subsequent release of vSphere, you will see that they are nearly all confined to I/O performance and storage management/offload. In robust environments, this is where the pain point continues to manifest. For the general VMware populace it’s memory constraints, but for those groups pushing the envelope and running 85-100% virtualized platforms, I/O and storage constraints are key. I think that’s why you see VMware doing what it is with the VAAI primitives and Storage DRS.

As for Hyper-V: meh. Good enough doesn’t cut it in my current organization. Then there is the lack of a real ecosystem devoted to Hyper-V and products I can leverage. Sure, as its market share increases the ecosystem will develop, but if I’m going to stake my job on production workloads and the SLAs required, I’m not going to settle.

Bottom line: I don’t doubt that feature parity is coming, but is it too late in the game for the vast swath of the VMware customer base to change mid-stream? Sure, shops may evaluate Hyper-V and use it for test/dev, and in some cases production. Still, given my own scenario, I don’t see Hyper-V making any headway in our organization; every shop will ultimately have to make that decision for itself.

I do think that at some point the hypervisor will be given away free; it essentially is now, and it’s the management that will cost you. So if you want to run 100 ESXi 5 hosts with 64GB of RAM and two sockets each, then get crafty with scripting, you can essentially run at a hypervisor cost of zero. Still, the time to manage and craft that solution may cost you as much as, if not more than, paying for the licenses.

 

Posted in Hyper-V, Tech Marketing, Virtualization, VMWare | 1 Comment

VSPEX – The rebirth of the Mainframe/Mini in x86 form?

EMC has trademarked a new product name: VSPEX. So what the heck is it?

Alternative: slow news day at El Reg.

It doesn’t take a genius to realize that with EMC’s hardware and coding chops, and its direct line into VMware, a purpose-built virtual appliance coupled with the appropriate connectivity and storage is the next evolution of its business model. There is a lot more competition in the storage space these days; sure, EMC is the top dog, but a lot of that is built on its dominance since the Sym days. As the other players catch up and surpass EMC in functionality and value-add, EMC will need to differentiate itself and in turn create an even larger revenue model.

If you think about it, a converged platform other than VCE/Vblock makes sense, since it can be bolted onto their existing storage product line. Just look at what they are doing with Greenplum; it’s the precursor to the VM appliance. Imagine a VMAX with built-in ESXi and you have essentially the rebirth of the mainframe market for the x86 space: scale-on-demand delivery, plus built-in replication/DR, snaps, clones, and so on. Converge all the features of enterprise storage and virtualization into an appliance, couple it with their huge install base, and you have what Oracle wants to be but is incapable of becoming (since they unceremoniously killed Sun).

Virtualization has been around for a long time; many of the features we use within the x86 virtualization space have been available in the mainframe/mini/Unix space for some time.

Now the big one: does EMC go after Cisco? The long-running rumor has always been that EMC wants to buy a network stack, but is Cisco too expensive? Personally, I wouldn’t go that route. I’d look at Extreme Networks, given that their BlackDiamond series is the shit, and it could be had at a far cheaper price point without all the baggage of merging a huge company like Cisco into an equally huge company like EMC.

Of course, I could be crazy in thinking this. Wouldn’t be the first time.

Posted in Storage & Virtualization, Tech Marketing, Virtualization, VMWare | 1 Comment

IO Analyzer 56k no way!

Install and configure VMware IO Analyzer:

Quick note: in case you missed the Brown Bag last night, the video is here. I go over the install process as well as some general Q&A about IO Analyzer there.

 

ProfessionalVMware BrownBag – IO Analyzer from ProfessionalVMware on Vimeo.

VMware IO Analyzer is a VMware Labs fling created to provide a simple analysis tool for testing storage attached to your VMware environment. For anyone who has installed an OVF template before, the installation and setup are relatively simple.

Once on the fling page, accept the terms and conditions and you can download the zipped template:

Once the file has been saved to your local or network drive, expand its contents and open your vSphere Client. Go to File, Deploy OVF Template:
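For those who prefer the command line, the same deployment can be scripted with VMware's ovftool. This is a minimal sketch, not the official procedure; the VM name, datastore, network, and vCenter inventory path below are all placeholders you would swap for your own environment:

```shell
# Deploy the IO Analyzer OVA via ovftool (every name below is an example)
ovftool \
  --name=IOAnalyzer \
  --datastore=datastore1 \
  --network="VM Network" \
  --diskMode=thick \
  ./ioanalyzer.ova \
  'vi://administrator@vcenter.example.com/Datacenter/host/Cluster01'
```

Either route ends in the same place; the vSphere Client wizard just makes each of the individual choices explicit, which is what the following screenshots walk through.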

Browse to the template’s location, then click Next:

You can now choose a new name for the VM or leave it as listed, and choose a folder location for the VM if you have folders created within your environment:

Now choose the cluster or host you wish to deploy the VM onto:

Next, if you wish to place the VM into a specific resource pool, you can do so here. I would not recommend it at this time, since we do not want to limit the resources available to the VM; besides, the resources the IO Analyzer uses are relatively small.

Now choose the datastore you wish to place the VM onto:

You will want to install the VM as a thick-provisioned disk.

Choose the network you wish to access the IO Analyzer from:

The last step is to confirm the settings and perform the installation:

The new VM will be created and placed into the location you chose above:

Once finished, we will want to configure the new IO Analyzer system for use. Right-click the newly created VM and go to Edit Settings:

As you can see, the OVA comes with a pre-configured second disk, 100MB in size. If you consult the documentation within the IO Analyzer, you will notice the recommendation is to remove that disk and provide a new disk for testing.

I chose to use a Raw Device Mapping (RDM) for this purpose. It can be done with either iSCSI or FC storage; in my case we have FC, and will create a second disk for use on the system. This will be the test disk. Prior to completing the next few steps, you will need to follow your storage manufacturer’s best practices to create a LUN to present to the IO Analyzer virtual machine. For testing purposes, the LUN should be over 2GB in size, so that the test isn’t simply served from cache.

Add the new disk:

Choose Raw Device Mapping:

Remember, you will need to rescan your host’s storage prior to attaching any RDM devices.
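On ESXi 5.x the rescan can also be done from the host's CLI rather than the vSphere Client. A quick sketch; the device identifiers you see in the output will vary with your array:

```shell
# Rescan every storage adapter so the newly presented LUN is discovered
esxcli storage core adapter rescan --all

# List the devices the host can now see; the new LUN should appear here
esxcli storage core device list
```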

Once you are sure the host can see your RDM disk, go ahead and add the disk to the IO Analyzer system.

Depending on your internal practices, you can store the disk with the VM or specify a datastore of your choosing. I tend to store the RDM with the VM it’s attached to.

You have the choice to set the disk to physical or virtual compatibility mode. This makes no difference as far as the test results are concerned, but if for any reason you wanted to snapshot the IO Analyzer, virtual mode would be required.

Following my own internal best practices, I will attach the disk to its own separate SCSI node:

Click Finish and your new disk will be available for use:

One key thing to note before launching the IO Analyzer: you will want to eager-zero the test disk prior to actually running any tests on it. Follow the steps in this KB article:
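If your test disk is a VMDK rather than a physical-mode RDM (a physical RDM is zeroed on the array side, not by the host), the eager-zeroing can also be done from the ESXi shell with vmkfstools. A sketch with example paths and sizes, not a prescription:

```shell
# Create a new eager-zeroed thick test disk (path and size are examples)
vmkfstools -c 20g -d eagerzeroedthick \
  /vmfs/volumes/datastore1/IOAnalyzer/testdisk.vmdk

# Or, on ESXi 5.x, eager-zero an existing thick-provisioned disk in place
vmkfstools -k /vmfs/volumes/datastore1/IOAnalyzer/testdisk.vmdk
```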

Now we are ready to power on the IO Analyzer and configure it for first use. Power on the machine and let’s go to the console to set up the networking components. When the IO Analyzer first powers up, it will be sitting at this screen. Use the arrow keys to move to the Configure Network section:

If you use DHCP, you can follow the prompts and your IP address will be configured. I would recommend against DHCP for this system, as you will need to access the web console via its IP address, unless you plan on adding the system to your internal DNS.

Follow the prompts and enter the IP, subnet, DNS, and hostname information. Once you are satisfied, save your settings and the system will display the new IP address at the top of the screen:

The next step is to log on to the system from the console; in order to perform any tests with the IO Analyzer, the system needs to be logged in. The credentials are root/vmware. After logon, you will be presented with a rather bland Linux GUI:

At this point you are done configuring the system.  We can come back to this screen for more advanced settings in the future.

Now we can start running our first IO Analyzer tests. Open a Firefox browser and enter the IP address of the IO Analyzer machine.

As you can see, there is a web-based front end that provides the options to set up and run IO Analyzer tests, as well as view results. To get the proper ESXTOP results, and to select the IO Analyzer from the VM drop-down menu, we want to add the IP address and root password of the ESX/ESXi host the IO Analyzer is located on.

Next we will want to add the IO Analyzer system as a guest worker and create a workload to run our initial test on.

On this screen we have several options. In this case I have one host, with one IO Analyzer system to choose from. The fields are pretty intuitive, the last one being the most important: it contains the IP address of the IO Analyzer machine we want to test. One nice thing about the system is that once you set up your configuration, you can save it and easily come back to run more tests in the future; see the Load and Save configuration tabs above.

So essentially, we are choosing the IO Analyzer host, the IO Analyzer virtual machine, and the test workload (of which there are roughly 30), and we are inputting the IP address of the IO Analyzer machine. Lastly, we add the worker and input a run time for the test. The last step is to click Run.

Now it may appear that nothing is happening, but if we switch over to the console of the IO Analyzer VM, we can see the test as it’s running:

The IOmeter instance running on the IO Analyzer is fully interactive. You will have to slide the Update Frequency control (red arrow above) over to a quicker refresh setting to see real-time results. You can also switch between the different tabs being displayed. This remains available during the entire run of the test.

When the test is over, the IO Analyzer web page will have the results available.

And the results link:

If configured properly you will see the IOmeter Results Summary, as well as the ESXTOP stats for the host being accessed.

And that’s it. I like to do independent verification of results, so during and after tests I look at Veeam One (shameless plug for my friends at Veeam; hey, it’s free, you might as well try it), as well as the monitoring tools on my storage array. You can also use the performance tabs on your ESXi hosts. I have verified that the results provided are in line with what IO Analyzer is reporting.

Hope this helps. Don’t forget to check out the Brown Bag session I’m doing on this tonight at 5 PM PST over at Professional VMware. If you have any questions or comments, hit me up.

Posted in Storage, Storage & Virtualization, VMWare | 2 Comments

How a love of bacon and a trip to vmworld opened the world of Twitter to me.

In case you didn’t know, my twitter handle is @Bacon_Is_King. At first I had no intention of ever getting on Twitter, under the guise of thinking that “real men don’t tweet.” Then I started seeing that almost all the blogs I followed in the virtualization space were heavily into the “twitters,” so I joined. The genesis of the handle was my love of all things bacon; I wanted a handle that was somewhat unique and would actually make people laugh, or be intrigued enough to follow me. Thus Bacon_Is_King was born. Don’t get me wrong, bacon really is king. Yeah, you may think steak is king, or lobster is king, but dude, nothing’s better than bacon.

Next came lurking, and then following: usually the bigger names in the v-space, your Scott Lowe, Jason Boche, and Duncan Epping types. The “heavy hitters” of the vSpace, as I call it. For me it was an interesting foray into a world of pseudo tech celebrities, and after lurking for a few months I finally figured out the basic lingo and protocol. Even the perplexing #FF hashtag became a valuable tool for gaining followers and finding new people to follow.

This also coincided with my first trip to VMworld. I had been begging to go since 2009, and as luck would have it, the removal of my old boss and the arrival of my new boss were the keys to actually getting to go.

Fun fact: my twitter handle got me into the VMunderground party. I had heard about the party before, but by the time I had signed up for VMworld, registration had closed. I had heard that VMunderground was the one place where the “in-crowd” was going to be. As luck would have it, my handle and a few dozen tweets about my desire to go caught the eye of Steve Rogers (@steve888) from Xangati, and because their tagline for the party was “I’d rather be eating bacon,” he generously took pity upon me and granted me access to the chosen land. From that point on, I entered a world that was, in my view, pure awesomeness.

So at the VMunderground party, even though there was no bacon, there were turkey legs, big ones, and my own gluttony would introduce me to my first real-life v-Rockstar, Shane Williford, aka @coolsport00. As I sat eating a giant turkey leg on the second floor, I struck up a conversation with Shane and made my first in-person connection to a vExpert. As the night progressed, and as I managed to quaff many a pint o’ Guinness, I ended up meeting many of the people who would help me along my journey into the social technology sphere.

Prior to going to VMworld, I had read a post by Christopher Kusek (@cxi) about going to social events like VMworld and how to prepare. I brought 500 business cards with me and started giving them out to anyone who would take them; when I left, I think I had only 150 remaining. I glommed onto the spirit of social networking with everyone I met. I hit up blogger row and bumped into Matt Vogt (@mattvogt), John Troyer (@jtroyer), Patrick Redknap (@PatrickRedknap), Damian Karlson (@sixfootdad), and many others. These were people I had followed on Twitter, whose blogs I read and whose opinions I valued. In a sense I was a little “star struck,” because to me many of these people represented the real essence of social media and blogging within the virtualization space.

From there, I discovered the world of v-Outreach: people meeting, tweeting, blogging, and connecting, all in an effort to discuss virtualization. For the social animal, this kind of event is like the Super Bowl. You get to watch the real kings of the industry be interviewed in the Cube, watch Chuck Hollis give a presentation on the future of Big Data, and bump into Robin Harris, Scott Lowe, Virgil, Eric Siebert, and many others in the storage and virtualization space, and not only that, have discussions with them.

I don’t think I ever truly understood the power of social media until I went to an event where it drove so much of the social interaction and was ingrained into the day-to-day, hour-to-hour aspects of the event. Prior to VMworld I had 70 followers; months later I’m at around 360. Yeah, it’s nowhere near the Greg Knieriemen level, but it’s a start.

Coming soon in Part 2: “Ouch my liver hurts, or pace yourself”

Posted in Virtualization, VMWare | 3 Comments

Dell Marketing takes the “Cloud” to a whole new level

Clayton Sotos – Visual Innovators from Visual Innovators on Vimeo.

I don’t know if this is legit or not, but it’s hilarious. I will say this: fake or real, it’s beautifully shot, funny, and does manage to highlight the product line. It might not be for everyone, but it’s sure better than “Dude, you’re getting a Dell.”

Posted in Tech Marketing | Leave a comment

Marketers vs. Academics: opposite sides of the same coin

Greg Schulz (@storageio) posted today about a UCSD study on the future of SSD/NAND flash disk. As with anything Greg puts out, it’s a well-thought-out, well-sourced, and informative post. I’ve come to really value Greg’s opinion, and there is a very good reason he has the reputation within the industry he has today. *cough* Rockstar *cough* While I’m on the subject of pimping out Greg’s awesomeness, do yourself a favor and go buy his book, Cloud and Virtual Data Storage Networking. I picked up my copy at VMworld this year, and it’s about as close to a tech bible as you can get.

So now on to my two cents.

Predicting the longevity of a specific technology is a fool’s errand, but that doesn’t stop marketers and academics from trying.

Academics:

For the record, I do not have a problem with academics sticking their fingers into current business and technology trends. My only quibble is that many do not have hands-on experience on the implementation and management side to fully understand the “hows” and “whys” of a technology’s impact. The saying goes, “those who can, do; those who can’t, teach.” While simplistic, it does hold an inkling of truth. Many academics spend time in a cocoon of their own making, and cocoons in my view are not great incubators of ideas. Knowing a technology is all well and good, but unless you have felt the pressure involved in making that technology work in a production setting, it may be difficult to fully understand the total impact of its use. Academics usually won’t lose their jobs if their assumptions or theories are wrong; technology implementers will. Getting a B on your paper won’t cost you your livelihood.

Marketers:

I speak for myself, and perhaps others on the implementation and evaluation side, when I say that “marketecture” is a dirty word. The MBA class that has moved into technology from the marketing and message-crafting side can fool the C-suite, but those of us in the trenches can usually see through the slick packaging, the buzzwords, and the hype. Marketers will take the research the academics have done and use it as a club to beat the competition and the customer over the head with: email campaigns, mailers, and, most evil of all, the fake “technology leader” website. Much of the message is crafted for the decision makers within organizations, not the end users and implementers.

There is a very valid reason most technology companies employ sales engineers as the tip of the spear toward the day-to-day implementers of technology, and why the pure sales animal is the only person the CFO/CTO will ever meet during the sales cycle.

Predicting the future:

If I could tell you with certainty what the future held for “technology X” two years from now, I wouldn’t be sitting here in my kitchen typing away on my laptop. The fact remains that no one can prospect out that far in the technology space, especially when it comes to the enterprise. Technological advances can outpace and displace entire companies. Case in point: anyone remember STEC? Yeah, me neither. In less than 18 months they went from being the SSD tech darling to an also-ran, and their stock took a nosedive reminiscent of Red Hat and Yahoo circa 2000. The same fate could befall Fusion-io, though I think they are making good strides in diversifying their offerings. It’s a tough business, it changes fast, and sometimes those changes are adopted and sometimes they are not.

Academics who want to foretell the death or demise of a specific technology, when they have zero hands-on experience within the space, would do well to temper their outrageous claims.

Same goes for marketers: we know you’re trying to spin “Product X” in the best light, but be mindful that the second you make a claim so patently false as to be laughable, your credibility is gone, and your sales quickly follow.

If I had a dollar for every time I’ve been pitched the death of tape, I could retire. Remember SEPATON? Yeah, how’s that working out? The death of the HDD is currently being bandied about, so I have an equally skeptical view of SSDs having a “bleak future.”

Looking at reality:

Does someone really need a PhD to come to this conclusion?

The technology trends we have described put SSDs in an unusual position for a cutting-edge technology: SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse. This makes the future of SSDs cloudy: While the growing capacity of  SSDs and high IOP rates will make them attractive in many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications.

It more or less smacks of common sense and conventional wisdom, but because it’s come from an academic wing, it will be seized upon by marketers to push a message.

Let’s face it, not every company is a hedge fund running HFT algos requiring dark fibre and ultra-low latency, but many companies do have a need to address high-IOP workloads, and for now SSDs and NAND flash fit those needs well. If the cost per IOP fits within their ROI, there is going to be a very high rate at which SSDs are adopted. As the density-versus-performance tradeoff comes closer and closer to equilibrium, SSDs will continue to gain in market share and dominance.

SSD has its place. It’s a maturing technology that will probably be with us for the mid to long term. Will it replace hard drives? In some instances it already has, but not all of them. Just as hard drives have not replaced tape, SSDs will not replace hard drives, and to claim a “bleak future” based on a single study just exposes a short-sighted view of an industry that both marketers and academics have a hard time grasping.

Posted in Storage | Leave a comment

Symantec files frivolous lawsuit against Veeam and Acronis

Symantec has filed a lawsuit against both Veeam and Acronis claiming “patent infringement” and loss of business.

Full disclosure: I know a good number of people who work at Veeam. I was also a participant at their recent sales kickoff as part of a customer panel. I have used Veeam Backup & Replication since 2010 in my environment, as well as Veeam Monitor and Reporter. I have also used vRanger, Backup Exec, NetBackup, ARCserve, and Tivoli Storage Manager over the course of my career.

That said, it wouldn’t matter who this lawsuit was filed against; my opinion would be no different. Symantec’s claims are wholly without merit, and frankly reek of a company desperate to remain relevant in a growing sector where its market share has decreased significantly, not because of “patent infringement” but because its product in its current incarnation is bloated, expensive, and inefficient, much like its entire anti-virus suite.

Digging into the complaint, let’s look at this passage:

(Page 7, lines 27-28; page 8, lines 1-2)

“among the features necessary to perform state-of-the art backups and recoveries in a virtual environment include (i) backing-up and imaging virtual machines, (ii) restoring imaged data on the same and different computer systems, and (iii) use of an effective user interface to manage system backups. All of these features involve use of Symantec innovations.”

No: all of these features are part and parcel of EVERY piece of backup software on the market today.

Chris Mellor over at El Reg dug into the patents:

  • Symantec’s ‘517 patent covers backup data being restored to a different hardware configuration from the source hardware
  • The ‘086 patent refers to a virtual machine backup going to a different storage device than the one used by the VM
  • A ‘365 patent covers storing backup data in the same storage partition as the source data and restoring from it
  • The ‘655 patent refers to constructing a catalogue of backed up data
  • Symantec’s ‘010 patent is about a backup and restore GUI that enables simultaneous viewing of the contents of a computer that has been backed up and the destination computer for a restoration.

Hmm, do any of those things sound familiar? Yeah, because every backup program I’ve ever dealt with has one or all of them in one form or another. And let’s not forget, VMware itself provides many of the same feature sets; wonder why Symantec isn’t suing them, or CommVault, IBM, Quest, CA, etc.? One can only guess.

Now for some pure speculation on my part:

My guess is that Symantec made a play to buy either Veeam or Acronis and was rebuffed. This fits the Symantec (as well as Quest’s) growth model: buy what you cannot internally innovate; buy those with a customer base you haven’t completely alienated. Now I could be 100% wrong, weirder shit has happened, but I don’t see this ending well for Symantec. In fact, what I do see happening is a further sense of distrust and disgust within the generalized IT community towards Symantec.

As it stands right now, in my view: Symantec is reeling as of late due to their source code being leaked online for their anti-virus and PC Anywhere software, not to mention the fact that they were caught working with some nefarious government agencies to provide backdoors into their product line for government snooping, as well as claims that they have tried to bribe those responsible for the source code leaks. All of that bad press does a lot to depress share prices.

But what it all boils down to is a sad and pathetic attempt to bully smaller companies into submission. If I believed that Symantec’s claims had any merit, I would say so, but when you dig into them, they are so astonishingly stupid that I fail to comprehend how anyone could look at this on its face and not simply laugh.

I know some people who are not laughing, though: the people at Veeam and Acronis who have to defend against this crap.

Editing to add:

Howard Marks makes the very valid point that patents are not copyrights:

The primary flaw is your argument that the technologies covered by these patents are pretty much standard TODAY, so the patents must be invalid. Today doesn’t matter. If Symantec got a patent 12 years ago, the patent is valid if it was non-trivial and new AT THE TIME.

Unlike copyrights, which require copying for violation, patents are absolute. If I patent something and you independently invent it AT THE SAME TIME but don’t reveal it to the public until after my patent application is accepted, I have the right to sell it and you don’t.

This is true. Though I will say that it’s unfathomable to me that Symantec is just now realizing that Veeam and Acronis are infringing upon its IP. Furthermore, where are the suits against Quest for vRanger, or against PHD Virtual?

But seriously, look at patent 7,322,010, which relates to “using a graphical user interface to map computer resources.” Do you know how much software falls under that?

Edit #2: I’ve read through the patent itself and realized that even VMware Converter is in violation, not to mention pretty much any piece of software that images a computer and then allows you to alter the image when re-deploying it.

A computer accessible storage medium comprising a plurality of instructions which, when executed:  present a graphical view of a first computer system configuration comprising a first plurality of computer system resources; concurrent with presenting the graphical view of the first computer system configuration, present a graphical view of a second computer system configuration comprising a second plurality of computer system resources; provide a mechanism to capture data representing at least a first resource of the first plurality of computer system resources from the first computer system configuration and insert the data in the second computer system configuration; and provide an automatic mapping of resources from the first computer system configuration to the second computer system configuration, wherein the first computer system configuration corresponds to a backed-up computer system and the second computer system configuration corresponds to a computer system that the backup is being restored to.

Edit #3: I knew someone would be able to dig this up, because I was positive that Veritas/Symantec was not the first group to do VM backups. @jmattox (Jason Mattox, co-founder of Vizioncore) tweeted: esxRanger was backing up virtual machines as far back as 2004.

Posted in Backup & Recovery, Virtualization | 3 Comments

Digging deeper into per VM cost analysis

I consider this pretty high-level and overly general. I’ve written it more in the style of a memo that I could provide to management to help them understand the costs associated with virtual systems within our organization. Once again, pricing can and does change, and the pricing listed below should not be construed as representative of any one server/software vendor.

Overview

The current virtual environment consists of several VMware ESXi host computers that provide a distributed resource pool shared within the environment. This pool of resources allows for the creation of server workloads in a virtual space that are highly available and portable. A single ESXi host computer can provide enough resources for anywhere from 25 to 45 standard virtual machines.

Cost Breakdown

There are many factors that contribute to the costs associated with virtual servers: network, storage, memory, CPU, software licensing, and support. At its simplest, when all aspects are taken into account, a virtual machine ends up costing roughly $340 per GB of active memory and $5 per GB of consumed disk space. The final cost for a fully licensed ESXi host comes out to $61,700; this price reflects all aspects of the infrastructure involved in providing virtual resources. The breakdown is as follows:

  • 1 Dual Socket Server with 16 Processor Cores and 256GB of Memory
  • Redundant 10Gb Network Access Ports
  • Redundant 8Gb Fibre Channel SAN Ports
  • 2 Terabytes of SAN Disk Space to host Virtual Machines
  • Software and support for Virtual Machine Backups (Veeam)
  • Software and support for Tivoli Storage Manager Backups (TSM)
  • Microsoft Data Center Licensing to cover all Microsoft Server Licenses
  • VMware Enterprise Plus Licensing for 2 Sockets with 3 years support
  • 3 years of hardware support

Per Virtual Machine Costs

The primary cost factors for virtual machines are the amount of memory and disk space required. Across the virtual environment, we see the average machine using 8GB of RAM and 80GB of SAN disk space, which equates to $3,120 worth of resources. Further analysis will show that for larger-memory virtual machines, the cost of a virtual system can actually surpass that of a physical system; it’s once our consolidation ratios get higher that the price breakdown makes more sense. One aspect to take into account is that I have not included any costs for power/cooling, and those can be significant.
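
To make the arithmetic concrete, here is a minimal sketch of that cost model, assuming the $340/GB memory and $5/GB disk rates above (the function name and structure are mine, purely for illustration):

```python
# Per-VM cost model using the two primary cost drivers from the memo.
RAM_RATE_PER_GB = 340   # dollars per GB of active memory
DISK_RATE_PER_GB = 5    # dollars per GB of consumed SAN disk

def vm_cost(ram_gb: int, disk_gb: int) -> int:
    """Estimated fully loaded cost of a single virtual machine."""
    return ram_gb * RAM_RATE_PER_GB + disk_gb * DISK_RATE_PER_GB

# The average VM in our environment: 8GB RAM, 80GB disk
print(vm_cost(8, 80))  # 3120
```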

Total Costs Including Network/Storage/Host/Licensing

RAM per VM (GB)    Number of VMs    Price Per VM
       24                 8           $8,149.13
       18                10           $6,519.30
       16                12           $5,432.75
       12                16           $4,074.56
        8                24           $2,716.38
        6                32           $2,037.28
        4                48           $1,358.19
        3                64           $1,018.64
        2                96             $679.09
        1               192             $339.55
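
As far as I can tell, the per-VM prices in the table fall out of a simple division: the host’s 192GB of usable memory at roughly $339.55 per GB, split across however many VMs share the host. This is my reading of the numbers, not an official formula:

```python
# Reconstructing the table's "Price Per VM" column: total memory cost
# divided by VM count. The constants reflect my reading of the table.
USABLE_RAM_GB = 192
RATE_PER_GB = 339.55  # per-GB rate implied by the 1GB/192-VM row

def price_per_vm(num_vms: int) -> float:
    return round(USABLE_RAM_GB * RATE_PER_GB / num_vms, 2)

for n in (8, 24, 192):
    print(n, price_per_vm(n))  # matches the table to within rounding
```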

Items to Consider

Memory and disk space are the two resources our environment is constrained by, which is why they are the two primary cost drivers. These costs are broken down based on a host with a capacity of 192GB of active memory; we do not run hosts at 100% utilization of their allotted physical memory because of high availability/N+1 capacity restrictions.

I have included the costs of all network access in this calculation, even though those resources existed before our virtual environment was fully in place. We currently have enough high-speed network and fibre channel SAN connectivity to support two new ESXi hosts; after that, additional network and fibre channel assets will have to be procured.

There are two line items for backup because of limitations with virtual backups and physical RDM volumes, plus a large number of systems still use iSCSI connections to different storage tiers. ESXi5 will help with this, as I plan to set up SDRS and implement a more efficient set of storage pools.
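
The gap between a host’s 256GB of physical memory and the 192GB budgeted for VMs is that N+1 headroom. The post doesn’t state the cluster size, but a four-host cluster reserving one host’s worth of capacity is one configuration consistent with those numbers; the sketch below is my assumption, not a stated fact:

```python
# N+1 sizing sketch: reserve one host's worth of memory so the cluster
# can absorb a single host failure. The 4-host cluster size is assumed.
def effective_ram_per_host(physical_gb: int, cluster_hosts: int) -> float:
    """Memory budget per host after reserving capacity for one failed host."""
    return physical_gb * (cluster_hosts - 1) / cluster_hosts

print(effective_ram_per_host(256, 4))  # 192.0
```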

 

Posted in VMWare | Leave a comment

2012 vExpert Nominations are now open

So while I twitter away all my free time studying for the VCP5, which yes, I will be taking on Feb 28th along with half the rest of the planet, I saw a tweet from @ChrisWahl (Wahlnetwork) that the vExpert nominations are now open for 2012.

Over at the VMTN boards, the vExpert page has the criteria up, as well as a revised FAQ.

I think it would be pretty sweet to be recognized as a vExpert. I know a lot of vExperts through twitter, from VMworld, and the Brown Bags that Cody Bunch and Damian Karlson put on every Wednesday.

I need to look further into the nomination process. While I may not qualify this year, if I can get the Orange County VMUG up and running I may qualify for next year. I will say that I’ve helped several people with the VMware IO Analyzer since my post on it a while back, but my contributions to the VMware community would probably need to be a little more substantial.

Chalk that up to one more goal for the year.

Quick Edit: The nomination page is now open. If you feel that I have made an impact in the last year with my posts on VMware and Virtualization, please feel free to nominate me 🙂

Posted in vExpert, VMWare | Leave a comment