As someone who was very keen to get my hands on 3PAR gear before HP bought them, and who had seen his last storage array vendor gobbled up by Dell, I had mixed feelings about HP being the winner in the 3PAR bidding war. I had evaluated the EVA platform back in 2006 and wasn't overly impressed. I still thought of HP as printers, PCs, and servers, not storage. The XP line didn't interest me either, and I had evaluated LeftHand back when it was still LeftHand, but for me EqualLogic won the iSCSI battle of the day.
I think I can chalk that mindset up to my lack of exposure to the HP side of things. The shops I had worked at had all been Dell, Compaq, and IBM, so the familiarity wasn't there. But in my view there was also a legitimate case to be made that HP was a very large, poorly run company that didn't really know what its direction was. Perhaps the CEO merry-go-round had something to do with this.
As we all know, the times they are a-changin', and yesterday was HP's time. The primary announcement was the introduction of the 7000 series of 3PAR entry- and mid-level arrays. The 7200 and 7400 arrays are shipping right now, and to hear HP tell it, they have orders lined up and ready to go. I had heard that the initial production ramp was significant; I'm willing to guess this is because there are thousands of EVA customers out there who are dedicated to HP, didn't have the budget for a full-fledged 3PAR system, but were desperate to get off a platform that was very long in the tooth.
First off we get two base models, the dual-node 7200 and the quad-node 7400, with entry-level pricing of $25k for the 7200 and $40k for the 7400. Exactly what you get for those prices is still not known to me, but that initial price point will be very, very attractive to a good number of customers, and not just the enterprise. This is targeted directly at the Dell Compellent and EMC VNX/VNXe arrays, which have been eating up a significant portion of the SMB space. Not to say this is an SMB array; it has true Tier 1 functionality within it and the potential to scale big enough for a wide swath of customers. I constantly have to remind myself that the days of a terabyte of data being "big data" are long gone. Many small businesses today generate terabytes of data annually, and data growth is one of the key metrics on storage admins' minds, right after performance.
Devil is in the details
Do keep in mind, though, that the devil will be in the details when it comes to pricing these systems out. 3PAR, like EMC and others, loves to charge for the goodness, and these systems will be no exception. The base system gets you the entry-level feature set, which includes thin provisioning and some other basics, but 3PAR doesn't fully shine unless you license the best features. As usual HP's website is nearly impossible to navigate, but here is the best product link I could find for the specifics. Note that only the base configuration is included; you have to pay for all the best stuff.
I can see the Application Suite for VMware specifically being a must-have for any 3PAR customer, as well as Data Optimization. The Reporting Suite is also one of those things I, as a storage administrator, would want to have, especially given the reliance upon thin provisioning with 3PAR. I won't go into the ins and outs of all these various packages; Nate over at TechOpsGuys already did that for me 🙂 And trust me, he knows the systems far better than I do, so his input is highly valuable. Also, if you're an existing 3PAR or EVA customer, here is a good breakdown of the functionality being made available for migration.
The Next Era of Storage
To get a better idea of the direction HP is taking, take a look at this video. Not only does it channel every 80s hair metal band with its soundtrack, it also lays out the direction in which HP is heading.
I will say that, with the new 3PAR 7000 series, HP has in my view caught up with the rest of the storage industry in having a modern mid-tier array platform to sell to its customers. There is now ample competition for the VNX line from EMC, and in my view competition is good. HP is taking steps in the right direction with its desire to unify the look and feel, as well as the management console, across its storage platforms. This is something IBM did with the XIV GUI, which was ported over to the Storwize V7000 and DS8000 lines. EMC to an extent is trying to do the same, but in a far less successful manner (there are just too many product lines to do this). A common look, feel, and functionality set goes a long way toward adoption of disparate platforms from the same vendor. Many of the mysteries of storage administration have been abstracted away from the end user, and the move toward "IT generalists" (a phrase I loathe) has started to permeate the interfaces of many systems. While the C-level guys might like the fact that you can drive storage from an iPad, many of us will continue to use the command line. It's good to see that functionality has not yet been removed.
There is a lot more to this announcement than just the new 7000 series systems; HP has gone through its portfolio and revamped nearly everything, and as HP says, the Next Era of Storage is here. The decoder ring goes like this:
- Protection = StoreOnce and StoreEver
- Connect = StoreFabric
- Converged = StoreEasy
Keep those names in mind as you see more discussed in the coming months. I expect the markets to take this new product offering as a sign that HP is turning the corner with one of their most profitable product lines, which should be good for HP overall especially after the recent Autonomy news.
Oh and of course, as a lover of all things FUD, I would be remiss not to link to Dell's response on the day of the launch. I don't think that will be the case going forward.
I'm not sure what their pricing is now, but I have a quote from Dell Compellent from March of this year: about $40k for a pair of Series 40 controllers, 2x 10Gbps iSCSI HBAs, and 12x 600GB 15k SAS. That is for hardware (with base license software) and professional services to install. I excluded software support (I assume it's required, though for the 3PAR 7000 I don't believe it is). I don't believe this Compellent quote had any host FC ports (I was looking to use iSCSI at the time).
Can't do an exact apples-to-apples comparison because there is no 600GB 15k SAS option on the 7000. BUT I was poking around Google earlier tonight with the part numbers on the data sheet and believe I came up with a basic configuration of a 7200 with 24x 300GB 15k SAS and base software licensing for $28k. The base 7200 configuration includes 4x 8Gbps host FC ports; iSCSI HBAs on top of that would be an extra ~$5k. That is for hardware (with base licensing), no professional services, and base warranty support (which includes 4-hour on-site, parts only). No software support. An additional shelf of storage was about $20k (shelf + drives + software licensing). The third shelf would cost less because the software license cap kicks in at 46 drives.
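For what it's worth, the raw-capacity math on those two quotes can be sketched in a few lines. This is back-of-envelope only: the prices and drive counts are the ones quoted above, and support, services, and discounts are ignored.

```python
# Rough $/raw-GB comparison using the quote numbers discussed above.
# Hardware + base licensing only; support, services, and discounts
# are excluded, so treat these figures as ballpark.

def dollars_per_raw_gb(price_usd, drive_count, drive_gb):
    """Price divided by raw (pre-RAID) capacity in GB."""
    return price_usd / (drive_count * drive_gb)

# Dell Compellent quote: ~$40k for 12x 600GB 15k SAS
compellent = dollars_per_raw_gb(40_000, 12, 600)

# 3PAR 7200 estimate: ~$28k for 24x 300GB 15k SAS
threepar = dollars_per_raw_gb(28_000, 24, 300)

# Both configurations happen to be 7.2 TB raw, which makes the
# comparison easy to eyeball.
print(f"Compellent: ${compellent:.2f}/GB raw")  # ~$5.56/GB
print(f"3PAR 7200:  ${threepar:.2f}/GB raw")    # ~$3.89/GB
```

Spindle count matters too, of course: twice as many smaller drives means roughly twice the random IOPS for the same raw capacity.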
Really, the one thing I would have liked to have seen is physically larger controllers on the 3PAR side, to support more host port connectivity. That's been my main big complaint about their mid-range from a hardware perspective. NetApp's mid-range has a ton of expansion ports, as does Dell Compellent. I guess you can't have everything…
I mentioned this in my blog post last night, but I was reading through the IBM V7000 "customer reviews" last night because, well, I have no good reason. I got deep into the pages and saw mixed reviews; a lot of folks say it's great (including folks saying it's low cost), while some had major issues with the hardware (which is the same chassis as 3PAR now; one guy was saying IBM should put a "reboot" button on the thing).
But the comment that caught my attention was someone saying that the RAID 5 performance on their database was unacceptable and they had to use RAID 10 instead, which made it difficult to compete V7000 vs EMC, at least in that regard. Someone from IBM confirmed that RAID 5 is done in software on the Storwize platform, so it is slower. IBM touts real-time compression and how it can save you tons of space (it'd be great if 3PAR had that too), and I thought it was sort of funny: you can use this compression to gain a bunch of space back, then you lose it because you have to use RAID 10 to get performance back.
You being an ex-XIVer, do you think Storwize is the future for IBM enterprise storage? It does seem like it to me; apparently IBM is using the same XIV user interface on the V7000, plus they have integrated some level of SVC functionality into it. Do you see a reason why someone would want to use an XIV over Storwize? XIV does have that optional large SSD read cache, whereas I think Storwize still relies on Easy Tier.
I saw the base 3PAR 7200 configuration (2 controllers, 1 enclosure, no drives) going for $10k online! That’s just absurd! 🙂
HP is going to start kicking some serious butt with these new systems.
Gabe, where’s the love for NetApp?
Off topic: We will probably be calling you about a mysterious Emulex NIC disappearance.
Hey Matt, it will come. That said, I'd love to hear about what's going on with NetApp Direct Connect with AWS. Ping my corp email regarding the NIC stuff and I'll dig a little deeper into your NIC issue.
– Disclosure: NetApp employee –
To be fair to the 3PAR 7200, it would compare more closely to the FAS22xx line of systems, which have a similar number of host connect ports. I'm not sure if the 7400 is similarly limited, but if you compare that to the FAS3220 then I'd agree the configuration flexibility doesn't seem to be on par.
Whether RAID-5 really slowed down the database is probably up for debate. Unless the workload is over 40% random writes, the problem with RAID-5 is usually that there simply aren't enough spindles for the capacity, which is becoming increasingly common as drive sizes go up while spindle speeds stay fairly constant. Even so, RAID-10 is at best 50% efficient for random writes, and there are better options to optimise writes than RAID-10, such as staging random writes onto SSD as a cache (which, depending on the SSD, can lead to nasty performance degradation over time; see http://event.cwi.nl/damon2009/DaMoN09-FlashWritePerf.pdf : it's a little old now, but still valid).
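The write-penalty arithmetic behind this is easy to sketch. This uses the classic rule-of-thumb back-end penalties (2 writes per host write for RAID-10, 4 for RAID-5 read-modify-write) and an assumed ~180 IOPS per 15k spindle; real arrays with write caching and full-stripe writes will do better than this worst case.

```python
# Rule-of-thumb effective random IOPS under RAID write penalties.
# Penalties: RAID-10 turns one host write into 2 back-end writes,
# RAID-5 into 4 (read data + read parity + write data + write parity).
# The ~180 IOPS per 15k spindle is an assumption for illustration.

def effective_iops(spindles, iops_per_disk, write_frac, write_penalty):
    backend = spindles * iops_per_disk
    # Each host read costs 1 back-end op; each host write costs `penalty`.
    return backend / ((1 - write_frac) + write_penalty * write_frac)

# 24 spindles at ~180 IOPS each, with a 40% random-write workload
raid5  = effective_iops(24, 180, 0.40, 4)  # ~1964 host IOPS
raid10 = effective_iops(24, 180, 0.40, 2)  # ~3086 host IOPS
print(round(raid5), round(raid10))
```

The point above follows directly: at a given capacity, bigger drives mean fewer spindles, so the back-end IOPS pool shrinks regardless of RAID level.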
I'd also challenge the "RAID is done in software so it's slower" assumption; the number of XOR calculations an Intel processor can do in a millisecond completely dwarfs the number required to keep up with disk I/O, even when you have hundreds or thousands of spindles. If you had a really crappy RAID algorithm it might slow things down, but IBM owns pretty much every decent software RAID patent there is (with the notable exception of RAID-DP). The ASIC = faster argument might have held true ten years ago, but not today.
Given that 3PAR are using the same Intel CPUs as everyone else, why on earth would they go to all the trouble and expense of adding the ASICs if their impact was so negligible? I think it suits vendors without ASIC-based offloads to spread the myth that Intel is all you'll ever need. Maybe in a few years' time, but we're not there quite yet. RAID calcs are simple enough, but it's all the other data shaping, workload management, and data integrity overheads that eat CPU time and so add to latency as the IOPS ramp up. So why not offload those functions to a purpose-built bit of kit, while the Intel CPUs handle the mapping and coordination?
I enjoy the ease of use of the 3PAR storage systems from a management perspective; however, their hardware and microcode are questionable. We have experienced numerous outages due to code upgrades, unstable code, and lackluster hardware. I used to be a fan of the product line but can no longer endorse it. I know numerous other companies that have faced similar issues.
Ashley, you’re obviously a troll for a competing vendor. Next time please at least try to provide some facts.
Hey Ashley –
Curious, is your configuration supported? For all software updates you complete a host worksheet with various drivers, firmware, OS levels, etc. If anything is not supported, then you have to send them a statement that you understand it is unsupported and they can't guarantee it will work right, etc.
I performed two updates on an HP 3PAR last year, and one minor patch this year; all three were incident-free. The minor patch did not require a host worksheet since it didn't restart any controllers. The other two did. I had some unsupported configurations (some Ubuntu VMs running software iSCSI), so I shut down those storage connections during the upgrade, just in case.
A few years ago I had some issues with Linux software iSCSI and failover; during one 3PAR upgrade this caused the upgrade to get automatically rolled back multiple times because the iSCSI clients were not logging back into the array in time (if the same hosts don't log in within ~60 seconds, the upgrade process is aborted). In that case there was no outage either; the apps remained online (though there was some pause while MPIO failed over).
If you count MPIO failing over as an outage (which can take up to 60 seconds or so depending on software), then you may want to look into 3.1.2, which removes that step by having the second controller perform a sort of WWN takeover (at least for FC; I am not sure about iSCSI), so the amount of time I/O is paused should be tiny (perhaps a fraction of a second). Of course, most other arrays have similar failover for MPIO; the one exception I think is Hitachi.
My recent upgrades were my first in a 100% VMware environment (the arrays I've had in the past were always a hybrid of physical and virtual systems), and they went so smoothly I couldn't believe it. There was no noticeable impact on anything (I really expected to see a 30-60 second I/O pause but did not notice anything on any system).
Another thing that is common to most arrays is that you have to make sure you have enough capacity to take the performance hit of a controller going down and the system going into write-through mode. I had my first 3PAR controller fail due to hardware last fall (its internal disk failed), and the performance hit was quite large; I did have to shut down a bunch of less critical things to reduce stress on the remaining controller. Fortunately we had recently completed a system upgrade which added 50% more disks (I had finished re-striping literally days earlier). If you happen to have a 4-, 6-, or 8-node 3PAR system, this problem is handled automatically with persistent cache, which mirrors the cache to another controller in the system.
Unfortunately, HP's 4-hour on-site support didn't help a whole lot; they were out of stock on the part and wanted to replace it the next day. Obviously that wasn't acceptable, so after some escalations they managed to get a part from another depot and drive it in, and the system was degraded for about 9 hours in the end. There was never an outage, but performance tanked (more than I expected, given the latency on the spindles was very stable at sub-20ms).
None of my upgrades going back 6 years (though my experience with HP support has been limited compared to 3PAR pre-acquisition) have caused any downtime for supported configurations (the notable exception being when I took the time to shut down the unsupported configs during the upgrade, as mentioned earlier).
Not to say that problems don’t happen though 🙂
I plan to upgrade from 3.1.1 MU1 to 3.1.2 in the next month or so. That is a very large release (the biggest since 2.3.1, which I was the first production customer to deploy). Despite the small version change, there is a massive amount of changes under the hood. I'm not expecting any issues, though I have decided to wait several months for 3.1.2 to get tested more before deploying it myself, as there is nothing too vital in 3.1.2 that I need (the space reclamation algorithm improvement would be the main thing I could use, but it's not critical).
2.3.1 I was in a real rush to get to at the time because of supportability with Exanet NAS.
I'd be curious about the details around one or more of the outages that you've experienced during an upgrade: what software was involved on the server end, what behavior you saw, etc.
Nate, your performance would have tanked due to the system going into write-through mode, i.e. not using cache. The system will not allow the use of cache unless it is mirrored to another node, which in your case was down. This is the same for the EVA and almost every other dual-controller SAN.