On array auto-tiering

A good discussion about auto-tiering popped up at ARS, and I responded as follows:

I’m honestly not sold on on-array tiering, at least right now, though I can see its usefulness for certain types of workloads.

From what I’ve seen from the array manufacturers, none of the arrays can actually predict demand and move your cyclical hot data up before it’s needed. For example, at month end I need a lot of data that was cold all month moved to the hot tier for the batch processes that run when we close the month. I know this may be specific to my workflow, but by the time the data has migrated into the hot tier, the processing could already be over, and that data goes cold again. I see that as a waste of cycles.
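To make the timing problem concrete, here is a toy model of a purely reactive tiering policy, the kind that only promotes data after it sees heat. The scan interval and promotion lag are made-up illustration numbers, not any vendor’s actual behavior:

```python
# Toy model: a reactive tier engine promotes data only after it observes
# access heat. scan_interval_days and promote_lag_days are assumptions
# for illustration, not real array parameters.

def reactive_promotion_day(access_days, scan_interval_days=1, promote_lag_days=1):
    """Return the day a reactive tiering engine finishes promoting data
    that first becomes hot on access_days[0]."""
    first_access = access_days[0]
    # The array only notices the heat on its next scan...
    noticed = first_access + scan_interval_days
    # ...then spends time actually moving the chunks up a tier.
    return noticed + promote_lag_days

batch_days = [30, 31]           # month-end close: the only days this data is hot
done = reactive_promotion_day(batch_days)
print(done)                     # the day promotion completes
print(done > max(batch_days))   # True means the hot tier arrived too late
```

With these numbers the promotion lands on day 32, after the close has already run on slow disk, which is exactly the waste-of-cycles scenario described above.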

I’ve had several discussions about chunk sizes, the efficiency of particular sizes, and the processor utilization it takes to move that data around and track it. 64 KB, 1 MB, 256 MB, or 1 GB: that’s a lot of data flying around the array from tier to tier, and it takes a significant amount of overhead to facilitate those operations. I’m sure today’s processors handle these operations far more easily than they used to, but it’s still overhead that may or may not have real-world benefit.
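Some rough arithmetic shows why chunk size matters for tracking overhead. The 64 bytes of heat-map state per chunk is my assumption for illustration, not a figure from any vendor:

```python
# Rough arithmetic: how many chunks the array must track and score per TiB
# at the chunk sizes mentioned above. The 64 bytes of metadata per chunk
# is an assumed figure, not a vendor specification.

TIB = 1024**4
META_BYTES_PER_CHUNK = 64  # assumption: heat counters, tier location, etc.

def chunks_per_tib(chunk_bytes):
    return TIB // chunk_bytes

for label, size in [("64 KB", 64 * 1024),
                    ("1 MB", 1024**2),
                    ("256 MB", 256 * 1024**2),
                    ("1 GB", 1024**3)]:
    n = chunks_per_tib(size)
    meta_mib = n * META_BYTES_PER_CHUNK / 1024**2
    print(f"{label:>6}: {n:>10,} chunks/TiB, ~{meta_mib:,.2f} MiB tracking state")
```

At 64 KB chunks that’s roughly 16.8 million extents to score per TiB versus about a thousand at 1 GB, which is the trade-off in a nutshell: fine granularity moves less cold data along with the hot, but the bookkeeping explodes.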

I think if a storage system is designed properly, you can mitigate the need for the secret sauce. Every vendor is going to do it differently, and then claim that theirs is the best. Same with de-dupe. In the end, how much benefit are you really achieving, and is it reducing TCO? I’m not sure I can answer that yet.

I think a lot of my experience comes from having an array that doesn’t do tiering, but then again, doesn’t need it. RAID has been around for a long time, and honestly I think it’s long past time we moved beyond it. The limitations of arrays with multiple RAID sets have forced us to implement schemes that mitigate its shortcomings, and in turn the storage vendors pass the increased cost on to end users when, in all actuality, it’s still JBOD.

I think when a company comes around and breaks the mold, or introduces something outside the status quo, it becomes the target of a tremendous amount of FUD. I saw it with EqualLogic when they first came on the scene and offered all the goodies (thin provisioning, replication, etc.) at a moderately low price. The same with 3PAR when they introduced thin provisioning (which everyone quickly followed suit on), XIV with their non-RAID grid architecture, and to an extent Isilon with their scale-out NAS (even EMC was smart enough to see the writing on that wall). I’d throw Tintri in there as well, with their approach to storage for VMware environments.

The status quo fears change. Many times to their own detriment.

This entry was posted in Storage & Virtualization.
