Emerging Trends Weekly – October 25th 2022 – Container News

It’s time again for another edition of Emerging Trends Weekly for October 25th, 2022. You can find this week’s edition by following this link. Last week saw the Open Compute Project (OCP) release a bunch of new designs, primarily for the hyperscalers and their ongoing push into large-scale AI/ML operations. Along with those new designs comes a lot of discussion about networking and bandwidth requirements, which appear to be growing at an astonishing rate. We also continue to see a lot of noise around CXL, for memory disaggregation but also for storage operations. While still very much in its infancy, CXL shows a lot of promise; during my time at Fungible, it would come up weekly in conversations with organizations looking at disaggregation of expensive infrastructure assets such as GPUs, storage, and memory. Containers continue to be a huge driver for our industry, and I address that below in this week’s featured article.

Top Ten Stories for the week of Oct 25th 2022:

The deep dive for this week relates to the ongoing growth of Kubernetes/container workloads and the continual shift to application modernization for stateful applications. From a survey done by the Data on Kubernetes (DoK) community, with a summary presented over at The New Stack: More Database, Analytics Workloads Ran on Kubernetes in 2022

More than three-quarters (76%) of survey participants now acknowledge the use of databases on Kubernetes, up from 50% just last year. Analytics workloads have also jumped significantly, the report states, going from 39% to 67%.

Actually running stateful applications (those including data saved to persistent disk storage) is now relatively common in the abstract. A year ago, 55% of the Cloud Native Computing Foundation’s 2021 user survey were doing this. And, based on the DoK report, the mix of application types that use data on Kubernetes appears to be growing.

The new report surveyed more than 500 Kubernetes users who run data workloads on Kubernetes. Consistency and ease of management are the leading factors behind running data workloads on Kubernetes, both of which are critical to ensuring that widespread, production use of containers can be handled.

Notably, among those using data on Kubernetes, there was no increase in utilizing persistent storage, and an actual decline in streaming or messaging workloads.

Commentary: I’ve been watching the growth of Kubernetes since 2014, when Docker presented at the OpenStack conference in Atlanta. At the time OpenStack was riding high and there was a lot of interest in using it to replace existing hypervisor-based platforms. As we have all seen, the focus on OpenStack has dropped off significantly as many organizations embraced containers for their microservice-based applications. Though it’s still used in the telecom, gaming, and service provider industries, its use in the enterprise has waned. Much of that is in part because VMware, Red Hat, and others embraced containers and pushed significant resources into making them easier to manage, deploy, and operate.

All of that said, container usage is growing, and the types of applications for which containers are being used, and their adoption in the enterprise for some of the more traditional workloads, continue to expand. Persistent data is becoming more important for organizations using container workloads, though it still appears to account for only a small percentage of revenue-generating applications. As more database systems are containerized, I expect this number to grow. The full report is linked here and is worth browsing through; the main article above gives a good highlight of the key takeaways.

Posted in Cloud Storage, Cloudy, DevOps, Emerging Tech Weekly, Kubernetes | Leave a comment

Emerging Trends Weekly-ish – September 22nd 2022 – Job Search Blues and the Beast Needs Skittles

A fairly diverse set in the top ten this week; the main edition has a lot more, coming in at 53 stories total, so click the link and go read it all. Well, a bit delayed with this edition, but such is life. The “job hunt” post-RIF has been a bit eye-opening. It seems like there are a lot of “job openings” but not a lot of job hiring. I’m hearing through the LinkedIn grapevine that many organizations are in limited-hiring-freeze territory given the current economic downturn. Glancing at my stock and crypto portfolios as of late, “downturn” seems like an understatement. I will say, one continued source of frustration that I see echoed on social media posts is the lack of up-front clarity about what the interview process is and what to expect. Having a clear set of expectations up front is super helpful, instead of the email/phone-tag circle of hurry-up-and-wait. I’ve noticed the larger companies like Amazon and Google are pretty good at setting the table before you spend too much of your time or theirs. The other thing I’m seeing is larger technology-focused companies reaching out for a first-touch discussion without having a specific job role or opening available. Hey, I’ll always take the lunch, but it has to have some caloric content at least. Oh well, time marches on, and I’ve had some really good discussions with several companies about roles. Still, I’m batting 0-for-2 right now with a runner on third.

This Week’s Top Ten

The big story this week is from our friends over at The Next Platform (who continue to post all of their article titles in ALLCAPS, which I find odd, and given I’m too lazy to retype the headlines, it’s going to be copypasta for them). HYPERSCALERS AND CLOUDS SWITCH UP TO HIGH BANDWIDTH ETHERNET

The hunger for more compute and storage capacity and for more bandwidth to shuffle and shuttle ever-increasing amounts of data is not insatiable among the hyperscalers and large cloud builders of the world. But their appetite for it is certainly always rising and occasionally voracious – even when facing a global economy made uncertain by pandemic and trade wars as well as an actual war.

“We expect 100 Gb/sec Ethernet to have a long tail,” says Boujelbene. “In fact, we expect 100 Gb/sec Ethernet to still comprise 30 percent of the switch ports by 2026. The deployment will be driven by different use cases. Currently 100 Gb/sec is mostly used in the fabric, but as prices continue to come down, 100 Gb/sec will also get adopted in the top of rack for server connectivity, which will drive a lot of volume. 10 Gb/sec died pretty quickly as it got replaced by 25 Gb/sec, which was priced almost at parity with 10 Gb/sec.”

Commentary: Having spent the last 18 months focused almost exclusively on 100G connectivity for storage and composable infrastructure solutions, much of the research out of Dell’Oro rings true. I got my start on the selling side at Emulex when 10G was just kicking off in earnest, and the evolution of data center connectivity speeds continues to be the one constant that stays true. When I started in tech we were using 4Mb Token Ring in our networks; moving further forward, octopus cables for shared 10Mb were all the rage for a time. Then came the jump to 1G and then 10G, which still seems fairly commonplace but has largely been deprecated in favor of 25G.

As we continue to see massive growth in the AI/ML space, as well as in EDA and other high-data-volume workloads, it makes sense to see hypergrowth in 100G+ for the hyperscalers, as well as the research labs and HPC space. But does this necessarily translate down into the common enterprise? Sure, core switching needs a lot of bandwidth, but does your vSphere server need 100G? More than likely the answer is no. With NVMe/TCP growing into an industry-wide standard for storage technologies, however, and scale-out storage products looking to replace DAS, my expectation is that 100G to the appliance will continue to grow, and for a certain segment of traffic-generating systems, 100G will be the de facto connection point. Also, let’s not forget all those shiny new GPUs being announced; they are major bandwidth hogs and will need high-throughput connections as well. This brings up another point made in the article: the price to move a bit.

This level of cost isn’t one that comes up in casual conversation, but like power consumption, IOPS/Watt, $/GB, and other behind-the-scenes calculations that come to the fore when talking to larger-scale infrastructure teams, these little details start to make a big difference in the larger discussion of building modern data center systems. I’ve had a large number of discussions with customers who are implementing 400G, primarily groups who leverage and utilize CERN data or ride on ESnet/StarLight networks between universities, and who are looking to implement 400G and higher (up to 1Tb/sec) connections at global scale. This applies as well to the larger HPC shops who, while still in love with InfiniBand, are probably taking a hard look at possible replacements given its single-vendor status.

Feed the beast

Bottom line, networks are hungry beasts, modern workloads love bandwidth, bandwidth is skittles, and vendors need to feed the beast.

Posted in Cloud Storage, Emerging Tech Weekly, Enterprise Tech, Networking, Storage | Leave a comment

Emerging Tech News – September 7th 2022 – VMware Explores Public Cloud

Click the link for this week’s full edition of Emerging Tech News, September 7th 2022.

Top Ten stories for this week:

Commentary: A lot in the news since last week, which saw the VMware Explore conference and a ton of announcements from VMware and its ecosystem partners. Much of the focus remains on hybrid cloud and the larger exodus from on-prem to managed cloud services on AWS, GCP, and Azure, with the bigger thrust being on AWS. As the Broadcom acquisition soon comes to a close, what remains to be seen is how much of the work VMware has done (and invested in) toward the larger shift to the Big 3 cloud vendors will bear fruit once Broadcom takes control of the reins.

At its initial launch, VMware Cloud on AWS was positioned as a means to allow VMware customers to maintain some semblance of control as they shifted into the major IaaS providers, but it was costly, restrictive, and limited in scope compared to a standard lift-and-shift methodology. Their efforts in going “cloud native” seem to be at odds with what the public cloud offers as a whole, and the smart money is in their shift in focus to “hybrid,” which I’ve always felt was the stronger play for them. Keep in mind, most devs want seamless access to cloud native services, and placing a costly middleware tier in the path just to manage the server basics always seemed a fool’s errand. Retaining your customer base by embracing a hybrid model, and the reduced friction that comes with it, appears at this point to be a smart move on their part.

I wouldn’t exactly say that VMware is giving up on its prior incarnation; much of the news out of Explore this week saw big changes continue to be announced that will give customers much more granular and cost-effective means of provisioning VMware as their hypervisor of choice in the public cloud. More of the focus shifted to modernization of VMware and a mimicking of AWS and its Nitro concept, with the Project Monterey participants now pushing DPUs that run vSphere and related VMware services directly on those cards. Couple that with Aria as a more comprehensive management platform focused on hybrid cloud management, security, and visibility, and you have a fundamental shift in direction for VMware customers to take as they embrace the hybrid cloud model.

Cloud isn’t a location but an operational model, and VMware, with its massive install base, is working hard to remove friction and retain customers with a more comprehensive hybrid cloud model. On top of that, they are working to expand storage offerings outside of vSAN by partnering with NetApp, which, much like VMware, has made a multi-year journey toward being a primary public cloud solution, adding a much more enterprise-admin look and feel to its offerings. As a legacy vendor of technology services that long ago won the hearts and minds (and pocketbooks) of most IT organizations, VMware is working hard to remain relevant in a world where public-cloud-first seems to be the order of the day.

Posted in Cloud Storage, DPU, Enterprise Tech, Hybrid Cloud, Hypervisors, VMWare | Leave a comment

Emerging Tech News August 30th: A Great Week For DPU Tech – A Not So Great Week For Me

For some time I’ve been publishing a Flipboard Magazine at the companies I’ve worked for, collecting interesting articles that come across my RSS feed and aggregating them into a magazine that I publish on a (mostly) weekly basis. It’s a great way to expose the field and go-to-market teams to a lot of information relevant to the industry we work in. I also find it useful as a reference point for announcements about products and solutions I’m interested in. So given my current “FunEmployment” situation, I figured I’d just start posting it here on the old blog that has languished for some time. As for the title, well, my tenure at Fungible, Inc. has ended. You can read about it on Blocks and Files, but suffice to say, I was caught up in the RIF that just occurred. These things happen, and you can’t dwell on it. So I plan to take a bit of time and start looking for opportunities (if you know of one, shoot me a note). Enjoy this week’s edition. If you like it, please subscribe and you will get updates when the magazine publishes.

Ok, onto this week’s edition (click the link)

Top Ten Stories For This Week:

Commentary: VMware Explore (the conference formerly known as VMworld) is in full swing this week, and I couldn’t help but notice all the announcements from DPU manufacturers who have been working for some time now on integrating into the vSphere ecosystem courtesy of Project Monterey. Having spent roughly the last 17 months focused on the DPU market segment, it’s great to see progress being made with this technology and its potential starting to make its way into more commercial use cases. Lots of stuff in this edition, mostly links from the last two weeks, so you have stuff trickling out from VMware Explore as well as Hot Chips 34. DPUs are starting to gain more traction in the market. When I first saw Project Monterey in 2020 I was intrigued; I had also just started as an analyst at Gartner and had an opportunity to see some of the early concepts. At the time it just didn’t jibe with me, but the more I dug into the possibilities, the more interested I became. To really understand why things are moving in this direction, you have to go back to 2015, when Amazon bought Annapurna Labs, whose technology would eventually become AWS Nitro. James Hamilton (whose work is high on my list of truly amazing things) posted a good video last week about the evolution of silicon for AWS, and it’s really worth watching because it homes in on the challenges of introducing new silicon into the data center, but it also illustrates why new silicon is needed.

One of the primary reasons I joined Fungible was that I believed the offloading of compute-heavy tasks from the commodity x86 CPU to a specialized processor designed to perform data-centric tasks faster, more efficiently, securely, and at scale was something that was desperately needed. I still believe that, but that is going to require a blog post all by itself. Enjoy this week’s edition.

Posted in Agile Infrastructure, Cloudy, DPU, Startups, VMWare | Leave a comment

Extending Server Lifespans With Fungible DPU

Since the onset of the COVID pandemic, global supply chains have seen significant challenges affecting nearly every facet of the IT industry: vendors, partners, and consumers alike have faced long lead times for components and the key infrastructure resources that power their business applications and operations. Prices for components have also seen huge increases, with some component prices rising 100x. All of this weighs on the bottom line of organizations that need to grow their infrastructure to accommodate demand and modernization, and to address the challenges of the Data Centric Era that are impacting us all.

With lockdowns and logistics issues continuing to be problematic, the ripple effect may continue to impact all consumers of IT resources. The recent passage of the CHIPS Act is a step toward increasing American investment in processor and component manufacturing, but the time frame for that investment to be realized in the general marketplace is years away. So what can customers do today to address a shortage of infrastructure systems?

Factors Affecting Server Longevity

Let’s focus on the most basic component of data center infrastructure: the x86 server. From my early days in IT with beige Compaq towers onward, the standard design of the data center server has traditionally been a 1U or 2U system, with a single or dual processor, memory, storage, and networking connectivity, and more commonly now a GPU to address the massive adoption of AI/ML and other graphics-intensive workloads. Server design has for the most part remained relatively unchanged, outside of faster processors, increased memory density, flash storage, and higher networking bandwidth. All of these finished components are being impacted by the global supply chain issues.

At the same time, many organizations use a 3-year depreciation cycle for server lifespans. Some of this has to do with tax depreciation schedules, some with the challenges presented by digital transformation and the increase in data creation and movement. Support costs for many infrastructure solutions also tend to increase significantly in years 4 and 5, making older server systems more expensive to keep.

Now you may be asking yourself, what does any of this have to do with DPU technology? Well let’s take a look at why organizations upgrade servers in the first place.

CPU performance and core density: CPU core counts continue to increase, but clock speeds have plateaued. With a larger shift toward doing more work in software vs. hardware, organizations are asking more and more of the commodity CPU. Yet the growth of data, and the data speeds required by modern workloads, forces still more work onto the CPU. It’s not uncommon for deployed systems in virtualized environments to consume 15-20% of the CPU as overhead before an application is even provisioned.

Software Defined Everything: We continue to software-define “all the things” and ask the CPU to do more and more work that it may not be up to the challenge of addressing. While core density is increasing, the costs associated with that growth are rising as well; Intel plans a 20% price increase this year, so asking the CPU to do the heavy lifting of shifting more and more functions into a software-defined model results in a higher cost of doing business. Taking into account the bigger push for Software-Defined Networking, Storage, and Security, the CPU has to do more of the overall work. Add virtualization on top of that, and you are placing a heavy burden on the CPU; organizations almost always end up over-provisioning CPU resources due to the difficulty of predicting demand. This results in higher costs to consumers and, in many cases, a stranding of resources that should be fungible in their consumption.

The Impact of the Data Centric Era: Data growth has been increasing year over year for some time. Each person generates roughly 1.7MB of data per second, or about 143GB of data per day; every text, tweet, transaction, and post is captured and stored. Between 1986 and 2025, a 7,000% increase in stored data (measured in exabytes) is predicted. This massive growth in data requires a massive growth in the components that store and secure it. In addition, consumers are mostly relying on x86 processors to manage, process, transfer, and secure this data.
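A quick back-of-the-envelope check of those per-person figures (a sketch; the 1.7MB/sec rate is the one quoted above):

```python
# Sanity-checking the per-person data generation figures quoted above.
MB_PER_SECOND = 1.7             # quoted per-person generation rate
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

mb_per_day = MB_PER_SECOND * SECONDS_PER_DAY  # 146,880 MB
gb_per_day = mb_per_day / 1024                # convert to (binary) GB

print(f"{gb_per_day:.0f} GB per person per day")  # → 143 GB per person per day
```

The ~143GB/day figure in the text only works out if you read the rate as per second, not per day, which is how I’ve computed it here.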

So these are all big issues on their own, but when combined, they require organizations to modernize their infrastructure in order to keep up with the demands of modern applications and services. Couple this with supply chain challenges, and many organizations are searching for ways to increase the lifespan of their currently deployed server systems. Exactly to this point, Microsoft announced that it was increasing the server life cycle for its cloud infrastructure from 4 to 6 years, a move expected to save Microsoft $3.7 billion. Google announced it would increase its server lifecycle from 3 to 4 years, and Amazon Web Services said it would extend its own server infrastructure to 5 years. So we see the big 3 public cloud providers making big shifts to increase server lifespans, resulting in huge savings to their bottom lines.

Increasing Server Lifespans with the Fungible DPU

So, since you are probably not Google, Amazon, or Microsoft, what can you do to increase the lifespan of the existing server infrastructure assets in your data center? The answer lies in composability and in redefining what a “server” is. The Fungible view is that a server should in essence be a CPU for applications, a GPU for graphics, and a DPU for data-centric and IO operations, with those expensive physical assets composed into the server across your existing Ethernet network. No need for internal storage, no need for RAID controllers, no need for expensive high-density GPU systems. Servers can be streamlined into a less expensive, less complex asset that can be dynamically composed at time of need from a pool of resources the customer controls.

The main goal of this approach is offloading data-centric functions to improve CPU utility and utilization, performance, and resource agility while streamlining operational complexity. The result is a CPU free to do the core work of application computation, accessing data from high-performance, application-centric storage. Access to GPU resources is on demand and can be composed to meet varying requirements without server SKU sprawl. At the same time, older existing server infrastructure can gain the benefit of modern storage, advanced networking, and GPU functions by composing those functions into the older platform. As for increasing the utility of existing server CPUs, the Fungible DPU offloads core data-centric tasks from the CPU, which in turn frees up CPU cores to operate as they were intended to: for applications and business logic.

On average we have seen a reduction of 8-16 CPU cores just by offloading storage initiation from the CPU for high-performance operations. Add the ability to re-compose an existing server into one that can provide GPUs for AI and ML workloads, virtual router offloads, and high-performance storage, and the overall utility of a standard server increases: from one purpose-built to address maybe a handful of applications to one that can be composed to support a broader array of applications.

So now that server you bought 3 years ago that is fully depreciated gets an extension and can continue to be utilized for modern applications and solutions. The CPU can support more applications, containers, or virtual machines thanks to the freeing up of cores that were dedicated to data-centric tasks, and bare metal composability gives that server new life to address modern workloads. So while you may not save billions of dollars the way Microsoft, Google, and Amazon will, you can breathe new life into aging infrastructure and increase compute utility by adopting the Fungible DPU and a composable infrastructure approach.

Posted in DPU, Enterprise Tech | Leave a comment

Defeat the IO Tax with Fungible DPU and Storage Initiation

The increased need for speed and scale in modern applications has been the driving force behind the growth of CPU speeds and core density, as well as the adoption of powerful GPU technologies to support modern workloads that require real-time answers from our data. Network bandwidth requirements have skyrocketed from 1Gb to 100Gb and beyond, with latency expectations dropping below 1ms in many cases. Moore’s Law is reaching its limit, and the commodity x86 CPU is plateauing in its overall rate of performance growth. Modern workloads and cloud native applications have performance requirements that general purpose CPUs will be increasingly incapable of meeting.

Given the trends listed above, what can an organization do to address the peak-CPU era? How can they leverage CPU and GPU architectures to their benefit and, more importantly, focus those processors on the functions they were truly intended to address?

CPUs, GPUs, and now DPUs?

A new class of processor has been introduced recently: the DPU (Data Processing Unit). Its role in the data center is to accelerate and offload data-centric functions from general purpose CPUs, and to act as a catalyst for bare metal composability at scale. Data Centric Computing requires this new class of processor to accelerate the delivery of data and to offload data-centric tasks from the commodity x86 processor, leaving more cycles available for general purpose compute needs. The data center itself is becoming the “computer,” but to achieve that goal we need to address the challenges of forcing the commodity x86 compute layer to service data-centric operations.

What are Data Centric Operations?

Data-centric operations are the core services that handle the processing and movement of data within modern infrastructure and data center systems. These processes have long been forced to run on x86 processors regardless of the CPU’s ability to support them efficiently, effectively, and with high performance at scale.

We have asked so much of the x86 processor over the years, primarily to serve as the one-stop shop for all services that support our infrastructure. Even with the growth of high-core-count processors, the demand for ever more compute power just doesn’t seem to stop.

And why is that? Because we are asking the CPU to do everything, even if it was not designed to do those things well. Good-enough seems to be ok for many, but we at Fungible have a much different approach to supporting the workloads of today and tomorrow. 

Today, customers looking to offload data-centric services from the CPU must leverage a DPU in order to meet the growing networking and IO requirements of modern computing. The more services that can be offloaded from the CPU, the more resources you can give back to the applications and compute-centric services the CPU was designed for. This in turn removes the IO Tax that all organizations pay when using commodity x86 CPUs as the only processor for IO and data-centric services.

The IO Tax

In many data centers today the CPU is the de facto delivery mechanism for IO and data-centric operations. There are many consumption models that deliver these functions: routers, switches, storage arrays, hyperconverged systems, software-defined storage, and so on. Almost all of them use commodity x86 processors to service these functions, and in nearly every instance those systems must over-provision the CPU resources required to perform the tasks. In some cases, FPGAs are leveraged to accelerate data services in conjunction with a CPU, which increases complexity and cost. The CPU ends up becoming a bottleneck, or needs to be over-provisioned to support fluctuations in data operations, traffic spikes, and bursted IO.

At scale, and under high demand, this model will struggle in the long run as the CPU reaches its peak capabilities. High-demand data services in the modern data center are almost universally forced to use host CPU resources for data-centric tasks, so the customer ends up over-provisioning compute resources that may sit idle or, at the other end of the spectrum, be overloaded with data processing requests. With this model, the customer is forced to pay an IO Tax to perform the data services and IO that their applications require, and the CPU resources that should be dedicated to applications are siphoned off to service data-centric tasks.

As an example, under load, a standard 48-core x86 compute node running a high-IO workload will need to consume 8 to 12 cores just to support the operations of, say, a 1 million IOPS workload. In our view this is an “IO Tax” forced upon the organization: the extra burden of delivering data-centric operations is absorbed by an expensive compute resource that should be delivering application services more efficiently. Yet the CPU is asked to handle the data-centric operations, and the customer is forced to over-provision CPU resources to pay the IO Tax.
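Using the numbers from that example, the tax is easy to quantify (a sketch; the core counts are the ones quoted above):

```python
# The "IO Tax" from the example above: the share of a 48-core x86 node
# consumed purely by IO handling for a ~1M IOPS workload.
TOTAL_CORES = 48

for io_cores in (8, 12):  # the range quoted in the text
    tax = io_cores / TOTAL_CORES
    print(f"{io_cores} of {TOTAL_CORES} cores -> {tax:.0%} IO Tax")
# 8 cores -> 17% IO Tax, 12 cores -> 25% IO Tax
```

In other words, a sixth to a quarter of the node’s compute budget never reaches an application.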

The IO Tax Illustrated 

Earlier in the year, working with Nikhef/SURF in Europe, a single server node connected to a single Fungible storage node set a world record of 6.55M 4K read IOPS using NVMe over TCP in the Linux kernel. This was a clear illustration of the power of a CPU-based server driving high IO to a DPU-powered storage node. During this test, the local CPU on the server hit 100% utilization, which illustrates how demanding performant IO operations at this level are when relying on standard 100G network cards and the operating system’s handling of NVMe over TCP.

To illustrate the impact of data-centric operations and the effect of the IO Tax, we wanted to see what would happen when we offloaded the data-centric aspects of driving high IO to the Fungible DPU on the host server as well as at the storage endpoint.

Fungible, working with the San Diego Supercomputer Center, recently ran a test to see how much performance a single server could deliver to a single Fungible storage node using Fungible Accelerator Cards for NVMe over TCP storage initiation. The result was a new world record of 10 million IOPS with the same 4K read workload. Not only did we see a 53% increase in IOPS performance, we also saw that by leveraging DPU-to-DPU communications, CPU utilization dropped drastically, from 100% in the prior record to 24% at peak performance. While this may not be a standard workload, the CPU-utilization reduction was proven out, and the IO Tax was significantly minimized.
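The two headline numbers fall straight out of the record figures (a quick check, using the IOPS and utilization values quoted above):

```python
# Checking the headline numbers from the two record runs described above.
baseline_iops, dpu_iops = 6.55e6, 10.0e6   # CPU-only vs. DPU-assisted records
baseline_cpu, dpu_cpu = 100, 24            # peak CPU utilization (%)

iops_gain = (dpu_iops - baseline_iops) / baseline_iops
cpu_drop = (baseline_cpu - dpu_cpu) / baseline_cpu

print(f"IOPS gain: {iops_gain:.0%}")      # → IOPS gain: 53%
print(f"CPU reduction: {cpu_drop:.0%}")   # → CPU reduction: 76%
```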

Thinking in terms of data-centric operations in the data center, a 76% reduction in CPU utilization is significant, and it highlights the high price the IO Tax exacts. And while a single-server-to-single-storage-node operation isn’t always a standard use case, when you consider the large number of disparate workloads with varied IO profiles, and the scale at which modern applications operate, the reduction in CPU utilization means CPU resources can be freed up for the work they were designed to do.

Putting this into a cost perspective: for each of the systems running the 64-core AMD EPYC processors (retail price of $8,641), that works out to a per-core cost of $135, and a potential savings of $6,518 per CPU!
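The arithmetic behind that claim, roughly (a sketch; the freed-core count here is illustrative, and the exact $6,518 figure depends on how many cores you count as recaptured):

```python
# Rough math behind the per-core savings figure above.
CPU_PRICE = 8641   # USD retail for the 64-core EPYC, per the text
CORES = 64

per_core = CPU_PRICE / CORES
print(f"${per_core:.0f} per core")  # → $135 per core

# Assume the 76% utilization reduction translates to freed cores:
freed_cores = round(CORES * 0.76)
print(f"~${freed_cores * per_core:,.0f} recaptured per CPU")
```

The savings scale linearly with however many cores the offload actually hands back.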


At Fungible, we believe that data-centric operations will continue to demand more resources from the CPU as network speeds and the demand for real-time data insights continue to grow. Offloading data-centric operations from the CPU to the DPU can immediately deliver considerable cost savings to organizations operating at scale with high performance requirements. Customers looking to architect for the future, reduce operating costs, and recapture CPU resources should leverage the DPU in their data center designs to increase resource utilization and gain a competitive advantage.

Posted in DPU

Recognizing Disruption and the Death of the Storage Admin

Fun fact: I wrote this in November of last year. It still holds up, so I figured I'd publish it.


I've spent pretty much the last few years speaking to customers about the many disruptive forces invading the traditional Information Technology space. I normally start these talks with a slide of Steve Jobs holding the first iPhone in 2007, next to the text of a conversation between the two CEOs of RIM, the makers of the Blackberry smartphone: "These guys are really good, this is different." "It's OK, we'll be fine." Fine, you say? At the time, RIM was the dominant smartphone platform along with Nokia; both companies were at the top of the smartphone game and held the dominant market shares. At one point RIM's shares were trading at $135 and its market cap was roughly $40B, with Nokia sitting at $114B. Both companies would continue to grow in market share and value, but only to a point. That point came a short year later with the release of the iPhone 3G; with it, the App Store ecosystem started to gain traction, and how people used their phones began to change drastically.

Now, I was an avid Blackberry user from roughly 1999 (the 850) to 2011. I resisted the iPhone at first and only got one once I moved into a Sales Engineering role that required me to travel a bit more for work. One of the primary reasons I went to the iPhone was the app ecosystem that had built up around it. For anyone who remembers the Blackberry app store, you remember the abject failure that it was, how few apps actually existed for it, and how abysmal the experience of trying to use it was. So as much as I loved that physical keyboard, the user experience became so awful that I simply couldn't stick with the platform. The market seems to have spoken as well: on September 28, 2016, Blackberry announced it would stop designing its own phones. Now, was the Blackberry a bad phone? No, it was actually a really good phone and email device, but that was about it. The iPhone moved my phone from being something I simply communicated with to something I utterly relied upon for my daily life. It became a computer in my hand, not a phone. That's the one thing RIM simply didn't comprehend and failed to act upon, or at least waited too long to address.

Ain’t Nobody Got Time For That

Now this seems like a long preamble to a post about storage, but bear with me. The storage industry holds a lot of parallels to the story above. Storage used to be a complex beast that needed to be tamed by specialists, by SAN engineers, by graybeards (sorry @DeepStorageNet). It was practically a science in the data center, requiring certifications and a deep understanding of the physics behind spinning disks and the complexities of RAID layout, snapshot reserves, controller overhead and limits, drive types, etc. The fact is, you had to spend a lot of time provisioning and designing storage on legacy storage platforms. Even many of the more modern systems deployed today still require some heavy lifting on the design front. Sure, it doesn't take 158 actions to provision a volume the way it did in 2005, but the underlying challenges are still there, even for systems that market themselves as simple. The simple fact remains: when we look at the advances in data center technology, storage tends to be the one system that has failed to keep pace with innovation. It's still fairly beholden to the standards of the head-and-sled design construct, it still doesn't scale well, it still fails to deliver guaranteed performance even when using flash SSDs as the storage medium, and it still requires a specific skill set to provision and leverage properly.

RIP Storage Admin

That brings me to the death of the storage admin and why recognizing disruption is hard. Storage admins, in many respects, are the AS/400 operators of today. Sure, they still exist; sure, they still perform operations; but honestly, most technologists in any given organization look at them as relics. When all you really want is to answer three questions (How Big, How Fast, and Who Accesses), why should that process be tasked to a "specialist"? Why isn't it simply an API call that I pass, or an integration point with my hypervisor, or a near-zero-touch operation integrated into an orchestration workflow? Now, the point isn't to insult or be totally derisive toward the storage teams within organizations. The reality, for me at least, is that they tend to be entrenched in their lines of thinking, and part of that is also turf and job protection. I was the storage admin for a good portion of my career in IT, and I can totally see the other side of this argument. Yes, I did all that heavy lifting and design work, and I took pride in it. It took a long time to build those skills, but at some point I realized I had to move beyond that limited viewpoint, so I got involved in virtualization, cloud, and other technologies, pushing more toward a full-stack engineering skill set.
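To make the "three questions" point concrete, here is a minimal sketch of what such an API call could look like. The function, endpoint shape, and field names are all hypothetical (not any particular vendor's API); the point is simply that a volume request reduces to size, speed, and access:

```python
import json

def provision_volume(name, size_gb, max_iops, allowed_initiators):
    """Build the payload for a hypothetical volume-provisioning API call.

    How Big      -> size_gb
    How Fast     -> max_iops (a QoS ceiling)
    Who Accesses -> allowed_initiators (host IQNs/NQNs granted access)
    """
    return {
        "volume": {
            "name": name,
            "sizeBytes": size_gb * 1024**3,
            "qos": {"maxIOPS": max_iops},
            "access": allowed_initiators,
        }
    }

# One call from an orchestration workflow, no specialist required.
payload = provision_volume("app-db-01", 500, 15000,
                           ["iqn.1998-01.com.example:host42"])
print(json.dumps(payload, indent=2))
```

Whether this is posted to a REST endpoint, passed through a hypervisor plugin, or emitted by an orchestration tool is an implementation detail; the interface is three answers.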

So, to tie all this rambling together into some coherent conclusion: as we saw with the smartphone market, the move from just providing phone and email to a device that allowed a full computing experience is what killed RIM and Nokia. Their technology was sound and did what it was designed to do, but as organizations they failed to see the trends being adopted by the customer ecosystem. The same is happening in how storage is being adopted, deployed, and consumed in many organizations. Storage is becoming a service to be consumed and utilized, a platform that accelerates the deployment of applications, tools, and resources. As I've claimed many a time, there is no book called "The Joy of Menial Tasks." At the end of the day, our customers (i.e., the people we provide resources for) are looking for the same simplicity they get when they go to the Apple App Store, download an app, and start using it. We need to deliver storage resources in the same fashion. Now, obviously, I have my biases, but after nearly two years working at SolidFire, I still have not seen a solution that delivers the simplicity of design and operation, with the multiple disparate consumption models, that SolidFire does.


Posted in Cloud Storage, Enterprise Tech, SolidFire, Storage, Uncategorized

Podcast Idol Application

Dear readers (what few of you there may actually be after a one-year blogging hiatus): oh happy day is upon us, for I have returned. Just as the phoenix rises from the flames, I too have managed to do what comes naturally, that is, talk about myself.

In all actuality, I have been hella busy. The last four months post-NetApp acquisition have been a challenge, to say the least. It's also been a great learning opportunity, and I plan to start chronicling a lot more of that process on these here pages for you, dear reader (count your lucky stars)!

Obviously, that time is not now. For those of you who listen to the Speaking In Tech podcast, you will notice that Sara Vela has decided to leave the show to pursue a career in opera, or street singing, or something of that nature (Keep Austin Weird). Alas, this leaves an opening in the show's cast for a new member to join and opine weekly on all the tech things. Thus, I figured why not put my application into a blog post and force myself to actually sit down and write something.

Being an illeist (look it up), here is my submission for Podcast Idol.

You know the Gabriel; he's actually been on Speaking In Tech before, and occasionally he shows up on his own group podcast called In Tech We Trust (always be plugging). The Gabriel would make an excellent co-host and/or host, sidekick, manservant, lackey, digital butler, minion, and/or cabana boy for the Speaking In Tech podcast. Having achieved the status of "employable" in the technology field for the last 20 years, Gabriel has managed to amass a stunning array of unimpressive knowledge on which he is in no way qualified to deliver his opinion. This would include some of, all of, or possibly even none of the following:

  • boxes that you plug wires into
  • wires you plug into boxes
  • information we used to store in books
  • books we now store on disks
  • disks we now store inside of boxes that you plug wires into
  • hyphenated words of varying length
  • tubes that carry the information we used to store in books but also store on disks inside of boxes with blinking lights
  • consumption of meat and meat like products

As you can see, these are top-of-mind subjects that any podcast listener would find utterly fascinating, and of course they would hold their interest for the duration of at least 60 seconds, or the amount of time it takes to swipe left or right on Tinder (whichever is shorter). This is obviously the most important skill required for Speaking In Tech, because let's face it, informed commentary on technical matters boils down to rolling bones in a pan, deciphering their meanings, and then making a proclamation of fact.

All of that said, please simply provide Gabriel with the proper credentials and an invite to the Speaking In Tech Slack channel, and let's get this show started.

Oh and as for that other show I’m on, who said you can’t do two podcasts as poorly as one?

Posted in Enterprise Tech, Facepalm, Podcasts, Tech Marketing

Musings on the last year, successful exits, and the future in general

Let me preface this one by saying there has been a lot of stuff locked away in my noggin that I've simply not taken the time to sit down and spit out.

Fun fact, I’ve been busy.

Since joining SolidFire in April, my day job has taken up a fairly significant amount of my time; not to mention, I've done a good amount of writing for SolidFire that has not all found its way over here (just in case you are curious, take a look). At SolidFire, my goal has been to make as much of a personal contribution to the success of the organization as possible. Initially, my role was to take over responsibility for the Agile Infrastructure deliverable collateral as part of SolidFire's Tech Solutions team. I'm personally indebted to Jeremiah Dooley for setting the bar high enough that I felt compelled to meet it (a tall order indeed). The result of this effort was primarily a mixed-workload Reference Architecture design around Dell compute/networking, VMware virtualization, and SolidFire storage. You would be amazed how much effort goes into a Reference Architecture design, but as challenging as it was, I learned a lot about myself and my abilities during the process.

A Promotion

With the Agile Infrastructure solutions work taking a lesser priority within the organization, and with some personnel changes that altered the team's structure, I found my time increasingly focused on customer outreach and engagement, assisting the West team as needed in the role of Principal Field Architect (yay, a promotion). It was refreshing to get back into the field, in front of customers, working daily to move deals across the finish line. Part of me missed the day-to-day trench warfare, but in this role there is a larger emphasis on being an overlay to a larger territory and team, or as I'm fond of saying, "being a Prime Mover."

Enter NetApp

Now the time finds me on the cusp of a new change with the impending acquisition by NetApp. Obviously, for anyone who has listened to me on the In Tech We Trust podcast, I've not been a major fan (and that's putting it lightly). My common refrain has been that NetApp is the Blackberry of the storage world: failing to keep abreast of changes in the space, rigid in its thinking, inflexible, full of hubris, all the things that leave an organization ripe to be surpassed and eclipsed by the very startup companies I've come to enjoy working for. I would forever hate myself if all of a sudden I put on the Homer trousers and started pushing the ONTAP message; in fact, I'm half tempted to make a T-shirt that says the only thing I serve on tap is beer (as a certain CEO once told me, "You are a walking HR violation"). Obviously, I probably won't do that, and just in case Mr. Kurian reads this, I promise to only wear that shirt at home. But to a larger point, I feel it's important to stick to one's principles and not fully sell out, and frankly, in my opinion, the reason Mr. Kurian took a strong interest in SolidFire was exactly because we were not ONTAP and not traditional NetApp in our thinking, design, implementation, marketing, sales, and corporate esprit de corps. In that respect, I do feel compelled to move forward if an offer is made to retain my services. There is something about being able to challenge the status quo from within the belly of the beast that has always appealed to me.

Big Names Open Big Doors

Prior to leaving Cisco, I managed to have a 30-minute 1:1 with Chuck Robbins. I got to sit in his office and chat about anything that came to mind. It was actually a great experience to sit down and trade stories about how we both entered the industry and what drives us. Of the several things that stood out from that meeting, the one that stuck most was something Chuck said: "In the last week I've met with both Hillary Clinton and Bill Clinton, the Prime Minister of South Korea, and several major business leaders, and the reason they meet with me isn't because of who I am, it's because of the logo on this card." There is a lot of truth in that. When you are 55 people at a startup with a killer idea, amazing technology, and a product that solves many problems, opening the door to a major company's bathroom is tough, let alone getting a meeting with anyone who can actually make a decision. Having the heft of a major corporation behind you does indeed get you more at-bats (that challenge, in and of itself, is another thing I love about startup land); it may seem like the cards are always stacked against you, but they're not, it just takes persistence. All that said, I see a very bright future for the SolidFire team within NetApp, primarily because, outside of my snarky viewpoint, the company has built an astonishing channel organization and customer base over the last few decades, and frankly deserves respect for that. It's very easy to get caught up in the mortal combat of the us-vs-them world of technology sales, but remember, at any time you can end up working for one of your biggest competitors. Hubris goes both ways.

Successful Exits Are Tough

Not one to make major predictions with each passing year, I do believe the infrastructure vendors are in for some challenging times ahead, especially a number of the private storage players. Tegile, Tintri, and Kaminario all face uphill struggles as that space collapses. The Dell/EMC situation confuses things and creates opportunities, but is there enough time and runway for those other players to garner enough sales to make successful exits themselves? Frankly, I don't see IPO as a route to exit for very many organizations. The public players Nimble, Violin, and Pure all face challenges themselves, and their respective stock prices foretell a bleak future in many respects unless they adapt their product lines, since single-product platforms with limited true disruptive power are not long for this world, at least as I see it. Adapt quickly to the new model, which tends to be software-oriented in nature (alas, that's a subject for another blog post altogether), or starve. That's not to say all software options are going to hit it big. There are a large number of software storage plays out there today that have hitched themselves to current trends (aka hyperconverged) but will not generate enough momentum to break out; they will fail completely or get picked up for pennies on the dollar for their IP. The VC pools are tightening in light of recent events; even relatively powerful entrants like Nutanix face significant challenges as their burn rates and sales expenditures outpace their ability to generate revenue. At some point you are not a viable concern: too expensive to acquire, too sales-poor to IPO, and that down round is the kiss of an impending death.

What The Future Holds

Ramble much? Yeah, I guess I had a lot sitting in the noggin indeed. As for the future, at this moment it's hard to tell. There are many unanswered questions in front of me. For all intents and purposes, nothing has actually changed at all (not till the ink's dry), so for me it's business as usual. I'm still visiting customers, I've got travel booked for corporate events, and I'll be in Berlin for Cisco Live Europe. I have obligations that I feel I need to meet, and frankly, I think there are various upside options I've not fully taken into consideration when it comes to a future with NetApp. I wouldn't consider it fully prudent to simply move on without hearing them out, and there are some great people there I'd like to chat with about longer-term strategic goals.

Still, if I'm 100% honest with myself, there is something that really draws me to startup culture, especially the early days in that pre-A-round/A-round time frame where the beta customers are being engaged, the story is being baked and formulated, and the go-to-market message is being crafted, tested, rehearsed, and finalized. It's that period where a handful of individuals can make an amazing impact on the future direction of a product and company. That's not to say those things can't be accomplished within a larger organization; it's just that the impact isn't quite the same, and there is also a safety net that, in my view, may not properly motivate the individual. Still, there are some major shifts afoot that I'm interested in being part of, so I'm open to discussion.

Posted in Cloud Storage, Sales Engineering, SolidFire, Startups

Too Much Information Running Through My Brain

AKA: A long rambling post of disparate ideas that are loosely coupled together but really kind of maybe not so much.

The Police were one of the very first bands I fell in love with. If I go back to 3rd/4th grade, when I started to get into music, I had three cassette tapes that factored heavily into my personal rotation: Blondie's Parallel Lines, The Police's Ghost in the Machine, and The Cars' self-titled album. All three of those albums hold up pretty damn well in my opinion, but The Police have always been my first real musical crush, and Ghost in the Machine is probably my favorite album still to this day. One track that always sticks out to me is "Too Much Information":

Over my dead body
Over me
Over you
Over everybody

Too much information running through my brain
Too much information driving me insane

This is pretty much how I feel some days when I look at the current state of data center computing and what the future of what we used to call "computing" is becoming. My day job brings me into contact with a fairly interesting customer space, and it's well beyond where I initially got my start in technology, i.e., the Windows 3.x era.

Today, what most of us consider the "Enterprise" is facing a weird form of technological midlife crisis when it comes to the "how" of the data center, as well as the direction in which to take their development, delivery, and daily workflows. The "Great Migration" off of ITIL to DevOps has begun, but as with all migrations, the pace is slow, and the mass extinction event has yet to happen. That event is coming, and when I look at the legacy Enterprise-focused vendors, aka the "dinosaurs," I'm not sure they have the ability to evolve fast enough for the pace of change we are facing today. You get the feeling that many decision makers in the Enterprise space desire a move toward service-based IT delivery, but they have seen it fail because of a plethora of roadblocks: the technology wasn't there, the team wasn't capable of delivering, the organization as a whole couldn't adjust to the rapidity of change. It could be all or none of those things. The one thing that does ring true, though, is that there is a need to move and do something; what that is, I think, is still being realized.

Those in the traditional hardware space are going to be hit the hardest as these changes build and move from the web giants, service providers, and content delivery companies down into the Global/Fortune 2000 Enterprise space, and eventually further. For the sake of argument, let's put public cloud out of the picture for this discussion; that in and of itself could take up a few dozen pages of speculation, and it's not where I want to spend my time today.

There was a truly awesome article in The New Stack that was kind of the genesis of today's ramblings: How the Enterprise Adopts the New Stack, or "I Said No Dammit" (stop now and read the entire thing). The timeline below is, in my view, fairly accurate for many Enterprise customers:

  1. Year 1: The way we do things is great; all that crap that startups are doing is basically just the same as what we do, but obviously we’re better.
  2. Year 2: That stuff that startups are doing works for them, but it wouldn’t work for us because we have different requirements.
  3. Year 3: If we were starting from scratch, we’d do it the way the startups do, but we aren’t, and we have different requirements.
  4. Year 4: Yeah, OK, we should probably do a pilot project to see if we can do that new thing that the startups are doing.
  5. Year 5: Ok, yeah, actually, it is pretty good. We’re going to do it everywhere and be awesome again. We are embarking on a two-year plan to apply that startup stuff everywhere. It will be expensive, but it’s the right thing, and then we’ll really be set.
  6. Year 8: OK, so it took us three and a half years instead of two, but we’re finally done!
  7. Go to step one.


I'm fairly certain that there are a lot of people who see what's coming out of startup land whose first reaction is "I Said No Dammit." Honestly, I've been there, so I understand where they are coming from, and it takes a significant amount of exposure to the startup space to get into that new line of thinking. I visited an ex-coworker this week whose company is going through a massive consolidation effort: roughly 12 formerly independently run organizations are having their IT departments collapsed into a single entity (something that should have happened a decade ago). It's an opportunity like that one that opens the door to a lot of pretty significant adjustments and changes for the better. But after chatting with my friend, I got the feeling that many of the options available today that would greatly benefit the organization as a whole will be met with Ignore, Ignore, No, No, and I Said No Dammit, with that last stage elongated from a year to maybe three.

But why is this? Why is there such pushback on the new things? I used to think it was simply risk aversion and "we've always done it that way" thinking, and both of those hold true for many organizations that seem to move at a glacial pace when it comes to innovation and adoption of new technologies. But I do think there is another factor: by damn if there isn't just too much shit out there to take in and absorb in a rational manner.

Let's just take two of today's hype-cycle darlings: containers and OpenStack. Now, I've probably seen two dozen articles in the last week that mention Docker, CoreOS, Kubernetes, and Mesos. The last 24 months have seen a massive push into the container space, with barrels of ink spilled on how they are the death of VMware, how they will change the way virtualization in the data center is approached, etc. Then of course there is OpenStack, its adoption curve, and the growth of the private cloud ecosystem. Follow this with the rapid assimilation of most of the managed OpenStack players by major companies like Cisco, HP, and IBM. All of this activity points to a "sea change" in data center virtualization and workload delivery: the push toward the Amazonification of the private cloud for the Enterprise customer. But how much of this is rooted in reality? How many Enterprise customers are actually leveraging containers as the key foundation of their production environments? How many have their mission-critical functions utilizing this technology? If I had to hazard a guess, that number sits at fewer than 100 when we look at the Global 2000. Of course, a lot of this brings to the forefront the death of pets vs. cattle as a concept, but honestly, that time is a long way away.

Two great data points to point to: BT complaining about the lack of maturity of OpenStack and its support for NFV (honestly, a poor nitpick), couched against Bank of America and their challenges in having to craft proprietary systems to bring OpenStack to maturity for production use in their environment. Both pieces make some very good points and bolster the case that OpenStack is still a maturing technology that requires heavy lifting to implement to its full potential. The same can be said for the promise of container technologies: steep learning curves, significant complexity, and a lack of a strong talent pool are roadblocks to adoption in the Enterprise space. All of that said, the space is maturing, and it will be interesting to watch it continue to do so.

As you can tell, there is simply too much going on in my brain, and this was a very poor attempt to dump some of it out. Honestly, this is what happens when you keep waking up at 2:47 AM all week and are running on 3-5 hours of sleep a night. Fun facts: I'll be at Dell World next week spittin' hot SolidFire knowledge. The following week finds me in the land of robots, Tokyo, Japan, for the OpenStack Summit. Impending American Godzilla photo album to be posted afterwards. If you are attending either of these events and feel froggy, jump over to see me.

Posted in Cloudy, DevOps, Enterprise Tech, OpenStack