Since the onset of the COVID pandemic, global supply chains have faced significant challenges. This affects nearly every facet of the IT industry, from vendors and partners to consumers. All have faced long lead times for components and key infrastructure resources that power their business applications and operations. Component prices have also seen huge increases, with some rising as much as 100x. All of this weighs on the bottom line of organizations that need to grow their infrastructure to accommodate demand and modernization, and to address the challenges of the Data Centric Era that are impacting us all.
With lockdowns and logistics issues continuing to be problematic, the ripple effect may continue to impact all consumers of IT resources. The recent passage of the CHIPS Act is a step toward increasing investment in American manufacturing of processors and components, but that investment will take years to be realized in the general marketplace. So what can customers do today to address a shortage of infrastructure systems?
Factors Affecting Server Longevity
Let’s focus on the most basic component of data center infrastructure: the x86 server. From my early days in IT with beige Compaq towers onward, the standard data center server has traditionally been a 1U or 2U system with one or two processors, memory, storage, and networking connectivity, and more commonly now a GPU to address the massive adoption of AI/ML and other graphics-intensive workloads. Server design has for the most part remained relatively unchanged, outside of faster processors, increased memory density, flash storage, and higher networking bandwidth. All of these finished components are being impacted by the global supply chain issues.
At the same time, many organizations use a 3 year depreciation cycle for server lifespans. Some of this has to do with tax depreciation schedules, some with the challenges presented by digital transformation and the increase in data creation and movement. Support costs for many infrastructure solutions also tend to increase significantly in years 4 and 5, making older server systems more expensive to keep.
Now you may be asking yourself: what does any of this have to do with DPU technology? Well, let’s take a look at why organizations upgrade servers in the first place.
CPU performance and core density: CPU core counts continue to increase, but clock speeds have plateaued. With a larger shift of work from hardware into software, organizations are asking more and more of the commodity CPU. The growth of data, and the data speeds required by modern workloads, forces still more work onto the CPU. It’s not uncommon for deployed systems in virtualized environments to consume 15-20% of the CPU in overhead before an application is even provisioned.
Software Defined Everything: We continue to software define “all the things” and ask the CPU to do more and more work that it may not be well suited to handle. While core density is increasing, the costs associated with that growth are rising as well; Intel plans a 20% price increase this year, so asking the CPU to do the heavy lifting of ever more software defined functions raises the cost of doing business. Taking into account the bigger push for Software Defined Networking, Storage, and Security, the CPU has to do more of the overall work. Add virtualization on top of that and you place a heavy burden on the CPU, so organizations almost always end up over provisioning CPU resources because demand is difficult to predict. This results in higher costs to consumers, and often the stranding of resources that should be fungible in their consumption.
The Impact of the Data Centric Era: Data growth has been increasing year over year for some time. On average, each person generates 1.7MB of data per second, or roughly 143GB per day. Every text, tweet, transaction, and post is captured and stored. From 1986 through 2025, a 7,000% increase in stored data, measured in exabytes, is predicted. This massive growth in data requires a massive growth in the components that store and secure it. In addition, consumers are mostly relying on x86 processors to manage, process, transfer, and secure this data.
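As a quick sanity check, the two per-person figures quoted here are consistent with each other (using 1GB = 1,024MB):

```python
# Verify that 1.7MB generated every second works out to roughly 143GB per day.
MB_PER_SECOND = 1.7
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

mb_per_day = MB_PER_SECOND * SECONDS_PER_DAY  # 146,880 MB
gb_per_day = mb_per_day / 1024                # convert at 1 GB = 1,024 MB

print(f"~{gb_per_day:.0f} GB of data per person per day")
```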
These are all big issues on their own, but combined they require organizations to modernize their infrastructure in order to keep up with the demands of modern applications and services. Couple this with supply chain challenges, and many organizations are searching for ways to increase the lifespan of their currently deployed server systems. Exactly to this point, Microsoft announced that it was increasing the server life cycle for its cloud infrastructure from 4 to 6 years, a move expected to save Microsoft $3.7 billion. Google announced it would increase its server lifecycle from 3 to 4 years, and Amazon Web Services stated that it would extend the lifespan of its own server infrastructure to 5 years. So we see the big 3 public cloud providers making big shifts to increase server lifespans, resulting in huge savings to their bottom lines.
Increasing Server Lifespans with the Fungible DPU
Since you are probably not Google, Amazon, or Microsoft, what can you do to increase the lifespan of the existing server infrastructure assets in your data center? The answer lies in composability and in redefining what a “server” is. The Fungible view is that a server should in essence be a CPU for applications, a GPU for graphics, and a DPU for data centric and I/O operations, with those expensive physical assets composed into the server across your existing Ethernet network. No need for internal storage, no need for RAID controllers, no need for expensive high density GPU systems. Servers can be streamlined into less expensive, less complex assets that can be dynamically composed at time of need from a pool of resources that the customer controls.
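To make “composed at time of need from a pool of resources” concrete, here is a minimal, purely hypothetical sketch of the idea in Python. The class and method names are illustrative only and are not the Fungible API: a shared pool tracks free GPUs and network-attached volumes, and attaches one to a server on demand.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Resource:
    kind: str                        # e.g. "gpu" or "nvme_volume"
    ident: str                       # device ID or network target name
    attached_to: Optional[str] = None

@dataclass
class ResourcePool:
    free: List[Resource] = field(default_factory=list)

    def compose(self, server: str, kind: str) -> Resource:
        """Attach the first free resource of the given kind to a server."""
        for res in self.free:
            if res.kind == kind:
                self.free.remove(res)
                res.attached_to = server
                return res
        raise LookupError(f"no free {kind!r} in the pool")

    def release(self, res: Resource) -> None:
        """Detach a resource and return it to the shared pool."""
        res.attached_to = None
        self.free.append(res)

# A depreciating 2U host picks up a pooled GPU only while it needs one.
pool = ResourcePool([Resource("gpu", "gpu-0"),
                     Resource("nvme_volume", "vol-7")])
gpu = pool.compose("server-42", "gpu")
print(gpu.attached_to)  # server-42
pool.release(gpu)       # back in the pool for the next workload
```

The point of the sketch is the lifecycle: nothing is permanently cabled into one chassis, so an asset sits in the pool whenever no server is using it.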
The main goal of this approach is offloading data centric functions to improve utility, performance, and resource agility while streamlining operational complexity. The result is a CPU free to do the core work of application computation, accessing data from high performance, application centric storage. GPU resources are accessed on demand and can be composed to meet varying requirements, so instead of building customized servers that create server SKU sprawl, customers can extend the life of server infrastructure they have already paid for. At the same time, older existing servers gain the benefit of modern storage, advanced networking, and GPU functions by composing those functions into the platform. As for increasing the utility of existing server CPUs, the Fungible DPU offloads core data centric tasks from the CPU, freeing up CPU cores to operate as they were intended: for applications and business logic.
On average we have seen a reduction of 8-16 CPU cores simply by offloading storage initiation from the CPU for high performance operations. Add to that the ability to re-compose an existing server into one that can provide GPUs for AI and ML workloads, virtual router offloads, and high performance storage, and the overall utility of a standard server increases: from one purpose built to address maybe a handful of applications to one that can be composed to support a broader array of applications.
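To put that 8-16 core range in context, here is a hypothetical back-of-the-envelope consolidation estimate. The host size and VM shape are assumptions for illustration, not measured figures; only the core-reduction range comes from the text above.

```python
# Illustrative math only: estimate how many extra VMs a host could run once
# storage-initiator work moves off the CPU. TOTAL_CORES and VCPUS_PER_VM are
# assumed values; the 8-16 reclaimed-core range is the one cited in the text.
TOTAL_CORES = 64                        # assumed dual-socket host
OFFLOADED_LOW, OFFLOADED_HIGH = 8, 16   # reclaimed-core range from the text
VCPUS_PER_VM = 4                        # assumed VM shape

for reclaimed in (OFFLOADED_LOW, OFFLOADED_HIGH):
    extra_vms = reclaimed // VCPUS_PER_VM
    pct = 100 * reclaimed / TOTAL_CORES
    print(f"{reclaimed} cores reclaimed ({pct:.1f}% of host) "
          f"-> {extra_vms} more {VCPUS_PER_VM}-vCPU VMs")
```

Even at the low end of the range, that is meaningful capacity recovered on hardware the organization already owns.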
So that server you bought 3 years ago, now fully depreciated, gets an extension and can continue to be utilized for modern applications and solutions. The CPU can support more applications, containers, or virtual machines because cores once dedicated to data centric tasks are freed up, and bare metal composability gives that server new life to address modern workloads. So while you may not save billions of dollars the way Microsoft, Google, and Amazon are, you can breathe new life into aging infrastructure and increase compute utility by adopting the Fungible DPU and a composable infrastructure approach.