A fairly diverse set in the top ten this week; the main edition has a lot more, coming in at 53 stories total, so click the link and go read it all. Well, a bit delayed with this edition, but such is life. The job hunt post-RIF has been a bit eye-opening. Seems like there are a lot of "job openings" but not a lot of job hirings. Hearing through the LinkedIn grapevine that many organizations are in limited-hiring or hiring-freeze territory given the current economic downturn. Glancing at my stock and crypto portfolios as of late, downturn seems like an understatement. I will say, one continued source of frustration that I see echoed on social media is the lack of up-front clarity about what the interview process is and what to expect. Having a clear set of expectations up front is super helpful, instead of the email/phone-tag circle of hurry up and wait. I've noticed the larger companies like Amazon and Google are pretty good at setting the table before you spend too much of your or their time. The other thing I'm seeing is larger technology-focused companies reaching out for a first-touch discussion without having a specific job role or opening available. Hey, I'll always take the lunch, but it has to have some caloric content at least. Oh well, time marches on, and I've had some really good discussions with several companies about roles. Still, I'm batting 0-for-2 right now with a runner on 3rd.
This Week's Top Ten
- Google Cloud boosts its storage capabilities to meet customers’ evolving needs
- How to Build a Hybrid Cloud Storage Strategy
- WHY AREN’T THERE SOFTWARE-DEFINED NUMA SERVERS EVERYWHERE?
- Ansible vs. Terraform Demystified
- Storage Admins Aren’t Ready for Infrastructure as Code
- The AI Unbundling
- Five Common AI/ML Project Mistakes
- NVIDIA Hopper AI Inference Benchmarks in MLPerf Debut Sets World Record
- The Recovery-as-a-Service (RaaS) Market in 2022
- What Does the Post Crash VC Market Look Like?
The big story this week is from our friends over at The Next Platform (who continue to post all of their article titles in ALL CAPS, which I find odd, and since I'm too lazy to retype the headlines, it's going to be copypasta for them): HYPERSCALERS AND CLOUDS SWITCH UP TO HIGH BANDWIDTH ETHERNET
The hunger for more compute and storage capacity and for more bandwidth to shuffle and shuttle ever-increasing amounts of data is not insatiable among the hyperscalers and large cloud builders of the world. But their appetite for it is certainly always rising and occasionally voracious – even when facing a global economy made uncertain by pandemic and trade wars as well as an actual war.
"We expect 100 Gb/sec Ethernet to have a long tail," says Boujelbene. "In fact, we expect 100 Gb/sec Ethernet to still comprise 30 percent of the switch ports by 2026. The deployment will be driven by different use cases. Currently 100 Gb/sec is mostly used in the fabric, but as prices continue to come down, 100 Gb/sec will also get adopted in the top of rack for server connectivity, which will drive a lot of volume. 10 Gb/sec died pretty quickly as it got replaced by 25 Gb/sec, which was priced almost at parity with 10 Gb/sec."
Commentary: Having spent the last 18 months focused almost exclusively on 100G connectivity for storage and composable infrastructure solutions, I find that much of the research out of Dell'Oro rings true. I got my start on the selling side at Emulex when 10G was just kicking off in earnest, and the evolution of data center connectivity speeds continues to be the one constant. When I started in tech we were running 4Mb Token Ring in our networks; later, octopus cables for shared 10Mb were all the rage for a time. Then came the jump to 1G and then 10G, which still seems fairly commonplace but has largely been deprecated in favor of 25G.
As we continue to see massive growth in the AI/ML space, as well as in EDA and other high-data-volume workloads, it makes sense to see hypergrowth in 100G+ for the hyperscalers, the research labs, and the HPC space. But does this necessarily translate down into the common enterprise? Sure, core switching needs a lot of bandwidth, but does your vSphere server need 100G? More than likely the answer is no. With NVMe/TCP growing into an industry-wide standard for storage technologies, though, and with scale-out storage products looking to replace DAS, my expectation is that 100G to the appliance will continue to grow, and for a certain segment of traffic-generating systems, 100G will be the de facto connection point. Also, let's not forget all those shiny new GPUs being announced; they are major bandwidth hogs and will need high-throughput connections as well.
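To put a rough number on the storage side of that claim, here's a minimal back-of-the-envelope sketch in Python. The ~7 GB/s per-drive figure is an assumption for a modern PCIe Gen4 NVMe device doing sequential reads, not something pulled from the article:

```python
# Back-of-the-envelope: how many line-rate NVMe drives does it take to fill a link?
# The per-drive throughput is an assumed round number (~7 GB/s sequential reads
# for a PCIe Gen4 NVMe device), not a measured or vendor-quoted value.

DRIVE_GBPS = 7 * 8  # ~7 GB/s per drive, expressed in Gb/s

for link_gbps in (10, 25, 100, 400):
    drives = link_gbps / DRIVE_GBPS
    print(f"{link_gbps:>3}G link ~= {drives:.1f} drives of sequential reads")
```

Even a single drive at full tilt overruns a 25G link, which is why 100G to a scale-out storage appliance doesn't look like overkill.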
This brings up another point made in the article: the price to move a bit. That level of cost isn't one that comes up in casual conversation, but like power consumption, IOPS/Watt, $/GB, and the other behind-the-scenes calculations that come to the fore when talking to larger-scale infrastructure teams, these little details start to make a big difference in the larger discussion of building modern data center systems. I've had a large number of discussions with customers who are implementing 400G, primarily the groups who leverage CERN data or ride the ESnet/StarLight networks between universities and are looking at 400G and higher (1Tb) connections at global scale. This applies as well to the larger HPC shops who, while still in love with InfiniBand, are probably taking a hard look at possible replacements given its single-vendor status.
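For the price-to-move-a-bit angle, here's a quick sketch of the kind of math those infrastructure teams run. The port prices and power figures are purely hypothetical placeholders, not numbers from the article, Dell'Oro, or any vendor:

```python
# Illustrative "price to move a bit" math; the prices and watts below are
# hypothetical placeholders used only to show the per-Gb/s comparison.
ports = {
    # speed (Gb/s): (assumed price per switch port in $, assumed watts per port)
    100: (500.0, 15.0),
    400: (1500.0, 30.0),
}

for gbps, (price, watts) in ports.items():
    print(f"{gbps}G: ${price / gbps:.2f} per Gb/s, {watts / gbps * 1000:.0f} mW per Gb/s")
```

The specific numbers matter far less than the shape: once $/Gb/s and watts/Gb/s at the faster speed drop below the slower one, the upgrade conversation gets a lot shorter.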
Bottom line: networks are hungry beasts, modern workloads love bandwidth, bandwidth is Skittles, and vendors need to feed the beast.