
High-bandwidth memory nearly sold out until 2026

Network World

HBM is used in GPUs to provide extremely fast memory access, much faster than standard DRAM. Demand for HBM now exceeds supply by at least a year, and orders placed today won't be filled until 2026. Bottom line: expect a new supply-chain headache, with HBM effectively unavailable until at least 2026.


Is SD-WAN still relevant in today’s technology landscape?

Network World

Software-defined wide area networking (SD-WAN) emerged in 2014 as a way to help organizations embrace the cloud and quickly became a hot commodity. Gartner predicts that 70% of enterprises will have implemented SD-WANs by 2026, up from around 45% in 2021. As a result, SD-WAN deployments must also grow to keep business humming.


Equinix to cut 3% of staff amidst the greatest demand for data center infrastructure ever

Network World

The news comes amidst reports that the company also plans to close Metal, its infrastructure-as-a-service (IaaS) division, in 2026, and in the wake of the appointment of new CEO Adaire Fox-Martin and a C-suite shakeup that saw the departures of CIO Milind Wagle and CISO Michael Montoya.


AI, automation spur efforts to upskill network pros

Network World

From AI and network automation to cloud computing and security, the critical networking skills needed to excel in 2025 are shifting. At the same time, network pros need to expand their cloud and security skill sets to accommodate more complex tools and technologies. Yet network automation lags behind other automation initiatives.


AI driving a 165% rise in data center power demand by 2030

Network World

On the demand side for data centers, large hyperscale cloud providers and other corporations are building increasingly bigger large language models (LLMs) that must be trained on massive compute clusters.


IBM expands Nvidia GPU options for cloud customers

Network World

IBM is offering expanded access to Nvidia GPUs on IBM Cloud to help enterprise customers advance their AI implementations, including large language model (LLM) training. IBM Cloud users can now access Nvidia H100 Tensor Core GPU instances in virtual private cloud and managed Red Hat OpenShift environments.


Oracle to offer 131,072 Nvidia Blackwell GPUs via its cloud

Network World

Oracle has started taking pre-orders for 131,072 Nvidia Blackwell GPUs in the cloud via its Oracle Cloud Infrastructure (OCI) Supercluster to aid large language model (LLM) training and other use cases, the company announced at the CloudWorld 2024 conference.
