Even as demand for data infrastructure surges to an all-time high, Equinix is planning to lay off 3% of its workforce, suggesting a growing skills mismatch in the industry. According to Goldman Sachs, datacenter demand in the US alone is projected to nearly triple by 2030, driving more than $1 trillion in investment.
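For scale, the growth rate implied by a "nearly triple by 2030" projection is easy to work out. The quick sketch below assumes a 3x multiple from a 2024 baseline, which is our reading of the claim rather than a figure from the Goldman Sachs report:

```python
# Back-of-the-envelope CAGR implied by "nearly triple by 2030".
# Assumed inputs: a 3x growth multiple over 2024-2030 (6 years).
multiple = 3.0
years = 2030 - 2024

cagr = multiple ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~20.1% per year
```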
As datacenters evolve from traditional compute and storage facilities into AI powerhouses, the demand for qualified professionals continues to grow exponentially, and salaries are high. The rise of AI, in particular, is dramatically reshaping the technology industry, and datacenters are at the epicenter of the changes.
Datacenters this year will face several challenges as the demand for artificial intelligence introduces an evolution in AI hardware, on-premises and cloud-based strategies for training and inference, and innovations in power distribution, all while opposition to new datacenter developments continues to grow.
Massive global demand for AI technology is causing datacenters to increase spending on servers, power, and cooling infrastructure. As a result, datacenter CapEx spending will hit $1.1 trillion, a projection that accounts for recent advances in AI and datacenter efficiency, the analyst says. Much of this growth is due to AI.
The chipmaker has released a series of what it calls Enterprise Reference Architectures (Enterprise RA), which are blueprints to simplify the building of AI-oriented datacenters. Building an AI-oriented datacenter is no easy task, even by datacenter construction standards.
Cisco is boosting network density support for its datacenter switch and router portfolio as it works to deliver the network infrastructure its customers need for cloud architecture, AI workloads and high-performance computing. Cisco’s Nexus 9000 datacenter switches are a core component of the vendor’s enterprise AI offerings.
HPE claims that this approach effectively reduces the required datacenter floor space by 50% and reduces the cooling power necessary per server blade by 37%. Datacenters warm up to liquid cooling: AI, machine learning, and high-performance computing are creating cooling challenges for datacenter owners and operators.
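To make those percentages concrete, here is the arithmetic applied to a hypothetical deployment (the baseline figures are illustrative assumptions, not HPE numbers):

```python
# What HPE's claimed reductions would mean for a hypothetical cluster.
# Baseline values are illustrative assumptions, not HPE figures.
baseline_floor_m2 = 200.0   # hypothetical floor space for a cluster
baseline_cooling_w = 400.0  # hypothetical cooling power per server blade

floor_after = baseline_floor_m2 * (1 - 0.50)     # 50% less floor space
cooling_after = baseline_cooling_w * (1 - 0.37)  # 37% less cooling per blade

print(f"Floor space: {baseline_floor_m2:.0f} -> {floor_after:.0f} m^2")
print(f"Cooling per blade: {baseline_cooling_w:.0f} -> {cooling_after:.0f} W")
```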
The chief architect for Intel's Xeon server processors has defected to chip rival Qualcomm, which is making yet another run at entering the datacenter market. If Intel was hoping for a turnaround in 2025, it will have to wait at least a little bit longer.
Cisco has unwrapped a new family of datacenter switches it says will help customers more securely support large workloads and facilitate AI development across the enterprise. The first major service these DPUs will perform on the switch will be Layer 4 stateful segmentation through Cisco's Hypershield security architecture.
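Cisco hasn't detailed the mechanism here, but the general idea of Layer 4 stateful segmentation can be sketched briefly: permit flows against an explicit L4 policy, then track connection state so return traffic is allowed automatically. The segments, ports, and policy below are hypothetical, not Hypershield's implementation:

```python
# Conceptual sketch of Layer 4 stateful segmentation: permit flows by
# L4 policy, then track connection state so return traffic passes.

# Hypothetical allowlist: (src_segment, dst_segment, protocol, dst_port)
POLICY = {("web", "app", "tcp", 8443), ("app", "db", "tcp", 5432)}

established = set()  # 5-tuples of flows already permitted

def permit(src_seg, dst_seg, proto, src_ip, dst_ip, src_port, dst_port):
    flow = (proto, src_ip, dst_ip, src_port, dst_port)
    reverse = (proto, dst_ip, src_ip, dst_port, src_port)
    if reverse in established:  # return traffic for a tracked flow
        return True
    if (src_seg, dst_seg, proto, dst_port) in POLICY:
        established.add(flow)   # remember the forward flow
        return True
    return False                # default deny

print(permit("web", "app", "tcp", "10.0.0.5", "10.0.1.9", 51514, 8443))  # True
print(permit("app", "web", "tcp", "10.0.1.9", "10.0.0.5", 8443, 51514))  # True (return)
print(permit("web", "db", "tcp", "10.0.0.5", "10.0.2.3", 51515, 5432))   # False
```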
Lightmatter has announced new silicon photonics products that could dramatically speed up AI systems by solving a critical problem: the sluggish connections between AI chips in datacenters. Today's AI chips often sit idle waiting for data to arrive, wasting computing resources and slowing down results.
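The cost of those idle waits is easy to quantify with a rough model. The sketch below assumes compute and communication cannot overlap; all timings are hypothetical:

```python
# Rough model of accelerator utilization when compute and inter-chip
# communication cannot overlap. All timings are hypothetical.
compute_ms = 10.0  # time per step spent on matrix math
comm_ms = 6.0      # time per step waiting on chip-to-chip transfers

utilization = compute_ms / (compute_ms + comm_ms)
print(f"Utilization: {utilization:.0%}")  # 62% -- the rest is idle waiting

# Halving communication time (e.g., a faster optical interconnect)
utilization_fast = compute_ms / (compute_ms + comm_ms / 2)
print(f"With 2x faster links: {utilization_fast:.0%}")  # 77%
```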
The MI325X uses AMD’s CDNA 3 architecture, which the MI300X also uses. CDNA 3 is based on the gaming graphics card RDNA architecture but is expressly designed for use in datacenter applications like generative AI and high-performance computing. The FP6 data type is new and unique to AMD.
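FP6 is a 6-bit floating-point format. Exact bit splits vary (the OCP microscaling spec defines E2M3 and E3M2 variants, for example), but the representable range follows directly from the exponent and mantissa widths. A minimal sketch, assuming no encodings are reserved for infinity or NaN:

```python
# Largest representable normal value for a small float format with
# 1 sign bit, E exponent bits, and M mantissa bits, assuming no
# encodings are reserved for infinity/NaN.
def max_normal(exp_bits: int, man_bits: int) -> float:
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 1) - bias  # largest unbiased exponent
    max_significand = 2 - 2 ** -man_bits  # binary 1.111...
    return max_significand * 2 ** max_exp

print(max_normal(3, 2))  # FP6 E3M2 -> 28.0 (more range, less precision)
print(max_normal(2, 3))  # FP6 E2M3 -> 7.5  (less range, more precision)
```

The narrower the format, the more weights fit in a given amount of memory and bandwidth, which is why inference-oriented accelerators keep pushing below 8 bits.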
Rising costs, AI concerns and staffing challenges are among the top issues facing datacenter leaders in 2024, according to Uptime Institute’s latest survey data. Datacenter teams are working to balance higher rack power density and future capacity needs with capital investments and staffing limitations.
Edgecore Networks is taking the wraps off its latest datacenter networking hardware, the 400G-optimized DCS511 spine switch. Sharma added that hyperscale architecture is typically based on Layer-3 features and BGP. This feature enables long-range, high-speed connections crucial for distributed datacenter architectures.
Considerable amounts of data are collected on the edge. Edge servers do the job of culling the useless data and sending only the necessary data back to datacenters for processing. Liquid cooling gains ground: Liquid cooling is inching its way in from the fringes into the mainstream of datacenter infrastructure.
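The edge-culling pattern described above reduces to applying a cheap predicate locally and forwarding only the records worth central processing. A minimal sketch; the schema, threshold, and send function are illustrative assumptions:

```python
# Minimal edge-side filter: cull readings that carry no signal and
# forward only the interesting ones to the datacenter. The schema,
# threshold, and send function are illustrative assumptions.
ANOMALY_THRESHOLD = 75.0

def send_to_datacenter(record: dict) -> None:
    print("forwarding:", record)  # stand-in for a real network call

def process_edge_batch(readings: list[dict]) -> None:
    for r in readings:
        if r["value"] >= ANOMALY_THRESHOLD:  # keep only actionable data
            send_to_datacenter(r)
        # everything else is dropped (or summarized) locally

process_edge_batch([
    {"sensor": "t1", "value": 21.5},
    {"sensor": "t2", "value": 80.2},  # only this one is forwarded
])
```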
Nokia is offering customers an alternative way to set up datacenter networks by expanding its datacenter fabric package to support the open-source Software for Open Networking in the Cloud (SONiC). The SONiC community has grown significantly and now features some 4,000 contributors and 28 paying members.
Broadcom on Tuesday released VMware Tanzu Data Services, a new “advanced service” for VMware Cloud Foundation (VCF), at VMware Explore Barcelona. VMware Tanzu for MySQL: “The classic web application backend that optimizes transactional data handling for cloud native environments.” Is it comprehensive?
I am happy to announce that my coverage area now includes DataCenter Services, Infrastructure Outsourcing, and Semiconductors. This is in addition to my existing coverage of Technical Debt and Enterprise Architecture.
The AI revolution is driving demand for massive computing power and creating a datacenter shortage, with datacenter operators planning to build more facilities. But it’s time for datacenters and other organizations with large compute needs to consider hardware replacement as another option, some experts say.
The platform also supports HPE's Data Fabric architecture, which aims to supply a unified and consistent data layer that allows data access across on-premises datacenters, public clouds, and edge environments, with the idea of bringing together a single, logical view of data regardless of where it resides, according to HPE.
Nvidia has partnered with hardware infrastructure vendor Vertiv to provide liquid cooling designs for future datacenters designed to be AI factories. AI factories are datacenters specified for AI applications, as opposed to traditional line-of-business applications like databases and ERP.
For starters, generative AI capabilities will improve how enterprise IT teams deploy and manage their SD-WAN architecture. One example is analyzing real-time network telemetry data to improve network performance as well as user and application experiences. AI is set to make its mark on SD-WAN technology.
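As a concrete illustration of what analyzing real-time telemetry can look like, here is a minimal rolling z-score check over link-latency samples; it is a generic sketch, not any vendor's SD-WAN analytics:

```python
# Minimal streaming anomaly check over link-latency telemetry:
# flag samples more than 3 standard deviations from a rolling mean.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=60)  # last 60 samples

def check(latency_ms: float) -> bool:
    anomalous = False
    if len(window) >= 10:  # wait for a minimal baseline
        mu, sigma = mean(window), stdev(window)
        anomalous = sigma > 0 and abs(latency_ms - mu) > 3 * sigma
    window.append(latency_ms)
    return anomalous

samples = [20.1, 19.8, 20.3] * 5 + [95.0]  # a latency spike at the end
print([s for s in samples if check(s)])     # -> [95.0]
```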
Another driver is the fact that individual datacenters themselves are upgrading to 400G Ethernet. The previous capacity of the DE-CIX network was 100G, which means that datacenters running at 400G need to split the signal. Companies are spending money on AI datacenter clusters, which need to be connected to each other.
SONiC remains a cornerstone of Aviz’s strategy. Shukla highlighted the growing demand for SONiC, particularly in datacenters and GPU fabrics, driven by the need for new infrastructure to support the increasing use of GPUs and inferencing networks. The new funding follows a $10 million round announced in December 2023.
Debates surrounding the Energy Efficiency Act, which aims to reduce energy use in public authorities, corporations, and datacenters, will potentially influence future laws globally. Here’s the context: Datacenters use a lot of energy. Why is this important and what does it mean?
The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture. The 7060X6 AI Leaf switch features Broadcom Tomahawk 5 silicon with a capacity of 51.2 Tbps.
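Traffic spraying contrasts with classic per-flow ECMP hashing: instead of pinning an entire flow to one uplink, where a single elephant flow can congest a path, packets are distributed across all links. A toy comparison, not the Jericho3-AI scheduler itself:

```python
# Toy contrast between per-flow ECMP hashing and per-packet spraying
# across 4 uplinks. Illustrative only -- not the Jericho3-AI scheduler.
import itertools

UPLINKS = 4

def ecmp_link(flow_id: str) -> int:
    return hash(flow_id) % UPLINKS      # whole flow pinned to one link

spray = itertools.cycle(range(UPLINKS))  # round-robin per packet

flow = "10.0.0.1:50000->10.0.1.1:80/tcp"
print("ECMP: all 8 packets on link", ecmp_link(flow))
print("Spray:", [next(spray) for _ in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

Per-packet spraying keeps links evenly loaded but reorders packets, which is why it is paired with scheduled fabrics or endpoints that tolerate out-of-order delivery.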
One of the newer technologies gaining ground in datacenters today is the Data Processing Unit (DPU). As VMware has observed, “In simple terms, a DPU is a programmable device with hardware acceleration as well as having an ARM CPU complex capable of processing data.”
Enterprise data storage skills are in demand, and that means storage certifications can be more valuable to organizations looking for people with those qualifications. Here are some of the leading data storage certifications, along with information on cost, duration of the exam, skills acquired, and other details.
The deal, which sees Oracle and Carlyle selling their stakes in Ampere, strengthens SoftBank's position in the growing market for AI-optimized processors as major cloud providers increasingly look beyond traditional x86 architecture.
Todd Pugh, CIO at food products manufacturer SugarCreek, manages a fully virtualized private datacenter. We asked three enterprises to share why they deployed microsegmentation technology in their networks and how it's working. Here are their stories. Distributed firewalls via VMware NSX.
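Microsegmentation of this kind boils down to each host enforcing the same tag-based allowlist locally instead of relying on a perimeter firewall. A stripped-down sketch; the tags and rules are hypothetical examples, not NSX syntax:

```python
# Stripped-down view of a distributed-firewall policy: every host
# evaluates the same tag-based allowlist locally, default-deny.
RULES = [
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
]

WORKLOAD_TAGS = {"vm-web-01": "web-tier", "vm-app-01": "app-tier",
                 "vm-db-01": "db-tier"}

def allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    rule = (WORKLOAD_TAGS[src_vm], WORKLOAD_TAGS[dst_vm], port)
    return rule in RULES  # anything unlisted is blocked

print(allowed("vm-web-01", "vm-app-01", 8443))  # True
print(allowed("vm-web-01", "vm-db-01", 5432))   # False: web can't reach db
```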
AI isn’t just another form of data traffic; it’s also a technology that can improve the operation of networks, another key theme explored at the event. Parantap Lahiri, vice president of network and datacenter engineering at eBay, said that his organization is using AI today for a network monitoring system.
Other features in Nile Nav offer real-time deployment data and visibility, as well as instant feedback during setup and activation, ensuring IT teams can monitor progress and address issues promptly, Kannan stated.
Fortinet is expanding its data loss prevention (DLP) capabilities with the launch of its new AI-powered FortiDLP products. The FortiDLP platform provides automated data movement tracking, cloud application monitoring and endpoint protection mechanisms that work both online and offline.
Cisco and Nvidia have expanded their partnership to create their most advanced AI architecture package to date, designed to promote secure enterprise AI networking. The package offers visibility into who wants or has use of an AI application, and then controls that access to enforce data-loss prevention and mitigate potential threats.
BlueField data processing units (DPUs) are designed to offload and accelerate networking traffic and specific tasks from the CPU, such as security and storage. Morpheus is a GPU-accelerated data processing framework optimized for cybersecurity, using deep learning and machine learning models to detect and mitigate cyber threats in real time.
To keep up, IT must be able to rapidly design and deliver application architectures that not only meet the business needs of the company but also meet data recovery and compliance mandates. Moving applications between datacenter, edge, and cloud environments is no simple task.
Later, as an enterprise architect in consumer-packaged goods, I could no longer realistically contemplate a world where IT could execute mass application portfolio migrations from datacenters to cloud and SaaS-based applications and survive the cost, risk and time-to-market implications.
The combination aims to conquer the enormous compute market that has long been dominated by the x86 architecture (and thus, Intel and AMD). The x86 platform remains the leader in PCs and datacenters, but the future growth prospects lie in […].
The base architectural design of Blackwell Ultra is similar to Nvidia's Blackwell, but it provides incremental performance improvements with increased memory capacity and AI tweaks in the silicon, said Anshel Sag, vice president and principal analyst at Moor Insights & Strategy. Nvidia's GPUs support data types ranging from FP4 to FP64.
The IBM Storage Scale platform will support CAS and now will respond to queries using the extracted and augmented data, speeding up the communications between GPUs and storage using Nvidia BlueField-3 DPUs and Spectrum-X networking, IBM stated. The multimodal document data extraction workflow will also support Nvidia NeMo Retriever microservices.
Supermicro announced the launch of a new storage system optimized for AI workloads using multiple Nvidia BlueField-3 data processing units (DPU) combined with an all-flash array. The new Just a Bunch of Flash (JBOF) system features a 2U rack that can house up to four BlueField-3 DPUs.
We have invested in the areas of security and private 5G with two recent acquisitions that expand our edge-to-cloud portfolio to meet the needs of organizations as they increasingly migrate from traditional centralized datacenters to distributed “centers of data.”
The research, released this week, analyzes the networking challenges, IT and business priorities, architectural maturity, and investment strategies of 2,052 IT professionals across 10 global industries. Network architectures are more sophisticated, more complex, and spread across more multi-clouds and multi-vendors than ever.