Lightmatter has announced new silicon photonics products that could dramatically speed up AI systems by solving a critical problem: the sluggish connections between AI chips in datacenters. Today's AI chips often sit idle waiting for data to arrive, wasting computing resources and slowing down results. Lightmatter is valued at $4.4 billion.
"While the IEEE P802.3dj project is working toward defining 200G per lane for Ethernet by late 2026, the industry is (loudly) asking for 400G per lane yesterday, if not sooner," Jones wrote in a recent Ethernet Alliance blog. Jones works for a subsidiary of Huawei and is the chair of the IEEE P802.3dj 200Gb/sec, 400Gb/sec, 800Gb/sec and 1.6Tb/sec Task Force.
CDNA 3 is based on the gaming graphics card RDNA architecture but is expressly designed for use in datacenter applications like generative AI and high-performance computing. And in 2026, the AMD Instinct MI400 series will arrive, based on the AMD CDNA "Next" architecture.
The rapid expansion of AI and generative AI (GenAI) workloads could see 40% of datacenters constrained by power shortages by 2027, according to Gartner. An AI hyperscale datacenter can consume as much as 100 MW of power.
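To put that 100 MW figure in perspective, here is a quick back-of-envelope sketch (a simplification that assumes continuous draw, which real facilities won't sustain):

```python
# Back-of-envelope annual energy for a 100 MW hyperscale AI datacenter,
# assuming constant draw around the clock (real load varies).
power_mw = 100
hours_per_year = 24 * 365
annual_gwh = power_mw * hours_per_year / 1000  # MWh -> GWh
print(f"~{annual_gwh:.0f} GWh per year")  # ~876 GWh
```

That is on the order of the annual electricity use of a small city, which is why the power-shortage projection matters.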
Datacenter power constraints and burgeoning AI workloads have companies scrambling to find new sources of electricity. That's why, in an effort to find new energy sources, and in the face of the push to make it clean energy, datacenter owners are turning to nuclear power.
Spending growth continues this year, with datacenter spending increasing by nearly 35% in 2024 in anticipation of generative AI infrastructure needs. Datacenter spending will increase again by 15.5% in 2025, but software spending, four times larger than the datacenter segment, will grow by 14% next year, to $1.24 trillion.
By 2026, 30% of enterprises will automate more than half of their network activities, according to Gartner. The application of automation across infrastructure and operations will deliver significant gains. Read more about network automation: DIY or commercial network automation?
Datacenters are hot, in more ways than one. In fact, according to the International Energy Agency, by 2026 the AI industry is expected to have grown to consume at least ten times the electricity it demanded in 2023. I've seen figures of up to 40% of a datacenter's total power consumption going to cooling.
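That 40% cooling figure implies a notably high power usage effectiveness (PUE). A rough sketch, under the simplifying assumption that everything that isn't cooling is IT load:

```python
# If cooling takes 40% of total facility power and (simplifying) the rest
# is IT load, the implied power usage effectiveness is PUE = total / IT.
cooling_share = 0.40
implied_pue = 1 / (1 - cooling_share)
print(f"implied PUE ~ {implied_pue:.2f}")  # ~1.67
```

For comparison, well-optimized hyperscale facilities commonly report PUE values much closer to 1.1, which is why cooling efficiency is such a focus.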
"The reality of what can be accomplished with current GenAI models, and the state of CIOs' data, will not meet today's lofty expectations," according to Lovelock. "GenAI will easily eclipse the effects that cloud and outsourcing vendors had on previous years regarding datacenter systems."
billion in 2026, though the top use case for the next couple of years will remain research and development in quantum computing. This means developers have built applications that show an advantage over a classical approach, though not necessarily ones that are fully rolled out and commercially viable at scale.
This next-generation processor, dubbed Aurora, is not due until 2026. Aurora offers powerful AI compute capabilities for workloads like RAG and vector databases, but Wittich said it will support all types of enterprise applications, not just cloud. But don't be planning to place an order just yet.
As large enterprise and hyperscaler networks process increasingly greater AI workloads and other applications that require high-bandwidth performance, the demand for optical connectivity technologies is growing as well. Apollo is believed to be the first large-scale deployment of optical circuit switching (OCS) for datacenter networking.
Secure Access Service Edge (SASE) is a network architecture that combines software-defined wide area networking (SD-WAN) and security functionality into a unified cloud service that promises simplified WAN deployments, improved efficiency and security, and application-specific bandwidth policies. billion by 2025.
Although it can be complex, the right HPC implementation provides your enterprise the computing capabilities necessary for high-intensity applications in many industries, especially those taking advantage of AI. HPC as a service: Another key trend is the emergence of HPC as a fully managed service inside enterprise datacenters.
“We started Lumen with the mission of launching a constellation of orbital datacenters for in-space edge processing,” Oltean explained in an email. “Essentially, other satellites will send our constellation the raw data they collect. What about customers?
In fact, in a recent IDC study, 60% of CIOs stated they are already planning to modify their operating model to manage value, agility, and risk by 2026. It can automate tasks such as deploying a new distributed application for users in the home and office. When you have improved end-to-end visibility, you can react more quickly.
It's following in the footsteps of IBM and Microsoft, which, like the German telco, have an edge over regular companies contemplating a similar move to Rise: they have their own clouds in which to host the applications and their own IT services divisions to make the move.
Oracle is adding a new managed offering to its Cloud@Customer platform that will allow enterprises to run applications on proprietary optimized infrastructure in their own datacenters to address data residency and security regulations and solve low-latency requirements.
Traditional enterprise wide area networks, or WANs, were designed primarily to connect remote branch offices directly to the datacenter. They rely on centralized security enforced by backhauling traffic through the corporate datacenter, which impairs application performance and makes them expensive and inefficient.
New York-Presbyterian will also invest in zero trust this year, adding a security operations center (SOC) for 24/7 network monitoring as well, Fleischut says. Cold: on-prem infrastructure. As they did in 2022, many IT leaders are reducing investments in datacenters and on-prem technologies.
The country's strategic pivot to digital transformation, being the first in the region to adopt a Cloud-First policy, attracted AWS to set up in-country datacenters, further accelerating the adoption of cloud services, which is expected to contribute $1.2 So why did AWS select Bahrain? In Bahrain, the figure is over 40%.
Chatbots are just one application of natural language processing (NLP), a type of artificial intelligence (AI) that is already having a major impact in financial services, among other industries. billion of global investments in AI by 2026, according to Markets and Markets. The same study estimated that chatbots would lead to $1.3
An EU-funded project is set to change the course of digital storage by launching datacenters into space, aiming to reduce Earth-bound energy consumption and enhance data sovereignty. Datacenters, critical for digital progress, consume substantial electricity and water to operate and cool their servers.
The energy needed to support data storage is expected to double by 2026. Included for the first time were projections for electricity consumption associated with datacenters, cryptocurrency, and artificial intelligence. Servers at datacenters also heat up.
The company plans to roll out two new networking chips, designed for server switches, later this year and in 2026. This move indicates a strategic, focused application, acknowledging that reliability needs might differ from those of their top-tier GPUs. These chips will leverage co-packaged optics to deliver a hefty 3.5
This impressive increase is indicative of the rising demand for AI chips in datacenter applications, as companies seek to enhance their model training and inference capabilities. and $9 for 2025, 2026, and 2027, respectively. Broadcom's AI revenue alone skyrocketed 220%, totaling $12.2
With the ability to instantaneously ingest reams of data using large language models (LLMs), generative AI technologies such as OpenAI’s ChatGPT and Google’s Bard can produce reports, contracts, and application code far surpassing earlier technologies in speed, accuracy, and thoroughness. Employees are going to use this.
Everything points to people, machines and systems massively increasing their use of networked applications and services. Ericsson reports that global mobile traffic reached 49 exabytes (EB) per month at the end of 2020 and forecasts it will increase nearly fivefold to reach 237 EB per month in 2026. In 2018, datacenters accounted for.
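A quick sanity check on the "nearly fivefold" figure, using the two traffic numbers cited above:

```python
# Mobile traffic growth: 49 EB/month (end of 2020) vs. a forecast
# 237 EB/month (2026), per the figures quoted above.
eb_2020, eb_2026 = 49, 237
growth = eb_2026 / eb_2020
print(f"{growth:.1f}x growth")  # ~4.8x, i.e. "nearly fivefold"
```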
Nvidia, Microsoft, Salesforce, Meta and Amazon are key players expected to benefit significantly from advancements in AI infrastructure and applications. Microsoft is also expanding its datacenter network across the globe to meet growing demand. Projections indicate its revenue will more than double in fiscal year 2025.
The chips are expected to be manufactured by TSMC, the world’s largest semiconductor foundry, starting in 2026. While demand for training chips is currently higher, industry experts anticipate that the need for inference chips will surpass training chips as more AI applications reach the deployment stage.
The datacenter segment, now accounting for 91% of total sales, generated $35.6 billion. Analyst Vivek Arya emphasized Nvidia's leadership in AI compute and inference applications, citing Blackwell's impressive $11 billion in sales, significantly higher than the anticipated $4 to $7 billion range in fiscal 2026.
Limp said Amazon was on track to launch half of the satellites for the Kuiper constellation by mid-2026, using up to 77 medium- to heavy-lift rockets it's reserved at ULA as well as at Arianespace and Blue Origin. Amazon expects to produce its terminals for less than $400 each; Starlink's terminals currently sell for $599.
Nvidia reports $35.1 billion revenue surge on AI demand. Nvidia's datacenter business significantly contributed to the company's success, generating $30.8 billion. Significant investments from companies such as Microsoft, Oracle, and OpenAI in Nvidia's technology underscore the burgeoning market for AI applications.
NVIDIA's Computex 2024 announcements: NVIDIA, now renowned for its AI datacenter systems, introduced innovative tools and software models ahead of the Computex 2024 trade show in Taiwan. CEO Jensen Huang announced the forthcoming Blackwell Ultra chip for 2025 and introduced the next-gen Rubin platform, slated for release in 2026.
The reason I subscribe to this notion is the need to balance performance, availability, capacity, and energy against a given power, cooling, floor-space and environmental footprint, along with price, to meet different tiers of application and data quality-of-service and service-level needs. What say you? Ok, nuff said.
Tightened State and Local Government Regulations: Local & State Governments may introduce stricter compliance requirements for AI vendors, particularly those enhancing existing applications or offering new AI-based solutions. These are not issues that traditional application testing alone can address.
Schneider Electric has introduced AI-ready datacenter designs aimed at addressing the energy demands and sustainability challenges brought by artificial intelligence (AI). Datacenters are the backbone of this AI infrastructure and are the critical enabler of efficiency and decarbonization.
Nvidia teams up with Cisco: on February 6, Nvidia announced a collaboration with Cisco to deliver AI infrastructure solutions for datacenters. Datacenter revenue spikes: Nvidia started the year with a bang, releasing financials for its fiscal year 2024, which ended January 28, 2024. Datacenter revenue reached $26.3 billion.
Microsoft is betting big on AI, investing $3.3 billion in a new AI datacenter in Wisconsin as part of a growing wave of investment in the technology. The datacenter is set to come online by 2026. Such transformation needs more than just datacenters; much of this investment focuses on the people.