Nvidia has been talking about AI factories for some time, and now it’s coming out with some reference designs to help build them. The chipmaker has released a series of what it calls Enterprise Reference Architectures (Enterprise RA), which are blueprints to simplify the building of AI-oriented data centers.
Lightmatter's solution includes two products: the Passage L200 co-packaged optics (CPO) and the Passage M1000 reference platform. Customers can expect the M1000 reference platform in the summer of 2025, allowing them to develop custom GPU interconnects. The L200, coming in 2026, will be available in 32 Tbps and 64 Tbps versions.
AI factories are specialized data centers that emphasize AI applications rather than traditional line-of-business applications such as databases and ERP. Nvidia has partnered with hardware infrastructure vendor Vertiv to provide liquid cooling designs for future data centers designed to be AI factories.
Project Salus is a responsible AI toolkit, while Essedum is an AI framework for networking applications. Top AI applications: network automation leads at 57%, followed by security at 50% and predictive maintenance at 41%. LF Networking also announced the CAMARA Spring25 Meta-Release, advancing the open-source telecom-focused platform.
With OpenShift 4.18, Red Hat is integrating a series of enhanced networking capabilities, virtualization features, and improved security mechanisms for container and VM environments. In particular, OpenShift 4.18 integrates what Red Hat refers to as VM-friendly networking.
The term service mesh has been widely used over the last several years to refer to technology that helps to manage communications across microservices and applications. That's a clear use case for Linkerd, which does all that, makes it all secure, and then decouples it from the application.
“Our vision is to be the platform of choice for running AI applications,” says Puri. The system integrator has the Topaz AI platform, which includes a set of services and solutions to help enterprises build and deploy AI applications. The updated product also has enhanced security features, including LLM guardrails.
The Ethernet Alliance's roadmap references the consortium's 2024 Technology Exploration Forum (TEF), which highlighted the critical need for collaboration across the Ethernet ecosystem: industry experts emphasized the importance of uniting different sectors to tackle the engineering challenges posed by the rapid advancement of AI.
Intel has introduced a reference design it says can enable accelerator cards for security workloads including secure access service edge (SASE), IPsec, and SSL/TLS.
Specialization: Some benchmarks, such as MultiMedQA, focus on specific application areas to evaluate the suitability of a model in sensitive or highly complex contexts. Benchmarks define the challenges that a model has to overcome, and the better they simulate real-world applications, the more useful and meaningful the results are.
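At its core, benchmark scoring of this kind reduces to running a model over a fixed set of items and comparing its answers to references. The sketch below is purely illustrative: the toy questions and the `toy_model` function are invented stand-ins, not MultiMedQA's actual format or any real model.

```python
# Hypothetical sketch of benchmark scoring: fraction of items answered correctly.
# The benchmark items and model below are invented for illustration only.

def score_model(answer_fn, benchmark):
    """Return the fraction of (question, expected) pairs the model gets right."""
    correct = sum(1 for question, expected in benchmark if answer_fn(question) == expected)
    return correct / len(benchmark)

toy_benchmark = [
    ("What is the capital of France?", "Paris"),
    ("2 + 2", "4"),
    ("Largest planet in the solar system?", "Jupiter"),
]

def toy_model(question):
    # A lookup table standing in for a real LLM.
    lookup = {"What is the capital of France?": "Paris", "2 + 2": "4"}
    return lookup.get(question, "unknown")

print(score_model(toy_model, toy_benchmark))  # 2 of 3 correct
```

Real benchmarks add complications this sketch omits, such as free-text answer matching and per-domain breakdowns.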
Keysight can now filter network traffic to detect the presence of AI-based applications. “We're just making sure we can find traffic of interest by developing the application signatures that find those applications,” Taran Singh, vice president of product and strategy at Keysight, told Network World.
The SAP Business Technology Platform offers in-memory processing, agile services for data integration and application extension, as well as embedded analytics and intelligent technologies. This is referred to as a hybrid approach. HEC thus corresponds to the private cloud model. This allows for maximum flexibility.
“When I joined VMware, we only had a hypervisor – referring to a single software [instance] that can be used to run multiple virtual machines on a physical one – we didn’t have storage or networking. We have gone from choosing an operating system to being able to run any application anywhere and on any cloud by virtualizing storage.”
This type of interoperability is increasingly essential as organizations adopt agentic AI and other advanced applications that require AI model integration, IBM stated. The idea is to let customers responsibly scale and enhance AI applications like retrieval-augmented generation (RAG) and AI reasoning, IBM stated.
Legacy platforms, meaning IT applications and platforms that businesses implemented decades ago and which still power production workloads, are what you might call the third rail of IT estates. Compatibility issues: Migrating to a newer platform could break compatibility between legacy technologies and other applications or services.
AGNTCY plans to define specifications and reference implementations for an architecture built on open-source code that tackles the requirements for sourcing, creating, scaling, and optimizing agentic workflows.
Publishing job ads enables companies to collect applications and information about potential candidates to have a pool on hand to quickly respond to future employment needs. Companies can therefore publish ads en masse, regardless of their actual recruitment needs.
“You cannot just rely on the firewall on the outside; you have to assume that any application or any user inside your data center is a bad actor,” said Manuvir Das, head of enterprise computing at Nvidia. “Zero Trust basically just refers to the fact that you can't trust any application or user because there are bad actors.”
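The zero-trust idea Das describes can be reduced to a deny-by-default check applied to every request, even between "internal" services. This is a minimal sketch under invented names (the services, resources, and policy set are hypothetical), not Nvidia's implementation:

```python
# Minimal zero-trust sketch: no request is trusted implicitly, even inside the
# data center. Every call must present a valid credential AND match an explicit
# allow rule. Service and resource names are invented for illustration.

ALLOWED = {
    ("billing-service", "payments-db", "read"),
    ("billing-service", "payments-db", "write"),
    ("reporting-service", "payments-db", "read"),
}

def authorize(identity: str, resource: str, action: str, token_valid: bool) -> bool:
    """Deny by default: pass only with a valid token and an explicit policy entry."""
    if not token_valid:  # being "inside the perimeter" earns no trust
        return False
    return (identity, resource, action) in ALLOWED

print(authorize("reporting-service", "payments-db", "read", token_valid=True))   # True
print(authorize("reporting-service", "payments-db", "write", token_valid=True))  # False
```

Production systems layer mutual TLS, short-lived credentials, and behavioral signals on top of this basic shape.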
In these use cases, we have enough reference implementations to point to and say, ‘There's value to be had here.’ We've seen so many reference implementations, and we've done so many reference implementations, that we're going to see massive adoption. Now, it will evolve again, says Malhotra. Agents are the next phase, he says.
For example, Whisper correctly transcribed a speaker’s reference to “two other girls and one lady” but added “which were Black,” despite no such racial context in the original conversation. Whisper is not the only AI model that generates such errors.
“Deepak Jain, 49, of Potomac, was the CEO of an information technology services company (referred to in the indictment as Company A) that provided data center services to customers, including the SEC,” the US DOJ said in a statement. From 2012 through 2018, the SEC paid Company A approximately $10.7
Unfortunately, despite hard-earned lessons around what works and what doesn’t, pressure-tested reference architectures for gen AI — what IT executives want most — remain few and far between, she said during the “What’s Next for GenAI in Business” panel at last week’s Big.AI@MIT. Finding talent is “a challenge that I am also facing,” Guan said.
According to IT decision-makers surveyed, the service management areas where organizations are least effective are integrating IT silos with systems and applications (cited by only 8% as very effective) and using AI to improve the delivery of ITSM (7% citing it as very effective).
We’ve all heard about how difficult the job market is on the applicant side, with candidates getting very little response from prospective employers. But the hiring side isn’t much easier. Biswas says he has a referral program, and Roberge estimates that around 70% of his hiring is now through referrals.
“Information relating to the financial conditions of the termination of functions of Peter Herweck and appointment of Olivier Blum will be made public according to the applicable regulation and to the recommendations of the corporate governance code AFEP-MEDEF to which Schneider Electric is referring,” the statement added.
It integrates Cisco's Hypershield and AI Defense packages to help protect the development, deployment, and use of AI models and applications, according to Jeetu Patel, Cisco's executive vice president and chief product officer. Hypershield uses AI to dynamically refine security policies based on application identity and behavior.
Although organizations have embraced microservices-based applications, IT leaders continue to grapple with the need to unify and gain efficiencies in their infrastructure and operations across both traditional and modern application architectures.
Now that we have covered AI agents, we can see that agentic AI refers to the concept of AI systems being capable of independent action and goal achievement, while AI agents are the individual components within this system that perform each specific task. Microsoft is describing AI agents as the new applications for an AI-powered world.
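The distinction described above can be made concrete in a few lines: the agentic system owns the goal and the plan, while individual agents each handle one kind of task. All names below are invented for illustration; real agent frameworks replace these stubs with LLM-backed components.

```python
# Illustrative sketch of the agentic-AI / AI-agent distinction: the system
# decomposes a goal into steps and routes each step to a specialized agent.
# Agent functions here are trivial stubs, not real LLM calls.

def research_agent(task: str) -> str:
    return f"notes on {task}"

def summary_agent(task: str) -> str:
    return f"summary of {task}"

AGENTS = {"research": research_agent, "summarize": summary_agent}

def agentic_system(goal: str) -> list[str]:
    """The system plans (a fixed two-step plan here) and delegates to agents."""
    plan = [("research", goal), ("summarize", goal)]
    return [AGENTS[step](task) for step, task in plan]

print(agentic_system("market trends"))
# ['notes on market trends', 'summary of market trends']
```

The key design point: independence lives in the orchestrating system (it decides what to do next), while each agent stays a narrow, replaceable component.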
Such teams are sometimes referred to as value creation teams and consist of members from different national subsidiaries, departments and disciplines. This refers to an open and iterative way of working that prioritizes continuous improvement and collaboration.
Some have referred to MCP as the USB-C port for AI applications. Similarly, email became a killer application for driving internet adoption through the open SMTP standard.
Later, as an enterprise architect in consumer-packaged goods, I could no longer realistically contemplate a world where IT could execute mass application portfolio migrations from data centers to cloud and SaaS-based applications and survive the cost, risk and time-to-market implications.
Impact on AI chip development The latest restrictions target AI chip manufacturing, reflecting concerns over the potential military applications of the technology in China. The measures seek to limit Beijing’s access to advanced memory and chipmaking tools, further tightening control over critical semiconductor technologies.
It prevents vendor lock-in, provides leverage for strong negotiation, enables business flexibility in strategy execution when complicated architectures or regional security and legal-compliance constraints arise, and promotes portability from an application-architecture perspective.
AI networking refers to the application of artificial intelligence (AI) technologies to network management and optimization. It’s particularly well-suited for applications that require rapid data transfer, such as scientific computing, financial modeling and video rendering.
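One narrow, concrete instance of applying analytics to network management is flagging anomalous link latency against a statistical baseline. This is a hedged sketch only: commercial AI networking products use far richer models, and the sample data below is invented.

```python
# Simple anomaly detection on link latency using a z-score baseline, as a toy
# stand-in for the AI-driven network optimization described above.
import statistics

def anomalies(samples_ms: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of samples more than `threshold` std devs from the mean."""
    mean = statistics.fmean(samples_ms)
    stdev = statistics.pstdev(samples_ms)
    if stdev == 0:
        return []  # perfectly flat series has no outliers
    return [i for i, s in enumerate(samples_ms) if abs(s - mean) / stdev > threshold]

latencies = [10.1, 9.8, 10.3, 10.0, 9.9, 55.0, 10.2]  # one obvious spike
print(anomalies(latencies, threshold=2.0))  # flags the spike at index 5
```

In practice, a system like this would feed flagged indices into an alerting or auto-remediation pipeline rather than printing them.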
Its newly appointed CEO, Romain Fouache, is bringing Australian retailers a collection of cloud-based technologies, including Product Information Management (PIM), Syndication, and Supplier Data Manager capabilities to rapidly scale the depth and maturity of their AI applications.
Regarding specific technologies he's focusing on, he references RPA, AI pilots to get the most out of it in an industrial area, and all the tools to future-plan and manage data. And although it can always be improved, our job is to optimize it as much as possible by looking for return applications.
In June 2023, Gartner researchers said data and analytics leaders must leverage the power of LLMs with the robustness of knowledge graphs for fault-tolerant AI applications. For example, if an LLM is asked to provide information about a company's product, manuals for that product and other reference materials would be extremely helpful.
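The grounding idea in that example, retrieve the most relevant reference passage and hand it to the model as context, can be sketched with a keyword-overlap retriever. This is an assumption-laden toy: word overlap stands in for real vector or graph search, and the manual passages are invented.

```python
# Minimal retrieval sketch for grounding an LLM: pick the reference passage
# with the most word overlap with the question, then build a constrained prompt.
# Keyword overlap is a toy stand-in for vector/knowledge-graph retrieval.
import re

def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q = words(question)
    return max(passages, key=lambda p: len(q & words(p)))

manual_passages = [
    "To reset the router, hold the reset button for ten seconds.",
    "The warranty covers hardware defects for two years.",
    "Firmware updates are installed from the admin web interface.",
]

context = retrieve("How do I reset the router?", manual_passages)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do I reset the router?"
print(context)  # the reset instructions, the best-overlapping passage
```

A production RAG pipeline would swap `retrieve` for embedding search (or a knowledge-graph query, per the Gartner framing) and send `prompt` to the LLM.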
Cisco, HPE, Dell and others are looking to use Nvidia’s new AI microservice application-development blueprints to help enterprises streamline the deployment of generative AI applications. More applications are expected in the future. Developers can gain a head start on creating their own applications using NIM Agent Blueprints.
“This is an ever-growing catalog of reference applications built for common use cases that encode the best practices from NVIDIA’s experiences with early adopters,” he added. “You can think of these applications as a database wrapped in a web UI connecting multiple teams through a business process,” he said.
NetBox Labs this week made available a new product that will help network teams detect and remediate configuration drift across sophisticated network environments before costly service disruptions or unplanned downtime affect services or applications.
Secure Access Service Edge (SASE) is a network architecture that combines software-defined wide area networking (SD-WAN) and security functionality into a unified cloud service that promises simplified WAN deployments, improved efficiency and security, and application-specific bandwidth policies.
TIAA has launched a generative AI implementation, internally referred to as “Research Buddy,” that pulls together relevant facts and insights from publicly available documents for Nuveen, TIAA’s asset management arm, on an as-needed basis. When the research analysts want the research, that’s when the AI gets activated.
At its core, real-time data refers to data made available immediately (or almost immediately) to support operational and analytical workloads. I often get client inquiries about latency requirements to support real-time analytics and operational workloads. This data may […]
Ultra microservices are for multi-GPU servers and data-center-scale applications. Nano microservices are optimized for deployment on PCs and edge devices. Super microservices are for high throughput on a single GPU. Nvidia's partners are also getting in on the action, extending reasoning to the Llama ecosystem.