Dell, Deloitte, NVIDIA Roll Out New AI Factory Infrastructure
New AI factories from Dell, Deloitte, NVIDIA, and others provide the compute, storage, and networking infrastructure to operationalize AI.
November 13, 2024
As demand for AI workloads increases, companies have been ramping up their offerings in AI infrastructure. NVIDIA CEO Jensen Huang has said a new type of data center, called an AI factory, is emerging. These AI factories combine the compute, storage, and networking infrastructure that lets companies operationalize AI.
Huang originated the AI factory term because he wanted the company to be known not just for GPUs but as a “factory for AI,” according to Rohit Tandon, managing director of AI and Insights, as well as practice leader at Deloitte.
Huang referred to AI factories during a keynote event as far back as NVIDIA’s GTC 2022 event. “AI data centers process mountains of continuous data to train and refine AI models,” Huang said at the time. “Raw data comes in, is refined, and intelligence goes out — companies are manufacturing intelligence and operating giant AI factories.”
Tandon compares an AI factory to the engine of a race car, with the tires and the chassis equivalent to the software and dashboards that keep the engine running at top capacity.
“The factory basically [means] everything that you need to process, run your data, your AI workloads in an environment are available in, like a box, if you want to call it, or a factory,” Tandon says.
At the 2024 Dell Technologies World conference in Las Vegas, Michael Dell noted how early factories used a water wheel or wind power to drive their machinery, with workers hooking everything up to a central wheel, writes Patrick Moorhead, founder and CEO of Moor Insights & Strategy.
"When electricity came along, factories just used electricity to spin a big wheel, like wind and water had done before. But eventually, people figured out that they could run electricity directly to the machines producing things," Moorhead writes. "Dell said he wants people using AI to skip the "wheel" part of development and go straight to the direct-to-production part."
Vendors charge a subscription fee for AI Factory as a Service, covering compute and data usage as needed, with a consulting fee built in, Tandon says.
New Dell Servers Fortify AI Infrastructure
Last month, Dell Technologies expanded its AI Factory for AMD environments and introduced the new PowerEdge XE7745 server, which supports AI inferencing and model tuning. The PowerEdge R6725a and R7725 will also allow organizations to carry out robust data analytics and AI workloads and scale their configurations.
Meanwhile, also as part of its AI Factory, on Oct. 15, Dell rolled out its PowerEdge XE9712 platform, which incorporates NVIDIA Grace CPUs and Blackwell GPUs. Its high performance suits LLM training and real-time inferencing in AI deployments. The new infrastructure is designed to provide a flexible way to meet growing AI workload demands.
“Today’s data centers can’t keep up with the demands of AI, requiring high density compute and liquid cooling innovations with modular, flexible and efficient designs,” Arthur Lewis, president of the Infrastructure Solutions Group at Dell Technologies, said in a statement. “These new systems deliver the performance needed for organizations to remain competitive in the fast-evolving AI landscape.”
Meanwhile, Hewlett Packard Enterprise (HPE) also offers an NVIDIA-powered AI factory called Private Cloud AI. It provides access to NVIDIA’s AI microservices.
Deloitte, NVIDIA Introduce AI Factory as a Service
In September, Deloitte rolled out a turnkey generative AI solution called AI Factory as a Service.
Tandon designed the service, along with Nitin Mittal, U.S. artificial intelligence strategic growth offering leader at Deloitte US.
As part of its AI Factory as a Service, Deloitte manages the infrastructure for AI, which includes NVIDIA hardware and Oracle software. Deloitte’s managed services help organizations deploy AI infrastructure and offer orchestration services and a consulting layer, Tandon says.
Deloitte will be working with OEMs such as Dell and HPE to manage AI Factory as a Service offerings.
The Deloitte service incorporates NVIDIA’s AI Enterprise and NIM Agent Blueprints as well as Oracle’s AI technology.
AI Factory as a Service will allow Deloitte to integrate its data science and model design with the NVIDIA AI platform. Deloitte will also incorporate data and model governance under its Trustworthy AI framework.
Deloitte’s AI Factory as a Service lets businesses use NVIDIA’s NIM Agent Blueprints to power their AI workflows as well as NVIDIA’s NIM microservices and NeMo framework to accelerate generative AI applications.
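For a sense of what building on those microservices looks like in practice, here is a minimal sketch of querying a self-hosted NIM microservice through its OpenAI-compatible endpoint. The URL, API key, and model name below are illustrative placeholders, not details from Deloitte's or NVIDIA's offerings.

```python
# Minimal sketch: querying a self-hosted NVIDIA NIM microservice through its
# OpenAI-compatible API. Assumes a NIM container is already running and
# reachable at the placeholder URL below; the model name is illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder NIM endpoint
    api_key="not-used",                   # local NIM deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model; substitute whichever NIM is deployed
    messages=[{"role": "user", "content": "Summarize this quarter's support tickets."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```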
Oracle’s contribution to the Deloitte AI Factory as a Service comprises a full-stack solution of IaaS, PaaS, and database services. By offering multiple services as part of the Deloitte AI Factory as a Service, Oracle gives customers the flexibility to develop custom applications or use existing software.
With AI Factory as a Service, Deloitte aims to make AI infrastructure more cost-effective and plug-and-play, as well as provide the technical resources to help organizations solve problems and scale with AI, according to Tandon.
Today, GPU access could take three months to set up, but with AI Factory as a Service, it could take four weeks, Tandon says.
Deloitte can monitor AI workloads and schedule them at the right time, ensuring companies set up their environments with the right kind of GPU access for varying use cases and workload requirements, Tandon explains.
"This is a flexible model where we can go up and down in terms of usage and allow our clients to be more cost-effective and deliver those AI solutions," Tandon says.
Organizations can deploy AI cloud on-premises, in a colocation setup, or in a private cloud, he adds.
Deloitte also offers "AI architects" as part of the AI factory.
"Architecting these environments is one of the biggest things because you’ve got to figure out how much compute will you need, how much data, how much networking, and then all of this needs to be put together with the right kind of guardrails, and that's what the AI architects do,” Tandon says.
NVIDIA Offers a Backbone in AI Factories
In June, at the Computex conference in Taipei, Taiwan, NVIDIA announced that it would work with companies such as Asus, Inventec, and Supermicro to develop the infrastructure to build AI factories as part of what Huang called the “next industrial revolution.”
NVIDIA’s Blackwell platform pairs Blackwell GPUs with Grace CPUs and NVIDIA networking, giving organizations the building blocks for AI factories.
Last month, NVIDIA introduced Enterprise Reference Architectures (Enterprise RAs), which are blueprints to help organizations build high-performance, scalable, and secure AI factories to handle manufacturing intelligence.
“NVIDIA Enterprise RAs help organizations avoid pitfalls when designing AI factories by providing full-stack hardware and software recommendations and detailed guidance on optimal server, cluster, and network configurations for modern AI workloads," wrote Bob Pette, vice president and general manager for enterprise platforms at NVIDIA, in an Oct. 29 blog post.
By offering Enterprise RAs, NVIDIA hopes to speed up the time it takes to deploy AI infrastructure and reduce the cost of deployment. Dell Technologies, HPE, Lenovo, and Supermicro offer solutions based on NVIDIA’s Enterprise RAs.
In April, NVIDIA announced that it would acquire Israeli AI startup Run:ai, which allows developers to accelerate AI development and gain visibility into their AI infrastructure and workloads. NVIDIA will require EU approval for the deal, the European Commission announced on Oct. 31, according to Reuters.