What is an AI Factory?

An AI factory operationalizes the use of AI by tightly integrating compute, storage, and networking elements, optimizing the entire system for AI workloads.


For the last few years, there has been industry talk of a concept called the AI factory. The idea behind an AI factory is an AI-optimized architecture, and in some cases a managed service, built to run AI workloads and accelerate any work that uses AI. To that end, an AI factory (as the term factory implies) operationalizes the use of AI.

Over the last year or so, companies like Penguin Solutions, Dell, HPE, and others have teamed with NVIDIA to offer such AI-optimized systems to enterprise users of all types. These systems tightly integrate compute, storage, and networking elements so that AI jobs make efficient use of expensive GPUs and avoid performance bottlenecks.

AI factories have drawn more attention recently. For example, at this month’s COMPUTEX conference, NVIDIA CEO Jensen Huang said in his keynote address that a host of system makers, including Asus, Gigabyte, Inventec, and Supermicro, among others, will provide cloud, hybrid, and edge AI systems built on NVIDIA GPUs, CPUs, and networking modules. Some of these companies will offer the systems as part of an AI factory, delivering actionable intelligence, new insights, and real-time decisions through AI-led infrastructure.

To put the impact of AI into context, he drew an analogy to a previous major transformation. “The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar traditional data centers to accelerated computing and build a new type of data center — AI factories — to produce a new commodity: artificial intelligence,” said Huang.

AI needs its own factory

The explosion in artificial intelligence research and deployment over the past 24 months has led to a surge in demand for AI-optimized data centers and services. NVIDIA’s stock price has ballooned, alongside those of other manufacturers that have proved they are at the forefront of AI. To capture this demand, many systems providers envision a future in which all infrastructure is built with an AI-first approach in mind.

Data centers are rewiring their infrastructure to meet the demand, with AI operations expected to need, on average, more than ten times the compute power of the previous generation of internet and enterprise services. NVIDIA has launched several modules, including an Arm-based CPU, to create an end-to-end machine built for AI. At the same time, it is partnering with several manufacturers and data center operators to ensure that, at a minimum, its GPUs are deployed at a fast rate.

Other industry efforts focused on the networking aspects of AI performance have unfolded during the same time period. As we noted in “AMD, Cisco, and Others Team on Ethernet for AI and HPC,” “as enterprises embrace artificial intelligence (AI), they often find their existing infrastructures are not optimal for AI’s workload demands.” That issue led to the formation of a new industry group, the Ultra Ethernet Consortium (UEC), whose aim is to build a complete Ethernet-based communication stack architecture for AI and high-performance computing (HPC) workloads.

Putting the AI factory concept into perspective

With data centers optimized for AI deployments and infrastructure manufacturers of all sizes retooling their businesses to fit with AI, the age of the AI factory may be upon us.

New terms that summarize large infrastructure improvements or deployments are helpful, especially for those outside the industry, and the industry is keen to see AI factories adopted. A similar push happened with the Internet of Things (IoT), a wide set of networking, hardware, and software tools that turned fleets of sensors into actionable data and insights. The handy IoT abbreviation opened the door for more people to better understand networking and hardware announcements.

About the Author(s)

David Curry, Technology Writer

David Curry is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
