Nvidia today announced it is making 21 tools for building applications on its graphics processing units (GPUs) available on the AWS Marketplace. This is part of a larger effort to streamline the process of embedding AI capabilities into apps.
The relationship between Nvidia and AWS is becoming more complicated. In addition to agreeing earlier this month to make rival GPUs from Intel available as a cloud service, AWS has signaled its intention to build its own GPUs.
Already available on Nvidia GPU Cloud (NGC), these tools are encapsulated as Docker containers that can be deployed anywhere, including the Nvidia GPU-based cloud services AWS makes available.
Nvidia claims these components have already been downloaded more than a million times by 250,000 developers and data scientists.
The goal is to make it as simple as possible to build, train, and deploy AI applications on Nvidia processors, which will soon include Arm processors via Nvidia's previously announced acquisition of Arm, expected to close next year.
This alliance marks the first time the entire Nvidia portfolio is available on the AWS Marketplace, said Adel El Hallak, director for NGC, in an interview with VentureBeat.
Previously, only individual Nvidia components had been available on the AWS Marketplace. By making all the components available there, Nvidia is looking to reduce the number of steps developers would otherwise have to take to download components from a separate platform, El Hallak said.
That’s critical, because AI is no longer just being included within a research project or proof of concept, he added, noting that enterprise IT organizations are now more routinely including AI capabilities in the applications that are being deployed in production. “We’ve reached an inflection point,” said El Hallak.
Nvidia has already committed to making its portfolio available on other cloud marketplaces. AWS, however, was the initial priority given the available resources, said El Hallak.
Nvidia GPUs are mainly employed to train AI models more cost-effectively, and they are usually accessed via the cloud. The inference engines those AI models run on are most commonly deployed on x86 processors, but Nvidia has been making a case for also using either lower-end GPUs or Arm processors to run AI inference engines.
Regardless of the processor type, it should be feasible to deploy Nvidia software that is encapsulated in containers. The tools span everything from instances of MXNet, TensorFlow, PyTorch, and the Nvidia Triton Inference Server to frameworks for video analytics and software development kits made up of multiple compilers and libraries.
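Deploying one of these containers follows the standard Docker workflow. A minimal sketch, assuming a host with the NVIDIA Container Toolkit installed (the PyTorch image tag shown is illustrative; NGC publishes many versions):

```shell
# Pull a framework container from Nvidia's NGC registry
# (the tag 20.12-py3 is an example; check NGC for current releases)
docker pull nvcr.io/nvidia/pytorch:20.12-py3

# Run it with all available GPUs exposed to the container
# (--gpus requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:20.12-py3 \
    python -c "import torch; print(torch.cuda.is_available())"
```

The same image reference works whether the host is an on-premises workstation or a GPU instance launched from the AWS Marketplace, which is the portability the container packaging is meant to provide.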
Naturally, competition is fierce as cloud service providers battle for the hearts and minds of the developers building these applications, given the amount of compute resources they consume. As competition drives down the cost of accessing those resources, the rate at which AI applications are being built and deployed should accelerate.
But the real challenge is not so much accessing the tools and compute resources needed to build these applications as it is finding and retaining the data scientists and developers required to build, deploy, and maintain them.
Updated 12/18/20, 1:45pm PT with comments from an interview