NVIDIA and Microsoft Merge Technologies for Next-Level AI Solutions

NVIDIA has partnered with Microsoft to integrate NVIDIA AI Enterprise software into Azure Machine Learning, helping enterprises accelerate their AI projects. By combining their technologies, the two companies aim to establish a secure platform that enables Azure customers worldwide to rapidly build, deploy, and manage customized AI applications.

“With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation. The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production,” said Manuvir Das, Vice President of Enterprise Computing at NVIDIA.

The integration will give users access to more than 100 NVIDIA AI frameworks and tools, fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA's AI platform.

“Microsoft Azure Machine Learning users come to the platform expecting the highest performing, most secure development platform available. Our integration with NVIDIA AI Enterprise software allows us to meet that expectation, enabling enterprises and developers to easily access everything they need to train and deploy custom, secure large language models,” said John Montgomery, Corporate Vice President of AI platform at Microsoft.

Azure Machine Learning helps developers scale applications seamlessly, from small tests to large-scale deployments. It provides robust data encryption, access control, and compliance certifications to help organizations meet their security requirements and internal policies. NVIDIA AI Enterprise complements Azure Machine Learning by adding secure, production-ready AI capabilities.
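For a concrete sense of what that developer workflow looks like, the sketch below submits a GPU training job with the Azure Machine Learning Python SDK (v2). It is a minimal illustration, not part of the announcement: the subscription, workspace, compute cluster, and environment names are placeholders that would need to point at real resources.

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to an Azure ML workspace (all identifiers below are placeholders).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Define a small GPU training job; the compute and environment names are
# illustrative and must refer to resources that exist in the workspace.
job = command(
    code="./src",                          # folder containing train.py
    command="python train.py --epochs 3",
    environment="AzureML-pytorch-1.13-cuda11.7@latest",  # hypothetical curated GPU environment
    compute="gpu-cluster",                 # existing GPU compute cluster
    display_name="llm-finetune-smoke-test",
)

# Submit the job and print a link to monitor it in Azure ML studio.
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```

The same job definition scales from a quick local test of the training script to a multi-node run simply by pointing it at a larger compute cluster.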

NVIDIA AI Enterprise encompasses a comprehensive suite of resources, including more than 100 frameworks, pretrained models, and development tools. It features NVIDIA RAPIDS, which accelerates data science workloads. Additionally, it offers NVIDIA Metropolis for expediting vision AI model development, while NVIDIA Triton Inference Server assists enterprises in establishing a standardized approach to deploying and executing models.
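As a rough illustration of the standardized deployment path Triton Inference Server provides, the snippet below sends a single inference request to a running Triton instance using the tritonclient Python package. It is a sketch only: the server address, model name, tensor names, and shapes are assumptions that would need to match an actual model repository and its config.pbtxt.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a running Triton Inference Server (address is a placeholder).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request; the model and tensor names here are hypothetical and
# must match the model configuration in the Triton model repository.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(
    model_name="resnet50",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output__0")],
)
print(response.as_numpy("output__0").shape)
```

Because every model behind the server is called through this same request interface, teams can swap or update models without changing client code.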

Availability 

The integration of NVIDIA AI Enterprise with Azure Machine Learning is currently in a limited technical preview. Moreover, businesses worldwide can now access NVIDIA AI Enterprise through Azure Marketplace, providing them with expanded options for secure and supported AI development and deployment.

In addition to this integration, the NVIDIA Omniverse Cloud platform-as-a-service is now available on Microsoft Azure as a private offer for enterprises. Omniverse Cloud delivers a comprehensive cloud environment for developers and businesses, enabling them to design, develop, deploy, and manage industrial metaverse applications on a large scale.

Additional generative AI platforms for every industry

Apart from its collaboration with Microsoft, NVIDIA CEO Jensen Huang announced other generative AI capabilities and integrations during his live keynote at the COMPUTEX conference in Taipei. He unveiled a range of platforms designed to enable companies to capitalize on the revolutionary power of generative AI, reshaping various industries, including advertising, manufacturing, and telecommunications.

During the keynote, Huang introduced the DGX GH200, an AI supercomputer designed to deliver unparalleled performance for enterprises. The system uses NVIDIA NVLink to connect up to 256 NVIDIA GH200 Grace Hopper Superchips so they can operate as a single massive GPU in the data center.

The DGX GH200 provides exaflop-level performance and a shared memory capacity of 144 terabytes. This immense memory lets developers build large language models for generative AI chatbots, complex algorithms for recommender systems, and graph neural networks used in fraud detection and data analytics.

Google Cloud, Meta, and Microsoft are among the first companies expected to gain access to the DGX GH200, which also serves as a blueprint for future hyperscale generative AI infrastructure.

To cater to data centers of various sizes, the CEO introduced NVIDIA MGX, a modular reference architecture. This architecture enables system manufacturers to quickly and cost-effectively build over a hundred server configurations, accommodating a wide range of AI, high-performance computing (HPC), and NVIDIA Omniverse applications.

Finally, NVIDIA Avatar Cloud Engine (ACE) for Games, a new service for developers, enables the creation and deployment of customized AI models for speech, conversation, and animation in games. The service gives non-player characters conversational abilities, letting them respond to questions with realistic, evolving personalities.

One example of these generative AI platforms at work is the collaboration between NVIDIA and WPP, the marketing services organization, which are developing a content engine powered by generative AI on the Omniverse Cloud platform.