Google Cloud and Hugging Face have announced a strategic partnership aimed at advancing the mission of democratizing AI. The collaboration lets developers use Google Cloud's infrastructure for all Hugging Face services, and streamlines the training and serving of Hugging Face models on Google Cloud.
"Google Cloud and Hugging Face share a vision for making generative AI more accessible and impactful for developers. This partnership ensures that developers on Hugging Face will have access to Google Cloud's purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications," said Thomas Kurian, CEO at Google Cloud.
The collaboration enables developers to effortlessly train, fine-tune, and deploy Hugging Face models using Vertex AI directly from the Hugging Face platform. This allows seamless integration with Google Cloud's comprehensive MLOps services, facilitating the development of next-generation AI applications.
Additionally, the partnership supports Google Kubernetes Engine (GKE) deployments, allowing Hugging Face developers to independently manage, fine-tune, and deploy their workloads. This includes the use of Hugging Face-specific Deep Learning Containers on GKE for scalable model development.
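As an illustration of the GKE path, a self-managed deployment of a Hugging Face serving workload might look like the following Kubernetes manifest. This is a minimal sketch: the container image tag, model ID, and resource request are illustrative assumptions, not details from the announcement.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi-server
  template:
    metadata:
      labels:
        app: tgi-server
    spec:
      containers:
        - name: tgi
          # Hugging Face Text Generation Inference container (tag is illustrative)
          image: ghcr.io/huggingface/text-generation-inference:latest
          # Model ID is a placeholder; any Hugging Face Hub model ID could be used
          args: ["--model-id", "mistralai/Mistral-7B-Instruct-v0.2"]
          ports:
            - containerPort: 80
          resources:
            limits:
              # Assumes a GKE node pool with NVIDIA GPUs attached
              nvidia.com/gpu: 1
```

Applied with `kubectl apply -f`, a manifest like this would run the inference server on a GPU-equipped GKE node pool, which is the kind of independently managed workload the partnership describes.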
Google Cloud Marketplace also streamlines management and billing for the Hugging Face managed platform, including Inference, Endpoints, Spaces, AutoTrain, and more.
"From the original Transformers paper to T5 and the Vision Transformer, Google has been at the forefront of AI progress and the open science movement. With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools from Google Cloud, including Vertex AI and TPUs, to meaningfully advance developers' ability to build their own AI models," said Clement Delangue, CEO of Hugging Face.