Snowflake and NVIDIA Announce Collaboration to Drive Customization of AI Data Applications
At the 2024 Snowflake Summit, Snowflake announced a deep collaboration with NVIDIA. The collaboration aims to empower customers and partners to develop custom AI data applications within the Snowflake platform, leveraging NVIDIA's leading AI technology.
Through this collaboration, Snowflake has integrated NVIDIA AI Enterprise software into its platform and incorporated NeMo Retriever microservices into Snowflake Cortex AI, Snowflake's hosted large language model (LLM) and vector search service. The integration lets organizations pair custom models with their business data to deliver accurate, context-grounded responses.
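Cortex AI exposes hosted models through SQL functions that can be called from any Snowflake session. The sketch below builds such a query from Python; the `SNOWFLAKE.CORTEX.COMPLETE` function name and the `snowflake-arctic` model id follow Snowflake's documented Cortex interface, but treat them as assumptions and verify against your account's documentation.

```python
# Sketch: build a Snowflake Cortex AI query that asks a hosted LLM a question.
# SNOWFLAKE.CORTEX.COMPLETE and the model name are assumptions based on the
# documented Cortex SQL interface; verify before running in your account.

def build_cortex_query(model: str, prompt: str) -> str:
    """Return a SQL statement that calls Cortex COMPLETE with an inline prompt."""
    # Single quotes inside SQL string literals are escaped by doubling them.
    escaped = prompt.replace("'", "''")
    return f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{escaped}')"

sql = build_cortex_query("snowflake-arctic", "Summarize last quarter's sales notes.")
print(sql)
# The statement would then be executed through a Snowflake session, e.g. with
# the snowflake-connector-python package: cursor.execute(sql)
```

The escaping step matters because prompts frequently contain apostrophes; in production code a parameterized query via the connector would be preferable to string interpolation.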
Furthermore, Snowflake Arctic, Snowflake's open enterprise-grade LLM, is now fully supported by NVIDIA TensorRT-LLM software, delivering significantly higher inference performance. Arctic is also available as an NVIDIA NIM inference microservice, further expanding developers' access to the model.
As enterprises seek to maximize the potential of AI, demand for data-driven customization is growing. This collaboration between Snowflake and NVIDIA will accelerate the development of industry-specific AI solutions, bringing tangible benefits to businesses across industries.
Snowflake CEO Sridhar Ramaswamy said, "The combination of NVIDIA's full-stack accelerated computing and software with Snowflake's advanced AI capabilities in Cortex AI is undoubtedly a game-changing endeavor. Together, we are ushering in a new era of AI, enabling customers to easily, efficiently, and confidently build custom AI applications on their enterprise data."
NVIDIA founder and CEO Jensen Huang also praised the collaboration, stating, "Data is the fundamental raw material of the AI industrial revolution. NVIDIA and Snowflake will work together to help enterprises refine their proprietary business data and transform it into valuable generative AI."
Within Cortex AI, NVIDIA AI Enterprise software provides several notable capabilities. NVIDIA NeMo Retriever delivers accurate, high-performance information retrieval for enterprises. NVIDIA Triton Inference Server supports the deployment, operation, and scaling of AI inference across platforms. And NVIDIA NIM inference microservices, part of NVIDIA AI Enterprise, can be deployed directly in Snowflake through Snowpark Container Services, letting organizations run foundation models inside Snowflake.
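Deploying a NIM through Snowpark Container Services amounts to registering a container image and creating a service from a YAML specification. The sketch below renders that DDL from Python; the statement follows the general shape of Snowpark Container Services, but the compute pool, image path, and service name are illustrative placeholders, not values from the announcement.

```python
# Sketch: deploying a NIM container in Snowflake via Snowpark Container Services.
# The CREATE SERVICE syntax follows the general Snowpark Container Services
# pattern; the compute pool, image path, and names below are placeholders.

SERVICE_SPEC = """
spec:
  containers:
  - name: nim-llm
    image: /my_db/my_schema/my_repo/nim-llm:latest  # placeholder image path
"""

def build_create_service(name: str, pool: str, spec: str) -> str:
    """Render a CREATE SERVICE statement with an inline YAML specification."""
    return (
        f"CREATE SERVICE {name}\n"
        f"  IN COMPUTE POOL {pool}\n"
        f"  FROM SPECIFICATION $$ {spec} $$"
    )

print(build_create_service("nim_service", "my_gpu_pool", SERVICE_SPEC))
```

Because the service runs inside Snowflake's security perimeter, the model is served next to the governed data rather than the data being exported to an external inference endpoint, which is the design point of this integration.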
It is worth mentioning that Quantiphi, an AI-first digital engineering company and an "Elite" partner of both Snowflake and NVIDIA, has developed Snowflake Native Apps (including baioniq and Dociphi) to enhance productivity and document processing capabilities within specific industries. These applications were built with the NVIDIA NeMo framework and will be available on the Snowflake Marketplace.
Snowflake Arctic was launched in April 2024 and was trained on NVIDIA H100 Tensor Core GPUs. It is now available as an NVIDIA NIM: users can try it for free in seconds through the NVIDIA API catalog, or download the NIM for flexible deployment wherever they choose.
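The NVIDIA API catalog serves hosted models through an OpenAI-compatible chat endpoint. The sketch below builds (but does not send) such a request for Arctic; the base URL and the `snowflake/arctic` model id follow the catalog's published pattern but should be verified, and the API key is a placeholder.

```python
import json
import urllib.request

# Sketch: calling Snowflake Arctic through the NVIDIA API catalog, which
# exposes an OpenAI-compatible chat-completions endpoint. Base URL and model
# id are assumptions based on the catalog's published pattern; verify them.

API_KEY = "nvapi-..."  # placeholder; obtain a key from the NVIDIA API catalog

def build_arctic_request(prompt: str) -> urllib.request.Request:
    """Build (without sending) a chat-completion request for Arctic."""
    body = json.dumps({
        "model": "snowflake/arctic",  # model id as listed in the catalog
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    return urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_arctic_request("What is Snowflake Cortex AI?")
print(req.full_url)
# Sending with urllib.request.urlopen(req) would return the JSON completion.
```

Because the endpoint is OpenAI-compatible, existing client libraries that accept a custom base URL can also be pointed at it without code changes.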
Earlier this year, Snowflake and NVIDIA expanded their collaboration to create a unified AI infrastructure and computing platform in the AI Data Cloud. Today's announcement marks significant progress in their partnership, jointly helping customers achieve excellence in the field of AI.