Snowflake releases a new large language model


Data Cloud company Snowflake has released a new large language model (LLM), Snowflake Arctic, designed to be the most open, enterprise-grade LLM on the market.

With its Mixture-of-Experts (MoE) architecture, Snowflake Arctic delivers top-tier intelligence with unparalleled efficiency at scale and is optimised for complex enterprise workloads.

In addition, Snowflake is releasing Arctic’s weights under an Apache 2.0 license, along with details of how it was trained. Snowflake says this sets a new openness standard for enterprise AI technology. The Snowflake Arctic LLM is part of the Snowflake Arctic model family, which also includes practical, state-of-the-art text-embedding models for retrieval use cases.

“This is a watershed moment for Snowflake, with our AI research team innovating at the forefront of AI,” said CEO Sridhar Ramaswamy. “By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open-source AI can do. Our research with Arctic will significantly enhance our capability to deliver reliable, efficient AI to our customers.”

Recent research found that around 46% of global enterprise AI decision-makers said they leverage existing open-source LLMs to adopt generative AI as part of their organisation’s AI strategy.

Snowflake’s open model also provides code templates alongside flexible inference and training options so users can quickly get started with deploying and customising Arctic using their preferred frameworks. These will include NVIDIA NIM with NVIDIA TensorRT-LLM, vLLM, and Hugging Face. Arctic is available for serverless inference in Snowflake Cortex, Snowflake’s fully managed service that offers machine learning and AI solutions in the Data Cloud. It will also be available on Amazon Web Services (AWS), alongside other model gardens and catalogs, including Hugging Face, Lamini, Microsoft Azure, NVIDIA API catalog, Perplexity, Together AI, and more.
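As a rough illustration of the open-weights route, the sketch below serves Arctic through vLLM, one of the frameworks named above. The Hugging Face repository ID (Snowflake/snowflake-arctic-instruct) and the engine flags are assumptions to check against Snowflake’s release notes; a 480-billion-parameter MoE model requires a multi-GPU server in practice.

```python
# Minimal vLLM inference sketch. The model ID and engine flags are
# assumptions; a model of Arctic's size needs a multi-GPU node.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Snowflake/snowflake-arctic-instruct",  # assumed HF repo ID
    tensor_parallel_size=8,     # shard across 8 GPUs (illustrative)
    trust_remote_code=True,     # MoE checkpoints often ship custom code
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(
    ["Write a SQL query that returns the top 5 customers by revenue."],
    params,
)
print(outputs[0].outputs[0].text)
```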

Snowflake’s AI research team built Arctic in less than three months, spending roughly one-eighth of the training cost of similar models. Arctic was trained on Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, and Snowflake says this sets a new baseline for how fast state-of-the-art, open, enterprise-grade models can be trained, ultimately enabling users to create cost-efficient custom models at scale.

As part of this strategic effort, Arctic’s differentiated MoE design improves both the training system and model performance. Arctic delivers high-quality results while activating only 17 billion of its 480 billion parameters at a time, achieving industry-leading quality with unprecedented token efficiency. In an efficiency breakthrough, Arctic activates roughly 50% fewer parameters than DBRX, and 75% fewer than Llama 3 70B, during inference or training. It also outperforms leading open models, including DBRX and Mixtral-8x7B, in coding (HumanEval+, MBPP+) and SQL generation (Spider), while simultaneously providing leading performance in general language understanding (MMLU).
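The efficiency claim is just a ratio of active to total parameters. The quick check below reproduces the rough percentages quoted above using publicly reported counts (DBRX: about 36B active of 132B total; Llama 3 70B is dense, so all ~70B are active); all figures are approximate.

```python
# Back-of-the-envelope check of the active-parameter claims, using
# publicly reported counts (approximate, in billions of parameters).
models = {
    "Arctic":      {"active": 17, "total": 480},  # MoE: 17B active of 480B
    "DBRX":        {"active": 36, "total": 132},  # MoE: 36B active of 132B
    "Llama 3 70B": {"active": 70, "total": 70},   # dense: all params active
}

arctic_active = models["Arctic"]["active"]
for name, m in models.items():
    if name == "Arctic":
        continue
    reduction = 1 - arctic_active / m["active"]
    print(f"Arctic activates {reduction:.0%} fewer parameters than {name}")
# Arctic activates 53% fewer parameters than DBRX        (~50% above)
# Arctic activates 76% fewer parameters than Llama 3 70B (~75% above)
```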

Snowflake continues to provide enterprises with the data foundation and cutting-edge AI building blocks they need to create powerful AI and machine learning apps with their enterprise data. When accessed in Snowflake Cortex, Arctic will accelerate customers’ ability to build production-grade AI apps at scale, within the security and governance perimeter of the Data Cloud.
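For the managed route, Cortex exposes models through SQL functions, so an application can call Arctic from an ordinary query. The sketch below uses the Snowflake Python connector; the connection details are placeholders, and the model identifier ("snowflake-arctic") is an assumption to verify against the Cortex documentation.

```python
# Calling Arctic via Snowflake Cortex's serverless COMPLETE function.
# Connection parameters are placeholders; the model identifier
# "snowflake-arctic" is an assumption to confirm against Cortex docs.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
)

with conn.cursor() as cur:
    cur.execute(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
        ("snowflake-arctic", "Summarise last quarter's sales performance."),
    )
    print(cur.fetchone()[0])

conn.close()
```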

In addition to the Arctic LLM, the Snowflake Arctic family of models also includes the recently announced Arctic embed, a family of state-of-the-art text-embedding models available to the open-source community under an Apache 2.0 license. The five models are available on Hugging Face for immediate use and will soon be available through the Snowflake Cortex embed function (in private preview). These embedding models are optimised to deliver leading retrieval performance at roughly a third of the size of comparable models, giving organisations a powerful and cost-effective solution for combining proprietary datasets with LLMs as part of a Retrieval-Augmented Generation (RAG) or semantic search service.
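As a sketch of the retrieval use case, the example below loads one of the embed models through the sentence-transformers library and ranks documents against a query. The repository ID (Snowflake/snowflake-arctic-embed-m) and the query-prefix convention are assumptions to check against the model cards on Hugging Face.

```python
# Semantic-search sketch with an Arctic embed model. The repo ID and
# the query prefix are assumptions; check the Hugging Face model card.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

documents = [
    "Quarterly revenue grew 12% year over year.",
    "The data warehouse migration finished in March.",
    "Employee onboarding now uses a self-service portal.",
]
# Many retrieval models expect queries (not documents) to carry a prefix.
query = ("Represent this sentence for searching relevant passages: "
         "How did revenue change last quarter?")

doc_emb = model.encode(documents, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)

scores = util.cos_sim(query_emb, doc_emb)[0]
best = scores.argmax().item()
print(f"Best match ({scores[best]:.2f}): {documents[best]}")
```

In a RAG pipeline, the top-ranked passages would then be passed to the LLM as context alongside the user’s question.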

Snowflake also prioritises giving customers access to the newest and most powerful LLMs in the Data Cloud, including the recent additions of Reka and Mistral AI’s models. Moreover, Snowflake recently expanded its partnership with NVIDIA to continue its AI innovation, bringing together the full-stack NVIDIA accelerated computing platform and Snowflake’s Data Cloud to deliver a secure and formidable combination of infrastructure and compute capabilities for AI productivity. Snowflake Ventures has also recently invested in Landing AI, Mistral AI, Reka, and others to further Snowflake’s commitment to helping customers create value from their enterprise data with LLMs and AI.
