
Use Case
Accelerate machine learning from prototype to production with distributed GPUs or CPUs on the same platform as your governed data. Streamline model development and MLOps with no infrastructure to maintain or configure — all through a centralized UI.
Overview
Develop, deploy and monitor ML features and models at scale with a fully integrated platform that brings together tools, workflows and compute infrastructure to the data.
Unify model pipelines end to end with any open source model on the same platform where your data lives.
Scale ML pipelines over CPUs or GPUs with built-in infrastructure optimizations — no manual tuning or configuration required.
Discover, manage and govern features and models in Snowflake across the entire lifecycle.
ML Workflow
Model Development
Optimize data loading and distribute model training from Snowflake Notebooks on Container Runtime or any IDE of choice with ML Jobs.
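As a rough sketch (assuming the `snowflake-ml-python` package; the compute pool, stage, and table names below are placeholders), a training function written in any IDE can be dispatched to Snowflake compute with the ML Jobs `remote` decorator:

```python
# Hedged sketch: dispatch a training function to Snowflake compute with
# ML Jobs. The decorator import and arguments follow snowflake-ml-python's
# jobs API; pool, stage, and table names are placeholders, and exact
# signatures should be checked against the library documentation.
def build_remote_trainer():
    from snowflake.ml.jobs import remote  # requires snowflake-ml-python

    @remote("MY_COMPUTE_POOL", stage_name="payload_stage")
    def train_model(table_name: str):
        # Runs inside a Snowflake container; load data and fit the model here.
        ...

    # Calling train_model(...) submits the job to the compute pool and
    # returns a handle whose result can be retrieved once training finishes.
    return train_model
```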
Feature Management
Create, manage and serve ML features with continuous, automated refresh on batch or streaming data using the Snowflake Feature Store. Promote discoverability, reuse and governance of features across training and inference.
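A minimal sketch of that workflow, assuming the `snowflake-ml-python` Feature Store API and placeholder database, warehouse, and table names:

```python
# Hedged sketch: register an entity and a feature view with automated
# refresh in the Snowflake Feature Store. "MY_DB", "MY_WH", and
# "RAW.ORDERS" are placeholders; verify signatures against the
# snowflake-ml-python feature-store documentation.
def register_customer_features(session):
    from snowflake.ml.feature_store import (
        CreationMode, Entity, FeatureStore, FeatureView,
    )

    fs = FeatureStore(
        session=session,
        database="MY_DB",
        name="MY_FEATURE_STORE",
        default_warehouse="MY_WH",
        creation_mode=CreationMode.CREATE_IF_NOT_EXIST,
    )

    # An entity defines the join keys shared by training and inference.
    customer = Entity(name="CUSTOMER", join_keys=["CUSTOMER_ID"])
    fs.register_entity(customer)

    spend = FeatureView(
        name="customer_spend",
        entities=[customer],
        feature_df=session.sql(
            "SELECT CUSTOMER_ID, SUM(AMOUNT) AS TOTAL_SPEND "
            "FROM RAW.ORDERS GROUP BY CUSTOMER_ID"
        ),
        refresh_freq="1 day",  # continuous, automated refresh
    )
    return fs.register_feature_view(spend, version="v1")
```

Registering the feature view makes it discoverable and reusable, so the same feature definitions serve both model training and inference.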
Production
Log models built anywhere in Snowflake Model Registry and serve for real-time or batch predictions on Snowflake data with distributed GPUs or CPUs.
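For example, logging an externally trained model and running batch predictions might look like the following sketch (database and schema names are placeholders; `model` could be a fitted scikit-learn estimator):

```python
# Hedged sketch: log a trained model to the Snowflake Model Registry and
# run batch inference. "MY_DB"/"ML" and the model/version names are
# placeholders; check snowflake-ml-python's registry docs for details.
def log_and_predict(session, model, sample_input, features_df):
    from snowflake.ml.registry import Registry

    reg = Registry(session=session, database_name="MY_DB", schema_name="ML")
    mv = reg.log_model(
        model,
        model_name="demand_forecast",
        version_name="v1",
        sample_input_data=sample_input,  # used to infer the model signature
    )
    # Batch predictions directly on a Snowpark DataFrame of features:
    return mv.run(features_df, function_name="predict")
```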
“Previously, the process to train all these models and generate predictions took a half hour. The unified model on Snowflake is super quick; we’re talking minutes to generate forecasts for hundreds of thousands of customers. This speed and simplicity will help unlock additional capabilities for the business like simulation and scenario forecasting.”
Dan Shah
Manager of Data Science
Learn more about the integrated features for development and production in Snowflake ML
Get Started
End-to-end ML
Can data scientists and ML engineers build and deploy models with distributed processing on CPUs or GPUs?
Yes. This is enabled by the underlying container-based infrastructure that powers the Snowflake ML platform.
You can build features and models directly from Snowflake Notebooks, or through any IDE of choice with ML Jobs.
Do models need to be built in Snowflake to run in production there?
No, you can bring models built anywhere to run in production on Snowflake data. During inference, you can take advantage of integrated MLOps features such as ML observability and RBAC governance.
Is Snowflake ML compatible with open-source libraries?
Yes, Snowflake ML is fully compatible with any open-source library. Securely access open-source repositories via pip and bring in any model from hubs such as Hugging Face.
How is Snowflake ML priced?
Snowflake operates on a consumption-based pricing model; see the latest credit pricing table for details.
Can I try Snowflake ML for free?
Yes, you can try any of our ML quickstarts directly from the free trial experience.