Snowflake Inc.

Live Demo

Build Deep Learning Models with Distributed PyTorch on GPUs in Snowflake ML

On-Demand

Distributed training has become crucial for handling increasingly complex deep learning models and massive datasets. With Snowflake Notebooks on Container Runtime, ML developers can leverage multiple GPUs to accelerate PyTorch development by sharding and distributing Snowflake data and training on it in parallel. Models can then be easily productionized in Snowflake through seamless integration with GPU-backed model serving and observability. In this session, we will show you how easy it is to work with any open source package, configure resources, and build scalable end-to-end workflows.

Join this demo with ML expert Vinay Sridhar to learn how to use Snowflake Notebooks on Container Runtime to:

  • Build and deploy a scalable computer vision PyTorch model for anomaly detection
  • Speed up training and inference on large datasets with distributed GPU pools (see the sketch after this list)
  • Develop ML workflows using any open source Python package from PyPI or Hugging Face
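
The distributed GPU training described above follows the standard PyTorch pattern of sharding data across worker processes and synchronizing gradients between them. The minimal sketch below uses only open-source PyTorch DistributedDataParallel; the model, dataset, and launch command are illustrative placeholders and it does not reproduce the Snowflake-specific notebook or data connector APIs shown in the session.

```python
# Minimal multi-GPU training sketch with open-source PyTorch DDP.
# Placeholder model and data only; not the Snowflake APIs from the demo.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def train():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each GPU process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder in-memory dataset; in practice this would be real training data
    features = torch.randn(10_000, 128)
    labels = torch.randint(0, 2, (10_000,))
    dataset = TensorDataset(features, labels)

    # DistributedSampler shards the dataset so each GPU sees a distinct slice
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=256, sampler=sampler)

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # wraps model for gradient sync

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across workers
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    train()  # launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
```

Because each process handles its own shard and gradients are averaged automatically during the backward pass, adding GPUs scales training without changes to the model code itself.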
Speakers
Vinay Sridhar

Senior Product Manager, Snowflake

Watch Now
