Teams working on data science initiatives are tasked with deriving new insights from massive amounts of data. To do so, they often work with compute environments that hit performance bottlenecks, require heavy operational overhead, or both, reducing the time available for innovation.

With Snowpark for Python, teams can now code with Python’s familiar syntax and execute with the superior performance, security and near-zero maintenance of the Snowflake processing engine. 

Join us to see a demo and learn more about how to: 

  • Write and execute exploratory data analysis and feature engineering code in Snowflake using Snowpark for Python DataFrames (see the sketch after this list)
  • Bring open-source libraries and frameworks into your pipelines with the Anaconda integration 
  • Automate and streamline your feature engineering pipelines for training and inference
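
For a sense of what this looks like in practice, below is a minimal sketch of a feature engineering step using the Snowpark for Python DataFrame API. The connection parameters and the table and column names (CUSTOMER_TRANSACTIONS, CUSTOMER_ID, AMOUNT, CUSTOMER_FEATURES) are illustrative placeholders, not part of the webinar material.

```python
# Minimal Snowpark for Python sketch: build a simple feature and persist it in Snowflake.
# Connection parameters, table names, and column names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import avg, col

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

# Create a session; the DataFrame operations below execute inside Snowflake's engine.
session = Session.builder.configs(connection_parameters).create()

# Lazily reference a source table and derive a feature: average amount per customer.
transactions = session.table("CUSTOMER_TRANSACTIONS")
features = (
    transactions.group_by("CUSTOMER_ID")
    .agg(avg(col("AMOUNT")).alias("AVG_AMOUNT"))
)

# Preview a few rows, then materialize the result as a table for training or inference.
features.show()
features.write.save_as_table("CUSTOMER_FEATURES", mode="overwrite")
```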
Speakers
  • Michael Gregory, Principal Data Platform Architect, Field CTO Office, Snowflake