Warsaw – 7 December
AGENDA
Agenda is subject to change. If you plan to attend a hands-on lab during BUILD, please bring your laptop and charger so you can participate.
Breakfast and Registration
Get your badge, grab some breakfast, talk with your peers, and explore our partner solutions before the opening keynote begins.
Opening Keynote and Customer Panel
Explore the power of GenAI and LLMs in the Snowflake ecosystem, and learn how to rapidly build and deploy AI-driven applications with ease. Following the keynote, join our panel of start-ups to hear how they’ve built real-world applications and learn from their experiences.
VP, Sales Engineering EMEA, Snowflake
Head of AI/ML Strategy, Snowflake
Co-Founder & CTO, Gem
Director, Engineering, Snowflake
Founder, thingsTHINKING
Coffee Break and Partner Expo
Breakout Track 1
10:30 AM - 11:00 AM
How to Build Secure Snowflake Native Apps
Creating secure applications can be quite complex. In this session, we will walk through the security model and explore the features of the Snowflake Native App Framework that enable providers to build secure Snowflake Native Apps, including the permissions SDK for Streamlit, which greatly simplifies the user experience when setting up an application for the first time.
-
Harke Harkema
Sr. Sales Engineer, Snowflake
11:10 AM - 11:40 AM
How to Train and Deploy Computer Vision Models for Images Using PyTorch and Snowpark
The world’s volume of unstructured data keeps growing rapidly, and deep learning frameworks such as PyTorch continue to lead the way in performance for computer vision use cases. Join this session to learn how to leverage Snowpark Container Services and NVIDIA’s RAPIDS library with PyTorch to quickly experiment with and deploy models using GPUs in Snowflake. For this demo, we will use a set of pre-labeled medical images.
-
Michael Gorkow
Field CTO, Snowflake
11:50 AM - 12:20 PM
Data Superhero Session: Democratising AI with Snowflake - From Data to Insights
The democratisation of AI plays a crucial role in levelling the playing field, enabling small enterprises and individuals to leverage AI technologies that were once exclusively accessible to large corporations with vast resources.
In this session presented by a Data Superhero, we will demonstrate how to foster the democratisation of AI using the Snowflake Data Cloud alongside AI models. We will showcase how to harness external APIs that expose AI models, and how to load models into Snowflake. In addition, we will illustrate how AI can answer questions based on documents stored in tables in Snowflake.
-
Przemysław Kantyka
Snowflake Data Superhero & CxO, Data Consulting
-
Rafal Stryjek
Snowflake Data Superhero & CxO, Data Consulting
Breakout Track 2
10:30 AM - 11:00 AM
Simplify Application Development with Hybrid Tables
There are many examples of workloads with both transactional and analytical needs. Historically, solving them has required multiple databases on different database engines. With Hybrid Tables, Snowflake can support applications across this continuum with one single database. With excellent performance for both operational and analytical queries, Hybrid Tables support critical transactional features including unique keys, indexes, and referential integrity constraints. Join this session to see how Hybrid Tables integrate seamlessly with existing Snowflake tables and features, through a demo walking through the Tastybytes application ecosystem. We’ll show how users can effectively join data across Hybrid Tables and other Snowflake table types in the same query engine, with no federation required.
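As a taste of what the demo covers, here is a minimal sketch of a hybrid table; all table and column names are hypothetical, and exact syntax may vary by release.

```sql
-- A hybrid table with a primary key, a secondary index, and a
-- foreign key into another (assumed pre-existing) hybrid table.
CREATE OR REPLACE HYBRID TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT,
  order_ts    TIMESTAMP,
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id),
  INDEX idx_customer (customer_id)
);

-- Hybrid tables can be joined with standard Snowflake tables in
-- the same query, with no federation required.
SELECT o.order_id, d.region
FROM orders o
JOIN order_details d ON o.order_id = d.order_id;
```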
-
Fredrik Goransson
Field CTO, Snowflake
11:10 AM - 11:40 AM
How Synerise Unleashed the Potential of Behavioural Data with BaseModel and Snowflake Containers
Discover BaseModel.ai, a private foundation model for behavioral data, deployed and trained within Snowflake Containers, now in private beta preview. By harnessing the power of foundation models similar to those used in ChatGPT and DALL-E 2, BaseModel.ai offers an innovative approach to predictive analytics on behavioral event data stored in your Snowflake database.
This hands-on session will outline BaseModel.ai’s application in diverse industries and within a plethora of practical use-cases. By automating feature creation and model training from raw streams of event data, BaseModel.ai significantly streamlines predictive modeling, enhancing performance while saving time.
We’ll explore its deployment in Snowflake’s secure, scalable environment, showcasing how it offers a competitive edge in behavioral data science. BaseModel.ai represents not just an advancement in technology but a new era in understanding human behavior efficiently and comprehensively.
After the session you will know:
1. how to deploy your own application inside Snowflake Containers.
2. how to use advanced foundation models inside the Snowflake environment to run various AI models on top of your behavioral event data.
-
Jack Dąbrowski
Chief Artificial Intelligence Officer, Synerise
11:50 AM - 12:20 PM
The Future of DevOps with Snowflake
Snowflake is launching a set of features to facilitate better DevOps as you build pipelines, models and apps. Come and see how Snowflake’s new declarative database change management features allow you to store your Snowflake object definitions in source control and deploy them natively through Snowflake. You’ll also see how to leverage Snowflake Git integration and the Snowflake CLI to accelerate your development and release cycles.
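A minimal sketch of the declarative style (object and file names here are hypothetical): instead of scripting incremental ALTER statements, you store the target definition and let Snowflake converge the object to it.

```sql
-- Re-running this statement evolves the table toward the stored
-- definition; adding a column is just editing the definition.
CREATE OR ALTER TABLE customers (
  id    INT,
  name  STRING,
  email STRING
);

-- Deploy scripts directly from a Git repository stage.
EXECUTE IMMEDIATE FROM @my_git_repo/branches/main/deploy.sql;
```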
-
Maciej Oczko
Director, Software Engineering, Snowflake
-
Michal Trzaskowski
Manager, Software Engineering, Snowflake
Breakout Track 3
10:30 AM - 11:00 AM
How to Use Apache Iceberg with Snowflake and AWS
Join this demo session to see how to integrate Snowflake with a data lakehouse built with Apache Iceberg on AWS. You will learn how to create Iceberg Tables, register them with AWS Glue Data Catalog, and query the same table from Amazon Athena and Snowflake. In addition, you will see how to easily use various Snowflake features like data sharing, time travel, and converting catalogs without any data rewrite or upfront ingest.
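A short sketch of the Snowflake side of the demo; the volume, location, and table names are hypothetical, and exact options may vary by release.

```sql
-- An Iceberg table with Snowflake as the catalog, stored on an
-- external volume backed by S3.
CREATE ICEBERG TABLE events (
  event_id BIGINT,
  event_ts TIMESTAMP
)
CATALOG = 'SNOWFLAKE'
EXTERNAL_VOLUME = 'my_s3_volume'
BASE_LOCATION = 'events/';

-- Iceberg tables support familiar Snowflake features such as
-- time travel: query the table as of five minutes ago.
SELECT * FROM events AT (OFFSET => -60*5);
```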
-
Piotr Pietrzkiewicz
Manager, Sales Engineer, Snowflake
11:10 AM - 11:40 AM
Supercharge Your Data Transformations with Coalesce
Want to see what the future of data transformations looks like? Attend this technical deep dive of Coalesce, the only data transformation solution built for the Data Cloud. Build data projects and pipelines faster by visually transforming your data and learn how to:
– Navigate the Coalesce interface
– Use common data patterns to apply and standardize transformations
– Build and manage your data warehouse in a matter of clicks
– Adjust, deploy and refresh data pipelines, and more
-
Mark van der Heijden
Senior Sales Engineer, Coalesce
-
Frederik Naessens
Lead Architect, Coalesce
11:50 AM - 12:20 PM
How to Enrich Snowflake Utilising Open Source Solutions with Core3
Snowflake provides a wide portfolio of technologies out of the box and through technology partners’ offerings. We would love to share three business cases from our own experience where choosing an open-source solution helped fulfil business needs while optimising costs and workload. Or maybe not everything went as smoothly as you might think?
How can you work on your data governance and data quality with Amundsen, DataHub, and Great Expectations? Come and find out during our session.
-
Patryk Krzyżopolski
Big Data Delivery Manager, Core3
-
Jakub Maćkowiak
Data Architect, Core3
Lunch Break and Partner Expo
Join us for lunch, talk with your peers, and explore our partner solutions before the afternoon agenda commences.
Breakout 1
1:40 PM - 2:10 PM
What's New with Snowflake's Developer Ecosystem and APIs
Learn how to build data apps and custom APIs using open-source drivers and Snowflake APIs. Get the latest updates on the newly launched Snowflake Python APIs and learn how to build and automate Snowpark pipelines or deploy your LLM in Snowpark Containers easily using the new Python APIs and Snowflake CLI.
-
Fredrik Goransson
Field CTO, Snowflake
2:20 PM - 2:50 PM
How Dynatrace Prototypes Data Applications with Streamlit
Streamlit offers a simple way to create interactive data applications using pure Python. With its integration, the Snowflake platform provides a powerful tool for the swift development and deployment of data-centric applications. In this talk, you will discover how the Data Science team at Dynatrace leverages Streamlit for the exploration of new ideas and potential features. We will share practical insights and experiences in using Streamlit to prototype and transform our data applications. Join us to discover how you can also utilise Streamlit to enhance your data science projects.
-
Saša Mitrović
Sr. Sales Engineer, Snowflake
-
Florian Perteneder
Team Captain and Senior Data Scientist, Dynatrace
3:00 PM - 3:30 PM
Snowpark Containers: All Data, All Workloads!
Snowpark Container Services is a fully managed container offering that allows you to easily deploy, manage, and scale containerised services, jobs, and functions, all within the security and governance boundaries of Snowflake, and requiring zero data movement.
Join us to learn about the latest Snowpark Container Services innovations that empower developers with enhanced flexibility!
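To give a feel for the workflow, here is a hedged sketch of standing up a containerised service; all names are hypothetical, the specification is abbreviated, and instance family names and syntax may differ by release.

```sql
-- Provision compute for containers.
CREATE COMPUTE POOL my_pool
  MIN_NODES = 1
  MAX_NODES = 1
  INSTANCE_FAMILY = CPU_X64_XS;

-- Run a service from an image in a Snowflake image repository.
CREATE SERVICE my_service
  IN COMPUTE POOL my_pool
  FROM SPECIFICATION $$
spec:
  containers:
  - name: app
    image: /my_db/my_schema/my_repo/my_image:latest
  endpoints:
  - name: web
    port: 8080
$$;
```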
-
Waldemar Kot
Sr. Sales Engineer, Snowflake
3:40 PM - 4:10 PM
What’s New in Snowflake SQL for Builders
We continue to make significant SQL improvements in Snowflake to boost your productivity. Join this session to see technical demos highlighting our latest SQL improvements, including SELECT * EXCLUDE/ILIKE/REPLACE/RENAME, MIN_BY and MAX_BY, banker’s rounding, and GROUP BY ALL, all helping you code faster with more concise, easier-to-write queries. Plus, hear best practices for using this new syntax and these functions in your day-to-day coding.
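Quick sketches of the features named above, using hypothetical tables and columns:

```sql
-- SELECT * EXCLUDE / RENAME: project everything except one
-- column, renaming another on the way out.
SELECT * EXCLUDE (raw_payload) RENAME (ts AS event_time)
FROM events;

-- GROUP BY ALL: group by every non-aggregated column without
-- repeating the list.
SELECT region, product, SUM(amount) AS total
FROM sales
GROUP BY ALL;

-- MAX_BY: the value of one column at the row where another
-- column is maximal.
SELECT MAX_BY(product, amount) AS top_product
FROM sales;
```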
-
Andy Sanderson
Sr. Competitive Intelligence Manager, Snowflake
Breakout 2
1:40 PM - 2:10 PM
How to Build Streaming and Batch Data Pipelines with Dynamic Tables
Join this session to learn how to set up a full data pipeline with Dynamic Tables. We’ll navigate common issues using the many metric sources that come with Dynamic Tables, change our setup on the fly to meet the business requirements, and explain the effect of the target lag parameter, the difference between full and incremental refreshes, and the reasons for skipped refreshes. We will also show how Dynamic Tables can be used for Slowly Changing Dimension (SCD) use cases, where temporal alterations can be easily tracked, ensuring historical data stays accurate and insightful. After attending the talk, you will be able to start your own pipeline of multiple connected Dynamic Tables and be familiar with the basic tools to monitor and adapt it.
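A minimal pipeline sketch, with hypothetical object names: a dynamic table that maintains an aggregate under a declared target lag, plus the metric source used to monitor its refreshes.

```sql
-- The table refreshes automatically so results stay within
-- 5 minutes of the source data.
CREATE OR REPLACE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '5 minutes'
  WAREHOUSE = transform_wh
AS
  SELECT order_date, SUM(amount) AS revenue
  FROM raw_orders
  GROUP BY order_date;

-- Inspect refresh behaviour: full vs incremental refreshes,
-- and any skipped ones.
SELECT *
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
  NAME => 'DAILY_REVENUE'));
```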
-
Dmytro Yaroshenko
Field CTO, Snowflake
2:20 PM - 2:50 PM
Data Pipeline Monitoring with Snowflake Cortex ML-based functions
What if data engineers could use ML to forecast growth in their data pipelines – then identify outliers and trigger alerts when monitoring the health and performance of those pipelines? Or when monitoring data quality?
Our new ML-based Snowflake Cortex functions, Forecasting and Anomaly Detection, empower data engineers to generate accurate forecasts of data volume growth and find outliers that should be investigated, or help them identify unlikely-to-recur situations that should be excluded from data pipelines – with just a couple of simple SQL commands. No ML expertise required.
Join this session to learn more about Forecasting and Anomaly Detection, and see a demo of how they generate predictions and detect anomalies for a single time series or for multiple time series.
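To make the "couple of simple SQL commands" concrete, here is a hedged sketch: train a forecasting model on a time series of daily pipeline row counts and project it forward. The table and column names are hypothetical, and the signatures may differ by release.

```sql
-- Train a model on historical daily volumes.
CREATE SNOWFLAKE.ML.FORECAST pipeline_volume(
  INPUT_DATA => SYSTEM$REFERENCE('TABLE', 'daily_row_counts'),
  TIMESTAMP_COLNAME => 'load_date',
  TARGET_COLNAME => 'row_count'
);

-- Forecast the next 14 days of data volume growth.
CALL pipeline_volume!FORECAST(FORECASTING_PERIODS => 14);
```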
-
Michael Gorkow
Field CTO, Snowflake
3:00 PM - 3:30 PM
Logging & Tracing for Snowpark and Snowflake Native Apps
Join us in this session to learn how to debug and troubleshoot your Snowpark code and Snowflake Native Apps using the built-in logging and tracing capabilities. You will learn how to instrument your applications to capture logs, and how to add tracing support to understand the behaviour and performance of your code on your data. Lastly, you will learn how to use Snowflake alerts and notifications to monitor applications and take action on important application events.
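A sketch of the setup the session builds on; database, schema, and table names are hypothetical. An event table collects the logs and traces emitted by Snowpark code and Native Apps.

```sql
-- Create an event table and make it the account's sink for
-- telemetry.
CREATE EVENT TABLE my_db.telemetry.app_events;
ALTER ACCOUNT SET EVENT_TABLE = my_db.telemetry.app_events;

-- Raise verbosity for the current session.
ALTER SESSION SET LOG_LEVEL = 'INFO';
ALTER SESSION SET TRACE_LEVEL = 'ON_EVENT';

-- Inspect captured log records, newest first.
SELECT timestamp,
       record['severity_text']::STRING AS severity,
       value
FROM my_db.telemetry.app_events
WHERE record_type = 'LOG'
ORDER BY timestamp DESC;
```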
-
Fredrik Goransson
Field CTO, Snowflake
3:40 PM - 4:10 PM
Data Superhero Session: Real-time Analytics Using Snowflake Streaming-API and Streamlit Visualisation
Many tasks require us not only to process a huge amount of data, but to process it as fast as possible. A delay in predicting tsunamis can cost people’s lives. A delay in predicting traffic jams costs extra time. Ads based on recent user activity are ten times more popular. The goal of the session is to demonstrate a solution built on available Azure components, with Snowflake as the SQL data warehouse for real-time data processing. Finally, the results will be presented in Streamlit.
-
Michał Gołoś
Snowflake Evangelist, Infinite Data
Breakout 3
1:40 PM - 2:50 PM
Hands-On Lab: Zero to Snowflake
Join us for an instructor-led hands-on lab to learn how easy it is to turn your organisation into a data-driven business. Follow along in your own Snowflake free-trial account and have your questions answered by a product expert.
Difficulty level of the lab: Beginner / New to Snowflake
-
Niels ter Keurs
Sr. Sales Engineer - Snowflake Startup Program, Snowflake
3:00 PM - 4:10 PM
Hands-On Lab: Intro to Data Engineering with Snowpark for Python
Snowpark is the set of libraries and runtimes in Snowflake that securely deploy and process Python code for powerful data pipelines. Customers using Snowpark see a median of 3.5x faster performance and 34% cost savings over managed Spark. This session will be an introduction to building data engineering pipelines with Snowpark for Python using the latest feature announcements, including dynamic file access, Python 3.10, and Python Task API.
Difficulty level of the lab: Intermediate (you should know how to use Snowflake; attending the Zero to Snowflake lab earlier in the afternoon is sufficient preparation)
-
Harke Harkema
Sr. Sales Engineer, Snowflake
-
Niels ter Keurs
Sr. Sales Engineer - Snowflake Startup Program, Snowflake