WHY ATTEND
Building a successful AI strategy requires more than high-performance models; it demands a modern, resilient data foundation. This edition of Snowflake Connect: Data Engineering provides a deep dive into building and managing high-performance transformation pipelines designed specifically for AI-ready data. Through hands-on demos, we explore how to modernize transformation pipelines and leverage AI to design scalable data architectures that reduce operational overhead, using familiar tools and workflows such as Python, Apache Spark, SQL, and dbt.
You’ll learn how to:
- Build and migrate transformation pipelines using Snowpark for Python/Java, Snowpark Connect for Apache Spark, Dynamic Tables, and dbt with live demos showing code-first and declarative approaches
- Optimize pipeline performance and reduce Total Cost of Ownership (TCO) through intelligent materialization strategies, workload isolation, and cost monitoring techniques
- Accelerate development cycles using Cortex Code and agentic workflows to automate repetitive engineering tasks, generate data products, and improve developer productivity
- Ensure data interoperability and scale by deploying your code for continuous orchestration alongside Apache Iceberg for open storage