Data Cloud Summit 2020 delivered a great deal of information that you can put to use immediately. Although the event has ended, the opportunity to learn from its speakers and panelists continues: to view a session you missed or rewatch one you found particularly useful, go to the agenda.

One of the most informative tracks was Modernize Your Data Lake, Build Data Engineering at Scale. This track included a wealth of information about how Snowflake enables organizations to solve their data challenges. During the track's sessions, customers described the issues they faced and detailed the designs and architectures they implemented to solve them.

Highlights from the Modernize Your Data Lake, Build Data Engineering at Scale track include:

  • How Snowflake’s platform enables global organizations to modernize their technology and build a robust data infrastructure, using Snowflake’s data lake as their single source of truth and taking advantage of Snowflake Secure Data Sharing to share governed data with employees, partners, and customers.
  • Examples of how healthcare organizations unlocked massive amounts of structured and semi-structured data from a variety of sources to serve the needs of doctors, practitioners, and patients.
  • Descriptions of how organizations consolidated disparate data from multiple silos into a unified single source of truth. These organizations used specific tools and methodologies to build a data classification scheme; they had to find and resolve data duplicated across departmental databases and spreadsheets, surface hidden data, and eliminate sources of data loss. They then implemented Snowflake’s Data Cloud because of its security, scalability, near-zero maintenance, and cost transparency.
  • Techniques for building extensible data pipelines that incorporate code and libraries written in a variety of languages.
  • How implementing modern data lake capabilities enables organizations to increase flexibility, reduce storage and compute costs, enable self-service queries, reduce downtime, and support data sharing.
  • How Snowflake solves common data pipeline challenges by providing fresh data in near real time, increasing reliability and performance, processing both structured and semi-structured data, and reducing costs.

To watch any session you missed from any Data Cloud Summit track, go to the agenda.