Virtual Hands-On Lab

Prompt to Pipeline: Build a dbt + DCM Data Pipeline in Snowflake with Cortex Code

A 60-minute hands-on lab where AI pair programming, native dbt, and Git-backed change management do the heavy lifting for you.

June 10

Register Now

Why this lab

Data engineers and analytics engineers spend too much time on the undifferentiated plumbing between raw data and business-ready tables: scaffolding models, writing boilerplate SQL, wiring up schema change management, and chasing environment drift. At the same time, the pressure to deliver AI-ready data has never been higher. This hands-on lab is built for data engineers, analytics engineers, and Snowflake practitioners who want to cut that cycle time in half by letting an AI coding agent do the heavy lifting, without giving up governance, version control, or reproducibility.

How we will solve it

We will use Cortex Code, Snowflake's AI coding agent, as our pair programmer to build and ship a complete data pipeline directly against a Snowflake trial account. Transformations are authored as a dbt project on Snowflake with no external dbt host and no extra infrastructure, and every schema and object change is governed through Database Change Management (DCM) for a Git-backed, reviewable, repeatable deploy path. The result is a workflow where Cortex Code drafts the code, dbt models the data, and DCM ships the change.

What we will do in the webinar

In this Hands-On Lab, we will be:

  • Starting from the PawCore smart-pet-collar dataset (device telemetry, manufacturing quality logs, and customer reviews) pre-loaded into your own Snowflake trial account before the session.
  • Using Cortex Code to scaffold a dbt project on Snowflake end to end: sources, staging models that preserve the raw table shape, and a layered set of analytical tables.
  • Prompting Cortex Code to generate dbt models and tests that land DEVICE_DATA.TELEMETRY, MANUFACTURING.QUALITY_LOGS, and SUPPORT.CUSTOMER_REVIEWS so the existing PawCore semantic view (PAWCORE_ANALYTICS.SEMANTIC.PAWCORE_ANALYSIS) drops in on top unchanged.
  • Adding three high-value derived tables that match the agent's verified analytical questions: lot quality correlation, regional customer impact, and battery × moisture correlation.
  • Wrapping the project in Database Change Management (DCM) from init to commit, plan, and deploy to DEV so every object and model change is reviewed, versioned, and reproducible.
  • Running the end-to-end pipeline, validating row counts and tests, and previewing how a Cortex Agent plugs straight into the output.
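To make the dbt side of the steps above concrete, here is a minimal sketch of the kind of staging model Cortex Code might scaffold for the telemetry feed. The target table name (DEVICE_DATA.TELEMETRY) comes from the lab description; the source name and every column here are illustrative assumptions, not the actual PawCore schema, and in the session Cortex Code generates these files for you from a prompt.

```sql
-- models/staging/stg_device_telemetry.sql
-- Sketch of a dbt staging model that preserves the raw table shape.
-- Source and column names below are assumptions for illustration only.

with source as (

    -- 'pawcore_raw' / 'telemetry' are hypothetical source names you
    -- would declare in a sources .yml file
    select * from {{ source('pawcore_raw', 'telemetry') }}

),

renamed as (

    select
        device_id,        -- assumed column
        recorded_at,      -- assumed column
        battery_pct,      -- assumed column
        moisture_level    -- assumed column
    from source

)

select * from renamed
```

A companion `.yml` file would typically attach dbt's built-in tests (for example `not_null` on `device_id`), which is how the "validating row counts and tests" step at the end of the lab gets its checks.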

What you will walk away with

You will leave with a working, Git-backed Snowflake pipeline you built yourself, plus a repeatable pattern for using Cortex Code as your day-to-day data engineering copilot. Specifically, you will know how to drive dbt projects on Snowflake from natural-language prompts, use DCM to review and deploy schema and dbt changes safely through environments, and combine the two into a CI-friendly workflow your team can adopt immediately. As a bonus, the tables your pipeline lands are the exact shape the PawCore Cortex Agent's semantic view expects, so you can layer Snowflake Intelligence on top in the follow-on Cortex AI Hands-On Lab and have a working agent in minutes.

Speakers

John Kang, Solutions Engineer, Snowflake