SNOWFLAKE CERTIFIED SOLUTION
Confidently Power Iceberg Pipelines with Any Data
CREATE DATABASE my_linked_db
  LINKED_CATALOG = (
    CATALOG = 'my_catalog_int'
  );
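Once the linked database is created, namespaces and Iceberg tables registered in the remote catalog surface automatically and can be queried like native objects. A minimal sketch, assuming a hypothetical namespace `sales_ns` and table `orders` exist in the linked catalog:

```sql
-- List the namespaces discovered from the remote catalog
SHOW SCHEMAS IN DATABASE my_linked_db;

-- Query a remote Iceberg table directly (sales_ns.orders is a hypothetical example)
SELECT COUNT(*) FROM my_linked_db.sales_ns.orders;
```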
Overview
Snowflake’s Enterprise Lakehouse solution, grounded in Apache Iceberg™ tables, enables any customer to overcome data and security fragmentation and conquer data complexity.
It starts by connecting existing Iceberg tables in any catalog, region, or cloud to build a single, connected, and secure view of your entire data estate, with precise security policies applied across it.
Customers can also connect new data with improved economics by streaming it to any Iceberg table with Snowpipe or Snowpipe Streaming at their choice of latency, or by connecting multi-modal data from anywhere with Openflow.
As data arrives, built-in automated data quality monitoring offers customizable quality controls and proactive alerts (in private preview) to track the health of data pipelines.
With Snowflake, customers can streamline pipelines through automated, declarative solutions on fully managed infrastructure with Dynamic Tables for Iceberg, plus the option to run existing Apache Spark workloads through Snowpark Connect for lower TCO and greater scalability.
A customer’s business ecosystem extends beyond their own architecture. Support for secure data sharing in open formats closes this last mile of fragmentation, letting customers share governed access to Iceberg and Delta tables with any team, customer, or partner.
Conquer Data Complexity. Deliver AI

Connect any data to streamline the pipelines and deliver governed data for AI.
Use Case 1
Connect existing data
Federate from any Iceberg REST catalog (including AWS Glue, Databricks Unity Catalog, and Microsoft OneLake) all from a single Snowflake development environment to automatically discover and access fresh data. This delivers on the lakehouse’s zero-ETL promise while providing broad interoperability and the processing power of Snowflake’s performance engine. Protect your data wherever it lands by applying precise security policies, regardless of catalog, region, or cloud.
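Federation starts from a catalog integration pointing at the remote Iceberg REST endpoint; precise security policies can then be layered on top. A minimal sketch with placeholder endpoint and OAuth values, and a hypothetical `region` column for the row access policy:

```sql
-- Catalog integration against an Iceberg REST endpoint (placeholder URI and credentials)
CREATE CATALOG INTEGRATION my_catalog_int
  CATALOG_SOURCE = ICEBERG_REST
  TABLE_FORMAT = ICEBERG
  REST_CONFIG = (
    CATALOG_URI = 'https://example.com/catalog/api/v1'
  )
  REST_AUTHENTICATION = (
    TYPE = OAUTH,
    OAUTH_CLIENT_ID = '<client_id>',
    OAUTH_CLIENT_SECRET = '<client_secret>',
    OAUTH_ALLOWED_SCOPES = ('PRINCIPAL_ROLE:ALL')
  )
  ENABLED = TRUE;

-- Apply a precise security policy regardless of where the table lives
CREATE ROW ACCESS POLICY only_us_rows AS (region STRING) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'DATA_ADMIN' OR region = 'US';

-- Hypothetical linked table and column names
ALTER ICEBERG TABLE my_linked_db.sales_ns.orders
  ADD ROW ACCESS POLICY only_us_rows ON (region);
```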
Build a connected and governed Lakehouse

Centralize and activate data from anywhere
Use Case 2
Connect New Data
Snowflake Openflow is an open, extensible, managed multi-modal data integration service that makes data movement effortless between data sources and destinations, supporting all data types including structured and unstructured, batch and streaming.
Explore more resources for this use case
Use Case 3
Build Declarative Pipelines
Leverage a declarative SQL framework to simply define the desired outcome of your data transformation, and let Snowflake automatically handle orchestration, dependency management, scheduling, and incremental refresh. The result is fully managed pipelines that free up development hours and deliver efficient, stable data.
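For Iceberg outputs, this declarative pattern maps to a dynamic Iceberg table: declare the query and a freshness target, and Snowflake handles refresh and dependencies. A minimal sketch, with hypothetical source table, warehouse, and external volume names:

```sql
CREATE DYNAMIC ICEBERG TABLE daily_revenue
  TARGET_LAG = '1 hour'            -- desired freshness; Snowflake schedules refreshes
  WAREHOUSE = transform_wh         -- hypothetical warehouse
  EXTERNAL_VOLUME = 'my_ext_vol'   -- hypothetical external volume for Iceberg data
  CATALOG = 'SNOWFLAKE'
  BASE_LOCATION = 'daily_revenue'
  AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw_orders                -- hypothetical source table
    GROUP BY order_date;
```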

Instead of building complex, manual pipelines to extract text, parse documents, chunk content, and then generate embeddings, use Snowflake Openflow to extract directly from source systems, with built-in processors that parse and chunk documents using Snowflake Cortex.
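Inside Snowflake, the parse-and-chunk step can collapse into a single statement by pairing Cortex’s document parsing and text splitting functions. A sketch assuming a hypothetical stage `@doc_stage` holding documents, with illustrative chunk sizes:

```sql
-- Parse each staged document, then split the extracted text into overlapping chunks
SELECT
  relative_path,
  c.value::STRING AS chunk
FROM DIRECTORY(@doc_stage),
  LATERAL FLATTEN(
    input => SNOWFLAKE.CORTEX.SPLIT_TEXT_RECURSIVE_CHARACTER(
      TO_VARCHAR(
        SNOWFLAKE.CORTEX.PARSE_DOCUMENT(@doc_stage, relative_path, {'mode': 'LAYOUT'}):content
      ),
      'markdown',  -- split format
      512,         -- chunk size (characters, illustrative)
      64           -- overlap between chunks (illustrative)
    )
  ) c;
```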
Explore more resources for this use case
Use Case 4
Automate Data Quality Monitoring
Leverage customizable data quality controls and proactive alerts (currently in private preview) that isolate bad records for remediation. You gain confidence that every data product delivered, whether it feeds a dashboard, an application, or a gen AI model, is consistent and trusted.
Measure key metrics such as freshness and counts of duplicates, NULLs, rows, and unique values. These checks use data metric functions (DMFs), which include both Snowflake-provided system DMFs and user-defined DMFs.
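In practice, attaching a system DMF and a measurement schedule looks like the following, with hypothetical table and column names; recorded measurements land in the built-in monitoring view:

```sql
-- Schedule measurements and attach Snowflake-provided system DMFs (hypothetical names)
ALTER TABLE orders SET DATA_METRIC_SCHEDULE = '60 MINUTE';
ALTER TABLE orders ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.NULL_COUNT ON (customer_id);
ALTER TABLE orders ADD DATA METRIC FUNCTION SNOWFLAKE.CORE.DUPLICATE_COUNT ON (order_id);

-- Inspect recorded measurements
SELECT metric_name, value, measurement_time
FROM SNOWFLAKE.LOCAL.DATA_QUALITY_MONITORING_RESULTS
WHERE table_name = 'ORDERS';
```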
Explore more resources for this use case
Use Case 5
Share Iceberg and Delta Tables with anyone
Snowflake’s secure zero-ETL data sharing now supports both Iceberg and Delta Lake tables regardless of catalog. This means you can easily and securely share open table formats across regions and clouds, with security and governance policies that persist for your data consumers.
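Sharing an Iceberg table follows the same grant-to-share flow as native tables. A minimal sketch with hypothetical database, schema, table, and consumer account names:

```sql
CREATE SHARE iceberg_share;
GRANT USAGE ON DATABASE analytics_db TO SHARE iceberg_share;
GRANT USAGE ON SCHEMA analytics_db.public TO SHARE iceberg_share;
GRANT SELECT ON ICEBERG TABLE analytics_db.public.events TO SHARE iceberg_share;

-- Invite a consumer account (hypothetical identifier)
ALTER SHARE iceberg_share ADD ACCOUNTS = consumer_org.consumer_account;
```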
Explore more resources for this use case
This solution was created by an in-house Snowflake expert and has been verified to work with current Snowflake instances as of the date of publication.
Solution not working as expected? Contact our team for assistance.