Big data is revolutionizing the digital marketing landscape: organizations are gathering data from numerous sources, data streams are arriving at unparalleled speeds, and businesses are handling a wide variety of data formats, from emails to user behavior events to financial transactions.
Nearly every company considers itself a data-first company, but not every company is building a data architecture democratized enough to make it easy to activate data and transform it into valuable insights. Until recently, many organizations relied on a centralization strategy to process vast amounts of data: everything had to be imported and transported from its various sources into a data lake before it could be queried, which can be time-consuming and costly.
Here at Iterable, we faced many of these same challenges as our company grew. We realized we needed a new data architecture that would allow our various operational teams to seamlessly access and analyze data from across the business, all while eliminating the challenges of data availability and accessibility at scale.
As a result, we turned to Snowflake to develop a data mesh for Iterable: a modern, distributed architecture for managing our data. Let’s take a look at the three components that helped us build an innovative data management strategy and solve some of our core challenges in democratizing data.
Domain-driven data ownership and pipelines

The first step in building a successful data mesh model is defining domain-driven data ownership and pipelines.
Snowflake gave us the capability to build large-scale, ad hoc data pipelines using a variety of languages (SQL, Python, Java, etc.). From Snowflake’s wide selection of extract, transform, and load (ETL) partners, we were able to easily integrate Fivetran and dbt, resolving data silos across our many go-to-market (GTM) applications and improving data accessibility. Our data engineers can now set up pipelines from major applications such as Salesforce, Zuora, and Zendesk within minutes, with just a couple of clicks.
Iterable’s product organization, on the other hand, presented a more complicated data pipeline use case that required ingesting data at high volume and frequency. The BI team used Snowflake’s task functionality to trigger requests to our API endpoints, and Snowpipe then automatically processed and ingested the resulting data from an external S3 bucket into our Snowflake database.
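As a rough sketch of that pattern (not our production pipeline; every object name, credential, and schedule below is a placeholder), the landing table, external stage, pipe, and task can be created through Snowflake’s Python connector:

```python
# Sketch of the auto-ingest pattern described above: an external stage over the
# S3 bucket, a pipe that loads new files automatically, and a scheduled task.
# Every object name, credential, and schedule here is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="bi_pipeline_user",
    password="...",
    warehouse="BI_WH",
    database="PRODUCT_DOMAIN",
    schema="RAW",
)

statements = [
    # Landing table for the raw product events.
    "CREATE TABLE IF NOT EXISTS product_events (payload VARIANT)",
    # External stage over the S3 bucket where event files arrive (in practice
    # this also needs credentials or a storage integration).
    """CREATE STAGE IF NOT EXISTS events_stage
           URL = 's3://example-bucket/events/'
           FILE_FORMAT = (TYPE = JSON)""",
    # Snowpipe loads new files from the stage as they land in the bucket.
    """CREATE PIPE IF NOT EXISTS events_pipe AUTO_INGEST = TRUE AS
           COPY INTO product_events FROM @events_stage""",
    # A scheduled task; in the setup described above, this would kick off the
    # job that calls the API endpoints (call_product_api is hypothetical).
    """CREATE TASK IF NOT EXISTS refresh_product_events
           WAREHOUSE = BI_WH
           SCHEDULE = '60 MINUTE'
       AS
           CALL call_product_api()""",
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```

Once the pipe and its bucket notifications are in place, new event files are loaded automatically, so the domain team owns the pipeline end to end without hand-running COPY commands.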
Using Snowflake has allowed us to simplify how we set up each domain and has improved each team’s autonomy over its own data and ETL process.
Sharing data as a product
The second key pillar of our data mesh approach was the ability to share data as a product.
Once data has been structured and processed, how a company uses it in cross-team collaboration becomes the critical next step. Leveraging Snowflake’s Secure Data Sharing capabilities, businesses can share data sets not only between their own internal departments but also with external organizations, such as customers and partners. The BI team at Iterable wants not only to make our systems smarter at capturing information but also to empower our end users to make data-driven decisions.
The power of sharing data sets showed up internally with our “Iterable at Iterable” project (the term we use for running our business on our own technology), which unified a cross-cloud contact profile from Marketo, Salesforce, and the Iterable application itself. We were able to easily access clean, processed data models from a variety of sources and create an enriched master profile, with compliant preferences, for each of our customers, sent downstream via Snowflake and Segment.io.
For our BI team, being able to share our data science model outputs using Snowflake’s Python connector from Jupyter accelerated a variety of revenue initiatives, such as sales forecasting, predictive modeling, and financial reporting. The BI team currently ingests multiple third-party data sources and runs predictive analytics to help us identify our top accounts. The output of those models can then be shared back with key business stakeholders, unlocking real value from cross-team collaboration on data.
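A minimal sketch of that workflow, assuming a hypothetical forecast table and placeholder connection details (illustrative only, not our actual schema), looks like this:

```python
# Sketch: push model output from a Jupyter notebook into Snowflake so it can be
# queried or shared by business stakeholders. All names here are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="bi_notebook_user",
    password="...",
    warehouse="BI_WH",
    database="ANALYTICS",
    schema="FORECASTS",
)

# Hypothetical model output: one forecast row per account.
forecast_df = pd.DataFrame(
    {
        "ACCOUNT_ID": ["a-001", "a-002"],
        "PREDICTED_ARR": [120000.0, 87000.0],
        "MODEL_VERSION": ["v3", "v3"],
    }
)

cur = conn.cursor()
cur.execute(
    """CREATE TABLE IF NOT EXISTS SALES_FORECASTS (
           ACCOUNT_ID STRING, PREDICTED_ARR FLOAT, MODEL_VERSION STRING)"""
)
cur.close()

# Bulk-load the DataFrame into the table using the connector's pandas helper.
write_pandas(conn, forecast_df, "SALES_FORECASTS")
conn.close()
```

Because the outputs land in a regular Snowflake table, they can be governed and shared with the same mechanisms as any other domain data set.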
Our most powerful use case, and the one that fundamentally requires a data mesh architecture, is the ability to securely share data with our customers. Last year, Iterable built a native integration to support Snowflake Secure Data Sharing, seamlessly and securely granting Iterable customers access to their system event data right in their own Snowflake instance, all without any ETL. Now Iterable customers can securely share live data internally within their own organizations, reduce data storage costs, and drive powerful business decisions with data silos removed.
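Conceptually, Secure Data Sharing comes down to granting a consumer account read access to selected objects, with no data ever copied between accounts. The following sketch uses placeholder object and account names (it is not Iterable’s actual integration) to show the provider-side pattern:

```python
# Sketch: expose a table of system event data to a customer's Snowflake account
# via a secure share. All names and the account identifier are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder provider-side credentials
    user="data_share_admin",
    password="...",
    role="ACCOUNTADMIN",
)

statements = [
    # Create the share object on the provider side.
    "CREATE SHARE IF NOT EXISTS customer_events_share",
    # Grant read access to the database, schema, and event table.
    "GRANT USAGE ON DATABASE events_db TO SHARE customer_events_share",
    "GRANT USAGE ON SCHEMA events_db.public TO SHARE customer_events_share",
    "GRANT SELECT ON TABLE events_db.public.system_events TO SHARE customer_events_share",
    # Attach the consumer (customer) account to the share.
    "ALTER SHARE customer_events_share ADD ACCOUNTS = customer_org.customer_account",
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```

On the consumer side, the customer creates a database from the share and can immediately query the live event data in their own Snowflake account.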
Balancing centralized and decentralized data
The last pillar that makes a data mesh the optimal solution is achieving the right balance between centralization and decentralization of data. Iterable’s BI organization treats our data strategy as a hybrid model: our core BI team provides the standardization and centralization of data, while business stakeholders and analysts on other teams can access that data and run their own analyses under domain-specific data governance policies.
Iterable’s revenue and billing data is protected by Snowflake’s ability to define security policies down to individual data elements and the roles of domain users. Our sales and marketing teams can each query our financial data models and see views tailored to their user role and the sensitivity of the data, allowing security to scale across the organization. Additionally, external organizations that tap into our data shares can integrate them with their existing security policies.
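One way this kind of role-aware protection can be expressed is with Snowflake’s masking and row access policies. The sketch below is illustrative only, with hypothetical table, column, and role names rather than Iterable’s actual policies:

```python
# Sketch: role-based protection of financial data using Snowflake masking and
# row access policies. Every table, column, and role name here is hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="governance_admin",
    password="...",
    role="SECURITYADMIN",
    database="FINANCE",
    schema="MODELS",
)

statements = [
    # Mask contract amounts for everyone except finance and sales leadership.
    """CREATE MASKING POLICY IF NOT EXISTS mask_contract_amount AS (val NUMBER)
           RETURNS NUMBER ->
           CASE WHEN CURRENT_ROLE() IN ('FINANCE_ANALYST', 'SALES_LEADERSHIP') THEN val
                ELSE NULL END""",
    "ALTER TABLE billing_models MODIFY COLUMN contract_amount SET MASKING POLICY mask_contract_amount",
    # Limit marketing analysts to rows for their own region (illustrative logic).
    """CREATE ROW ACCESS POLICY IF NOT EXISTS region_filter AS (region STRING)
           RETURNS BOOLEAN ->
           CURRENT_ROLE() <> 'MARKETING_ANALYST' OR region = 'NA'""",
    "ALTER TABLE billing_models ADD ROW ACCESS POLICY region_filter ON (region)",
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```

Because the policies live alongside the data, every domain that queries the shared financial models inherits the same protections automatically.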
And this is just the beginning as we flesh out our data mesh architecture. As our pool of customers expands, Iterable will be able to dive into industry benchmarking and share critical KPIs that others in the industry can benefit from.
Internally, our BI team is looking forward to improving and expanding our customer maturity models. We hope to continue centralizing data from across the organization to fuel more robust (and soon, machine learning-based) revenue analytics.
The real power of a data mesh architecture lies in the ability to scale with the business. At Iterable, this is just the dawning of a future that will be driven by data.