The Three Essentials to Get to Responsible AI

The excitement (and drama) around AI continues to escalate. Why? Because the stakes are high. The race is on to gain competitive advantage by applying AI to new use cases. The launch of generative AI last year added fuel to the fire, and for good reason. Whereas the existing portfolio of AI tools had targeted the more technically minded, like data scientists and engineers, new tools like ChatGPT handed the keys to the kingdom to anyone who could type a question. Gen AI has democratized not the data itself but the insights and information derived from it.

This widespread use of AI has caused concern, and perhaps rightfully so. For enterprises worried about their revenues and their reputations, rules must apply. Success in the new AI landscape depends not only on this shiny new tool but on the foundation it is built on: the successful and responsible use of AI and gen AI requires data security, data diversity and organizational maturity. It’s not only about the technology but about the people and processes as well.

Where are we now?

At a series of recent events, we asked audiences to raise their hands if they were using gen AI within their organizations. In some places, only a few people raised their hands. Yet, we know that the use of ChatGPT has exploded. Just two months after launch, ChatGPT reached 100 million monthly active users, making it at the time the fastest-growing consumer application in history. Personally, I’m not an early adopter or even a fast follower. I’ve only recently used ChatGPT for some travel itinerary suggestions. I wanted an “off the beaten track” itinerary for an upcoming trip, and it gave me some great recommendations.

Others use it as a matter of course. My son recently asked me to review an email he’d written to a professor for help in choosing between two proposed internships. In the original email, he included the two proposals as attachments. I told my son that he shouldn’t expect his professor to open attachments. He needed to include a 2-3 sentence summary of each. So what did he do? He copied the text of each proposal into ChatGPT and asked for a 2-3 sentence summary. And, that’s how its use is entering organizations today, particularly with that demographic.

At a global specialty food manufacturer, the data team went on a sleuthing mission to uncover the use of ChatGPT by monitoring network traffic. They found that roughly 10% of employees had visited and tested ChatGPT in the first quarter of 2023, yet a single source was responsible for 60% of the ChatGPT traffic. In fact, it was one employee: an intern learning to code side by side with her AI copilot. Younger employees have caught on to this new tool.

Yet in the CDO Agenda 2024 study by MIT, fewer than 10% of companies have gen AI use cases in production and just over 10% are piloting the technology. In the same study, almost 60% say that they have not made any changes to their data environment to support or enable gen AI. Many feel like they didn’t really see it coming until it exploded onto their radar. Maybe they knew a few people were testing it. But it’s like in Top Gun when the pair of bogies they’re tracking doubles. “We have a problem here. It’s not just one pair. It’s two. We’ve got four aircraft. I repeat, four bogies.” 

Suddenly, there are gen AI use cases everywhere, with ChatGPT or Bard summarizing documents or creating first drafts of emails. With execs not wanting their business plans or contracts to be copied into ChatGPT, some companies have brought that capability into the enterprise with tools like Walmart’s MyAssistant. But these use cases are just the tip of the iceberg. 

Where are we going from here?

Innovative data leaders and their teams are exploring the possibilities of applying AI and gen AI. Like the bogies on the radar, the use cases are multiplying. At a recent event in Geneva, one global shipping company described how an internally trained AI assistant improved response times for customer requests by extracting structured information from unstructured data. The shipper gets requests to transport goods from origin to destination. Approval depends on the contract with the customer, as well as the laws of the country of origin and the destination. A new AI tool extracts relevant information from customer contracts and enables natural language queries to accelerate request response times. In the future, the tool could also scan regulations and provide the necessary documentation for the shipment.
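The pattern behind such a tool is worth sketching. Here’s a minimal, hypothetical Python example of the extraction step: prompt a model to return only JSON for a fixed set of fields, then parse and validate the result. The field names and the `llm` callable are illustrative assumptions, not the shipper’s actual implementation.

```python
# Minimal sketch of LLM-based extraction from unstructured requests.
# The `llm` callable is an assumption: plug in whatever completion API
# your platform provides (it takes a prompt string, returns a string).
import json
from typing import Callable

FIELDS = ["origin_country", "destination_country",
          "goods_description", "contract_id"]

def extract_shipping_request(text: str, llm: Callable[[str], str]) -> dict:
    """Turn an unstructured transport request into structured fields."""
    prompt = (
        "Extract the following fields from the shipping request below. "
        f"Reply with JSON only, using exactly these keys: {FIELDS}. "
        "Use null for anything not stated.\n\n" + text
    )
    raw = llm(prompt)
    parsed = json.loads(raw)  # fails loudly if the model strays from JSON
    # Keep only the expected keys so downstream queries stay predictable.
    return {key: parsed.get(key) for key in FIELDS}
```

Once requests are reduced to structured rows like these, answering a question such as “which pending requests involve a destination our contract doesn’t cover?” becomes an ordinary query rather than a manual document hunt.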

In the pharmaceutical industry, research and development teams benefit from AI’s ability to accelerate drug discovery. Powerful models help find the figurative needle in a haystack: “candidate compounds” among the near-infinite possibilities for new drug treatments. In a recent Rise of the Data Cloud podcast, Dimitrije Jankovic, Sanofi’s Head of Data and AI, discusses the company’s use of AI, not only for drug discovery but also across the organization to improve employee productivity.

Companies are also using AI to retain their employees, particularly the productive ones. ADP, which manages payroll for one in six U.S. companies, provides machine learning (ML) models to help its customers retain top talent by predicting churn and optimizing salaries and benefits. Customers can also benchmark their metrics against aggregated, anonymized data from over 30 million employee records.

How do we get there?

As with anything new and disruptive, particularly in the enterprise, apprehensions exist. Can I get there fast enough? How can I make sure I can protect my data and the privacy of my customers? Do I even have all the data I need? What if I don’t have the skills in-house to do it all? How do I get everyone in my organization up to speed on what it all means for them? 

As Snowflake customers share their experiences, a few requirements stand out: data security, data diversity and organizational maturity, including data literacy. Companies love a platform that can make it all easier and, most importantly, more secure. Personally identifiable information (PII), particularly patient and customer data, must be protected. Becoming a newspaper headline is a surefire way to risk revenues and reputation. And, that applies to all their data, whether internal or external data sourced to expand their training sets. However, there’s more to AI than technology and data. Customers also highlight the need to include people and processes in the mix.

Let’s take a look at some of these foundational elements.

Data security. While companies need to get there quickly, they also want to know they are not taking shortcuts. A few weeks ago, Snowflake unveiled Snowflake Cortex (in private preview), our new, fully managed service that provides many of the building blocks to accelerate AI and gen AI adoption. Snowflake Cortex includes pre-built LLM-based functions to accelerate the development of enterprise-grade “assistant” apps, including answer extraction, text summarization, translation and sentiment detection, as well as ML-based functions like demand forecasting, anomaly detection and data classification. Snowflake Cortex also provides the building blocks to customize external and open source AI models and create custom AI apps. And, arguably most importantly, it does all this while ensuring that the data powering these functions and apps remains fully governed and secured within the Snowflake environment by the capabilities in Snowflake Horizon.
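To make those building blocks concrete, here’s a rough Python (Snowpark) sketch of calling Cortex functions over governed data. The connection parameters and the support_tickets table are placeholders, and because the service is in private preview, exact function names and signatures may differ from what’s shown here.

```python
# Rough sketch: assumes snowflake-snowpark-python is installed and the
# account has access to the Snowflake Cortex preview. The SUPPORT_TICKETS
# table and its TICKET_TEXT column are hypothetical.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# The functions run where the data lives, so nothing leaves the governed
# Snowflake environment.
rows = session.sql("""
    SELECT
        SNOWFLAKE.CORTEX.SUMMARIZE(ticket_text)             AS summary,
        SNOWFLAKE.CORTEX.SENTIMENT(ticket_text)             AS sentiment,
        SNOWFLAKE.CORTEX.TRANSLATE(ticket_text, 'de', 'en') AS english_text
    FROM support_tickets
    LIMIT 5
""").collect()

for row in rows:
    print(row["SUMMARY"], row["SENTIMENT"])  # sentiment is a score in [-1, 1]
```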

Data diversity. As companies accelerate their use of AI, many recognize that their own data isn’t sufficient. Among other things, they worry about inaccuracies or “hallucinations” and bias. AI practitioners recommend controls like output audits and human-in-the-loop reviews, but another mitigation strategy is more data. Sanofi’s Jankovic stressed the need to encourage better data sharing between organizations, developing a more cohesive data ecosystem to train models on external data. In the insurance industry, companies collaborate to improve fraud models: the more claims a model sees, the more likely it is to recognize fraudulent patterns. Or imagine a human resources scenario in which a model is used to select a candidate for hire. Your data might capture the profiles you’ve always hired while overlooking other demographics. External data such as ADP’s Payroll and Demographic Data or Revelio Workforce Analytics could augment your talent acquisition models and enhance decision-making.
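A toy example makes the hiring-bias point concrete. Before training, compare demographic coverage in internal records alone versus internal records augmented with an external dataset. All the records, column names and group labels below are fabricated purely for illustration.

```python
# Toy illustration of the data-diversity check described above.
# Every value here is made up for the example.
import pandas as pd

internal = pd.DataFrame({
    "candidate_id": range(6),
    "demographic_group": ["A", "A", "A", "A", "B", "A"],
    "hired": [1, 1, 0, 1, 0, 1],
})
external = pd.DataFrame({
    "candidate_id": range(100, 106),
    "demographic_group": ["B", "C", "B", "C", "A", "B"],
    "hired": [1, 0, 1, 1, 0, 1],
})

def group_share(df: pd.DataFrame) -> pd.Series:
    """Share of each demographic group in a candidate dataset."""
    return df["demographic_group"].value_counts(normalize=True).sort_index()

print("Internal only:\n", group_share(internal))
print("Augmented:\n", group_share(pd.concat([internal, external])))
# If group A dominates the internal data, a model trained on it alone may
# simply learn to reproduce past hiring patterns rather than find talent.
```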

Organizational maturity. This is the people and process side of things that mustn’t be overlooked. Yes, responsible AI requires data and technology, but it also means establishing guardrails and raising awareness across the organization. At Sanofi, as discussed in the recent podcast, that meant upleveling the entire organization with its new program, Responsible AI at Sanofi for Everyone (RAISE). Participation in AI extended across the company, with representation from privacy, procurement, legal and IT teams as well as business units. A cross-company AI Working Group defines the strategy, and the Innovation Governance Committee, which includes the Chief Digital Officer and Legal, oversees execution. From an operational perspective, the company’s AI factory focuses on repeatability and scalability, not just pilots. Snowflake has a similar cross-functional AI steering committee. AI is not a side project done in isolation.

Responsible AI is built on a strong data foundation that also requires organization-wide data literacy. Consider awareness of the value of data at all levels, including those who might not know they work with data (such as field technicians, cashiers, or anyone who might be capturing data that could be input into an AI model). Do decision-makers comprehend enough of what’s going on under the hood to apply insights to the choices they must make? Do the data and insights teams have the tools they need and the expertise to use them? And, can that scale across the entire organization? Responsible AI requires data literacy for everyone. We’ll dig into this further in a future blog post.

Just remember that as use cases multiply, the responsible approach starts with a robust data platform and ecosystem.
