What Can Go Wrong If AI Risk Management Isn’t Up To The Task

AI Risk Management
Monday, February 26, 2024

(This post was originally published in Ad Exchanger)

It’s no surprise that marketing is one of the first disciplines to embrace generative AI. In a recent State of Generative AI Survey by risk management platform Portal26, 68% of respondents in the marketing, media and sales categories believed generative AI would give their organization a significant competitive advantage.

Yet in that same survey, 75% of those organizations reported security and/or misuse incidents involving generative AI. While powerful, generative AI is also fraught with risks.

As Uncle Ben famously told Peter Parker, “With great power comes great responsibility.”

There are a number of landmines in the world of generative AI. Here’s what to watch out for and how to avoid them:

Deepfakes and authenticity. Bad actors have a lot to gain by associating themselves with existing brands and personas. 

The gaming app Skyward Aviator Quest recently promoted itself with a deepfake video featuring world-famous cricketer Sachin Tendulkar. The only problem: he never endorsed the product or had any deal with the company.

Protecting your brand assets, including photos and videos, from misuse and verifying their authenticity will be new challenges. New companies and apps, like Nodle’s Click app, are now emerging, designed to automatically authenticate media assets. Look for more of this, and soon.

Bias. Bias comes in two forms: in training your LLM (large language model) and in how you prompt your AI tools to provide outputs.

As outlined in an article by Martech.org, generative AI is just a machine. The outputs are only as good as the inputs, and those inputs come from humans with subjective perspectives based on their own experiences and backgrounds. 

Just because a generative AI product is commercially available doesn’t mean it’s trustworthy. Some of the most popular tools out there still have significant bias.

This report by Bloomberg describes how Stable Diffusion’s AI creates images that perpetuate and amplify harmful gender and racial disparities. It ticks all the usual boxes for how AI manifests stereotypes: The world is run by white male CEOs; women are rarely doctors, lawyers or judges; dark-skinned men commit crimes, while dark-skinned women flip burgers. 

Filtering out bias isn’t easy. But deliberately introducing different perspectives and viewpoints in model training, a process known as data augmentation, should be mandated in governance and policy.

Privacy. Customer data privacy is a critical risk factor for marketers. Individuals are actively entering private data – phone numbers, email addresses, even Social Security numbers and intellectual property – into prompts for public LLMs, with predictably harmful results. The International Association of Privacy Professionals is a great resource for digging into guiding principles and legal compliance around generative AI privacy.

In South Korea, Samsung recently suffered a major incident in which employees entered sensitive internal data into ChatGPT, potentially exposing it outside the company. This happened just 20 days after Samsung lifted a ban on ChatGPT that had been designed to prevent exactly this kind of leak.

A thorough employee training regime could have made a difference. The aforementioned State of Generative AI Survey highlighted that 57% of marketing companies provided five hours or less of training to their employees. We need to do better.

Data usage. Reuters highlights numerous corporate data risks around the adoption of generative AI. The nature of generative AI, which often requires pulling and storing data from a cloud-based repository, creates opportunities for bad actors to hijack data. Rules around what data is or isn’t allowed need to be clearly communicated and managed to minimize risk for the organization.

IP protection and copyright infringement. A recent article in HBR highlights the challenges creators and marketers face when diving into generative AI. Are you protecting your or your client’s intellectual property by not feeding it into public LLMs? Conversely, how do you prevent using others’ IP and copyrighted material when querying LLMs? And, finally, are you training your LLMs with copyrighted information? 

The copyright issue is in the courts right now, as The New York Times is suing OpenAI, the maker of ChatGPT, for infringement. A well-considered governance program should provide guidance and protective oversight when it comes to the use of copyrighted material. While it may take time for the court case to sort itself out, companies will need a way forward for training their LLMs. Some may even conclude that a business relationship that pays the copyright owners is necessary if that information is considered important to the model.

These are just a handful of areas of risk that marketers will have to wrestle with as they rapidly adopt generative AI. Even the most thoughtful training and governance program isn’t foolproof.

While it might seem counterintuitive, high-quality human oversight is essential when it comes to AI. Generative AI is no substitute for thorough reviews by people who are well versed in the products and audiences they are meant to serve.

Yes, we are at the dawn of a new age that will enhance productivity and revenue. But marketers should still proceed with caution and ensure the right human safeguards are in place.

Key Takeaways

Generative AI Risk Management is Critical: 75% of organizations adopting Generative AI have reported security and/or misuse incidents. Marketers must proceed with caution and implement strong AI governance to minimize risk exposure.
Combat Deepfakes & Algorithmic Bias: Key governance areas include protecting brand assets against unauthorized use by bad actors (deepfakes) and mandating data augmentation to filter out inherent bias and harmful stereotyping in LLM outputs (e.g., Stable Diffusion).
Prevent Privacy Breaches and IP Infringement: Thorough policies are required to stop employees from entering confidential data (like phone numbers, SSNs, or IP) into public LLMs. Companies must also address the legal challenges of copyright infringement when training or querying LLMs (highlighted by The New York Times lawsuit).
Mandate Human Oversight and Training: Even robust training programs are not foolproof; successful Generative AI adoption requires high-quality human oversight and reviews, alongside significant improvements to employee training regimes regarding safe usage and company policy compliance.

Frequently Asked Questions About AI Governance and Risk Management

Q1: How widespread are security and misuse incidents among organizations adopting Generative AI?
A1: Generative AI risks are highly prevalent: 75% of organizations adopting Generative AI have already experienced security and/or misuse incidents. Despite this risk, 68% of respondents in marketing, media, and sales categories believe Generative AI will provide their organization with a significant competitive advantage. Marketers must proceed with caution, recognizing that while the power is great, so is the responsibility.
Q2: What are the primary risks associated with deepfakes and how can brands ensure asset authenticity?
A2: The risk of deepfakes involves bad actors associating themselves with existing brands and personas for illicit gain. For example, a gaming app recently featured world-famous cricketer Sachin Tendulkar in a promo video without his endorsement or a deal with the company. To counteract this, protecting brand assets, including photos and videos, and ensuring their authenticity are new challenges. New apps are emerging, such as Nodle’s Click app, specifically designed to automatically authenticate media assets.
Q3: How should organizations mitigate algorithmic bias in LLM outputs?
A3: Bias in Generative AI occurs both in the way the LLM is trained and in how users prompt the AI tools. Generative AI outputs are only as good as the inputs, which come from humans with subjective perspectives. To address this, organizations must mandate a process known as data augmentation in their governance and policy, which involves deliberately introducing different perspectives and viewpoints during model training to filter out inherent bias. Reports have shown that popular tools, like Stable Diffusion, can perpetuate and amplify harmful gender and racial disparities, manifesting stereotypes such as the world being run by white male CEOs or showing dark-skinned men committing crimes.
Q4: What are the data privacy and IP protection risks of using public Large Language Models (LLMs)?
A4: A critical risk factor is that individuals are actively entering private data (like phone numbers, email addresses, or social security numbers) and even intellectual property (IP) into prompts for public LLMs. This poses severe risks for the organization. A major incident occurred in South Korea, where Samsung employees entered sensitive internal data into ChatGPT, potentially exposing it outside the company. Regarding IP, companies face the dual challenge of protecting their own IP from being fed into public LLMs and preventing the use of others’ copyrighted material when querying or training their own LLMs. The legal stakes are highlighted by The New York Times’ ongoing infringement lawsuit against OpenAI, the maker of ChatGPT.
Q5: What role does human oversight and training play in effective AI governance?
A5: Even the most thoughtful training and governance programs are not foolproof. High-quality human oversight and thorough human reviews by personnel who are well versed in the products and audiences they serve are essential for successful Generative AI adoption. Employee training regimes need significant improvement; a survey indicated that 57% of marketing companies provided only five hours or less of training to their employees regarding safe Generative AI use. Clear rules around what data is or isn't allowed must be communicated and managed to minimize organizational risk.
About the author
Neil Cohen

Neil Cohen is the Strategy Director at Traction. He has more than 40 years of experience creating, building and managing brands from start-ups to Fortune 500 companies. As a “marketing therapist,” he works with companies to help them focus and "get out of their own way."
