How Do Brands Tell Authentic Stories In The Age Of AI?

By C200 Member Katie Calhoun

As marketing and agency leaders gather at the annual Cannes Lions in the south of France this week, the transformative power of generative AI will certainly be the topic of the hour. How will this technology impact creativity, consumer engagement, and visual storytelling? Like so many aspects of AI, this one is provoking excitement and anxiety in equal parts.

AI’s implications are particularly important for visual content, given the ubiquity of imagery and its role in shaping our culture. With the proliferation of photos, videos, GIFs, and emojis on our phones and social media feeds, imagery has become the language of the 21st century. Every day, users share 14 billion images on social media—three times the activity seen in 2013—and upload 720,000 hours of videos to YouTube. Businesses recognize that including visual content in marketing can more than double engagement, with short-form videos currently driving the highest ROI of any social marketing strategy.

The surge in visual consumption has coincided with an increasing emphasis on authenticity in advertising. Platforms like Instagram and TikTok have popularized a spontaneous, candid aesthetic. A recent survey from visual content marketing platform Stackla found that nearly 90% of consumers felt authenticity was essential to brand loyalty, preferring brands that are “real and organic” over those that are “perfect and well-packaged.” Campaigns like Dove’s Real Beauty and Patagonia’s Buy Less, Demand More have set new standards, with consumers now expecting advertisements to reflect their own values and relatable, genuine experiences.

The Transformative Potential of Generative AI

Generative AI has the potential to turn this dynamic on its head, introducing new opportunities and priorities for marketers. IBM aptly describes this as marketing’s “sink or swim” moment: 76% of marketers are already using generative AI tools, and many fear that failing to adopt these technologies quickly will leave them at a competitive disadvantage.

The potential benefits of generative AI are formidable: it enables targeted marketing with unprecedented levels of personalization and customization. Gen AI can synthesize and analyze customer data from disparate sources with remarkable speed and accuracy. It also offers compelling content creation, generating and modifying images in a fraction of the time, and at a fraction of the cost, traditionally required. This capability was highlighted in late May, when David Sandstrom, CMO of Swedish fintech company and early Gen AI adopter Klarna, declared that “AI is helping us become leaner, faster and more responsive to what our customers care about … we’re actually driving more marketing activity while saving tens of millions of dollars a year.”

Balancing AI with Authenticity

But where does that leave authenticity? How can brands tell an authentic story using a tool that has the word “artificial” in its very name? And what about the ongoing ethical concerns surrounding certain aspects of these tools?

Leaders can navigate these issues by being intentional and accountable, and by identifying specific use cases for generative AI in content creation. Clear policies and consistent human oversight make all the difference. Below are three key challenges posed by the use of Gen AI, along with strategies to address them effectively:

1. Deepfakes and Consumer Trust

Consumers don’t want to be fooled into mistaking artificial content for reality, whether it’s used in news or commercial contexts. According to a Getty Images report from April of this year, Building Trust in the Age of AI, 90% of consumers prefer transparency when an image is AI-generated. Marketing leaders who are open about their use of AI have seen positive responses from consumers. For instance, Coca-Cola’s Create Real Magic campaign openly embraced AI, inviting customers to create and be compensated for ads on a custom platform, resulting in significant global engagement and enhanced brand favorability.

Similarly, Nicki Minaj and her fans used generative AI to craft images of a fictional, pink “Gag City” to promote her album Pink Friday 2, a viral moment joined by global brands like Dunkin’. This transparent approach to AI is helping to normalize its use among Gen Z consumers in particular; Sprout Social’s 2024 Influencer Marketing Report found that only 35% of Gen Z consumers prioritize authenticity in the influencers they follow.

When marketers integrate generative AI without prior disclosure, it can backfire. In March, Lego faced backlash from fans of its Ninjago character after the company used generative AI to illustrate the character in an online quiz. Disgruntled users called out the brand on social media, and the Ninjago creator himself posted, “This is just lousy in all aspects. There are actual guidelines against the use of AI like this at LEGO so this is completely unacceptable.” Lego apologized and removed the offending content.

To facilitate more transparency, groups like the Content Authenticity Initiative are bringing together tech and media companies, academics, and non-profits to establish a global standard for disclosing the use of AI. Apple is also leading by example: the company has reported that its Image Playground app, part of the “Apple Intelligence” tools revealed on June 10, will include metadata that clearly identifies any image created with its generative AI. Until these standards are widely adopted, corporate leaders need to set clear policies for transparency with their consumers, using communications or identifiers that reinforce trust rather than muddy reality.

2. Copyright Infringement

It’s been widely acknowledged that major LLMs have harvested enormous amounts of data from across the internet to train their generative AI tools. While these training inputs are often described as “publicly available,” they are not necessarily free from copyright protection. This practice has the industry heading toward a “legal iceberg,” according to The Wall Street Journal. Several high-profile lawsuits are compelling companies like OpenAI to pursue large content licensing deals with news outlets such as NewsCorp, The Atlantic, and Vox Media. These legal challenges will take time to play out, but in the meantime, brand marketers must avoid creating marketing materials that include trademarked characters or clearly recognizable derivative content.

To get ahead of this, leaders must clearly understand the training data used by AI tools before sanctioning them for their organization. For generating visual content, tools such as Generative AI by Getty Images, Adobe’s Firefly, or Shutterstock’s just-announced ImageAI are commercially safe options whose foundational models have been trained on content that has been cleared and vetted. It’s critical to involve legal teams in these decisions and to review the level of indemnification each tool offers its users, as copyright infringement exposes brands to both legal and reputational risk.

As brands develop custom versions of generative AI tools by integrating their own digital assets into the training sets, it’s imperative to verify the provenance and ownership of these assets beforehand. This will enable marketers to create on-brand content more efficiently without adding legal risk.

3. Bias in AI Outputs

Generative AI, more accurately described as regenerative AI, is trained on massive amounts of existing content to produce outputs that mimic the data it has been fed. In the process, these tools often reinforce and perpetuate biases and stereotypes embedded in our visual landscape. A March 2024 study from Cornell University analyzed imagery generated by Midjourney, OpenAI’s DALL-E, and Stable Diffusion, finding that these tools “inadvertently perpetuate and intensify societal biases related to gender, race, and emotional portrayals in their imagery.” In particular, the AI images tend to exclude women and people of color from depictions of occupational scenes, suggesting a level of inequity that is worse than what’s currently reflected in labor statistics. However, the scope of inherent bias extends well beyond a single cohort or context.

It will require time, advanced data-science techniques, and a nuanced understanding of cultural forces to address the biases in these tools. Here again, brand marketers don’t need to wait. They can address this issue right away through careful use of prompts and review of the output.

A “prompt” in this context refers to the words or phrases submitted to a text-to-image generator to describe the desired content. Gen AI users can combat bias by including phrases that call for diversity in age, body type, and socioeconomic representation in their output, ensuring these elements appear in the proper context. This approach isn’t a quick, one-size-fits-all solution. When Google released its Gemini Gen AI tool in late February, users were quickly taken aback when the tool produced cartoonish, historically inaccurate, mixed-race depictions of America’s Founding Fathers, Nazi officers, and others. Presumably this was the result of “diversity” being auto-populated into prompts regardless of context.
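To make this concrete, one way a marketing team might operationalize prompt discipline is to maintain a vetted library of representation descriptors and compose them into prompts deliberately, rather than relying on a model’s defaults. The sketch below is hypothetical and vendor-neutral: the descriptor list, function name, and style notes are illustrative assumptions, not any specific Gen AI tool’s API.

```python
import random

# Hypothetical example: composing text-to-image prompts that make
# representation explicit instead of leaving it to the model's defaults.
# The descriptor list below is illustrative, not a vetted style guide.
REPRESENTATION_DESCRIPTORS = [
    "a diverse team of colleagues of different ages and ethnicities",
    "professionals with a range of body types",
    "a workplace reflecting varied socioeconomic backgrounds",
]

def build_prompt(scene: str, descriptor: str,
                 brand_style: str = "natural lighting, candid") -> str:
    """Combine a scene description, an explicit representation
    descriptor, and brand style notes into a single prompt string."""
    return f"{scene}, featuring {descriptor}, {brand_style}"

# Pick a descriptor (here at random; in practice a team would choose
# one appropriate to the scene's context) and build the final prompt.
prompt = build_prompt(
    scene="engineers reviewing plans on a construction site",
    descriptor=random.choice(REPRESENTATION_DESCRIPTORS),
)
print(prompt)
```

A template like this keeps representation choices visible and reviewable: the descriptor library becomes a policy artifact that diverse teams can audit and update, rather than an invisible default buried in each individual prompt.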

As leaders develop prompt policies for their own organizations, they should carefully consider their company’s specific needs and values, potentially with custom sets of phrases and guidelines for each Gen AI tool used. Many companies offer training for prompt engineering, from Coursera to Microsoft. The Dove brand, while committing to not using generative imagery in its own marketing, has released a Real Beauty Prompt Playbook for brands that are using these tools, in an effort to “set new industry standards of digital representation.”

As with all aspects of business, it’s important for leaders to involve diverse teams in integrating generative AI into their content creation. These teams should participate from the outset, contributing to the development of campaigns and prompts, and remain engaged downstream to review and spot-check outputs for any inadvertent issues.

Embracing the Future

Generative AI offers powerful opportunities and efficiencies for brands and marketing, and its use will only accelerate in the coming years. Leaders who guide their teams in recognizing the challenges, setting clear policies, and embracing intentionality will thrive in this transformative moment. The key is to blend the innovative capabilities of AI with the irreplaceable value of human insight, ensuring that your brand tells authentic stories that resonate with your audience.

C200 member Katie Calhoun is a strategic advisor and public speaker who consults on sales, branding, and content, with a focus on visual storytelling. She previously served as Vice President of Sales and Regional Marketing for the Americas at Getty Images, where she led a 100+ team and advised the world’s top corporate marketers, media & tech firms, and creative agencies on driving business results through content. She was part of the team that successfully took Getty Images public in 2022 and played a key role in launching its generative AI tool in 2023.
