
Deepfakes and False Advertising: Measuring Consumer Impact

07.08.24

As artificial intelligence (AI) and machine learning (ML) technologies increasingly generate deepfakes for marketing and advertising, affected individuals may seek legal remedies under false representation and right of publicity laws. How might litigants or regulators use consumer surveys to evaluate deception and protect consumers?

What Are Deepfakes?

Deepfakes are video or audio recordings that appear authentic but are, in fact, created by AI trained on human likenesses or voices. One well-known example is the use of David Beckham’s voice in the Malaria Must Die global campaign. With Beckham’s consent, advertisers synthesized his voice to deliver anti-malaria messages in nine languages.

While that use was authorized, the same technology can be misused to produce false content, in which public figures appear to say things they never said or endorse causes they do not support. Deepfakes have also been used to create AI-generated music imitating famous artists, without their permission or participation.

These developments raise serious questions: What threats do deepfakes pose to consumers and celebrities, and how can consumer survey evidence help regulators and courts respond appropriately?

Regulating Generative AI Content

The European Union has enacted the world’s first comprehensive AI Act, which classifies artificial intelligence systems by risk level and regulates their permissible uses. The United States currently lacks comparable federal legislation, leaving developers, users, and others subject to a patchwork of existing laws. Two areas of intellectual property law offer potential avenues for regulation:

False Advertising (Lanham Act): A deepfake could constitute false or misleading advertising if it makes a person appear to endorse a product that they did not actually endorse.

Right of Publicity: State laws protect individuals’ rights to control the commercial use of their name, likeness, voice, signature, or photograph.

As deepfakes increasingly blur the boundaries between reality and simulation, their use in advertising and competitive marketing may become a focal point of false advertising and right of publicity litigation. In such cases, consumer surveys can play a vital role in measuring deception and confusion.

Consumer Surveys and AI

Regulatory and civil disputes involving deepfakes often hinge on whether consumers were misled or confused. Well-designed consumer surveys can provide empirical evidence on how audiences perceive AI-generated content. A false advertising survey can measure whether viewers distinguish between genuine and AI-generated content. A materiality survey can determine whether consumers’ purchasing or voting behavior was influenced by their belief in the authenticity of a deepfake.

In cases involving AI-generated music, surveys can measure whether listeners mistakenly attribute authorship to a real artist or believe that a celebrity approved or participated in the work. Similarly, for deepfake videos of public figures, surveys could evaluate whether consumers perceive the clips as genuine and whether that perception altered their opinions or decisions.

Survey Evidence in Emerging AI Litigation

In the absence of comprehensive federal regulation, consumer survey evidence can provide a mechanism for assessing the impact of AI-generated deception. Surveys can help courts and regulators determine whether deepfakes mislead consumers, dilute reputations, or distort public opinion.

IMS Legal Strategies designs and conducts consumer surveys in complex advertising, intellectual property, and emerging technology cases. If you require survey evidence for an AI-related false advertising or right of publicity matter, contact IMS.