
Demystifying Generative AI for the Modern Juror

30.09.25 | Co-authored by Devon Madon, PhD, AI Expert Witness

We are standing at the brink of a legal era that may define how generative artificial intelligence will be governed, and today’s landmark cases could set AI rules for decades to come. Creators have mounted a multi-front legal challenge to the core practices of AI development. In the currently unfolding multidistrict litigation, In re: OpenAI, Inc. Copyright Infringement Litigation, for instance, the plaintiffs allege that millions of copyrighted articles were used to train AI models capable of reproducing protected work verbatim, effectively usurping original journalism. Similarly, Getty Images v. Stability AI, which was refiled in August in the US District Court for the Northern District of California, targets the use of proprietary photographs in model training.1

These lawsuits share a central obstacle: explaining the inner workings of large language models (LLMs) and other neural networks to non-technical audiences. It can be difficult to break down neural network architecture, training methodologies, and output generation into concepts accessible to judges and juries.2

For now, the pivotal question of whether ingesting protected data for training constitutes “fair use” remains unresolved. Successful outcomes in these cases will hinge, in part, on the litigator’s ability to clearly present AI concepts through a persuasive narrative that connects with ordinary jurors. As AI disputes become more common, this skill will shift from an advantage to a baseline requirement for effective advocacy.

Understanding AI’s Technical Reality

Practitioners must first understand what makes these technologies inherently challenging to explain. The complexity lies not just in the sophistication of the algorithms, but in their fundamental differences from human reasoning.

Large language models grew out of decades of research in natural language processing and machine learning. Early models could manage only narrow, rule-bound tasks. Today’s systems—such as OpenAI’s GPT-5, Anthropic’s Claude Opus 4.1, Google’s Gemini, and Meta’s LLaMA—are trained on massive datasets to generate highly fluent, adaptable text across countless topics.

The leap from limited systems to modern LLMs has relied on machine learning (ML). Unlike human cognition, ML systems are trained on large datasets to derive patterns and decision rules automatically. The dominant architecture is the transformer neural network: a design inspired by the brain’s connections but entirely mathematical in implementation. Within this network, layers of interconnected digital “neurons” exchange information, and each connection is captured as a numerical “weight.” As the model trains, it adjusts billions of these weights, which collectively encode its learned patterns.
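To make the idea of numerical “weights” concrete, the following is a minimal sketch of a toy two-layer network in Python; the layer sizes, random values, and forward function are illustrative assumptions, not the parameters or code of any real model.

```python
# A minimal sketch of "neurons" and "weights": each layer is just a matrix
# of numbers, and the sizes here (48 weights total) are illustrative
# assumptions standing in for the billions in a production model.
import numpy as np

rng = np.random.default_rng(42)

W1 = rng.normal(size=(4, 8))   # connections from 4 inputs to 8 "neurons"
W2 = rng.normal(size=(8, 2))   # connections from 8 neurons to 2 outputs

def forward(x):
    hidden = np.maximum(0, x @ W1)  # each neuron: a weighted sum, then a cutoff
    return hidden @ W2              # the output is another weighted combination

print(forward(rng.normal(size=4)))  # pure arithmetic from start to finish
```

Training consists of nudging these numbers, over and over, until the arithmetic reproduces the patterns in the data; nothing in the process resembles conscious understanding.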

The transformer architecture was introduced in 2017 and rapidly adopted the following year; it now forms the backbone of modern language models.3 Its signature innovation, known as the “attention mechanism,” lets the model consider all the words in a passage in relation to each other simultaneously, rather than processing them in strict sequence. This preserves context over long passages, resolves ambiguity, and enables coherent responses to complex, multi-part prompts. For example, the model can tell that in the sentence, “The lawyer rested their case, then put their briefs back in their case,” the first “case” refers to a legal matter, while the second means a container for documents.
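For readers who want to see the mechanism itself, here is a minimal sketch of the scaled dot-product attention at the heart of the transformer, written in Python with NumPy; the toy word vectors are illustrative assumptions, and real models add learned projections and many parallel attention “heads.”

```python
# A minimal sketch of scaled dot-product attention (Vaswani et al., 2017):
# every word is compared against every other word simultaneously, so each
# word's representation absorbs context from the whole passage.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every word to every other word
    weights = softmax(scores)        # per word, attention weights sum to 1
    return weights @ V               # blend each word with its relevant context

# Toy example: 4 "words," each represented as a 3-dimensional vector.
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 3))
out = attention(words, words, words)  # self-attention: Q, K, V from the same words
print(out.shape)                      # (4, 3): each word now carries passage context
```

This simultaneity is what lets the model keep both senses of “case” in play until the surrounding words resolve each one.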

Understanding this process serves a critical legal purpose: preventing jurors from anthropomorphizing AI systems. Jurors must see these models as sophisticated pattern-matchers, not conscious entities. This distinction fundamentally shapes how juries evaluate core legal questions. If jurors view AI as conscious, they may incorrectly apply human standards of intent and knowledge—asking whether the AI “knew” it should not copy protected works or “chose” to infringe. These anthropomorphized judgments lead to verdicts based on moral intuitions about machine “behavior” rather than the actual legal standards governing unconscious tools. AI systems are tools that produce output based on patterns, no more aware than a calculator is of the sum it displays.

The cognitive difference is fundamental: where the human mind perceives Gestalt, the psychological principle that we understand wholes as greater than the sum of their parts, AI systems analyze only the discrete statistical elements that create those wholes. When a human sees a Van Gogh painting, a large language model sees the same painting as millions of individual data points—color values, patterns, compositional ratios—and statistical relationships, without comprehension of the unified artistic vision. It can mimic the style of an expert because it has been trained on expert work, but it has no real-world experience or judgment.
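As a minimal sketch of this “sum of parts” view, consider how a model actually receives an image: nothing but an array of numbers. The 2x2 toy “painting” below is an illustrative assumption standing in for millions of real pixels.

```python
# To a model, a painting is only numbers: color values arranged in a grid.
# This 2x2 toy image is an illustrative assumption, not a real dataset.
import numpy as np

painting = np.array([
    [[245, 221,  58], [ 31,  52, 130]],   # each pixel: red, green, blue values
    [[ 28,  47, 120], [240, 215,  65]],
])

print(painting.shape)   # (2, 2, 3): height x width x color channels
print(painting.mean())  # the model computes statistics, not meaning
```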

Ultimately, perceiving AI as pattern-matching technology helps juries focus on appropriate legal issues: whether the use constitutes fair use, whether substantial similarity exists, and whether the technology’s outputs compete with original works regardless of the system’s lack of awareness or intent.

The Cognitive Science of Juror Understanding

The fundamental challenge for attorneys in AI fair use cases is to ensure that jurors understand these sophisticated non-human processes without becoming overwhelmed, exhausted, and lost. The answer lies in understanding how the human brain processes complicated information.

Jurors’ interpretations of AI-generated content—including text, images, and music—hinge on how their minds take in, filter, and simplify technical data under the pressures and formalities of trial. Even the clearest explanation can be undermined if it overwhelms jurors’ working memory or triggers familiar, but inaccurate, mental shortcuts about technology. That is where cognitive science comes in.4

In AI litigation, cognitive overload manifests as jurors reverting to familiar lines of thinking: the omniscient computer that cannot err, or the “black box” conspiracy in which harmful algorithms are deliberately hidden. Machine learning models become conflated with simple rule-based software, and sophisticated pattern recognition is mistaken for human emotion and judgment.5

The so-called split-attention effect presents the most immediate threat to juror comprehension at trial. This occurs when a person is tasked with simultaneously processing multiple streams of information, like listening to testimony while viewing technical diagrams. Research shows that divided attention dramatically reduces comprehension and retention, particularly when complicated visual and auditory information requires integration.6

Effective communication depends on jurors’ belief in the speaker’s credibility and honesty. Jurors decide complex cases based on which attorneys and experts they trust. Being a good, likeable teacher is paramount in AI litigation, where the technology can seem abstract and removed from everyday experience. Communication strategies must also reduce the jury’s cognitive load to support effective learning and be applied consistently across all phases of trial, while preserving flexibility.

To teach AI principles through a consistent narrative, legal teams should consider how humans naturally process, store, and retrieve information. Tried-and-true trial strategies include: (1) simplifying complex information into manageable parts, a technique known as “chunking,” and (2) using “narrative frameworks,” i.e., storytelling. Both are reliable tools for effective communication in AI litigation, designed to help the listener’s brain absorb new information.

Applying Chunking and Narrative Frameworks in AI Cases

An effective approach for teaching jurors about AI involves breaking complex information into chunks and weaving these chunks together with a metaphorical narrative thread—leveraging cognitive learning to incrementally build understanding. This framework can flow across all trial phases while maintaining the flexibility essential for courtroom realities. Rather than overwhelming jurors with technical explanations, providing a relatable foundation gives them the tools to make informed judgments.

In preparing for trial, emphasize consistent use of the chosen metaphorical framework across witnesses while avoiding over-scripting. Witnesses should practice explaining AI concepts using established analogies, rehearse transitions between information chunks without introducing competing metaphors, and develop fallback explanations for unexpected questions.

Also, thoughtfully design trial graphics to prevent split-attention effects. Coordinate the visual elements with spoken explanations so that what jurors are hearing reinforces what they are seeing, rather than dividing their attention. Use a maximum of three elements per slide, progressive disclosure of information, and consistent support for the chosen analogy throughout.

Consider a hypothetical AI copyright case in which attorneys for the defense are tasked with persuading jurors that their client’s AI tool did not violate copyright law. To effectively educate jurors about how the LLM works, attorneys might utilize a three-phase chunking approach that employs the consistent narrative metaphor of an art student studying masterworks in a museum.

At trial, attorneys can introduce the art student metaphor during the opening statement and preview how technical evidence will unfold in distinct phases. This prepares jurors cognitively for the structured information to come. For direct examination, attorneys should frame witness testimony, where appropriate, within the established narrative, checking jurors’ understanding of how the subject matter relates to each phase. During cross-examination, they can highlight inconsistencies to draw jurors back to the message of the overall narrative.

As illustrated below, experts might also chunk information into three discrete phases to explain how AI tools first receive data, then process it, and then generate original outputs. The art student metaphor can be woven throughout to enhance juror understanding.

Stage One: The Input Phase

First, expert testimony should establish how AI systems acquire knowledge by consuming training data. The expert might say something like, “Imagine a young artist walking through an art museum studying masterworks.” From the start, the analogy grounds abstract AI training in concrete human experience, setting up the contrast between the human’s “greater whole” view and the machine’s “sum of parts” output.

Witnesses could then explain how the AI system analyzed millions of images over the course of months. However, like a visitor to a museum, it cannot distinguish among the types of images it encounters; it merely takes in the images as they are. Testimony at this stage should focus on data ingestion, using visual aids that pair uncluttered museum scenes with representations of the data’s scale.

Stage Two: The Processing Phase

Once the learning foundation is established, expert testimony should advance to describing AI’s pattern recognition phase, explaining: “After studying thousands of masterworks, our art student begins to recognize artistic principles that connect some artists and works of art—how Renaissance painters use light, how Impressionists use brushstrokes.”

Here, the metaphorical museum divides into “wings,” organizing paintings thematically by style, with special exhibits devoted to individual artists. Expert witnesses might demonstrate the mathematical transformation through visual aids, showing how specific images become abstract numerical patterns: “This is how AI learns ‘Renaissance-style’ or ‘portrait composition,’ much as our art student starts to classify masterworks into categories and can create a work in the ‘style of Monet’ without creating an exact copy of a Monet painting.”

Attorneys should train witnesses to monitor juror comprehension and return to the art student analogy rather than introducing new explanations.

Witnesses should look for cognitive overload signs—glazed expressions, limited note-taking, fidgeting, etc.—and adapt to meet juror needs. And they should be consistent in the use of the analogy, perhaps saying something like, “Though an art student may recognize brush techniques in different paintings, she’s learned principles, not memorized specific works.”

Stage Three: The Generation Phase

Finally, expert witnesses might describe the final stage of AI processing, that is, the AI output itself: “Our art student, having absorbed artistic principles from master painters, sits before a blank canvas to create her own work, applying the techniques she learned but creating something entirely new.” Attorneys must prepare witnesses to demonstrate how identical prompts can generate different outputs, emphasizing how an art student may paint different landscapes using the same learned principles. They might conclude with something like, “Just as an art student may paint three different landscapes using the same compositional principles they learned, AI creates unique works from its understanding of learned patterns.”
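A minimal sketch can make this variability concrete: generation samples from a probability distribution over possible next words, so the same prompt can yield different completions. The vocabulary and probabilities below are illustrative assumptions, not any model’s actual values.

```python
# A minimal sketch of sampled generation: the model supplies probabilities,
# and the output is drawn at random, so identical prompts can diverge.
import numpy as np

rng = np.random.default_rng()  # deliberately unseeded: results vary run to run

vocab = ["meadow", "harbor", "orchard"]   # illustrative stand-in vocabulary
probs = [0.5, 0.3, 0.2]                   # learned preferences, not a script

for _ in range(3):  # the same "prompt," sampled three times
    print("The student paints a", rng.choice(vocab, p=probs))
```

Like the art student before a blank canvas, the system applies the same learned preferences each time, yet no two runs need produce the same result.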

Conclusion: The Path Forward

AI is an evolving technology that calls for an innovative approach to communication. The science of how people process complex information provides valuable tools for making AI concepts accessible, but these tools must be applied with careful attention to courtroom dynamics and professional boundaries. The art student analogy demonstrates how a narrative framework, broken into chunks and grounded in cognitive science, can effectively speak to jurors. This approach enables juries to engage with technical evidence rather than defaulting to oversimplified or misguided heuristics.

Practitioners, however, must proceed with appropriate caution. A rigid style will fail when confronted with courtroom realities. Most critically, the boundaries between education and persuasion require constant vigilance. Techniques that facilitate understanding serve justice, while those that manipulate decision-making undermine it.*

Professional standards and ethical obligations in this emerging area demand careful consideration in every case. Whether addressing fair use, substantial similarity, or willful infringement, juries will need a sufficient understanding of how AI works to make informed decisions within existing legal frameworks. The goal is to help them understand evidence well enough to follow jury instructions and apply established legal principles correctly.

As case law unfurls in this emerging field, the message is clear: master the science of jury cognition, or risk ceding technological complexity to narratives created in the minds of your jurors.

*Note: Attorneys guide education in the courtroom through the presentation of evidence and witness testimony. In this context, the statement regarding education is directed to the attorney, with the caution that the suggested methodology is not intended to manipulate.

This article was originally published by Law360; republished with permission.

About the Authors

IMS Legal Strategies Jury Consultant Elizabeth (Liz) Babbitt, MA, has extensive experience in providing clients with data-driven jury research services, case strategies, and jury selection advisory in high-stakes litigation. She is passionate about the importance of AI as a transformational technology for litigators and law practices.

Devon Madon, PhD, is a career educator and pioneer at the forefront of AI language model technology development. She works with a Fortune Tech 10 company as an AI content analyst and prompt engineer specializing in research, AI, machine learning, generative text models, and natural language processing. Dr. Madon also serves as an expert witness specializing in AI.

References

1 The New York Times Co. v. OpenAI and Microsoft, Complaint (Dec. 2023), available at https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf.

Getty Images (US), Inc. v. Stability AI, Inc., Complaint (with jury demand) (Feb. 3, 2023), Case No. 1:23-cv-00135 (D. Del.), available at https://docs.justia.com/cases/federal/district-courts/delaware/dedce/1:2023cv00135/81407/1.

2 As Kevin T. Frazier observes, increasingly complex AI litigation “may challenge even the most learned AI researcher,” suggesting that we instead use “juries of experts and professional peers” in such trials. Kevin T. Frazier, Use Specialized Juries in AI Litigation, The Regulatory Review (Nov. 13, 2023), theregreview.org.

3 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1706.03762.

4 Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.

5 Greenstein, S. (2022). Preserving the rule of law in the era of artificial intelligence (AI). Artificial Intelligence and Law, 30, 291–323. https://doi.org/10.1007/s10506-021-09294-4.

6 Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332.

