AI: Your Clients Are Using AI; You Need to Understand It

I’m not sure who needs to hear this, but you’re not talking to an LLM. You prompt it; it predicts. Like DOS or Windows command prompts – input a command, receive a response. Always.

The difference: to us, natural language is ambiguous. You can’t create unambiguous prompts; LLMs can’t produce unambiguous responses.

What follows is a simplified explanation of how LLMs work – and why that matters for legal practice.

The Transformer Model

Current LLMs like ChatGPT use transformer models. A transformer works by converting your ambiguous natural language prompt into definite tokens. Think of tokens as numeric IDs representing small chunks of your prompt. Those chunks may be full words, word parts, or character combinations. The tokens aren’t ambiguous, and a response must be given.

We can skip the deep math – vectors, attention blocks, multi-layer perceptron blocks, feedback loops, additional layers. What matters: LLMs take your prompt, transform it into definite tokens, and predict responsive output tokens.

Tokens don’t have intrinsic meaning. Tokens aren’t thoughts. They’re predictions.

The process: You enter a prompt with words to which you ascribe meaning => LLM transforms those words into definite tokens => LLM predicts probabilistically responsive tokens based on training data => tokens transformed into words => you ascribe meaning to those words.
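The round trip above can be sketched in a few lines of Python. This is a toy illustration only – real tokenizers use techniques like byte-pair encoding, and this hypothetical word-level vocabulary is invented for the example – but it shows the key point: tokens are definite integer IDs, and meaning enters only when a human reads the decoded words.

```python
# Toy tokenizer: words map to unambiguous integer IDs.
# This vocabulary is hypothetical; real LLM tokenizers are far larger
# and split text into sub-word pieces rather than whole words.
vocab = {"the": 0, "case": 1, "settled": 2, "for": 3, "a": 4, "fair": 5, "amount": 6}
inverse = {i: w for w, i in vocab.items()}

def encode(text):
    """Transform ambiguous words into definite token IDs."""
    return [vocab[word] for word in text.lower().split()]

def decode(token_ids):
    """Transform token IDs back into words; the reader supplies the meaning."""
    return " ".join(inverse[i] for i in token_ids)

tokens = encode("The case settled")
print(tokens)          # [0, 1, 2] – definite integers, nothing ambiguous
print(decode(tokens))  # "the case settled" – words you ascribe meaning to
```

Notice that the model’s side of the exchange is pure arithmetic on integers; at no point does anything in the pipeline “understand” the sentence.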

Our current LLMs don’t “know” what they are “saying.” They aren’t saying anything. They’re generating predictive tokens that become words that you give meaning.

How Humans Use Language

Humans use language to express thought. We accept that words are ambiguous (my “red” may be your “maroon”), but with craft and care, we express ideas. Good writers show rather than tell, transporting, convincing, and moving readers with ambiguous words.

For humans, thought precedes language.

We don’t predict probabilistically responsive statements. In conversation, we might engage, listen, or respond… or not. Silence can be the most powerful response.

We Don’t Know How It Was Trained

As investment flooded into LLMs, training data became proprietary secrets. And LLMs seemingly improved with more data.

The problem: today’s internet is garbage. LLMs are gorging on Reddit posts, tweets (not calling it X), and AI-generated slop. Of that set, Reddit posts may be the best data. Think on that.

National Literacy Institute data (2024):

- 21% of US adults are illiterate.
- 54% of US adults read below a 6th-grade level.

Meanwhile:

- Over 93% of US households have internet access.
- 96% of adults use the internet.
- Over 50% of internet content is AI-generated.

Published works weren’t enough. LLMs needed more data. They got the internet.

What This Means For Legal Practice

Attorneys must consider what data LLMs have accessed during training. Most individual settlement amounts are confidential. PAGA and class settlements can be public but are often housed behind paywalls and disparate court filing systems. Steve Pearl has compiled excellent PAGA judgment data.

But breathless “nuclear verdicts” coverage is readily available. I’ve seen no databases tracking dismissals, defense verdicts, granted dispositive motions, or nominal settlements.

No Confidence Intervals

LLMs calculate predictions designed to elicit positive reactions. But they don’t reveal confidence levels. They respond with certainty because they must respond.

Example: ChatGPT once offered me two responses and asked which was better. I preferred elements of both. The point: LLMs constantly generate multiple potential responses but typically show only one. With absolute confidence. You have no idea how “accurate” the LLM considers its own response.
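That “pick one of several candidates” behavior can be sketched with a few lines of Python. This is a simplified illustration, not how any particular product works: the candidate responses and their probabilities here are invented, and real models sample token by token rather than whole answers. The point is that a distribution exists internally, only one draw is shown, and the probabilities never reach the user.

```python
import random

# Hypothetical internal distribution over candidate responses.
# The model "knows" these weights; the user sees only one winner,
# presented with total confidence.
candidates = {"settle": 0.40, "proceed": 0.35, "dismiss": 0.25}

def sample_response(dist, seed=None):
    """Draw one candidate in proportion to its probability, roughly how
    temperature sampling picks among likely next tokens. The weights
    themselves are discarded before the answer is displayed."""
    rng = random.Random(seed)
    options, weights = zip(*dist.items())
    return rng.choices(options, weights=weights, k=1)[0]

# Two runs over the same prompt can surface different "confident" answers.
print(sample_response(candidates, seed=1))
print(sample_response(candidates, seed=2))
```

Run it a few times without a seed and the answer shifts, even though nothing about the underlying case changed – which is one reason the same facts can yield different outputs on different days.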

And yet…

Clients use LLMs to value their cases. They input facts, likely skewed to their side, and request valuations. This happens on both sides. And the same case likely receives vastly different valuations. As advocates, you must be prepared to address what the LLM “told” your client.

Next week: how to use LLMs effectively and how to discuss LLM predictions with clients.

Until then,

Monday x Morello Mediation.

None of this is legal advice. Your mileage may vary.
