AI Part 2: Using LLMs Before the Mediation
LLMs are prediction engines. Our job is to design prompts that elicit useful predictions. If you don’t like what the engine has predicted in response to a prompt, that’s your fault, not the engine’s.
Effective prompting requires tinkering. Or more formally, “iterations.” Output disappointing? Identify what’s lacking, adjust your prompt, try again. Different models produce different results from identical prompts—though they're converging, each has its own flavor. Test the major models (Gemini, Claude, ChatGPT) and find which matches your needs.
The Key: To most effectively use LLMs, you must understand the output you want. I’ve found them most effective as a thinking partner, not a content creator.
My Impressions
Claude delivers strong edits but weak summaries. ChatGPT leaves fingerprints ("would you like another em dash?", an odd, AI-flavored folksiness) and displays edits inconsistently, but handles editing decently overall. It has also been the best for random brainstorms; I've found it more useful for everyday consumer tasks than for editing. Gemini summarizes aggressively but can miss the plot. I haven't yet found the right prompt to make it a good editor.
For each, detailed prompts produce better output, to a point. It’s a balance. As your prompts grow longer, ambiguity creeps in. And if you give an LLM an ambiguous prompt, you’re going to get an ambiguous result.
LLMs are “tasky”: describe a specific output with the right level of detail, get that output. But slight prompt variations or overly long prompts produce… interesting results.
As writing aids, I'm still deciding whether LLMs are a feature or a product. They let me draft loosely and then "fix it in post." That approach has made ideas from Gary Kinder's "Three Steps and Three Rules" easier to apply. But that's not a revolution; it's a better spellchecker.
I don't use LLMs for research. I experimented, got wrong answers or results no better than my Boolean searches, and haven't experimented since. LLMs might work as a new type of finding source, provided they give you exact citations and you read the cases yourself. But we've all read cases where judges stretch citations beyond their holdings. Judges can do that; it's their job to move the law forward. LLMs shouldn't.
Prompting from 10,000 feet
Using consumer interfaces (not APIs), you’re stuck with default settings. I don’t know how to set sampling controls or other settings without API access. Advanced users might.
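For the curious, the settings hidden by consumer chat interfaces look roughly like this. This is a sketch only; the parameter names follow OpenAI's Chat Completions API (other vendors expose similar knobs under different names), and the model name is a placeholder:

```python
# Sketch of the sampling controls consumer chat UIs hide from you.
# Parameter names follow OpenAI's Chat Completions API; the model name
# and values here are illustrative, not recommendations.

def build_request(prompt, temperature=0.2, top_p=1.0, max_tokens=1024):
    """Assemble an API request payload with explicit sampling settings.

    temperature: lower = more deterministic; higher = more varied output.
    top_p: "nucleus sampling" -- limits choices to the most probable tokens.
    """
    return {
        "model": "gpt-4o",           # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # consumer UIs pick this for you
        "top_p": top_p,
        "max_tokens": max_tokens,
    }

payload = build_request("Edit this paragraph for concision.", temperature=0.0)
```

The point is simply that API users can dial randomness up or down; the rest of us get whatever default the vendor chose.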
Using the consumer interface, I’ve found prompts more effective if they follow this general structure:
· Role. Define “who” the LLM is. What is its background? What’s its knowledge base?
· Task. Tell it what to do: edit, summarize, draft, critique, code, etc.
· Context. Provide background relevant to the task.
· Reasoning. Ask for specific reasoning steps if relevant.
· Examples. Show the LLM what a “good” response looks like. Especially useful for specific formats.
· Output. Describe exactly what you want: redline edits, a summary paragraph, a formatted memo, a code language, etc.
· Stop Condition. Tell it when to stop.
This framework came from an OpenAI guide that I can’t seem to find. For deep dives:
· Gemini
· Claude
· OpenAI
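If it helps to see the framework as a template, here is a minimal sketch. The section names mirror the list above; the sample sections are invented for illustration:

```python
# Sketch: assembling a prompt from the seven-part framework above.
# Section names mirror the framework; the sample text is illustrative.

FRAMEWORK = ["Role", "Task", "Context", "Reasoning", "Examples",
             "Output", "Stop Condition"]

def build_prompt(parts):
    """Join the supplied sections in framework order, skipping blanks."""
    lines = []
    for section in FRAMEWORK:
        text = parts.get(section, "").strip()
        if text:
            lines.append(section + ": " + text)
    return "\n\n".join(lines)

prompt = build_prompt({
    "Role": "You are a legal brief editor in the style of Bryan Garner.",
    "Task": "Edit the text below for clarity and concision.",
    "Output": "Return a redline. Do not create or edit citations.",
    "Stop Condition": "Stop once the draft is 500 words shorter.",
})
```

Notice the template skips sections you leave blank; not every prompt needs all seven parts, but keeping the order consistent makes iteration easier.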
Samples
Editing a Legal Brief: Need to cut 500 words? Try:
Act as a legal brief editor, in the style of "Legal Writing in Plain English" by Bryan Garner, and "Making Your Case: The Art of Persuading Judges" by Bryan Garner and Former Supreme Court Justice Antonin Scalia. [If you don’t know or don’t like Garner’s style, DO NOT USE THIS PROMPT – Try Strunk & White, “The Elements of Style” – or your preferred style guide].
Please review and revise the text below for clarity, conciseness, and persuasiveness. Prioritize [what you want to prioritize].
The text is from [MSJ in state court, demurrer, opposition to same, an appellate brief to the Ninth Circuit, etc.] addressing [specific legal topic]. [Provide procedural background and the form of the motion].
Return the results as:
1. A redlined section with all changes marked as tracked changes from the original draft. DO NOT CREATE OR EDIT CITATIONS.
2. Stop when at least 500 words shorter. All edits must be marked in redline. [Remove this condition if you want edits regardless of words cut].
The section to edit is below/attached:
Summarizing: Need a TL;DR for your blog post? I use:
Act as if you were summarizing in the style of [list publications you find compelling/well written] or for authors like [list authors you think write well/are persuasive].
Review this final blog post. Do not edit. Instead, create a TL;DR version for posting on LinkedIn.
Ensure the TL;DR keeps my voice and authors in mind.
[Provide context. Example: These posts are part of my MondayxMorelloMediation series, designed as marketing content for my Mediation Firm.]
Optimize for clarity, conciseness, and practical value, without losing my voice.
Return the results as:
1. A TL;DR version to post on LinkedIn.
The long form blog post follows below:
With some tinkering, or "iteration," both prompts can be repurposed for memos, reports, briefs, or client updates.
Using LLMs Before the Mediation
With effective prompting in mind, before the mediation:
CONFIDENTIALITY WARNING
Once you hit enter, you no longer own it. Prompts aren't confidential. Fine for public brief sections. Not fine for case-specific details. No names, parties, or identifying information. Keep it generic.
Understand LLM Valuations. Forget everything you've learned and mimic how a client might prompt an LLM. Then use the prompt framework above and have the LLM evaluate your case (remember to have the evaluating LLM play different roles). Compare results.
Take the results with a grain of salt, but they’ll help you anticipate how your client, or the other side, might view the case.
You'll be prepared to explain: LLMs are prediction engines, not fact machines. Their training data doesn't represent most settlements. You'll have counter-examples ready when discussing LLM results.
Edit Your Briefs (Please). Feed smaller sections of your brief into an LLM for tone and clarity. This limits confidentiality risks and improves precision. Let the model critique a section or paragraph, not the whole draft. And use it for editing, not initial drafting.
Feeling creative? Ask it to act as a neutral and critique your brief from the mediator’s perspective. Take any feedback more as entertainment than insight. But you might find a fresh angle.
General Preparation. Ask the LLM for:
· Five questions the mediator might ask
· Five questions your client might ask
· A plain-English translation of your complex legal theories
As in-house counsel, I often translated wage and hour law for business teams. Use the LLM to do this translation before the mediation and make in-house counsel's job easier.
For plaintiffs: generate common-sense talking points so clients feel prepared.
Not Research. If you're in unfamiliar territory, start with traditional finding sources. Read key cases yourself. Don't use LLMs to draft legal sections; you want an accurate statement of the law, not a prediction of what you want the law to be.
Not for initial drafting. You'll sound like an LLM. AI-drafted text has tells; something feels off. As ChatGPT itself suggested, "There's a certain uncanny rhythm to AI writing." And I love a good em dash, but if you use an LLM to write for you, you'll have more em dashes littered through your draft than you ever thought possible.
Next week: The Mediation Brief.
Until then,
Monday x Morello Mediation.
None of this is legal advice. Your mileage may vary.
P.S. - After posting note - OpenAI has specifically disclaimed any “legal advice” that ChatGPT may provide. Through their Usage Policies posted on October 29, 2025, OpenAI now prohibits “automation of high-stakes decisions in sensitive areas without human review … legal…” It is unclear what this means in practice. As of this post, ChatGPT still claimed it could “dig deeper and search for more nuanced or lesser-known unpublished trial/administrative decisions.” It sounds like the new usage policy is more about avoiding liability than changing functionality.
AI: Your Clients are Using AI, You Need to Understand It
I’m not sure who needs to hear this, but you’re not talking to an LLM. You prompt it; it predicts. Like DOS or Windows command prompts – input a command, receive a response. Always.
The difference: to us, natural language is ambiguous. You can’t create unambiguous prompts; LLMs can’t produce unambiguous responses.
What follows is a simplified explanation of how LLMs work – and why that matters for legal practice.
The Transformer Model
Current LLMs like ChatGPT use transformer models. This model works by transforming your ambiguous natural language prompt into definite tokens. Think of tokens as 1s and 0s representing small chunks of your prompt. Those chunks may be full words, word parts, or character combinations. The tokens aren’t ambiguous, and a response must be given.
We can skip the deep math – vectors, attention blocks, multilayer perceptron blocks, feedback loops, additional layers. What matters: LLMs take your prompt, transform it into definite tokens, and predict responsive output tokens.
Tokens don't have intrinsic meaning. Tokens aren't thoughts. They're predictions.
The process: you enter a prompt with words to which you ascribe meaning → the LLM transforms those words into definite tokens → the LLM probabilistically predicts responsive tokens based on its training data → those tokens are transformed back into words → you ascribe meaning to those words.
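The words-to-tokens step can be illustrated with a toy tokenizer. Real models use learned subword vocabularies (byte-pair encoding and the like), not whole words; this word-level version only shows the principle that ambiguous language becomes definite integers:

```python
# Toy illustration of the words -> definite tokens step. Real LLMs use
# learned subword vocabularies (e.g., byte-pair encoding); this word-level
# tokenizer exists only to show the idea.

def build_vocab(corpus):
    """Assign each unique word a fixed integer id."""
    vocab = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    """Words in, unambiguous integers out -- the model only sees the ids."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

vocab = build_vocab(["the motion is granted", "the motion is denied"])
tokens = encode("the motion is granted", vocab)
print(tokens)  # [0, 1, 2, 3] -- definite ids, no ambiguity left
```

Whatever nuance you intended, the model receives only those ids; the "meaning" returns when you read the words the output tokens decode into.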
Our current LLMs don’t “know” what they are “saying.” They aren’t saying anything. They’re generating predictive tokens that become words that you give meaning.
How Humans Use Language
Humans use language to express thought. We accept that words are ambiguous (my “red” may be your “maroon”), but with craft and care, we express ideas. Good writers show rather than tell, transporting, convincing, and moving readers with ambiguous words.
For humans, thought precedes language.
We don’t predict probabilistically responsive statements. In conversation, we might engage, listen, or respond… or not. Silence can be the most powerful response.
We Don’t Know How It Was Trained
As investment flooded into LLMs, training data became proprietary secrets. And LLMs seemingly improved with more data.
The problem: today’s internet is garbage. LLMs are gorging on Reddit posts, tweets (not calling it X), and AI-generated slop. Of that set, Reddit posts may be the best data. Think on that.
National Literacy Institute data (2024):
· 21% of US adults are illiterate.
· 54% of adults read below a 6th-grade level.
Meanwhile:
· Over 93% of US households have internet.
· 96% of adults use the internet.
· Over 50% of internet content is AI-generated.
Published works weren’t enough. LLMs needed more data. They got the internet.
What This Means For Legal Practice
Attorneys must consider what data LLMs have accessed during training. Most individual settlement amounts are confidential. PAGA and class settlements can be public but are often housed behind paywalls and disparate court filing systems. Steve Pearl has compiled excellent PAGA Judgment data.
But breathless "nuclear verdict" coverage is readily available. I've seen no databases tracking dismissals, defense verdicts, granted dispositive motions, or nominal settlements.
No Confidence Intervals
LLMs calculate predictions designed to elicit positive reactions. But they don't reveal confidence levels. They respond with certainty because they must respond.
Example: ChatGPT once offered me two responses and asked which was better. I preferred elements of both. The point: LLMs constantly generate multiple potential responses but typically show only one. With absolute confidence. You have no idea how “accurate” the LLM considers its own response.
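That "many candidates, one shown" behavior can be sketched with a toy next-token distribution. The candidates and scores below are invented; the softmax-then-sample pattern is the real mechanism:

```python
import math
import random

# Toy sketch: the model scores several candidate continuations, converts
# the scores to probabilities, and samples ONE to show you. The candidate
# words and raw scores here are invented for illustration.

def softmax(scores, temperature=1.0):
    """Convert raw scores into probabilities; temperature reshapes them."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["settle", "litigate", "mediate"]
scores = [2.0, 0.5, 1.5]   # invented model scores
probs = softmax(scores)

# The user sees exactly one answer, sampled from this distribution.
# The probabilities -- the model's internal "confidence" -- are never shown.
random.seed(0)
shown = random.choices(candidates, weights=probs, k=1)[0]
```

The unshown probabilities are the confidence interval you never get: the output reads the same whether the winning candidate had 90% of the weight or 34%.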
And yet…
Clients use LLMs to value their cases. They input facts, likely skewed to their side, and request valuations. This happens on both sides. And the same case likely receives vastly different valuations. As advocates, you must be prepared to address what the LLM “told” your client.
Next week: how to use LLMs effectively and how to discuss LLM predictions with clients.
Until then,
Monday x Morello Mediation.
None of this is legal advice. Your mileage may vary.