AI Part 2: Using LLMs Before the Mediation
LLMs are prediction engines. Our job is to design prompts that elicit useful predictions. If you don’t like what the engine has predicted in response to a prompt, that’s your fault, not the engine’s.
Effective prompting requires tinkering. Or more formally, “iterations.” Output disappointing? Identify what’s lacking, adjust your prompt, try again. Different models produce different results from identical prompts—though they're converging, each has its own flavor. Test the major models (Gemini, Claude, ChatGPT) and find which matches your needs.
The Key: To most effectively use LLMs, you must understand the output you want. I’ve found them most effective as a thinking partner, not a content creator.
My Impressions
Claude delivers strong edits but weak summaries. ChatGPT leaves fingerprints (would you like another em dash? an odd, AI-flavored folksiness) and displays edits inconsistently, but overall handles editing decently. It has also been the best for off-the-cuff brainstorming, and I’ve found it more useful for everyday consumer tasks than for editing. Gemini summarizes aggressively but can miss the plot. I haven’t yet found the right prompt to make it a good editor.
For each, detailed prompts produce better output, to a point. It’s a balance. As your prompts grow longer, ambiguity creeps in. And if you give an LLM an ambiguous prompt, you’re going to get an ambiguous result.
LLMs are “tasky”: describe a specific output with the right level of detail, get that output. But slight prompt variations or overly long prompts produce… interesting results.
As writing aids, I’m still deciding if LLMs are a feature or a product. They let me draft loosely and then “fix it in post.” They’ve made ideas from Gary Kinder’s “Three Steps and Three Rules” easier to apply. But that’s not a revolution, it’s a better spellchecker.
I don’t use LLMs for research. I experimented, got wrong answers or results that weren’t better than my Boolean searches, and haven’t experimented more. LLMs might work as a new type of finding source – provided they give you exact citations and you read the cases. But we’ve all read cases where judges stretch citations beyond holdings. Judges can do that; it’s their job to move the law forward. LLMs shouldn’t.
Prompting from 10,000 feet
Using consumer interfaces (not APIs), you’re stuck with default settings. I don’t know how to set sampling controls or other settings without API access. Advanced users might.
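For the curious, here is what those API-only settings look like. This is a minimal sketch, assuming the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in your environment; the model name and prompt text are placeholders, not recommendations:

```python
# Sketch: the "temperature" sampling control, which consumer chat apps
# hide but developer APIs expose. Lower values make output more
# deterministic; higher values make it more varied.

def build_edit_request(section_text: str, temperature: float = 0.2) -> dict:
    """Assemble request parameters for a brief-editing prompt."""
    return {
        "model": "gpt-4o",          # placeholder model name
        "temperature": temperature,  # the knob the consumer interface hides
        "messages": [
            {"role": "system", "content": "Act as a legal brief editor."},
            {"role": "user",
             "content": f"Revise for concision:\n\n{section_text}"},
        ],
    }

# To actually send it (requires an API key):
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**build_edit_request("..."))
# print(response.choices[0].message.content)
```

For editing tasks, a low temperature is the usual starting point, since you want consistent, conservative revisions rather than creative ones.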
Using the consumer interface, I’ve found prompts more effective if they follow this general structure:
· Role. Define “who” the LLM is. What is its background? What’s its knowledge base?
· Task. Tell it what to do: edit, summarize, draft, critique, code, etc.
· Context. Provide background relevant to the task.
· Reasoning. Ask for specific reasoning steps if relevant.
· Examples. Show the LLM what a “good” response looks like. Especially useful for specific formats.
· Output. Describe exactly what you want, redline edits, summary paragraph, formatted memo, code language, etc.
· Stop Condition. Tell it when to stop.
This framework came from an OpenAI guide that I can’t seem to find. For deep dives:
· Gemini
· Claude
· OpenAI
Samples
Editing a Legal Brief: Need to cut 500 words? Try:
Act as a legal brief editor, in the style of "Legal Writing in Plain English" by Bryan Garner, and "Making Your Case: The Art of Persuading Judges" by Bryan Garner and Former Supreme Court Justice Antonin Scalia. [If you don’t know or don’t like Garner’s style, DO NOT USE THIS PROMPT – Try Strunk & White, “The Elements of Style” – or your preferred style guide].
Please review and revise the text below for clarity, conciseness, and persuasiveness. Prioritize [what you want to prioritize].
The text is from [MSJ in state court, demurrer, opposition to same, an appellate brief to the Ninth Circuit, etc.] addressing [specific legal topic]. [Provide procedural background and the form of the motion].
Return the results as:
1. A redlined section with all changes marked as tracked changes from the original draft. DO NOT CREATE OR EDIT CITATIONS.
2. Stop when at least 500 words shorter. All edits must be marked in redline. [Remove this condition if you want edits regardless of words cut].
The section to edit is below/attached:
Summarizing: Need a TL;DR for your blog post? I use:
Act as if you were summarizing in the style of [list publications you find compelling/well written] or of authors like [list authors you think write well/are persuasive].
Review this final blog post. Do not edit. Instead, create a TL;DR version for posting on LinkedIn.
Ensure the TL;DR keeps my voice and authors in mind.
[Provide context. Example: These posts are part of my MondayxMorelloMediation series, designed as marketing content for my Mediation Firm.]
Optimize for clarity, conciseness, and practical value, without losing my voice.
Return the results as:
1. A TL;DR version to post on LinkedIn.
The long form blog post follows below:
With some tinkering or “iteration”, both prompts can be repurposed for memos, reports, briefs, or client updates.
Using LLMs Before the Mediation
With effective prompting in mind, before the mediation:
CONFIDENTIALITY WARNING
Once you hit enter, you no longer own it. Prompts aren't confidential. Fine for public brief sections. Not fine for case-specific details. No names, parties, or identifying information. Keep it generic.
Understand LLM Valuations. Forget everything you’ve learned and mimic how a client might prompt an LLM. Then use the prompt framework above and have the LLM evaluate your case (remember to have the LLM play different roles as it evaluates). Compare results.
Take the results with a grain of salt, but they’ll help you anticipate how your client, or the other side, might view the case.
You'll be prepared to explain: LLMs are prediction engines, not fact machines. Their training data doesn't represent most settlements. You'll have counter-examples ready when discussing LLM results.
Edit Your Briefs (Please). Feed smaller sections of your brief into an LLM for tone and clarity. This limits confidentiality risks and improves precision. Let the model critique a section or paragraph, not the whole draft. And use it for editing, not initial drafting.
Feeling creative? Ask it to act as a neutral and critique your brief from the mediator’s perspective. Take any feedback more as entertainment than insight. But you might find a fresh angle.
General Preparation. Ask the LLM for:
· Five questions the mediator might ask
· Five questions your client might ask
· Plain-English translations of complex legal theories
As in-house counsel, I often translated wage and hour law for business teams. Use the LLM to do this translation before the mediation and make in-house counsel’s job easier.
For plaintiffs: generate common-sense talking points so clients feel prepared.
Not Research. If you’re in unfamiliar territory, start with traditional finding sources. Read key cases yourself. Don’t use LLMs to draft legal sections; you want an accurate statement of the law, not a prediction of what you want the law to be.
Not for Initial Drafting. You’ll sound like an LLM. AI-drafted text has tells. Something feels off. As ChatGPT itself suggested, “There’s a certain uncanny rhythm to AI writing.” And I love a good em dash, but if you use an LLM to write for you, you’ll have more em dashes littered through your draft than you ever thought possible.
Next week: The Mediation Brief.
Until then,
Monday x Morello Mediation.
None of this is legal advice. Your mileage may vary.
P.S. - A note added after posting: OpenAI has specifically disclaimed any “legal advice” that ChatGPT may provide. Through its Usage Policies posted on October 29, 2025, OpenAI now prohibits “automation of high-stakes decisions in sensitive areas without human review … legal…” It is unclear what this means in practice. As of this post, ChatGPT still claimed it could “dig deeper and search for more nuanced or lesser-known unpublished trial/administrative decisions.” It sounds like the new usage policy is more about avoiding liability than changing functionality.