Case Law · AI Governance · Confidentiality

A Federal Judge Just Wrote Your Firm's AI Policy: Here's What It Says

Morgan v. V2X, Inc. is the most comprehensive AI-in-litigation ruling to date. The protective order language reads like a governance checklist every law firm should already have.

David Zissman, J.D., M.B.A. · April 2026 · Article 8 of the Series

The Ruling That Changes the Conversation

On March 30, 2026, Magistrate Judge Maritza Dominguez Braswell of the District of Colorado issued a ruling in Morgan v. V2X, Inc. that should be required reading for every attorney using (or thinking about using) AI in any aspect of their practice.

This is only the third federal ruling to address AI and work product or privilege. But it is by far the most thorough, and the one with the most direct implications for how firms handle AI tools right now. The reason: Judge Braswell didn't just decide a discovery dispute. She wrote a protective order that functions as a governance blueprint: one that maps directly onto the obligations every law firm already has but most haven't operationalized.

The judge's credentials matter here. She co-chairs the District of Colorado's AI Committee, co-founded the Judicial AI Consortium, participates in Sedona Conference Working Groups 1 and 13, and authors The AI Brief. This is a judge who has been thinking about these questions before the parties brought them to her courtroom. That context explains why the opinion is as comprehensive as it is, and why it is likely to influence courts far beyond Colorado.

What Happened

The underlying case is a routine employment discrimination dispute. What made it extraordinary is the plaintiff. Archie Morgan is pro se, representing himself against corporate defendant V2X and its litigation team. Both sides were using AI. The dispute erupted when V2X moved to amend the existing protective order to restrict Morgan's AI use, and Morgan fired back that V2X was holding a four-month-overdue document production hostage to extract concessions about his AI tools.

He had a point about the imbalance: a pro se plaintiff limited to free consumer tools being restricted while opposing counsel maintains enterprise AI infrastructure with contractual safeguards already in place.

The court framed the dispute around two questions. First, does the work product doctrine protect a pro se litigant's AI-generated materials? Second, what should a protective order say about AI use with confidential information?

Three Holdings, One Governance Framework

Judge Braswell's ruling delivered three distinct holdings, and taken together, they amount to the clearest judicial articulation yet of what responsible AI use in litigation actually requires.

1. Work Product Protection Survives Consumer AI Use

The court held that AI-assisted litigation materials prepared using public AI tools are protected under Federal Rule of Civil Procedure 26(b)(3) as work product: mental impressions and litigation preparation materials. Using a consumer AI platform does not automatically waive that protection.

Judge Braswell's reasoning went beyond textual analysis. She pointed out that the 1970 amendments to Rule 26(b)(3) extended protection to parties themselves, not just attorneys. Courts have applied it to pro se litigants for decades. And she wrote that the case for extending these protections is "magnified in the context of AI: one of the most powerful knowledge tools ever to become available to the masses."

On the waiver question, the court offered a reframing that should resonate with every attorney who uses cloud-based tools: nearly all electronic interaction passes through third-party systems. Gmail hosts millions of accounts and has access to millions of messages, documents, and files. Does having a Gmail account forfeit all rights to confidentiality and privacy? If you accept the waiver argument, the answer has to be yes, and that is not a sustainable legal rule.

2. AI Tool Identity Must Be Disclosed

Morgan argued that identifying which AI platform he used would itself reveal work product: that tool selection reflects strategy and analytical approach. The court disagreed. Disclosing the name of the AI tool used in connection with confidential information does not reveal mental impressions or legal strategy. And the opposing party has a legitimate need to assess whether its confidential information was compromised.

The court's language was pointed: Morgan offered only conclusory assertions rather than a factual basis for his claim that tool identification would expose strategy. That wasn't enough.

3. The Protective Order Language: A Governance Blueprint

This is where Morgan goes where no prior AI ruling has gone. Judge Braswell analyzed both parties' proposed protective order language, rejected both, and wrote her own.

V2X's proposed language named specific platforms (ChatGPT, Harvey.AI, Claude) and was clearly drafted around its own enterprise contracts rather than the needs of the case. It still referenced Google's Bard, which was rebranded as Gemini more than a year ago. (The court noticed.) Morgan's proposed "closed-circuit environment" language was too narrow: it addressed unauthorized bad actors but not what the platform itself does with data in the ordinary course.

The court's own language requires that before any party uploads confidential information to an AI platform, the provider must be contractually prohibited from: (1) storing or using inputs to train or improve the model, and (2) disclosing inputs to any third party except where essential to service delivery, with those third parties bound by equivalent protections. The provider must also afford the party the ability to delete all confidential information on request. And the party must retain written documentation of those contractual protections.

Read that again through the lens of governance. The court just required, by order: data retention controls, training exclusion verification, third-party flow-down obligations, deletion rights, and documented compliance records. That is not a discovery ruling. That is an AI governance policy, delivered from the bench.
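The order's requirements translate naturally into a checklist a firm can actually track. Here is a minimal sketch of what such a record might look like; the class, field names, and the example vendor are all hypothetical illustrations, not language from the order or any real platform's terms:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorContractReview:
    """One AI platform's contractual posture, mirroring the Morgan order's terms.

    All field names are illustrative; a real review would cite the specific
    contract sections that establish each protection.
    """
    platform: str
    reviewed_on: date
    no_training_on_inputs: bool      # provider may not train or improve models on inputs
    no_third_party_disclosure: bool  # disclosure only where essential, with flow-down terms
    deletion_on_request: bool        # party can have all confidential inputs deleted
    documentation_retained: bool     # written proof of the above is on file

    def compliant(self) -> bool:
        """True only if every protection the order requires is in place."""
        return all([
            self.no_training_on_inputs,
            self.no_third_party_disclosure,
            self.deletion_on_request,
            self.documentation_retained,
        ])

# Hypothetical example: a free consumer tier typically fails the first check by default.
review = VendorContractReview(
    platform="ExampleAI (free tier)",  # made-up vendor name
    reviewed_on=date(2026, 4, 1),
    no_training_on_inputs=False,
    no_third_party_disclosure=True,
    deletion_on_request=True,
    documentation_retained=False,
)
print(review.compliant())  # False: not safe for confidential uploads under the order
```

The point of the sketch is the structure, not the code: every field maps to a clause the court required, and "compliant" is all-or-nothing, just as the order is.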

The Heppner Distinction Gets Sharper

If you have been following this series, you know that the Heppner ruling from the Southern District of New York (February 2026) held that a represented criminal defendant who used AI independently of his attorney could not claim work product protection. The reasoning: there was a structural gap between the party and his lawyer. The defendant acted entirely on his own initiative, without any direction from counsel.

Judge Braswell distinguished Heppner on two grounds. First, Heppner was a criminal case; Morgan is civil, governed by Rule 26(b)(3), which broadly protects the work product of a party (not merely counsel). Second, in Heppner there was a gap between the party and the attorney. No such gap exists in the pro se context, where the litigant is simultaneously the party and the advocate.

But the implication runs in the other direction too, and this is the part that matters for law firm governance: a represented party who uses AI independently of their lawyer (not at counsel's direction, not as counsel's agent) may land in exactly the same position as the defendant in Heppner. No privilege. No work product protection. The gap that destroyed protection in Heppner doesn't require a criminal case to appear. It requires an absence of attorney direction.

That is a governance problem. And it has a governance solution: policies that define who directs AI use, protocols that document that direction, and training that ensures clients understand the boundaries.

What This Means for Your Firm

This ruling validates the governance-first approach to legal AI. Every element Judge Braswell required in her protective order corresponds to something your firm should already have in place. Not because a court ordered it, but because your professional obligations demand it.

| Court Requirement | Governance Equivalent | What This Actually Requires |
| --- | --- | --- |
| No training on inputs | Vendor due diligence before deployment | Evaluating training and retention policies across every tool your firm uses, not once but every time the vendor updates its terms. A platform whose terms excluded training last quarter may not exclude it today. Policies vary by subscription tier, and free-tier defaults are almost never compliant. |
| No third-party disclosure | Confidentiality safeguards on data flow | Mapping where client data actually goes: not just the primary vendor, but subprocessors, cloud infrastructure providers, and any analytics layer. These chains differ by tool, by feature, and by how the tool is accessed. A browser plugin and an API integration may route data differently. |
| Deletion rights | Data lifecycle management | Confirming deletion is real: not just removing a conversation from your dashboard, but verifying the vendor purges inputs from logs, caches, and backup systems. Deletion capabilities vary dramatically by platform and are rarely straightforward to verify without reading the fine print. |
| Written documentation | Compliance records | Not a one-time memo in a file. Ongoing documentation that tracks which tools are approved, when they were vetted, what changed, and who authorized each use. This must be maintained as tools, terms, and case requirements change (which they do constantly). |
| Disclose AI tool identity | Tool inventory and accountability | Knowing what your firm actually uses, including tools attorneys and staff adopted on their own without approval. Most firms discover shadow AI use only when someone asks. A defensible inventory requires a policy, a process for reporting, and periodic audits. |

The left two columns are the part you can learn from a court ruling. The right column is the part you can't, because it depends on your firm's specific tools, practice areas, client types, and workflows. A family law solo vetting a consumer AI tool for case analysis has a completely different evaluation than a commercial litigation boutique using an enterprise platform for document review. The court's framework is universal. Implementing it is not.
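The tool-inventory row above is the one most firms can start on this week. As a hypothetical sketch (the tool names, statuses, and the 90-day re-review window are illustrative assumptions, not anything the court prescribed), an inventory that surfaces both shadow AI and stale reviews might look like this:

```python
from datetime import date

# Hypothetical firm AI tool inventory: each entry records current status,
# when the tool was last vetted, and who authorized it.
inventory = {
    "EnterpriseReviewAI": {"status": "approved", "vetted": date(2026, 1, 15),
                           "approved_by": "AI Committee"},
    "ConsumerChatTool":   {"status": "prohibited", "vetted": date(2025, 11, 2),
                           "approved_by": "AI Committee"},
}

def audit(observed_tools, inventory, today=date(2026, 4, 1), max_age_days=90):
    """Flag shadow AI (tools in use but absent from the inventory) and stale reviews."""
    shadow = [t for t in observed_tools if t not in inventory]
    stale = [name for name, rec in inventory.items()
             if (today - rec["vetted"]).days > max_age_days]
    return shadow, stale

# Tools actually observed in use, e.g. from a periodic survey or network review.
shadow, stale = audit(["EnterpriseReviewAI", "UnvettedPlugin"], inventory)
print(shadow)  # ['UnvettedPlugin'] — the shadow AI the article warns about
print(stale)   # ['ConsumerChatTool'] — last vetted more than 90 days before today
```

Even a toy version like this enforces the discipline the table describes: nothing counts as "in use" until someone has recorded who approved it and when it was last checked.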

The Access-to-Justice Dimension

There is one more thread in this ruling worth acknowledging. Judge Braswell recognized that restricting AI use to enterprise platforms with contractual safeguards will, "at least for now," effectively bar most consumer AI tools from use with confidential information. She acknowledged the cost, particularly for pro se litigants and under-resourced parties who cannot afford enterprise subscriptions.

This is real. AI has the potential to narrow the resource gap in litigation. But confidentiality obligations don't bend to accommodate cost constraints. The court held the line on protection while openly acknowledging the tension. That is the kind of intellectually honest judicial reasoning that builds durable precedent.

The Bottom Line

Morgan v. V2X is not a case about whether attorneys can use AI. That question is settled. It is a case about what responsible AI use requires, and the answer, delivered from the bench, is governance. Contractual safeguards. Documented compliance. Attorney direction. Confidentiality controls. Tool accountability.

But here is what a court ruling cannot give you: the implementation. Knowing the five requirements in the table above is the easy part. Building the systems that satisfy them across your firm's actual practice (the vendor evaluations that account for how your specific tools handle your specific data types, the workflows that vary by practice area, the client training that reflects how your clients actually interact with AI, the ongoing monitoring as platforms change terms and courts issue new orders), that is where governance becomes real. It is not a document you draft once. It is infrastructure you maintain.

The firms that already have that infrastructure just received judicial validation. The firms reading this ruling as a checklist they can knock out in an afternoon are misreading how much the third column of that table actually demands. And the gap between those two positions is where risk lives.

This article is for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your situation.

JDAI Consultants helps law firms build the governance infrastructure that rulings like Morgan v. V2X now demand: from AI use policies and vendor due diligence through workflow classification and compliance documentation.

