AI Privilege · Case Law · Professional Responsibility

Same Question. Same Week. Opposite Answers.

U.S. v. Heppner vs. Warner v. Gilbarco: The federal district court split that will reshape how every lawyer uses AI.

Within a single week in February 2026, two federal district courts confronted the same novel question: when a litigant uses a generative AI tool to prepare materials related to pending or anticipated litigation, are those materials protected by the attorney-client privilege or the work product doctrine?

They reached diametrically opposite conclusions.

In United States v. Heppner, Judge Jed S. Rakoff of the Southern District of New York held that 31 documents a criminal defendant generated using Anthropic's Claude were neither privileged nor protected work product. One week earlier, in Warner v. Gilbarco, Inc., Magistrate Judge Anthony P. Patti of the Eastern District of Michigan had held that ChatGPT-generated materials were protectable work product, and that disclosure to an AI platform did not constitute waiver.

This is not an abstract doctrinal curiosity. The split has immediate, concrete consequences for every attorney supervising a client who touches a generative AI tool - and for every client who has ever typed a litigation-related query into ChatGPT, Claude, Gemini, or any other consumer AI platform. The privilege status of those interactions is now an open question in the federal courts, and the answer depends on your jurisdiction, your facts, and, critically, whether counsel was in the loop.

The Cases

United States v. Heppner (S.D.N.Y.)

Bradley Heppner, a CEO facing federal fraud charges, used the consumer version of Claude to develop a legal defense strategy after receiving a grand jury subpoena and retaining counsel. The FBI seized 31 AI-generated documents from his home pursuant to a search warrant. Heppner argued the documents were privileged because he had input information learned from his attorneys and later shared the outputs with counsel.

Judge Rakoff rejected every argument. The court's analysis was methodical and, in my view, correct on the law while raising uncomfortable questions about the practical consequences:

AI Is Not an Attorney

Attorney-client privilege requires a communication between a client and an attorney. Claude is not an attorney. The court rejected the analogy to cloud storage platforms, reasoning that privilege requires "a trusting human relationship" with a fiduciary subject to professional discipline. An AI model has no fiduciary duties, no bar license, and no capacity for that relationship.

Consumer AI Is Not Confidential

Anthropic's privacy policy - to which Heppner agreed - expressly states that user inputs and outputs may be collected, used for training, and disclosed to third parties including government authorities. Under established privilege doctrine, a communication made through a channel with no reasonable expectation of confidentiality is not privileged.

AI Cannot Provide Legal Advice

Claude itself disclaims providing legal advice. The purpose element of the privilege test - that the communication be for obtaining or providing legal advice - was therefore unsatisfied.

Privilege Does Not Attach Retroactively

Heppner's fallback - that the documents became privileged when shared with counsel - fared no better. Non-privileged materials do not become privileged simply because they are forwarded to a lawyer. This is hornbook law.

On work product, the court was equally dismissive. Heppner's counsel conceded on the record that they did not direct him to use Claude. Because no attorney directed or was involved in their creation, the documents did not constitute attorney work product.

Warner v. Gilbarco, Inc. (E.D. Mich.)

In this employment discrimination case, the defendants sought to compel production of all documents related to the plaintiff's use of ChatGPT in connection with the lawsuit. The plaintiff objected on work product grounds. The defendants argued that any privilege had been waived by inputting litigation materials into a third-party AI platform.

Judge Patti denied the motion to compel. His reasoning diverged sharply from Rakoff's on two critical points:

  • AI is a tool, not a third person. Where Heppner implicitly treated AI as a third-party recipient of confidential information, the Warner court categorized generative AI programs as instruments - analogous to a calculator, a legal research database, or a dictation machine. Under this framework, disclosing information to an AI tool is not the same as disclosing it to a person, and therefore cannot constitute waiver.
  • Preparation in anticipation of litigation is sufficient. The court applied the plain language of Federal Rule of Civil Procedure 26(b)(3)(A), which protects materials "prepared in anticipation of litigation or for trial by or for another party or its representative." The Warner court did not require that counsel specifically direct the use of the AI tool. It was enough that the materials were prepared in connection with anticipated litigation and became part of the plaintiff's trial preparation strategy.

Side-by-Side: The Key Differentiators

| Issue | Heppner (S.D.N.Y.) | Warner (E.D. Mich.) |
| --- | --- | --- |
| Case Type | Criminal (securities fraud) | Civil (employment discrimination) |
| AI Tool | Claude (Anthropic, consumer tier) | ChatGPT (OpenAI) |
| Counsel Involvement | None - defendant acted alone; counsel conceded they did not direct AI use | Not specifically addressed - court focused on litigation-preparation purpose |
| Privilege Ruling | Denied - AI is not an attorney, the platform is not confidential, AI cannot provide legal advice | Not directly at issue; court found AI disclosure is not disclosure to a "third person" |
| Work Product Ruling | Denied - not prepared by or at the direction of counsel | Protected - prepared in anticipation of litigation under Rule 26(b)(3)(A) |
| AI Classification | Implicitly a third-party recipient of information | Explicitly a "tool, not a person" |
| Key Question | Who directed the AI use? | Why was the AI used? |

What Actually Drove the Split

Strip away the doctrinal framing and two variables explain the divergence.

Variable One - The Role of Counsel

In Heppner, the defendant acted alone. He used Claude on his own initiative, without direction from his attorneys, and only later shared the outputs. His counsel conceded this on the record. The court seized on this fact: no attorney involvement means no attorney-client privilege, and no attorney direction means no attorney work product.

In Warner, the court did not focus on whether counsel specifically directed the AI use. It focused instead on the purpose of the materials - litigation preparation - and treated the AI-generated documents as part of the broader trial preparation effort, regardless of who initiated the specific tool usage.

This is the hinge. Heppner asks: who directed the AI use? Warner asks: why was it used?

Variable Two - The Ontological Classification of AI

Is generative AI a "third person" or a "tool"? This sounds philosophical, but it has hard legal consequences. If AI is a third person, then sharing privileged information with it is a voluntary disclosure that may waive privilege. If AI is a tool, then using it to process information is no different from using a spreadsheet - no waiver occurs.

Heppner did not explicitly categorize Claude as a "third person," but its confidentiality analysis functionally treated it as one: the court analyzed Claude's privacy policy the way it would analyze a disclosure to a human intermediary. Warner was explicit: generative AI programs are "tools, not persons." Waiver requires disclosure to an adversary or in a manner likely to reach an adversary. A tool does not satisfy that standard.

My Take: Heppner Will Prevail, and That Is the Right Result

I believe the Heppner framework will ultimately become the majority rule, and that it should. Here is why:

  • The confidentiality problem is real and dispositive. Consumer AI platforms are not black boxes that destroy input data after generating output. They are operated by corporations that reserve broad rights to collect, use, and disclose user inputs. When you type litigation strategy into a consumer AI tool, you are sharing it with a corporate entity that has explicitly told you it may share it with government authorities. Under any traditional confidentiality analysis, that is a waiver. The Warner court's "tool" characterization avoids this problem by categorizing AI at the wrong level of abstraction. A hammer is a tool. A cloud-based service operated by a corporation with access to your inputs, trained on your data, and subject to government subpoenas is something categorically different.
  • The "tool" analogy will not survive appellate scrutiny. The distinction between a "tool" and a "third-party service provider" is going to collapse the first time a court seriously examines what happens to data inside a large language model. Consumer AI platforms ingest user inputs, process them through models that are updated and monitored by employees, store conversation logs, and in many cases use those inputs for further training. This is not how a calculator works. It is how a consulting firm works. The legal framework for analyzing disclosures to third-party service providers is well-developed, and it is the right framework.
  • Enterprise AI changes the calculus. Judge Rakoff's opinion hints at this. If an attorney directs a client to use an enterprise AI deployment - one with contractual confidentiality protections, data isolation, no training on user inputs, and no third-party disclosure rights - the privilege analysis could come out differently. The critical variables are the confidentiality of the platform and the involvement of counsel, not the nature of AI as a category.
  • The profession needs a clear rule. "It depends on your jurisdiction" is not sustainable. Attorneys across the country are using or supervising the use of AI tools right now. The bar needs a clear rule that consumer AI platforms are not confidential channels for privileged communications, and that clients who use them without attorney direction do so at the risk of waiver. That is the rule Heppner establishes.

What Practitioners Should Do Right Now

01 - Client Guidance

Issue AI-specific privilege guidance to every client at the outset of the engagement. Make it clear: if you put case-related information into a consumer AI tool without my direction, you may be waiving privilege. Document this in your engagement letter.

02 - Directed Use Only

If you need clients to use AI, direct it, document it, and control the platform. Use enterprise platforms with contractual confidentiality protections. Memorialize counsel's direction in writing.

03 - Internal Audit

Audit your own firm's AI usage. If associates or paralegals are using consumer AI tools for client work, you may be creating the same exposure that sank Heppner's privilege claims. Implement firm-wide policies specifying approved platforms.

04 - Read the Privacy Policies

The Heppner court treated Anthropic's privacy policy as dispositive evidence that confidentiality was not maintained. Every AI tool your firm or client uses should be vetted for data retention, training practices, and third-party disclosure provisions.

Looking Ahead

This split will not stand for long. The issues are too consequential and the questions too pervasive. AI-assisted legal work is not a niche practice - a Northwestern study published this month found that over 60% of federal judges have used AI tools in their judicial work. The tool is everywhere. The rules governing its use are nowhere near settled.

I expect the following developments within the next 12 to 18 months: appellate resolution from the Second and Sixth Circuits, formal state bar ethics opinions on AI use and privilege, disputes testing whether enterprise AI tiers satisfy confidentiality requirements that consumer versions do not, and possible legislative intervention as part of the broader federal AI regulatory framework.

The bottom line is this: the legal profession adopted AI faster than the law adapted to it. Heppner and Warner are the opening salvos of a conflict that will reshape privilege doctrine, client counseling, and legal technology adoption for a generation. The practitioners who are paying attention now will be the ones who are prepared when the rules finally crystallize.

"The risk is not the model. The risk is the firm that has not decided how to use it."

Case Citations: United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026); Warner v. Gilbarco, Inc., No. 24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026).

This article is for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your situation.

JDAI helps law firms develop AI governance frameworks - from policy drafting and tool evaluation through attorney training and ongoing compliance support.
