Claude Is Not a Lawyer: Federal Court Rules that Use of Consumer AI Platforms Can Destroy Privilege Protections

March 2, 2026

In a first‑of‑its‑kind decision, Judge Jed S. Rakoff of the Southern District of New York ruled that a criminal defendant’s written exchanges with a publicly available generative AI platform were not protected by either the attorney‑client privilege or the work‑product doctrine. United States v. Heppner, No. 25‑cr‑503 (JSR) (S.D.N.Y. Feb. 17, 2026). The court’s reasoning provides one of the clearest judicial statements to date on how traditional privilege rules apply when clients use consumer AI tools to analyze, summarize, or generate materials relating to legal matters.

Background

Following the defendant’s arrest on securities and wire fraud charges, the government seized numerous documents and electronic devices pursuant to a search warrant. Among those materials were approximately 31 documents memorializing the defendant’s communications with Claude, Anthropic’s popular artificial intelligence chatbot. According to defense counsel, the defendant used Claude—without direction from counsel—to draft reports outlining potential defenses, analyze facts, and reflect on possible legal strategies, which he later shared with counsel.

The defendant claimed that these AI‑generated documents were protected by both attorney‑client privilege and the work‑product doctrine because they incorporated attorney‑provided information, were created in anticipation of litigation, and were ultimately shared with counsel.

Judge Rakoff rejected these arguments and held that none of the protections applied.

Key Takeaways

Communications with Consumer AI Platforms Are Not Privileged—Even When They Involve Legal Strategy

The court held that attorney‑client privilege protects only confidential communications between a client and an attorney made for the purpose of obtaining legal advice. Communications with an AI system—even one used to generate content for an attorney—do not meet this standard because:

  • AI platforms are not attorneys or agents of attorneys. The court emphasized that Claude expressly disclaims providing legal advice and is not a professional subject to fiduciary duties or professional discipline.
  • There is no confidentiality. The privacy policy governing the consumer version of Claude informs users that their inputs and outputs may be disclosed to third parties, including government regulators, and users must consent to this publicly available policy to use the tool. Because the client voluntarily shared information with a non‑confidential third party, privilege could not attach.
  • Sharing the AI output later with counsel does not retroactively create privilege. Non‑privileged material does not become privileged merely because it is later transmitted to an attorney.

Practical Implication:

If a client enters privileged or sensitive factual information into a consumer AI platform, that act may waive confidentiality, and the information is no longer protected—even if the client’s purpose was to prepare for conversations with a lawyer.

Using Consumer AI to Prepare Materials for Counsel Still Waives Privilege and Does Not Create Work‑Product Protection

The court also rejected work‑product protection for the AI documents. Work‑product protection generally applies to materials prepared by or at the direction of counsel in anticipation of litigation.

Here, the court found:

  • The client acted entirely on his own, without instruction or involvement from counsel, when drafting materials with Claude.
  • The outputs did not reflect counsel’s mental impressions or strategy at the time they were created.
  • Even if the client believed he was preparing for litigation, the doctrine is designed to protect lawyers’ thinking, not a client’s independent brainstorming with an AI tool.

Because the materials originated from the client’s unilateral use of a third‑party platform—outside the scope of counsel’s direction and outside any zone of confidentiality—they did not qualify as work product.

Practical Implication:

Clients cannot create work product by independently using public AI tools, even if their intention is to aid counsel. Doing so risks disclosure not only of the AI‑generated materials but also of any underlying information input into the system.

***

If you have any questions regarding the foregoing, please contact your primary attorney or a member of the Litigation & Investigations Group at Seward & Kissel LLP.
