US court rules AI chat logs not protected by attorney-client privilege
A US federal court ruling that AI chat logs are not protected by attorney-client privilege is drawing attention from insurers and financial institutions reviewing how employees use consumer AI tools in legal and compliance analysis.
The decision from the US District Court for the Southern District of New York in United States v. Heppner determined that communications between a defendant and the generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine.
The ruling comes as financial institutions increasingly adopt generative artificial intelligence tools for drafting, research, and internal analysis.
Judge Jed S. Rakoff rejected the defendant’s privilege claim on four grounds.
First, the court said attorney-client privilege requires communication with a licensed professional, which an AI system does not satisfy. Second, it cited privacy policies allowing AI providers to share data with third parties and government authorities. Third, the ruling noted that the defendant used the tool voluntarily and that the platform stated it does not provide legal advice. Fourth, the court said the work product doctrine protects an attorney’s mental processes rather than documents prepared independently and later shared with counsel.
The case involved Bradley Heppner, a financial services executive charged with securities fraud and wire fraud. After receiving a federal grand jury subpoena and hiring legal counsel, he used the consumer version of Anthropic’s Claude to research legal questions related to the investigation. His prompts and the AI’s responses produced 31 documents, which he later shared with his lawyers.
Authorities seized electronic devices containing the materials during the investigation, and prosecutors sought access to them.
Judge Rakoff ruled that sending the documents to lawyers after they were created did not make them privileged. The court also said entering information into the AI system disclosed it to a third party whose terms indicated inputs were not confidential.
The decision applied to consumer AI systems used without direction from counsel. The court said enterprise systems with contractual confidentiality provisions might be evaluated differently.
The opinion stated: “Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”
The ruling joins a growing body of legal disputes involving generative AI data.
Courts have indicated that prompts, outputs, and activity logs generated by AI systems may qualify as electronically stored information subject to discovery when relevant to claims or defenses. In separate litigation involving OpenAI, a federal court ordered the retention of millions of chatbot conversation logs that could be reviewed in a copyright dispute.
Legal practitioners note that AI systems can produce large volumes of written records that may later be requested during civil litigation, regulatory inquiries, or internal investigations.
Regulators are also examining how insurers use artificial intelligence tools and external data.
The National Association of Insurance Commissioners’ Big Data and AI Working Group has launched a pilot program for its AI Systems Evaluation Tool. California, Colorado, Connecticut, Florida, Iowa, Louisiana, Maryland, Pennsylvania, Rhode Island, Virginia, Vermont, and Wisconsin are participating. The pilot is expected to run through September 2026.
During the program, insurers may receive inquiries during market conduct or financial examinations about their use of AI systems and third-party data.
On February 19, the US Department of the Treasury released two resources addressing artificial intelligence use in financial services: an Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework.
The lexicon provides definitions for AI concepts and risk categories intended to assist communication among regulators, technical specialists, legal teams, and business functions. The framework adapts the National Institute of Standards and Technology’s AI Risk Management Framework to operational and consumer protection considerations in financial services.
State lawmakers in Oregon, Utah, Virginia, and Washington are advancing legislation targeting developers and deployers of AI chatbot services, focusing on transparency, safety design requirements, and data protection.
Utah’s Companion Chatbot Safety Act has passed the Utah House of Representatives and is moving to the state Senate. California and other states are developing rules related to digital content and AI training data disclosures that may affect compliance across jurisdictions.
International regulators are also addressing privacy concerns. On February 23, 61 national privacy authorities coordinated through the Global Privacy Assembly issued a joint statement warning about the use of AI to generate images or videos depicting identifiable individuals without consent.