ABA Formal Opinion 512, issued in July 2024, put the profession on notice. Generative AI is not just a productivity tool. It is a confidentiality risk, a supervision risk, and, in the wrong fact pattern, a privilege-waiver event.
The stakes are immediate for individual attorneys. If a lawyer puts privileged matter into a third-party model without protections that preserve confidentiality, the issue is not just firm policy failure. It can trigger waiver arguments and disciplinary exposure under ABA Model Rule 1.6 and Rule 5.3.
For Risk Partners and General Counsel, the problem is not abstract. If an associate pastes privileged facts, litigation strategy, or confidential deal terms into a public or consumer AI tool without a vendor agreement that meaningfully protects confidentiality, you may have a third-party disclosure problem on your hands.
That goes straight to ABA Model Rule 1.6 on confidentiality of information and Rule 5.3 on a lawyer's duty to supervise non-lawyer assistance. Formal Opinion 512 says lawyers must understand how the tool handles input and output, what the provider stores, and who can access the data. If they do not, they cannot assume the disclosure is safe.
That matters because privilege turns on confidentiality. Once communications are disclosed to a third party outside the privileged relationship, waiver arguments start fast.
Courts have not needed a special "AI waiver doctrine" to get there. The existing doctrine already does the work. Voluntary disclosure to an outside service provider, without adequate safeguards and without necessity to the legal representation, is exactly the fact pattern opposing counsel will try to use.
The same analysis applies separately to work product. If a lawyer pastes mental impressions, witness strategy, or case theories into ChatGPT, that can expose opinion work product or fact work product.
Work-product protection is not identical to attorney-client privilege, but disclosure to an adversary or in a manner inconsistent with secrecy can still lead to waiver fights. If your litigation team is using a general-purpose model to pressure-test examinations or draft responses, they are not just handling client confidences. They may be exporting the core of trial strategy.
The risky prompts do not look dramatic. They look efficient. They look like Tuesday.
Summarize this motion to dismiss for our 2pm strategy call — it's for the Mercer fraud case, opposing counsel is Jones Day.
Draft a response to this deposition notice. Here's the witness background: [client name, disciplinary history, settlement terms from a confidential agreement]
Help me think through this witness examination — our theory is X, the witness is [person], their known vulnerabilities are [...]
Each one can disclose protected material. Names. Case posture. Settlement terms. Legal theories. Vulnerabilities of a witness. None of that belongs in a consumer chatbot by default.
State bars have been moving in the same direction. California's 2023 Practical Guidance on generative AI and subsequent ethics commentary treated AI use as a live confidentiality and competence issue, not a novelty. The Florida Bar's 2024 ethics guidance stressed that lawyers must protect confidential information and vet whether an AI provider stores or trains on inputs.
The New York State Bar Association's 2024 opinion on AI made the same point: lawyers may use AI, but not in a way that compromises client confidences or independent professional judgment. New Jersey's 2025 guidance goes further on governance and supervision, making clear that firms need controls, not just a memo telling lawyers to be careful.
This is where many firms get stuck. Policy is necessary. Policy is not enough.
A training slide deck will not stop a tired fourth-year from pasting a draft witness outline into ChatGPT at 11:48 p.m. Your legacy DLP likely does not see the prompt box in the browser. Your email banner does nothing here. Your MDM policy does nothing here.
The mistake happens in one copy-paste action, inside a tab your existing controls often miss.
You need friction at the moment of paste. Not after. At the exact second the lawyer is about to disclose privileged content to a third-party model.
That kind of control changes the risk profile. It blocks or warns before the disclosure leaves the browser. It gives the associate a chance to stop.
It gives the firm an audit trail of the redacted event, not a warehouse of raw privileged text creating a second breach surface.
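In practice, that friction can live in a browser-extension content script that intercepts the paste before the page ever receives the text. A minimal sketch, assuming an extension injected into the AI tool's page; the detection stub, patterns, and message are illustrative placeholders, not any product's actual logic:

```typescript
// Illustrative content-script sketch: cancel a risky paste before the
// prompt box receives the text. The detection stub is a placeholder;
// real detection is sketched in the sections that follow.

function looksPrivileged(text: string): boolean {
  return /\b(privileged|confidential settlement|witness strategy)\b/i.test(text);
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text/plain") ?? "";
    if (looksPrivileged(pasted)) {
      // Cancel the paste: the text never lands in the page,
      // so nothing leaves the browser.
      event.preventDefault();
      event.stopPropagation();
      window.alert("Blocked: this looks like privileged or confidential matter.");
    }
  },
  true // capture phase, ahead of the page's own handlers
);
```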
Formal Opinion 512 is especially pointed on individual disciplinary exposure. The opinion does not frame this as a pure IT issue. It is a lawyer-conduct issue.
Rule 1.6 requires reasonable efforts to prevent unauthorized disclosure of information relating to the representation of a client. Rule 5.3 requires lawyers with managerial authority to make reasonable efforts to ensure non-lawyer assistance is compatible with the lawyer's professional obligations. When the "assistant" is an AI system operated by a third party, that supervision duty does not disappear.
If anything, it gets harder, because the provider's terms, retention, and model-training practices vary by product and plan.
The waiver argument gets stronger when there is no contract protection in place. Many firms understand this instinctively with e-discovery vendors and cloud hosting. They negotiate confidentiality terms, security addenda, access restrictions, and handling commitments because disclosure to a necessary service provider under proper safeguards is different from tossing facts into a public model.
The same logic applies here. No BAA-equivalent agreement, no vetted controls, no clear data-handling terms: your factual record gets ugly fast if privilege is challenged later.
For law firms and legal SaaS teams, the fix lives at the browser layer.
Four controls every law firm needs at the browser layer
1. Real-time prompt inspection before submission
Your control must inspect text before it reaches ChatGPT, Copilot, Gemini, or Claude. That means detecting client names, matter names, settlement terms, financial data, health data, and internal legal strategy in the browser itself. If the tool only reviews traffic after transmission, it is too late.
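As a sketch of what inspection "in the browser itself" can look like: scan the draft prompt against firm-maintained term lists before submission. The categories and patterns below are hypothetical; a real deployment would generate them from the firm's client and matter records.

```typescript
// Hypothetical pre-submission scanner. Term lists are toy examples;
// in practice they would be derived from client and matter systems
// and pushed to the browser control.

type RiskCategory =
  | "client-identity"
  | "settlement-terms"
  | "financial-data"
  | "legal-strategy";

interface Finding {
  category: RiskCategory;
  match: string;
}

const TERM_LISTS: Record<RiskCategory, RegExp[]> = {
  "client-identity": [/\bMercer\b/i], // client and matter names
  "settlement-terms": [/\bsettlement (amount|terms)\b/i],
  "financial-data": [/\$\d[\d,]*(\.\d{2})?/], // dollar figures
  "legal-strategy": [/\b(case theory|witness (outline|strategy))\b/i],
};

function inspectPrompt(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const [category, patterns] of Object.entries(TERM_LISTS) as [RiskCategory, RegExp[]][]) {
    for (const pattern of patterns) {
      const match = text.match(pattern);
      if (match) findings.push({ category, match: match[0] });
    }
  }
  return findings;
}
```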
2. High-risk block with user-level feedback
Warnings are useful. Blocks are better for certain categories. If a prompt contains privileged matter, confidential agreement terms, or witness strategy, the system should stop the paste and tell the user why. Short message. Clear reason. No lecture.
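One way to express that split, reusing the hypothetical categories from the scanner sketch above; the warn-versus-block thresholds are placeholders a firm would tune:

```typescript
// Illustrative per-category policy: warn on some findings,
// block outright on the highest-risk ones.

type Action = "allow" | "warn" | "block";

const POLICY: Record<string, Action> = {
  "client-identity": "warn",
  "financial-data": "warn",
  "settlement-terms": "block",
  "legal-strategy": "block",
};

function decide(categories: string[]): { action: Action; message: string } {
  if (categories.some((c) => POLICY[c] === "block")) {
    // Short message. Clear reason. No lecture.
    return { action: "block", message: "Blocked: privileged or confidential matter." };
  }
  if (categories.some((c) => POLICY[c] === "warn")) {
    return { action: "warn", message: "Caution: this may identify a client or matter." };
  }
  return { action: "allow", message: "" };
}
```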
3. Redacted audit logs only
Your firm needs evidence that the control worked, but you do not need a second repository full of raw privileged prompts. Keep logs of the redacted event, risk category, user, timestamp, and destination app. Skip storage of the original sensitive content. That reduces discovery and breach exposure.
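The event record itself can stay narrow. A sketch of the shape, with hypothetical field names and, deliberately, no prompt text anywhere in it:

```typescript
// Hypothetical redacted audit event: enough to prove the control
// fired, without creating a second repository of privileged text.

interface RedactedAuditEvent {
  timestamp: string;      // ISO 8601
  user: string;           // firm identity, e.g. SSO username
  destinationApp: string; // e.g. "chatgpt", "copilot"
  riskCategory: string;   // e.g. "settlement-terms"
  action: "warned" | "blocked";
  // Deliberately absent: the prompt text itself.
}

function logEvent(event: RedactedAuditEvent): void {
  // A real control would ship this to the firm's SIEM or audit
  // store; console output stands in here.
  console.log(JSON.stringify(event));
}

logEvent({
  timestamp: new Date().toISOString(),
  user: "associate@firm.example",
  destinationApp: "chatgpt",
  riskCategory: "settlement-terms",
  action: "blocked",
});
```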
4. Scoped rollout by practice risk
Litigation is usually the fastest proof point, but it should not be the only focus. Employment, M&A, healthcare, and internal investigations create the same prompt-level exposure. Start where the risk is concentrated, then expand.
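A scoped rollout can be as simple as a per-practice-group enforcement map; the group names and modes below are hypothetical:

```typescript
// Hypothetical rollout map: enforce where risk is concentrated,
// monitor elsewhere, widen as the pilot proves out.

type Mode = "enforce" | "monitor" | "off";

const ROLLOUT: Record<string, Mode> = {
  litigation: "enforce", // pilot group: block high-risk prompts
  employment: "monitor", // log redacted events only, no blocking yet
  "m-and-a": "monitor",
  healthcare: "monitor",
  "internal-investigations": "monitor",
};

function modeFor(practiceGroup: string): Mode {
  return ROLLOUT[practiceGroup] ?? "off";
}
```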
The firms handling this well are treating generative AI like any other unapproved third-party disclosure channel. They are mapping the ethics duty to the technical control. Rule 1.6 is about preventing unauthorized disclosure. Rule 5.3 is about supervision. Work-product protection is about preserving the confidentiality of legal preparation.
Browser-layer controls line up with all three.
If you are still relying on policy alone, you are trusting perfect human behavior in the least forgiving moment of the workflow.
Pilot the control in one practice group first, starting with litigation. Deploy Prytive to a single team, measure blocked high-risk prompts, review the redacted audit log, and prove the control before firm-wide rollout.