It’s 4:47pm on Friday. Someone just told you a customer success rep pasted a patient list into ChatGPT three weeks ago. You have 72 hours.
Not 72 hours to understand every detail. Not 72 hours to build a perfect narrative. You have 72 hours under GDPR Article 33(1) to notify the supervisory authority unless the breach is unlikely to result in a risk to the rights and freedoms of natural persons.
Your first job is simpler than most teams make it: freeze the facts you still have.
AI-channel incidents leave a different evidence trail than email misdirection or a lost laptop. The key questions are not only what data left your control, but where it was pasted, under which account tier, what retention applied, and whether the vendor may still hold the prompt.
Start with evidence preservation. In this order.
- The redacted prompt. Preserve the exact text if you can do so safely. If you cannot, preserve a redacted reconstruction.
- Timestamp. Capture the original paste time and the time you became aware.
- Tool and account tier. ChatGPT Free, ChatGPT Team, Claude consumer, Gemini via Workspace, API, Copilot. This changes retention and training exposure.
- User ID. Name, role, department, manager, employment status.
- Data subjects affected. Count, geography, customer account, special category data under GDPR Article 9 if present.
- The AI vendor retention policy for that exact account type on that date.
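The six items above are easier to enforce if they live in one structured record from the first hour. A minimal sketch; the field names and example values are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """One preserved evidence bundle for an AI-channel incident.

    Field names are illustrative; adapt them to your IR tooling.
    """
    redacted_prompt: str          # exact text if safe to keep, else a redacted reconstruction
    pasted_at: str                # original paste time (ISO 8601, UTC)
    aware_at: str                 # when the organisation became aware
    tool: str                     # e.g. "ChatGPT"
    account_tier: str             # e.g. "consumer Free", "Team", "API"
    user_id: str                  # name, role, department, manager, employment status
    subjects_affected: int        # record count
    special_category: bool        # GDPR Article 9 data present?
    vendor_retention_policy: str  # policy text for that exact tier on that date

# Hypothetical values matching the Friday scenario in this article.
record = EvidenceRecord(
    redacted_prompt="Can you reformat this patient list [REDACTED: 47 rows]",
    pasted_at="2024-05-03T16:47:00Z",
    aware_at="2024-05-24T10:12:00Z",
    tool="ChatGPT",
    account_tier="consumer Free",
    user_id="CS rep, Customer Success, active employee",
    subjects_affected=47,
    special_category=True,
    vendor_retention_policy="Consumer chats kept until deleted; ~30-day deletion window",
)
```

A dataclass rather than a free-form note forces the team to confront every field it cannot yet fill in.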
If you already have prompt logging at the browser layer, this is straightforward.
If you do not, you are now in reconstruction mode.
Use the employee’s recollection immediately. Pull browser history. Pull SSO logs. Pull endpoint telemetry. Pull Slack and ticket references. Preserve screenshots before the user deletes anything. Do not wait until Monday.
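Those scattered sources only become useful once they are merged into a single chronology. A sketch, assuming each source has been reduced to `(timestamp, source, note)` tuples; the fragment data is hypothetical:

```python
from datetime import datetime

# Hypothetical fragments recovered from different systems.
fragments = [
    ("2024-05-24T10:12:00Z", "slack",   "first message asking about the clinic list"),
    ("2024-05-03T16:47:00Z", "browser", "chatgpt.com visit in history"),
    ("2024-05-03T16:31:00Z", "sso",     "login to CRM containing the patient list"),
]

def build_timeline(fragments):
    """Sort raw evidence fragments into one chronological record."""
    parsed = [
        (datetime.fromisoformat(ts.replace("Z", "+00:00")), source, note)
        for ts, source, note in fragments
    ]
    return sorted(parsed)

for ts, source, note in build_timeline(fragments):
    print(f"{ts.isoformat()}  [{source:7}]  {note}")
```

Even three fragments in order expose the gap that matters: three weeks between the paste and awareness.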
A simple incident timeline often starts with fragments like these:
```
4:47pm Fri, rep pastes: "Can you reformat this patient list..." [47 records] — ChatGPT.com consumer account
3 weeks later, Slack message: "hey did we ever get back to that clinic about their list?" — first moment of awareness
```
That last line matters. The GDPR 72-hour clock starts when your organisation becomes aware of a personal data breach, not when the employee pasted the data.
Document the moment awareness was established with precision: date, time, source, and who decided this was a reportable security incident. If you notify late, GDPR Article 33(1) requires you to give reasons for the delay.
Write that awareness entry into the incident log the moment it is established; every downstream deadline hangs off that single record.
Retention is the next fork in the road.
OpenAI, Anthropic, and Google do not handle prompt retention the same way across tiers. That difference affects both risk and your containment options.
OpenAI consumer ChatGPT chats can remain in the user history until deleted, and deleted chats are typically scheduled for permanent deletion within 30 days unless retained for security or legal reasons. OpenAI API data is usually retained for abuse monitoring for up to 30 days, with some enterprise arrangements offering no training on customer data and narrower retention. ChatGPT Team and Enterprise plans also differ from consumer in training defaults and admin controls.
Anthropic states that Claude API data is generally retained for a limited period, commonly 30 days, for abuse and trust-and-safety purposes unless you have a different contractual arrangement. Consumer-facing Claude products have different retention and account-history characteristics from API use.
Google also splits by product. Gemini for Google Workspace and Vertex AI controls differ from consumer Gemini. Workspace and enterprise services usually carry contractual commitments and admin visibility that consumer accounts do not. Consumer interactions may persist in account activity unless deleted, subject to product settings and Google’s retention terms.
You need the exact product and tier. “Used AI” is useless. “ChatGPT.com logged in with personal Gmail on Free tier” is useful.
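One way to force that precision is a lookup that refuses vague answers. The notes below paraphrase the public positions summarised above and are illustrative only; always verify the vendor's current terms for the exact tier on the incident date:

```python
# Illustrative retention notes keyed by (vendor, product, tier).
# Paraphrased summaries, not authoritative policy text: verify against
# the vendor's terms for the exact tier on the date of the incident.
RETENTION_NOTES = {
    ("openai", "chatgpt", "consumer"):
        "Chats remain in history until deleted; deleted chats typically purged "
        "within ~30 days unless retained for security or legal reasons.",
    ("openai", "api", "standard"):
        "Abuse-monitoring retention of up to ~30 days; enterprise terms may narrow this.",
    ("anthropic", "claude", "api"):
        "Limited retention, commonly ~30 days, for trust and safety, absent other terms.",
    ("google", "gemini", "consumer"):
        "May persist in account activity until deleted, subject to product settings.",
}

def retention_note(vendor: str, product: str, tier: str) -> str:
    """Return the retention note, or fail loudly if the tier is not pinned down."""
    key = (vendor.lower(), product.lower(), tier.lower())
    if key not in RETENTION_NOTES:
        raise KeyError(f"Unknown combination {key}: 'used AI' is not an answer.")
    return RETENTION_NOTES[key]
```

The deliberate `KeyError` is the point: the incident record should not admit "some AI tool" as a value.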
Now assess impact.
If the prompt contained a patient list, you may be dealing with special category data under GDPR Article 9. That raises the risk analysis fast. Count the records. Identify whether names were paired with diagnosis, treatment, clinic name, phone number, or insurance details.
Determine whether the rep used a personal account, whether the prompt may have been used for model improvement, and whether the data can still be deleted.
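That assessment can be reduced to a first-pass triage function. This is a sketch to structure the internal discussion, with invented weights; it feeds the Article 33/34 legal analysis and does not replace it:

```python
def triage_risk(special_category: bool, record_count: int,
                personal_account: bool, deletable: bool) -> str:
    """Rough first-pass triage for the Article 33/34 discussion.

    Weights are illustrative; the output frames the legal call, nothing more.
    """
    score = 0
    score += 3 if special_category else 0  # Article 9 data escalates fast
    score += 2 if record_count > 10 else 1
    score += 1 if personal_account else 0  # no admin deletion lever
    score += 1 if not deletable else 0     # vendor may still retain or train on it
    if score >= 5:
        return "likely high risk: prepare Article 33 and Article 34 notifications"
    if score >= 3:
        return "likely reportable: prepare Article 33 notification"
    return "document the no-risk rationale if you decide not to notify"

# The Friday scenario: patient list, personal consumer account, chat not deletable.
print(triage_risk(special_category=True, record_count=47,
                  personal_account=True, deletable=False))
```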
Then work the notification map.
You may need to notify four separate audiences:
- Your supervisory authority under GDPR Article 33.
- Affected data subjects under GDPR Article 34 if the breach is likely to result in a high risk to their rights and freedoms.
- The customer whose data it was, if you are a processor in a B2B relationship. Check your DPA notice window. Many contracts require notice within 24 hours.
- Your E&O or cyber insurer. Miss the policy notice condition and you create a second problem for yourself.
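All four windows can be computed the moment the awareness timestamp is fixed. A sketch; only the 72-hour figure comes from GDPR Article 33(1), while the DPA and insurer windows below are example values that must come from your actual contracts and policy:

```python
from datetime import datetime, timedelta

def notice_deadlines(aware_at: datetime) -> dict:
    """Notice deadlines relative to awareness.

    Only the 72h supervisory-authority window is fixed by GDPR Art. 33(1);
    the other figures are illustrative placeholders for contract terms.
    """
    return {
        "supervisory_authority": aware_at + timedelta(hours=72),  # GDPR Art. 33(1)
        "customer_dpa":          aware_at + timedelta(hours=24),  # example DPA clause
        "insurer":               aware_at + timedelta(hours=48),  # example policy term
        # Art. 34 to data subjects: "without undue delay", no fixed clock.
    }

aware = datetime.fromisoformat("2024-05-24T10:12:00+00:00")
for party, due in sorted(notice_deadlines(aware).items(), key=lambda kv: kv[1]):
    print(f"{party:22} due {due.isoformat()}")
```

Sorting by deadline makes the uncomfortable fact visible: the contractual 24-hour customer window often expires before the regulatory one.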
Do not wait for total certainty before making the first regulator call. Article 33(4) allows phased notification when full information is not yet available. Initial notice can state what you know, what you do not know, and when you will update.
The hardest operational problem is the one most teams discover too late: you often cannot retrieve the actual prompt. There is no mail server copy. No DLP quarantine. No CASB event.
If the employee used a consumer AI account and deleted the chat, you may have nothing except memory and fragments.
That turns step one into witness interviewing.
Ask the employee to reconstruct the paste from source documents, not from paraphrase. Which file did they open? Which rows did they select? Did they paste one list or two? Did the prompt include free-text notes? Did the model response contain transformed personal data that was then copied elsewhere?
This is also why instrumentation has to exist before the incident. After awareness, you are doing forensics with smoke.
The printable 72-hour checklist
- Record awareness precisely
  - Log the exact date and time awareness was established.
  - Name who received the report and who classified it as a personal data breach.
- Preserve the evidence base
  - Redacted prompt.
  - Timestamp.
  - Tool and account tier.
  - User ID.
  - Data subjects affected.
  - Vendor retention policy for that account.
- Contain access
  - Suspend or monitor the relevant AI account.
  - Remove personal accounts from sanctioned workflows.
  - Instruct the employee not to delete artefacts until legal approves.
- Reconstruct what was pasted if logs do not exist
  - Interview the employee.
  - Pull browser, SSO, endpoint, Slack, and ticket artefacts.
  - Match against the source file to estimate exact records exposed.
- Assess risk under GDPR Articles 33 and 34
  - Was personal data involved?
  - Was Article 9 special category data involved?
  - Can the vendor still retain or train on it?
  - How many people were affected, and in which countries?
- Notify the required parties
  - Supervisory authority within 72 hours where required.
  - Affected data subjects where high risk applies.
  - Customer under your DPA or MSA notice clause.
  - E&O or cyber insurer under policy conditions.
- Document gaps and next updates
  - State what is confirmed, inferred, and unknown.
  - Set deadlines for supplemental facts.
  - Keep a defensible incident chronology.
- Fix the control failure
  - Block or redact sensitive prompts in the browser before submission.
  - Stop relying on memory as your audit trail.
  - Update training, sanctioned-tool policy, and vendor tier controls.
If you are the person who gets the 5pm Friday call, your problem is rarely the legal test alone. It is the missing evidence.
Instrument now. Prytive captures exactly the evidence base the 72-hour protocol requires: redacted prompt artefacts, timestamps, tool names, and user IDs, without storing raw sensitive content.