An analyst is pasting a client's portfolio into ChatGPT to draft Monday meeting notes.

Who's going to find it?

If you're the DPO, that question matters more than the prompt itself. The real failure is often not the first paste. It's that nobody can tell you it happened, when it happened, or which data left the browser.

For many UK and EU fintechs, AI use has moved faster than control design. Staff are using ChatGPT, Copilot, Gemini, and Claude in the browser because the gains are obvious: faster summaries, cleaner client communications, less admin.

The compliance problem sits at the prompt layer.

Under GDPR Article 28, you need a contract with every processor handling personal data on your behalf. If an analyst types a client's portfolio, KYC extract, or arrears list into ChatGPT, OpenAI is no longer background infrastructure. It is processing personal data.

If you do not have a data processing agreement in place for that use case, you have an Article 28 problem the moment the prompt is sent.

Most firms do not discover that through a control. They discover it by accident.

That matters because GDPR Article 33 gives you 72 hours to notify the supervisory authority after you become aware of a personal data breach, where notification is required. The clock does not start when the analyst hits Enter. It starts when your firm becomes aware.

In practice, many DPOs have no systematic way to become aware of browser-based AI disclosures. Awareness arrives through a screenshot, a complaint, a manager overhearing something, or a panicked Slack message after the fact.

That is not a defensible operating model.

The pattern shows up quickly when you inspect real prompts. In audits, roughly 35-40% of prompts from finance-team members contain at least one piece of PII or regulated content. Not all of that is reportable. Not all of it is even a breach.

But enough of it falls inside Article 28 scope, confidentiality duties, or internal policy that you need visibility before you can make legal judgments.

The prompts are rarely dramatic. That's why they slip through.

"Here's John Mercer's portfolio summary — £47k in ISA and £180k in GIA. Can you write a plain-English review for his Monday meeting?"
"Format this KYC extract as a table: Jane Okafor, DOB 14/03/1988, passport GBR..., address 12 Harrow Road..."
"Take this list of clients with overdue KYC and group them by branch — [paste of 200 names, emails, account numbers]"

None of those are files. None are email attachments. None are USB transfers. None are uploads in the old sense. They are typed or pasted text in a browser tab.

That distinction breaks a lot of legacy DLP coverage.

Traditional DLP is built around channels it understands: email, endpoints, file movement, cloud storage, removable media. AI prompts dodge those categories because the risky act is often a few lines of text submitted through a web form.
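
To make that concrete, here is a minimal sketch of what a prompt-layer check can look like in a browser extension's content script. The pattern names, thresholds, and the form-submit hook are all illustrative assumptions, not any vendor's actual implementation; real AI chat UIs often send text through fetch calls or contenteditable fields rather than plain form submits, so a production tool has to hook more than this.

```typescript
// Minimal sketch: inspect prompt text in the browser before it is sent.
// All pattern names and rules here are illustrative assumptions.

type RiskCategory = "uk_account" | "iban" | "dob" | "email" | "passport";

const PATTERNS: Record<RiskCategory, RegExp> = {
  uk_account: /\b\d{2}-\d{2}-\d{2}\b.{0,20}\b\d{8}\b/, // sort code near an 8-digit account number
  iban: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/,
  dob: /\b\d{2}\/\d{2}\/(19|20)\d{2}\b/,
  email: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/,
  passport: /\bpassport\s+(no\.?\s*)?[A-Z0-9]{6,9}\b/i,
};

function detectRisk(prompt: string): RiskCategory[] {
  return (Object.keys(PATTERNS) as RiskCategory[]).filter((category) =>
    PATTERNS[category].test(prompt)
  );
}

// Capture-phase listener so the check runs before the page's own handler
// submits the text to the model provider.
document.addEventListener(
  "submit",
  (event) => {
    const form = event.target as HTMLFormElement;
    const text = Array.from(form.querySelectorAll("textarea"))
      .map((field) => field.value)
      .join("\n");

    const categories = detectRisk(text);
    if (categories.length > 0) {
      event.preventDefault(); // policy decides: block, warn, or redact
      console.warn("Prompt flagged before submission:", categories);
    }
  },
  { capture: true }
);
```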

If your controls trigger on document exfiltration but not on prompt content, your highest-volume AI data flow is largely invisible.

This is where DPOs get cornered in audits. Someone asks a simple question: show me your audit trail for personal data shared with public LLMs. If your answer depends on policy PDFs, annual training slides, and a best-efforts instruction not to paste client data, you do not have an audit trail.

You have intent.

A prompt-level audit log is the difference between those two states. It gives you timestamps, categories of detected risk, user context, and a redacted record of what was attempted without creating a second sensitive data store.
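
As a sketch of what such a record might hold, assuming only the fields named above (the interface name, field names, and the redaction helper are illustrative, not a real product's schema):

```typescript
// Illustrative shape for a prompt-level audit record. Field names are
// assumptions based on the properties described in the text above.

interface PromptAuditRecord {
  timestamp: string;      // ISO 8601, e.g. "2025-06-02T09:14:02Z"
  user: string;           // directory identity, not free text
  destination: string;    // e.g. "chat.openai.com"
  categories: string[];   // detected risk categories, e.g. ["uk_account", "dob"]
  action: "allowed" | "blocked" | "redacted";
  redactedPrompt: string; // detected values masked before storage
}

// Redact detected values before the record is written, so the log never
// becomes a second sensitive data store.
function redact(prompt: string, patterns: RegExp[]): string {
  return patterns.reduce(
    (text, pattern) =>
      text.replace(
        new RegExp(pattern.source, pattern.ignoreCase ? "gi" : "g"),
        "[REDACTED]"
      ),
    prompt
  );
}
```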

This is what that control looks like in practice.

[Image: Prytive dashboard showing a redacted audit log of prompts flagged as high-risk, with categories and timestamps]

The dashboard shows a redacted audit log of flagged prompts with timestamps and risk categories, which is the kind of evidence an auditor can actually inspect.

That matters for more than internal assurance. Regulators are paying attention. The ICO has published guidance on AI and data protection for organisations acting as controllers, and the direction is clear: if you're adopting AI, you are expected to understand your data flows, lawful basis, processor arrangements, and governance choices.

AI use is no longer novel enough to excuse guesswork.

For fintechs, the processor angle is usually underestimated. Teams often assume they are safe because the model provider says enterprise data is not used for training, or because staff are only using AI for drafting and formatting. Useful points. Not enough.

Article 28 is about whether a processor is handling personal data on your behalf and whether the required contractual terms are in place. A hidden prompt flow can create processor exposure long before security notices it.

You also need to think carefully about evidence quality. If you later need to reconstruct events under Article 33, broad statements like "some staff may have used ChatGPT" are nearly useless. You need dates. You need scope.

You need to know whether one adviser pasted one client's holdings or whether a team lead pasted a branch-wide KYC list affecting 200 data subjects.
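
A log with that shape makes the reconstruction mechanical rather than forensic. As a sketch, reusing the PromptAuditRecord type above and assuming a hypothetical estimatedSubjects field that a detector might populate by counting distinct names or emails in a prompt:

```typescript
// Sketch: answering the scope question from the log. estimatedSubjects is
// a hypothetical field, not something every detector will provide.

type ScopedRecord = PromptAuditRecord & { estimatedSubjects: number };

function reconstructScope(log: ScopedRecord[], from: Date, to: Date) {
  const inWindow = log.filter((record) => {
    const t = new Date(record.timestamp);
    return t >= from && t <= to;
  });

  return {
    prompts: inWindow.length,
    users: new Set(inWindow.map((r) => r.user)).size,
    destinations: new Set(inWindow.map((r) => r.destination)).size,
    // One adviser pasting one client reads very differently from a
    // branch-wide list affecting 200 data subjects.
    maxSubjectsInOnePrompt: Math.max(0, ...inWindow.map((r) => r.estimatedSubjects)),
  };
}
```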

That is why the browser layer matters. It sits where the action happens.

A workable control at this layer does four things:

1. Inspects prompt text in the browser, before it reaches the model provider.
2. Blocks, warns, or redacts when defined risk categories appear.
3. Classifies what it detected, so legal judgment can be applied later.
4. Writes a redacted, timestamped audit record with user context.
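
Tying those together, a minimal evaluation flow might look like this, reusing detectRisk, PATTERNS, redact, and PromptAuditRecord from the sketches above (evaluatePrompt itself is a hypothetical helper, not a real API):

```typescript
// Illustrative flow: detect, decide, redact, record.

function evaluatePrompt(
  user: string,
  destination: string,
  prompt: string
): PromptAuditRecord {
  const categories = detectRisk(prompt);
  return {
    timestamp: new Date().toISOString(),
    user,
    destination,
    categories,
    action: categories.length > 0 ? "blocked" : "allowed",
    redactedPrompt: redact(prompt, Object.values(PATTERNS)),
  };
}
```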

For a DPO, that changes the conversation. You move from "we think staff are probably not doing this" to "we can show what was attempted, what was blocked, what categories appeared, and when".

You do not need a six-month programme to test whether this is a live issue in your firm. You can answer the core question in the next ten minutes.

Three diagnostic questions you can answer in 10 minutes

1. Can you name every AI processor your staff are using with personal data?

Not your approved vendors. Every processor actually receiving prompt content today. If the answer is based on procurement records alone, you are missing browser-led usage.

2. If a supervisor asked for your ChatGPT audit trail right now, what would you hand over?

Be strict with yourself. A policy is not an audit trail. Training completion data is not an audit trail. You need prompt-level evidence, with dates and enough context to assess risk.

3. How would you know when Article 33 awareness begins?

Your process needs a defined event that creates awareness: a user report, a manager escalation, a SOC alert, or a prompt log review. If there is no detection path for browser AI prompts, your 72-hour window is resting on luck.

That is the Monday problem. By the time you hear about a pasted portfolio or KYC list, the real control gap is already behind you.

If you want to test this without turning it into a major project, install the extension on 3-5 analysts for 7 days and review the redacted prompts from your own team.