Email attachments.
USB copies.
Dropbox uploads.
Your DLP sees those.
Then a sales lead pastes next quarter's renewal book into ChatGPT and asks for a churn plan. Your controls stay quiet. No blocked file. No unusual upload. No alert your analyst can trust.
That is the ChatGPT blind spot in your DLP.
Most security teams were trained to think in files, endpoints, and network egress. Reasonable. That is where classic data movement lived for twenty years.
The AI channel does not behave like email, web upload, or cloud sync. Your existing controls were not built for what happens inside a browser text box.
A prompt is not a file in the way your DLP expects. It is often a stream of typed characters, browser events, and API calls wrapped in TLS, mixed with normal web traffic, then rendered back into a chat session that looks harmless unless you inspect the exact moment the user submits the text.
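To make that concrete: the one place the full prompt exists as a single string is the input box at submit time. Here is a minimal sketch of capturing it there, assuming a browser extension content script; the textarea selector and the Enter-to-submit gesture are assumptions, since every chat UI wires this differently:

```typescript
// Minimal content-script sketch: capture the complete prompt at the moment
// of submission, before it fragments into streamed network calls.
// The "textarea" selector and the Enter-to-submit gesture are assumptions;
// real chat UIs vary and change often.
function inspectBeforeSubmit(text: string): void {
  // Placeholder for classification. This is the only point where the prompt
  // exists as one intact string on a device you control.
  console.debug(`Captured ${text.length} chars at submit time`);
}

const promptBox = document.querySelector<HTMLTextAreaElement>("textarea");

if (promptBox) {
  promptBox.addEventListener("keydown", (event) => {
    // Enter without Shift is the usual submit gesture in chat interfaces.
    if (event.key === "Enter" && !event.shiftKey) {
      inspectBeforeSubmit(promptBox.value);
    }
  });
}
```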
That gap matters because prompt exfiltration is usually high-value data, not trivia.
The examples are mundane. That is why they get through.
- "Here's our H1 customer churn data — CSV attached below. Which 10 accounts are highest-risk for churn in Q2?" [+ 5,000-row CSV]
- "Draft a retention plan for these 15 enterprise accounts: [names, ARR, renewal dates, health scores]"
- "Translate this customer escalation thread into German for the Munich team: [thread with 3 customer names, their purchase history, and pricing]"
None of those look like malware. All of them can contain personal data, contract values, commercial terms, or account health details your legal team would classify as confidential.
Why network DLP misses prompts
First, TLS decryption is often sampled for performance. That is not a design flaw. It is a scaling concession. At volume, full inspection across SaaS traffic adds latency and cost, so teams narrow scope or sample flows. Prompt traffic falls straight into the coverage gap.
Second, prompts are often sent incrementally, not as a neat discrete blob. AI interfaces stream input and output. Some send partial text while the user types. Others chunk requests through background calls. Your network stack may capture pieces, but not the full semantic unit that tells you whether sensitive data just left the business.
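Here is a toy version of the reassembly problem, assuming your sensor could even read the plaintext. The fragments and the rule are invented for illustration:

```typescript
// Invented fragments of one prompt, split the way streaming input might
// chunk it across background calls. No single fragment looks sensitive.
const fragments = [
  "Draft a retention plan: Acme Corp, ARR $4",
  "20k, renews 2024-",
  "07-01, health score 34",
];

// A naive rule for "contract value near a renewal date".
const rule = /ARR \$\d+k, renews \d{4}-\d{2}-\d{2}/;

fragments.forEach((f) => console.log(rule.test(f))); // false, false, false

// Only the reassembled prompt, the full semantic unit, trips the rule.
console.log(rule.test(fragments.join(""))); // true
```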
Third, free-form chat produces a nasty false-positive rate. Pattern matching works reasonably well on a spreadsheet with columns and known labels. It works badly on natural language.
A sentence containing a customer name, a renewal date, and a pricing term may be totally benign or a serious disclosure. If you tune rules tightly enough to catch leakage, you drown analysts in noise. If you tune them loosely enough to run production traffic, you miss the thing you care about.
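One invented pair of sentences shows the bind; the rule and both sentences are made up for illustration:

```typescript
// A naive rule: company name near an ISO date. Both sentences are invented.
const rule = /[A-Z][a-z]+ Corp.*\b\d{4}-\d{2}-\d{2}\b/;

const benign = "Globex Corp announced its user conference for 2024-09-12.";
const leak = "Globex Corp pays $310k ARR and renews on 2024-09-12.";

console.log(rule.test(benign)); // true: noise your analyst learns to ignore
console.log(rule.test(leak));   // true: the disclosure you actually care about
```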
That is why many teams convince themselves a ban is cleaner. Block ChatGPT. Put policy behind it. Move on.
It rarely holds.
Several CISOs who issued blanket ChatGPT bans have reversed course within 90 days because the ban collides with actual working habits. Product teams still need summarisation. Sales still needs drafting help. Support still wants translation.
Once the official path is blocked, people switch to personal devices or unmonitored browser sessions. Risk does not disappear. It moves off your logs.
That is the real failure mode. You stop seeing the behaviour at the exact moment it matters.
A practical control pattern is simple: discover, classify, act with graduated response, log.
Discover which AI tools are already active in your browser fleet. Classify prompt content before submission, at the point where the full text exists. Act based on risk, not with one blunt deny rule: warn on low-risk, require justification on medium-risk, block high-risk. Then log the redacted event so your team can audit behaviour without creating a second sensitive data store.
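The graduated response itself is not complicated. A minimal sketch of the classify-and-act step, with assumed risk tiers and a stubbed classifier rather than any particular product's API:

```typescript
// Sketch of classify-and-act with a graduated response instead of one deny
// rule. The risk tiers and the classify() stub are assumptions.
type Risk = "low" | "medium" | "high";
type Action = "warn" | "require_justification" | "block";

const responses: Record<Risk, Action> = {
  low: "warn",
  medium: "require_justification",
  high: "block",
};

function classify(prompt: string): Risk {
  // Stub. A real classifier scores entities in the full prompt text:
  // personal data, contract values, renewal dates, health scores.
  if (/\b\d{3}-\d{2}-\d{4}\b/.test(prompt)) return "high"; // SSN-shaped
  if (/ARR|renewal|health score/i.test(prompt)) return "medium";
  return "low";
}

function handleSubmit(prompt: string): Action {
  const risk = classify(prompt);
  const action = responses[risk];
  // Log a redacted event, never the prompt itself, so the audit trail does
  // not become a second sensitive data store.
  console.info(JSON.stringify({ risk, action, chars: prompt.length }));
  return action;
}
```

The redacted log line is the part teams skip and regret: it is what lets you audit behaviour later without copying the sensitive prompt anywhere.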
That approach fits the problem better than perimeter thinking because it works where the user actually discloses data: inside the browser, before the prompt leaves the device.
This is the misalignment most CISOs feel but do not always name. You were trained on perimeter and endpoint. Firewalls, proxies, endpoint agents, CASB, mail gateways. Sensible stack.
But AI use lives in a browser interaction layer that those controls only partially understand. By the time the traffic hits the network, the context is degraded. By the time it becomes an endpoint artefact, the disclosure has already happened.
The compliance angle is not abstract either. If a prompt includes customer personal data, you are immediately in regulated territory. Under GDPR Article 5(1)(c), you are expected to limit data to what is necessary. Under GDPR Article 32, you need appropriate technical and organisational measures.
If your staff are pasting account histories, renewal values, or support threads into public AI tools without prompt-layer controls, that is hard to defend as proportionate. In healthcare SaaS, the same pattern can drift into HIPAA Security Rule exposure under 45 CFR §164.312 if ePHI is included.
You do not need a giant programme to address this. Start with evidence.
A 48-hour discovery pass across a sample of employees will usually tell you enough. Which tools are in use. Which teams are heavy users. What categories of data show up in prompts. How often a warning would have been enough, and where a hard block was justified.
In most mid-market teams, the first useful output is not a policy document. It is a small table with counts, categories, and examples redacted well enough for review.
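If events are captured already redacted at submit time, that table falls out of a few lines of aggregation. A sketch, with an assumed event shape:

```typescript
// Sketch of turning 48 hours of captured, already-redacted events into that
// review table: counts by tool and data category, one masked example each.
// The event shape is an assumption for illustration.
interface DiscoveryEvent {
  tool: string;            // e.g. "chatgpt.com"
  category: string;        // e.g. "customer_pii", "pricing", "drafting"
  redactedExample: string; // entities masked at capture time
}

function summarise(events: DiscoveryEvent[]) {
  const rows = new Map<string, { count: number; example: string }>();
  for (const e of events) {
    const key = `${e.tool} / ${e.category}`;
    const row = rows.get(key) ?? { count: 0, example: e.redactedExample };
    row.count += 1;
    rows.set(key, row);
  }
  return [...rows.entries()].map(([toolAndCategory, row]) => ({ toolAndCategory, ...row }));
}

console.table(summarise([])); // feed it the 48-hour capture
```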
Take these three questions to your next security review:
- Where, exactly, do you inspect prompt content before it leaves the browser, not after it becomes network traffic?
- If you ban a public AI tool tomorrow, what proof do you have that usage will not shift to personal devices and disappear from your telemetry?
- Can your team distinguish between low-risk drafting, medium-risk confidential context, and high-risk PII or financial disclosure with a graduated response?
If those answers are vague, your DLP has a blind spot.
Run a 48-hour discovery scan across a sample of employees and see which AI tools are already in your fleet. If you need a browser-layer starting point, Prytive is built for exactly that control gap.