A single claims-adjuster paste can put you in scope for three regulators at once: HIPAA for health data, GLBA for nonpublic personal information, and state insurance commissioners applying data security expectations modelled on the NAIC Insurance Data Security Model Law (Model 668). That is the exposure density problem with AI use in insurance. One prompt. Three compliance stacks.

Most compliance managers track one lane well. Health plans know HIPAA. Personal lines teams know GLBA. Security teams track state insurance rules.

Claims work cuts across all of them, often inside the same file.

That file is unusually dense. It can include PHI, NPI, driver's licence data, banking details, policy numbers, accident reports, treatment codes, reserve notes, attorney correspondence, and litigation strategy. In practice, claims files are the densest exposure per paste in insurance.

The problem is not theoretical. It looks like this:

Summarise this claim: patient Sarah Linden, policy ID L-47382, MRI $2,400, diagnosis M54.5, denied by in-network rule. Help me write the denial letter.

Or this:

Explain why this auto claim was denied — here's the full file including driver's licence, VIN, accident report, and medical treatment codes.

Or this:

Draft a subrogation demand letter. Insured: John Park, SSN last 4 8831, auto policy APP-99182, vs. Elena Reyes, policy E-4488. Incident details: [...]

Each prompt carries a different mix of regulated data. The common mistake is treating them as ordinary productivity use. They are not. They are claim file disclosures.

Start with HIPAA. If you handle health or certain life and disability claims, a pasted file can include protected health information under 45 CFR §160.103. If that information is used or disclosed outside the boundaries allowed by 45 CFR §164.502, you have a HIPAA problem before anyone debates whether the model output was useful.

Then GLBA. Claims and policy operations process nonpublic personal information. The Safeguards Rule at 16 CFR Part 314 requires financial institutions to develop, implement, and maintain a comprehensive information security program to protect customer information.

The revised rule, with compliance milestones landing through 2023, is explicit about technical controls, access controls, monitoring, and change management around unauthorised access to NPI. If an adjuster pastes account-linked personal data into an unsanctioned AI tool, your GLBA analysis is not optional.

Then the state layer. Many insurers now operate under state laws modelled on the NAIC Insurance Data Security Model Law (Model 668), with related privacy and governance expectations sitting alongside it in models such as Model 670 (the Insurance Information and Privacy Protection Model Act) and Model 672 (the Privacy of Consumer Financial and Health Information Regulation), depending on line and state implementation.

Compliance managers often know the acronym but not the operational consequence: state insurance regulators have their own enforcement path for insurer data security failures. You do not get to treat AI prompts as outside the insurance control environment.

If you are licensed in New York, add 23 NYCRR Part 500. NY DFS expects a cybersecurity program, written policies, access controls, audit trails, and incident response. Section 500.17 requires notice to the superintendent within 72 hours for certain cybersecurity events.

That timeline gets short when you cannot determine what was pasted, by whom, and into which tool.
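The 72-hour arithmetic is the easy part; the determination timestamp is the hard one. A minimal sketch of the clock, assuming the reportability call has already been made by counsel (reportability itself is a legal determination, not a pattern match):

```typescript
// Sketch: compute the Part 500 notification deadline from the moment you
// determine a reportable cybersecurity event occurred. This only does the
// date arithmetic; deciding reportability is a legal call.
const SEVENTY_TWO_HOURS_MS = 72 * 60 * 60 * 1000;

function notificationDeadline(determinedAt: Date): Date {
  return new Date(determinedAt.getTime() + SEVENTY_TWO_HOURS_MS);
}

// If you cannot date the paste event, you cannot date the determination,
// and this clock starts late or not at all.
console.log(notificationDeadline(new Date("2024-05-14T09:00:00Z")).toISOString());
// "2024-05-17T09:00:00.000Z"
```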

This is where most control programmes break. You have one user action, but you inspect it three different ways, in three different systems, after the fact. That creates audit gaps, weak evidence, and a lot of manual reconstruction.

A browser-layer control is the cleaner answer because the risk happens at the point of paste. You can classify the content once, decide whether to warn or block, and log only the redacted event for review across HIPAA, GLBA, and NAIC-aligned audits.
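Here is a minimal sketch of what that single classification point can look like, assuming a browser extension listening for paste events. The category patterns and the warn-or-block policy are illustrative placeholders, not any vendor's actual rules:

```typescript
// Minimal sketch: classify pasted text once, decide, and log a redacted event.
// Category patterns and the decision policy are illustrative, not production rules.
type Category = "PHI" | "NPI" | "PII" | "CONFIDENTIAL";
type Action = "allow" | "warn" | "block";

const PATTERNS: Record<Category, RegExp[]> = {
  PHI: [
    /\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b/,                 // ICD-10-style diagnosis codes, e.g. M54.5
    /\b(?:diagnosis|treatment|MRI|prescription)\b/i,
  ],
  NPI: [
    /\b\d{3}-\d{2}-\d{4}\b/,                           // SSN-style identifiers
    /\bpolicy\s*(?:ID|no\.?|number)?\s*[:#]?\s*[A-Z]{1,4}-?\d{3,}\b/i,
  ],
  PII: [/\b(?:driver'?s licen[cs]e|VIN)\b/i],
  CONFIDENTIAL: [/\b(?:reserve|subrogation|litigation|attorney)\b/i],
};

function classify(text: string): Category[] {
  return (Object.keys(PATTERNS) as Category[]).filter((cat) =>
    PATTERNS[cat].some((re) => re.test(text))
  );
}

function decide(categories: Category[]): Action {
  if (categories.includes("PHI") || categories.includes("NPI")) return "block";
  return categories.length > 0 ? "warn" : "allow";
}

document.addEventListener("paste", (event: ClipboardEvent) => {
  const text = event.clipboardData?.getData("text/plain") ?? "";
  const categories = classify(text);
  const action = decide(categories);

  if (action === "block") event.preventDefault();

  // Log categories and disposition only; never the pasted content itself.
  console.log(JSON.stringify({ ts: new Date().toISOString(), categories, action }));
});
```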

*Image: Prytive risk breakdown showing categories (PII, financial, confidential), the kind of categorisation required for multi-regulator audits.*

That kind of risk breakdown matters because claims prompts rarely fit a single label. A health claim may include diagnosis codes and treatment details. An auto bodily injury claim can carry driver's licence numbers, VINs, address data, payment information, and counsel notes.

A subrogation workflow may add legal case facts.

If your log can only say “AI used,” it is not enough. You need to know what category of data was present and whether it was stopped, warned, or redacted.
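One way to shape that record, with field names that are assumptions for this sketch rather than a fixed schema:

```typescript
// Illustrative record shape for a redacted prompt event. The substance is
// category plus disposition, with no raw prompt text stored anywhere.
interface PromptEvent {
  timestamp: string;   // ISO 8601
  user: string;        // adjuster identity from your SSO
  tool: "ChatGPT" | "Copilot" | "Gemini" | "Claude" | "other";
  categories: Array<"PHI" | "NPI" | "PII" | "CONFIDENTIAL">;
  action: "blocked" | "warned" | "redacted" | "allowed";
}

const example: PromptEvent = {
  timestamp: "2025-01-09T15:02:11Z",
  user: "adjuster-0417",
  tool: "ChatGPT",
  categories: ["PHI", "NPI"],
  action: "blocked",
};
```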

The audit question is simple: can you show, with evidence, how your team prevents unauthorised disclosure of PHI and NPI in browser-based AI tools?

A mini audit framework helps.

## Which regimes apply to which data in a claims file

### HIPAA

Apply HIPAA when the claim content includes protected health information and your entity or function is within HIPAA scope. Look for names tied to diagnosis, treatment, procedure codes, imaging, prescriptions, medical necessity notes, billing details, or denial rationale.

Relevant use and disclosure limits sit at 45 CFR §164.502. Minimum necessary should also be part of your review under 45 CFR §164.514(d).
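A first-pass PHI screen can approximate the "names tied to diagnosis" test with simple co-occurrence checks. The ICD-10 and CPT shapes below are real code formats; the keyword list and the name heuristic are rough assumptions, not a validated classifier:

```typescript
// Rough PHI screen: flag text where a person-like name co-occurs with
// clinical signals. Keyword list and name heuristic are illustrative only.
const ICD10 = /\b[A-TV-Z]\d{2}(?:\.[0-9A-Z]{1,4})?\b/;   // e.g. M54.5
const CPT = /\b\d{5}\b/;                                  // five-digit procedure codes
const CLINICAL = /\b(?:diagnosis|treatment|MRI|imaging|prescription|medical necessity|denial)\b/i;
const NAME = /\b[A-Z][a-z]+ [A-Z][a-z]+\b/;               // crude "First Last" check

function looksLikePHI(text: string): boolean {
  const clinical = ICD10.test(text) || CPT.test(text) || CLINICAL.test(text);
  return clinical && NAME.test(text);
}

console.log(looksLikePHI("patient Sarah Linden, MRI $2,400, diagnosis M54.5")); // true
console.log(looksLikePHI("quarterly reserve summary, no claimant names"));      // false
```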

### GLBA

Apply GLBA when the prompt contains nonpublic personal information linked to a consumer or customer relationship. That includes account data, policy-linked financial details, payment history, addresses, claim payment information, and identifiers used in servicing.

Your benchmark is the Safeguards Rule at 16 CFR Part 314, especially whether technical safeguards actually prevent unauthorised access to NPI.
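NPI detection is messier because policy and account formats vary by carrier, so treat these patterns as placeholders to tune against your own numbering schemes:

```typescript
// Rough NPI screen. Policy and account formats are carrier-specific, so
// these patterns are placeholders, not a complete detector.
const SSN = /\b\d{3}-\d{2}-\d{4}\b|\bSSN(?:\s+last\s+4)?\D{0,3}\d{4}\b/i;
const POLICY = /\bpolicy\s*(?:ID|no\.?|number)?\s*[:#]?\s*[A-Z]{1,4}-?\d{3,}\b/i;
const PAYMENT = /\b(?:routing|account|IBAN|payment\s+history)\b\D{0,20}\d{4,}/i;

function looksLikeNPI(text: string): boolean {
  return SSN.test(text) || POLICY.test(text) || PAYMENT.test(text);
}

console.log(looksLikeNPI("Insured: John Park, SSN last 4 8831, auto policy APP-99182")); // true
```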

### NAIC Insurance Data Security Model Law

Apply the NAIC state insurance security framework when insurer-controlled nonpublic information is involved. State enactments vary, but the operational themes stay consistent: risk assessment, access control, monitoring, incident response, vendor oversight, and regulator notification.

If an AI prompt contains claim data from your insurance operations, assume your state insurance regulator will view it through that lens.

### NY DFS Part 500

Apply this on top if you are a covered entity licensed in New York. Focus on whether your controls around AI use satisfy expectations for audit trails, user activity monitoring, and incident response under 23 NYCRR Part 500.

If a prompt-related event crosses the notification threshold, Section 500.17's 72-hour clock becomes very real.
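Putting the four regimes together, the mapping from detected categories to applicable frameworks is mechanical once classification exists. A simplified sketch of the framework above, not a legal determination:

```typescript
// Sketch: map detected data categories to the regimes a reviewer should pull
// up. A simplification of the framework above, not legal advice.
type Regime = "HIPAA" | "GLBA" | "NAIC Model 668" | "NY DFS Part 500";

function applicableRegimes(categories: string[], licensedInNY: boolean): Regime[] {
  const regimes: Regime[] = [];
  if (categories.includes("PHI")) regimes.push("HIPAA");
  if (categories.includes("NPI")) regimes.push("GLBA");
  if (categories.length > 0) regimes.push("NAIC Model 668");  // insurer-controlled nonpublic info
  if (licensedInNY && regimes.length > 0) regimes.push("NY DFS Part 500");
  return regimes;
}

console.log(applicableRegimes(["PHI", "NPI"], true));
// [ "HIPAA", "GLBA", "NAIC Model 668", "NY DFS Part 500" ]
```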

The practical takeaway is blunt. Do not start with policy language about “approved AI use.” Start with evidence. Audit 10 adjusters for a week. Measure what data types are actually going into ChatGPT, Copilot, Gemini, and Claude. Count PHI. Count financial data. Count confidential claim notes. Then compare that reality to the controls you can prove.
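If your browser-layer log already captures redacted events in the shape sketched earlier, that week-long audit reduces to a count by tool and category:

```typescript
// Aggregate a week of redacted prompt events into counts per tool and
// category, reusing the illustrative PromptEvent shape sketched earlier.
interface PromptEvent {
  tool: string;
  categories: string[];
  action: string;
}

function auditCounts(events: PromptEvent[]): Record<string, Record<string, number>> {
  const counts: Record<string, Record<string, number>> = {};
  for (const e of events) {
    counts[e.tool] ??= {};
    for (const cat of e.categories) {
      counts[e.tool][cat] = (counts[e.tool][cat] ?? 0) + 1;
    }
  }
  return counts;
}

const week: PromptEvent[] = [
  { tool: "ChatGPT", categories: ["PHI", "NPI"], action: "blocked" },
  { tool: "Copilot", categories: ["CONFIDENTIAL"], action: "warned" },
  { tool: "ChatGPT", categories: ["PHI"], action: "redacted" },
];

console.log(auditCounts(week));
// { ChatGPT: { PHI: 2, NPI: 1 }, Copilot: { CONFIDENTIAL: 1 } }
```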

If you want a fast way to do that at the browser layer, see how Prytive audits and redacts AI prompts before they leave the page.