The Browser Gap: Why Your DLP Misses Shadow AI
The new exfil path is normal behavior
Security controls were built around networks, endpoints, and sanctioned SaaS.
But generative‑AI usage has created a new, high‑frequency path:
- Copy/paste from customer tools into chat interfaces
- Uploading "just one file" to summarize or rewrite
- Reusing credentials and snippets in prompts
None of this looks like an attacker. It looks like work.
Why existing controls don't catch it
Most stacks struggle at the browser layer because:
- Context is in the DOM (what the user is doing inside the page)
- SaaS traffic is encrypted (TLS leaves little for network inspection)
- Endpoint agents see file activity but not the web‑app destination the data is headed to
- CASB helps but often isn't real‑time at the moment of paste/upload
What to control (without breaking teams)
A practical rollout is phased:
- Monitor: log risky actions to build a baseline
- Coach: warn and require a justification for medium‑risk events
- Enforce: block only for clearly critical patterns and destinations
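The three phases above can be sketched as a single policy function. Everything here is illustrative: the risk weights, data‑type labels, destination list, and thresholds are assumptions you would replace with values learned during your own baseline phase, not a vendor schema.

```python
from dataclasses import dataclass

# Illustrative risk weights -- tune these from your monitoring baseline.
DATA_TYPE_RISK = {"public": 0, "internal": 1, "customer_pii": 2, "credentials": 3}
# Example unsanctioned AI destinations (hypothetical allow/deny scoping).
UNSANCTIONED_AI = {"chat.openai.com", "claude.ai"}

@dataclass
class BrowserEvent:
    action: str       # "paste" or "upload"
    data_type: str    # output of a content classifier, e.g. "customer_pii"
    destination: str  # hostname the data is headed to

def decide(event: BrowserEvent) -> str:
    """Map one browser event to a rollout response: monitor, coach, or enforce."""
    risk = DATA_TYPE_RISK.get(event.data_type, 1)
    if event.destination in UNSANCTIONED_AI:
        risk += 1
    if risk >= 4:
        return "enforce"  # block: clearly critical data + destination
    if risk >= 2:
        return "coach"    # warn and require a justification
    return "monitor"      # log only, to build the baseline
```

For example, pasting credentials into an unsanctioned chat interface scores 3 + 1 and is blocked, while pasting internal text into an internal tool is merely logged. Keeping the decision in one pure function makes each phase of the rollout a threshold change rather than a rewrite.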
What "good" looks like
In a well‑run pilot you should be able to answer:
- Which apps/users generate the most risky events?
- What data types trigger the majority of alerts?
- What policies prevented real leakage?
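Answering those questions takes only a small aggregation over the event log from the monitor phase. A minimal sketch, assuming events are logged as dicts with `user`, `app`, `data_type`, and `risk` fields (the field names and sample values are hypothetical):

```python
from collections import Counter

def top_risky(events, key, n=3):
    """Rank values of one attribute (app, user, data_type) by risky-event count."""
    return Counter(e[key] for e in events if e["risk"] != "low").most_common(n)

# Example log -- illustrative records, not real data.
events = [
    {"user": "ana", "app": "chatgpt", "data_type": "customer_pii", "risk": "high"},
    {"user": "ana", "app": "chatgpt", "data_type": "source_code", "risk": "medium"},
    {"user": "bo",  "app": "gemini",  "data_type": "customer_pii", "risk": "high"},
    {"user": "bo",  "app": "chatgpt", "data_type": "public",       "risk": "low"},
]

# Which apps generate the most risky events?
# top_risky(events, "app") -> [("chatgpt", 2), ("gemini", 1)]
```

The same call with `key="user"` or `key="data_type"` answers the other two pilot questions, which is the point: a two‑week baseline plus a ten‑line report beats speculation.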
A simple next step
Pick one workflow (e.g., Finance uploads to ChatGPT), run a two‑week baseline, then tune.
You'll learn more than months of policy writing.
"The biggest security risk isn't the tool—it's the gap between what you think is happening and what actually is."