Noxys


Shadow AI: The Invisible Risk Threatening Your Enterprise in 2026

A banker pastes a client’s IBAN into ChatGPT. A consultant uploads a competitive analysis to Claude. A researcher shares patient data with Gemini. Every day, employees at your organization use AI tools that IT has never approved — and nobody sees it. This is shadow AI.

Tags: Shadow AI, Data Leakage, GDPR, EU AI Act, AI Governance, Enterprise Security

What is Shadow AI?

Shadow AI refers to employees’ use of artificial intelligence tools that have not been approved, evaluated, or deployed by IT or the security team. It is an evolution of shadow IT — the well-known phenomenon of teams using unsanctioned cloud applications — but with a fundamentally different risk profile. Where shadow IT mainly concerns file storage or messaging, shadow AI actively involves inputting sensitive data into external language models.

The distinction is critical: when an employee uses Dropbox without authorization, data stays data. When they use ChatGPT without authorization, data potentially becomes training data, information accessible to a third party, and evidence of a regulatory violation. The risk dynamic is an order of magnitude higher.

Why do employees do it? Three main reasons: immediate productivity (free AI tools deliver results in seconds), slow IT procurement cycles (validation can take months), and the availability of tools far superior to internally approved alternatives. This is not malice — it is individual optimization without awareness of collective risk.

Shadow AI in Numbers: The Scale of the Problem

Data available in 2025 paints an alarming picture. The phenomenon is no longer marginal — it is systemic.

68% of employees use free-tier AI tools via personal accounts, and 57% of them input sensitive data into those tools (Menlo Security, 2025).
90% of companies have employees using personal AI tools for work purposes (Fortune/MIT, 2025).
73.8% of ChatGPT accounts used inside enterprises are non-corporate accounts — personal accounts used on corporate devices (Harmonic Security).
38% of employees share confidential data with AI platforms without prior authorization.
$4.63M is the average cost of a data breach involving shadow AI — $670K more than breaches without shadow AI (IBM Cost of a Data Breach Report, 2024).
40%+ of large enterprises will have had a shadow AI security or compliance incident by 2030, according to Gartner forecasts.
+50% surge in web traffic to generative AI sites, reaching 10.53 billion visits in January 2025 (Cloudflare Radar, 2025).

These figures are not theoretical projections — they describe the current state of most organizations. The question is not whether your employees use shadow AI, but at what scale and with what data.

The 5 Concrete Risks of Shadow AI

1. Sensitive Data Leaks to LLMs

The most immediate form of risk: employees paste customer data (IBANs, social security numbers, addresses), internal system credentials, strategy documents, or confidential emails directly into prompts. This data then flows to third-party vendor servers — OpenAI, Anthropic, Google — without any legal or security validation.

Opt-out parameters for training vary significantly across vendors and pricing tiers. On free plans, data may be used to improve models. According to Cyberhaven, 11% of data pasted into ChatGPT by employees is confidential — a substantial volume at the scale of an organization with several hundred people.

2. GDPR and EU AI Act Violations

Any transfer of personal data to a US-based provider without an adequate legal basis constitutes a GDPR violation. The problem: most employees using free-tier ChatGPT or Gemini do not know they are transferring personal data outside the EU. The absence of an audit trail makes it impossible to demonstrate compliance with Articles 4, 9, 13, and 14 of the EU AI Act.

The potential sanctions stack: up to 4% of global annual turnover under GDPR, plus up to 3% under the EU AI Act. For a company with €500M in revenue, the theoretical cumulative exposure reaches €35M.

3. Intellectual Property Loss

Trade secrets, proprietary source code, product roadmaps, and strategy documents are among the categories of data most frequently entered into unsanctioned AI tools. A developer submitting a proprietary code block for debugging, an analyst pasting a financial model for interpretation: each interaction may constitute an intellectual property disclosure.

The difficulty: unlike a classic file exfiltration, there is no visible copy, no transfer detectable by traditional DLP solutions. Data leaves in a prompt and appears nowhere in standard network logs.

4. Bias and Unauditable Decisions

When operational decisions — CV pre-screening, customer risk scoring, credit evaluation — rely in part on LLM output, the organization embeds systemic biases without any validation process. These models have not been evaluated for these specific uses, and no bias audit has been conducted.

The legal risk is twofold: potential discrimination (employment law, consumer law) and the impossibility of traceability in the event of a challenge. The EU AI Act specifically requires human oversight and explainability for AI systems used in decisions with significant impact.

5. Expanded Attack Surface

Shadow AI significantly expands the organization’s attack surface. Unvetted AI browser extensions may have access to the content of every web page visited. Personal accounts on AI platforms do not benefit from enterprise SSO, IT-policy-enforced MFA, or anomalous connection monitoring.

A compromised personal ChatGPT account, associated with months of professional conversation history, represents a prime intelligence source for an attacker. Third-party AI APIs integrated into internal scripts without key rotation or access controls represent another frequently overlooked vector.

Why Traditional DLP Fails Against Shadow AI

First-generation Data Loss Prevention (DLP) solutions were designed to monitor emails, USB devices, and cloud storage. They do not see what employees type into a web browser window. When an employee copy-pastes an IBAN into chat.openai.com, no standard network DLP detects the sensitive nature of the transmitted data.

Regex-rule-based DLP also generates massive false positive rates when one attempts to adapt them to web traffic inspection. A regular expression to detect IBANs will trigger alerts on completely harmless plain text. The security team ends up flooded with alerts and eventually disables the rules.
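The false-positive problem can be illustrated with a short sketch (not production DLP logic): a rule that merely matches the *shape* of an IBAN flags harmless reference numbers, whereas adding the ISO 13616 mod-97 checksum that every real IBAN must satisfy filters most of them out.

```python
import re

# Shape-only DLP-style pattern: two letters, two check digits, 11-30 alphanumerics.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def iban_checksum_ok(candidate: str) -> bool:
    """ISO 13616 mod-97 check: rearrange, convert letters (A=10..Z=35), test mod 97 == 1."""
    rearranged = candidate[4:] + candidate[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1

# "US12ORDER2025XYZ88" is a made-up order reference that matches the shape;
# "DE89370400440532013000" is a well-known valid example IBAN.
text = "Ref US12ORDER2025XYZ88 sent; payment IBAN DE89370400440532013000."
for m in IBAN_RE.finditer(text):
    verdict = "valid IBAN" if iban_checksum_ok(m.group()) else "false positive"
    print(m.group(), "->", verdict)
```

Even with the checksum, context is still missing: the rule cannot tell a legitimate internal email from a prompt bound for an external LLM.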

DLP tools do not understand AI context: they do not distinguish an email containing an IBAN (legitimate context) from a prompt sent to an external LLM containing the same IBAN (risky context). The semantics of the transmission channel are absent from their detection model.

Finally, no classic DLP solution has an inventory of AI tools in use within the organization. They cannot answer the question: “Which AI tools are our employees actually using?” — a requirement that is explicitly stated in Article 4 of the EU AI Act.

How to Regain Control: 5 Steps

1. Discover

Start by establishing a complete inventory of all AI tools in use across the organization — sanctioned and shadow. Without this visibility, no policy can be enforced. The inventory must cover web applications, browser extensions, API integrations, and desktop tools. It must be continuously updated, as new AI tools emerge every week.

2. Classify

Assign a risk tier to each identified tool. Criteria include: server location (EU or non-EU), data-use policy for training, SOC 2 / ISO 27001 compliance, availability of a GDPR-compliant DPA (Data Processing Agreement), and enterprise SSO support. This classification directly feeds into usage policies.
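One way to make the classification mechanical is to encode the criteria as fields and map them to a coarse tier. The thresholds below are illustrative assumptions, not Noxys's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    eu_hosted: bool          # servers located in the EU
    trains_on_data: bool     # vendor may use inputs to train its models
    soc2_or_iso27001: bool   # holds an audited security certification
    gdpr_dpa: bool           # GDPR-compliant Data Processing Agreement available
    enterprise_sso: bool     # supports corporate SSO

def risk_tier(tool: AITool) -> str:
    """Collapse the classification criteria into a coarse risk tier (illustrative rules)."""
    if tool.trains_on_data or not tool.gdpr_dpa:
        return "high"    # inputs may leave the compliance perimeter entirely
    if not (tool.eu_hosted and tool.soc2_or_iso27001 and tool.enterprise_sso):
        return "medium"
    return "low"

print(risk_tier(AITool("FreeChatbot", False, True, False, False, False)))   # high
print(risk_tier(AITool("EnterpriseLLM", True, False, True, True, True)))    # low
```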

3. Define Policies

Policies must be granular: per department, per tool, per data type. HR may have different rules than the sales team. Three primary action modes: block (the action is prevented), coach (the user receives an educational message and can continue after confirmation), log (the action is recorded without interruption). Coach mode is generally more effective than pure blocking, as it educates without frustrating.
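A granular policy table can be as simple as a lookup keyed on department and risk tier, with a wildcard fallback. The departments and rules here are hypothetical, purely to show the block/coach/log structure:

```python
# Most specific rule wins: (department, tier) before the ("*", tier) wildcard.
POLICIES = {
    ("hr",    "high"):   "block",   # high-risk tools are stopped outright for HR
    ("sales", "high"):   "coach",   # sales sees a warning, may proceed after confirming
    ("*",     "high"):   "coach",
    ("*",     "medium"): "log",     # recorded without interrupting the user
    ("*",     "low"):    "allow",
}

def decide(department: str, risk_tier: str) -> str:
    """Resolve the action for a department/tier pair, defaulting to logging."""
    return POLICIES.get((department, risk_tier),
                        POLICIES.get(("*", risk_tier), "log"))

print(decide("hr", "high"))       # block
print(decide("finance", "high"))  # coach (falls back to the wildcard rule)
```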

4. Monitor in Real Time

Detection of personal (PII) and sensitive data must operate in under 10ms to avoid degrading the user experience. Monitoring must cover not only major AI platforms (ChatGPT, Gemini, Claude, Copilot) but also the dozens of specialized tools used in business functions: writing tools, code assistants, image generators. The audit trail must be immutable and exportable for regulators.
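To see why the 10ms budget is realistic, here is a minimal detector over a couple of assumed patterns (a real engine covers far more PII types and edge cases); compiled-regex scans at prompt scale complete in a fraction of that budget:

```python
import re
import time

# Illustrative pattern set; production detectors cover many more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def detect_pii(prompt: str) -> list[str]:
    """Return the PII categories found in a prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize: contact jane.doe@example.com, IBAN DE89370400440532013000"
start = time.perf_counter()
hits = detect_pii(prompt)
elapsed_ms = (time.perf_counter() - start) * 1000
print(hits)                 # ['email', 'iban']
print(elapsed_ms < 10)      # well under the latency budget for prompts this size
```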

5. Train and Coach

Training must not be limited to an annual awareness session. In-context coaching — directly in the browser, at the moment the employee is about to make a mistake — is far more effective. A clear message explaining why an action is risky, visible at the precise moment it is being considered, builds good reflexes durably. The goal is not to block AI usage, but to secure it.

Noxys covers steps 1, 3, 4, and 5 out of the box. Browser-based discovery is operational in under 10 minutes, with no infrastructure changes or proxy deployment required.

Take Control of Your Shadow AI

Deploy Noxys in under 10 minutes. Free plan for up to 10 users. No credit card required.

FAQ

How do I know if my employees use ChatGPT?

Traditional DLP solutions and network proxies will not reliably tell you. The only effective approach is browser-level detection, which intercepts traffic at the source before TLS encryption. A browser extension deployed via corporate policy (MDM/GPO) allows you to see in real time all AI tools visited, which employees use them, and what types of data are sent to them. Noxys deploys this way in under 10 minutes and delivers a complete inventory from day one.

Is shadow AI illegal?

Shadow AI itself is not illegal, but it can lead to violations of existing laws. If an employee sends customer personal data to an AI provider without a legal basis, the organization violates GDPR — regardless of whether the employee did so inadvertently. If AI tools used in decisions with significant impact (HR, credit) are not documented and audited, the EU AI Act may be breached. Responsibility always rests with the organization, not the employee.

How long does it take to deploy a solution?

It depends on the approach. A network proxy or CASB solution typically requires several weeks of deployment, changes to network parameters, and often an intervention on workstations. A browser-extension-based approach, like Noxys, deploys in under 10 minutes via existing fleet management policies (MDM, GPO, Chrome Enterprise). Initial visibility is immediate; blocking or coaching policies can be activated incrementally.
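As a sketch of what fleet-policy deployment looks like, Chrome's real `ExtensionInstallForcelist` policy can force-install an extension on managed Linux browsers (on Windows the same policy is set under `HKLM\Software\Policies\Google\Chrome` via GPO). The extension ID below is a placeholder, not Noxys's actual ID:

```shell
# Write a managed Chrome policy that force-installs an extension (Linux path).
# "aaaabbbbccccddddeeeeffffgggghhhh" is a HYPOTHETICAL 32-char extension ID.
sudo mkdir -p /etc/opt/chrome/policies/managed
cat <<'EOF' | sudo tee /etc/opt/chrome/policies/managed/ai_monitor.json
{
  "ExtensionInstallForcelist": [
    "aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx"
  ]
}
EOF
```

Because the policy rides on existing management channels, no proxy, certificate, or network change is involved.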
