The Impact of Employee-Driven AI Adoption: Shadow AI

As Bring Your Own AI (BYOAI) gains momentum in 2025, it brings both opportunities and challenges for organisations. We emphasise the need for robust policies and controls to mitigate the risks associated with employee-acquired AI tools, such as large language models (LLMs) like ChatGPT.

While these AI and IT tools can enhance productivity and streamline workflows, their unsupervised adoption raises alarms about data privacy and security vulnerabilities. Notably, companies like Samsung have proactively restricted the use of generative AI in sensitive environments to protect against these risks.

The phenomenon of “Shadow AI” mirrors the previously identified risks of “Shadow IT”, where employees use unapproved software and devices, potentially jeopardising organisational security. Many organisations lack a formal AI use policy, with estimates suggesting that only 20-25% have implemented such guidelines. This absence of oversight can lead to a chaotic environment in which employees introduce AI solutions without proper authorisation.

In response, organisations are adopting varied strategies to manage BYOAI risks. Approaches include forming AI councils for tool approval, restricting access to unapproved applications, providing limited licences for testing, and implementing enterprise-wide licences for vetted AI products.
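
One of these approaches, restricting access to unapproved applications, is often enforced at the network edge. The sketch below shows the idea in Python; it assumes a hypothetical egress-proxy hook and illustrative domain lists, and is not a reference to any specific product.

    # Minimal sketch: network-level restriction of unapproved AI services.
    # The domain lists and the proxy hook are hypothetical illustrations.

    APPROVED_AI_DOMAINS = {"copilot.example-corp.internal"}  # vetted, enterprise-licensed
    BLOCKED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

    def is_request_allowed(host: str) -> bool:
        """Allow approved AI services; deny known unapproved ones."""
        if host in APPROVED_AI_DOMAINS:
            return True
        if host in BLOCKED_AI_DOMAINS:
            return False  # unapproved AI tool: block and flag for review
        return True  # not a known AI endpoint; other controls still apply

    # An egress proxy would run this check on every outbound request.
    for host in ("chat.openai.com", "copilot.example-corp.internal"):
        print(host, "->", "allow" if is_request_allowed(host) else "block")

A deny-by-default variant, allowing only registered domains, is equally possible; the right choice depends on how mature the organisation’s approval process is.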

Shadow AI

Shadow AI refers to the use of artificial intelligence systems or tools within an organisation without formal approval, oversight, or alignment with governance policies. This phenomenon mirrors the earlier emergence of Shadow IT, where employees or teams deployed unauthorised software, hardware, or cloud services to meet their needs outside the IT department’s control.

Both Shadow AI and Shadow IT arise from employees attempting to innovate, streamline tasks, or solve problems quickly, often bypassing organisational protocols. While such practices can foster agility and creativity, they introduce significant risks, including:

  1. Security and Compliance Risks: Shadow AI, like Shadow IT, may use unvetted tools or datasets, leading to data breaches, intellectual property risks, or violations of regulations like the EU AI Act or GDPR.
  2. Lack of Accountability: Unauthorised AI systems often lack clear oversight, making it difficult to ensure transparency, fairness, or explainability in their outputs.
  3. Operational Silos: Shadow AI creates fragmented systems that may not integrate well with broader organisational processes or strategic goals, potentially duplicating efforts or causing inconsistencies.

The dilemma lies in balancing innovation with control: organisations must provide sanctioned AI tools, foster a culture of responsible experimentation, and implement robust governance frameworks to prevent the risks associated with Shadow AI while leveraging its potential benefits.

Strict Controls on Shadow AI

Organisations must develop balanced policies to harness AI’s benefits while safeguarding sensitive data effectively. This includes evaluating how to incorporate employee-driven AI into existing workflows, ensuring strong security measures, and fostering open communication about responsible AI use among staff. By prioritising these practices, companies can maximise the advantages of AI technologies while minimising the associated risks. At our AI certification events, we provide extensive guidance and toolkits, including template policies for establishing these processes and controls.

The IT Security Institute by Copenhagen Compliance is actively navigating this landscape by integrating AI into its operations and implementing data protection filters. Initial results indicate that the utility of AI-generated code has improved, rising from 5% to 10%, though there remains significant room for enhancement.
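
As an illustration of what such a data protection filter can look like, below is a minimal, hypothetical sketch in Python of a pre-prompt redaction step that masks obvious personal data before text is sent to an external LLM. The patterns and function names are assumptions for illustration only and do not represent the institute’s actual implementation.

    import re

    # Hypothetical pre-prompt data protection filter (sketch): mask obvious
    # personal data before text leaves the organisation for an external LLM.
    # Real deployments need far broader coverage (names, customer IDs,
    # credentials in source code) plus human review.

    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
        "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def redact(text: str) -> str:
        """Replace each pattern match with a labelled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Contact Jane at jane.doe@example.com or +45 12 34 56 78."
    print(redact(prompt))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].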
