The increasing adoption of artificial intelligence (AI) in corporate environments has led to a surge in "shadow AI": the use of AI tools without official authorization. A recent study by Ivanti found that 42% of office workers use generative AI tools such as ChatGPT at work, and that one-third of them keep this usage secret. This clandestine behavior often stems from unclear company policies, restrictions on certain AI tools, or a desire for a competitive edge. The secrecy carries significant risks, including data breaches, because sensitive corporate information may be inadvertently exposed to unauthorized AI models (axios.com).
Moreover, the lack of transparency in AI development and deployment can give rise to insider threats. Employees with access to AI systems might manipulate models for personal benefit or to bypass established compliance protocols. For instance, AI-generated deepfakes can be used to fabricate documents or communications, undermining corporate integrity. Compliance departments face challenges in detecting and mitigating these risks, as traditional controls may be circumvented by sophisticated AI tools (wp.nyu.edu).