AI is revolutionizing how employees discover and use enterprise knowledge. Instead of wading through siloed data, AI copilots quickly surface the right information—boosting productivity [1].
But there’s a dark side to these copilots: data leaks. A recent survey shows 45% of organizations using AI chatbots have experienced leaks, and 80% cite security and privacy concerns as the biggest barrier to wider adoption [2].
While most data sources provide strong identity and access management, AI re-indexes data in new formats which can create new issues and magnify existing ones. This happens for three key reasons:
1: Excessive Agency: Copilots operate over admin credentials which grants them powers to scrape data indiscriminately. Once these credentials are granted, enterprises have little control over what gets indexed and used for generating responses.
2: Removal of “practical obscurity”: Due to the sheer volume of enterprise data, permissions on the underlying documents are often improperly or excessively granted. Chatbots inherit these incorrect permissions and also make them easily exposable by pulling the information in their response.
3: Novel Threats: AI interactions are opaque and can leak data in unpredictable ways. Prompt injections and other security issues also allow adversaries to manipulate AI into leaking information [3].
Protecting sensitive information from accidental or intentional leakage through AI copilots is a significant challenge. With skyrocketing usage, enterprises cannot afford to wait for copilot makers to improve their security foundations and require external solutions.
Historically, enterprises have existing data security tools like DLPs and DSPMs and there is a natural inclination to use them to secure information in copilots. However, this is a flawed approach.
Limitations of traditional solutions include,
Due to their inability to detect complex sensitive information and deal with ever-changing usage patterns, DLPs and DSPMs are ill-suited for the fast-evolving AI applications. As AI becomes the primary way in which enterprise knowledge is created and consumed, it is important to build AI-native security solutions.
While AI has negatively affected data security, it also presents an opportunity to create innovative data security solutions by using models that can reason about data in the same ways as a human can.
Using these foundations, Realm Labs has developed a breakthrough AI Detection and Response solution that:
For enterprises that wish to harness the powers of AI copilots while minimizing the risks, Realm’s AIDR is the ideal lightweight approach to achieve this balance. For more, reach out to hello@realmlabs.ai.
Want to chat? Find us on LinkedIn.
If you want me to cover a particular area of leadership, you can reach out directly to hello@realmlabs.ai.
[2] https://www.cnbc.com/2024/05/16/the-no-1-risk-companies-see-in-gen-ai-usage-isnt-hallucinations.html
[3] https://www.wired.com/story/chatgpt-poem-forever-security-roundup/
[4] https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf