Mindgard introduces a new tool to shield businesses from GenAI data breaches
March 2024 by Marc Jacob
Mindgard launched a new module tailored specifically for data loss prevention (DLP). The new offering enables organisations to minimise business and reputational risk from data loss whilst leveraging the productivity benefits of third-party LLM and GenAI services such as ChatGPT and Microsoft Copilot.
The breakneck pace of AI evolution has elevated governance and security into urgent concerns. Businesses face reputational risks from both directions: by failing to integrate LLMs into their products and services fast enough to keep up with competitors, and by leaving themselves exposed to data loss from unmonitored use of third-party GenAI solutions. AI systems process vast amounts of data, which could be mishandled intentionally or accidentally, leading to identity theft, financial fraud, and abuse. In 2023, ChatGPT experienced a significant data breach caused by a bug in an open-source library, exposing users’ personal information and chat titles.
Mindgard’s platform already helps customers manage AI security risks, ranging from data poisoning to model theft, across internal AI systems and third-party models. The new module adds protection against the three major data loss threats facing AI systems: outbound risk, compromise of internal models by external attackers, and ecosystem risk.
The new module allows customers to holistically monitor, detect, and report on data loss risks from LLMs and GenAI. Granular AI data access controls allow flexible configuration based on organisational needs, and also limit insider risk from rogue employees.
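Mindgard has not published how its module works internally. As a rough, purely illustrative sketch of the outbound-risk side of DLP for third-party GenAI services, the pattern typically involves scanning a prompt for sensitive data before it leaves the organisation. The pattern names, placeholders, and regexes below are hypothetical examples, not Mindgard's implementation:

```python
import re

# Hypothetical outbound DLP filter: redact common PII patterns from a
# prompt before it is forwarded to a third-party LLM service.
# These patterns are illustrative only, not Mindgard's rule set.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NINO": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders; return findings for audit logging."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

safe_prompt, hits = redact("Summarise the email from jane.doe@example.com")
# The redacted prompt, not the original, would then be sent to the LLM,
# and `hits` would feed the organisation's risk reporting.
```

A production system would go well beyond regexes (classifiers for unstructured secrets, per-team policies, alerting), but the flow of intercept, detect, redact, and report is the core of outbound DLP.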
This approach stands apart from existing AI compliance solutions, allowing organisations to develop or consume AI services without compromising their security posture. Mindgard anticipates strong demand as more countries and states enact AI regulations over the coming years.