Privacera announced the General Availability of Privacera AI Governance (PAIG)

October 2023 by Marc Jacob

Privacera announced the General Availability (GA) of Privacera AI Governance (PAIG). PAIG allows organizations to innovate securely with generative AI (GenAI) technologies by protecting the entire AI application lifecycle: discovering and securing sensitive fine-tuning data, the Retrieval Augmented Generation (RAG) sources and user interactions that feed AI-powered models, and model outputs, with continuous monitoring of AI governance through comprehensive audit trails. Securing sensitive data and managing the other risks of AI applications is crucial for organizations that want to accelerate their GenAI product strategies.

The emergence of Large Language Models (LLMs) offers a vast range of opportunities to build and refine new experiences and products. Whether for content creation, virtual assistants, or more productive code development, data-driven organizations large and small are going to invest in diverse LLM-powered applications. With these opportunities comes an increased need to secure and govern the use of LLMs inside and outside any enterprise, small or large: risks include exposure of sensitive or unauthorized data, IP leakage, abuse of models, and regulatory compliance failures.

PAIG enables organizations to leverage the power of GenAI responsibly by providing deep visibility into the risks of any model in use and by helping enterprise teams apply consistent controls to both AI applications and the underlying data used to train and fine-tune them. PAIG is designed to be open and flexible so it can protect a range of GenAI applications, models and data, whether the data sets are structured, semi-structured or fully unstructured. This design principle is particularly relevant as organizations increasingly apply GenAI techniques to a broad range of use cases to extract, organize and derive critical insights.

PAIG offers the following key capabilities:

• Discover and classify sensitive data used to train or fine-tune custom or generally available GenAI models and vector databases (VectorDBs)
• Protect models and VectorDBs from exposure to sensitive training or tuning data
• Continuously secure models against sensitive data in prompt inputs and outputs, with real-time allow/deny decisions, masking, or redaction of sensitive data
• Comprehensive observability, with built-in dashboards and user query analytics that show who accessed which AI applications, what sensitive data was accessed or denied, which sensitive data assets each AI application relies on, and which data protection policies are in place for each application
• Easy integration with existing security monitoring and management tools
• An open and extensible SDK that integrates seamlessly into your GenAI applications and LLM libraries
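The allow/deny, masking, and redaction controls described above can be illustrated with a minimal sketch. This is not Privacera's actual SDK or API; the function name, patterns, and policy format are hypothetical, and a real classification engine would detect far more data types than the two regex patterns shown here.

```python
import re

# Illustrative patterns for two common sensitive-data types (hypothetical;
# a real governance engine uses a much richer classification catalog).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str, policy: dict) -> str:
    """Apply a per-type policy ('mask' or 'deny') to a user prompt
    before it reaches the LLM. Unlisted types default to 'mask'."""
    for label, pattern in PATTERNS.items():
        action = policy.get(label, "mask")
        if action == "deny" and pattern.search(prompt):
            # Block the whole request rather than forward sensitive data.
            raise PermissionError(f"Prompt contains disallowed {label} data")
        if action == "mask":
            # Replace each match with a typed placeholder token.
            prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

masked = redact_prompt(
    "Contact jane@acme.com, SSN 123-45-6789",
    {"EMAIL": "mask", "SSN": "mask"},
)
print(masked)  # Contact <EMAIL>, SSN <SSN>
```

The same check would run symmetrically on model outputs, so that sensitive values surfaced by the model are masked or denied before they reach the user.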

