Enkrypt AI raises $2.35M

February 2024 by Marc Jacob

Generative AI and large language models (LLMs) present an opportunity for enterprises to gain new efficiencies and improve functionality; however, the safety and security of such technology remain an obstacle. Enkrypt AI is today announcing a $2.35M funding round to solve this problem for enterprises, ensuring their use of generative AI and LLMs is safe, secure and compliant. The seed funding round was led by Boldcap with participation from Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund and angel investors in the AI, healthcare and enterprise space.

Enkrypt AI was founded in 2022 by two Yale PhDs and AI practitioners, Sahil Agarwal (CEO) and Prashanth Harshangi (CTO). With Enkrypt AI, enterprises have a control layer between LLMs and end-users that provides security and safety functionality. Enkrypt AI Sentry has reduced vulnerabilities across a wide range of LLMs, demonstrating a reduction in jailbreaks from 6% to 0.6% in the case of Llama2-7B. The Enkrypt AI team has previously developed and deployed AI models across diverse sectors, including the US Department of Defense and various businesses in self-driving cars, music, insurance and fintech.

Enkrypt AI’s Sentry is the only platform that combines both visibility and security for generative AI applications in the enterprise, so that enterprises can secure and accelerate their generative AI adoption with confidence. A leading Fortune 500 data infrastructure company is using Sentry to gain complete access control and visibility over all of its LLM projects, helping it detect and mitigate LLM risks such as jailbreaks and hallucinations, and prevent sensitive data leaks. This is ultimately leading to faster adoption of LLMs for even more use cases across departments.

Enkrypt AI is proven to help enterprises accelerate their generative AI adoption by up to 10x, deploying applications into production within weeks rather than the two years currently typical within enterprises. Its comprehensive approach addresses the key concerns causing hesitation among enterprise decision-makers:
• Delivers unmatched visibility and oversight of LLM usage and performance across business functions.
• Ensures data privacy and security by protecting sensitive information and guarding against threats.
• Manages compliance with evolving standards through automated monitoring and strict access controls.

The safety of AI has been a key concern for policymakers and experts. Earlier this month, the US Government’s NIST standards body established an AI safety consortium. In an era where generative AI is becoming a transformative force across industries, safeguarding these systems goes beyond best practice – it’s a necessity.


