Hundreds of LLM servers leaking sensitive data

August 2024 by James Sherlow, Systems Engineering Director, EMEA, for Cequence Security

Following the news, reported by Dark Reading and based on this research, that Flowise has been leaking sensitive data from LLM servers, James Sherlow, Systems Engineering Director, EMEA, at Cequence Security, offers the following comment.

“LLMs bring huge benefits to companies wanting to automate processes, and open-source implementations are a great way to fast-track enhancements that allow them to optimise business processes and interactions with their customers. Given the large amounts of data, especially PII, being processed, a full lifecycle protection scheme needs to be in place.
Even though an open-source LLM project may have thousands of stars and be lauded for its functionality, that does not absolve adopters from testing its security. Testing LLMs before they go into production is critical, to see whether vulnerabilities can be exploited and/or data exfiltrated. Testing alone is not enough, however. To complete the lifecycle, one should monitor all activity to the APIs servicing LLMs, to ensure sensitive data exposure is secured and minimised to authenticated and authorised viewers. Runtime monitoring should extend to vulnerability exploits and, even more dangerous, business logic abuse, where attackers look to exfiltrate data by exploiting the business logic and the flaws within it. LLMs will expose new flaws, and therefore self-learning models are a requirement to identify them.
Given the data that can be in use in such scenarios, rapid inline native response is critical for any security platform. Relying on instructing third-party tools to respond will be too late, and the malicious actor will already have moved on, probably using open-source LLMs themselves. Whilst LLMs allow businesses to enhance functionality and customer experience, they also open new opportunities for attackers to exploit. There is no shortcut here, and those using open-source LLMs should not assume all is secure; they are a base for functionality, not security.”
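Sherlow's point about minimising sensitive data exposure to authenticated and authorised viewers is, in practice, an API-layer control. Below is a minimal, hypothetical Python/Flask sketch of two such controls: rejecting unauthenticated calls to an LLM-serving endpoint and redacting obvious PII patterns before a response leaves the service. The endpoint path, header name, key store and regular expressions are assumptions made for illustration, not Cequence's or Flowise's implementation.

# A minimal sketch of two controls mentioned above: (1) refuse unauthenticated
# access to an LLM-serving API, and (2) redact obvious PII before a response
# leaves the service. Endpoint path, header name and patterns are illustrative.
import re
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

# Hypothetical placeholder; a real deployment would use a secret store or OAuth/OIDC.
VALID_API_KEYS = {"example-key-rotate-me"}

# Rough PII patterns for illustration only; production detection needs to be far broader.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-like numbers
]

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a fixed marker."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

@app.before_request
def require_api_key():
    """Block requests that do not present a known API key."""
    if request.headers.get("X-API-Key") not in VALID_API_KEYS:
        abort(401)

@app.route("/v1/chat", methods=["POST"])
def chat():
    prompt = (request.get_json(silent=True) or {}).get("prompt", "")
    # Placeholder for the real model call (Flowise, a hosted LLM, etc.).
    raw_answer = f"Echoing prompt for demo purposes: {prompt}"
    return jsonify({"answer": redact(raw_answer)})

if __name__ == "__main__":
    app.run(port=8000)

A real deployment would also log every redaction and feed anomalous traffic into the kind of runtime monitoring Sherlow describes, rather than relying on pattern matching alone.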

