The battle of GenAI: Defending against ransomware, misinformation and bias

January 2024 by Simon Bain, CEO and founder of OmniIndex


Over half of business executives say they expect Generative AI (GenAI) to lead to a catastrophic cyber-attack in the next year. Yet more than two-thirds still say they’ll use GenAI for cyber defence over the next 12 months, according to PwC’s latest Global Digital Trust Insights Survey. Simon Bain, CEO and founder of web3 data platform OmniIndex, believes that, whether for good or bad, GenAI is everywhere and, when used in the right way, is a powerful enabler for businesses and everyday users alike. That said, all users have a responsibility to ensure that sensitive data remains private and secure.

Commenting on the threat of AI in security, Bain said: “AI is evolving so fast that it’s creating new methods of attack and new risks. The question on most people’s lips is whether or not defence techniques can keep up. The answer is that it depends on how the data is stored and protected.

“When the initial buzz around AI has finally died down, people will once again start to worry about their data privacy and what information they are willingly sharing with the huge wave of AI tools littering our professional and personal lives. It has been repeatedly proven that legacy data storage and the cloud services that rely on them are not fit for purpose when it comes to defending our data against attacks. And yet, nearly all of the leading AI tools that hit the headlines in 2023 use these vulnerable data stores and Web2 systems.”

Bain argues that AI can be a useful tool in the fight back against AI-powered threats, but only when used alongside secure and encrypted data storage as part of a wider security strategy.

“AI can help prevent attacks by being trained to continually scan for attack vectors and trigger set defence procedures in response to an incident. For example, in the case of an attack on a specific database or blockchain node, AI can isolate the damage to that single location and stop it spreading into other areas and causing more damage,” Bain continues.
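To make this concrete, here is a minimal sketch of the pattern Bain describes, assuming nothing about OmniIndex’s own tooling: an anomaly detector is trained on baseline per-node activity metrics using the open-source scikit-learn library, and a hypothetical isolate_node routine stands in for the “set defence procedures” triggered when a node’s behaviour deviates sharply.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline activity per storage node: [requests/min, failed logins/min, MB written/min]
baseline = rng.normal(loc=[120, 1, 50], scale=[15, 1, 8], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def isolate_node(node_id: str) -> None:
    # Hypothetical defence procedure: cut the node off from its peers and raise an alert.
    print(f"[defence] isolating node {node_id} and alerting the security team")

def check_node(node_id: str, metrics: list) -> None:
    # IsolationForest.predict returns -1 for samples it considers anomalous.
    if detector.predict([metrics])[0] == -1:
        isolate_node(node_id)
    else:
        print(f"[monitor] node {node_id} looks normal")

check_node("node-a", [118, 0, 52])    # typical traffic: left alone
check_node("node-b", [950, 40, 600])  # ransomware-like spike: isolated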

“This is most effective with web3 storage and other sandboxed data stores; using traditional cloud storage leaves you particularly vulnerable. Blockchain and web3 platforms offer huge potential for enhanced data security, privacy and productivity, with benefits including immutable data storage and improved end-to-end cryptography such as fully homomorphic encryption.

“In the coming year, people ought to be savvier and more informed about their sensitive data and who or what they share it with. And when this happens, AI companies will no doubt need to reconsider what technology they are using to keep that data secure.”
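One of the benefits Bain cites is immutable data storage. Leaving any particular blockchain aside, the underlying idea can be shown in a short, self-contained sketch (illustrative only, not OmniIndex’s implementation): each record carries the hash of the one before it, so any later tampering breaks the chain and is caught on verification.

import hashlib
import json

def add_record(chain: list, payload: dict) -> None:
    # Each new record commits to the hash of the previous record.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev_hash": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev_hash": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    # Recompute every hash; any edit to an earlier record invalidates the chain.
    prev_hash = "0" * 64
    for record in chain:
        body = json.dumps({"payload": record["payload"], "prev_hash": prev_hash}, sort_keys=True)
        if record["prev_hash"] != prev_hash or record["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

ledger = []
add_record(ledger, {"event": "file_uploaded", "user": "alice"})
add_record(ledger, {"event": "file_shared", "user": "bob"})
print(verify(ledger))                      # True
ledger[0]["payload"]["user"] = "mallory"   # attempted tampering
print(verify(ledger))                      # False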

Bain argues that, as well as contributing to the generation of new attack methods and tactics, the rise in popularity of AI-powered large language models could fuel questions around our trust in AI.

“Misinformation and bias are incredibly difficult to protect against if you are relying on tools like large language models. These models have ingested vast amounts of data in order to be able to regurgitate it all back to the end user on request. The truth is that the information they hold, often gathered from web feeds, is littered with both accidental and deliberate inaccuracies.

“It is possible to develop a more accurate and trustworthy AI engine if you narrow the data set and focus on quality and specific expertise over quantity and generalisations. For example, you can produce a narrow AI engine designed specifically to analyse medical data, which only uses the medical information you have validated to generate its responses. You can also include probability engines within the AI to continually check and score all outputs against the inputs to make sure the responses are relevant.”
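The narrow, validated-data approach Bain outlines can be sketched with standard open-source tools. The example below is illustrative only: scikit-learn’s TF-IDF vectors and cosine similarity stand in for the “probability engine”, and the validated corpus, threshold and answer function are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small corpus of statements that have already been validated by experts.
validated_corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Hypertension is diagnosed at sustained readings above 140/90 mmHg.",
    "Statins reduce LDL cholesterol and cardiovascular risk.",
]

vectorizer = TfidfVectorizer().fit(validated_corpus)
corpus_vectors = vectorizer.transform(validated_corpus)

def answer(query: str, min_relevance: float = 0.2) -> str:
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, corpus_vectors)[0]
    best = scores.argmax()
    # Score the output against the input and refuse to answer when relevance
    # is too low, rather than generalise beyond the validated data.
    if scores[best] < min_relevance:
        return "No validated answer available."
    return f"{validated_corpus[best]} (relevance score: {scores[best]:.2f})"

print(answer("What is the first-line drug for type 2 diabetes?"))
print(answer("Who won the 2022 World Cup?"))  # outside the validated domain: rejected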

“An AI model such as OmniIndex’s is uniquely secure because it generates answers from data that stays encrypted. Our patented fully homomorphic encryption (FHE) is built into the Data Platform’s AI, with full machine learning and large language model integration for real-time analytics of fully encrypted data.

“What this means is that a user’s data is never decrypted and therefore never becomes vulnerable to attack. Even if an attacker gets their hands on your data, they’ll be unable to read any of the information. When this encryption is combined with our blockchain storage and other AI-powered defences, the threat of ransomware or other attacks is eliminated.”
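For readers unfamiliar with fully homomorphic encryption, the minimal sketch below shows the general principle using the open-source TenSEAL library rather than OmniIndex’s patented FHE: arithmetic runs directly on ciphertexts, so the analytics side never handles plaintext and only the key holder can decrypt the results. The figures are invented for illustration.

import tenseal as ts

# CKKS context for approximate arithmetic on real numbers.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# The data owner encrypts sensitive figures before they leave their control.
enc_q1 = ts.ckks_vector(context, [12_000.0, 9_500.0, 15_250.0])
enc_q2 = ts.ckks_vector(context, [11_000.0, 10_500.0, 14_750.0])

# The analytics side works only with ciphertexts: element-wise totals and a 10% uplift.
enc_total = enc_q1 + enc_q2
enc_projection = enc_total * 1.1

# Only the key holder can decrypt the results (values are approximate under CKKS).
print(enc_total.decrypt())       # roughly [23000.0, 20000.0, 30000.0]
print(enc_projection.decrypt())  # roughly [25300.0, 22000.0, 33000.0]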

