ChatGPT is all the rage, but what about security?
February 2023 by Daniel Spicer, Chief Security Officer for Ivanti
ChatGPT has taken the internet by storm, with major tech companies like Google and Microsoft rolling out their own AI chatbots. Despite the promise of AI chatbots to increase efficiency, one crucial piece of the puzzle seems to have been overlooked by all these tech giants... security.
Whilst we can all see the potential of these AI tools, they are still very much in their infancy. It is vital that businesses remember this before blindly embracing tools that have a high likelihood of introducing serious security threats.
In light of this, we wanted to share some commentary on the topic from Daniel Spicer, Chief Security Officer for Ivanti.
“ChatGPT has taken the internet by storm. So much so that major tech companies such as Google are rolling out their own AI chatbots in response. While AI chatbots undoubtedly have the potential to help teams and individuals become more efficient, we aren’t quite at the stage of letting AI run the world yet. One crucial piece of the puzzle everyone seems to be forgetting in the blind excitement over this new technology is security.
Generative AI tools like ChatGPT and DALL-E are a boon for threat actors. These tools churn out exactly what you ask for, without understanding the intent behind the ask, which makes them the perfect ingredient for crafting phishing emails. The result? Cybercriminals can now create phishing emails at huge scale with minimal effort. Even more alarming, the likes of ChatGPT can create stunningly realistic fake profiles. Even platforms that were previously considered reasonably bot-free, like LinkedIn, now host convincing profiles complete with profile pictures.
If that wasn’t bad enough, cybercriminals are starting to use ChatGPT to develop malware. While ChatGPT specifically is delivering mixed results here, we aren’t far off seeing these AI tools help exploit vulnerabilities and generate harmful malware, perhaps one day with no human intervention at all. And the impact will be felt within hours, not days.
The reality is that while these AI tools are full of potential, they are still very much in their infancy. However, they will only continue to get better, making it even easier for cybercriminals to attack and shifting the arms race between threat actors and vendors. When it comes to innovation, we can’t (and shouldn’t) put the genie back in the bottle. But while we still have the opportunity to get ahead, we need to ensure we’re focused on how we will defend against this new threat.”