Cybersecurity Awareness Month - Expert Commentary from Industry Leaders
October 2023 by Marcus Fowler, Director of Strategic Threat at Darktrace
Today marks the start of Cybersecurity Awareness Month, following another turbulent year for CISOs and their organizations. In case you are covering the event, I wanted to share commentary from Marcus Fowler, CEO of Darktrace Federal, who offers his thoughts on the current state of the cybersecurity industry along with actionable guidance for organizations and citizens alike.
On Cybersecurity Awareness Month & Impact of AI on the Threat Landscape
This year, CISA’s new theme for Cybersecurity Awareness Month challenges us to reflect on how we can best secure our world. The global threat landscape is always evolving, but AI is poised to have a significant impact on the cybersecurity industry. The tools used by attackers, and the digital environments that need to be protected, are constantly changing and increasingly complex. We expect novel attacks to become the new normal, and we are entering an era where sophisticated attacks can adapt at machine speed and scale. Fortunately, AI is already being used as a powerful tool for defenders, helping to strengthen and empower existing cyber workers so they can keep pace with increasingly complex environments and the constant onslaught of ever-evolving cyber threats.
On Recognizing and Reporting Phishing:
Both consumers and organizations rely on email as a primary collaboration and communication tool, so raising awareness of the prevalence of phishing attacks, and of how to recognize and report them, is important. However, the email threat landscape is constantly evolving, and attackers regularly pivot to new techniques to thwart defenses. For example, between May and July this year, Darktrace’s Cyber AI Research Centre observed an 11% decrease in VIP impersonation attempts (phishing emails that mimic senior executives), while email account takeover attempts increased by 52% and impersonation of the internal IT team increased by 19%. This is just one example of how attackers pivot as tactics become less effective and more easily recognized. The challenge is only poised to grow as the widespread availability of generative AI tools provides novice attackers the ability to craft sophisticated, personalized phishing scams at scale.
In a recent survey, we found that the top three characteristics that make employees think an email is risky are: being invited to click a link or open an attachment, an unknown sender or unexpected content, and poor spelling and grammar. But generative AI is creating a world where ‘bad’ emails may not possess these qualities and are nearly indistinguishable to the human eye. It is becoming unfair to expect employees to identify every phish, and security training, while important, can only go so far. Increasing awareness of phishing attempts and the ability to recognize them is an important first step, but an effective path forward lies in a partnership between AI and human beings. AI can determine whether a communication is malicious or benign and take the burden of responsibility off the human.
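To make the survey finding concrete, here is a minimal, purely illustrative sketch of a rule-based checker built on the three survey-cited signals. Everything in it (function name, phrase lists, scoring) is a hypothetical assumption, not Darktrace's method or any real product logic; the point of the paragraph above is precisely that generative AI can write phishing emails that trip none of these rules, which is why such static heuristics fall short.

```python
# Toy illustration only: score an email against the three signals employees
# reported using to judge risk. All names and word lists are hypothetical.
import re

# Signal 1 phrases: invitations to click a link or open an attachment
RISKY_PHRASES = ("click the link", "open the attachment", "verify your account")
# Signal 3 cues: common misspellings often seen in crude phishing emails
COMMON_TYPOS = ("recieve", "acount", "verfy")

def heuristic_risk_score(sender_known: bool, body: str) -> int:
    """Return 0-3: one point per survey-cited risk signal present."""
    text = body.lower()
    score = 0
    # Signal 1: invitation to click a link / open an attachment, or a raw URL
    if any(p in text for p in RISKY_PHRASES) or re.search(r"https?://", text):
        score += 1
    # Signal 2: unknown sender
    if not sender_known:
        score += 1
    # Signal 3: poor spelling and grammar
    if any(t in text for t in COMMON_TYPOS):
        score += 1
    return score

msg = "Please verfy your acount: click the link http://example.test/login"
print(heuristic_risk_score(sender_known=False, body=msg))  # prints 3
```

A well-crafted AI-generated phish from a compromised, known account with clean grammar and no obvious link bait would score 0 here, which is exactly the gap that behavioral AI defenses aim to close.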