GARTNER: By 2028, a quarter of enterprise breaches will be traced back to AI – HGS’ CISO comments
December 2024 by HGS
Gartner has recently published its top predictions for IT organisations and users in 2025 and beyond. Notably, it predicted that by 2028, 25 percent of enterprise breaches will be traced back to artificial intelligence (AI) agent abuse, from both external and malicious internal actors.
Gartner suggests that AI substantially expands what is already a vast “invisible attack surface” for enterprises. As AI continues to evolve and its capabilities become more accessible to the wider public, the threat of AI agent abuse rises, leaving enterprises increasingly vulnerable. Consequently, Gartner recommends that organisations implement new controls and systems to prevent potential AI-related enterprise breaches.
Abid Khan, global practice head of cyber strategy and resilience at HGS, believes that although AI abuse will be the source of many cyber breaches, it will also be the best line of defence against them, mitigating a range of potential threats:
“The continual threat of a cyber breach, whether internal or external, undoubtedly poses a major risk to organisations across the business world and, consequently, to their customers. This risk is constantly increasing, as technological advancements open new avenues for cyber criminals to exploit organisational vulnerabilities, as outlined by Gartner’s recent predictions.
“Nevertheless, despite AI being tipped as a major catalyst for future enterprise breaches, it is also the solution. AI is leading the way for organisations to tackle this wave of cybercrime, becoming many industries’ greatest weapon in their data protection arsenals.
“AI can rapidly analyse huge volumes of data to identify unusual patterns that would otherwise go unnoticed in manual investigation. In the banking industry, for example, it builds predictive models that forecast a consumer’s future spending. This allows the technology to quickly spot unusual buying behaviour in the event of a person’s banking information being breached, flag the suspicious activity and mitigate any potential damage.
“What’s more, as cybercriminals develop new methodologies and tactics, AI can learn from these patterns and update its detection algorithms accordingly, enabling organisations to stay one step ahead. This means that regardless of whether AI agent abuse is occurring internally or externally, AI cybersecurity systems will be able to identify and prevent a range of future breaches.”
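The kind of transaction monitoring Khan describes can be illustrated with a minimal sketch. This is a hypothetical example only, not HGS’s or any vendor’s actual detection system: it flags purchases that deviate sharply from a customer’s spending history (a simple z-score test) and folds normal transactions back into the baseline, loosely mirroring the idea of a model that adapts as new patterns emerge.

```python
# Hypothetical sketch of transaction anomaly flagging using a
# running mean / standard deviation (z-score). Illustrative only.
from statistics import mean, stdev

def flag_anomalies(history, new_txns, threshold=3.0):
    """Flag transactions whose amount lies more than `threshold`
    standard deviations from the customer's spending history."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for amount in new_txns:
        z = abs(amount - mu) / sigma if sigma else 0.0
        if z > threshold:
            flagged.append(amount)
        else:
            # "Learn" from normal behaviour: fold the transaction
            # into the baseline so the model adapts over time.
            history.append(amount)
            mu, sigma = mean(history), stdev(history)
    return flagged

# A routine purchase passes; a wildly out-of-pattern charge is flagged.
spending = [42.0, 38.5, 55.0, 47.2, 51.3, 44.8]
print(flag_anomalies(spending, [49.0, 1200.0]))  # → [1200.0]
```

Real fraud-detection systems use far richer features (merchant, location, time of day) and trained models rather than a single statistic, but the adapt-as-you-go loop is the same basic idea.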