Netwrix: Multinational firm is the latest victim of deepfake tech at a costly price of $25 million

February 2024 by Tyler Reese, Director of Product Management at Netwrix

Last weekend, a finance worker in Hong Kong became the latest victim of deepfake technology, at a hefty price. After attending a video conference call with other members of staff, all of whom were deepfake creations, the worker paid $25 million to the fraudsters. Despite initial suspicion, the worker was put at ease because the participants on the call looked and sounded like his colleagues.

Tyler Reese, Director of Product Management at Netwrix, believes that while developments in defenses against AI are important, it is imperative that organisations go back to the basics:

“Recently, a business in Hong Kong was tricked into wiring $25M during a video call with their CFO and other team members, all of whom appeared to be deepfake recreations. This is further evidence that cybercriminals have already enhanced their tactics with the wealth of AI generation capabilities now available. The proven success of such scams not only renews calls for advanced AI defenses to combat these fakes but also encourages organisations to return to the basics of protecting against advanced fraud.

“First, organisations should implement dual control for any privileged action, including wire transfers: any wire transfer over $10,000 should require approval from two or more people. Second, identity-proofing technologies, such as tokens and authenticators, should be used to verify the identity of individuals participating in financial transactions. As a simpler alternative, pre-agreed privilege keywords can be used to confirm that a privileged action is genuine.

“Third, organisations should regularly review user entitlements in financial applications to ensure they are up to date and reflect the changes associated with joiners, movers, and leavers. In addition, organisations are advised to stick to the separation-of-duties principle so that business users cannot circumvent the security controls in place.

“With AI technology still new to both sides – cybercriminals and security professionals alike – the world is learning how to handle these fresh opportunities and previously unknown threats. Organisations should stay vigilant for more sophisticated ways of defrauding businesses with the help of AI and ensure proper security controls are in place to mitigate the risk of an attack turning into an actual data breach or financial loss.”
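The dual-control threshold and separation-of-duties principle described in the quote can be expressed as a very small amount of approval logic. The following is a minimal, illustrative Python sketch, assuming a hypothetical WireTransferRequest record and the $10,000 threshold mentioned above; it is not taken from any Netwrix product or API.

```python
# Minimal sketch of a dual-control ("four-eyes") check for wire transfers.
# All names, and the $10,000 threshold, are illustrative assumptions based on
# the recommendations quoted above, not any specific product implementation.

from dataclasses import dataclass, field

DUAL_CONTROL_THRESHOLD_USD = 10_000  # transfers above this need two approvers


@dataclass
class WireTransferRequest:
    requester: str
    amount_usd: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Separation of duties: the requester can never approve their own transfer.
        if approver == self.requester:
            raise PermissionError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def is_releasable(self) -> bool:
        # Small transfers need one approval; larger ones need at least two distinct approvers.
        required = 2 if self.amount_usd > DUAL_CONTROL_THRESHOLD_USD else 1
        return len(self.approvals) >= required


if __name__ == "__main__":
    request = WireTransferRequest(requester="alice", amount_usd=25_000_000)
    request.approve("bob")
    print(request.is_releasable())  # False: a $25M transfer still needs a second approver
    request.approve("carol")
    print(request.is_releasable())  # True: two distinct approvers, neither is the requester
```

The point of the sketch is that no single person, however convincing the video call instructing them, can release a large transfer on their own: the release path itself enforces the second pair of eyes.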

