Christoph C. Cemper of AIPRM identifies the most common forms of AI scams and how to avoid them

April 2024 by Christoph C. Cemper, an AI expert on behalf of AIPRM

AI is bringing unforeseen economies of scale to the world of fraud, as criminals abuse the power of language tools such as ChatGPT.
Experts at Europol highlight that artificial intelligence enables criminals to target hundreds of victims at once, putting thousands of euros and personal data at risk.
Eager to help adults around the world, Christoph C. Cemper, an AI expert speaking on behalf of AIPRM, identifies the most common forms of AI scams and how to avoid them.

The most common forms of AI scams
1. Deepfakes
Scammers use deepfakes to create manipulated images, audio and video content. Cyber criminals compile large databases of images and videos in order to replicate the voice and appearance of an individual, usually someone in the public eye.
Celebrities such as Martin Lewis have featured in viral deepfake videos over the past year, including one that showed him endorsing a fake investment project from Elon Musk. More recently, amid circulating conspiracy theories, eagle-eyed users have suggested that recent images of Kate Middleton may be deepfakes.
To limit your chances of being stung by a deepfake, be very cautious about the personal information you share online, watermark your photos and enable strong privacy settings.
Christoph C. Cemper, on behalf of AIPRM, shares how to spot a deepfake:
“AI allows scam artists to produce very convincing materials, whether it be through text, images, videos or audio clips.”
“If the deepfake is in the form of a video clip, look for unnatural expressions such as limited blinking and lack of expression, which AI can find hard to mimic. A lot of deepfake videos commonly use lip syncing, so carefully monitor this to ensure speech looks natural.”
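As a practical illustration of the watermarking advice above, here is a minimal sketch of how a visible watermark can be stamped onto a photo before it is shared online. It assumes Python with the Pillow imaging library (pip install Pillow); the file names, watermark text and placement are placeholder choices for illustration, not a prescription from AIPRM.

# Minimal visible-watermark sketch using Pillow; file names are hypothetical.
from PIL import Image, ImageDraw, ImageFont

def watermark_photo(in_path: str, out_path: str, text: str = "(c) My Name") -> None:
    """Stamp a semi-transparent text watermark in the bottom-right corner."""
    base = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Measure the text so it can be offset from the bottom-right corner.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = base.width - (right - left) - 10
    y = base.height - (bottom - top) - 10
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 128))
    # Composite the translucent overlay onto the photo and save as PNG.
    Image.alpha_composite(base, overlay).save(out_path)

watermark_photo("photo.jpg", "photo_watermarked.png")

A visible watermark will not stop a determined scammer, but it makes your photos less attractive raw material and easier to trace if they are reused.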
2. Voice Cloning
A form of deepfake, Voice Cloning replicates the voice of an individual in order to convince victims that they are having a real conversation with that person.
According to security firm McAfee, it reportedly takes only three seconds of audio for artificial imposters to create a convincing AI voice clone. This is especially concerning in an age where 53% of adults share their voice data online at least once per week via social media and voice notes, making it ever easier for cybercriminals to misuse personal data.
To reduce your chances of falling victim to AI scams such as Voice Cloning, limit the personal information you share, especially in voice recordings. It's also advisable to verify a caller's identity and to report any suspicious activity to the relevant authorities.
Christoph C. Cemper shares how to spot Voice Cloning scams:
“If you think you are being conned by a Voice Cloning scam, be sure to ask the caller for as much detail as possible, as only the individual they are pretending to be will know the correct answers.
“Many Voice Cloning scams pretend to be family or friends in distress, so it’s wise to agree on a verbal safety question or phrase with loved ones that only they will know the answer to.”
“Be sure to listen for unexpected noises or odd changes in the scammer’s tone of voice too, such as unnatural pauses, which suggest you aren’t having a real-time conversation with the individual.”
3. Verification Fraud
We have all become accustomed to using passwords and biometrics to access apps on our mobile devices. By creating images and videos of non-existent people, scammers can deceive these verification checks and gain access to financial and other sensitive information.
Christoph C. Cemper shares how to spot Verification Fraud:
“It’s important to spend time educating yourself on recognising and avoiding AI scams such as Verification Fraud. Requests for personal information, unrealistic pricing or pressure to act quickly are all major red flags.”

