Deepfake epidemic for the famous: incidents involving celebrities up by 81%
April 2025 by Surfshark
Deepfake incidents are increasing at an alarming pace, with 179 reported in the first quarter of 2025 — a 19% rise from all of 2024, Surfshark’s analysis shows. Notable fakes include Taylor Swift in compromising situations, Trump criticizing Zelenskyy’s attire, and Elon Musk endorsing pro-China views. Celebrities alone were targeted 47 times during this period, an 81% increase compared to all of 2024.
Looking back, from 2017 to 2022, only 22 incidents were recorded in total. In 2023, that number nearly doubled to 42. In 2024, incidents rose by 257% to 150. Remarkably, the first quarter of 2025 alone has already surpassed last year's full total.
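The growth figures above follow directly from the yearly counts. As an illustrative cross-check (a minimal Python sketch using only the numbers quoted in this article, not Surfshark's own tooling):

```python
# Deepfake incident counts reported in the article.
incidents = {
    "2017-2022": 22,   # six-year total
    "2023": 42,
    "2024": 150,
    "Q1 2025": 179,
}

def pct_change(new, old):
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

print(pct_change(incidents["2024"], incidents["2023"]))     # 2023 -> 2024: 257
print(pct_change(incidents["Q1 2025"], incidents["2024"]))  # 2024 -> Q1 2025: 19
```

Both results match the article's figures: a 257% jump into 2024, and Q1 2025 already 19% above the whole of 2024.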
“Deepfake technology is advancing at an alarming rate, and with it, the capacity for misinformation and malicious intent grows. The potential for harm ranges from tarnished personal reputation to threatened national security. People have to be cautious, as losing trust in the information we hear and see can significantly impact personal privacy, institutions, and even democracy,” says Tomas Stamulis, Chief Security Officer at Surfshark.
The most popular types of deepfake incidents
Surfshark grouped deepfake incidents into four categories: explicit content generation, fraud, politically charged content, and miscellaneous content.
In the first quarter of 2025:
● Explicit content incidents rose to 53, already surpassing the 26 recorded in all of 2024;
● Fraud accounted for 48 incidents, nearly matching the total of 56 from 2024;
● Political incidents reached 40, nearing the 50 from 2024;
● Miscellaneous content increased to 38 incidents, exceeding the previous year's total of 18.
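The four category counts add up exactly to the quarterly and yearly totals reported earlier, which makes a quick consistency check possible. A small Python sketch (using only the figures quoted in this article):

```python
# Category counts from the article: Q1 2025 vs. all of 2024.
q1_2025 = {"explicit": 53, "fraud": 48, "political": 40, "misc": 38}
full_2024 = {"explicit": 26, "fraud": 56, "political": 50, "misc": 18}

# Sanity check: category counts should sum to the reported totals.
assert sum(q1_2025.values()) == 179   # Q1 2025 total
assert sum(full_2024.values()) == 150  # 2024 total

# How much of the 2024 total each category has already reached in Q1 2025.
for cat in q1_2025:
    ratio = q1_2025[cat] / full_2024[cat]
    print(f"{cat}: {ratio:.0%} of the 2024 total")
```

Explicit and miscellaneous content have each already more than doubled their 2024 totals in a single quarter, while fraud and political incidents are closing in on theirs.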
Since 2017, the most common format for deepfake incidents has been video, with 260 reported cases. Images come second, with 132 reported incidents, while audio is the least common format, with 117 incidents.
Who are the main targets?
Politicians were targeted 56 times, almost reaching the 2024 total of 62, even though last year was a US election year. The general public also saw more attacks in the first quarter of 2025, a 23% increase compared to all of 2024. But celebrities saw the sharpest rise, up 81% compared to all of 2024.
Since 2017, celebrities have been targeted in 21% of incidents, totaling 84 cases. Elon Musk was targeted 20 times, accounting for 24% of celebrity-related incidents. Taylor Swift follows with 11 incidents, while Tom Hanks has 3. Kanye West, Emma Watson, and Brad Pitt have each been targeted twice.
Politicians have been involved in 36% of all deepfake incidents, totaling 143 cases since 2017. Such activity often intensifies around elections and is used to push political agendas. Donald Trump is the most targeted, with 25 incidents, making up 18% of politician-related deepfakes. Joe Biden follows with 20 incidents, which occurred primarily during elections and included robocalls that cloned his voice. Kamala Harris and Volodymyr Zelenskyy have faced 6 and 4 incidents, respectively.
The general public was targeted in 43% of cases, with 166 deepfake incidents. Of these, 41% involved various types of fraud, and 39% resulted in the generation of explicit media, meaning unauthorized images or videos of real people.
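The quoted shares of general-public incidents can be turned back into approximate case counts. As an illustrative sketch in Python (the rounding is mine; the article reports only percentages):

```python
# General-public deepfake incidents since 2017, per the article.
public_incidents = 166

# Quoted shares of those incidents.
fraud_share = 0.41     # various types of fraud
explicit_share = 0.39  # unauthorized explicit media

fraud_cases = round(public_incidents * fraud_share)
explicit_cases = round(public_incidents * explicit_share)
print(fraud_cases, explicit_cases)  # → 68 65
```

In other words, roughly 68 fraud-related and 65 explicit-media cases targeted ordinary people over the period.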
How to spot a deepfake
According to T. Stamulis, detecting deepfakes is becoming progressively more difficult due to their widespread distribution and growing realism. The technology that generates deepfakes often outpaces detection tools, and the sheer volume of such content online further complicates distinguishing the genuine from the fake. However, there are some signs you can look out for, including:
● Unnatural movements;
● Color differences;
● Inconsistent lighting, as well as mismatched reflections in each eye and unnatural-looking corneas;
● Poor lip-sync (audio doesn’t match lip movements);
● Blurry or distorted backgrounds;
● Suspicious distribution channels (for example, content shared by bot accounts).
T. Stamulis also notes that illegal deepfakes, such as intimate image abuse or “revenge porn,” child sexual abuse material, hate crimes, fraud, false communications, terrorist activity, stalking, harassment, or blackmail, should be reported to the police. He also strongly recommends precautionary measures, such as agreeing on a secret passphrase with loved ones, especially children and the elderly, to verify callers in case of a suspicious phone call.
METHODOLOGY
This study used data from Resemble.AI and the AI Incident Database to create a combined dataset covering deepfake incidents since 2017. Incidents were included if they involved the generation of fake videos, images, or audio and were covered by media articles. These incidents were categorized into fraud, explicit content generation, politically charged content, and miscellaneous. Additionally, we categorized the target groups: politicians, celebrities, and the general public. We conducted analyses to determine the prevalence of each category and to explore what kind of incidents these categories cover. For complete research material behind this study, visit here.