
ETSI releases World-First Report to Mitigate AI-Generated Deepfakes

September 2023 by ETSI

ETSI is thrilled to announce its new Group Report on the use of Artificial Intelligence (AI) for what are commonly referred to as deepfakes. The Report, ETSI GR SAI 011, released by the Securing AI group (ISG SAI), focuses on the use of AI for manipulating multimedia identity representations, illustrates the consequential risks, and describes the measures that can be taken to mitigate them.

“AI techniques allow for automated manipulations which previously required a substantial amount of manual work and, in extreme cases, can even create fake multimedia data from scratch. Deepfakes can also manipulate audio and video files in a targeted manner while preserving high acoustic and visual quality in the results, which was largely infeasible with previous off-the-shelf technology. AI techniques can also be used to manipulate audio and video files in a broader sense, for example by applying changes to the visual or acoustic background. Our ETSI Report proposes measures to mitigate these threats”, explains Scott Cadzow, Chair of ETSI ISG SAI.

ETSI GR SAI 011 outlines many of the more immediate concerns raised by the rise of AI, particularly the use of AI-based techniques for automatically manipulating identity data represented in various media formats such as audio, video, and text. These include deepfakes and, for example, AI-generated text software such as ChatGPT (although, as always per ETSI guidelines, the Report does not address specific products or services). The Report describes the different technical approaches and analyzes the threats posed by deepfakes in various attack scenarios. By analyzing the approaches used, it aims to provide a basis for further technical and organizational measures to mitigate these threats, and it discusses their effectiveness and limitations.

ETSI’s ISG SAI is the only standardization group that focuses on securing AI, and it has already released eight Group Reports. The group works to rationalize the role of AI within the threat landscape and, in doing so, to identify measures that will lead to the safe and secure deployment of AI alongside the population that the AI is intended to serve.
