Briefing - Children and deepfakes - 03-07-2025
Deepfakes – videos, images and audio created using artificial intelligence (AI) to realistically simulate or fabricate content – are booming on the internet. They are becoming increasingly accessible: what previously required powerful tools can now be done with free mobile apps and limited digital skills. At the same time, they are becoming increasingly sophisticated and therefore harder to detect, especially audio deepfakes. While deepfakes have applications in entertainment and creativity, their potential for spreading fake news, creating non-consensual content and undermining trust in digital media is problematic, as they are evolving faster than existing legislative frameworks. A projected 8 million deepfakes will be shared in 2025, up from 500 000 in 2023. According to the European Commission, pornographic material accounts for about 98 % of deepfakes.

Deepfakes pose greater risks for children than for adults: children's cognitive abilities are still developing, and they have more difficulty identifying deepfakes. Children are also more susceptible to harmful online practices, including grooming, cyberbullying and child sexual abuse material.

This highlights the need for legal action and cooperation, including developing the tools and methods needed to tackle these threats at the required scale and pace. There is also a growing need for enhanced generative AI literacy for children, educators and parents, as well as for increased industry efforts and better implementation of relevant European Union (EU) legislation, such as the Artificial Intelligence Act and the Digital Services Act. EU-level monitoring indicators on children's online use are currently non-existent, highlighting the need to establish them.
Source: © European Union, 2025 - EP