Guest blog by Molly
Hello! My name is Molly, I’m 16 years old and when I’m not stressing out over GCSEs, I enjoy learning about areas of interest such as feminism in the digital age. I’m very excited to be able to share some information and tips concerning one major issue in this area and I hope more people take an interest in this crucial topic. It is no secret that AI can be immensely dangerous in the wrong hands, but what are the specific threats to girls and women? How can we begin to mitigate the damage? This is a multifaceted issue that requires action from multiple actors if we are to combat it effectively. There are actions anyone can take, as well as specific responsibilities for legislators, parents and teachers. In this blog, I will explore the above questions and offer guidance on how I think we should respond to the threat of deepfakes.
A deepfake is a piece of synthetic media that makes someone look like they are performing the actions or saying the words of someone else. You might have seen them on the news, heard about the explicit AI-generated images of Taylor Swift that flooded X (formerly Twitter), or maybe read about how deepfake audio of Sadiq Khan almost caused 'serious disorder'. Whilst coverage of these cases draws attention to the matter, it also reaches people who, once aware of deepfakes, start engaging with them in unethical ways themselves. Deepfakes can be created by anyone, for free, in less than 25 minutes – that is all it takes to throw truth out of orbit.
Deepfake technology has been in development since the 90s, but the name was only coined in 2017 when a Reddit user named ‘deepfakes’ created a thread of videos of celebrities such as Taylor Swift and Emma Watson having sex. The page was shut down a few months later. But, in that time, it gained over 80,000 subscribers. There is an audience for deepfakes, and not just ones of celebrities. Anyone with a social media presence, from influencers to normal schoolgirls, is at risk of having AI-generated pornography made of them, even if they have never posted themselves naked. There is some singular horror to never even being there at the scene of the crime committed against you.
Despite prevailing coverage portraying deepfakes predominantly as a threat to politicians and celebrities, this isn't actually how they are primarily used. A study from 2023 found that 98% of all deepfake content is non-consensual pornography, and that 99% of the time, women are the victims.
It is incredibly difficult to legislate against deepfakes. Our slow legal systems can’t keep up with the constantly mutating and proliferating nature of deepfakes and the technology used to create them. Those who engage with this technology the most are generally adept at using borderless online platforms, avoiding detection. Even once there are actual laws in place against deepfakes (which have only recently been introduced in the UK and 10 US states), it is onerous for people to completely remove this content from the internet, let alone to prosecute a shadowy online username.
This doesn’t mean we’re defenceless, just underprepared. Here’s what schools and parents can do to help combat the rise of deepfakes:
1. We need to be able to discuss uncomfortable topics like non-consensual pornography more openly. We cannot solve a problem without talking about it. To achieve this, parents can:
Try to integrate this subject into conversations about AI or new technology that may already be happening.
Discuss deepfakes without embarrassment and with an open mind. If a teenager has access to the internet, they are likely already aware, to some extent, of this kind of technology and content. What is key for parents is to make it clear that this issue isn't taboo. Rather, it is something that can and should be discussed not just between peers but with their guardians.
Try not to be alarmed by, or judgemental of, how much a child knows or doesn't know. We are all learning about these things as they develop, and we need to educate ourselves and each other as we go. If a child asks something their parent doesn't know the answer to, or that makes them uncomfortable, the parent should take time to research the issue or find a way to approach it sensitively, rather than shut the conversation down.
Keep conversations going. This is a developing problem and therefore an ongoing discussion. One chat is not going to solve the problem. Try to let your child know that when they want to talk about deepfakes, they can. Make sure this topic is treated like any other serious conversation about safety.
Schools must ensure PSHE lessons equip students to deal with as many angles of the crisis as possible. Within lessons, students should be taught how to identify deepfakes and what to do if they find one of themselves or someone they know.
2. All young people should be taught how to curate their social media presence to minimise the risk of abuse. This can be achieved by:
Enabling privacy settings. Telling teenagers to restrict who can follow them (by switching their account to private) might seem like well-worn advice, but the more reasons they are given to do this, the more likely they are to listen. I would say deepfakes are a more compelling threat than the seemingly abstract one of 'people you don't really know following you'.
Consider using a digital watermark. This can make the work of deepfake creators more traceable and therefore put them off.
Prevent yourself from being impersonated by using strong passwords and multi-factor authentication for as many of your accounts as possible.
Explore the settings on your socials. For example, on X, you can empower yourself by changing your settings so that only people with a profile photo and registered email address (who are therefore more likely to be real people not trying to cause you harm) can see your profile. This can prevent those wanting to create deepfakes from viewing your pages.
The government is currently running a consultation on the regulation of pornography which includes a section about AI generated images. You can find more information here. We are awaiting the Government’s response to this consultation and teachers and students are welcome to get involved.
This isn’t some obscure sci-fi augury; these are real crimes, happening now. In 2021, the number of deepfakes doubled every six months. From 2022 to 2023, their number increased by 464%.
Things don’t look brilliant right now on the AI front, but hope is definitely not lost. The recent surge of deepfake related stories in the media is having its effect and heads are being turned towards this problem. I hope that this awareness will apply not only to people making laws, but to teachers, parents and students. Each effort that someone makes to learn or educate themselves and others about the identification of, or protection against, this kind of technology, abates its threats.
If the online world is poorly curated, then we have got to learn to navigate it ourselves, and that is a process as urgent as it is worthwhile.
If you have any questions about what you’ve read, don’t hesitate to get in touch by emailing us at info@sexedmatters.co.uk. You can also sign up to our newsletter here to stay up to date with our work.