Generative AI in a Post-Truth Era

Content warning: This article includes topics such as non-fatal strangulation, consent, so-called honour-based abuse and femicide.  

If you or anyone you know needs support, help is available to you now. The Live Fear Free Helpline can provide confidential advice or support around domestic abuse, sexual violence or violence against women. It is available 24 hours a day, 7 days a week. Call 0808 80 10 800, text 07860077333 or email [email protected]. You can also access the Live Fear Free Helpline online chat by heading to: https://www.gov.wales/live-fear-free/contact-live-fear-free. 

The world of technology and artificial intelligence (AI) has been around for far longer than is often acknowledged. From Alan Turing’s early work on machine intelligence in the 1950s to the first public demonstrations of ChatGPT in 2022, AI has been part of our lives for many years. Whilst it may seem like a new concept, the need to legislate for AI has hung over the internet from the beginning. Now, with ‘Sora 2’, the latest video and audio generation programme from OpenAI, the call for regulation becomes ever more urgent if we are to protect survivors from the harms of AI. 

Since the implementation of the United Kingdom’s Online Safety Act 2023, we have already seen the practical challenges of legislating for the online space. The legislation requires pornographic content providers to utilise “highly effective age assurance measures”1. OFCOM, the regulator responsible for these providers, published an open letter in November 2024 reminding generative AI platforms of their obligations under the Act.  

Currently, generative AI platforms do not allow users to explicitly request pornographic content but, concerningly, these safeguards are easily circumvented. In 1.6% of cases, Sora 2 was able to create “sexual deep fakes” despite safeguards supposedly blocking users from doing so2. Whilst this may seem an insignificant statistic, there are countless articles on websites tailored towards sexual generative AI explaining how to bypass safeguards. Every individual image generated is one too many. 

We have also seen these “highly effective” age assurance measures bypassed in the past by game character avatars. Many pornographic content sites simply use AI for age estimation, with no legal ID required. It is also well known that users can bypass UK age verification triggers by utilising virtual private networks, which make it appear that the user is accessing the internet from a country other than their own.  

Already, there are trends of Sora 2-generated content featuring young women “outfit checking”. These videos are incredibly suggestive and are leaking onto other platforms such as TikTok, where some users are dedicated to this sort of content.  

An “NSFW (Not Safe For Work) AI Video Generator” advertises the ability to create AI pornographic videos that are “as unique as your desires”. Its landing page takes you immediately to a function which asks you to upload an image or generate your own. This page does not ask for any age verification. Whilst these websites block users from using particular names, for example a celebrity’s name, to generate content, they do not stop users from uploading pictures of people they know. OpenAI’s own article on “Launching Sora responsibly” only addresses non-consensual images of “public figures”3. Consent is only addressed in relation to creating images in the user’s own likeness. However, how does Sora 2 know it is the user’s own image that is being created? Answer: it doesn’t. There are no safeguards in place that require users to prove that the image or video they are creating is of themselves. It simply requires a recording of a face. Additionally, whilst users can ask their friends for permission to create a video of them, those friends do not need to approve the content of that video, only that they are happy for the creator to use their likeness.  

Users of Sora 2 are not supposed to be able to upload images of photorealistic people. However, the author of this blog was able to find a workaround easily.4 There is a risk that perpetrators could use images of survivors to create non-consensual images. In a world of fake news, Sora 2 also allows videos of news anchors to be created. 

OpenAI is looking to release Sora 2 via an application programming interface, and we are already behind on this. Without proper and modern legislation, such access will allow users to integrate generative content models like Sora 2 into their own programmes and for personal use. The user base will inevitably widen as these programmes become more accessible, and users will not need to access Sora 2 itself to use the software. This could lead to a snowballing of access and more non-consensual images being created.  

Policy makers and the legislature need to stop leaving safeguarding and moderation to those who control these platforms. Survivors of non-consensual intimate image abuse, deep fakes and impersonation cannot wait for platform owners to get to grips with real safeguards that actually work.  

More recently, Grok AI has been used to remove the clothing from images of women who have not consented. The BBC reported on Samantha Smith, who had shared her story5 on social media; many others commented with similar experiences. OFCOM has responded, stating that tech firms “must assess the risk”, but according to the BBC, it was unable to confirm whether it is investigating Grok on this matter. This is a stark example of how the guidance in place through the Online Safety Act is not working to protect survivors of tech-facilitated abuse, and of the risk of leaving tech companies to govern themselves. Survivors of so-called ‘Honour-Based Abuse’ are particularly vulnerable to these sorts of abuses. AI does not seem to be aware of cultural nuances around what constitutes a sexualised image.   

We would not allow a car with faulty seatbelts, and we should not allow programmes that do not implement safeguards properly. Yet programmes like Sora 2, ChatGPT, Grok and Gemini regularly generate content that breaches their own policies. Sexually explicit images are easy to create if users are careful with their language.  

AI trains itself on the endless data that the internet provides. It would be valid to question whether it is so easy to create sexually explicit images through generative AI because the internet is so ready to sexualise women and girls.  

But there is a deeper, darker end waiting for us in a world where anyone can create a pornographic AI video depicting their desires. This does not just affect algorithms and social media feeds. Dangerous online material has led to an increase in non-fatal strangulation, leaving survivors facing life-threatening situations.  

Increasingly problematic and violent themes are being found within pornographic content. With the depiction of non-fatal strangulation in pornography now banned, the question remains of what other dangerous themes persist. The Sexual Exploitation Research and Policy Institute found that “titles featuring female teenagers are three times more likely to indicate aggression than titles featuring adult women”6. The same study indicated that higher pornography consumption has been linked with “hostile sexism”. A digital character that anyone can manipulate to do whatever they like could lead to dangerous situations when the user is then faced with a real human being and the question of consent.  

Sora 2 and other similar models like Veo 3 are incredibly realistic, and it is becoming more and more difficult to tell what is real and what isn’t. Not only is this problematic in a post-truth era, where objective facts are less influential in shaping public opinion, but with channels dedicated to generative AI content depicting the murder of women, we are yet to see the full consequences of this newly accessible programme and others like it.  

We are simply at the precipice of technology that lawmakers are not wholly on top of. Whilst deep learning holds a mirror up to an online database that overwhelmingly sexualises women and children, newer AI programmes are being developed every week. Whilst governmental departments struggle to understand how to manage this problem, survivors are facing new forms of harm.  

 

Written by Stephanie Grimshaw, Head of Public Affairs & Communications.