The Children’s Commissioner for England has called for a total ban on apps that use artificial intelligence (AI) to create sexually explicit images of children. These apps, which manipulate real photos or generate deepfakes, disproportionately affect young girls. The UK government is being urged to take stronger action to tackle this growing issue, with additional legal measures expected.


Call for Action Against AI Apps Creating Explicit Images of Children

The Children’s Commissioner for England, Dame Rachel de Souza, is urgently calling for the UK government to ban apps that use artificial intelligence (AI) to create sexually explicit images of children. These apps enable “nudification” – the use of AI to alter photos so that individuals appear naked – and pose a serious risk, especially to young girls. Dame Rachel has highlighted the disturbing trend of AI-driven deepfake technology being used to manipulate images of children into sexually explicit content.

AI and Child Safety: A Growing Concern

Deepfake technology involves the use of AI to create videos, pictures, or audio clips that appear authentic but are entirely fabricated. In her report, Dame Rachel emphasized that these tools are disproportionately affecting girls and young women. Many of the apps seem to target female bodies specifically, leading to concerns that young girls are at greater risk. According to the Commissioner, girls are becoming increasingly cautious about sharing images online, fearing they could be manipulated or exploited through these AI apps. This behavior mirrors the offline precautions girls take for personal safety, such as avoiding walking home alone at night.

The Need for Government Action

Dame Rachel de Souza has urged the government to take stronger measures to prevent such apps from operating unchecked. She is advocating for laws that would hold AI developers accountable for the risks their products pose to children, and suggests setting up a process to swiftly remove harmful deepfake images from the internet. She also calls for deepfake sexual abuse to be recognized as a form of violence against women and girls, arguing this is a crucial step in addressing the issue.

Paul Whiteman, general secretary of the National Association of Head Teachers (NAHT), echoed the concerns raised by Dame Rachel, stating that the rapid advancement of technology is outpacing both the law and education on the subject. The legal landscape surrounding AI-generated abuse material is currently being strengthened. In February, the government proposed laws to make it illegal to create, distribute, or possess AI tools designed to produce such content.

A Rising Threat

Statistics from the Internet Watch Foundation reveal a sharp rise in AI-generated child sexual abuse material. In 2024, the foundation received 245 reports of such material, a 380% increase on the previous year. This spike underlines the urgency of the situation and the need for immediate action from both lawmakers and tech companies.

Regulatory Responses and Criticisms

The UK’s media regulator, Ofcom, recently introduced an updated version of the Children’s Code, which imposes stricter requirements on platforms that host harmful content, including pornography and material related to self-harm or eating disorders. However, Dame Rachel has criticized the Children’s Code, suggesting it prioritizes the interests of tech companies over the safety of children.

A spokesperson for the government responded by reaffirming that the creation, distribution, or possession of child sexual abuse material, including AI-generated content, is illegal. Under the Online Safety Act, platforms must take action to remove such content, or they risk facing significant fines. Notably, the UK is the first country to introduce further legal measures specifically targeting AI-generated child sexual abuse material.


Source: BBC News
