Action needed to close legal gaps on AI-generated child sexual abuse material

New research has uncovered legal gaps for tackling child sexual abuse material (CSAM) created by generative artificial intelligence (gen-AI) across the Five Eyes nations.

The research has prompted calls for lawmakers to strengthen legislation to ensure children are protected as gen-AI evolves rapidly.

The findings are part of an investigation into the robustness of regulations across the UK, US, Australia, New Zealand and Canada. Known as the Five Eyes nations, the countries work closely on issues such as cybersecurity and the global problem of technology-facilitated child sexual exploitation and abuse (TF-CSEA).

The study is part of a new report investigating who benefits from the multi-billion-dollar industry of child sexual exploitation and abuse, carried out by Childlight – Global Child Safety Institute, which is hosted by the University of Edinburgh.

The report flags legal gaps across the five countries. In the UK, for example, there is no provision for indecent pseudo-photographs that include fictitious children, though this could be addressed through case law interpretation or legislative updates.

Scotland, specifically, has gaps on topics such as the criminalisation of paedophile manuals (guides on how to groom, sexually abuse and exploit children) and of non-photographic indecent images of children.

Childlight research fellow Dr Konstantinos Gaitis said: “While we found generally laws across the Five Eyes countries are broad enough already to cover the advent of AI or are adapting to it through legislative updates and case law, there are still some gaps and work to be done. These gaps should be addressed to fully provide the protections and accountability needed to keep children safe. AI is rapidly developing, with increased levels of autonomy, so it is essential laws keep pace.”

The study is among the first to examine the regulatory context of the five closely interconnected countries in terms of accountability around CSAM created via gen-AI. Researchers looked at hundreds of pieces of legislation, cases and statutes, using the ‘black-letter law’ approach, which focuses on the letter of the law as written.

The review also found many examples of good practice across the five countries which are helping ensure those who create gen-AI child abuse material are held accountable, including Online Safety Acts in the UK and Australia, robust federal laws in the US and the proposed Online Harms Act in Canada. Keeping legislation broad, or ‘tech-agnostic’, can help ensure it keeps pace with technological advances, with targeted modifications made as gen-AI develops.
