Ensuring child security in the age of Artificial Intelligence

Special Report

August 18, 2025

Panel session featuring Nigeria’s Minister of Communication, Innovation and Digital Economy, Bosun Tijani (second left)

The 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, featured urgent and complex discussions on the safety of children and teenagers in the digital realm, particularly given the rapid evolution of Artificial Intelligence (AI) technologies. JUSTINA ASISHANA writes on how experts, policymakers, industry leaders and even young voices converged to tackle what is now recognised not merely as an emerging risk but as a moral imperative: ensuring child security in the age of algorithms.

Last year in South Korea, a chilling revelation shook the country: over 100 secret chat rooms sharing deepfake videos of elementary, middle and high school students were discovered on Telegram. These are not just manipulated images; they are non-consensual intimate images, often created by classmates using the real faces of their peers.

In recent times, photographs of children posted online, whether by their parents or by the children themselves, have been harvested and manipulated into intimate deepfake videos without their consent or that of their parents. Yet when asked about AI, most children express excitement that it is intelligent and useful, while some feel it knows a lot about them.

Digital devices are now one of the leading causes of family disputes. Google’s Head of Families recently said that parents spend between four and 12 hours a week trying to manage their children’s online usage.

In research conducted during an interactive workshop on generative AI in The Hague, several children said they learnt about the technology from friends, TikTok and siblings, while many are still grappling with bias in AI models and their outputs.

Presentations at various sessions of the Internet Governance Forum in Lillestrøm indicated that half of the children surveyed feel addicted to the internet, nearly two-thirds say they often or sometimes feel unsafe online, and more than three-quarters say they encounter content they find disturbing, including sexual content, violence and hate. A quarter to a third are bullied online; half experience sexual harms and a quarter experience sextortion. The acceleration of AI is now supercharging these risks and harms.

The sessions focused on this topic included: building a child-rights-respecting and inclusive digital future; combating sexual deepfakes and safeguarding teens globally; beyond devices, securing students’ futures in a complex digital sphere; elevating children’s voices in AI design; developing a secure, rights-respecting digital future; ensuring the personal integrity of minors online; protecting children from online sexual exploitation, including in live-stream spaces; and a high-level session on securing child safety in the age of algorithms.

Rates of attention-deficit/hyperactivity disorder (ADHD), depression, eating disorders, child sexual abuse and suicide are going through the roof, and the acceleration of AI is set to supercharge these risks and harms. Children’s digital experience is not an inevitable result of the technology itself; it reflects the priorities of those who own, build and deploy it, including AI.

In the session on “Combating Sexual Deep Fakes: Safeguarding Teens Globally,” one participant noted that when students see these deepfakes, they feel shocked, scared and frustrated, while the victims themselves endure anxiety and a sense of being unsafe, alongside the crushing weight of social stigma. The fear can be so profound that students lose trust in their fellow students and feel helpless.

Discussants recognised that safeguarding childhood in the algorithmic age is no longer merely an emerging risk but a moral imperative.

How algorithms shape young lives

Algorithms, far from being neutral tools, are “very active architects of children’s digital experiences,” profoundly influencing what they consume, how long they stay online and even their emotional states, according to Shivani Thabo-Bosniad, a senior journalist.

The concerns raised spanned from widespread online harms to the specific, amplifying dangers of generative AI.

Norway’s Minister of Digitisation and Public Governance, Karianne Tung, said that algorithms have become powerful tools for personalisation and engagement, but they also expose children to harmful content, bias and manipulation.

“They can shape behaviour, they can influence choices and they cause serious damages when it comes to mental and body issues. Let’s be clear on one thing, protecting children online is not about limiting their freedom. It is about empowering them to navigate the digital world safely, confidently and with dignity. It is about ensuring that technology serves their personal growth and not the other way around. So, in my opinion, the platforms need to take more responsibility for taking down content that is damaging and prohibited,” she said.

For developing countries, especially those in Africa, algorithms trained on datasets that do not reflect the diversity of African societies have the potential to lead to cultural erasure and the adoption of cultures from elsewhere. According to Sierra Leone’s Minister of Communications, Technology and Innovation, Salamah Bah, these algorithms have already begun to shape the region and the conversations of its children and teenagers.

A growing crisis of online harms

Mental health impact: United Nations Children’s Fund (UNICEF) research, cited by Child Rights and Business Specialist Josianne Galea, underscores the severe psychological toll: children who experience online abuse, bullying or exploitation exhibit higher levels of anxiety, increased suicidal thoughts and are more prone to self-harm.

Digital addiction and loss of control: Leander Barrington-Leach, Executive Director of the Five Rights Foundation, painted a grim picture, revealing that roughly half of the children surveyed feel addicted to the Internet and nearly two-thirds often feel unsafe online. Alarmingly, children are losing control, sleep, the ability to make connections, to pay attention and to think critically. They are losing their health, sometimes even their lives.

Exposure to harmful content: More than 75 per cent of children encounter disturbing, sexual, violent or hateful content online. Five Rights’ Pathways research revealed that social media accounts registered as belonging to children were exposed to messages from strangers and to illegal or harmful content within hours of creation. Algorithms were found to recommend harmful content, including sexualised or pro-suicide material, weighting negative or extreme content five times higher than neutral or positive content.

Corporate priorities vs. child well-being: A critical concern is that many services children frequent are designed primarily for revenue generation, maximising time spent, reach and activity through features such as push notifications, infinite scroll and random rewards, which prioritise engagement over child well-being. Whistleblower reports indicate that tech companies are often aware of the harm caused to children but choose to retain these revenue-driven designs.

Reports indicate that over 35,000 deepfake intimate images were available for download from just one generative AI platform.

Reports also showed that deepfake tools can easily be accessed and used online, making it possible for children to create deepfakes without restriction.

Kenneth Leung of the Asia-Pacific civil society group highlighted the alarming gap in safeguards, which primarily target adults and leave teenagers in a vulnerable in-between stage. Disturbingly, many of those producing deepfakes are themselves teenagers, who often dismiss their actions as just funny, oblivious to the profound pain they inflict.

Despite changes in laws, it remains unclear whether the new laws are strong enough to stop these crimes, and social media companies face criticism for their slow response in removing illegal content, allowing it to spread widely. Juliana Cunha of Safer Net reported that 90 per cent of Child Sexual Abuse Material (CSAM) reports in 2023 and 2024 related to messaging apps, predominantly Telegram, which showed limited cooperation: of some 20 million reports filed by platforms, none came from Telegram. Janice Richardson, an educator, pointed out that many existing laws are not equipped to handle electronic evidence, necessitating legal amendments in some countries.

Recommendations for a safer digital future

The Internet Governance Forum sessions converged on several critical recommendations for building a child-safe and rights-respecting digital future. Several speakers called for the prioritisation of safety by design and age assurance. The Head of Norad’s Department for Welfare and Human Rights, Lisa Sivertsen, emphasised a safety-by-design approach in which preventative and detection technologies are embedded in service design. There were also recommendations around empowering youth and responsible parenting: Josianne Galea of UNICEF advocated empowering children as activists, participants and pioneers of the digital world rather than merely shielding them from it, while South Africa’s online safety regulator stressed the vital role of educating parents, recognising that children have a right to responsible parenting and to privacy.

On robust regulatory frameworks and enforcement, Zhao Hui of the China Association of Social Societies highlighted China’s efforts in the online protection of minors through laws such as the 2021 Personal Information Protection Law and the 2024 regulation on the protection of minors in cyberspace. These regulations, she said, address cyberbullying, data breaches and internet addiction, with specific rules for generative AI services. South Africa’s online safety regulator stated that it issues take-down notices for prohibited content and collaborates closely with law enforcement on child sexual abuse material cases, pointing out that other countries need regulators who do the same.

There is also a pressing need for industry accountability and self-discipline: Caroline Eriksen of Norges Bank Investment Management, Europe, warned that failure to respect children’s rights could be a material risk to companies’ operating licences. UNICEF said it has developed guidance to encourage companies to address child rights impacts meaningfully, while internet service providers were called on to be proactive in blocking, monitoring and preventing content before it spreads.

Most speakers stressed comprehensive digital literacy and education: schools were urged to teach students about deepfakes, their dangers and their consequences, fostering the digital literacy needed to tell what is real from what is fake. Janice highlighted the need for teacher training and for educational projects that instil human dignity from a young age. Yi Teng Au of the technical community Asia-Pacific group noted the awareness campaign run by South Korea’s Ministry of Education following the deepfake incidents, guiding students on how to respond as victims or witnesses.

Because harmful content hops from platform to platform, enhanced cross-platform collaboration and global cooperation are needed. Deepening international cooperation is vital for building an inclusive digital future that respects children’s rights, as emphasised by Zhao Hui.

Juliana underscored that the misuse of AI to create sexualised images is not merely a technical or legal issue but a reflection of a broader system of gender inequality, demanding cultural and long-term school interventions. Comprehensive support and therapy for victims were also highlighted as crucial.

Citing the need for ethical AI design for children, Maria Eira, an AI expert at the UNICRI Centre for AI and Robotics, declared that the goal cannot be profits; it must be people, urging companies to prioritise children when developing AI tools. Alex, a digital ethics leader, stressed the importance of ensuring children come to no harm, especially in digital marketing, where images and media content should portray children respectfully.

The discussions at the IGF culminated in a resounding call for collective action and underscored a shared responsibility to protect children in the digital age. Digital safety for children is no longer an emerging risk: it is too urgent, too complex and too personal to ignore, and protecting children in the age of algorithms is more than a technical challenge.