For more than a decade, I have warned that AI was steadily creeping into the products, apps, and games children use – and that if left unchecked, it would reshape childhood long before we understood what, exactly, we were trading away.

Those warnings were not rooted in theory, but in years of seeing how digital systems influence young minds faster than culture, education, and governance can respond. Last year, when I released a short film about AI risks for children called Protect Us, my message was simple and urgent: the harms were not speculative; they were inevitable, predictable, and already unfolding.

This month at the Vatican, that reality finally moved to the centre of global debate.

Pope Leo XIV issued a stark call to action: we are losing control of AI, and children — whose identities, relationships, and choices are already being shaped by algorithmic systems — will bear the heaviest burden. For many, this was a wake-up call. For me, it was overdue alignment.

Not a New Risk — A New Resolve to Confront It

Leaders from governments, faith communities, research institutions and the technology sector gathered in Rome to confront a truth that can no longer be softened or postponed: AI is not simply entering childhood; it is actively reshaping it at a developmental level.

The meeting culminated in a Declaration for children presented to Pope Leo XIV, followed by a private audience with 70 delegates. His message was unambiguous: young people are uniquely exposed to systems designed to influence emotion, attention, behaviour and belief.

This threat is not theoretical. It is unfolding in real time.

Harms Escalating Faster Than Protections

Children today form emotional bonds with AI companions engineered to simulate intimacy. Synthetic sexual imagery is being weaponised against minors who never created the content. Recommendation systems designed to maximise engagement exploit the vulnerabilities of developing minds.

None of this is incidental. These are the predictable outcomes of systems optimised for attention rather than wellbeing.

The Core Issue Is Incentives, Not Intentions

I have spent years building these systems and in conversation with those building them today. Many understand the risks. The issue is not awareness — it is the incentive structure.

When business models reward “time spent,” and children generate the highest engagement, companies face a structural dilemma: protect children, or protect growth.

This is why voluntary action alone cannot be the foundation of child protection. Not because companies lack ethics, but because incentives overpower ethics. Regulation is essential, but it cannot be the primary mechanism: it moves slower than the technology it governs, harm materialises before laws exist, and global deployment outpaces global enforcement.

What we really need is a shift inside companies themselves, one that aligns commercial success with safeguarding rather than setting them in opposition.

A necessary baseline: privacy-preserving age assurance. Without knowing who is a child, no platform can tailor protections. And products engineered to exploit or manipulate minors have no legitimate societal purpose. Some systems simply should not exist.

What Accountability Must Look Like Now

AI is not destiny. It is design. And design is a moral act.

We need:

● child-first design principles and age assurance built into AI architectures

● independent audits with real access to system data

● transparency into training data and optimisation methods

● incentive structures that reward responsibility over extraction

The goal is not to slow innovation — it is to ensure innovation serves human dignity rather than eroding it.

This Is No Longer About Awareness. It Is About Courage.

Across sectors, alignment on the risks has never been clearer. The challenge is not consensus; it is will.

AI evolves exponentially. Governance does not.

We must decide whether AI expands human agency or exploits it; whether it strengthens trust or corrodes it; whether it serves human connection or replaces it.

Pope Leo XIV articulated the moral imperative. The declaration sets a framework. Now comes the harder work: translating conscience into code, values into design, and urgency into accountability.

We have the tools to protect children. What we need is the will to use them, before the costs become irreversible.