
Once upon a time we were waiting for Artificial Intelligence to become part of our world – but now we know it is already here. No wonder Pope Francis made it the focus of his messages both for the World Day of Peace on 1 January this year and for World Communications Day just a month later.
But what is AI? The Oxford English Dictionary defines artificial intelligence as:

The rapid development of AI is clear to all of us. Within a few years, AI has become capable of recognizing our particular speech patterns, translating from one language to another instantaneously, and writing and debugging computer programs. Many of you may already be using TeachMateAI, an AI-powered digital assistant designed by teachers and tech experts, which can help with everything from report writing to activity ideas. ChatGPT, launched as recently as 30 November 2022 by San Francisco–based OpenAI, can compose music, fairy tales, and student essays.
It can write poetry and song lyrics and summarize texts. It can create lesson plans and PowerPoint presentations. It can also automatically generate illustrations in a wide variety of styles.
So the question is how do we manage all of this? Should we be afraid of AI? What insights can help us as we guide pupils in navigating this new world of possibilities and challenges?
It’s not new
AI is not new, even if we have seen many new developments in a short time. The machines built to break the Enigma code during WWII and the computers whose calculations made the Apollo moon missions possible in the 1960s were both works of great intelligence. They produced information that we would normally associate with the capabilities of human intelligence. The computers currently monitoring nuclear reactors are another example of how machines can carry out 'intelligent' work with a speed and reliability far superior to those of human beings.
This is one of the less obvious things that authors Fr Andrew Pinsent, Robert Seed and Sean Biggins mention in their brilliantly insightful booklet ‘Artificial Intelligence’, published by CTS.

The authors explain that in 1901, wreckage retrieved from an ancient Greek shipwreck included ‘an elaborate geared machine, now called the Antikythera Mechanism, which was what could be described as a sophisticated analogue computer’. Staggeringly, the mechanism was constructed around the 2nd century BC. With this mechanical computer, ancient philosophers would have been able to follow the movements of the Moon and the Sun, and even predict eclipses.
One thing that is new about modern AI is its ability to produce humanlike responses, including creative art and literary work, legal briefs and student essays. These responses can simply appear ‘human’, which is why it is now increasingly difficult (even with sophisticated tools) to tell the difference between human and machine-generated work.
What the Church says
In his New Year message 2024, Pope Francis stressed that human intelligence and the fruits of that intelligence are a gift from God – and that includes AI. He commented that humans should:
In his message later that month, he made the incisive observation that the degree to which human beings resist the temptation to use technology or science as a means of exploiting, dominating or manipulating others is not just a matter of human intelligence, but of morality and the human heart.
This means that the question of whether AI is good news or bad news depends to a large extent on how humans choose to interact with it and use it.
What about schools and education?
In December 2023, the Department for Education looked at some of those opportunities and threats around AI in a blog entitled ‘Artificial intelligence in schools – everything you need to know’. They began by acknowledging that:
The article also acknowledges that AI has the potential to benefit education, especially by completing some of the administrative tasks that often take up teachers’ time. As AI tools are interactive and can be easily personalised, they could be used to provide all young people with their own virtual ‘tutor’. For example, AI could help provide pupils with a bespoke work plan, based on the marking of their work and assessments provided by teachers.
However, AI is a powerful tool that can easily be misused. Schools are already seeing AI being used by pupils as a homework shortcut or as an efficient solution to completing a lengthy piece of coursework. Schools are finding they have to develop new policies around plagiarism and cheating that consider the impact of AI and the difficulty in monitoring its use.
There are also concerns about protecting data and intellectual property, as well as safety and security. The Government blog cited above mentions the danger of ‘children and young people accessing or creating harmful or inappropriate content online through generative AI’. AI-generated pornography is an obvious example.
There are also some deeper concerns about the way AI could shape our human development. For example, what about children or young people finding friendship with an AI chatbot? Far-fetched as this may sound, this scenario is powerfully portrayed in the film ‘Her’, in which Joaquin Phoenix plays Theodore Twombly, a man who develops a relationship with Samantha (voiced by Scarlett Johansson), an artificially intelligent virtual assistant personified through a female voice.
The authors of the CTS booklet tackle this issue head-on in their excellent ‘Questions and Answers’ section. They suggest that children learn many skills through their friendships, and that over-reliance on AI could leave these skills underdeveloped:
The final part of the CTS booklet highlights three ways in which AI can only imitate human-like behaviour and responses.
- Firstly, AI lacks what we would call understanding. It can generate remarkable results with far greater efficiency than humans, but does it understand what it is calculating or doing? The authors suggest that the human experience of insight – those eureka moments where the discovery of truth bursts in upon human consciousness with understanding – is not really possible for AI.
- Secondly, AI lacks the capacity for directed thinking towards imaginary things or future goals. It cannot discern, nor interact with others in a process of discernment. It cannot deliberate about which option to choose. It has no free will. AI can only calculate.
- Finally, AI has no commitment to the truth or to what is good. It can just as easily generate moral as immoral conclusions, provided the calculations are correct according to its algorithm.
The conclusion of the booklet (pp. 45–47) reminds us that while ‘we are understandably dazzled by the immense computational power of AI-related technologies’, we should not forget ‘that these systems lack an appreciation of what is true and moral’, particularly in the steps they ‘choose’ to achieve a given outcome. It is down to us as humans to ensure that we do not become like our AI creations and ‘risk abdicating from the responsibilities of human intelligence’. If we can remain conscious of what and who we are as human beings, then the wise creation and use of AI is wholly compatible with a perspective in which we love God above all things and our neighbour as ourselves.
If you enjoyed this article, you may be interested in this hour-long podcast with Fr Andrew Pinsent on ‘AI, Reality Risks and the Future’. If you are interested in obtaining a copy of the CTS booklet ‘Artificial Intelligence’, go to ctsbooks.org and use the code TENTEN15 to obtain 15% off.

Our CPD session
Our CPD session is aimed at teachers in primary schools and explores the history of AI and some recent Church teaching. It looks at the opportunities and threats AI poses, particularly in education. It also explores how our Life to the Full and Life to the Full Plus programmes lay strong foundations for children’s understanding of what it means to be human, which can inform the way they think about these questions as they grow up.