Between Silicon and Soul
A three-part series on AI and ethics, looking at what it means to be a human person through the lens of faith.
On January 22nd, I gave a keynote on AI at the Annual Meeting of the Gulf Churches Fellowship (GCF) in Dubai. The special focus topic of the meeting was “Faithful Christian Responses to New Technology and Artificial Intelligence.”
I have been fascinated by the fantastic contributions that various faiths and religions around the world are making to the debates on the ethical development and use of AI and advanced technologies. While I will use follow-up re-stacks to point you to such contributions, here I want to share what I said at the GCF annual meeting. I hope to hear your thoughts in the comments or by email.
As the speech was a bit long, I am going to post it as a three-part series.
Here we go.
“When I look at your heavens, the work of your fingers,
the moon and the stars that you have established;
what are human beings that you are mindful of them,
mortals that you care for them?”
This question, though it may seem grandiose to start our conversation with, is vital for understanding the intersection of faith, culture, and emerging technologies like artificial intelligence.
Thank you for welcoming me here today. It's an honor to share with you my personal journey, an exploration of artificial intelligence through the lens of faith. While our discussion today centers on new technologies, particularly artificial intelligence or AI, the insights we'll discuss are equally applicable to the rapid evolution of technology in general.
Artificial Intelligence is a term that's become commonplace. It refers to the ability of machines to perform tasks that typically require human intelligence. There's a lot of buzz about AI, but how much do we truly understand it? How does it impact our lives, and what challenges and opportunities does it present, especially in the context of our Christian faith? These are some of the key questions I hope to explore with you today.
I'd like to share three stories with you, drawing lessons and insights related to our theme.
Learning
Let's start with the first story. Not long ago, when my daughter Caterina was around four years old, she approached me as I was preparing a barbecue and asked, 'Daddy, what is this?' She was pointing to a piece of wood soaking in water. I explained that soaking wood chips creates a pleasant smoke when added to the barbecue. Her response was typical of children her age: a simple, straightforward 'Ha', her way of saying, 'Oh, I get it now!' Her trust in my explanation was implicit, without the need for her own experimentation.
This trust, this acceptance of knowledge from an authoritative source, is a fundamental way in which we all learn. I remember learning multiplication in primary school, trusting my teacher's instruction before I fully understood the mathematical principles behind it. I memorized times tables, trusting that the results were correct and that the teacher was not a deceiver!
But when we talk about teaching computers through machine learning, we must be careful not to anthropomorphize them. They don't learn as humans do. The language we use often imbues these technologies with human-like qualities, which can be misleading.
So, as we delve deeper into the world of AI, let us keep this distinction in mind and explore how AI truly works. Modern AI is mostly based on programming computers using “Machine Learning.”
In traditional programming, like the calculator on your phone, the process is straightforward: you input a problem (like 2 plus 2), and the program uses a set of predefined rules to give you an answer (which is 4). This is programming as many of us know it – input, process, output.
However, when we step into the world of machine learning, this process is flipped. Instead of giving the computer rules to follow, we give it the answers – the outputs – along with some data, and then it figures out the rules on its own. It's like teaching a child through examples rather than explicit instructions.
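For the technically curious, here is a tiny sketch of that flip. It is a deliberately toy example of my own (a two-weight linear model, not how any production system works): instead of writing the addition rule ourselves, we let the program discover it from examples.

```python
# Traditional programming: we write the rule, the computer applies it.
def add(a, b):
    return a + b  # the rule is spelled out by the programmer

# Machine learning (a toy sketch): we supply inputs paired with answers,
# and a fitting procedure works out the rule on its own.
examples = [((1, 1), 2), ((2, 2), 4), ((3, 5), 8)]  # (input, expected output)

# A simple model y = w1*a + w2*b "learns" that both weights should be
# close to 1 -- the addition rule -- purely from the examples.
w1, w2 = 0.0, 0.0
for _ in range(1000):
    for (a, b), y in examples:
        error = (w1 * a + w2 * b) - y
        w1 -= 0.01 * error * a  # nudge the weights to shrink the error
        w2 -= 0.01 * error * b

print(round(w1, 2), round(w2, 2))  # both approach 1.0: the learned "rule"
```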
How ChatGPT Works
Let's take a quick, simplified look at how a large language model like ChatGPT works. At its core, a computer is a sophisticated calculator working with ones and zeros. So, how does a machine that only understands numbers deal with words and language?
Here's the trick: it transforms words into numbers. But not directly – first, words are broken down into smaller pieces called 'tokens'. These tokens are then converted into numerical vectors, which are essentially long strings of numbers. These numbers don't carry meaning the way words do for us, but they're structured in a way that similar words have similar numerical 'shapes'.
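As an illustration, OpenAI's open-source tiktoken library lets you watch this first step: splitting text into tokens and mapping each one to an integer. (The further mapping of integers to numerical vectors happens inside the model and isn't shown here.)

```python
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent OpenAI models
tokens = enc.encode("What is a GPT?")
print(tokens)                               # a short list of integers, one per token
print([enc.decode([t]) for t in tokens])    # the text piece behind each number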
Now, you might wonder, if these machines are so good at processing numbers, why aren't they as good at math as a simple calculator? That's because their strength isn't in performing straightforward calculations. Instead, they excel at predicting patterns in data, like predicting the next word in a sentence.
Let me illustrate this with an example. If we input the phrase 'What is a GPT?' into a model like ChatGPT, it predicts the next part of the sentence, one piece at a time. 'What is a GPT?' is the initial input, and ' A' (space-A, the next most likely token) is the output. This output becomes the input to the next iteration of the GPT model. We input 'What is a GPT? A', and we get ' G' as output. The next input is then 'What is a GPT? A G', and so on, until we get a complete, coherent response: 'A GPT, or Generative Pre-trained Transformer, is a type of artificial intelligence model designed for understanding and generating human-like text.'
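Written out, the loop I just described looks something like the sketch below. Here `predict_next_token` is a hypothetical stand-in for the model itself, not a real API: given the text so far, it returns the single most likely next token.

```python
def generate(prompt: str, predict_next_token, max_tokens: int = 50) -> str:
    text = prompt
    for _ in range(max_tokens):
        token = predict_next_token(text)  # e.g. " A", then " G", then "PT", ...
        if token == "<end>":              # models emit a special stop token when done
            break
        text += token                     # each output is fed back in as input
    return text
```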
These predictions are based on a massive amount of data that the model has been trained on – from websites, books, and various internet sources. The training process involves adjusting millions, even billions, of parameters within the model so that its predictions closely align with the real-world data it's been fed. It's a complex and sophisticated process, but the result is a model that can engage in surprisingly human-like dialogue.
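To make that concrete without billions of parameters, here is a miniature "model" that learns from a thirteen-word corpus by simple counting. Real systems like ChatGPT adjust numerical parameters rather than tallying counts, so treat this purely as an illustration of how predictions come to mirror the training data.

```python
from collections import Counter, defaultdict

# "Training data": a tiny corpus instead of a large slice of the internet.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# "Training": tally which word follows which in the data.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # "Inference": predict the successor seen most often during training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat": it followed "the" twice; "mat" and "sofa" once each
```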
“Intelligence”?
When speaking about AI, it's important to understand these basics. Not to become technical experts but to appreciate the capabilities and limitations of these technologies, especially as they become more intertwined with our daily lives and society at large.
This brings us to an essential understanding: computers, no matter how advanced, do not possess consciousness. They don't 'understand' or 'reason' as humans do. Despite often hearing such terms in media or even from tech experts, it's crucial to remember that computers are essentially advanced calculators. They make predictions that seem fluent and coherent, but they're not grounded in genuine understanding or consciousness.
This leads us to a pivotal figure in AI history: Alan Turing, a British mathematician whose work in World War II helped crack Nazi codes, significantly aiding the Allies. But Turing's contributions went beyond wartime efforts. In the 1930s, he distinguished between human ingenuity – our capacity for original thought, scientific discoveries, and creative processes – and the capabilities of computing machinery.
Interestingly, the term 'computer' originally referred to a human who performed calculations. It's a historical footnote that reminds us how the language surrounding computers has evolved. In the late 1940s, Turing began discussing the concept of 'intelligent machines', and in his famous 1950 paper he proposed the Turing Test. This test was designed to determine whether a machine could exhibit intelligent behavior indistinguishable from that of a human.
Turing's idea was about mimicry – if a machine could convincingly simulate human intelligence, it could be considered 'intelligent'. This concept laid the groundwork for the field of artificial intelligence, which formally emerged in the 1950s.
Today, there's a vibrant debate about what it truly means for a computer to be 'intelligent'. Does it simply mean processing information and making predictions, or does it encompass learning and reasoning as humans understand them?
As we consider these questions, it's important to ground our understanding of the reality of what computers are and what they are not. They are tools of immense sophistication, capable of mimicking certain aspects of human communication, but they lack the genuine consciousness and understanding that characterize human intelligence.
Defining AI
Let's delve into defining AI, a task that's not as straightforward as it might seem. There's no singular, all-encompassing definition of AI, but there are certain characteristics that help us understand what an AI system entails.
Firstly, AI is software created by humans to achieve human objectives. This is a key point – AI doesn't exist in a vacuum; it's a tool designed with human goals in mind. Some AI systems, especially those based on machine learning, may exhibit statistical errors, reflecting their non-deterministic nature. However, not all AI involves learning. Consider GPS technology, which was once categorized as AI. GPS systems don't learn; they don't use machine learning algorithms.
An AI system often has a high degree of adaptivity, improving its performance as it acquires more data and experience. It also exhibits a degree of autonomy, performs tasks in varied environments, and acts rationally, following certain logical principles.
***
Now, what do these characteristics teach us about human intelligence and our understanding of the human condition? They remind us that while AI can mimic certain aspects of human intelligence, it falls short of capturing the full essence of what it means to be human. The example of my daughter learning about making smoke on the barbecue highlights the rich complexity of human intelligence – it's not just about processing information but also about experiencing, sensing, trusting others, and existing in an interconnected world, including relationships with other people.
The ongoing debates about whether machines can ever be as intelligent as humans aren't just technical discussions. They push us towards deeper, often neglected, or taboo topics like truth, freedom, the essence of human beings, human nature, and dignity.
That's why I started this talk with a verse from Psalm 8, 'What are human beings that you are mindful of them, mortals that you care for them?' It's a crucial reminder that our exploration of AI isn't just a technological journey but a journey into the heart of what it means to be human.
The next part will come out in a couple of weeks.
Let me know your thoughts!
Thank you for this essay. Bringing inter-human connectedness into the discussion is such an important thing to do. It is exactly right: human intelligence evolved because we connect with other humans. And we tell stories, as much as we read other humans' stories. We are very good at that, actually, so good we rarely even notice.
AI systems as we know them do not do anything even remotely like it. A good example is self-driving cars. As a human driver, I see a pedestrian and judge the person's gait, posture, clothing, hair, movements, viewing direction, all in a split second. I get into the person's head and realise maybe she or he is a bit slow, maybe disoriented, or pacing fast in a hurry, not paying attention. Then I build a relationship between myself and the person and decide that it is maybe better to slow down, or put my foot on the brake just in case.
So a self-driving car may be a car that drives, but there is no driver. It's just a machine. It may behave like a driver, but the understanding of self and other, the concept of danger, all that, are found outside of the self-driving car, in the heads of those who designed the AI system at its core. So there is still human intelligence that underpins it, but it is absent at the moment the self-driving car operates. I think this is key. With AI, we still interact with other human beings, but they are hidden behind the algorithms they created. If the self-driving car kills a pedestrian, which has happened, it was not the AI that did the killing, but the people who designed the car.