Technological progress and the risk of an artificial heart (1/3)
Is technological progress always progress? And, what is the dignity of the human person being ‘exploited’ by algorithms?
Earlier this month, I published an article in the Italian magazine TEMPI, titled ‘To talk about artificial intelligence, we must answer the question: what is man?’ It was based on a speech I gave in April at the International Center of Communion and Liberation in Rome, ‘Technological progress: the risk of an artificial heart.’
In the coming weeks, I’ll be sharing pieces of this article translated from Italian into English. I hope you find it helpful, and as always, welcome discussion in the comments.
Progress and the Social Doctrine of the Church
Lately, I’ve been asking myself: is the so-called technological progress we have experienced really progress? Has it really brought us forward, into a better state than before? To illustrate with a simple analogy: I am a rather nostalgic person and often watch films from the nineties, a time when we still used corded telephones. I remember how simple it was then to organize a meet-up with ten or more friends over a telephone chain.
Now, using WhatsApp makes me anxious. There is no clear responsibility, no shared way of communicating that everyone knows and respects, and it is very easy to fall quickly into a jungle of opinions, compromised friendships, and the absolute tyranny of the last minute.
So, I ask myself: has the digitalisation of social relations really made our communication better? Or are we rather faced with a complication of human dynamics?
In discussing the meaning of progress, then, I have often reflected on the ethics behind the technology we are building. While writing the Ethics of AI, I often turned to the texts of the social doctrine of the Church, engaging with their various indications of what truly human progress means: progress that concerns the person as a whole and not just a limited aspect of reality.
Populorum Progressio, the social encyclical of 1967, reads:
“To be freed from poverty; to guarantee one’s subsistence, health, and stable employment in a more secure manner; a fuller participation in responsibilities, free from any oppression and protected from situations that offend their dignity as men; to enjoy greater education; in a word, to do more, know more, and have more in order to be more: this is the aspiration of men today, while a large number of them are condemned to live in conditions that make this legitimate desire illusory.”
The desire to “be more”
Reflecting on developments in AI, the promise often emerges that these technologies will lead to scientific discoveries, increase our knowledge, and ultimately extend our humanity. Why is there this desire to “be more”? The human being is a particular entity, a “strange animal” that possesses a capacity for transcendence, a being that aspires to overcome its own limits. However, as Paul VI observed in the 1960s, a large number of people are forced to live in conditions that make this legitimate desire an illusion. Arguably, to this day, we haven’t moved far from that situation.
Caritas in Veritate, Benedict XVI’s 2009 social encyclical, reads:
“The problem of development today is closely linked to technological progress [...] Technology – it is worth underlining – is a profoundly human fact, linked to the autonomy and freedom of man. The lordship of spirit over matter is expressed and confirmed in technology [...] Technology allows us to dominate matter, reduce risks, save effort, and improve living conditions.”
The promise of this technology is vast, and to understand its impact through my own professional experience, I asked myself where artificial intelligence has truly succeeded. A significant example is found in industrial automation, particularly industrial robotics. Take mechanical robots or the automated carts used in warehouses, for instance. By employing forms of AI for ‘planning’, such as planning the next action in a physical space, these machines have achieved tremendous success and have transformed multiple industries. Their operation is made possible by the definition of what is called a “robotic envelope”.
From the robotic envelope to the cultural envelope: where (and how) AI succeeds
A “robotic envelope” is the defined and delineated 3D space that establishes the range and safety conditions for the effective operation of a machine. For example, a mechanical robot designed to paint cars on a production line operates within a specially crafted 3D space, which allows it to carry out its tasks with a high degree of success. If that same robot were placed in an unsuitable home environment, such as my kitchen, its operation would be ineffective because the suitable robotic envelope would be missing: the surrounding environment is not configured to support its operations. Similarly, in the field of AI we speak of a “digital envelope”, which, unlike the robotic envelope, is not physical.
The case of autopilot systems
To illustrate the concept of the digital envelope, it is useful to consider autopilot systems, which represent a middle ground between robotic and digital systems. While working at Tesla, I worked on algorithms for a product that had to calculate the risk of accidents, even when Autopilot was activated. Ironically, one day, while driving with Autopilot on, my car braked sharply and a vehicle rear-ended me. Under the current traffic code, the fault was attributed to the driver behind me, because he did not maintain a safe distance. However, a reflection arose within me: yes, according to the law, it was his fault. But it was an algorithm that decided to brake, and that decision was wrong. I would not have braked in those circumstances.
Roads and artificial intelligence
Tesla, like many other American companies, has a system that often allows it to avoid legal liability for accidents of this type. This leads to the question of the digital versus the physical envelope. The error occurred because the autopilot algorithm, unlike a human, cannot make nuanced decisions: it has to choose in binary fashion, 1 or 0. The Autopilot cameras had detected a vehicle in my lane, but in reality only one wheel of the vehicle in front was sticking out into my lane. Driving myself, I had all the space necessary to continue without braking, while the Autopilot categorized the vehicle as an imminent obstacle and chose to brake. In that case, neither the physical envelope nor the digital one (the digital representation of physical reality that the AI reconstructs) was designed to support a correct decision.
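The binary nature of that decision can be illustrated with a small sketch. This is a hypothetical toy model, not Tesla’s actual logic: the lane width, vehicle width, safety margin, and both policy functions are invented for illustration only.

```python
# Hypothetical toy model of the braking decision (not Tesla's real logic).
# A binary policy brakes whenever any part of a detected vehicle overlaps
# the ego lane; a nuanced policy weighs *how much* of the lane remains free.

LANE_WIDTH_M = 3.5   # assumed lane width in meters
CAR_WIDTH_M = 1.9    # assumed width of my car in meters


def binary_policy(overlap_m: float) -> bool:
    """Brake if the other vehicle intrudes into the lane at all: 1 or 0."""
    return overlap_m > 0.0


def nuanced_policy(overlap_m: float) -> bool:
    """Brake only if the remaining lane width is too narrow to pass safely."""
    margin_m = 0.3  # assumed safety clearance on each side
    free_width = LANE_WIDTH_M - overlap_m
    return free_width < CAR_WIDTH_M + 2 * margin_m


# A single wheel sticking roughly 0.2 m into my lane:
overlap = 0.2
print(binary_policy(overlap))   # True  -> the system brakes
print(nuanced_policy(overlap))  # False -> a human driver continues
```

With the same input, the two policies disagree: the binary rule brakes for any intrusion, while the nuanced rule notices that 3.3 m of lane remain free, which is more than enough to pass.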
These errors of interpretation are also linked to how lane markings are painted: if they are poorly painted or absent, or in construction zones with temporary lanes, the artificial intelligence can fail. This is because there is no well-defined “robotic envelope”; the operating environment is not optimized to ensure the machine’s success. Roads are designed for humans to perform at their best, not for robots or artificial intelligence.
Subscribe to Honest AI for the second part of ‘Technological progress and the risk of an artificial heart’, exploring the role of ‘digital envelopes’ in social media, what makes AI successful, and the anthropological problem of technology.