Workshop AI as the inauthentic other
Commenting on developments in Artificial Intelligence from viewpoints of philosophy and (Eastern Christian) theology
An interdisciplinary workshop organized by the Institute of Eastern Christian Studies in Nijmegen
“Are you real?” asks a visitor of “Westworld”, an amusement park in the recent TV series of the same name, based on Michael Crichton’s 1973 science fiction film. In this park, androids take the role of humans to make the experience of Wild West adventures, sexual affairs and other distractions possible for human visitors. “Well, if you can’t tell, does it matter?”, the android replies rhetorically. Almost half a century later, with the increasing presence of various forms of Artificial Intelligence (AI) in our daily life, the question of difference, of the distinctiveness of humans, becomes ever more pressing. It will be at the center of this workshop, and will be addressed by a philosopher and an (Orthodox, Eastern Christian) theologian. The preliminary starting point is: yes, it does matter – but why, and in what sense? What answers can theology, can philosophy suggest, and what would a fruitful dialogue between natural science and technology on the one hand, and the humanities on the other, have to look like?
The workshop took place online (via Zoom) on March 25, 2021
5 - 5.10 p.m. Introduction by Alfons Brüning
5.10 - 5.35 p.m. 'On being smart without getting it: Effects of human reliance on empty vessels' by Pim Haselager
5.35 - 5.45 p.m. break
5.45 - 6.10 p.m. 'An Eastern Orthodox voice on Artificial Intelligence and Human Distinctiveness' by Marius Dorobantu
6.10 - 6.30 p.m. Discussion
More information about the lectures:
'On being smart without getting it: Effects of human reliance on empty vessels' by Pim Haselager, department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen
Increasingly, we interact with, or are supported or regulated by, artificial intelligence (physical robots and software agents). Algorithms already decide on sometimes crucial aspects of our transport (Uber), cleaning (Roomba), education (SlimStampen), health (Fitbit), consumption (Amazon), news (Twitter), finances (Lenddo), social communication (Facebook), dating (Tinder) and family life (Alexa). Increasingly, we can become attached to robots, ranging from smart dolls (Cayla) and artificial pets (Paro) to sex robots (RealDoll). Much of the ethical debate concerning the implications of this emerging ‘rule of the algorithms’, or algocracy, focuses on protection against excesses such as privacy violations or machine bias. Behind such discussions about the consequences of the approaching omnipresence of algorithms lies a basic concern regarding the effect of AI on how we see others and ourselves, on what we value about human interaction, and on how we see humanity.
An important characteristic of AI systems is that they can combine an extremely well-informed (big data), high level of intelligence and (deep) learning with a complete absence of sentience (understanding, empathy). These systems can ‘go through the motions’ at high levels of performance while not ‘getting it’ at all, as they fundamentally lack awareness of what the motions are about or why they are important. Chess computers beat world champions without understanding what ‘winning’ means, or why it would be relevant. Robots can respond to human expressions of pain or pleasure without having the capacity to experience them themselves.
Our increasing acceptance of, reliance on, not to say dependence on, such systems demonstrates a remarkable indifference to the presence or absence of genuine awareness in other agents. Being accommodated by non-sentient machines is apparently becoming sufficient for increasing parts of our lives. Meaningful relations with others are becoming machine-mediated, or are even being replaced by relations with machines. How will that affect our genuine directedness towards the other? To understand which aspects of our humanity are at stake, we may need other frameworks to amend and complete the traditional economic, scientific, or even ethical ones. Our capacity for authentic other-directedness has traditionally been emphasized as a gift to humanity, but also, in religious perspective, as a spiritual challenge. Perhaps such perspectives can illuminate why we should care about our increasing interactions with inauthentic other agents.
'An Eastern Orthodox voice on Artificial Intelligence and Human Distinctiveness' by Marius Dorobantu, department of Beliefs and Practices, Faculty of Religion and Theology, VU Amsterdam
The development of AI challenges us to re-evaluate the way we think of ourselves and our place in the world. If machines could one day replace humans in decision-making, in jobs, and even in relationships, what place would there still be for us? From a theological perspective, looking at AI through the notion of the divine image (imago Dei) can yield surprising insights. To God, we are an other, created in his image. We now want to create an other, this time in our own image. Relationality and stewardship are at the core of what it means to be created in the image of God, and both require a high degree of sentience and agency on the part of the image. Humans might be erratic thinkers, plagued by cognitive biases and prone to making stupid mistakes, but they are responsible agents, to one another and to God. Current AI is the opposite of that: it is a hyper-rational and increasingly competent entity, but with none of the responsibility or understanding. As long as AI remains this way, it will never be capable of authentic relationships and purposeful stewardship of the world. The relational theology of imago Dei helps us understand the crucial role that humans might still play in a world of intelligent robots. It also hints at what the field of AI might need to focus on to endow robots with human-level intelligence.
You can watch the lectures on our YouTube channel: https://youtube.com/playlist?list=PL7zXAE4eRMWMve9iQkJauzadOm03Rfby4