Monday 30 September 2024 | 20.00 – 21.30 hrs | LUX, Nijmegen | Radboud Reflects, Radboud Centre for Social Sciences, SIAC, School of Artificial Intelligence, and Workgroup The Human Factor in New Technologies
Review - AI as a Mirror of Ours, Not a Mind of Its Own
Thanks to Susana Margarida Carvalho dos Reis | photos Ramon Tjan
On the evening of September 30th, philosopher and AI ethicist Shannon Vallor took the stage at LUX to discuss the role of AI in shaping our lives and futures. Vallor challenged the dominant narrative from Big Tech, which claims that Generative AI (GAI) is not only replicating human intelligence but is on the path to surpassing it. This is also the subject of her latest book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. The conversation was moderated by philosopher Nina Poth, and the evening was organized by Radboud Reflects, the Radboud Centre for Social Sciences, the School of Artificial Intelligence, and the Workgroup The Human Factor in New Technologies.
The Mirror Metaphor
To explain the nature of GAI, Vallor introduced what she considers its most apt metaphor: the mirror. Just as a mirror reflects a person without creating a new one, GAI does not truly generate new human-like intelligence. Instead, it reflects human thought and data back at us. When we see coherent texts from tools like ChatGPT, we must recognize them for what they are: reflections, not real minds.
But why a mirror? According to Vallor, technology is less about tools and more about ideas. As she explained, “it is an extension of human values, reflecting what people of a given time believed was worth doing.” This is particularly true for AI, where different algorithms generate distinct reflections based on the data they process. In this metaphor, the data acts like light hitting a mirror, and the final output is the image we see—a reflection shaped by the way the algorithm handles that data.
Vallor used this mirror metaphor to challenge three key assumptions about Generative AI, without rejecting AI altogether. First, GAI does not fully reflect humanity; it mirrors only certain digital communities, meaning many people and perspectives are left out. Not everyone gets to see their lives reflected in this digital mirror. Second, GAI depends on past data to create new content, which Vallor called "regressive." By relying on historical patterns, GAI pulls the past into the future, reinforcing old biases rather than promoting true progress. This idea should unsettle those who push the narrative that innovation is always a forward leap.