Who am I and what do I work on?
As a philosopher of technology, I study the ethical and societal impacts of (digital) technologies, mostly in the health domain. I’ve done research on how technologies such as pre-implantation genetic diagnosis and psychopharmaceuticals blur the boundaries between natural and artificial; how apps and wearables for health monitoring change our understandings of autonomy and solidarity; how the entrance of Big Tech actors into health and medical research is reshaping how we practice and deliver healthcare; and how the implementation of AI in healthcare raises new ethical dilemmas concerning equity and justice.
What initiative do I represent?
I am one of the directors of iHub, Radboud’s Interdisciplinary Hub for Digitalization and Society. At iHub we bring together researchers from across the university to work on the complex challenges that digitalization poses for society – legal scholars, philosophers and ethicists, computer scientists and software programmers, social scientists and more. Our approach is value-driven: rather than focusing on technological applications (e.g., wearables, AI, or robots), we focus on public values, such as privacy, solidarity, democracy and expertise, and study both how digitalization puts these values under strain and what needs to be done to protect them.
Finally, iHub conducts research that is both critical and constructive. While carefully studying the risks raised by digitalization, we also seek to address them, whether by developing value-sensitive technological prototypes in our iLab, formulating policy recommendations for technology regulation, or drafting guidelines for professionals implementing digital technologies, for example in schools, hospitals, or municipalities.
What is our connection with the Healthy Brain pillar?
There are many ways in which digital technologies affect the minds and brains of individuals. For example, they can change our (collective) mental capital, including our (shared) sense of agency and our capacity for self-control. Furthermore, digital technologies can influence and “nudge” human behaviour in ways that users are often unaware of, for example when they use chatbots they believe to be truthful, or when they are unaware of the algorithms that drive recommender systems. This puts individuals’ wellbeing and autonomy, as well as other public values such as democracy, under pressure, raising key questions such as: What kinds of algorithmic persuasion and manipulation are (un)acceptable? Should AI development be regulated to promote mental health and wellbeing? If so, what research is needed to establish whether such regulation would be effective?
To address such questions, we see value in connecting knowledge about the societal and ethical impacts of (digital) technologies, as pursued at iHub, with knowledge about brain, cognition, language and behaviour, as pursued at other key interdisciplinary hubs of the Healthy Brain network on the Nijmegen campus, including the Donders Institute, the MPI for Psycholinguistics, the Centre for Language Studies, the Radboud Centre for Decision Science, and the Behavioural Science Institute.