Our group investigates the ethics and philosophy of intelligent technology. We focus on “Meaningful Human Control” to ensure that technology aligns with human values. Our methods include, but are not limited to, conceptual analysis, conceptual engineering, and value-sensitive design. We strive to provide actionable insights for the development and regulation of responsible, trustworthy, and human-centered AI systems. We see transdisciplinary collaboration as key to making a tangible societal impact.
Some of our overarching research questions:
- How can we promote the development of technology that respects human values?
- What are the consequences of artificial intelligence and neurotechnology for human moral agency and responsibility for action?
- How can we ensure that societal and democratic control over AI can be exerted?