Well-functioning organizations in the public sector (education, healthcare, law, science, arts, journalism and governmental services) depend on core values of quality and integrity. The uncritical adoption of ‘AI’ technologies can put such values at risk. For instance, AI technologies can amplify social hierarchies and inequalities (racism, sexism, ableism); aggravate climate change through their extensive demands for power and water; contribute to the spread of misinformation; devalue the labour of artists; profit from the exploitation of workers; deskill legal, scientific, educational and medical professionals; and so on. As a result, organizations and individuals face a crucial tension between their values (e.g., scientific integrity, diversity, equity, sustainability, digital sovereignty, and democracy) and the harms associated with the uncritical adoption of AI technologies.
This course is designed to foster critical AI literacies in participants, empowering them to develop ways of resisting or reclaiming AI in their own practices and social contexts. It is aimed at all members of society (staff, teachers, researchers, students, etc.) in public-sector organizations who want to develop critical AI literacy and apply this knowledge to their particular contexts.
Learning objectives
- Gain conceptual clarity and deeper understanding of ‘AI’, its various forms and meanings
- Understand how the science of human cognition can help resist AI hype
- Critically uncover pseudoscientific claims about AI
- Apply critical intersectional theories (of racism, sexism, ableism and more) to analyse social harms such as the exploitation of labour behind AI technologies
- Critically grasp the relationship between AI technologies, sustainability, and environmental devastation
- Carry out a critical AI literacy project that develops ways of resisting or reclaiming AI in a specific context