New Horizon Europe project looks into discrimination in algorithmic hiring
FINDHR, a research project aimed at preventing, detecting and mitigating discrimination in algorithmic hiring, will receive a Horizon Europe grant. The three-year project involves researchers from Radboud University’s Interdisciplinary Hub for Security, Privacy and Data Governance, or iHub.
Algorithmic hiring is on the rise and rapidly becoming necessary in some sectors. Corporate job postings that attracted about 120 applicants in 2010 now attract over 250. Artificial Intelligence technologies promise to process hundreds or thousands of applications at high speed. Moreover, their uptake in European HR teams and Public Employment Services is growing faster than the global average. European tools are highly innovative, and include systems that instantly select and rank candidates based on their resumes and application materials, or assess candidates using online tests or games.
Discriminatory biases have been documented across almost all applied domains of AI, and it is increasingly acknowledged that algorithmic hiring systems exhibit them too, reproducing and amplifying pre-existing discriminatory entry barriers into the labor market. FINDHR, which stands for Fairness and Intersectional Non-Discrimination in Human Recommendation, is designed to create practical, integrated solutions to tackle this issue.
FINDHR will provide the technical, legal, and ethical tools required by an inclusive and diverse Europe that is moving towards remote jobs and global work, where recruitment is increasingly done online and made more complex by an increase in applicants from intersecting minority backgrounds.
Through a context-sensitive, interdisciplinary approach, FINDHR will develop new technologies to measure discrimination risks, to create fairness-aware rankings and interventions, and to provide multi-stakeholder actionable interpretability. It will also produce new technical guidance to perform impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. The project will also design and deliver specialized skills training for developers and auditors of AI systems.
The project is grounded in EU regulation and policy. As tackling discrimination risks in AI requires processing sensitive data, it will perform a targeted legal analysis of tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. It will also engage with underrepresented groups through multiple mechanisms including consultation with experts and participatory action research. All outputs will be released as open access publications, open source software, open datasets, and open courseware.
The consortium is coordinated by Carlos Castillo of the Universitat Pompeu Fabra. ‘Algorithms are increasingly intersecting with important aspects of our lives and shaping our social interactions and careers,’ says Castillo. ‘Without the necessary understanding and oversight, there are critical risks that need to be better understood. I am excited to work with academic and industry researchers and representatives from advocacy groups in this very challenging, high-risk/high-reward research project.’
The iHub, Radboud's interdisciplinary research hub on digitalization and society, is a member of the consortium. Prof. Frederik Zuiderveen Borgesius, professor of ICT and Law, leads Radboud University's work in the consortium. The iHub will recruit a legal postdoc and will focus on the legal questions arising from this interdisciplinary project.
Zuiderveen Borgesius said: ‘I am very excited to be involved in this consortium. Discrimination-related risks of Artificial Intelligence are a tricky problem. To mitigate the risks, different disciplines should cooperate, and that’s what we do in this consortium. We cooperate with, among others, amazing computer scientists and NGOs such as Algorithm Watch.’