In August 2024, the European AI Act came into effect: a law designed to protect citizens from irresponsible use of artificial intelligence (AI) and to require organizations to handle AI with care and transparency. Radboud University is taking responsibility by launching the AI Act project. This initiative ensures the university complies with the legislation and promotes responsible AI use in education, research, and operations.
The AI Act sets strict requirements for the use of AI. As a researcher or student, you want the freedom to explore the possibilities of AI and to choose which AI tools to use, while remaining aware of ethics, transparency, safety, and legal regulations. The project supports both: meeting the law's requirements while preserving that freedom in education, research, and operations.
Our Approach
The AI Act Project consists of five main components, running in parallel:
- AI Policy
Developing policy frameworks for AI use. The policy for research and education is planned for November 2025; the policy for administrative and operational use follows in December 2025.
- AI Literacy
Offering training, knowledge clips, and resources for students and staff. Understanding AI is essential for using it effectively.
- AI Governance
Establishing a structure of roles and processes to evaluate and improve AI use, including an AI helpdesk for questions and support.
- AI Register
Creating and maintaining a central database of all AI applications within the university, including risk classifications as defined by the AI Act. This helps manage AI deployment and prevent risks such as bias, copyright infringement, or unintended discrimination.
- University-wide GenAI Application
Evaluating suitable generative AI tools for university-wide use. Microsoft Copilot Chat is already available, but is it enough?
What will we deliver?
By the end of 2025, Radboud University will have a solid foundation for AI:
- A legally sound AI policy aligned with our core values and supported by our academic community.
- Training and information tailored to the needs and roles of students and staff.
- A governance structure that ensures oversight without stifling innovation.
- An AI register that shows where and how AI is used, along with associated risk levels.
- A well-founded recommendation for one or more university-wide GenAI tools.
Planning
The planned timeline is as follows:
- AI Register: In September 2025, AI applications will be classified to determine whether they can be used safely with regard to data handling. From October 2025, the assessment for each tool will be available in the software overview on ru.nl.
- AI Policy: The policy framework for research and education will be published in November 2025. The policy for administrative and operational use of AI will follow in December 2025.
- AI Literacy: From December 2025 onwards, information materials and e-learning modules on AI will be made available.
- Generative AI Application Advisory: In March 2026, an advisory report will be published on the use of generative AI applications within the university.
Why this project?
AI is everywhere. Radboud University offers education and conducts research in AI. We collaborate with industry and public sector partners on AI innovations. Together with Radboudumc we develop AI solutions for healthcare. Students use generative AI for their theses, lecturers experiment with AI, and the software we use increasingly includes built-in AI features. So how do we ensure this is done safely, fairly, and transparently?
The AI Act is the world’s first legislation to classify AI systems based on risk. Europe is leading the way in regulating AI. The risk level depends on how AI is used. Writing an email? Low risk. Screening job applicants? High risk.
The AI Act imposes strict requirements, especially for high-risk applications such as hiring processes or adaptive learning systems. Violations can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. But it is not just about rules: AI touches on public values like privacy, equality, and academic integrity.
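To make the risk-based classification concrete, the sketch below models the Act's four risk tiers and a tiny register of tools with an assessed tier per use case. The tier names follow the Act's categories; the register entries, tool names, and helper function are purely hypothetical illustrations, not the university's actual AI Register schema.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (illustrative; the legal text governs actual classification)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. screening job applicants
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # e.g. drafting an email

# Hypothetical register entries: tool name -> (use case, assessed tier)
register = {
    "CV-screening assistant": ("hiring", RiskTier.HIGH),
    "Email drafting aid": ("office productivity", RiskTier.MINIMAL),
}

def high_risk_tools(reg):
    """Return the names of tools in the register assessed as high risk."""
    return [name for name, (_, tier) in reg.items() if tier is RiskTier.HIGH]

print(high_risk_tools(register))  # -> ['CV-screening assistant']
```

In a real register, each high-risk entry would carry further obligations (documentation, human oversight, logging) alongside its tier; this sketch only captures the classification step.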
Vision, context, and education are essential. That’s why AI literacy is a key focus of this project. How does our university view AI and its recent developments? Is a tool truly AI-based, or is it just a marketing buzzword? We aim to provide insight into the capabilities of AI tools, as well as the risks and potential consequences, such as those related to privacy and sustainability.