
How can generative AI be responsibly integrated into academic education?

Commissioned by Radboud AI, students of Communication Science at Radboud University conducted a qualitative study on the role of GenAI within education on campus. Based on focus groups with students and lecturers, they mapped opportunities, concerns, and conditions for its use.

The findings show that the successful use of GenAI requires more than technical knowledge alone. Clear guidelines, transparent communication, and explicitly placing educational goals at the center form the foundation for responsible use.

Lack of consensus within programmes

Both students and lecturers currently experience a lack of alignment regarding the use of GenAI within academic programmes. This leads to major differences between courses in terms of expectations and forms of guidance. There is a lack of consistency: what is permitted in one course may be explicitly prohibited in another.

Respondents emphasize that GenAI can only be used responsibly when there is active dialogue about it: between students and lecturers, but also among lecturers themselves. Reaching consensus is a necessary condition for developing clear guidelines on what is and is not allowed.

Awareness and AI literacy

Awareness of GenAI goes beyond simply learning how to use tools or applications. Students and lecturers indicate that attention is also needed for the ethical, societal, and ecological implications of GenAI. This includes issues such as bias, transparency, data usage, and energy consumption.

Workshops, guest lectures, and courses led by subject-matter experts can help strengthen AI literacy. This involves not only practical skills, but also understanding how GenAI outputs are produced, critically evaluating generated information, and reflecting on one’s own use.

Clear communication about policy

Current policies regarding GenAI in education are perceived by many students and lecturers as unclear. Guidelines are insufficiently known and interpreted differently, leading to uncertainty and inconsistent application across courses.

The research shows a need for explicit communication about acceptable and unacceptable uses of GenAI. At the same time, it is acknowledged that guidelines may differ per course depending on content, learning objectives, and assessment formats. Transparency about these considerations is crucial.

Educational goals at the center

A recurring concern involves the impact of GenAI on the learning process. Both students and lecturers see the risk that excessive or unfocused use could undermine independent thinking and deep learning. Lecturers assume that assignments are completed by students themselves and assessed based on their own knowledge and skills.

Uncertainty about the use of GenAI (where it is allowed, where it is not, and to what extent) can put this relationship of trust under pressure, particularly when ownership of the work is not transparent. The research therefore emphasizes that GenAI only adds value when its use is explicitly aligned with the educational goals of a course. Without clear frameworks, there is a risk that GenAI will weaken rather than strengthen the learning process.

This research was conducted by Myriam Steinhauer, Rachel Boerman, Isa Teunissen, Tetske van den Biggelaar, and Noa van Helvoirt, who also wrote this article. Would you like to learn more about this research? Please contact Noa van Helvoirt (noa.vanhelvoirt [at] ru.nl).

Researchers of the Radboud AI focus group study

Organizational unit: Radboud AI
Theme: Artificial intelligence (AI), Education