That is the question. Now that everyone has recovered from the initial shock about the potential impact of AI on higher education, the next question looms: what should we actually do with it? Yes, with GenAI you can make nice pictures in the style of Studio Ghibli, write Saint Nicholas poems but also essays, and brainstorm about research. You can do all that. But what exactly should we do with it in education? The fact that students use it en masse (according to the recent VOX publication on AI, 70% of students use GenAI one or more times a week) does not yet mean that GenAI has added value for our education. It is time to put that question at the centre of our thinking. Do we see GenAI as a potentially useful tool for more effective studying, or do we see concrete added value of GenAI in the academic education of our students?
At least two important perspectives come into play here. First, GenAI is a word predictor that cannot ‘think’: it predicts the best-matching next word based on statistical algorithms. Moreover, those predictions carry bias, rest on copyright infringement, and are produced by energy- and water-guzzling data centres. Add to this the fact that the most widely used GenAI tools, such as ChatGPT and Copilot, are American, a far from desirable situation given the current geopolitical circumstances. So there are all sorts of snags in the technology itself.
A scan of my personal network of secondary school teachers and students also reveals that little to nothing is being done there to raise awareness of these snags around GenAI. Incidentally, this is not unwillingness on the part of secondary schools, but stems from a lack of resources to take this on seriously. As a result, school leavers arrive at university with minimal knowledge of the responsible use of AI. On the other hand, there is the reality of the ever-increasing implementation of AI in society. According to CBS, 60% of larger companies in the Netherlands already use one or more AI technologies. Large consulting firms appoint AI ambassadors to promote the use of GenAI among employees. So the chances that a graduate of our university will be working with AI technology right at the start of his or her career are very high.
Taken together, this raises important questions for the university regarding AI. After all, the ethical and sustainability flaws of AI are diametrically opposed to its increasing use in society. How are we, as Radboud University, going to position ourselves when, on the one hand, we value sustainability, but on the other, we have the ambition to connect with professional practice? Added to this is the fact that the responsibility for AI literacy, especially critical reflection on GenAI, falls on universities. How are we going to ensure that we train students to be critical users, assuming for a moment that students will continue to use it anyway? And what do we, as a university, consider our responsibility in this? This requires choices and vision.
Perhaps the most crucial underlying question is this: do we see GenAI as a (sometimes permissible, sometimes impermissible) tool that helps students study, or as a technology that fundamentally alters the learning outcomes of our courses? If we see it as a tool, then we need to look at how we can safeguard the assessment of current attainment levels against the unauthorised use of GenAI. That means clear rules, AI-proof examinations, and explicitly including GenAI in the fraud provisions of the Education and Examination Regulations. The result is a cat-and-mouse game between student and lecturer, and between students and the Examination Board, but it is an understandable choice.
I am more in favour of the riskier and more far-reaching ‘never waste a good crisis’ perspective. Or as a colleague at the TLC aptly puts it: ‘Around GenAI, nobody really knows the answer, and that's what we're all talking about.’ Let's take advantage of that ignorance and take a good look at our attainment targets, at our reasons for existing. Where is the added value of our education, when we know that ChatGPT can produce texts that are barely distinguishable from student texts? What do we really want students to be able to do and know at the end? Because in the big bad outside world, there will be no Examination Boards or fraud regulations to stop graduates from using GenAI. On the contrary: there it will very likely be part of their work in some way.
In a world full of AI, what do students learn here at our university that they cannot get from GenAI? Students spend four, five or sometimes six years here; what do we really want to achieve with them during those years? Only when we have a fitting answer to this can we start thinking about the didactic added value GenAI may have for our education. GenAI therefore demands more than reactive policy-making about what is allowed and what is possible; it requires proactively seeking answers to essential questions about the university's position in relation to society, its responsibility as an educator, and the raison d'être of universities.