

In'to Languages launches test to measure AI literacy

The meteoric development of generative AI poses serious questions for education. What knowledge and skills do students need in a world where AI is taken for granted? And how can you reliably test those skills? Instead of banning AI tools, there is growing awareness among educational institutions that AI literacy is essential. In'to Languages has developed a new test to assess students' AI literacy. Bram de Jong, language testing project manager at In'to, talks about how this test came about, the preliminary results, and the challenges AI brings to the testing world.

Language skills

Why would an organisation specialising in language training and testing feel called to develop a test on AI? The form of generative AI (genAI) that has the greatest impact on education involves language models such as ChatGPT. These models generate texts that are sometimes barely distinguishable from human content. But, as Bram de Jong explains, you need to know how to use them: “This requires good linguistic and communication skills. You have to know how to draft good prompts, but more importantly, how to critically evaluate the output, and whether and how to incorporate it into a text. At In'to, we have the right expertise to test these skills, as this requires integrating our knowledge of writing and language skills, of testing and of genAI.”

Based on this expertise, In'to advised Radboud University's Language Policy Committee to include genAI as a pilot in its policy. Subsequently, a budget was allocated to develop the AI test module. What is being tested? The first questions concern the extent to which students use genAI and how proficient they consider themselves to be with it. These are followed by knowledge questions on recognising AI content and critically reflecting on its inputs and outputs.

Bram de Jong, staff member at Radboud In'to Languages

Interesting insights

The AI literacy test was recently administered to first-year students at the Faculty of Science, as an extra module included in the diagnostic language test (RADAr). There are already some interesting insights, says Bram: “First of all, there is apparently a fairly large group that does not yet use AI or that uses it very little. We also see that there are many differences between students: scores vary widely, both on the self-assessment and on the knowledge questions. We also notice that, by and large, students score lower on the knowledge questions than when assessing their own knowledge and skills. So they tend to overestimate themselves. And the other really interesting discovery is that students who score high on language skills also score high on AI literacy. We are very curious whether these outcomes differ by faculty.”

EU AI Act resource

Besides the Faculty of Science, other faculties have also expressed interest in administering the AI test. Meanwhile, In'to is working on a variant of the module to test the AI literacy of RU employees. “This helps the University comply with the EU AI Act, which requires organisations to safeguard adequate AI literacy among their staff. To support other organisations in this process as well, we plan to offer the test module outside the University too.”

The future of testing

Besides the issue of how we get students to use AI responsibly, the growing influence of AI also raises the question of whether traditional assessment methods, such as theses and other writing assignments, are still sustainable. Bram: “AI forces us to rethink exactly what we want to measure, or in other words, what do we think students should know and be able to do? If it is so easy to outsource writing assignments to a language model, how can we still measure the student's own skills and understanding? Should we focus on assessing the process rather than the end result? Is that feasible and useful? And what does this mean for writing and language skills themselves: are they now less important, or more important than ever because they are essential to be able to use language models effectively and critically? We don't yet have answers to these questions either, but they are certainly questions that will radically change the world of testing, and education more broadly.”

One thing is certain: AI is here to stay. That is why it is essential that both students and lecturers learn to use this technology consciously and critically. With the new AI literacy test module, In'to Languages is already taking a step in that direction.

Written by
B. de Jong (Bram) MA
A.A.M. van Paasen (Angela)
Bram de Jong is language testing project manager at Radboud and Wageningen in'to Languages. Angela is In'to's content specialist and regularly gathers interesting insights from its language and communication experts and course participants.