Interesting insights
The AI literacy test was recently administered to first-year students at the Faculty of Science, as an extra module included in the diagnostic language test (RADAr). There are already some interesting insights, says Bram: “First of all, there is apparently a fairly large group that does not yet use AI, or that uses it very little. We also see wide differences among students: scores vary considerably, both on the self-assessment and on the knowledge questions. What's more, students by and large score lower on the knowledge questions than on their assessment of their own knowledge and skills. So they tend to overestimate themselves. Another really interesting finding is that students who score high on language skills also score high on AI literacy. We are very curious whether these outcomes differ by faculty.”
EU AI Act resource
Besides the Faculty of Science, other faculties have also expressed interest in administering the AI test. Meanwhile, In'to is working on a variant of the module to test the AI literacy of RU employees. “This helps the University comply with the EU AI Act, which requires organisations to safeguard adequate AI literacy among their staff. To support other organisations in this process as well, we plan to offer the test module outside the University too.”
The future of testing
Besides the issue of how to get students to use AI responsibly, the growing influence of AI also raises the question of whether traditional assessment methods, such as theses and other writing assignments, are still sustainable. Bram: “AI forces us to rethink exactly what we want to measure; in other words, what do we think students should know and be able to do? If it is so easy to outsource writing assignments to a language model, how can we still measure a student's own skills and understanding? Should we focus on assessing the process rather than the end result? Is that feasible and useful? And what does this mean for writing and language skills themselves: are they now less important, or more important than ever because they are essential for using language models effectively and critically? We don't yet have answers to these questions either, but they are certainly questions that will radically change the world of testing, and education more broadly.”
One thing is certain: AI is here to stay. That is why it is essential that both students and lecturers learn to use this technology consciously and critically. With the new AI literacy test module, In'to Languages is already taking a step in that direction.