The recent introduction of ChatGPT has caused a stir in education. Its accessible interface suddenly made generative AI usable for everyone. Many teachers and students already use it and see its opportunities. Yet using generative AI also carries risks. For example, students can use it to complete homework, reports, and assignments. When they do, they skip the cognitive processing necessary for learning and let the generative AI do the work instead. In this way, generative AI often sneaks into schools through the back door, without any formal process. Education professionals are therefore struggling to respond appropriately and meaningfully.
There is much discussion about applying generative AI in all kinds of domains, and in education in particular (Holmes & Tuomi, 2022; Miller, 2022). A thorough analysis of pedagogical and didactic arrangements is needed first, to ensure that we do not 'hand over' processes crucial to learning or teaching to AI (Molenaar, 2022). The research question is: How does education keep control over knowledge and teaching when using generative AI, where do opportunities and risks lie, and what are the consequences at the pedagogical-didactic, ethical, technological, and professionalization levels?
The current project aims to provide insight into the use of generative AI in schools. It investigates the degree of automation by AI by examining which tasks are outsourced to generative AI. That way, a better estimate can be made of its consequences, and opportunities can be identified. Two frameworks are used to arrive at these insights: the Detect-Interpret-Act Framework and the Six Levels of Automation Model (Horvers & Molenaar, 2021). Together, these frameworks help to better understand the form of use and degree of automation by generative AI in various educational situations, and to provide broader, structured insight into the implications for learning and teaching. In addition, there will be an interdisciplinary reflection on the use cases from the five focus areas of the NOLAI: Pedagogical-didactic, Professionalization of teachers, Technical AI, Data and infrastructure, and Embedded ethics.
The project aims to develop an overview of the opportunities and risks of generative AI from the five perspectives of the NOLAI, which can serve as a guideline for using and anchoring generative AI thoughtfully within Quadraam and, ideally, as a pilot for other secondary education institutions. A technical prototype will be developed that connects generative AI to pedagogical-didactic models over which teachers can take control, alongside professionalization activities that train teachers to deal with generative AI responsibly.