Closed questions are questions where the student picks the correct answer from a number of given answer options. The student does not formulate their own answer.
Within an exam you can easily combine different question types and questions at different cognitive levels. A combination with open exam questions is, of course, also possible.
What can be assessed?
With this assessment method you can assess skills as well as knowledge (and insight). Depending on the learning objective you can, for example, use a multiple-choice question, a matching question or (digitally) a drag-and-drop question. For an overview of question types, consult the assessment software available to you.
Which levels can you assess?
Within Bloom’s Taxonomy, assessing with closed questions primarily lends itself to the cognitive levels of Remembering, Understanding and Applying.
When it comes to Miller’s Pyramid, assessing with closed questions appeals to the levels of Knows and Knows how.
Prerequisites for this assessment method
- An exam with closed questions is suitable for all group sizes, and especially for large groups since grading happens automatically.
- An exam with closed questions is taken on campus.
Points of attention while taking the exam
While taking a written exam with closed questions, several factors can influence the validity and reliability of the assessment results.
- Contents of the exam: the questions in an exam need to be complete and free of errors.
- Duration: if the speed with which a student answers matters, adjust the available time accordingly. In all other cases, make sure there is ample time. Tip: take the exam yourself before students do.
- Assessment instruction: the instruction needs to be clear.
- Location and circumstances: the circumstances in different rooms need to be the same.
- Tools: before the exam, students need to be informed on what tools are allowed (law bundle, calculator, dictionary).
- Examiners and invigilators: the examiner is almost never present during the exam, so the instructions for the invigilator need to be very clear.
With closed questions, the correct answers are recorded in the answer key. Based on the number of correct answers, a student’s score is (automatically) determined.
When converting the scores to grades, there are several aspects that can be used in different ways, such as cut-off percentages, rounding, partial or total grades, and guess correction. The rules for this can differ per faculty or study programme and are formulated in the assessment policy and/or Education and Examination Regulations (EER).
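To make the conversion from scores to grades concrete, the sketch below shows one common approach: subtracting the score expected from blind guessing and then mapping the corrected score linearly to a grade around a cut-off. All function names, the 1–10 grade scale and the 55% cut-off are hypothetical examples; the actual rules are those laid down in your faculty's assessment policy and/or EER.

```python
# Illustrative sketch only: one possible score-to-grade conversion with a
# guess correction and a cut-off (caesura). Parameter values are examples,
# not official policy.

def guess_corrected_score(raw_score: float, n_questions: int,
                          n_alternatives: int) -> float:
    """Subtract the score a student would get by guessing blindly."""
    expected_by_guessing = n_questions / n_alternatives
    return max(0.0, raw_score - expected_by_guessing)

def score_to_grade(raw_score: float, n_questions: int, n_alternatives: int,
                   cutoff: float = 0.55) -> float:
    """Map a guess-corrected score linearly to a 1-10 grade.

    Corrected scores at or above `cutoff` (as a fraction of the maximum
    corrected score) map linearly onto 5.5-10; scores below it map onto 1-5.5.
    """
    max_corrected = n_questions - n_questions / n_alternatives
    corrected = guess_corrected_score(raw_score, n_questions, n_alternatives)
    fraction = corrected / max_corrected
    if fraction >= cutoff:
        grade = 5.5 + (fraction - cutoff) / (1 - cutoff) * 4.5
    else:
        grade = 1.0 + fraction / cutoff * 4.5
    return round(grade, 1)

# Example: an exam of 40 four-option questions, 31 answered correctly.
print(score_to_grade(31, n_questions=40, n_alternatives=4))  # 7.0
```

Other programmes use different schemes (for example, no guess correction, or a non-linear mapping), which is exactly why the cut-off percentage, rounding and guess correction are specified per faculty or study programme.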
Feedback to the student
When using digital assessment software, you can add feedback to a question per alternative or per question as a whole. This feedback can be shown to students during the inspection.
During an inspection you can give plenary feedback by, for example, discussing mistakes that were made often. If you link learning objectives to exam questions in the digital assessment software or via Evaluation Service, it is possible to generate individual score sheets for students. This way, a student gains more insight into their performance.
Evaluating an assessment method
What do you evaluate?
Among other things, you look at:
- The score distribution of the assessment.
- Whether the assessment scores are distributed normally.
- Whether the highest achieved score comes close to the maximum achievable score.
- How individual questions were answered, and whether this matches your expectations. How hard or easy did students find the questions?
- The score distribution per question, and whether there are questions for which no one achieved the maximum score. Are there questions on which the lowest-scoring students outperformed the highest-scoring students?
How do you evaluate?
After the assessment it is important to discuss the strengths and weaknesses of the assessment and draw conclusions about the assessment and education. Based on the analysis, you decide whether the assessment design or exam questions need to be adjusted. You can immediately adjust the exam questions in your item bank so they are ready for future (re-)use.
To gain insight into the scores and score distribution, it can help to create an overview of the scores per question per student. With large exams, from approximately 100 students onward, you can make use of a so-called psychometric analysis: a statistical analysis of the scores. It provides a lot of information about the quality of the exam and how the individual questions functioned.
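Two statistics that such an analysis typically reports can be sketched as follows: the p-value (item difficulty: the fraction of students answering an item correctly) and the item-rest correlation (discrimination: whether the item separates high-scoring from low-scoring students, which relates to the question above about the lowest-scoring students outperforming the highest-scoring ones). The function names and the small score matrix are made up for illustration; real assessment software computes these for you.

```python
# Minimal sketch of two common item statistics; data is invented.
from math import sqrt

def p_value(item_scores: list[int]) -> float:
    """Fraction of students who answered this item correctly (0 = hard, 1 = easy)."""
    return sum(item_scores) / len(item_scores)

def item_rest_correlation(matrix: list[list[int]], item: int) -> float:
    """Pearson correlation between an item's scores and each student's rest
    score (total excluding that item). Values near zero or below zero flag
    items that fail to discriminate, or discriminate the wrong way round."""
    x = [row[item] for row in matrix]
    y = [sum(row) - row[item] for row in matrix]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Rows = students, columns = items (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]
print(p_value([row[0] for row in scores]))          # difficulty of item 1
print(round(item_rest_correlation(scores, 0), 2))   # discrimination of item 1
```

With only five students the numbers are meaningless, which is why the text recommends psychometric analysis only from roughly 100 students onward: the statistics need a reasonably large group to be stable.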