A questionnaire as an evaluation method makes it easier to compare different courses or study programs over a long period of time. However, a standardised questionnaire can be perceived as an imposed routine, and students might therefore lose interest and provide less constructive feedback. Fewer questionnaires with purposeful questions should be prioritized over more frequent questionnaires with many short questions. Always keep in mind that the motivation to participate in such evaluations depends on the degree to which the institution utilizes the feedback.

Standard questionnaire
It is recommended that the person responsible for the course and the student coordinators collaborate to prepare a standardised questionnaire that covers several courses in combination with the study program. This is particularly applicable to study programs in which students follow the same expected progression.

Different ways to collect information
There are several methods to conduct a questionnaire. Here are some examples:

  • Distribute online questionnaires via e-mail.

  • Publish online questionnaires on the course or study program website.

  • Allocate time in class for students to answer the electronic or printed questionnaires.

It can be beneficial to use online rather than printed questionnaires. Online questionnaires are time-efficient and easier to process and analyse, and students can answer them in their own time, which may encourage more detailed answers.

How to use this method

  • It takes time to formulate and process questionnaires. Carefully consider the areas you want feedback on and how you plan to apply the information you collect.

  • Conduct the evaluation when students have had time to acquire a reasonable sense of the course or study program.

  • Announce the evaluation in advance so that students can be prepared.

  • Explain to the students the importance of their feedback and how the feedback is processed and used.

  • Check that the address list is updated if you plan to distribute the questionnaire directly to students via e-mail. The e-mail that accompanies the questionnaire should be short, precise and informative.

  • You will probably need to send out a reminder about the questionnaire, but try to limit this to a single reminder, as repeated reminders might cause annoyance. This in turn might provoke students to submit an answer just to be rid of the reminders, and the information collected might then be misleading and incomplete.

How to formulate multiple-choice questions
Well-formulated questions and multiple-choice answers are the key to a useful questionnaire. Begin by identifying and prioritizing the areas you wish to evaluate, then draft questions that cover these areas. Below is advice on how to formulate useful multiple-choice questions and answer options:

  • Only ask questions that provide information applicable to your intentions.

  • Formulate questions that only require one answer. For example, “To what extent does the lecturer manage to engage the students and encourage discussion in class?” asks about two things at once. Ask instead: “To what extent does the lecturer manage to engage the students?”.

  • Avoid leading questions, emotional language and prestige bias. Remember that the order in which questions are presented can affect the way students respond.

  • Vary between simple and general questions (for example variables like gender and age), claims (agree/disagree) and rankings (strongly/somewhat, satisfied/unsatisfied). When presenting a claim consider the use of positive and negative formulations.

  • Keep the number of open-ended questions to a minimum. Such questions require more effort from the students to answer than multiple-choice questions do. It might be difficult to predefine multiple-choice categories, but the extra effort can be rewarding. In some cases it is best to exclude predefined answer categories, for example: “Do you have concrete suggestions for how the teaching can be improved?”.

  • Make sure that the multiple-choice answers are comprehensive and useful for the whole group of participants. Predefined categories that students cannot relate to cause unnecessary annoyance. This might diminish their motivation and produce random answers, which compromises the evaluation.

  • The multiple-choice answers should be mutually exclusive. If this proves difficult, allow students to choose more than one answer and indicate this in the question.

  • Strive for a logical structure in the questionnaire. Group related questions, and if necessary use headings to make the questionnaire clear and concise. Excessive use of follow-up questions like “If no/If yes” might complicate the questionnaire and make it seem disorganized. Spend some time designing the layout and proofreading the text. A messy and unfinished questionnaire is often disregarded, resulting in a low response rate and decreased validity.

  • It might be wise to get feedback on your questionnaire before distributing it. A test run can reveal that certain concepts are interpreted differently than intended, or that some questions lack answer categories.

The University of Idaho has compiled a list of topics that can be helpful when deciding what to focus on in a questionnaire for students.

Analyse the results
After all responses to the questionnaire have been submitted, you have to compile the information. Because the questionnaire often consists of both multiple-choice answers and elaborations, it provides both qualitative and quantitative information. Remember that the feedback reflects each student's subjective experience. The results therefore indicate how things are perceived, not necessarily how they are.

  • Omit obviously unreliable answers before you analyse the results.

  • It is important to interpret the student responses in light of contextual factors that may influence the feedback. Sick leave, new employees, reforms, reconstruction of buildings, changes in routines, etc., may influence the responses both directly and indirectly.

  • Be aware of what the data actually reflects and what it does not. For example, 43 percent of a group of 250 students may have responded that they strongly agree with the claim “I am satisfied with my own efforts this semester”. This only tells you that just under half of the students are satisfied with their own efforts; it says nothing about the course or the teaching.

  • Be careful not to jump to conclusions about causes and effects. Negative feedback on a course might not be caused by the teaching; it might be explained by variables such as a difficult topic and/or low student engagement with the specific topic of the course.

  • A high response rate is always good, but if the rate is lower than expected you should attempt to find out why, and consider what consequences this might have for your conclusions. A systematic bias among respondents will give a false impression of what the student population as a whole really thinks.
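The quantitative part of the compilation described above — tallying multiple-choice answers and checking the response rate against enrolment — can be sketched in a few lines. The scale labels, response data and enrolment figure below are purely hypothetical, chosen only to illustrate the calculation:

```python
from collections import Counter

# Hypothetical five-point scale; adjust to match your own questionnaire.
SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def summarise(responses, enrolled):
    """Tally the answers to one question and compute the response rate.

    responses -- list of chosen answer options (one entry per respondent)
    enrolled  -- number of students who received the questionnaire
    """
    counts = Counter(responses)
    total = len(responses)
    rate = total / enrolled
    # Share of *respondents* per option -- note this says nothing
    # about the students who did not answer.
    distribution = {option: counts.get(option, 0) / total for option in SCALE}
    return rate, distribution

# Illustrative data: 100 answers out of 250 enrolled students.
responses = (["strongly agree"] * 43 + ["agree"] * 30 +
             ["neutral"] * 15 + ["disagree"] * 12)
rate, dist = summarise(responses, enrolled=250)
print(f"Response rate: {rate:.0%}")   # 100 of 250 -> 40%
for option in SCALE:
    print(f"{option:>17}: {dist[option]:.0%}")
```

Keeping the response rate next to the distribution makes it harder to over-interpret the percentages: a striking share of "strongly agree" answers means little if only a small, possibly self-selected fraction of the class responded.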
Published 20. October 2015 - 12:48 - Updated 23. May 2017 - 19:17