Although multiple-choice examinations (MCEs) are widely used to evaluate students in nursing education, the quality of the multiple-choice questions (MCQs) that compose them is variable. Tarrant and Ware (2012) reported that few nurse educators have specific training to write high-quality, valid, and reliable MCQs. Constructing quality MCQs is challenging and time-consuming (Hijji, 2017); educators often write MCQs hastily and analyze the written items inadequately, compromising quality (Redmond, Hartigan-Rogers, & Cobbett, 2012).
Whether MCEs can test students' higher cognitive domains, such as critical thinking, has long been debated (Bailey, Mossey, Moroso, Cloutier, & Love, 2012). Some students become test-wise, recognizing clues that suggest correct answers rather than recalling knowledge (Nemec & Welch, 2016). Bailey et al. (2012) reported that poorly constructed MCQs can misinform students. Tarrant, Knierim, Hayes, and Ware (2006) estimated that 50 percent of MCQs fail to differentiate among students with varying levels of understanding of the material tested.
Educators support MCEs for the assessment of student learning (Tarrant et al., 2006), but not all MCEs are valid. Students have also identified fair MCE assessment as an essential component of their satisfaction (Leung, Mok, & Wong, 2008). Because interdisciplinary studies show that item quality improves with appropriate training, it is recommended that universities provide training in MCE composition (Tarrant & Ware, 2012). High-quality MCEs can result from a collaborative strategy in which faculty create a blueprint and develop and review a standard set of items for high-stakes examinations (AlMahmoud, Elzubeir, Shaban, & Branicki, 2015; Leung et al., 2008).
The purposes of this study were twofold: 1) to determine how nurse educators create, review, and modify MCQs and 2) to provide an opportunity for nurse educators to envision their ideal nursing education practice for MCEs given optimal supports.
METHOD
The 4-D Cycle of Appreciative Inquiry (AI; Cooperrider & Whitney, 2002) was used to explore nurse educators' MCE practices. AI focuses on acknowledging an organization's current strengths to facilitate change in a system or community. An academic institution represents a community of educators, and understanding its exam practices requires an approach that accounts for all perspectives. In AI, participants reveal what is working while sharing and valuing one another's insights; they identify themes by describing ideals of what could be and what might work, formulate an action plan, and identify resources (Cooperrider & Whitney, 2002). The researchers delimited data collection to a single faculty of nursing, which is consistent with AI's focus on strengthening a single organization.
Eligible faculty of nursing educators (n = 110) who used MCEs in undergraduate courses were invited to participate. Based on a historical account of courses, it was estimated that half of the invitees used MCEs; to ensure anonymity, online surveys (Fluid Surveys Software, http://fluidsurveys.com) were sent to all faculty. Because of the low online response, paper copies were subsequently distributed to faculty mail slots. Only 14 faculty responded to the online survey, and five returned the paper copy. Because a member of the senior administration was on the research team, participants were not tracked, to avoid coercion and maintain confidentiality. All respondents were invited to participate in a focus group or interview.
The research team developed and piloted a 13-item survey that recorded years of teaching, MCQ and MCE construction practices, and the management of poorly performing questions. Focus group and interview guides were developed using the 4-D Cycle of AI (Cooperrider & Whitney, 2002). The focus groups and interviews were digitally audio-recorded and transcribed verbatim. Investigators independently reviewed the transcripts and identified potential codes and themes using Braun and Clarke's (2006) phases of thematic analysis. A team member with expertise in AI guided the discussion, refinement, and finalization of themes. The study was approved by the institution's Research Ethics Board (REB14-1273).
RESULTS
Of the 19 nurse educators who responded to the survey, eight had taught theory courses for more than 5 years, and six had taken exam-writing workshops within the past 5 years. Commonly used resources included exams inherited from other faculty (68 percent), textbook items (63 percent), commercial test banks (52 percent), and online exam questions (52 percent). The majority (74 percent) removed poorly performing MCQs from student-completed exams, reducing the total achievable score of the exam (e.g., a 50-item exam would be rescored out of 48 after two items were removed).
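As an illustration only, and not part of this study's methods, the sketch below shows the classical item-analysis statistics typically used to flag poorly performing MCQs: the difficulty index (proportion of students answering correctly) and a point-biserial discrimination index (correlation between an item score and the rest of the exam score). The flagging thresholds and the sample data are common rules of thumb and hypothetical values, not figures reported in this study.

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """responses: a students x items matrix of 0/1 item scores."""
    totals = responses.sum(axis=1)
    results = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        difficulty = item.mean()   # proportion answering correctly
        rest = totals - item       # exam score excluding this item
        # Point-biserial discrimination: correlation of item with rest-score
        if item.std() > 0 and rest.std() > 0:
            discrimination = float(np.corrcoef(item, rest)[0, 1])
        else:
            discrimination = 0.0   # undefined when either score is constant
        # Common rule-of-thumb flags (not values taken from the article)
        flag = difficulty < 0.30 or difficulty > 0.90 or discrimination < 0.20
        results.append((j + 1, round(float(difficulty), 2),
                        round(discrimination, 2), flag))
    return results

# Hypothetical data: 6 students x 4 items (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
])
for item_no, p, r, flag in item_analysis(scores):
    print(f"Item {item_no}: difficulty={p}, discrimination={r}, review={flag}")
```

In this hypothetical output, the fourth item (answered correctly by every student) would be flagged for review because it cannot differentiate among students, the concern raised by Tarrant et al. (2006) above.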
Two focus groups (n = 9) and two interviews were held, for a total of 11 participants. Guided by AI, participants proposed ideal practices and the supports needed to strengthen MCEs: 1) guidelines and expectations for faculty members, 2) a faculty-generated test bank, 3) team development, and 4) an assessment blueprint at the curriculum level.
Ideal 1: Guidelines and Expectations
Participants lacked formalized resources to make valid MCEs. "I feel like a lot of it, in my experience, was learning as you go … But at the same time … looking for guidance." Participants wanted guidance on the development of course-specific test plans, defined expectations for peer review of MCQs and MCEs, consistent use of exam statistics across courses, guidelines on managing poorly performing questions, and an exam review guideline for use with students.
Ideal 2: Faculty-Generated Test Bank
Time constraints were a commonly reported barrier; thus, a faculty-generated test bank with previously tested questions was favored. "I don't have enough knowledge and background in making that question to make sure it is a good question. Yeah, I know the content, and I know the stuff but - to ask [you] the right way? So, the test bank is probably better."
Ideal 3: Team Development
Some participants wanted support from colleagues: "I think it is there, it's just trying to figure out how to better utilize the skills we have on some of our teams." The development of term teams with diverse strengths and an overarching reporting structure would provide a supportive learning context.
Ideal 4: Assessment Blueprint at the Curriculum Level
An assessment blueprint at the curricular level would guide nurse educators and promote consistent MCE quality across the curriculum. MCQs should become progressively more challenging as students advance through the curriculum, a progression that a blueprint could track. A process for vetting changes to the blueprint as the curriculum evolves would also be required.
DISCUSSION
The small convenience sample from one institution and the low response rate may have biased the results of this study. In addition, before resources are dedicated, further information is needed about the administration's approach to MCQ/MCE development. Nevertheless, the study highlights that nursing faculties may need to train and support nurse educators to construct fair and valid MCEs.
Ideas to carry forward include faculty-generated test banks aligned with curriculum objectives. Nurse educators could use pretested questions with confidence, which would also save time. Term teams could also provide peer support for reviewing and modifying MCEs. Peer review of MCQs could identify and correct test questions that do not match the cognitive levels stated in the learning objectives. MCE reviews, whether in a classroom setting or in individual student/instructor meetings, give students an opportunity to reflect on exam writing and give instructors an opportunity to correct any misinformation. This is part of the evidence-based educational foundation upon which good exams are built (Hijji, 2017).
CONCLUSION AND RECOMMENDATIONS
This study sharpens the approach to evaluative assessment in nursing education by proposing ideals generated from an inquiry into the opinions and experiences of nurse educators. The researchers recommend identifying and building on the strengths that exist in an academic community, providing training in MCE construction, item writing, and item analysis, and adopting a test blueprint based on curriculum goals. The researchers acknowledge that MCEs, even with well-written and discerning MCQs, cannot be the only method used to assess students; a comprehensive assessment requires multiple methods.
REFERENCES