Abstract
Nurse practitioners (NPs) require clinical competence in advanced health assessment skills to differentiate among normal, variation-of-normal, and abnormal patient findings. High-stakes clinical examinations, using live hands-on simulation scenarios with standardized patients (SPs) or other human role players, are accepted teaching and evaluation tools for NP students. Providing objective, valid, and reliable feedback to students during high-stakes clinical examinations is of considerable value for ongoing skill development. This study examined opportunities to improve the quality of student evaluation in simulation testing modes. A purposive sample of 17 video recordings of students' comprehensive examinations of an SP or physical examination teaching associate (PETA), drawn from a graduate-level nursing health assessment course, was evaluated. Using a standardized rubric, students were scored live, during a comprehensive examination of an SP/PETA, and afterward via a secure web-based video platform by faculty and an independent reviewer. Comparisons of examination scores revealed that score distributions were not similar across evaluator groups. Median examination scores differed significantly between groups, with faculty median examination scores significantly higher than those of SPs/PETAs. The efficiency of student evaluation may be increased by improving reviewer training, reducing checklist length, and adopting electronic scoring. Developing an exemplary teaching video that explains and details expected student skill performance would allow reviewers to practice and improve competence in reliable scoring, reduce scorers' time and effort, and increase scoring accuracy.