The current nursing faculty shortage has made it difficult for administrators to fill vacant faculty positions (American Association of Colleges of Nursing, 2017). The shortage has caused administrators to become more dependent on adjunct faculty to educate students and foster their critical thinking skills in the clinical setting. Overall, the use of adjunct faculty has risen significantly, increasing by 422.1 percent between 1970 and 2003 (Schuster & Finkelstein, 2006). The National Center for Education Statistics (2018) reported a 74 percent increase in part-time faculty between 1999 and 2011. The growing number of adjunct faculty may prove to be a challenge for administrators, who are called upon to evaluate faculty effectiveness and ensure quality.
Little information exists in the literature regarding the evaluation of adjunct faculty. Langen (2011) identified that student evaluations are most often used in the evaluation process. However, the reliability of student evaluations for accurately assessing teaching remains a challenge (Annan, Tratnack, Rubenstein, Metzler-Sawin, & Hulton, 2013; Miles & House, 2015). Miles and House determined that student evaluations were affected by many variables and questioned whether they can accurately reflect faculty performance. How to evaluate teaching in the clinical area has not been adequately defined in the nursing literature.
To determine whether the methods of evaluation used by administrators are effective, Lean Higher Education theory, adapted from Toyota's lean principles, was used to guide this study (Balzer, 2010). Balzer (2010) identified the importance of evaluating a process to determine its value from the perspective of its beneficiaries, noting that the flow of the process should be mapped to determine the value of each step. As there is little literature regarding the process of evaluating adjunct clinical faculty, the first step in this study was to determine how adjunct clinical faculty are evaluated. The next step was to explore nursing administrators' experience with the evaluation of adjunct clinical faculty in order to understand the evaluation process.
METHOD
The institutional review board of a small private Catholic university approved this descriptive exploratory study. The author created a 10-item survey instrument; four nurse administrators with experience in the evaluation of faculty reviewed the survey to determine validity, and modifications were made based on their feedback. Administration of the survey and semistructured follow-up interviews with a small group of administrators allowed for triangulation of the data and elaboration on the evaluation process. Interviews were audio-recorded, transcribed verbatim by the researcher, and sent via email to participants to ensure their descriptions were accurately transcribed. Nursing programs accredited by the Commission on Collegiate Nursing Education and the Middle States Commission on Higher Education were identified through their websites. The survey was sent to 90 administrators of traditional undergraduate nursing programs. The survey sample consisted of 26 nursing school administrators responsible for the hiring of adjunct clinical faculty. In addition, seven administrators from Pennsylvania, New York, and Washington, DC, agreed to be interviewed.
The survey was administered via SurveyMonkey®. Demographic information collected included administrator title, type of institution, student population, whether the university employed adjunct clinical faculty, the number of adjunct faculty used in the nursing program, and the method used for evaluation. Two open-ended questions addressed the process by which administrators review and share evaluation results with adjunct clinical faculty. The 26 survey responses were collated, and the data were tallied. Using Colaizzi's (1978) process for data analysis, each interview transcript was read to obtain a sense of the content and to extract significant statements. Themes and categories were developed from the significant statements and reviewed with a qualitative methodology expert to verify and validate the findings. Themes were categorized and reevaluated multiple times against clarifying statements from the transcripts to devise appropriate themes. Data saturation was reached.
SURVEY RESULTS
Survey respondents were evenly distributed among private religious, private secular, and public institutions of higher education. Respondents identified as chair, dean, director, or clinical coordinator; the majority identified as chair. All participants reported employing adjunct clinical faculty; 47 percent reported employing more than 20 adjuncts per semester.
Overall, 92 percent of respondents reported student evaluation as the primary method of evaluation; 54 percent identified course coordinators or faculty as the person conducting the evaluation. Self-evaluation, chair/dean evaluation, and peer evaluation were each reported by fewer than 30 percent of participants. Administrators stated that the primary use of evaluation data, beyond accreditation requirements, was to determine potential reappointment for subsequent semesters. Participants also identified that evaluations can be used to improve or enhance adjunct faculty development. Evaluation information was shared via letter (25 percent), a meeting with the adjunct (63 percent), or online access to evaluation results (12 percent). Most administrators (63 percent) reported that evaluation results were used to determine whether faculty would be rehired.
INTERVIEW FINDINGS
Seven participants agreed to be interviewed one on one. The following themes were identified from the data they provided.
Ability as Clinician Versus Educator
Six participants reported role ambiguity regarding adjunct faculty's ability to assume the role of educator. Participant statements included "not understanding the adjunct faculty role" and that an adjunct may be a "nice person but may not be an effective instructor." One participant commented: "Even those with their masters and many of the new ones are not used to being in a teaching role."
Blurring of Boundaries With Students
All but one participant identified that adjunct faculty can have difficulty establishing boundaries with students, including understanding appropriate ways of interacting. One participant noted that students saw adjunct clinical faculty as intimidating and hesitated to evaluate faculty harshly for fear of being failed themselves: "They [students] were deathly afraid they were going to get failed in clinical." In cases of conflict with faculty, students may not be capable of "navigating situations that were over their heads."
Incident That Raises a Red Flag
All participants recounted a story or incident that occurred in the clinical area with an adjunct that caused concern. One participant reported: "Some incident takes place on the clinical site that warrants or puts up red flags regarding what that person has actually been doing." Such events, when identified by a student or clinical site, triggered concern among administrators.
DISCUSSION/IMPLICATIONS
Twenty-five of the 26 respondents reported using student evaluations for adjunct faculty, noting that low student response rates can make the data unreliable. Stark and Freishtat (2014) identified that small sample sizes, like those in clinical groups, are susceptible to the "luck of the draw." Previous studies also reported that student evaluations were more beneficial when used formatively, for faculty improvement, rather than summatively (Stark & Freishtat, 2014). Administrators reported being comfortable not bringing back faculty with poor evaluations.
During interviews, all but one participant stated that they had difficulty finding expert clinicians who were also qualified educators. The literature supports that adjunct clinical faculty often do not feel prepared for the educator role, citing a lack of knowledge of teaching strategies, educational philosophies, and technology (Jacobson, 2013). Participants in this study reported being unable to consistently evaluate adjunct clinical faculty, who may not feel supported or prepared for their role as educators. All those interviewed reported a story or example of an adjunct clinical faculty member who did not demonstrate the traits necessary to be an effective educator. The significant increase in the use of adjunct clinical faculty in nursing education highlights the need for a consistent evaluation process.
This study used a sample of universities with specific accreditation, which limits generalization of the findings to similar schools. The small number of participants completing the online survey also limits generalizability. The study provides only the voice of nursing administrators and does not explore the views of adjunct clinical faculty, personnel at clinical sites, or students. Understanding how adjunct clinical faculty view their evaluation may provide additional insight into how to improve the process.
CONCLUSION
The nurse faculty shortage may continue to increase the need for adjunct clinical faculty. A comprehensive evaluation process should support improved compliance with quality assurance standards. Although many participants reported offering some type of quality assurance, several identified issues with little resolution other than termination. The development of a process that protects all stakeholders, most importantly students and patients, would seem to be a priority based on these findings.
REFERENCES