As health information technology (HIT) and health information systems (HISs) have become widely applied in healthcare settings, researchers and clinicians have conducted studies to evaluate the outcomes and effectiveness of using technology in patient care. A technology evaluation framework is a set of guidelines for conducting technological appraisals of designs, objectives, subjects, methods, and data analysis processes (Eisenstein, Juzwishin, Kushniruk, & Nahm, 2011; Yusof, Papazafeiropoulou, Paul, & Stergioulas, 2008). These guidelines help identify problems and barriers in the implementation process, as well as the outcomes and benefits of an HIS, in order to facilitate improvements in technology applications and to guide organizational decision making (Anderson & Aydin, 2010; Sockolow, Bowles, & Rogers, 2015; Yusof, Kuljis, Papazafeiropoulou, & Stergioulas, 2008).
Researchers have proposed various approaches to evaluating technology. Several have argued that a socio-technical approach should be applied because the ultimate target of these evaluations is the social practices within an organization rather than the technology itself. Yusof and colleagues suggested an evaluation framework that includes human, organizational, and technological factors (Yusof, Papazafeiropoulou, et al., 2008; Yusof, Kuljis, et al., 2008). Sockolow et al. (2015) advised that a framework for evaluating HIT should consider organizational, systematic, environmental, and professional factors. In the following sections, I introduce a multidimensional framework that addresses four factors and another approach that draws on current theories and models.
Framework Based on Technology, Human, Social, and Timing Factors
Technology evaluation frameworks may be organized around four major factors: technology, human, social, and timing.
Technology Factor
Technology is central to the evaluation framework (Yusof, Papazafeiropoulou, et al., 2008; Yusof, Kuljis, et al., 2008). Anderson and Aydin (2010) claimed that computers represent an external force that effects changes in the behaviors of individuals and organizational units. The technology factor has been defined in terms of HIS success factors in the areas of system quality, information quality, and service quality (Yusof, 2015). One proposed HIT evaluation framework includes the structural qualities of hardware (system availability), software (usability), and functionality (tools and resources; Sockolow et al., 2015). Studies that adopt this perspective emphasize device function and performance over environmental and social interaction-related influences (Anderson & Aydin, 2010).
Human Factor
Anderson and Aydin (2010) described technological design as a controlled process that addresses user needs. Studies that adopt this approach tend to be optimistic regarding the influence of users and designers on technical features. Eisenstein et al. (2011) presented a bioinformatics framework that includes three evaluation dimensions: domain, mechanism, and timing. The domain dimension determines whether the evaluation measures the information intervention or its outcomes. Formative and summative evaluations have also been distinguished: the former aim to optimize a technology's function, while the latter aim to ensure that people benefit from using the technology. The authors highlighted the difference between these two concepts using the example of pacemaker manufacturers, who conduct formative evaluations to ensure that their devices function as expected (the technology factor) and then conduct summative evaluations to ensure the safety and efficacy of these devices from the user's perspective (the human factor).
Social Factor
Humans use technology, and the environment (e.g., organizational regulations, social norms) influences human behaviors. Moreover, complex social interactions affect the use and impact of technology (Anderson & Aydin, 2010). For example, while nurses may welcome an online documentation system, physicians may resist using the system in front of patients due to their lack of familiarity with its functions. Yusof, Kuljis, et al. (2008) proposed an HIS evaluation framework called human, organization, and technology-fit (HOT-fit). To illustrate this framework, the researchers interviewed users of a critical-care information system in order to determine its outcomes. The framework contains the technology factor (HIS success factors: system quality, information quality, and service quality), the human factor (system development and system use), the organization factor (structure and environment), and net benefits (overall IS impact; Yusof, 2015).
Timing Factor
Eisenstein et al. (2011) argued that a technology must be in use for a sufficient period of time to generate an adequately large volume of valid data for evaluation. Timing determines whether an evaluation occurs before or after a technology is implemented. The system development life cycle (SDLC) illustrates the timing dimension because it addresses the distinct system development processes used in the planning, analysis, design, implementation, and support phases. HIT evaluation frameworks typically use a cross-sectional design to compare results among technology, human, and social factors (Sockolow et al., 2015; Yusof, 2015). Because technology designs and development processes may be improved, organizational regulations and policies may be revised, and user behaviors may change over time, the timing factor should be considered when interpreting results.
Evaluation Framework Based on Existing Theories
While an evaluation framework may provide guidelines for conducting research, another approach is to apply existing theory to explore the relationships among variables of interest. Instead of using a complex framework that includes three or four major factors (and their respective sub-factors), an appropriate theory with explicit variables may be sufficient to achieve a study's purpose. For example, Lewin's change theory, the technology acceptance model (TAM), and Rogers' innovation diffusion theory may each be applied to guide evaluation studies.
Change Theory
Change theory has been applied to homecare computers used to document nursing records (Geraci, 1997). Lippitt's theory of change, which analyzes the process of change, includes seven stages: diagnosing the problem, assessing the motivation and capacity for change, assessing the change agent's motivation and resources, selecting progressive change objectives, choosing an appropriate role for the change agent, maintaining the change, and terminating the helping relationship (Kritsonis, 2005). Lippitt's theory is derived from Lewin's theory, which includes three stages: unfreezing, moving, and refreezing. In Lewin's theory, the driving forces toward change and the restraining forces against it must be identified, especially during the unfreezing stage. This theory has been used to evaluate the use of personal digital assistants in nursing documentation (Lee, 2006), with results indicating that an additional stage (the anticipatory stage) should be included in the process because nurses may request further functions for a device or system as they become conversant and comfortable with the technology.
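Lewin's force-field analysis lends itself to a simple illustrative formalization. As a rough sketch (this notation is mine, not part of Lewin's original formulation), readiness for change can be expressed as a weighted balance of forces:

\[
\text{readiness for change} \;\propto\; \sum_{i} w_i D_i \;-\; \sum_{j} v_j R_j,
\]

where \(D_i\) are the driving forces, \(R_j\) are the restraining forces, and \(w_i\) and \(v_j\) represent their perceived strengths. Unfreezing becomes feasible when the balance is positive, which is why identifying and weighing both sets of forces precedes any attempt at change.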
Technology Acceptance Model (TAM)
The technology acceptance model (TAM) is another widely used model. If users perceive that a device is easy to use and useful, they form positive attitudes toward, and intentions to use, that device (Davis, 1993). TAM has been used to survey elderly clients of telecare programs regarding their use of the service and its effect on their quality of life (QOL; Chou, Chang, Lee, Chou, & Mills, 2013). The results showed that elderly clients who had better social welfare status and health conditions and who used the service more frequently had better QOL and adopted the service more easily. Nonetheless, whether usage intention leads directly to actual use remains to be studied. The factors that moderate the relationship between intention and actual behavior have been investigated in the context of nurses' use of nursing information systems (Lin et al., 2016), with results revealing intention stability and prior experience as the key moderators of the adoption process.
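TAM's causal structure is commonly estimated as a set of linear structural equations. The specification below is an illustrative sketch rather than a definitive formulation: PEOU denotes perceived ease of use, PU perceived usefulness, A attitude toward use, BI behavioral intention, X external variables, and the \(\beta\) coefficients and error terms \(\varepsilon\) are estimated from survey data:

\[
\begin{aligned}
PU &= \beta_1\,PEOU + \beta_2\,X + \varepsilon_1,\\
A  &= \beta_3\,PU + \beta_4\,PEOU + \varepsilon_2,\\
BI &= \beta_5\,A + \beta_6\,PU + \varepsilon_3.
\end{aligned}
\]

Actual use is then modeled as a function of BI, which is precisely the intention-to-use link that the studies cited above call into question.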
Rogers' Innovation Diffusion Theory (IDT)
Rogers (2003) proposed a multi-stage process to describe how individuals adopt or reject an innovation. Users first develop awareness of an innovation and then form attitudes toward it based on its perceived relative advantage, compatibility, complexity, trialability, and observability. Based on four of these perceived attributes (relative advantage, compatibility, complexity, and observability), a questionnaire was mailed to older and chronically ill clients to examine their adoption of home telecare (Peeters, de Veer, van der Hoek, & Francke, 2012). The study found that although the device was easy to use, previous usage experience was vital to the adoption of telecare. Rogers' IDT has also been applied more broadly to the outcomes of a computer-assisted therapy program (Elison, Ward, Davies, & Moody, 2014): a qualitative study interviewed managers, practitioners, peer mentors, and service users in order to analyze the results in terms of personal/organizational characteristics, knowledge, perception, innovation, adoption-related decision making, and adoption or continued adoption.
Other Issues of Concern
In addition to the abovementioned theories, the five Ws, which are widely used in research, may be applied to an evaluation framework. Yusof, Papazafeiropoulou, et al. (2008) suggested asking the following evaluation questions: why (the evaluation objective), who (which stakeholders' perspectives), when (which stage of the system development life cycle), what (the focus of the evaluation), and how (the evaluation method). Anderson and Aydin (2010) described various evaluation methods: interviews (to explore why and what outcomes occur), surveys (to identify relationships among variables), work sampling (to measure patterns of behavioral change), and computer simulation (to predict expected outcomes). However, they concluded that a mixed-methods approach is preferred in order to measure the complex social interactions affecting technology use (Anderson & Aydin, 2010).
Conclusions
Evaluation is a complex process that should incorporate the perspectives of various stakeholders and implementation time periods. Therefore, evaluations that integrate disparate perspectives, such as technological, social, organizational, and timing factors, may enhance the success of evaluation efforts. However, rather than applying a complicated framework as a standard measure for all issues, novice researchers and clinicians may select succinct theories or models that target the specific technology features of interest in order to make the most effective use of limited resources and to obtain results within a reasonable timeframe.
References