Abstract
In determining intervention effects, quality improvement researchers typically use statistical testing: Fisher's "significance testing" and/or Neyman and Pearson's "hypothesis testing." Such tests are employed in an effort to demonstrate whether a statistically and practically significant difference exists between experimental and comparison group(s). Although power analysis is often not considered when these tests are applied, this article postulates potential benefits of including power analysis in the early stages of a study's design. The two procedures, developed by Fisher and by Neyman and Pearson, are reviewed. Important background statistical concepts, including α values, β values, P values, effect sizes, and statistical power analysis, are defined and discussed. A proposed statistical approach that combines the Fisher and Neyman-Pearson procedures with power analysis for sample size determination and effect size estimation is described and illustrated in a hypothetical research context. The benefits of this combination are discussed within a framework of adding value to study design and data analysis.