
* These guidelines provide a framework for reporting formal, planned studies designed to assess the nature and effectiveness of interventions to improve the quality and safety of care.

 

* It may not always be appropriate or even possible to include information about every numbered guideline item in reports of original studies, but authors should at least consider every item in writing their reports.

 

* Although each major section (i.e., Introduction, Methods, Results, and Discussion) of a published original study generally contains some information about the numbered items within that section, information about items from one section (for example, the Introduction) is also often needed in other sections (for example, the Discussion).

 

Title and abstract: Did you provide clear and accurate information for finding, indexing, and scanning your paper?

 

1. Title

 

a. Indicates the article concerns the improvement of quality (broadly defined to include the safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity of care)

 

b. States the specific aim of the intervention

 

c. Specifies the study method used (for example, "A qualitative study," or "A randomized cluster trial")

 

2. Abstract

 

Summarizes precisely all key information from various sections of the text using the abstract format of the intended publication

 

Introduction: Why did you start?

 

3. Background knowledge

 

Provides a brief, non-selective summary of current knowledge of the care problem being addressed, and characteristics of organizations in which it occurs

 

4. Local problem

 

Describes the nature and severity of the specific local problem or system dysfunction that was addressed

 

5. Intended improvement

 

a. Describes the specific aim (changes/improvements in care processes and patient outcomes) of the proposed intervention

b. Specifies who (champions, supporters) and what (events, observations) triggered the decision to make changes, and why now (timing)

 

6. Study question

 

States precisely the primary improvement-related question and any secondary questions that the study of the intervention was designed to answer

 

Methods: What did you do?

 

7. Ethical issues

 

Describes ethical aspects of implementing and studying the improvement, such as privacy concerns, protection of participants' physical well-being, and potential author conflicts of interest, and how ethical concerns were addressed

 

8. Setting

 

Specifies how elements of the local care environment considered most likely to influence change/improvement in the involved site or sites were identified and characterized

 

9. Planning the intervention

 

a. Describes the intervention and its component parts in sufficient detail that others could reproduce it

 

b. Indicates main factors that contributed to choice of the specific intervention (for example, analysis of causes of dysfunction; matching relevant improvement experience of others with the local situation)

 

c. Outlines initial plans for how the intervention was to be implemented: e.g., what was to be done (initial steps; functions to be accomplished by those steps; how tests of change would be used to modify intervention), and by whom (intended roles, qualifications, and training of staff)

 

10. Planning the study of the intervention

 

a. Outlines plans for assessing how well the intervention was implemented (dose or intensity of exposure)

 

b. Describes mechanisms by which intervention components were expected to cause changes, and plans for testing whether those mechanisms were effective

 

c. Identifies the study design (for example, observational, quasi-experimental, experimental) chosen for measuring impact of the intervention on primary and secondary outcomes, if applicable

 

d. Explains plans for implementing essential aspects of the chosen study design, as described in publication guidelines for specific designs, if applicable (see, for example, http://www.equator-network.org)

 

e. Describes aspects of the study design that specifically concerned internal validity (integrity of the data) and external validity (generalizability)

 

11. Methods of evaluation

 

a. Describes instruments and procedures (qualitative, quantitative, or mixed) used to assess a) the effectiveness of implementation, b) the contributions of intervention components and context factors to effectiveness of the intervention, and c) primary and secondary outcomes

 

b. Reports efforts to validate and test reliability of assessment instruments

 

c. Explains methods used to ensure data quality and adequacy (for example, blinding; repeating measurements and data extraction; training in data collection; collection of sufficient baseline measurements)

 

12. Analysis

 

a. Provides details of qualitative and quantitative (statistical) methods used to draw inferences from the data

 

b. Aligns unit of analysis with level at which the intervention was implemented, if applicable

 

c. Specifies degree of variability expected in implementation, change expected in primary outcome (effect size), and ability of study design (including size) to detect such effects

 

d. Describes analytic methods used to demonstrate effects of time as a variable (for example, statistical process control)
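
To illustrate the kind of time-based analysis named in item 12d, the sketch below builds an individuals (XmR) statistical process control chart for a hypothetical monthly care-process measure. It is a minimal illustration, not part of the SQUIRE guidelines: the data, the baseline/post-intervention split, and the function name are assumptions for demonstration only.

```python
# Minimal sketch of an individuals (XmR) control chart, one common form of
# statistical process control. All data below are hypothetical.

import numpy as np

def xmr_limits(values):
    """Return the center line and lower/upper control limits for an XmR chart."""
    values = np.asarray(values, dtype=float)
    center = values.mean()
    # Average moving range between consecutive observations
    mean_moving_range = np.abs(np.diff(values)).mean()
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for subgroups of 2)
    return center, center - 2.66 * mean_moving_range, center + 2.66 * mean_moving_range

# Hypothetical monthly proportions of patients receiving the target care process:
# six baseline months followed by six post-intervention months.
monthly_rates = [0.62, 0.59, 0.64, 0.61, 0.60, 0.63,
                 0.71, 0.74, 0.73, 0.76, 0.75, 0.78]

# Limits are computed from the baseline period only, so post-intervention points
# falling outside them signal special-cause variation.
center, lcl, ucl = xmr_limits(monthly_rates[:6])

for month, rate in enumerate(monthly_rates, start=1):
    signal = "special cause" if (rate < lcl or rate > ucl) else "common cause"
    print(f"month {month:2d}: rate = {rate:.2f} ({signal})")
```

Plotting the same points against the center line and control limits displays the effect of time directly (item 12d), and the baseline variability informs the judgment called for in item 12c about whether the study could detect the expected change.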

 

Results: What did you find?

 

13. Outcomes

 

a. Nature of setting and improvement intervention

 

I. Characterizes relevant elements of setting or settings (for example, geography, physical resources, organizational culture, history of change efforts), and structures and patterns of care (for example, staffing, leadership) that provided context for the intervention

 

II. Explains the actual course of the intervention (for example, sequence of steps, events or phases; type and number of participants at key points), preferably using a time-line diagram or flow chart

 

III. Documents degree of success in implementing intervention components

 

IV. Describes how and why the initial plan evolved, and the most important lessons learned from that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)

 

b. Changes in processes of care and patient outcomes associated with the intervention

 

I. Presents data on changes observed in the care delivery process

 

II. Presents data on changes observed in measures of patient outcome (for example, morbidity, mortality, function, patient/staff satisfaction, service utilization, cost, care disparities)

 

III. Considers benefits, harms, unexpected results, problems, failures

 

IV. Presents evidence regarding the strength of association between observed changes/improvements and intervention components/context factors

 

V. Includes summary of missing data for intervention and outcomes

 

Discussion: What do the findings mean?

 

14. Summary

 

a. Summarizes the most important successes and difficulties in implementing intervention components, and main changes observed in care delivery and clinical outcomes

 

b. Highlights the study's particular strengths

 

15. Relation to other evidence

 

Compares and contrasts study results with relevant findings of others, drawing on broad review of the literature; use of a summary table may be helpful in building on existing evidence

 

16. Limitations

 

a. Considers possible sources of confounding, bias, or imprecision in design, measurement, and analysis that might have affected study outcomes (internal validity)

 

b. Explores factors that could affect generalizability (external validity), for example: representativeness of participants; effectiveness of implementation; dose-response effects; features of local care setting

 

c. Addresses likelihood that observed gains may weaken over time, and describes plans, if any, for monitoring and maintaining improvement; explicitly states if such planning was not done

 

d. Reviews efforts made to minimize and adjust for study limitations

 

e. Assesses the effect of study limitations on interpretation and application of results

 

17. Interpretation

 

a. Explores possible reasons for differences between observed and expected outcomes

 

b. Draws inferences consistent with the strength of the data about causal mechanisms and size of observed changes, paying particular attention to components of the intervention and context factors that helped determine the intervention's effectiveness (or lack thereof), and types of settings in which this intervention is most likely to be effective

 

c. Suggests steps that might be modified to improve future performance

 

d. Reviews issues of opportunity cost and actual financial cost of the intervention

 

18. Conclusions

 

a. Considers overall practical usefulness of the intervention

 

b. Suggests implications of this report for further studies of improvement interventions

 

Other information: Were there other factors relevant to the conduct and interpretation of the study?

 

19. Funding

 

Describes funding sources, if any, and role of funding organization in design, implementation, interpretation, and publication of study

 

Editor's note: Quality improvement (QI) studies focus on changes in care delivery that can improve the efficiency and outcomes of care. A change or intervention that works in one setting or facility may not work in another without modification. Understanding the context (the characteristics of the setting in which a QI study takes place) is essential if the intervention is to be publicized outside of an institution. In addition, if sufficient rigor isn't applied to the design of QI work, the conclusions drawn from analysis of the change's impact may be incorrect.

 

In 2005, standards for reporting QI studies were published in the journal Quality and Safety in Health Care.1 Health Care Improvement Leadership Development, a program at the Dartmouth Institute for Health Policy and Clinical Practice, has led the continuing refinement of the guidelines with partial funding from the Robert Wood Johnson Foundation. The aim of the work is to "stimulate the publication of high-caliber improvement studies, and to increase the completeness, accuracy, and transparency of published reports of that work."2 A revision of the guidelines, rationales for the changes, and descriptions of the guidelines' potential uses are included in an article published in the October 2008 issue of Quality and Safety in Health Care. The guideline portion of that publication is reprinted below and is available at http://qshc.bmj.com/content/vol17/Suppl_1.

 

More and more reports of QI work involving nurses are being submitted to AJN. We will expect authors of QI reports to prepare manuscripts in accordance with these guidelines. The guidelines can also be used to help those conducting QI projects develop more rigorous and thoughtful study designs.

 

Diana J. Mason, PhD, RN, FAAN, editor-in-chief

 

REFERENCES

 

1. Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project. Qual Saf Health Care 2005;14(5):319-25.

 

2. Davidoff F, et al. Development of the SQUIRE guidelines: the evolution of a consensus project. Qual Saf Health Care; in press.