Introduction
Belgian Red Cross-Flanders (BRC-F) is active at home and abroad in many different fields: from blood supply to emergency aid. In 2005, BRC-F spearheaded an initiative (European First Aid Manual), together with a group of European experts, to update training for basic first responders according to the best available medical and scientific data. Using evidence-based methodology, we identified effective interventions and also interventions that were outdated, ineffective or even harmful. This led to the publication of validated European first aid guidelines1 and an accompanying user manual. Following this project, it became part of our strategy to support all BRC-F programmes with evidence-based practice by developing evidence-based recommendations and practice guidelines. For many of the interventions and activities conducted in all fields of Red Cross activity, for example in the field of disaster management, there are no systematic reviews or evidence-based guidelines available yet. Therefore, a Centre for Evidence-Based Practice (CEBaP) was founded, with the task of developing practice guidelines and systematic reviews that answer questions relevant to our organisation. This centre is directed by a Steering Committee, composed of the operational managers of the different Red Cross services and chaired by the Chief Executive Officer/Secretary General. The Steering Committee determines the priority of projects according to fixed criteria.
Guideline development is known to require considerable effort and money.2 To create trustworthy guidelines in a timely and cost-effective way, we developed a methodology for an action-oriented organisation that needs to balance a quick response to a need against high-quality work. More details on the types of projects, the terminology and the methodology are given in the following sections.
Methods
We created a charter in which we describe our approach to developing evidence-based guidelines, as opposed to systematic reviews, in a timely and cost-effective way, based on existing methodologies. To obtain an overview of existing methodologies used to develop different types of reviews and guidelines, we consulted the following sources: the Appraisal of Guidelines for Research and Evaluation (AGREE) checklist,3 the Cochrane Handbook for Systematic Reviews of Interventions,4 guideline manuals of well-known guideline developers such as the Scottish Intercollegiate Guidelines Network (SIGN; http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013) and the National Institute for Health and Clinical Excellence (NICE; http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013), international conferences on evidence-based methodology such as the Cochrane Colloquium and the Guidelines International Network (GIN) conference, and personal conversations with methodologists. In addition, we performed a MEDLINE search of the last 10 years (via the PubMed interface; search last updated on 30 June 2013), using search terms such as 'Practice Guidelines as Topic'[Mesh], 'Review Literature as Topic'[Mesh], 'Evidence-Based Practice'[Mesh], 'rapid review', 'scoping review', 'pragmatic review' and 'practice guideline'. We selected the articles that best clarified the various methodologies and terminologies used in the development of evidence-based end products. Reference lists and related citations of relevant articles were also checked. The information we collected was synthesised narratively and used as a basis for the development of our own methodology, which is described in the following sections.
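To give a concrete impression of such a search, the terms listed above can be combined into a single PubMed query and scripted. The following minimal sketch uses Biopython's Entrez module; the exact query string, the 10-year date window and the contact address are illustrative assumptions, not the charter's published search strategy.

```python
# A minimal sketch of a scripted PubMed search, assuming Biopython is installed.
# The combined query and the 10-year date window are illustrative reconstructions.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '("Practice Guidelines as Topic"[Mesh] OR "Review Literature as Topic"[Mesh] '
    'OR "Evidence-Based Practice"[Mesh] OR "rapid review" OR "scoping review" '
    'OR "pragmatic review" OR "practice guideline") '
    'AND ("2003/06/30"[PDAT] : "2013/06/30"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")
```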
Semantics and quality in evidence-based practice
The success of evidence-based practice has led to a rise in review studies.5 Grant and Booth identified 14 commonly published types of reviews, including literature reviews, systematic reviews, systematic searches, rapid reviews and scoping reviews. These types of reviews all show subtle variations in purpose (e.g. scoping, giving a rapid answer, etc.), methodology (systematic versus nonsystematic, quantitative versus qualitative or mixed, the type of primary research that is considered, etc.) and the type of question dealt with (e.g. 'What is the impact/cost of an intervention?', 'What is the effect of an approach to social policy?', etc.). Their value is therefore not always clear to the reader.6-8 Furthermore, results can be reported in a narrative (often called a 'narrative review') or systematic (e.g. tabular) way. All these types of reviews can form the scientific basis of guidelines, and consequently guidelines also differ in quality.
While developing this methodological charter, we encountered and struggled with these linguistic and methodological problems. In the following sections, we discuss in more detail how an action-oriented organisation such as the Red Cross deals with these problems by using three categories of evidence-based materials ('rapid reviews', 'systematic reviews' and 'guidelines').
Rapid reviews
Decision makers sometimes need a quick answer to a particular question. As a consequence, a variety of rapid review methodologies currently exist, all originally derived from the systematic review methodology. However, it often remains unclear which part of a rapid review is carried out more rapidly than in a systematic review.9 Products developed using this kind of methodology carry different names, such as 'rapid reviews', 'pragmatic systematic reviews', 'scoping reviews', 'rapid responses', 'evidence summaries', 'evidence maps', 'scoping studies', and so on.6,9-16 The 'rapid review' terminology and methodology is widely used among health technology assessment organisations to deliver evidence to decision makers in a shortened time frame, typically 1-6 months (as opposed to 1-2 years for a systematic review).9-11,14 Additionally, BestBETs or 'Best Evidence Topics' offers a database of 'pragmatic systematic reviews' for clinical practice, because clinicians also need quick answers (http://bestbets.org/, accessed 2 September 2013). The Cochrane Collaboration has also developed 'Cochrane response rapid reviews' (http://innovations.cochrane.org/response, accessed 2 September 2013). A survey among health technology assessment organisations observed that systematic reviews were always included in their rapid reviews, and that randomised and nonrandomised trials were included in 94% and 83% of their rapid reports, respectively. In 75% of the reviews the quality of the evidence was assessed, and in 67% an expert panel was involved.11 A more recent study of 49 rapid reviews addressed some other methodological aspects: 47% of the rapid reviews did not have a clear research question; 61% were developed by two reviewers; 67% searched the three databases MEDLINE, Embase and CENTRAL; 69% reported the full search strategy; 47% reported the quality assessment method used and 88% presented the results in summary data tables.9 It is clear from both surveys that a huge variety in methodology exists among rapid reviews, mainly because, to date, there has been no clear guidance for authors of rapid reviews. Because a rapid review methodology could introduce bias, rapid review authors should, at a minimum, report the potential limitations of this type of review.9
In general, BRC-F uses the rapid review methodology to explore a possible new topic after a need has been identified in the field by one of the operational Red Cross services; we therefore call it a 'scoping review'. To define the research question as accurately as possible, the input of the operational service is included. The aim of a scoping review is to get an initial idea of the content, quantity and quality of the available evidence; it is used only as an internal document, to prepare a systematic review or guideline project. The scoping review is performed using a specific search strategy in at least two databases (The Cochrane Library, MEDLINE), based on the methodological framework proposed by Arksey and O'Malley16 and Levac et al.15 After finalising the scoping review, the CEBaP Steering Committee decides whether a systematic review or guideline project will be initiated. If no new project is initiated, the result of the scoping review is used only to support internal decision making. The decision to follow up the scoping review is based on the following criteria: urgency; potential impact (i.e. impact on practice and society, opportunity for a publication, intellectual property and quality of the body of evidence); economic and financial impact on BRC-F; and relevance for BRC-F (does it fit into our core business and our strategic plan?). The same criteria are also used to prioritise projects in case there are more project requests than CEBaP can handle. Figure 1 illustrates the workflow used to choose between the different types of projects.
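The fixed-criteria decision applied by the Steering Committee can be made concrete with a small sketch. The four criteria are taken from the paragraph above, but the scoring scale and threshold below are hypothetical, since the charter does not prescribe a numerical formula.

```python
# Hypothetical illustration of the fixed-criteria follow-up decision.
# The criteria names come from the text; the scale and threshold are invented here.
from dataclasses import dataclass

@dataclass
class ScopingReviewAssessment:
    urgency: int           # 0 (none) to 3 (very urgent)
    potential_impact: int  # practice/society, publication, IP, quality of evidence
    economic_impact: int   # economic and financial impact on BRC-F
    relevance: int         # fit with core business and strategic plan

def decide(a: ScopingReviewAssessment, threshold: int = 6) -> str:
    """Decide whether the scoping review is followed up (threshold is illustrative)."""
    score = a.urgency + a.potential_impact + a.economic_impact + a.relevance
    if score >= threshold:
        return "initiate systematic review or guideline project"
    return "internal decision support only"

print(decide(ScopingReviewAssessment(urgency=2, potential_impact=3,
                                     economic_impact=1, relevance=2)))
```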
Systematic reviews
Systematic reviews have been developed to answer questions about the effect of interventions; they give a systematic, documented overview of the available evidence on a given topic. A systematic review literally means 'performing a literature review in a systematic way', and according to the Shorter Oxford English Dictionary 'systematic' means 'arranged or conducted according to a system, plan, or organised method; involving or observing a system'.17 The Cochrane Collaboration uses the strictest methodological criteria for the development of systematic reviews and defines a systematic review as 'a review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review'.4 These systematic and explicit methods are clearly described in the Cochrane Handbook.4 However, in reality, there is no single systematic literature review method, and many variations and gradations are used when performing a systematic literature review. This can sometimes lead to different reviews on the same topic coming to different conclusions.18 It is therefore highly recommended to be transparent about decisions made during the development of a systematic review, for example about what evidence is included in the review as 'best evidence'.18
In BRC-F, a systematic review is developed after a scoping review, using the methodological principles of Cochrane, if we want to use the systematic review for a policy change, the answer to the question is not urgent, there is a realistic chance that it will result in a peer-reviewed publication or the quality of the body of evidence is moderate to high. Examples of such BRC-F projects are a systematic review on the effect of nonresuscitative first aid training,19 a systematic review on the safety and effectiveness of blood from hemochromatosis patients as donor blood20 and a systematic review investigating the scientific basis behind the blood type diet.21
Guidelines
The US Institute of Medicine (IOM) defines clinical practice guidelines as 'systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances'.22 An updated definition from 2011 states that 'clinical practice guidelines are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options'.23 More broadly, and not limited to a clinical context, terminologies such as 'best practice guidelines', 'practice guidelines' or 'guidelines' are being used. Terms such as 'guidance' or 'guide' are currently used in reports that aim to give advice, rather than statements of best practice, or to provide a model of how to deal with particular situations. Guidelines can be developed in several ways and, in the past, guidelines were often developed according to the so-called Good Old Boys Sat Around the Table method, based mainly on the knowledge, opinion and received wisdom of experts rather than on evidence collected through a systematic literature review. Guidelines developed in this way may be biased by undeclared conflicts of interest and by a lack of knowledge or outdated knowledge.24 A more formal way to develop guidelines is through a meeting of experts who define 'consensus-based guidelines' using a formal consensus technique. However, this is still not based on current scientific evidence and is thus subject to different sources of bias.24 In contrast to consensus-based guidelines, 'evidence-based guidelines' are based on the best available evidence. The AGREE website states that 'Practice guidelines are evidence-based if they undertake a review of the literature and link their concluding recommendations to the evidentiary base identified through the literature search' (http://www.agreetrust.org/resource-centre/practice-guidelines/, accessed 10 October 2013). Some evidence-based guidelines use existing systematic reviews, others are based on new systematic reviews, and some use both.25 Evidence-based guidelines are generally considered to produce more valid recommendations because they systematically integrate the scientific evidence.22,26 However, expert opinion remains necessary, as inevitable gaps in the research for many questions still exist,27,28 and a judgement (on benefits, harms, preferences, costs) is needed to formulate recommendations.29 An expert opinion should be formulated in a way that prevents bias.30 It is therefore important that guidelines are developed by a multidisciplinary group and that panel members have no conflicts of interest.24,31 The AGREE II checklist is a tool that provides a methodological strategy for the development of guidelines. Other organisations have also proposed standards for guideline developers3: GIN proposed minimum standards for high-quality guidelines,32 and the IOM developed standards for the trustworthiness of clinical practice guidelines.23,33 Guideline development groups are increasingly striving for better quality and a more uniform methodology.
For example, SIGN and NICE adhere to the AGREE principles and, since 2013 and 2009, respectively, use Grading of Recommendations Assessment, Development and Evaluation (GRADE) as a tool for determining the level of evidence and the strength of recommendations (http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013; http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013).34 The National Guideline Clearinghouse recently formulated more stringent inclusion criteria for accepting guidelines into its database from June 2014 onwards (http://www.guideline.gov/about/inclusion-criteria.aspx, accessed 2 September 2013), and GIN asks guideline developers uploading guidelines to the GIN database to indicate which guideline standards have been met (http://www.g-i-n.net/library/international-guidelines-library, accessed 2 September 2013). All these measures are particularly important because many guidelines do not meet quality standards, as illustrated by the following studies. Giannakakis et al.35 found that of 40 guidelines published in six influential medical journals in 1999, only 12.5% performed a systematic literature review, and pertinent randomised controlled trials were often not included. A study from 2000 reported that 67% of 461 guidelines published between 1988 and 1998 did not describe the professionals and stakeholders involved, 88% gave no information on the search strategy and 82% did not provide grades of recommendations.31 An overview of studies that assessed the quality of 627 guidelines published since 1980 demonstrated that many guidelines are of low quality and that only about half of the guidelines (55%, 168 of a subsample of 270 guidelines) could be recommended, with or without provisos, after evaluation with the AGREE instrument.36 GIN also recognises that many guidelines do not meet basic quality criteria,32 and poor adherence to the IOM trustworthiness standards has been demonstrated.33
BRC-F uses AGREE II for the development of practice guidelines. This checklist recommends a systematic search of the literature. However, because we compromise between the number of topics, on the one hand, and a reasonable time span for the development of the practice guideline, on the other, the result is a review that is systematic but less rigorous than a Cochrane systematic review. The main differences are a specific instead of a sensitive search strategy, one reviewer instead of two and no search for grey literature. For guideline development, however, additional expert opinion from a multidisciplinary expert panel and the preferences of the target group are taken into account, and practical recommendations are formulated. The methodological principles for guideline development used by BRC-F are described in detail in the following sections.
Examples of BRC-F projects developed for the First Aid Service are the European first aid guidelines,1 African first aid guidelines,37 evidence-based recommendations on automated external defibrillator training for children38 and guidelines for first aid and the prevention of sports injuries. Examples of projects developed for the Social Service are evidence-based recommendations on effective interventions to support vulnerable children at school and on effective interventions to decrease loneliness in the elderly.
An overview of the different criteria used either for guideline development or the development of a systematic review is given in Table 1.
Methodology used by an action-oriented organisation
Development of evidence-based practice guidelines
The methodology used to develop an evidence-based practice guideline by BRC-F is based on AGREE II, a framework in which the potential biases of guideline development have been adequately addressed.11 In the following sections we comment on how we address several topics of the AGREE tool.
First of all, the scope and purpose of the guideline are defined, including a clear description of the target population, which in the case of the Red Cross often consists of laypeople. To work in a pragmatic way, we decided not to start a systematic literature search when the Population-Intervention-Comparison-Outcome (PICO) question concerns a 'good practice point' or common sense; the responsibility of professionals (such as a medical doctor or pharmacist, given that our guidelines are intended to be used by laypeople); the practical organisation of activities; medicolegal aspects; or anatomy or physiology.
All relevant stakeholders are represented in the guideline development group. This group consists of members of the Steering Committee; methodological experts, who are responsible for collecting and critically appraising the evidence; representatives of the operational Red Cross service for which the guideline is being developed, who are responsible for formulating the draft recommendations; and the expert panel, which weighs the quality of the evidence against the potential benefits and harm, and which validates the final recommendations. The expert panel consists of a chairman, with expertise in both evidence-based methodology and the project content, and additional panel members who at the very least have expertise in the content of the project. The target population is represented in the guideline development group, for example by involving Red Cross volunteers. Additionally, the guideline development group receives information about the views and preferences of the target population from the Red Cross service involved, which either has expertise in the content or collects the necessary information (e.g. by composing a reading group or by interviewing the target population), and/or from a literature search concerning the values, preferences and experiences of the target population, and/or from a feedback session or pilot test.
In AGREE II, no detailed description of the methodology for the literature search is given. We therefore based our methodology on that used by other guideline developers such as SIGN (http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013) and NICE (http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013). The search process takes into account the fact that a BRC-F practice guideline covers many different topics (>40), and we therefore have to make methodological trade-offs that preserve the validity and trustworthiness of guidelines while improving efficiency. For each project, a search is performed for evidence from the date of inception of the databases until the date of the current search. The sources searched and information on the methodological search filters (used only if necessary) are given in Table 2.39-41 For the choice of search terms, we focus on possible synonyms and, if present in the database, we consult the thesaurus of index terms to build an adequate search strategy.
In the search process, evidence is selected in a stepwise approach: we first search for guidelines and systematic reviews (as a source of individual studies), then for intervention studies and finally for observational studies. We only move to the next step of the search process if no evidence is found or if the evidence cannot be included based on the inclusion and exclusion criteria. For guidelines and systematic reviews, we run a supplemental search for individual studies from the date when the search was stopped in the selected guideline or systematic review. During the search for evidence, additional references can be selected by checking the 20 related citations in PubMed and/or by manual searching (i.e. by checking the reference lists of included references).
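This stepwise escalation is essentially a short-circuiting loop over evidence levels: stop at the first level that yields includable evidence. The sketch below captures that control flow; the search and meets_criteria callables are hypothetical placeholders for the actual database searches and PICO-based selection criteria.

```python
# Sketch of the stepwise search: move to the next evidence level only when the
# previous level yields nothing that survives the inclusion/exclusion criteria.
def stepwise_search(question, search, meets_criteria):
    """search(question, level) returns candidate records for one evidence level;
    meets_criteria(record) applies the inclusion and exclusion criteria."""
    levels = [
        "guidelines and systematic reviews",  # also mined as a source of studies
        "intervention studies",
        "observational studies",
    ]
    for level in levels:
        included = [r for r in search(question, level) if meets_criteria(r)]
        if included:
            return level, included  # stop escalating once usable evidence is found
    return None, []  # no includable evidence at any level
```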
The selection of evidence is based on language (in general, English, Dutch, French and German literature is selected), criteria on content (general criteria, which are also used to decide not to start a search for evidence, and specific criteria based on the PICO question) and methodological criteria depending on the type of study design (described in Table 2). To determine the study design, we use a flowchart that we developed based on a tool from the Cochrane Non-Randomised Studies Methods Group42 (Fig. 2). Only studies that are relevant for our projects are included, and a clear distinction is made between experimental and observational studies, which is important for assessing the strengths and limitations of the body of evidence with the GRADE methodology.34 For each topic, one reviewer selects and evaluates the evidence and then describes the search strategy, inclusion and exclusion criteria, data and levels of evidence in an 'evidence summary'. As an internal control, the search for evidence for a random selection of questions is periodically repeated by a second methodological expert.
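Figure 2 is not reproduced here, but the kind of decision sequence such a flowchart encodes can be sketched as a series of yes/no questions. The questions and their order below are a simplified assumption for illustration; the actual BRC-F flowchart, derived from the Cochrane Non-Randomised Studies Methods Group tool, may differ.

```python
# Simplified, hypothetical study-design flowchart; the real flowchart (Fig. 2)
# may ask different or additional questions.
def classify_design(randomised: bool, investigator_assigned: bool,
                    control_group: bool) -> str:
    if randomised:
        return "randomised controlled trial (experimental)"
    if investigator_assigned:
        return "non-randomised experimental study"
    if control_group:
        return "controlled observational study (e.g. cohort, case-control)"
    return "uncontrolled observational study (e.g. case series)"

# Example: exposure not assigned by the investigator, but a control group exists.
print(classify_design(randomised=False, investigator_assigned=False,
                      control_group=True))
```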
A multidisciplinary expert panel is involved in formulating the final recommendations by consensus, making use of a table in which the corresponding evidence is presented for every draft recommendation. If consensus cannot be reached, the decision is made by majority vote. The expert panel is responsible for reading through the whole guideline and for assigning the grades of recommendation. When assigning these grades, the expert panel also weighs the benefits against the harm, side-effects or risks, using the GRADE approach.34 As our target population consists largely of laypeople, it was decided not to use the grades of recommendation in the didactic materials containing the final recommendations, but rather to translate the grade of recommendation into the specific wording of the recommendations.43
Each guideline is also reviewed by external experts or peer reviewers who were not involved in the guideline development group. Reviewers include experts in the content of the guideline and some methodological experts.
Depending on the type of project, context and target group, we can decide to complement the practice guideline with an implementation guide. This implementation guide can contain the following information: the facilitators and barriers to the application of the guideline, advice on how to put the recommendations into practice, the potential resource implications of applying the recommendations and monitoring and/or auditing criteria.
When the practice guideline is published, the topics mentioned above are preferably described in detail. In every case, the methodology is described or reference is made to a document containing the detailed methodology.
BRC-F guidelines will be updated every 5 years,44 unless stated otherwise. To achieve this, the literature search will be repeated from the end of the previous literature search until the start of the update.
Development of systematic reviews
A systematic review provides an overview of the best available evidence, collected by a literature search on a very specific topic described by a clearly formulated question. Weighing the estimated benefits and harm against the estimated costs, and thus making specific recommendations for action, goes beyond the scope of a systematic review and is typically the task of (clinical practice) guideline developers. If a systematic review deals with questions that are relevant for our evidence-based guidelines, its results are included in the guideline when the guideline is updated. For the development of a systematic review, we follow the methodology described in the Cochrane Handbook.4 In the following paragraphs, some of the differences from the search process described for practice guideline development are highlighted.
The types of studies to be included as the source of evidence are clearly specified. In making this choice we consider a priori which study designs are likely to provide reliable data with which to address the objectives of the review.
We use a very sensitive search strategy and try to avoid search filters. Where methodological filters are needed, the sensitive Cochrane filters are used. Study selection and data extraction are performed by at least two independent reviewers. A clear procedure is described for cases of disagreement between the two reviewers, which consists of consulting a third reviewer. Wherever possible, the authors of studies are contacted when information in a study is missing. To assess the quality and the risk of bias of each individual study, we use the Cochrane Collaboration's tool for assessing risk of bias. For the body of evidence, a quality rating is compiled for each outcome according to the GRADE method.34
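The per-outcome GRADE rating mentioned here follows a simple pattern: the study design sets a starting level of certainty (high for randomised trials, low for observational studies), which is then downgraded for the five standard concerns. The sketch below illustrates this arithmetic; the data structures are our own, and upgrading factors for observational evidence (e.g. a large effect) are omitted for brevity.

```python
# Sketch of the per-outcome GRADE rating: design sets the starting level,
# which is then downgraded for the five standard concerns.
GRADE_LEVELS = ["very low", "low", "moderate", "high"]

def grade_outcome(design: str, downgrades: dict) -> str:
    """design is 'randomised' or 'observational'; downgrades maps each concern
    (risk of bias, inconsistency, indirectness, imprecision, publication bias)
    to 0, 1 or 2 levels of downgrading."""
    start = 3 if design == "randomised" else 1  # high vs low starting point
    return GRADE_LEVELS[max(0, start - sum(downgrades.values()))]

print(grade_outcome("randomised", {"risk of bias": 1, "inconsistency": 0,
                                   "indirectness": 0, "imprecision": 1,
                                   "publication bias": 0}))  # -> "low"
```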
For transparent reporting of the development of a systematic review, we use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 statement.45 This is a 27-item checklist that aims to guarantee the quality of systematic reviews through clear and transparent reporting in a publication.
Conclusion
Like any Red Cross organisation, we have to meet the needs of the most vulnerable in our society. To achieve our goals in a quality-oriented manner, we adopted an evidence-based approach to ensure that all our activities are supported by solid scientific data. As the Red Cross is a humanitarian organisation, we often have to compromise between working rigorously, on the one hand, and meeting needs within a reasonable time span, on the other. There is therefore a need for a specific methodology to create practice guidelines and systematic reviews. In our search for an adequate methodology, we encountered an enormous variety of methodological approaches and terminology used for evidence-based guidelines and reviews. To be transparent about our methodology, we developed a methodological charter, to be published on our website. This charter may inspire other organisations that want to use evidence-based methodology to support their activities and that struggle with similar issues. For users of evidence-based guidelines and systematic reviews, it is important to be aware of the variety in methodology and quality, and we recommend that, as a minimum, the rigour of development be verified.
Acknowledgements
The information in this article has not been published or submitted for publication elsewhere. All authors have contributed significantly to this work: P.V.D.K. contributed to the conception and design of the manuscript; P.V.D.K., E.D.B., N.S.P. and T.D. developed the methodology described in the article; E.D.B. and N.S.P. wrote the methodological charter, and T.D. formulated feedback and contributed to the development of appendices to the charter; E.D.B. prepared the draft of the article, which all other authors revised critically; and all authors are in agreement with the content of the manuscript. All authors are employees of BRC-F and receive no other funding.
References