
Critical Appraisal of the Evidence: Part I

In May's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital's expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index of Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database's own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed; six of the 79 studies found in CINAHL; and the one study found in the Cochrane Database of Systematic Reviews, because they best answered the clinical question.

As a final step, at Lynne's recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each study they retained, looking for any relevant studies they hadn't found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.


RAPID CRITICAL APPRAISAL

The next time Rebecca and Carlos meet, they discuss the next step in the EBP process—critically appraising the 26 studies. They obtain copies of the studies by printing those that are immediately available as full text through library subscription or those flagged as "free full text" by a database or journal's Web site. Others are available through interlibrary loan, when another hospital library shares its articles with Rebecca and Carlos's hospital library.

Carlos explains to Rebecca that the purpose of critical appraisal isn’t solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine

• its level of evidence.
• how well it was conducted.
• how useful it is to practice.

Once they determine which studies are "keepers," Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two installments of the series). These final steps will determine whether overall findings from the evidence review can help clinicians improve patient outcomes.

Rebecca is a bit apprehensive because it's been a few years since she took a research class. She shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca's EBP team. Chen's spirit of inquiry encourages Rebecca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse manager on their medical–surgical unit, who agrees to let them use their allotted continuing education time to work on this project, after they discuss their expectations for the project and how its outcome may benefit the patients, the unit staff, and the hospital.

Learning research terminology. At the first meeting of the new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sample, independent variable, and dependent variable. The glossary also defines some of the study designs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qualitative, and descriptive studies. (For the definitions of these terms and others, see the glossaries provided by the Center for the Advancement of Evidence-Based Practice at the Arizona State University College of Nursing and Health Innovation [http://nursingandhealth.asu.edu/evidence-based-practice/resources/glossary.htm] and the Boston University Medical Center Alumni Medical Library [http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cfm#R].)

Determining the level of evidence. The team begins to divide the 26 studies into categories according to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study's design by reviewing its abstract. They also create an "I don't know" pile of studies that don't appear to fit a specific design.

Hierarchy of Evidence for Intervention Studies

Level I: Systematic review or meta-analysis. A synthesis of evidence from all relevant randomized controlled trials.

Level II: Randomized controlled trial. An experiment in which subjects are randomized to a treatment group or control group.

Level III: Controlled trial without randomization. An experiment in which subjects are nonrandomly assigned to a treatment group or control group.

Level IV: Case-control or cohort study. Case-control study: a comparison of subjects with a condition (case) with those who don't have the condition (control) to determine characteristics that might predict the condition. Cohort study: an observation of a group(s) (cohort[s]) to determine the development of an outcome(s) such as a disease.

Level V: Systematic review of qualitative or descriptive studies. A synthesis of evidence from qualitative or descriptive studies to answer a clinical question.

Level VI: Qualitative or descriptive study. Qualitative study: gathers data on human behavior to understand why and how decisions are made. Descriptive study: provides background information on the what, where, and when of a topic of interest.

Level VII: Expert opinion or consensus. Authoritative opinion of an expert committee.

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.


Critical Appraisal Guide for Quantitative Studies

1. Why was the study done?
• Was there a clear explanation of the purpose of the study and, if so, what was it?

2. What is the sample size?
• Were there enough people in the study to establish that the findings did not occur by chance?

3. Are the instruments of the major variables valid and reliable?
• How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)?

4. How were the data analyzed?
• What statistics were used to determine if the purpose of the study was achieved?

5. Were there any untoward events during the study?
• Did people leave the study and, if so, was there something special about them?

6. How do the results fit with previous research in the area?
• Did the researchers base their work on a thorough literature review?

7. What does this research mean for clinical practice?
• Is the study purpose an important clinical issue?

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.


When they find studies that don't actively answer the clinical question but may inform thinking, such as descriptive research, expert opinions, or guidelines, they put them aside. Carlos explains that they'll be used later to support Rebecca's case for having a rapid response team (RRT) in her hospital, should the evidence point in that direction.

After the studies—including those in the "I don't know" group—are categorized, 15 of the original 26 remain and will be included in the RCA: three systematic reviews that include one meta-analysis (Level I evidence), one randomized controlled trial (Level II evidence), two cohort studies (Level IV evidence), one retrospective pre-post study with historic controls (Level VI evidence), four preexperimental (pre-post) intervention studies (no control group) (Level VI evidence), and four EBP implementation projects (Level VI evidence). Carlos reminds Rebecca and Chen that Level I evidence—a systematic review of randomized controlled trials or a meta-analysis—is the most reliable and the best evidence to answer their clinical question.

Using a critical appraisal guide. Carlos recommends that the team use a critical appraisal checklist (see Critical Appraisal Guide for Quantitative Studies) to help evaluate the 15 studies. This checklist is relevant to all studies and contains questions about the essential elements of research (such as the purpose of the study, sample size, and major variables).

The questions in the critical appraisal guide seem a little strange to Rebecca and Chen. As they review the guide together, Carlos explains and clarifies each question. He suggests that as they try to figure out which are the essential elements of the studies, they focus on answering the first three questions: Why was the study done? What is the sample size? Are the instruments of the major variables valid and reliable? The remaining questions will be addressed later on in the critical appraisal process (to appear in future installments of this series).

Creating a study evaluation table. Carlos provides an online template for a table where Rebecca and Chen can put all the data they'll need for the RCA. Here they'll record each study's essential elements that answer the three questions and begin to appraise the 15 studies. (To use this template to create your own evaluation table, download the Evaluation Table Template at http://links.lww.com/AJN/A10.)

EXTRACTING THE DATA

Starting with Level I evidence studies and moving down the hierarchy list, the EBP team takes each study and, one by one, finds and enters its essential elements into the first five columns of the evaluation table (see Table 1; to see the entire table with all 15 studies, go to http://links.lww.com/AJN/A11). The team discusses each element as they enter it, and tries to determine if it meets the criteria of the critical appraisal guide.

[email protected]

AJN ▼ July 2010 ▼ Vol. 110, No. 7 49

Table 1. Evaluation Table, Phase I

Columns: First Author (Year); Conceptual Framework; Design/Method; Sample/Setting; Major Variables Studied (and Their Definitions); Measurement; Data Analysis; Findings; Appraisal: Worth to Practice. Shaded columns (Measurement through Appraisal) indicate where data will be entered in future installments of the series.

Chan PS, et al. Arch Intern Med 2010;170(1):18-26.
• Conceptual framework: none.
• Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 5 databases from 1950-2008, and "grey literature" from MD conferences; included only studies with a control group.
• Sample/Setting: N = 18 studies. Setting: acute care hospitals; 13 adult, 5 peds. Average no. beds: NR. Attrition: NR.
• Major variables: IV: RRT. DV1: HMR. DV2: CR.

McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529.
• Conceptual framework: none.
• Design/Method: SR (Cochrane review). Purpose: effect of RRT on HMR. Searched 6 databases from 1990-2006; excluded all but 2 RCTs.
• Sample/Setting: N = 2 studies; 24 adult hospitals. Attrition: NR.
• Major variables: IV: RRT. DV1: HMR.

Winters BD, et al. Crit Care Med 2007;35(5):1238-43.
• Conceptual framework: none.
• Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 3 databases from 1990-2005; included only studies with a control group.
• Sample/Setting: N = 8 studies. Average no. beds: 500. Attrition: NR.
• Major variables: IV: RRT. DV1: HMR. DV2: CR.

Hillman K, et al. Lancet 2005;365(9477):2091-7.
• Conceptual framework: none.
• Design/Method: RCT. Purpose: effect of RRT on CR, HMR, and rates of UICUA.
• Sample/Setting: N = 23 hospitals; intervention group (n = 12), control group (n = 11). Average no. beds: 340. Setting: Australia. Attrition: none.
• Major variables: IV: RRT protocol for 6 months (team of 1 AP and 1 ICU or ED RN; note: criteria for activating the RRT were specified). DV1: HMR (unexpected deaths, excluding DNRs). DV2: CR (excluding DNRs). DV3: UICUA.

AP = attending physician; CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; DV = dependent variable; ED = emergency department; HMR = hospital-wide mortality rates; ICU = intensive care unit; IV = independent variable; MD = medical doctor; NR = not reported; peds = pediatric; RCT = randomized controlled trial; RN = registered nurse; RRT = rapid response team; SR = systematic review; UICUA = unplanned ICU admissions.

These elements—such as the purpose of the study, sample size, and major variables—are typical parts of a research report and should be presented in a predictable fashion in every study so that the reader understands what's being reported.

As the EBP team continues to review the studies and fill in the evaluation table, they realize that it's taking about 10 to 15 minutes per study to locate and enter the information. This may be because when they look for a description of the sample, for example, it's important that they note how the sample was obtained, how many patients are included, and other characteristics of the sample, as well as any diagnoses or illnesses the sample might have that could be important to the study outcome. They discuss with Carlos the likelihood that they'll need a few sessions to enter all the data into the table. Carlos responds that the more studies they do, the less time it will take. He also says that it takes less time to find the information when study reports are clearly written. He adds that usually the important information can be found in the abstract.

Rebecca and Chen ask if it would be all right to take out the "Conceptual Framework" column, since none of the studies they're reviewing have conceptual frameworks (which help guide researchers as to how a study should proceed). Carlos replies that it's helpful to know that a study has no framework underpinning the research and suggests they leave the column in. He says they can further discuss this point later on in the process when they synthesize the studies' findings.

As Rebecca and Chen review each study, they enter its citation in a separate reference list so that they won't have to create this list at the end of the process. The reference list will be shared with colleagues and placed at the end of any RRT policy that results from this endeavor.

Carlos spends much of his time answering Rebecca's and Chen's questions concerning how to phrase the information they're entering in the table. He suggests that they keep it simple and consistent. For example, if a study indicated that it was implementing an RRT and hoped to see a change in a certain outcome, the nurses could enter "change in [the outcome] after RRT" as the purpose of the study. For studies examining the effect of an RRT on an outcome, they could say as the purpose, "effect of RRT on [the outcome]." Using the same words to describe the same purpose, even though it may not have been stated exactly that way in the study, can help when they compare studies later on.

Rebecca and Chen find it frustrating that the study data are not always presented in the same way from study to study. They ask Carlos why the authors or journals wouldn't present similar information in a similar manner. Carlos explains that the purpose of publishing these studies may have been to disseminate the findings, not to compare them with other like studies. Rebecca realizes that she enjoys this kind of conversation, in which she and Chen have a voice and can contribute to a deeper understanding of how research impacts practice.

As Rebecca and Chen continue to enter data into the table, they begin to see similarities and differences across studies. They mention this to Carlos, who tells them they've begun the process of synthesis! Both nurses are encouraged by the fact that they're learning this new skill.

The MERIT trial is next in the stack of studies and it's a good trial to use to illustrate this phase of the RCA process. Set in Australia, the MERIT trial1 examined whether the introduction of an RRT (called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, unplanned admissions to the ICU, and death in the hospitals studied. See Table 1 to follow along as the EBP team finds and enters the trial data into the table.

Design/Method. After Rebecca and Chen enter the citation information and note the lack of a conceptual framework, they're ready to fill in the "Design/Method" column. First they enter RCT for randomized controlled trial, which they find in both the study title and introduction. But MERIT is called a "cluster-randomised controlled trial," and cluster is a term they haven't seen before. Carlos explains that it means that hospitals, not individuals or patients, were randomly assigned to the RRT. He says that the likely reason the researchers chose to randomly assign hospitals is that if they had randomly assigned individual patients or units, others in the hospital might have heard about the RRT and potentially influenced the outcome.

[email protected]

AJN ▼ July 2010 ▼ Vol. 110, No. 7 51

page6image1024 page6image1184 page6image1344 page6image1504

To keep the study purposes consistent among the studies in the RCA, the EBP team uses inclusive terminology they developed after they noticed that different trials had different ways of describing the same objectives. Now they write that the purpose of the MERIT trial is to see if an RRT can reduce CR, for cardiopulmonary arrest or code rates; HMR, for hospital-wide mortality rates; and UICUA, for unplanned ICU admissions. They use those same terms consistently throughout the evaluation table.

Sample/Setting. A total of 23 hospitals in Australia with an average of 340 beds per hospital is the study sample. Twelve hospitals had an RRT (the intervention group) and 11 hospitals didn't (the control group).

Major Variables Studied. The independent variable is the variable that influences the outcome (in this trial, it's an RRT for six months). The dependent variable is the outcome (in this case, HMR, CR, and UICUA). In this trial, the outcomes didn't include do-not-resuscitate data. The RRT was made up of an attending physician and an ICU or ED nurse.

While the MERIT trial seems to perfectly answer Rebecca's PICOT question, it contains elements that aren't entirely relevant, such as the fact that the researchers collected information on how the RRTs were activated and provided their protocol for calling the RRTs. However, these elements might be helpful to the EBP team later on when they make decisions about implementing an RRT in their hospital. So that they can come back to this information, they place it in the last column, "Appraisal: Worth to Practice."

After reviewing the studies to make sure they've captured the essential elements in the evaluation table, Rebecca and Chen still feel unsure about whether the information is complete. Carlos reminds them that a system-wide practice change—such as the change Rebecca is exploring, that of implementing an RRT in her hospital—requires careful consideration of the evidence and this is only the first step. He cautions them not to worry too much about perfection and to put their efforts into understanding the information in the studies. He reminds them that as they move on to the next steps in the critical appraisal process, and learn even more about the studies and projects, they can refine any data in the table. Rebecca and Chen feel uncomfortable with this uncertainty but decide to trust the process. They continue extracting data and entering it into the table even though they may not completely understand what they're entering at present. They both realize that this will be a learning opportunity and, though the learning curve may be steep at times, they value the outcome of improving patient care enough to continue the work—as long as Carlos is there to help.

In applying these principles for evaluating research studies to your own search for the evidence to answer your PICOT question, remember that this series can't contain all the available information about research methodology. Fortunately, there are many good resources available in books and online. For example, to find out more about sample size, which can affect the likelihood that researchers' results occur by chance (a random finding) rather than that the intervention brought about the expected outcome, search the Web using terms that describe what you want to know. If you type "sample size findings by chance" in a search engine, you'll find several Web sites that can help you better understand this study essential.
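To make the point concrete, here is the standard formula for the standard error of a sample mean (a textbook identity, offered purely as an illustration; it is not drawn from the studies in this series):

```latex
% Standard error (SE) of a sample mean, with standard deviation s and sample size n:
\[
SE = \frac{s}{\sqrt{n}}
\]
% Illustration: with s = 10, a sample of n = 25 gives SE = 10/5 = 2,
% while n = 100 gives SE = 10/10 = 1. Larger samples yield smaller
% standard errors and narrower confidence intervals, making it less
% likely that an observed difference is merely a chance finding.
```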

Be sure to join the EBP team in the next installment of the series, "Critical Appraisal of the Evidence: Part II," when Rebecca and Chen will use the MERIT trial to illustrate the next steps in the RCA process, complete the rest of the evaluation table, and dig a little deeper into the studies in order to detect the "keepers." ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCE

1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.



Critical Appraisal of the Evidence: Part II

Digging deeper—examining the “keeper” studies.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" As part of their rapid critical appraisal (RCA) of the 15 potential "keeper" studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL

Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a "good" research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population.

[email protected]

AJN ▼ September 2010 ▼ Vol. 110, No. 9 41

page1image44784

page2image424 page2image856 page2image1024 page2image1184 page2image1344 page2image1504

Random assignment, on the other hand, is the use of a random strategy to assign study participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.
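As a rough illustration of the difference (not part of the article or any study it discusses, and using hypothetical patient identifiers), the two strategies might be sketched in a few lines of code:

```python
import random

# Hypothetical patient identifiers (illustration only; not from the article).
population = [f"patient_{i}" for i in range(1000)]

# Random sampling: select participants from the whole population with a
# random strategy so the sample fairly represents that population.
sample = random.sample(population, 50)

# Random assignment: randomly allocate the enrolled participants to the
# intervention or control group (here, by shuffling and splitting).
random.shuffle(sample)
intervention_group = sample[:25]
control_group = sample[25:]
```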

Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see "Searching for the Evidence," and "Critical Appraisal of the Evidence: Part I," Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential "keeper" studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.

The MERIT trial1 is next in the EBP team’s stack of studies.

Example of a Rapid Critical Appraisal Checklist

Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments

1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials? Yes / No
B. Does the review include a detailed description of the search strategy used to find the relevant studies? Yes / No
C. Does the review describe how the validity of the individual studies was assessed (such as methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)? Yes / No
D. Are the results consistent across studies? Yes / No
E. Did the analysis use individual patient data or aggregate data? Patient / Aggregate

2. What are the results?
A. How large is the intervention or treatment effect (odds ratio, relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence interval)?

3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review? Yes / No
B. Is it feasible to implement the findings in my practice setting? Yes / No
C. Were all clinically important outcomes considered, including both risks and benefits of the treatment? Yes / No
D. What is my clinical assessment of the patient, and are there any contraindications or circumstances that would keep me from implementing the treatment? Yes / No
E. What are my patients' and their families' preferences and values concerning the treatment? Yes / No

© Fineout-Overholt and Melnyk, 2005.

As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.

ARE THE RESULTS OF THE STUDY VALID?

The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that for a study's conclusion to be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether or not rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of "yes," "no," or "unknown."

Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn't randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: "Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups."2
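In a sketch like the following (illustrative only; the hospital identifiers are hypothetical, though the 12-versus-11 split mirrors what MERIT reported), the only change from patient-level randomization is the unit being shuffled:

```python
import random

# Hypothetical hospital identifiers (illustration only). MERIT enrolled 23
# hospitals and allocated 12 to the intervention arm and 11 to control.
hospitals = [f"hospital_{i}" for i in range(23)]

# Cluster randomization: whole hospitals, not individual patients, are
# randomly allocated to the intervention (RRT) or control arm.
random.shuffle(hospitals)
rrt_hospitals = hospitals[:12]
control_hospitals = hospitals[12:]
```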

Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they're destined for the intervention group or with obvious indifference if they're intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.

Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their "no" answer to this question makes the study findings invalid. Carlos says that a single "no" may or may not mean that the study findings are invalid. It's their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn't affirmative, they must each ask themselves: does this "no" make the study findings untrustworthy to the extent that I don't feel comfortable using them in my practice?

Were reasons given to explain why subjects didn't complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it's important to look for an explanation for why any subjects didn't complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.

Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).

[email protected]

AJN ▼ September 2010 ▼ Vol. 110, No. 9 43

page4image432 page4image864 page4image1032 page4image1192 page4image1352 page4image1512

44 AJN ▼ September 2010 ▼ Vol. 110, No. 9 ajnonline.com

Rapid Critical Appraisal of the MERIT Study

1. Are the results of the study valid?

A. Were the subjects randomly assigned to the intervention and control groups? Yes / No / Unknown

Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (control) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially influencing the outcome.

B. Was random assignment concealed from the individuals enrolling the subjects? Yes / No / Unknown

An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been collected; thus the assignments were concealed from both researchers and participants.

C. Were the subjects and providers blind to the study group? Yes / No / Unknown

Hospitals knew to which group they'd been assigned, as the intervention hospitals had to put the RRTs into practice. Management, ethics review boards, and code committees in both groups of hospitals knew about the intervention. The control hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals didn't have a placebo strategy to match the intervention hospitals' educational strategy for how to implement an RRT (a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the groups that gave approval, you wouldn't have known your hospital was participating in a study on RRTs; this lessens the chance of confounding variables influencing the outcomes.

D. Were reasons given to explain why subjects didn't complete the study? Yes / No / Not applicable

This question is not applicable as no hospitals dropped out of the study.

E. Were the follow-up assessments long enough to fully study the effects of the intervention? Yes / No / Unknown

The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of cardiopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However, the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.

F. Were the subjects analyzed in the group to which they were randomly assigned? Yes / No / Unknown

All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-to-treat basis. However, in their discussion, the authors attempt to provide a reason for the disappointing study results; they suggest that because the intervention was "inadequately implemented," its fidelity was compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder: in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.

G. Was the control group appropriate? Yes / No / Unknown

See notes to question C. Controls had no time built in for education and training as the intervention hospitals did, so this time wasn't controlled for, nor was there any known attempt to control the organizational "buzz" that something was going on. The study also didn't account for the variance in how RRTs were implemented across hospitals. The researchers indicate that the existing code teams in control hospitals "did operate as [RRTs] to some extent." Because of these factors, the appropriateness of the control group is questionable.

H. Were the instruments used to measure the outcomes valid and reliable? Yes / No / Unknown

The primary outcome was the composite of HMR (that is, unexpected deaths, excluding do not resuscitates [DNRs]), CR (that is, no palpable pulse, excluding DNRs), and UICUA (any unscheduled admissions to the ICU).


I. Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Yes / No / Unknown

The researchers provided a table showing how the RRT and control hospitals compared on several variables. Some variability existed, but there were no statistical differences between groups.

2. What are the results?

A. How large is the intervention or treatment effect?

The researchers reported outcome data in various ways, but the bottom line is that the control group did better than the intervention group. For example, RRT calling criteria were documented more than 15 minutes before an event by more hospitals in the control group than in the intervention group, which is contrary to expectation. Half the HMR cases in the intervention group met the criteria compared with 55% in the control group (not statistically significant). But only 30% of CR cases in the intervention group met the criteria compared with 44% in the control group, which was statistically significant (P = 0.031). Finally, regarding UICUA, 51% in the intervention group compared with 55% in the control group met the criteria (not significant). This indicates that the control hospitals were doing a better job of documenting unstable patients before events occurred than the intervention hospitals.

B. How precise is the intervention or treatment?

The odds ratio (OR) for each of the outcomes was close to 1.0, which indicates that the RRT had no effect in the intervention hospitals compared with the control hospitals. Each confidence interval (CI) also included the number 1.0, which indicates that each OR wasn't statistically significant (HMR OR = 1.03, 95% CI 0.84-1.28; CR OR = 0.94, 95% CI 0.79-1.13; UICUA OR = 1.04, 95% CI 0.89-1.21). From a clinical point of view, the results aren't straightforward. It would have been much simpler had the intervention hospitals and the control hospitals done equally badly; but the fact that the control hospitals did better than the intervention hospitals raises many questions about the results.

3. Will the results help me in caring for my patients?

A. Were all clinically important outcomes measured? Yes / No / Unknown

It would have been helpful to measure cost, since participating hospitals that initiated an RRT didn't eliminate their code team. If a hospital has two teams, is the cost doubled? And what's the return on investment? There's also no mention of the benefits of the code team. This is a curious question . . . maybe another PICOT question?

B. What are the risks and benefits of the treatment?

This is the wrong question for an RRT. The appropriate question would be: What is the risk of not adequately introducing, monitoring, and evaluating the impact of an RRT?

C. Is the treatment feasible in my clinical setting? Yes / No / Unknown

We have administrative support, once we know what the evidence tells us. Based on this study, we don't know much more than we did before, except to be careful about how we approach and evaluate the issue. We need to keep the following issues, which the MERIT researchers raised in their discussion, in mind: 1) allow adequate time to measure outcomes; 2) some outcomes may be reliably measured sooner than others; 3) the process of implementing an RRT is very important to its success.

D. What are my patients’ and their families’ values and expectations for the outcome and the treatment itself?

We will keep this in mind as we consider the body of evidence.


Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1

Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team in discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?

As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning as an RRT prior to RRT implementation.


A Sampling of Statistics

Odds Ratio (OR)
Simple definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
Important parameters:
• If an OR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the OR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Understanding the statistic: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84-1.28). The odds of HMR in the intervention group were about the same as HMR in the comparison group.
Clinical implications: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Simple definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
Important parameters:
• If an RR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the RR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Understanding the statistic: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic review (a) as 0.66 (95% CI, 0.54-0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implications: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Simple definition: The range in which clinicians can expect to get results if they present the intervention as it was in the study.
Important parameters:
• CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study.
• CI should be narrow around the study finding, not wide.
• If a CI contains the number that indicates no effect (for OR it's 1; for effect size it's 0), the study finding is not statistically significant.
Understanding the statistic: See the two previous examples. In the Chan PS, et al., 2010 systematic review (a), the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54-0.80.
Clinical implications: The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X)
Simple definition: Average.
Important parameters:
• Caveat: Averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color. Children with hair color different from the predominant hair color aren't captured and are considered outliers (those who don't converge around the mean).
Understanding the statistic: In the Dacey MJ, et al., 2007 study (a), before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implications: Introducing an RRT decreased the average CR by more than 50% (7.6 to 3 per 1,000 discharges per month).

(a) For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.
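The arithmetic behind two of the clinical implications above can be made explicit. This worked restatement uses only figures already quoted in the table (from the Chan PS, et al., and Dacey MJ, et al., studies); it adds no new data:

```latex
% Relative risk reduction (RRR) implied by the Chan et al. relative risk:
\[
\mathrm{RRR} = 1 - \mathrm{RR} = 1 - 0.66 = 0.34 \quad \text{(a 34\% reduction in risk)}
\]
% Relative decrease in the mean code rate reported by Dacey et al.:
\[
\frac{7.6 - 3}{7.6} \approx 0.61 \quad \text{(about a 61\% decrease, i.e., more than 50\%)}
\]
```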

[email protected] AJN ▼ September 2010 ▼ Vol. 110, No. 9 47

page8image400 page8image832 page8image992


How precise is the intervention or treatment? Chen wants to tackle the precision of the findings and starts with the OR for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the OR of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.
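Applied to the MERIT odds ratios quoted earlier, the rule Carlos describes can be written out explicitly; the following simply restates those figures:

```latex
% An odds ratio (OR) is statistically significant only when its 95% CI excludes 1.0.
% Restating the MERIT results quoted earlier:
\begin{align*}
\text{HMR:}\quad   & \mathrm{OR} = 1.03,\ 95\%\ \mathrm{CI}\ [0.84,\ 1.28] \quad (1.0 \text{ inside the CI: not significant})\\
\text{CR:}\quad    & \mathrm{OR} = 0.94,\ 95\%\ \mathrm{CI}\ [0.79,\ 1.13] \quad (1.0 \text{ inside the CI: not significant})\\
\text{UICUA:}\quad & \mathrm{OR} = 1.04,\ 95\%\ \mathrm{CI}\ [0.89,\ 1.21] \quad (1.0 \text{ inside the CI: not significant})
\end{align*}
```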

WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?

The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.

Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.

What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospitals had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just "yes" or "no."

Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.

What are my patients’ and their families’ values and expec- tations for the outcome and the treatment itself? Carlos asks Rebecca and Chen to discuss with their patients and their patients’ families their opinion of an RRT and if they have any objections to the intervention. If there are

objections, the patients or fami- lies will be asked to reveal them. The EBP team finally com- pletes the RCA checklists for the

15 studies and finds them all to be “keepers.” There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to in- clude it anyway because it’s con- sidered a landmark study. All the studies they’ve retained have something to add to their under- standing of the impact of an RRT on CR, HMR, and UICUA. Car- los says that now that they’ve determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.Critical Appraisal of the Evidence Essay.

Be sure to join the EBP team for "Critical Appraisal of the Evidence: Part III" in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCES

1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation]. Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials.pdf.


Critical Appraisal of the Evidence: Part III

The process of synthesis: seeing similarities and differences across the body of evidence.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide a direct line to the experts to help you resolve questions. See details below.

In September's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question ("In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?") and determined that they were all "keepers." The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don't necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time.

STARTING THE EVALUATION

Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process, when they found and filled in the essential elements of the 15 studies and projects (see "Critical Appraisal of the Evidence: Part I," July). Now each takes a stack of the "keeper" studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group's clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a "Notes" section within the "Appraisal: Worth to Practice" column would be a good place to record the nuances of an article, their impressions of it, as well as any tips (such as what worked in calling an RRT) that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

The team members discuss the evolving patterns as they complete the table.

Need Help with Evidence-Based Practice? Chat with the Authors on November 16!

On November 16 at 3 PM EST, join the "Chat with the Authors" call. It's your chance to get personal consultation from the experts! Dial in early! U.S. and Canada, dial 1-800-947-5134 (International, dial 001-574-941-6964). When prompted, enter code 121028#.

Go to www.ajnonline.com and click on "Podcasts" and then on "Conversations" to listen to our interview with Ellen Fineout-Overholt and Bernadette Mazurek Melnyk.


Table 1. Final Evaluation Table (excerpt: the three systematic reviews; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17)

Chan PS, et al. Arch Intern Med 2010;170(1):18-26
• Conceptual Framework: none
• Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 5 databases from 1950–2008 and "grey literature" from MD conferences; grey literature search limited to medical meetings. Included only (1) RCTs and prospective studies with (2) a control group or control period and (3) hospital mortality well described as an outcome. Excluded 5 studies that met criteria due to no response to e-mail by primary authors.
• Sample/Setting: N = 18 out of 143 potential studies. Setting: acute care hospitals; 13 adult, 5 peds. Average no. beds: NR. Attrition: NR.
• Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR (including DNR, excluding DNR, not treated in ICU, no HMR definition). DV2: CR. HMR: overall hospital deaths (see definition). CR: cardio and/or pulmonary arrest; cardiac arrest calls.
• Measurement: RRT: was the MD involved?
• Data Analysis: frequency; relative risk.
• Findings: 13/16 studies reporting team structure. 7/11 adult and 4/5 peds studies had significant reduction in CR. CR: in adults, 21%–48% reduction in CR, RR 0.66 (95% CI, 0.54–0.80); in peds, 38% reduction in CR, RR 0.62 (95% CI, 0.46–0.84). HMR: in adults, RR 0.96 (95% CI, 0.84–1.09); in peds, RR 0.79 (95% CI, 0.63–0.98).
• Appraisal: Worth to Practice: Weaknesses: potential missed evidence with exclusion of all studies except those with control groups; only included HMR and CR outcomes; no cost data. Strengths: identified no. of activations of RRT/1,000 admissions; identified variance in outcome definition and measurement (for example, 10 of 15 studies included deaths from DNRs in their mortality measurement). Conclusion: RRT reduces CR in adults, and CR and HMR in peds. Feasibility: RRT is reasonable to implement; evaluating cost will help in making decisions about using RRT. Risk/benefit (harm): benefits outweigh risks.

McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529
• Conceptual Framework: none
• Design/Method: SR (Cochrane review). Purpose: effect of RRT on HMR. Searched 6 databases from 1990–2006; excluded all but 2 RCTs.
• Sample/Setting: N = 2 studies. Acute care settings in Australia and the UK. Attrition: NR.
• Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR. HMR: Australia: overall hospital mortality without DNR; UK: Simplified Acute Physiology Score (SAPS) II death probability estimate.
• Data Analysis: OR.
• Findings: OR of Australian study, 0.98 (95% CI, 0.83–1.16); OR of UK study, 0.52 (95% CI, 0.32–0.85).
• Appraisal: Worth to Practice: Weaknesses: didn't include full body of evidence; conflicting results of retained studies, but no discussion of the impact of lower-level evidence; recommendation "need more research." Conclusion: inconclusive.

Winters BD, et al. Crit Care Med 2007;35(5):1238-43
• Conceptual Framework: none
• Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 3 databases from 1990–2005; included only studies with a control group.
• Sample/Setting: N = 8 studies. Average no. beds: 500. Study lengths (range, 4–82 months); sample sizes (range, 2,183–199,024). Attrition: NR.
• Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR. DV2: CR. HMR: overall death rate. CR: no. of in-hospital arrests. Criteria for RRT initiation across studies (common: respiratory rate, heart rate, blood pressure, mental status change; not all studies, but noteworthy: oxygen saturation, "worry").
• Data Analysis: risk ratio.
• Findings: HMR: observational studies, risk ratio for RRT on HMR, 0.87 (95% CI, 0.73–1.04); cluster RCTs, risk ratio for RRT on HMR, 0.76 (95% CI, 0.39–1.48). CR: observational studies, risk ratio for RRT on CR, 0.70 (95% CI, 0.56–0.92); cluster RCTs, risk ratio for RRT on CR, 0.94 (95% CI, 0.79–1.13).
• Appraisal: Worth to Practice: Strengths: provides comparison across studies of criteria for RRT initiation. Conclusion: some support for RRT, but not reliable enough to recommend as standard of care; includes ideas about future evidence generation (conducting research): finding out what we don't know.

CI = confidence interval; CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; DV = dependent variable; HMR = hospital-wide mortality rates; ICU = intensive care unit; IV = independent variable; MD = medical doctor; NR = not reported; OR = odds ratio; peds = pediatrics; RCT = randomized controlled trial; RR = relative risk; RRT = rapid response team; SR = systematic review; UK = United Kingdom

The three systematic reviews, which are higher-level evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the "keepers." The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence.

Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study.1-4 In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see "Critical Appraisal of the Evidence: Part II," September). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs.

Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation. Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians authoring the reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project. While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them, looking carefully for the details that inform their clinical question.

Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose "Design/Method," "Sample/Setting," "Findings," and "Appraisal: Worth to Practice" (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis.

SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE

Design/Method. The team starts with the "Design/Method" column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table, one in which data are extracted from the evaluation table to better see the similarities and differences between studies (see Table 2). The synthesis table makes it clear that there is less higher-level and more lower-level evidence, which will impact the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes.
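Carlos's synthesis table is essentially a pivot of the evaluation table: one row per level of evidence, one mark per study. A spreadsheet does this easily; as a sketch of the same idea (the tooling is an assumption, not something the article prescribes), the pivot can be expressed with pandas:

```python
# Minimal sketch of deriving a levels-of-evidence synthesis table (like
# Table 2) from an evaluation table; only the first five studies are shown.
import pandas as pd

evaluation = pd.DataFrame({
    "study": ["Chan 2010", "McGaughey 2007", "Winters 2007",
              "Hillman 2005", "Sharek 2007"],
    "level": ["I", "I", "I", "II", "IV"],  # from the Design/Method column
})

# One row per level of evidence, with an X under each study at that level
synthesis = (evaluation.assign(mark="X")
             .pivot(index="level", columns="study", values="mark")
             .fillna(""))
print(synthesis)
```

Laying the studies out this way makes the imbalance the team noticed (few Level I–IV sources, many Level VI sources) visible at a glance.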

Sample/Setting. In reviewing the "Sample/Setting" column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. There were several types of hospitals represented (4 teaching, 4 community, 4 no mention, 2 acute care hospitals, and 1 public hospital). The evidence they've collected seems applicable, since their hospital is a community hospital.

Findings. To help the team better discuss the evidence, Carlos suggests that they refer to all projects or studies as "the body of evidence." They don't want to get confused by calling them all studies, as they aren't, but at the same time continually referring to "studies and projects" is cumbersome. He goes on to say that, as part of the synthesis process, it's important for the group to determine the overall impact of the intervention across the body of evidence. He helps them create a second synthesis table containing the findings of each study or project (see Table 3). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don't seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn't evaluate this outcome, so it may not be fair to write it off just yet.


Table 2: The 15 Studies: Levels and Types of Evidence

Level I (systematic review or meta-analysis): studies 1, 2, 3
Level II (randomized controlled trial): study 4
Level III (controlled trial without randomization): none
Level IV (case-control or cohort study): studies 5, 6
Level V (systematic review of qualitative or descriptive studies): none
Level VI (qualitative or descriptive study; includes evidence implementation projects): studies 7, 8, 9, 10, 11, 12, 13, 14, 15
Level VII (expert opinion or consensus): none

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice. 2nd ed. Philadelphia: Wolters Kluwer Health / Lippincott Williams and Wilkins; 2010.

1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.


Table 3: Effect of the Rapid Response Team on Outcomes

[The original is a matrix showing, for each of the 15 studies, the effect of the RRT on each outcome: HMR (adult), HMR (peds), CRO, CR (peds and adult), and UICUA. Cells indicated whether findings were statistically significant, statistical significance was not reported, the outcome was not evaluated (NE), or results were not reported (NR); in one study, non-ICU mortality was reduced. The individual cell values could not be recovered from this copy.]

1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.

CR = cardiopulmonary arrest or code rates; CRO = code rates outside the ICU; HMR = hospital-wide mortality rates; NE = not evaluated; NR = not reported; UICUA = unplanned ICU admissions

The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest.16 That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it. Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research. The findings produced at this level of evidence are typically less reliable than those at higher levels of evidence; however, Carlos notes that two articles having level-VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which increases the reliability of the results.

Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes). In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may impact non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally


promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised.
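Carlos's rule of thumb (level of evidence plus quality of evidence equals strength of evidence) can be made concrete as a small decision rule. The scales and cutoffs below are illustrative assumptions, not a published scoring system:

```python
# Rough sketch of "level + quality = strength": a strong design with weak
# methods (like MERIT, per the team's appraisal) still yields weak evidence.

def strength_of_evidence(level: int, quality: str) -> str:
    """level: 1 (SR/meta-analysis) through 7 (expert opinion);
    quality: 'high', 'moderate', or 'low' validity of methods."""
    if quality == "low":
        return "weak"  # flawed methods undermine even a Level-I design
    if level <= 2 and quality == "high":
        return "strong"
    return "moderate"

print(strength_of_evidence(2, "low"))   # a compromised RCT -> weak
print(strength_of_evidence(6, "high"))  # a well-done descriptive study -> moderate
```

The point of the sketch is the asymmetry Carlos describes: design alone doesn't guarantee reliable findings, and careful methods can make lower-level evidence worth acting on.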

As the EBP team continues to discuss probable outcomes, Rebecca points to one study's data in the "Findings" column that shows a financial return on investment for an RRT.9 Carlos remarks to the group that this is only one study, and that they'll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that the important outcomes to measure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost.

Appraisal: Worth to Practice.

Table 4. Defined Criteria for Initiating an RRT Consult (each row lists the criteria used across studies/projects 4, 8, 9, 13, and 15)

• Respiratory distress (breaths/min): airway threatened; respiratory arrest; RR < 5 or > 36; RR < 10 or > 30; RR < 8 or > 30; RR < 8 or > 28; unexplained dyspnea; new-onset difficulty breathing; shortness of breath
• Change in mental status: change in LOC; decrease in Glasgow Coma Scale of > 2 points; unexplained change; sudden decrease in LOC with normal blood glucose; decreased LOC; ND in one source
• Tachycardia (beats/min): > 140; > 130; unexplained > 130 for 15 min; > 120
• Bradycardia (beats/min): < 40; < 60; unexplained < 50 for 15 min
• Blood pressure (mmHg): SBP < 90; SBP < 90 or > 180; unexplained hypotension; SBP > 200 or < 90
• Chest pain: cardiac arrest; complaint of nontraumatic chest pain (two sources); ND in two sources
• Seizures: sudden or extended; repeated or prolonged; ND in three sources
• Concern/worry about patient: serious concern about a patient who doesn't fit the above criteria; nurse concern about overall deterioration in patient's condition without any of the above criteria (p. 2077); nurse concern; uncontrolled pain; failure to respond to treatment; unable to obtain prompt assistance for unstable patient; NE in one source
• Pulse oximetry (SpO2): < 92% (two sources); NE in three sources
• Other: color change of patient (pale, dusky, gray, or blue); unexplained agitation for > 10 min; CIWA > 15 points; UOP < 50 cc/4 hr; new-onset limb weakness or smile droop; sepsis: ≥ 2 SIRS criteria

4 = Hillman K, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 13 = Benson L, et al.; 15 = Bader MK, et al.

cc = cubic centimeters; CIWA = Clinical Institute Withdrawal Assessment; hr = hour; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; ND = not defined; NE = not evaluated; RR = respiratory rate; SBP = systolic blood pressure; SIRS = systemic inflammatory response syndrome; SpO2 = arterial oxygen saturation; UOP = urine output


As the team discusses their synthesis and the decision they'll make based on the evidence, Rebecca raises a question that's been on her mind. She reminds them that in the "Appraisal: Worth to Practice" column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an academic medical center. Chen reminds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she's already hearing a buzz of excitement about their project, and that their colleagues across all disciplines have been eager to hear the results of their review of the evidence. In addition, Carlos says that many resources in their hospital will be available to help them get started with their project and reminds them of their hospital administrators' commitment to support the team.

ACTING ON THE EVIDENCE

As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate. They decide to take the criteria for activating an RRT from several successful studies/projects and put them into a synthesis table to better see their major similarities (see Table 4). From this combined list, they choose the criteria for initiating an RRT consult that they'll use in their project (see Table 5). The team also begins discussing the ideal makeup of their RRT. Again, they go back to the evaluation table and look over the "Major Variables Studied" column, noting that the composition of the RRT varied among the studies/projects.

Table 5. Defined Criteria for Initiating an RRT Consult at Our Hospital

Pulmonary
• Ventilation: color change of patient (pale, dusky, gray, or blue)
• Respiratory distress: RR < 10 or > 30 breaths/min, unexplained dyspnea, new-onset difficulty breathing, or shortness of breath

Cardiovascular
• Tachycardia: unexplained > 130 beats/min for 15 min
• Bradycardia: unexplained < 50 beats/min for 15 min
• Blood pressure: unexplained SBP < 90 or > 200 mmHg
• Chest pain: complaint of nontraumatic chest pain
• Pulse oximetry: < 92% SpO2
• Perfusion: UOP < 50 cc/4 hr

Neurologic
• Seizures: initial, repeated, or prolonged
• Change in mental status: sudden decrease in LOC with normal blood glucose; unexplained agitation for > 10 min; new-onset limb weakness or smile droop

Concern/worry about patient
• Nurse concern about overall deterioration in patient's condition without any of the above criteria

Sepsis
• Temp > 38°C; HR > 90 beats/min; RR > 20 breaths/min; WBC > 12,000, < 4,000, or > 10% bands

cc = cubic centimeters; hr = hours; HR = heart rate; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; RR = respiratory rate; SBP = systolic blood pressure; SpO2 = arterial oxygen saturation; Temp = temperature; UOP = urine output; WBC = white blood count
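Table 5 is, in effect, a decision rule, so it can also be written as one. The sketch below is illustrative only, a teaching aid rather than a clinical tool, and not from the article; the vital-sign field names are assumptions:

```python
# Illustrative screen of one set of observations against the Table 5
# activation criteria; returns the criteria met, if any.

def rrt_consult_reasons(obs: dict) -> list:
    reasons = []
    rr, hr, sbp, spo2 = (obs.get(k) for k in ("rr", "hr", "sbp", "spo2"))
    if rr is not None and (rr < 10 or rr > 30):
        reasons.append("respiratory distress: RR < 10 or > 30 breaths/min")
    if hr is not None and hr > 130:
        reasons.append("unexplained tachycardia > 130 beats/min for 15 min")
    if hr is not None and hr < 50:
        reasons.append("unexplained bradycardia < 50 beats/min for 15 min")
    if sbp is not None and (sbp < 90 or sbp > 200):
        reasons.append("unexplained SBP < 90 or > 200 mmHg")
    if spo2 is not None and spo2 < 92:
        reasons.append("pulse oximetry < 92% SpO2")
    if obs.get("nontraumatic_chest_pain"):
        reasons.append("complaint of nontraumatic chest pain")
    if obs.get("nurse_concern"):
        reasons.append("nurse concern about overall deterioration")
    return reasons

# A patient breathing at 34/min with SpO2 of 90% meets two criteria
print(rrt_consult_reasons({"rr": 34, "hr": 118, "sbp": 96, "spo2": 90}))
```

Note that the "nurse concern" criterion deliberately requires no threshold at all; the evidence the team reviewed treats clinician worry as a legitimate trigger in its own right.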


Some RRTs had active physician participation (n = 6), some had designated physician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respiratory therapist (RT). All RRT members had expertise in intensive care and many were certified in advanced cardiac life support (ACLS). They agree that their team will be composed of ACLS-certified members. It will be led by an acute care nurse practitioner (ACNP) credentialed for advanced procedures, such as central line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians willing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who's been on staff the longest of the three, says that interdisciplinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT.

As the team ponders the evidence, they begin to discuss the next step, which is to develop ideas for writing their project implementation plan (also called a protocol). Included in this protocol will be an educational plan to let those involved in the project know information such as the evidence that led to the project, how to call an RRT, and outcome measures that will indicate whether or not the implementation of the evidence was successful. They'll also need an evaluation plan. From reviewing the studies and projects, they also realize that it's important to focus their plan on evidence implementation, including carefully evaluating both the process of implementation and project outcomes.




Be sure to join the EBP team in the next installment of this series as they develop their implementation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCES

1. Chan PS, et al. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 2010;170(1):18-26.

2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529.

3. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43.

4. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

5. Sharek PJ, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74.

6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA 2008;300(21):2506-13.

7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care 2004;13(4):251-4.

8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82.

9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

10. McFarlan SJ, Hensley S. Implementation and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4):307-13.

11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma 2007;62(5):1223-8.

12. Bertaut Y, et al. Implementing a rapid-response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008;26(2):37-42.

13. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf 2008;34(12):743-7.

14. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90, 126.

15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205.

16. Institute for Healthcare Improvement. Establish a rapid response team. n.d. http://www.ihi.org/IHI/topics/criticalcare/intensivecare/changes/establisharapidresponseteam.htm.


Evidence-based practice (EBP) is an approach that enables psychiatric mental health care practitioners, as well as all clinicians, to provide the highest quality of care using the best evidence available (Melnyk & Fineout-Overholt, 2005). One of the key steps of EBP is to critically appraise evidence to best answer a clinical question. For many mental health questions, understanding levels of evidence, qualitative inquiry methods, and the questions used to appraise the evidence is necessary to implement the best qualitative evidence into practice. Drawing conclusions and making judgments about the evidence are imperative to the EBP process and clinical decision making (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). The overall purpose of this article is to familiarize clinicians with qualitative research as an important source of evidence to guide practice decisions. In this article, an overview of the goals, methods, and types of qualitative research, and the criteria used to appraise the quality of this type of evidence, will be presented.

QUALITATIVE BELIEFS

Qualitative research aims to generate insight, describe, and understand the nature of reality in human experiences (Ayers, 2007; Milne & Oberle, 2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski, 2004; Speziale & Carpenter, 2003; Thorne, 2000). Qualitative researchers are inquisitive and seek to understand knowledge about how people think and feel, and about the circumstances in which they find themselves, and use methods to uncover and deconstruct the meaning of a phenomenon (Saddler, 2006; Thorne, 2000). Qualitative data are collected in a natural setting. These data are not numerical; rather, they are full and rich descriptions from participants who are experiencing the phenomenon under study. The goal of qualitative research is to uncover the truths that exist and develop a complete understanding of reality and the individual's perception of what is real. This method of inquiry is deeply rooted in descriptive modes of research. "The idea that multiple realities exist and create meaning for the individuals studied is a fundamental belief of qualitative researchers" (Speziale & Carpenter, 2003, p. 17). Qualitative research is the studying, collecting, and understanding of the meaning of individuals' lives using a variety of materials and methods (Denzin & Lincoln, 2005).

Kathleen M. Williamson, PhD, RN, associate director, Center for the Advancement of Evidence-Based Practice, Arizona State University, College of Nursing & Healthcare Innovation, Phoenix, Arizona; [email protected].

WHAT IS A QUALITATIVE RESEARCHER?



TABLE 1. Most Commonly Used Qualitative Research Methods

Ethnography
• Purpose: describe the culture of a people
• Research question(s): What is it like to live . . . ? What is it . . . ?
• Sample size (on average): 30-50
• Data sources/collection: interviews, observations, field notes, records, chart data, life histories

Phenomenology
• Purpose: describe phenomena, the appearance of things, as the lived experience of humans in a natural setting
• Research question(s): What is it like to have this experience? What does it feel like?
• Sample size (on average): 6-8
• Data sources/collection: interviews, videotapes, observations, in-depth conversations

Grounded theory
• Purpose: develop a theory rather than describe a phenomenon
• Research question(s): questions emerge from the data
• Sample size (on average): 25-50
• Data sources/collection: taped interviews, observation, diaries, and memos from the researcher

Source. Adapted from Polit and Beck (2008) and Speziale and Carpenter (2003).

Qualitative researchers commonly believe that individuals come to know and understand their reality in different ways. It is through the lived experience and the interactions that take place in the natural setting that the researcher is able to discover and understand the phenomenon under study (Miles & Huberman, 1994; Patton, 2002; Speziale & Carpenter, 2003). To ensure the least disruption to the environment/natural setting, qualitative researchers carefully consider the best research method to answer the research question (Speziale & Carpenter, 2003). These researchers are intensely involved in all aspects of the research process and are considered participants and observers in the setting or field (Patton, 2002; Polit & Beck, 2008; Speziale & Carpenter, 2003). Flexibility is required to obtain data from the richest possible sources of information. Using a holistic approach, the researcher attempts to capture the perceptions of the participants from an "emic" approach (i.e., from an insider's viewpoint; Miles & Huberman, 1994; Speziale & Carpenter, 2003). Often, this is accomplished through the use of a variety of data collection methods, such as interviews, observations, and written documents (Patton, 2002). As the data are collected, the researcher simultaneously analyzes them, which includes identifying emerging themes, patterns, and insights within the data. According to Patton (2002), qualitative analysis engages exploration, discovery, and inductive logic. The researcher uses a rich literary account of the setting, actions, feelings, and meaning of the phenomenon to report the findings (Patton, 2002).

COMMONLY USED QUALITATIVE DESIGNS

According to Patton (2002), "Qualitative methods are first and foremost research methods. They are ways of finding out what people do, know, think, and feel by observing, interviewing, and analyzing documents" (p. 145). Qualitative research designs vary by type and purpose: the data collection strategies used and the type of question or phenomenon under study. To critically appraise qualitative evidence for its validity and use in practice, an understanding of the types of qualitative methods, as well as how they are employed and reported, is necessary.

Many of these methods are rooted in the anthropology, psychology, and sociology disciplines. The methods most commonly used in health sciences research are ethnography, phenomenology, and grounded theory (see Table 1).

Ethnography

Ethnography has its traditions in cultural anthropology, describing the values, beliefs, and practices of cultural groups (Ploeg, 1999; Polit & Beck, 2008). According to Speziale and Carpenter (2003), the characteristics that are central to ethnography are that (a) the research is focused on culture, (b) the researcher is totally immersed in the culture, and (c) the researcher is aware of her/his own perspective as well as those in the study. Ethnographic researchers strive to study cultures from an emic approach. The researcher, as a participant observer, becomes involved in the culture to collect data, learn from participants, and report on the way participants see their world (Patton, 2002). Data are primarily collected through observations and interviews. Analysis of ethnographic results involves identifying the meanings attributed to objects and events by members of the culture. These meanings are often validated by members of the culture before finalizing the results (called member checks). This is a labor-intensive method that requires extensive fieldwork.


Phenomenology

Phenomenology has its roots in both philosophy and psychology. Polit and Beck (2008) reported, "Phenomenological researchers believe that lived experience gives meaning to each person's perception of a particular phenomenon" (p. 227). According to Polit and Beck, there are four aspects of the human experience that are of interest to the phenomenological researcher: (a) lived space (spatiality), (b) lived body (corporeality), (c) lived human relationships (relationality), and (d) lived time (temporality). Phenomenological inquiry is focused on exploring how participants in the experience make sense of the experience, transform the experience into consciousness, and the nature or meaning of the experience (Patton, 2002). Interpretive phenomenology (hermeneutics) focuses on the meaning and interpretation of the lived experience to better understand its social, cultural, political, and historical context. Descriptive phenomenology shares vivid reports and describes the phenomenon.

In a phenomenological study, the researcher is an active participant/observer who is totally immersed in the investigation. It involves gaining access to participants who can provide rich descriptions from in-depth interviews to gather all the information needed to describe the phenomenon under study (Speziale & Carpenter, 2003). Ongoing analyses of direct quotes and statements by participants occur until common themes emerge. The outcome is a vivid description of the experience that captures the meaning of the experience and communicates clearly and logically the phenomenon under study (Speziale & Carpenter, 2003).

Grounded Theory

Grounded theory has its roots in sociology and explores the social processes that are present within human interactions (Speziale & Carpenter, 2003). The purpose is to develop or build a theory rather than test a theory or describe a phenomenon (Patton, 2002). Grounded theory takes an inductive approach in which the researcher seeks to generate emergent categories and integrate them into a theory grounded in the data (Polit & Beck, 2008). The research does not start with a focused problem; the problem evolves and is discovered as the study progresses. A feature of grounded theory is that the data collection, data analysis, and sampling of participants occur simultaneously (Polit & Beck, 2008; Powers, 2005). Researchers using grounded theory methodology are able to critically analyze situations, not remove themselves from the study but realize that they are part of it, recognize bias, obtain valid and reliable data, and think abstractly (Strauss & Corbin, 1990).

Data collection is through in-depth interviews and observations. A constant comparative process is used for two reasons: (a) to compare every piece of data with every other piece to more accurately refine the relevant categories and (b) to assure the researcher that saturation has occurred. Once saturation is reached, the researcher connects the categories, patterns, or themes that describe the overall picture that emerged, which will lead to theory development.

ASPECTS OF QUALITATIVE RESEARCH

One of the most important aspects of qualitative inquiry is that participants are actively involved in the research process rather than receiving an intervention or being observed for some risk or event to be quantified. Another aspect is that the sample is purposefully selected based on experience with a culture, social process, or phenomenon, in order to collect information that is rich and thick in descriptions. The final essential aspect of qualitative research is that one or more of the following strategies are used to collect data: interviews, focus groups, narratives, chat rooms, and observation and/or field notes. These methods may be used in combination with each other. The researcher may choose to use triangulation strategies on data collection, investigator, method, or theory and use multiple sources to draw conclusions about the phenomenon (Patton, 2002; Polit & Beck, 2009).

SUMMARY

This is not an inclusive list of the qualitative methods that researchers could choose to answer a research question; other methods include historical research, feminist research, the case study method, and action research. All qualitative research methods are used to describe and discover meaning and understanding, or to develop a theory, and to transport the reader to the time and place of the observation and/or interview (Patton, 2002).

THE HIERARCHY OF QUALITATIVE EVIDENCE



TABLE 2. Subquestions to Further Answer, Are the Study Findings Valid?

Participants
• How were they selected?
• Did they provide rich and thick descriptions?
• Were the participants' rights protected?
• Did the researcher eliminate bias?
• Was the group or population adequately described?

Sample
• Was it adequate?
• Was the setting appropriate to acquire an adequate sample?
• Was the sampling method appropriate?
• Do the data accurately represent the study participants?
• Was saturation achieved?

Data collection
• How were the data collected?
• Were the tools adequate?
• Were the data coded? If so, how?
• How accurate and complete were the data?
• Does gathering the data adequately portray the phenomenon?

Source. Adapted from Powers (2005), Polit and Beck (2008), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Clinical questions that require qualitative evidence to answer them focus on human response and meaning. An important step in the process of appraising qualitative research as a guide for clinical practice is the identification of the level of evidence, or the "best" evidence. The level of evidence is a guide that helps identify the most appropriate, rigorous, and clinically relevant evidence to answer the clinical question (Polit & Beck, 2008). The evidence hierarchy for qualitative research ranges from the opinion of authorities and/or reports of expert committees, to a single qualitative research study, to metasynthesis (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). A metasynthesis is comparable to a meta-analysis (i.e., systematic review) of quantitative studies. A metasynthesis is a technique that integrates findings of multiple qualitative studies on a specific topic, providing an interpretative synthesis of the research findings in narrative form (Polit & Beck, 2008). This is the strongest level of evidence with which to answer a clinical question. The higher the level of evidence, the stronger the evidence is to change practice. However, all evidence needs to be critically appraised based on (a) the best available evidence (i.e., level of evidence), (b) the quality and reliability of the study, and (c) the applicability of the findings to practice.

CRITICAL APPRAISAL OF QUALITATIVE EVIDENCE

Once the clinical issue has been identified, the PICOT question constructed, and the best evidence located through an exhaustive search, the next step is to critically appraise each study for its validity (i.e., the quality), reliability, and applicability to use in practice (Melnyk & Fineout-Overholt, 2005). Although there is no consensus among qualitative researchers on the quality criteria (Cutcliffe & McKenna, 1999; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Sandelowski, 2004), many have published excellent tools that guide the process for critically appraising qualitative evidence (Duffy, 2005; Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Speziale & Carpenter, 2003). They all base their criteria on three primary questions: (a) Are the study findings valid? (b) What were the results of the study? (c) Will the results help me in caring for my patients? According to Melnyk and Fineout-Overholt (2005), "The answers to these questions ensure relevance and transferability of the evidence from the search to the specific population for whom the practitioner provides care" (p. 120). In using the questions in Tables 2, 3, and 4, one can evaluate the evidence and determine if the study findings are valid, the methods and instruments used to acquire the knowledge credible, and the findings transferable.

The qualitative process contributes to the rigor or trustworthiness of the data (i.e., the quality). "The goal of rigor in qualitative research is to accurately represent study participants' experiences" (Speziale & Carpenter, 2003, p. 38). The qualitative attributes of validity include credibility, dependability, confirmability, transferability, and authenticity (Guba & Lincoln, 1994; Miles & Huberman, 1994; Speziale & Carpenter, 2003).

Credibility is having confidence and truth about the data and interpretations (Polit & Beck, 2008). The credibility of the findings hinges on the skill, competence, and rigor of the researcher to describe the content shared by the participants and the ability of the participants to accurately describe the phenomenon (Patton, 2002; Speziale & Carpenter, 2003). Cutcliffe and McKenna (1999) reported that the most important indicator of the credibility of findings is when a practitioner reads the study findings, regards them as meaningful and applicable, and incorporates them into his or her practice.

Confirmability refers to the way the researcher documents and confirms the study findings (Speziale & Carpenter, 2003).


TABLE 3. Subquestions to Further Answer, What Were the Results of the Study?

• Was the purpose of the study clear?
• Is the research design appropriate for the research question?
• Is the description of findings thorough?
• Do findings fit the data from which they were generated?
• Were all themes identified, useful, creative, and convincing of the phenomena?
• Are the results logical, consistent, and easy to follow?

Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).

TABLE 4. Subquestions to Further Answer, Will the Results Help Me in Caring for My Patients?

• What meaning and relevance does this study have for my patients?
• How would I use these findings in my practice?
• How does the study help provide perspective on my practice?
• Are the conclusions appropriate to my patient population?
• Are the results applicable to my patients?
• How would patient and family values be considered in applying these results?

Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Confirmability is the process of confirming the accuracy, relevance, and meaning of the data collected. Confirmability exists if (a) the researcher identifies whether saturation was reached and (b) records of the methods and procedures are detailed enough that they can be followed by an audit trail (Miles & Huberman, 1994).

Dependability is a standard that demonstrates whether (a) the process of the study was consistent, (b) data remained consistent over time and conditions, and (c) the results are reliable (Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). For example, if study methods and results are dependable, the researcher consistently approaches each occurrence in the same way with each encounter, and results were coded with accuracy across the study.

Transferability refers to the probability that the study findings have meaning and are usable by others in similar situations (i.e., generalizable to others in that situation; Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). To determine if the findings of a study are transferable and can be used by others, the clinician must consider the potential client to whom the findings may be applied (Speziale & Carpenter, 2003).

Authenticity is when the researcher fairly and faithfully shows a range of different realities and develops an accurate and authentic portrait of the phenomenon under study (Polit & Beck, 2008). For example, if a clinician were to be in the same environment as the researcher describes, they would experience the phenomenon similarly. All mental health providers need to become familiar with these aspects of qualitative evidence and hone their critical appraisal skills to enable them to improve the outcomes of their clients.

CONCLUSION

Qualitative research aims to impart the meaning of the human experience and understand how people think and feel about their circumstances. Qualitative researchers use a holistic approach in an attempt to uncover truths and understand a person's reality. The researcher is intensely involved in all aspects of the research design, collection, and analysis processes. Ethnography, phenomenology, and grounded theory are some of the designs that a researcher may use to study a culture, phenomenon, or theory. Data collection strategies vary based on the research question, method, and informants. Methods such as interviews, observations, and journals allow information-rich participants to provide detailed literary accounts of the phenomenon. Data analysis occurs simultaneously with data collection and is the process by which the researcher identifies themes, concepts, and patterns that provide insight into the phenomenon under study.

One of the crucial steps in the EBP process is to critically appraise the evidence for its use in practice


and determine the value of the findings. Critical appraisal is the review of the evidence for its validity (i.e., strengths and weaknesses), reliability, and usefulness for clients in daily practice. "Psychiatric mental health clinicians are practicing in an era emphasizing the use of the most current evidence to direct their treatment and interventions" (Rice, 2008, p. 186). Appraising the evidence is essential for assurance that the best knowledge in the field is being applied in a cost-effective, holistic, and effective way. To do this, one must integrate the critically appraised findings with one's abilities as a clinician and one's clients' preferences. As professionals, clinicians are expected to use the EBP process, which includes appraising the evidence to determine if the best results are believable, useable, and dependable. Clinicians in psychiatric mental health must use qualitative evidence to inform their practice decisions. For example, how do clients newly diagnosed with bipolar disorder, and their families, perceive the life impact of this diagnosis? Having a well-done metasynthesis that provides an accurate representation of the participants' experiences and is trustworthy (i.e., credible, dependable, confirmable, transferable, and authentic) will provide insight into the situational context, human response, and meaning for these clients and will assist clinicians in delivering the best care to achieve the best outcomes.