Evidence-Based Medicine
Highlights On<br />
<strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong><br />
By<br />
Prof. Tawfik A. M. Khoja<br />
MBBS, DPHC, FRCGP, FFPH, FRCP (UK)<br />
Family and Community <strong>Medicine</strong> Consultant (Primary Health Care)<br />
Director General, Executive Board for HMC/GCC<br />
Member of the American Academy of Family <strong>Medicine</strong><br />
Member of the American College of Executive Physicians<br />
Member of the Board of Trustees of IUHPE<br />
Member of the Advisory Board of WAPS<br />
Chairperson of the Saudi Society for <strong>Evidence</strong> <strong>Based</strong> Health Care<br />
Dr. Noha A. Dashash<br />
MBBS, DPHC, ABFM, SBFM<br />
Consultant Family Physician<br />
Deputy Director of Primary Health Care, Jeddah Governorate<br />
Supervisor of the EBM Jeddah Working Group<br />
Member of the Board of Directors,<br />
Saudi Society for <strong>Evidence</strong> <strong>Based</strong> Health Care<br />
Trainer, Postgraduate Joint Program of Family & Community <strong>Medicine</strong>, Jeddah<br />
Dr. Lubna A. Al-Ansary<br />
MBBS, MSc, FRCGP<br />
Associate Professor and Consultant,<br />
Deputy Chairperson of the Saudi Society for <strong>Evidence</strong> <strong>Based</strong> Health Care<br />
Member, Executive Council, National and Gulf EBM Committee<br />
Member, National Family Safety Program<br />
Member, Executive Council, National Society for Human Rights<br />
Dept of Family and Community <strong>Medicine</strong>,<br />
College of <strong>Medicine</strong>, King Saud University, Riyadh, Saudi Arabia<br />
Dr. Abdullah Alkhenizan<br />
MBBS, CCFP, ABHPM, DCEpid<br />
Consultant Family <strong>Medicine</strong><br />
Assistant Clinical Professor<br />
Secretary General of the Saudi Society for <strong>Evidence</strong> <strong>Based</strong> Health Care<br />
Member of the National and Gulf Center for EBM<br />
Sixth Edition<br />
Rabi’II 1431H / April 2010G<br />
-1-
© Executive Board of the Health Ministers' Council, 2010<br />
King Fahd National Library Cataloging-in-publication Data<br />
Khoja, Tawfik Ahmed<br />
Highlights on evidence based medicine./ Tawfik Ahmed Khoja;<br />
Noha Ahmed Dashash-6 - Riyadh, 2010<br />
89 P. ; 24 Cm.<br />
ISBN: 603-90062-3-9<br />
1- <strong>Medicine</strong> 2- <strong>Evidence</strong> <strong>Based</strong> <strong>Medicine</strong> - Saudi Arabia<br />
I- Noha Ahmed Dashash (co-author)<br />
II- Title<br />
616.75 dc 1430/1040<br />
L.D. No. 1430/1040<br />
ISBN 603-90062-3-9<br />
Executive Board<br />
Of the<br />
Health Ministers’ Council<br />
For Cooperation Council States<br />
Tel.: 00966 1 4885262 - Fax: 00966 1 4885266<br />
P.O.box 7431 Riyadh 11462<br />
E-mail: sgh@sgh.org.sa<br />
www.sgh.org.sa<br />
-2-
-3-
-4-
Index<br />
Contents<br />
Page No.<br />
- Preface to the Sixth Edition................................................ 7<br />
- Introduction.................................................................................. 9<br />
- What is <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>?.................................. 11<br />
- Why <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>?...................................... 12<br />
- Forms of evidence..................................................................... 13<br />
- Hierarchy of evidence............................................................... 14<br />
- Strength of recommendation taxonomy.............................. 15<br />
- Steps of EBM............................................................................... 20<br />
- Asking answerable questions................................................ 20<br />
- Clinical Scenario........................................................................ 20<br />
- Searching for the best evidence............................................ 20<br />
- Critically appraising the evidence........................................ 21<br />
- Critical appraisal........................................................................ 21<br />
- Applying the evidence to individual patient care............. 22<br />
- Evaluating the process............................................................. 22<br />
- The Logic Behind EBM.............................................................. 22<br />
- Analyzing Information............................................................... 23<br />
- Advantages and disadvantages in practicing EBM......... 23<br />
- Suggestive Guideline................................................................ 25<br />
- At the Central Level................................................................... 25<br />
- At the Peripheral Level......................................................... 25<br />
- Implementation of the strategies........................................... 26<br />
- Glossary of Terms in EBM....................................................... 27<br />
- <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong> Resources ................................ 53<br />
- References for further readings............................................. 59<br />
-5-
-6-
Preface to the Sixth Edition<br />
Praise be to Allah and Peace and Blessings on the<br />
most honorable of the Messengers and the last of<br />
the prophets Mohammad, Peace be Upon Him.<br />
Six years ago, when we issued the first edition of<br />
the booklet (Highlights on <strong>Evidence</strong>-based <strong>Medicine</strong>),<br />
it was meant to be an introductory book to the concept<br />
of <strong>Evidence</strong>-based <strong>Medicine</strong>; that is why we were keen to make it<br />
simple, palatable and in both Arabic and English for readers<br />
to whom the subject was still new or unknown. The booklet surpassed all<br />
expectations, and both the first and the second editions sold out very<br />
quickly, although produced in large quantities.<br />
The great demand for the second edition was the real motive for us to<br />
update it and carry on the message of dissemination of the concept of<br />
EBM, and evidence based healthcare as well as evidence-based public<br />
health, not only in the Kingdom of Saudi Arabia but also in the Gulf region,<br />
Arab Region and EMRO.<br />
In this edition you will find topics like the hierarchy of evidence and the SORT<br />
(Strength of Recommendations Taxonomy), supported with the concepts<br />
of how to assess the quality of evidence and how to assess the consistency of<br />
evidence across studies. The steps of EBM are presented in an elaborate<br />
but simple and clear way.<br />
Despite these additions, the booklet retains its simplicity,<br />
clarity and informativeness. This makes it – we hope – even more<br />
valuable to readers in the EBM and EBHC field.<br />
I hope that this booklet will realize the objective for which it has been<br />
produced, i.e. acting as a sort of appetizer for those who want to know<br />
more about this discipline, and helping to disseminate evidence-based<br />
practices in all healthcare fields, as well as in medical/health education and<br />
continuous professional development.<br />
-7-
I would like to express my gratitude and appreciation to my colleagues<br />
Dr.Noha A. Dashash, Dr. Lubna A. Al-Ansary and Dr. Abdullah Alkhenizan<br />
for the dedicated efforts and the valuable assistance in preparing this<br />
booklet.<br />
I realize that our vision for practising EBM and EBHC has transformed<br />
from dream to reality. We hope that we have fulfilled our mission and<br />
offered a handy reference for everybody in the field, helping to build up the<br />
<strong>Evidence</strong>-based public health culture.<br />
On the other hand, we hope that the booklet will inspire health workers<br />
in general and the healthcare authorities in particular to disseminate the<br />
concept and implement it in all healthcare facilities.<br />
I do pray to Almighty Allah, the Lord of Universe to crown all<br />
our efforts with righteousness and success.<br />
Prof. Tawfik A M Khoja<br />
MBBS, DPHC, FRCGP, FFPH, FRCP (UK)<br />
Family and Community <strong>Medicine</strong> Consultant (Primary Health Care)<br />
Director General Executive Board, HMC/GCC<br />
Chairperson of the Saudi Society for <strong>Evidence</strong> <strong>Based</strong> Health Care<br />
-8-
EVIDENCE-BASED MEDICINE<br />
Introduction<br />
It is worth mentioning that practicing according to the results of clinical studies<br />
and experiments is not a new concept in clinical practice. Al-Razi (Rhazes,<br />
865–925) described the best clinical practice as: "the practice that has been<br />
agreed upon by practitioners and supported by experiments". In addition, he<br />
was the first scientist to recognize the need for a comparison group in clinical<br />
studies. Ibn Sina (Avicenna, 981–1037) listed several requirements for studies<br />
evaluating new medications. These principles include the need for the drug to<br />
be tested on a well-defined disease; that the effect of the drug must be seen to occur<br />
consistently in many cases; and that the study must be done on humans, for testing a<br />
drug on a lion or a horse might not prove anything about its effect on humans.<br />
All these principles are still valid in the era of evidence based medicine. In<br />
1992 a group of researchers from McMaster University started to use the term<br />
“<strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>”. They wrote a series of articles in collaboration<br />
with the Journal of The American Medical Association (JAMA) where they<br />
established the principles of the concept of evidence based medicine.<br />
<strong>Evidence</strong>-based medicine (EBM) is a relatively new approach to the teaching<br />
and practice of medicine. Historically, physicians' clinical decision-making was<br />
based on the knowledge received during their medical training and experience<br />
gained through individual patient encounters, i.e. it was opinion-based.<br />
Evolution of epidemiology, and subsequently clinical epidemiology, resulted<br />
in methods that allowed the objective critique of therapies used in clinical<br />
practice. Epidemiologic principles were applied to problems encountered<br />
in clinical medicine and an increasing number of clinical trials and medical<br />
journals emerged.<br />
The past two decades have witnessed an acceleration of the information<br />
explosion and with it the volume of medical publications. The importance of<br />
keeping updated is emphasized even more, given that the half life of medical<br />
-9-
knowledge is extremely short. Clinicians face the difficult task of keeping<br />
track of a large amount of new and potentially important information. Although<br />
everyone knows this, reports continue to demonstrate that the time physicians<br />
devote to reading cannot by any means be enough to fill this gap.<br />
On the other hand the Continuing Medical Education (CME), as a means of<br />
keeping physicians up-to-date was growing, moving from lectures by experts<br />
to small group learning, tutorials and interactive feedback sessions. However<br />
studies have shown that CME had limited impact on modifying physician<br />
performance. A legitimate concern is that many physicians will fail to recognize<br />
new and necessary changes in practice and patient care will suffer as doctors<br />
become outdated and their performance deteriorates over time.<br />
Clinicians and health care workers face clinical questions on daily basis,<br />
regarding patient care. These could be about the interpretation of diagnostic<br />
tests, harm associated with treatments they provide, prognosis of a disease in<br />
a specific patient and the effectiveness of a preventive or therapeutic agent.<br />
Using traditional methods they get answers to less than a third of these questions.<br />
Clinicians need simple yet scientifically sound ways to get answers to their questions.<br />
Existing research has many flaws. Archie Cochrane, the late British<br />
epidemiologist, estimated that only 15 to 20% of medical practice is based<br />
on scientific, statistically sound research. Much of our medical practice is<br />
based on either experiences of seniors or research of unknown validity. In<br />
fact, most of what we practice is based on ‘logic’ coming from knowing human<br />
biology, physiology and pathophysiology. For example, we treat arrhythmias<br />
leading to death, after coronary events in order to prevent death. We patch<br />
eyes of patients with corneal abrasions to protect them and enhance healing.<br />
However, when properly designed studies were performed looking specifically<br />
at important outcomes that matter to patients, it turned out that our logic didn’t<br />
really help patients. In the first situation, randomised controlled trials showed<br />
that treating arrhythmia improved ECGs but increased death. Similarly, in the<br />
second case, patients with their eyes patched had longer healing durations than<br />
those treated conservatively. Studies looking at pathophysiologic outcomes<br />
are known as DOEs (Disease Oriented <strong>Evidence</strong>), whereas studies looking at<br />
important clinical outcomes are known as POEMs (Patient Oriented <strong>Evidence</strong><br />
that Matters).<br />
-10-
The practice of evidence-based medicine requires an understanding of simple<br />
and basic clinical epidemiology, as well as excellent communication skills,<br />
patience, and a commitment to provide the patient with the knowledge required<br />
to make informed choices. It is important that physicians become familiar with<br />
the meaning of EBM and its role in influencing the provision of care and use of<br />
health resources.<br />
What is <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>?<br />
<strong>Evidence</strong>-based medicine has been defined by David Sackett as "the<br />
conscientious, explicit, and judicious use of current best evidence in making<br />
decisions about the care of individual patients", to aid the delivery of optimum<br />
clinical care to patients.<br />
<strong>Evidence</strong>-based medicine can be practiced by the integration of individual<br />
clinical expertise with the best available clinical evidence from systematic<br />
research and patient values and circumstances.<br />
Simply put, EBM means applying the best information to manage patient<br />
problems, diagnosis, prognosis, harm, patient safety …etc. It is based on the<br />
assumption that: 1) medical literature, and thus useful information about patient<br />
care, is growing at an alarming rate; and 2) in order to provide best care for<br />
patients, doctors must be able to continuously upgrade their knowledge, i.e.<br />
by accessing, appraising, interpreting and using medical literature in a timely<br />
fashion.<br />
Best<br />
Clinical<br />
<strong>Evidence</strong><br />
Clinical<br />
Expertise<br />
Patient’s<br />
Values &<br />
Circumstances<br />
Fig. 1. Practice of <strong>Evidence</strong> <strong>Based</strong> <strong>Medicine</strong><br />
-11-
What is the problem? Why is there a need for EBM? Don't we already practice<br />
medicine fairly uniformly, based on a common fund of evidence?<br />
Bottom line: we are now often practicing medicine based on clinical judgment<br />
that is not well informed by the best evidence of medical research – a slippery<br />
slope to diminished effectiveness and/or compromised competence.<br />
Why evidence-based medicine?<br />
The first reaction of any doctor to EBM is likely to be "Well, of course that's<br />
what I always do." The second response, perhaps more thoughtful and certainly<br />
more honest, will be a degree of confusion: "What does it really mean? How<br />
does one actually do evidence-based medicine? Surely there is not enough<br />
time? What kind of doctor am I if my medicine is not evidence based?"<br />
Some doctors perceive EBM as diminishing the role of clinical acumen and<br />
experience, fearing that the "art" of decision-making will be lost. It should<br />
be noted that EBM neither excludes the vital role played by experience, nor<br />
advocates the replacement of sound clinical judgment. The practice of EBM<br />
means integrating individual clinical expertise with the best available external<br />
clinical evidence from systematic research. EBM respects clinical skills while<br />
emphasizing the need to develop new skills in information management.<br />
Health care professionals, whether physicians, nurses, pharmacists or<br />
others, require a basic understanding of the steps for seeking out, assessing and<br />
applying the most useful information in concert with patients' preferences.<br />
Although we need new evidence daily, we usually fail to get it. The result is that<br />
both our up-to-date knowledge and our clinical performance deteriorate with<br />
time. Trying to overcome clinical entropy through traditional CME programs<br />
doesn't improve our clinical performance. A different approach to clinical<br />
learning has been shown to be effective in keeping practitioners up to date:<br />
EBM.<br />
The premise of EBM is a simple one: that excellence in patient care correlates<br />
with the use of the best currently available evidence, and that physicians require<br />
a unique set of skills which are not part of traditional medical education, in<br />
order to access and utilize this information.<br />
In EBM, Systematic Reviews are considered the best source of evidence.<br />
-12-
A systematic review is a critical assessment and evaluation of research<br />
(not simply a summary) that attempts to address a focused clinical question<br />
using methods designed to reduce the likelihood of bias. When it includes a<br />
quantitative strategy for combining the results of included studies into a single<br />
pooled or summary estimate, it is called a Meta-Analysis. It is a process of<br />
'merging' the data of similar smaller studies to obtain the 'power' of a larger<br />
study, which can assist in drawing firmer conclusions. This, indeed, is the<br />
"simplified" rationale for evidence-based medicine.<br />
This study type is of particular importance when research findings contradict<br />
each other and obscure the true picture. Therefore, by pooling together all<br />
the results of various research studies, the sample size can, in effect, be<br />
increased.<br />
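The pooling described above can be illustrated numerically. Below is a minimal sketch of fixed-effect inverse-variance pooling in Python; the three trial results are invented for illustration and do not come from this booklet or any cited study.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling: each study's estimate is
    weighted by 1/SE^2, so more precise (usually larger) studies count more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical small trials reporting the same effect measure
# (say, a log odds ratio) with their standard errors -- invented numbers.
estimates = [-0.30, -0.10, -0.25]
std_errors = [0.20, 0.15, 0.25]

effect, se = pool_fixed_effect(estimates, std_errors)
```

Note how the pooled standard error (roughly 0.11) is smaller than that of any single trial, which is the sense in which pooling "increases" the effective sample size.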
Although pooling together the results of a number of trials will provide a greater<br />
weight of evidence, it is still important to examine meta-analyses critically:-<br />
• Was a broad enough search strategy used?<br />
MEDLINE, for instance, covers only about a quarter of the world's biomedical<br />
journals.<br />
• Do the results all or mostly point in the same direction?<br />
A meta-analysis should not be used to produce a positive result by<br />
averaging the results of, say, five trials with negative and ten trials with<br />
positive findings.<br />
• Are the trials in the meta-analysis all small trials?<br />
If so, be very cautious.<br />
Forms of evidence<br />
<strong>Evidence</strong> is presented in many forms, and it is important to understand the<br />
basis on which it is stated. The value of evidence can be ranked according to<br />
the following classification in descending order of credibility:<br />
I. Strong evidence from at least one systematic review of multiple well-designed<br />
randomized controlled trials.<br />
II. Strong evidence from at least one properly designed randomized controlled<br />
trial of appropriate size.<br />
III. <strong>Evidence</strong> from well-designed trials such as non-randomised<br />
trials, cohort studies, time series or matched case-controlled studies.<br />
-13-
IV. <strong>Evidence</strong> from well-designed non-experimental studies from more than<br />
one center or research group.<br />
V. Opinions of respected authorities, based on clinical evidence, descriptive<br />
studies or reports of expert committees.<br />
Hierarchy of <strong>Evidence</strong><br />
When searching for an answer to questions on therapeutic and preventive<br />
interventions, a systematic review of randomized controlled trials (RCTs) is<br />
considered the best study type. If such a study is not found, the next level of<br />
evidence would be a single RCT. Again, if none is found, the next level of<br />
evidence would be a cohort study (which is an observational study). If that is<br />
not found, we would have to go to lower levels of evidence (from weaker study<br />
designs) until we reach "expert opinion", which is considered the lowest level<br />
of evidence. This highlights one of the fundamentals of EBM: 'Evidence is<br />
graded by strength'. Figure 2 illustrates the hierarchy of evidence (for therapy<br />
and prevention).<br />
Fig. 2. Hierarchy of <strong>Evidence</strong>.<br />
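The search order described above can be sketched as a simple ranked lookup. The four levels named in the text are used here; intermediate designs are omitted for brevity, and the function itself is only an illustration, not part of the booklet.

```python
# Hierarchy of evidence for therapy/prevention questions, strongest
# first, following the order described in the text above.
HIERARCHY = [
    "Systematic review of RCTs",
    "Single RCT",
    "Cohort study",
    "Expert opinion",
]

def best_available(found_study_types):
    """Return the strongest level of evidence actually found, or None."""
    for level in HIERARCHY:
        if level in found_study_types:
            return level
    return None

# No systematic review or single RCT was found for this question,
# so the cohort study is the best available evidence.
answer = best_available({"Cohort study", "Expert opinion"})
```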
-14-
Strength of Recommendation Taxonomy (SORT)<br />
American Family Physician (AFP) uses the Strength-of-Recommendation<br />
Taxonomy (SORT), defined below, to label key recommendations in clinical<br />
review articles. In general, only key recommendations are given a Strength-of-<br />
Recommendation grade. Grades are assigned on the basis of the quality and<br />
consistency of available evidence. Table 1 shows the three grades recognized.<br />
As the table indicates, the strength-of-recommendation grade depends on the<br />
quality and consistency of the evidence for the recommendation. Quality and<br />
consistency of evidence are determined as indicated in Table 2 and Table 3.<br />
An alternative way to understand the significance of a strength-of-recommendation<br />
grade is through the algorithm generally followed by authors<br />
and editors in assigning grades based on a body of evidence (Figure 1). While<br />
this algorithm provides a general guideline, authors and editors may adjust the<br />
strength of recommendation based on the benefits, harms, and costs of the<br />
intervention being recommended.<br />
-15-
TABLE 1. Strength-of-Recommendation Grades<br />
A – Consistent, good-quality patient-oriented evidence*<br />
B – Inconsistent or limited-quality patient-oriented evidence*<br />
C – Consensus, disease-oriented evidence,* usual practice, expert opinion,<br />
or case series for studies of diagnosis, treatment, prevention, or screening<br />
* Patient-oriented evidence measures outcomes that matter to patients:<br />
morbidity, mortality, symptom improvement, cost reduction, and quality of<br />
life. Disease-oriented evidence measures intermediate, physiologic, or<br />
surrogate end points that may or may not reflect improvements in patient<br />
outcomes (e.g., blood pressure, blood chemistry, physiologic function,<br />
pathologic findings).<br />
-16-
TABLE 2. Assessing Quality of <strong>Evidence</strong><br />
Level 1 (good-quality patient-oriented evidence):<br />
– Diagnosis: validated clinical decision rule; SR/meta-analysis of high-quality<br />
studies; high-quality diagnostic cohort study*<br />
– Treatment/prevention/screening: SR/meta-analysis of RCTs with consistent<br />
findings; high-quality individual RCT**; all-or-none study***<br />
– Prognosis: SR/meta-analysis of good-quality cohort studies; prospective<br />
cohort study with good follow-up<br />
Level 2 (limited-quality patient-oriented evidence):<br />
– Diagnosis: unvalidated clinical decision rule; SR/meta-analysis of lower<br />
quality studies or studies with inconsistent findings; lower quality diagnostic<br />
cohort study or diagnostic case-control study<br />
– Treatment/prevention/screening: SR/meta-analysis of lower quality clinical<br />
trials or of studies with inconsistent findings; lower quality clinical trial;<br />
cohort study; case-control study<br />
– Prognosis: SR/meta-analysis of lower quality cohort studies or with<br />
inconsistent results; retrospective cohort study or prospective cohort study<br />
with poor follow-up; case-control study<br />
Level 3 (other evidence): consensus guidelines, extrapolations from bench<br />
research, usual practice, opinion, disease-oriented evidence (intermediate or<br />
physiologic outcomes only), or case series for studies of diagnosis, treatment,<br />
prevention, or screening<br />
* High-quality diagnostic cohort study: cohort design, adequate size, adequate<br />
spectrum of patients, blinding, and a consistent, well-defined reference<br />
standard.<br />
** High-quality RCT: allocation concealed, blinding if possible, intention-to-treat<br />
analysis, adequate statistical power, adequate follow-up (greater<br />
than 80 percent).<br />
*** In an all-or-none study, the treatment causes a dramatic change in<br />
outcomes, such as antibiotics for meningitis or surgery for appendicitis,<br />
which precludes study in a controlled trial.<br />
(SR = systematic review; RCT = randomized controlled trial)<br />
-17-
TABLE 3. Assessing Consistency of <strong>Evidence</strong> Across Studies<br />
Consistent: most studies found similar or at least coherent conclusions<br />
(coherence means that differences are explainable); or, if high-quality and<br />
up-to-date systematic reviews or meta-analyses exist, they support the<br />
recommendation.<br />
Inconsistent: considerable variation among study findings and lack of<br />
coherence; or, if high-quality and up-to-date systematic reviews or<br />
meta-analyses exist, they do not find consistent evidence in favor of the<br />
recommendation.<br />
-18-
Strength of Recommendation <strong>Based</strong> on a Body of <strong>Evidence</strong><br />
1. Is this a key recommendation for clinicians regarding diagnosis or treatment<br />
that merits a label? If no, no recommendation grade is needed; if yes, continue.<br />
2. Is the recommendation based on patient-oriented evidence (i.e., an<br />
improvement in morbidity, mortality, symptoms, quality of life, or cost)?<br />
If no, Strength of Recommendation = C; if yes, continue.<br />
3. Is the recommendation based on expert opinion, bench research, a consensus<br />
guideline, usual practice, clinical experience, or a case series study?<br />
If yes, Strength of Recommendation = C; if no, continue.<br />
4. Is the recommendation based on one of the following:<br />
• Cochrane Review with a clear recommendation?<br />
• USPSTF Grade A recommendation?<br />
• Clinical <strong>Evidence</strong> rating of Beneficial?<br />
• Consistent findings from at least two good-quality randomized controlled<br />
trials or a systematic review/meta-analysis of same?<br />
• Validated clinical decision rule in a relevant population?<br />
• Consistent findings from at least two good-quality diagnostic cohort<br />
studies or a systematic review/meta-analysis of same?<br />
If yes, Strength of Recommendation = A; if no, Strength of Recommendation = B.<br />
Figure 1. Assigning a Strength-of-Recommendation grade based on a body of evidence.<br />
(USPSTF = U.S. Preventive Services Task Force)<br />
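The grading flow shown in Figure 1 can be sketched as a small function. The boolean inputs are a simplification of the figure's questions (for example, `meets_level_1_criteria` stands for the whole list of Level-1 sources, such as a Cochrane Review with a clear recommendation or two consistent good-quality RCTs); the function is an illustration, not AFP's actual editorial tool.

```python
def sort_grade(is_key_recommendation, patient_oriented,
               opinion_based, meets_level_1_criteria):
    """Assign a Strength-of-Recommendation grade by walking the
    decision flow of Figure 1 (simplified to yes/no inputs)."""
    if not is_key_recommendation:
        return None  # recommendation grade not needed
    if not patient_oriented:
        return "C"   # disease-oriented evidence only
    if opinion_based:
        return "C"   # opinion, bench research, consensus, case series...
    if meets_level_1_criteria:
        return "A"   # e.g. consistent good-quality RCTs or an SR of them
    return "B"       # patient-oriented but limited or inconsistent

# A recommendation backed by two consistent good-quality RCTs:
grade = sort_grade(True, True, False, True)
```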
-19-
Steps of EBM<br />
There are five steps in practicing EBM, the "5 A's":<br />
1. Ask: Asking answerable questions.<br />
2. Acquire: Searching for the best evidence.<br />
3. Appraise: Critically appraising the evidence.<br />
4. Apply: Applying the evidence to individual patient care, and,<br />
5. Assess: Evaluating the process.<br />
Asking answerable questions<br />
The practicing physician is always faced with the dilemma of how best to<br />
answer clinical questions arising, for example, from failure of therapy<br />
or from the inquisitive patient. By extension, questions could also come from<br />
the diverse areas that have a stake in health care delivery.<br />
In order to be able to search for evidence regarding a particular clinical issue, a<br />
proper answerable question must be formulated. This is not always as easy as<br />
it may seem. It can be done by making sure the question contains four areas<br />
abbreviated by the acronym PICO. ‘P’ stands for the description of patient<br />
or population; ‘I’ for the intervention, ‘C’ for the comparison group; ‘O’ for the<br />
outcome.<br />
Clinical Scenario<br />
Ibrahim is a 60-year-old businessman with no prior history of any cardiovascular<br />
event. He presented to your clinic for follow-up, and wanted your advice about<br />
using aspirin, as it was recommended for one of his friends, who had a heart<br />
attack recently.<br />
Formulating a clinical question:<br />
* Patient / Population : Primary prevention<br />
* Intervention : Aspirin<br />
* Comparison : Placebo<br />
* Outcome : Prevention of Cardiovascular events<br />
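The PICO elements above can be turned mechanically into a search string. The sketch below uses the scenario's elements; the AND-joined, quoted-phrase format is a common bibliographic-search convention, not a format prescribed by this booklet.

```python
def pico_to_query(population, intervention, comparison, outcome):
    """Join PICO elements with AND, quoting multi-word phrases,
    in the style of a bibliographic-database search string."""
    parts = [population, intervention, comparison, outcome]
    quoted = [f'"{p}"' if " " in p else p for p in parts if p]
    return " AND ".join(quoted)

query = pico_to_query("primary prevention", "aspirin",
                      "placebo", "cardiovascular events")
# "primary prevention" AND aspirin AND placebo AND "cardiovascular events"
```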
Searching for the best evidence<br />
Considering the time lapse between the writing of a book and its reaching<br />
the shelves – maybe five years in some cases, by which time some information<br />
might have become obsolete – books cannot be considered the best source of<br />
evidence. They are good for teaching purposes, and as a reference to a limited<br />
extent, but they need to be updated regularly.<br />
-20-
In order to obtain the best evidence, an electronic search for answers to<br />
the formulated questions in sources of "ready-made" evidence is the easiest<br />
and fastest way. These sources include the Cochrane Library, Best <strong>Evidence</strong>,<br />
ACP Journal Club, Clinical <strong>Evidence</strong>, InfoPOEMs, DARE and others. The main<br />
obstacle to using these resources is the cost of subscription (they are<br />
not free).<br />
If these sources are not available, or if the answer to the search question<br />
is not found in them, one has to search sources of primary evidence<br />
(original articles and systematic reviews). These articles can be found in<br />
electronic databases (e.g. Medline, EMBASE, SAM) and electronic journals<br />
(e.g. Bandolier, Journal of <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>, JAMA, NEJM, Lancet,<br />
BMJ, etc.).<br />
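As an illustration of querying a primary database programmatically, the sketch below builds a search URL for NCBI's public E-utilities interface to PubMed/Medline. The `esearch` endpoint and its `db`, `term` and `retmax` parameters are part of NCBI's documented API; the search term itself is just the example question from above.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint for the PubMed (Medline) database.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search(term, max_results=20):
    """Build an esearch URL; fetching it returns matching PubMed IDs."""
    params = {"db": "pubmed", "term": term, "retmax": max_results}
    return ESEARCH + "?" + urlencode(params)

url = build_pubmed_search(
    "aspirin AND placebo AND primary prevention AND cardiovascular events")
```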
Critically appraising the evidence<br />
The practitioner needs to develop a sorting strategy in reviewing the available<br />
literature so as to separate relevant from irrelevant materials. Then he should<br />
decide whether the article is well conducted and can be used or not.<br />
Several checklists have been developed to help make this process easy,<br />
systematic and more or less reproducible. Usually they focus on three main<br />
areas: validity, results and applicability. Validity, or closeness to the truth,<br />
usually examines the methodology of the article.<br />
Critical appraisal<br />
For any clinician, the real key to assessing the usefulness of a clinical study<br />
and interpreting the results to an area of work is through the process of critical<br />
appraisal. This is a method of assessing and interpreting the evidence by<br />
systematically considering its validity, results and relevance to the area of<br />
work considered.<br />
The Critical Appraisal Skills Programme (CASP) helps health service<br />
professionals and decision-makers develop skills in appraising evidence about<br />
clinical effectiveness. Its process uses three broad questions that should be<br />
considered when appraising a review article:<br />
• Are the results of the review valid?<br />
• What are the results?<br />
• Will the results help locally?<br />
Next, the magnitude of the results and their significance are evaluated. Finally,<br />
one should look at the applicability of these results to his/her patients.<br />
Questions asked in this process include:<br />
a) Is the outcome of the study "patient-oriented evidence that matters"<br />
(POEM) or "disease-oriented evidence" (DOE)?<br />
b) Does the study population correspond to your practice population?<br />
c) What method is described to answer the research question?<br />
d) How will this study impact on your practice?<br />
Applying the evidence to individual patient care<br />
EBM will modify individual patient care, leading to the use of proven therapies<br />
and diagnostic tests only where data exist to support their use, to the<br />
withdrawal of those which are unproven, and to closer scrutiny of those for<br />
which clear evidence for continued use is lacking. As physicians become more<br />
aware and patients become better educated, a more equitable physician-patient<br />
relationship follows.<br />
Evaluating the process<br />
A periodic review of the process will show how well a clinical question has<br />
been answered and advise as to its replicability either in the same or another<br />
setting. The more EBM is used, the more the challenges to the practitioner and<br />
the more the experience gained.<br />
The Logic Behind EBM<br />
To make EBM more acceptable to clinicians and to encourage its use, it is best to turn a<br />
specified problem into answerable questions by examining the following issues:<br />
• Person or population in question.<br />
• Intervention given.<br />
• Comparison (if appropriate).<br />
• Outcomes considered.<br />
For example: Is an elderly man given nicotine patches more likely to stop<br />
smoking than a similar man who is not?<br />
Next, it is necessary to refine the problem into explicit questions and then<br />
check to see whether the evidence exists. But where can we find the information<br />
to help us make better decisions?<br />
The following are all common sources:<br />
• Personal experience – for example, a bad drug reaction.<br />
• Reasoning and intuition.<br />
• Colleagues.<br />
• Bottom drawer (pieces of paper lying around the office, and so on).<br />
• Published evidence.<br />
Analyzing information<br />
In using the evidence it is necessary to:<br />
• Search for and locate it.<br />
• Appraise it.<br />
• Store and retrieve it.<br />
• Ensure it is updated.<br />
• Communicate and use it.<br />
Every clinician strives to provide the best possible care for patients. However,<br />
given the multitude of research information available, it is not always possible<br />
to keep abreast of current developments or to translate them into clinical<br />
practice. One must also rely on published papers, which are not always tailored<br />
to meet the clinician's needs.<br />
Advantages and disadvantages in practicing EBM<br />
Advantages<br />
• Clinicians upgrade their knowledge base;<br />
• It improves clinicians' understanding of research and its methods;<br />
• It improves confidence in managing clinical situations;<br />
• It improves computer literacy and data searching skills;<br />
• It allows group problem solving and teaching;<br />
• Juniors can contribute as well as seniors;<br />
• For patients, it is a more effective use of resources;<br />
• It allows better communication with the patient about the rationale behind<br />
treatment;<br />
• It improves our reading habits;<br />
• It leads us to ask questions, and then to be skeptical of the answers: what<br />
better definition is there of science?<br />
• Wasteful practices can be abandoned;<br />
• <strong>Evidence</strong>-based medicine presupposes that we keep up-to-date, and<br />
makes it worthwhile to take trips around the perimeter of our knowledge;<br />
• <strong>Evidence</strong>-based medicine opens decision making processes to patients.<br />
EBM forms part of the multifaceted process of assuring clinical effectiveness,<br />
the main elements of which are:<br />
- Production of evidence through research and scientific review.<br />
- Production and dissemination of evidence-based clinical guidelines.<br />
- Implementation of evidence-based, cost-effective practice through<br />
education and management of change.<br />
- Evaluation of compliance with agreed practice guidance and patient<br />
outcomes – this process includes clinical audit.<br />
Disadvantages<br />
• It takes time to learn the methods and to put them into practice;<br />
• There is the financial cost of buying and maintaining equipment;<br />
• Medline and other electronic databases are not always comprehensive;<br />
• Authoritarian practitioners may find these methods threatening.<br />
• How do we balance cost and quality in healthcare?<br />
• Where should investments be made that improve care in a<br />
cost-effective way?<br />
• How do we engage patients to take more responsibility in<br />
their care?<br />
• How do we maintain and enhance the professional integrity<br />
of the caring professions?<br />
• How do we narrow the gap between knowledge and<br />
practice?<br />
The practice of evidence-based medicine is the starting<br />
point for answering these overarching questions.<br />
<strong>Evidence</strong>-based medicine is not cookbook medicine; it is<br />
a basis for the next generation of health delivery in<br />
the Gulf States.<br />
(A Suggested Guideline)<br />
STRATEGIC PLANNING IN THE GCC STATES<br />
FOR EVIDENCE-BASED MEDICINE<br />
Strategies<br />
Considering possible strategies to be adopted in promoting evidence-based<br />
health care (EBHC) in the GCC states, the following theoretical frameworks can<br />
be helpful in setting up activities at the central and peripheral levels of health<br />
care delivery:<br />
• Establishment of a national committee for EBM.<br />
• Advocacy (seeking legal and political support for EBM).<br />
• Identification of sources of financial support (government, organized private<br />
sector, donor agencies etc.).<br />
• Establishment of a reference e-library.<br />
• Launching of a local website dedicated to EBM.<br />
• Training (of trainers and trainees).<br />
• Organization of workshops and courses on EBM.<br />
At the Central Level:<br />
I- Establishment of a reference e-library:<br />
a) Vital introductory books (10-15 classical books) on how to practice and<br />
teach EBM.<br />
b) Basic important sources of evidence, online and in print, viz. the Cochrane<br />
Library, Best <strong>Evidence</strong>, ACP Journal Club, Diagnostic Strategies for<br />
Common Medical Problems (ed. Black et al.) and Clinical <strong>Evidence</strong>.<br />
c) EBM websites; electronic databases (Medline, EMBASE, SAM,<br />
UP TO DATE etc.); electronic journals (JAMA, New England Journal of<br />
<strong>Medicine</strong>, The Lancet, British Medical Journal, etc.).<br />
II- Develop a local EBM website which is updated regularly.<br />
III- Establish a core of national and regional trainers. This can be facilitated by<br />
international, regional and national experts in the field of EBM.<br />
At the Peripheral (Regional/ District/ PHCC) level:<br />
I- Provide easy access to the e-library within the locality, with travel time not<br />
exceeding 15-30 minutes.<br />
II- Publish a regular newsletter that is directed towards health providers. Part<br />
of the newsletter may be written in Arabic. The newsletter may include:<br />
- EB fact sheets/cards.<br />
- Critically appraised clinical practice guidelines.<br />
- Selection from available EBM resources with recommendations for<br />
clinical care.<br />
- Questions and evidence-based answers.<br />
- Provision of a distance learning program.<br />
- Updating the MOH manuals, protocols and guidelines with the best<br />
available evidence.<br />
- Organize weekend courses/workshops (4-5 per year, facilitated<br />
by the core trainers; 50-200 health care professionals may be trained<br />
each year).<br />
- Establish a rapport with local drug companies for facilitation<br />
and sponsorship.<br />
Implementation of the strategies<br />
The national committee is expected to play a leading role in implementing<br />
the outlined strategies. Members of the committee should be drawn from the<br />
ministry of health, university medical schools and EBM-related organizations.<br />
Their tasks would include advocacy, setting up the structure, sourcing<br />
finance, securing the services of skilled personnel (computer programmers,<br />
staff to enter scientific materials, secretarial support etc.), liaison with the ministry<br />
of health, the universities and other relevant institutions, curriculum design,<br />
development of EBM continuing medical education programmes, problem-based<br />
education, vocational training and future improvement in teaching<br />
methodology.<br />
Medical schools are expected to consider the adoption of an EBM curriculum. The<br />
faculty should be versed not only in EBM but also in best evidence medical<br />
education (BEME; see the BEME Collaboration website), and the umbrella of<br />
this concept should be extended to <strong>Evidence</strong> <strong>Based</strong> Health Care.<br />
GLOSSARY OF TERMS<br />
IN EVIDENCE-BASED MEDICINE<br />
This glossary is intended to provide explanation and guidance as to the<br />
meanings of EBM terms (simple definitions).<br />
A<br />
Absolute risk (AR)<br />
The probability that an individual will experience the specified outcome during<br />
a specified period. It lies in the range 0 to 1, or is expressed as a percentage.<br />
In contrast to common usage, the word “risk” may refer to adverse events (such<br />
as myocardial infarction) or desirable events (such as cure).<br />
Absolute risk increase (ARI)<br />
The absolute difference in risk between the experimental and control groups<br />
in a trial. It is used when the risk in the experimental group exceeds the risk in<br />
the control group, and is calculated by subtracting the AR in the control group<br />
from the AR in the experimental group. This figure does not give any idea of<br />
the proportional increase between the two groups: for this, relative risk (RR) is<br />
needed.<br />
Absolute risk reduction (ARR)<br />
The absolute difference in risk between the experimental and control groups<br />
in a trial. It is used when the risk in the control group exceeds the risk in the<br />
experimental group, and is calculated by subtracting the AR in the experimental<br />
group from the AR in the control group: it is the arithmetic difference in risk of<br />
outcomes between the treatment and control groups. Example: if mortality is 30<br />
percent in controls and 20 percent with treatment, the ARR is 30 − 20 = 10 percent.<br />
This figure does not give any idea of the proportional reduction between the<br />
two groups: for this, relative risk (RR) is needed.<br />
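The arithmetic above can be illustrated in a few lines of Python (a minimal sketch; the function names are ours, not standard terminology, and the NNT shown is the well-known reciprocal of the ARR):<br />

```python
def absolute_risk_reduction(cer: float, eer: float) -> float:
    """ARR: event rate in controls (CER) minus event rate in the
    experimental group (EER); used when the control risk is higher."""
    return cer - eer

def number_needed_to_treat(cer: float, eer: float) -> float:
    """NNT: patients who must be treated for one additional
    beneficial outcome; the reciprocal of the ARR."""
    return 1.0 / (cer - eer)

# The text's example: mortality 30% in controls, 20% with treatment.
arr = absolute_risk_reduction(0.30, 0.20)   # about 0.10, i.e. 10 percentage points
nnt = number_needed_to_treat(0.30, 0.20)    # about 10 patients
```

As the definition notes, the ARR alone says nothing about the proportional change; for that, the relative risk is needed.<br />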
Allocation concealment<br />
A method used to prevent selection bias by concealing the allocation sequence<br />
from those assigning participants to intervention groups. Allocation concealment<br />
prevents researchers from (unconsciously or otherwise) influencing which<br />
intervention group each participant is assigned to.<br />
Applicability<br />
The application of the results from clinical trials to individual people. A<br />
randomised trial only provides direct evidence of causality within that specific<br />
trial. It takes an additional logical step to apply this result to a specific individual.<br />
Individual characteristics will affect the outcome for this person.<br />
People involved in making decisions on health care must take relevant<br />
individual factors into consideration. To aid informed decision-making about<br />
applicability, we provide information on the characteristics of people recruited<br />
to trials.<br />
B<br />
Baseline risk<br />
The risk of the event occurring without the active treatment. Estimated by the<br />
baseline risk in the control group.<br />
The base line risk is important for assessing the potential beneficial effects<br />
of treatment. People with a higher baseline risk can have a greater potential<br />
benefit.<br />
Best evidence<br />
Systematic reviews of RCTs are the best method for revealing the effects of<br />
a therapeutic intervention. RCTs are unlikely to adequately answer clinical<br />
questions in the following cases:<br />
1. Where there are good reasons to think the intervention is not likely to be<br />
beneficial or is likely to be harmful;<br />
2. Where the outcome is very rare (e.g. a 1/10000 fatal adverse<br />
reaction);<br />
3. Where the condition is very rare;<br />
4. Where very long follow up is required (e.g. does drinking milk in adolescence<br />
prevent fractures in old age);<br />
5. Where the evidence of benefit from observational studies is overwhelming<br />
(e.g. oxygen for acute asthma attacks);<br />
6. When applying the evidence to real clinical situations (external validity);<br />
7. Where current practice is very resistant to change and/or patients would<br />
not be willing to take the control or active treatment;<br />
8. Where the unit of randomisation would have to be too large (e.g. a nationwide<br />
public health campaign); and<br />
9. Where the condition is acute and requires immediate treatment.<br />
Of these, only the first case is categorical. For the rest, the cut-off point at which an RCT<br />
is not appropriate is not precisely defined. If RCTs would not be appropriate,<br />
we search for and include the best appropriate form of evidence.<br />
Bias<br />
Systematic deviation of study results from the true results, because of the<br />
way(s) in which the study is conducted.<br />
Blinding / blinded<br />
A trial is fully blinded if all the people involved are unaware of the treatment<br />
group to which trial participants are allocated until after the interpretation of<br />
results. This includes trial participants and everyone involved in administering<br />
treatment or recording trial results.<br />
Ideally, a trial should test whether people are aware of which group they<br />
have been allocated to. This is particularly important if, for example, one of<br />
the treatments has a distinctive taste or adverse effects. Unfortunately such<br />
testing is rare. The terms single and double blind are common in the literature<br />
but are not used consistently. A study is blinded if any or all of the clinicians,<br />
patients, participants, outcome assessors, or statisticians were unaware of<br />
who received which study intervention. The term double-blind usually refers<br />
to the patient and clinician being blind, but it is ambiguous, so it is better to state<br />
who is blinded.<br />
C<br />
Case control study<br />
A study design that examines a group of people who have experienced an event<br />
(usually an adverse event) and a group of people who have not experienced the<br />
same event, and looks at how exposure to suspect (usually noxious) agents<br />
differed between the two groups. This type of study design is most useful for<br />
trying to ascertain the cause of rare events, such as rare cancers.<br />
Case control studies can only generate odds ratios (OR) and not relative risk<br />
(RR). Case control studies provide weaker evidence than cohort studies but are<br />
more reliable than case series.<br />
Case series<br />
It is a report on a series of patients with an outcome of interest. No control<br />
group is involved. Case series provide weaker evidence than case control<br />
studies.<br />
Clinical Practice Guideline<br />
A systematically developed statement designed to assist practitioners and<br />
patients in making decisions about appropriate health care for specific clinical<br />
circumstances.<br />
Cluster randomisation<br />
A cluster randomised study is one in which a group of participants are randomised<br />
to the same intervention together. Examples of cluster randomisation include<br />
allocating together people in the same village, hospital, or school. If the results<br />
are then analysed by individuals rather than by the group as a whole, bias can<br />
occur.<br />
The unit of randomisation should be the same as the unit of analysis. Often<br />
a cluster randomised trial answers a different question from one randomised<br />
by individuals. An intervention at the level of the village or primary care<br />
practice may well have a different effect from one at the level of an individual<br />
patient. Therefore, trying to compensate by allowing for intra class correlation<br />
coefficients or some other method may not be appropriate.<br />
Cohort study<br />
Involves identification of two groups (cohorts) of patients, one which did receive<br />
the exposure of interest and one which did not, and following these cohorts<br />
forward for the outcome of interest. A cohort study is thus a<br />
non-experimental study design that follows a group of people (a cohort) and<br />
then looks at how events differ among people within the group. A study that<br />
examines a cohort which differs in respect of exposure to some suspected<br />
risk factor (e.g. smoking) is useful for trying to ascertain whether that exposure is<br />
likely to cause specified events (e.g. lung cancer). Prospective cohort studies<br />
(which track participants forward in time) are more reliable than retrospective<br />
cohort studies.<br />
Cohort studies should not be included within the Benefits section, unless it is<br />
not reasonable to expect higher levels of evidence.<br />
Completer analysis<br />
Analysis of data from only those participants who remained at the end of<br />
the study. Compare with intention to treat analysis, which uses data from all<br />
participants who enrolled.<br />
Confidence interval (CI)<br />
An estimate of precision: the CI quantifies the uncertainty in a measurement.<br />
It is usually reported as a 95% CI, the range of values within which we can be<br />
95% sure that the true value for the whole population lies; if the study were<br />
repeated 100 times, the results would fall within this range about 95 times.<br />
The 95% confidence interval (or 95% confidence limits) would include 95% of<br />
results from studies of the same size and design in the same population. This<br />
is close but not identical to saying that the true size of the effect (never exactly<br />
known) has a 95% chance of falling within the confidence interval. If the 95%<br />
confidence interval for a relative risk (RR) or an odds ratio (OR) crosses 1,<br />
then this is taken as no evidence of an effect. The practical advantage of a<br />
confidence interval (rather than a P value) is that it presents the range of<br />
likely effects.<br />
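As an illustration of how such an interval is commonly computed for a simple event rate (a sketch using the normal, or Wald, approximation; the function name is ours):<br />

```python
import math

def proportion_ci_95(events: int, n: int) -> tuple:
    """Approximate 95% confidence interval for an event rate,
    using the normal (Wald) approximation:
    p +/- 1.96 * sqrt(p * (1 - p) / n)."""
    p = events / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

# 27 events among 100 patients: rate 0.27, 95% CI roughly 0.18 to 0.36.
low, high = proportion_ci_95(27, 100)
```

Note how the interval narrows as n grows, which is exactly what "an estimate of precision" means.<br />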
Controlled clinical trial (CCT)<br />
A trial in which participants are assigned to two or more different treatment<br />
groups. In BMJ Clinical <strong>Evidence</strong>, we use the term to refer to controlled trials in<br />
which treatment is assigned by a method other than random allocation. When<br />
the method of allocation is by random selection, the study is referred to as a<br />
randomised controlled trial (RCT; see below). Non-randomised controlled trials<br />
are more likely to suffer from bias than RCTs.<br />
Controls<br />
In a randomised controlled trial (RCT), controls refer to the participants in its<br />
comparison group. They are allocated either to placebo, no treatment, or a<br />
standard treatment.<br />
Correlation coefficient<br />
A measure of association that indicates the degree to which two variables<br />
change together in a linear relationship. It is represented by r, and varies<br />
between −1 and +1. When r is +1, there is a perfect positive relationship (when<br />
one variable increases, so does the other, and the proportionate difference<br />
remains constant). When r is –1 there is a perfect negative relationship (when<br />
one variable increases the other decreases, or vice versa, and the proportionate<br />
difference remains constant). An r close to 0 indicates the absence of a linear<br />
relationship; this, however, does not rule out a relationship of some other form.<br />
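The behaviour of r at its extremes can be checked numerically (a self-contained sketch in plain Python; the function name is ours):<br />

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A perfectly linear increasing pair of variables gives r = +1;
# a perfectly linear decreasing pair gives r = -1.
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```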
Cost-Benefit Analysis<br />
Converts effects into the same monetary terms as the costs and compares<br />
them.<br />
Cost-Effectiveness Analysis<br />
Converts effects into health terms and describes the costs for some additional<br />
health gain (e.g. cost per additional MI prevented).<br />
Cost-Utility analysis<br />
Converts effects into personal preferences (or utilities) and describes how<br />
much it costs for some additional quality gain (e.g. cost per additional quality<br />
life-year, or QALY).<br />
Crossover randomised trial<br />
A trial in which participants receive one treatment and have outcomes measured,<br />
and then receive an alternative treatment and have outcomes measured again.<br />
The order of treatments is randomly assigned. Sometimes a period of no<br />
treatment is used before the trial starts and in between the treatments (washout<br />
periods) to minimise interference between the treatments (carry over effects).<br />
Interpretation of the results from crossover randomised controlled trials (RCTs)<br />
can be complex.<br />
Crossover studies have the risk that the intervention may have an effect after<br />
it has been withdrawn, either because the washout period is not long enough<br />
or because of path dependency. A test for evidence of statistically significant<br />
heterogeneity is not sufficient to exclude clinically important heterogeneity. An<br />
effect may be important enough to affect the outcome but not large enough to<br />
be significant.<br />
Crossover Study Design<br />
The administration of two or more experimental therapies, one after the other in<br />
a specified or random order, to the same group of patients.<br />
Cross-sectional Study<br />
The observation of a defined population at a single point in time or time interval.<br />
Exposure and outcome are determined simultaneously.<br />
Cross sectional study<br />
A study design that involves surveying a population about an exposure, or<br />
condition, or both, at one point in time. It can be used for assessing prevalence<br />
of a condition in the population. Cross sectional studies should never be used<br />
for assessing causality of a treatment.<br />
D<br />
Data pooling<br />
Crude summation of the raw data with no weighting (to be distinguished from<br />
meta-analysis).<br />
Decimal places<br />
We always precede decimal points with an integer. Numbers needing treatment<br />
to obtain one additional beneficial outcome (NNTs) are rounded up to whole<br />
numbers e.g. an NNT of 2.6 would become 3. Numbers needing treatment to<br />
obtain one additional harmful outcome (NNHs) are rounded down to whole<br />
numbers, e.g. an NNH of 2.3 would become 2. For P values, we use a maximum<br />
of three noughts after the decimal: P < 0.0001. We try to report the number<br />
of decimal places up to the number of noughts in the trial population, e.g. for 247<br />
people, an RR of 4.837 would be rounded to 4.84. We avoid use of more than<br />
three significant figures.<br />
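The rounding convention described above can be stated precisely in code (a sketch; the function names are ours):<br />

```python
import math

def round_nnt(nnt: float) -> int:
    """Numbers needed to treat (NNT) are rounded UP to a whole number."""
    return math.ceil(nnt)

def round_nnh(nnh: float) -> int:
    """Numbers needed to harm (NNH) are rounded DOWN to a whole number."""
    return math.floor(nnh)

# The examples from the text: an NNT of 2.6 becomes 3; an NNH of 2.3 becomes 2.
```

Both directions are conservative: rounding the NNT up avoids overstating benefit, and rounding the NNH down avoids understating harm.<br />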
Decision analysis<br />
The application of explicit, quantitative methods that quantify prognosis,<br />
treatment effects, and quality of life and cost in order to analyze a decision<br />
under conditions of uncertainty.<br />
Disability Adjusted Life Year (DALY)<br />
A method for measuring disease burden, which aims to quantify in a single<br />
figure both the quantity and quality of life lost or gained by a disease, risk<br />
factor, or treatment. The DALYs lost or gained are a function of the expected<br />
number of years spent in a particular state of health, multiplied by a coefficient<br />
determined by the disability experienced in that state (ranging from 0 [optimal<br />
health] to 1 [death]). Later years are discounted at a rate of 3% per year, and<br />
childhood and old age are weighted to count for less.<br />
Drillability<br />
Refers to the ability to trace a statement from its most condensed form through<br />
to the original evidence that supports it. This requires not only the data but also<br />
all the methods used in the generation of the condensed form to be explicit and<br />
reproducible. We see it as an important component of the quality of evidence-based<br />
publications.<br />
E<br />
Ecological Survey<br />
<strong>Based</strong> on aggregated data for some population as it exists at some point or<br />
points in time; used to investigate the relationship of an exposure to a known or<br />
presumed risk factor for a specified outcome.<br />
Event<br />
The occurrence of a dichotomous outcome that is being sought in the study<br />
(such as myocardial infarction, death, or a four-point improvement in pain<br />
score).<br />
Event rates<br />
In determining the power of a trial the event rate is more important than the<br />
number of participants. Therefore, we provide the number of events as well as<br />
the number of participants when this is available.<br />
The event rate is the proportion of patients in a group in whom the event is<br />
observed. Thus, if out of 100 patients, the event is observed in 27, the event rate<br />
is 0.27. Control Event Rate (CER) and Experimental Event Rate (EER) are used<br />
to refer to this in control and experimental groups of patients respectively.<br />
<strong>Evidence</strong>-<strong>Based</strong> Health Care<br />
Extends the application of the principles of <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong> (see<br />
below) to all professions associated with health care, including purchasing and<br />
management.<br />
<strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong><br />
Is the conscientious, explicit and judicious use of current best evidence<br />
in making decisions about the care of individual patients. The practice of<br />
evidence-based medicine means integrating individual clinical expertise with<br />
the best available external clinical evidence from systematic research.<br />
Experimental study<br />
A study in which the investigator studies the effect of intentionally altering one<br />
or more factors under controlled conditions.<br />
External validity (generalisability)<br />
The validity of the results of a trial beyond that trial.<br />
A randomised controlled trial (RCT) only provides direct evidence of causality<br />
within that trial. It takes an additional logical step to apply this result more<br />
generally. However, practically it is necessary to assume that results are<br />
generalisable unless there is evidence to the contrary. If evidence is consistent<br />
across different settings and in different populations (e.g. across ages and<br />
countries) then there is evidence in favour of external validity. If there is only<br />
evidence from an atypical setting (e.g. a teaching hospital, when most cases are<br />
seen in primary care) then one should be more sceptical about generalising the<br />
results. Generalisability is not just a consequence of the entry requirements for<br />
the trial, but also depends on the population from which the trial population was<br />
drawn (see applicability).<br />
F<br />
Factorial design<br />
A factorial design attempts to evaluate more than one intervention compared<br />
with control in a single trial, by means of multiple randomisations.<br />
False negative<br />
A person with the target condition (defined by the gold standard) who has a<br />
negative test result.<br />
False positive<br />
A person without the target condition (defined by the gold standard) who has a<br />
positive test result.<br />
Fixed effects<br />
The “fixed effects” model of meta-analysis assumes, often unreasonably, that<br />
the variability between the studies is exclusively because of a random sampling<br />
variation around a fixed effect (see random effects below).<br />
H<br />
Harms<br />
<strong>Evidence</strong>-based healthcare resources often have great difficulty in providing<br />
good quality evidence on harms. Most RCTs are not designed to assess<br />
harms adequately: the sample size is too small, the trial too short, and often<br />
information on harms is not systematically collected. Often a lot of the harms<br />
data are in the form of uncontrolled case reports. Comparing data from these<br />
series is fraught with difficulties because of different numbers receiving the<br />
intervention, different baseline risks and differential reporting. We aim to<br />
search systematically for evidence on what are considered the most important<br />
harms of an intervention. The best evidence is from a systematic review of<br />
harms data that attempts to integrate data from different sources. However,<br />
because of these difficulties, and following the maxim "first, do no harm",<br />
we accept weaker evidence. This can include information on whether<br />
the intervention has been either banned or withdrawn because of the risk of<br />
harms.<br />
Hazard ratio (HR)<br />
Broadly equivalent to relative risk (RR); useful when the risk is not constant<br />
with respect to time. It uses information collected at different times. The term<br />
is typically used in the context of survival over time. If the HR is 0.5 then the<br />
relative risk of dying in one group is half the risk of dying in the other group.<br />
If HRs are recorded in the original paper then we report these rather than<br />
calculating RR, because HRs take account of more data.<br />
Heterogeneity<br />
In the context of meta-analysis, heterogeneity means dissimilarity between<br />
studies. It can be because of the use of different statistical methods (statistical<br />
heterogeneity), or evaluation of people with different characteristics, treatments<br />
or outcomes (clinical heterogeneity). Heterogeneity may render pooling of data<br />
in meta-analysis unreliable or inappropriate.<br />
Finding no significant evidence of heterogeneity is not the same as finding<br />
evidence of no heterogeneity. If there are a small number of studies,<br />
heterogeneity may affect results but not be statistically significant.<br />
Homogeneity<br />
Similarity (see heterogeneity).<br />
I<br />
Incidence<br />
The number of new cases of a condition occurring in a population over a<br />
specified period of time.<br />
Inclusion / exclusions<br />
We use validated search and appraisal criteria to exclude unsuitable papers.<br />
Authors are then sent exclusion forms to provide reasons why further papers<br />
are excluded.<br />
Intention to treat (ITT) analysis<br />
Analysis of data for all participants based on the group to which they were<br />
randomised and not based on the actual treatment they received.<br />
Where possible we report ITT results. However, different methods go under<br />
the name ITT. Therefore, it is important to state how withdrawals were handled<br />
and any potential biases, e.g. the implication of carrying last result recorded<br />
forward will depend on the natural history of the condition.<br />
L<br />
Likelihood ratio (LR)<br />
Is the likelihood of a given test result in a patient with the target disorder compared<br />
to the likelihood of the same result in a patient without that disorder.<br />
The ratio of the probability that an individual with the target condition has<br />
a specified test result to the probability that an individual without the target<br />
condition has the same specified test result.<br />
LR>1 indicates and increased likelihood of disease, LR
M<br />
Meta-analysis<br />
A type of systematic review that uses rigorous statistical methods to<br />
quantitatively summarize or synthesize the results of multiple similar studies. A<br />
statistical technique that summarises the results of several studies in a single<br />
weighted estimate, in which more weight is given to results of studies with<br />
more events and sometimes to studies of higher quality.<br />
We use meta-analysis to refer to the quantitative methods (usually involving<br />
weighting) used to integrate data from trials. This is logically distinct from a<br />
systematic review, which is defined by an explicitly systematic search and<br />
appraisal of the literature. It is also distinct from data pooling, which is based<br />
purely on the raw data.<br />
Morbidity<br />
Rate of illness but not death.<br />
Mortality<br />
Rate of death.<br />
N<br />
Negative likelihood ratio (NLR)<br />
The ratio of the probability that an individual with the target condition has<br />
a negative test result to the probability that an individual without the target<br />
condition has a negative test result. This is the same as the ratio<br />
(1 − sensitivity)/specificity.<br />
Negative predictive value (NPV)<br />
The chance of not having a disease given a negative test result (not to be<br />
confused with specificity, which is the other way round). NPV is the proportion<br />
of people with a negative test who are free of disease. See also SpPins and<br />
SnNouts.<br />
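Predictive values, together with sensitivity and specificity (defined later in this glossary), all come from the same 2 × 2 table of test result against gold standard. A sketch in Python, with invented cell counts:<br />

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic
    table (true/false positives and negatives vs a gold standard)."""
    sensitivity = tp / (tp + fn)   # positive tests among the diseased
    specificity = tn / (tn + fp)   # negative tests among the disease-free
    ppv = tp / (tp + fp)           # diseased among the test-positives
    npv = tn / (tn + fn)           # disease-free among the test-negatives
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives
sn, sp, ppv, npv = diagnostic_measures(tp=90, fp=20, fn=10, tn=80)
# sn = 0.90, sp = 0.80, ppv = 90/110, npv = 80/90
```

Note how NPV (the chance of being disease-free given a negative test) differs from specificity (the chance of a negative test given no disease), exactly the distinction drawn above.<br />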
N-of-1 Trials<br />
The patient undergoes pairs of treatment periods organized so that one period<br />
involves the use of the experimental treatment and one period involves the use<br />
of an alternative or placebo therapy. The patient and physician are blinded,<br />
if possible, and outcomes are monitored. Treatment periods are replicated<br />
until the clinician and patient are convinced that the treatments are definitely<br />
different or definitely not different.<br />
Negative statements<br />
At what stage does no evidence of an effect become evidence of no effect?<br />
If confidence intervals are available then we should aim to indicate in words<br />
the potential size of effect they encompass. If a result is not significant we try<br />
to state whether the confidence intervals include the possibility of a large effect<br />
(e.g. “The RCT found no significant effect but included the possibility of a large<br />
harm, a large benefit, or both”). The exact wording depends on the mean result<br />
and the width of the confidence intervals.<br />
Non-systematic review<br />
A review or meta-analysis that either did not perform a comprehensive search of<br />
the literature and contains only a selection of studies on a clinical question, or<br />
did not state its methods for searching and appraising the studies it contains.<br />
Not significant/non-significant (NS)<br />
Not significant means that the observed difference, or a larger difference, could<br />
have arisen by chance with a probability of more than 1/20 (i.e. 5%), assuming<br />
that there is no underlying difference. This is not the same as saying there is<br />
no effect, just that this experiment does not provide convincing evidence of<br />
an effect. This could be because the trial was not powered to detect an effect<br />
that does exist, because there was no effect, or because of the play of chance.<br />
If there is a potentially clinically important difference that is not statistically<br />
significant then do not say there was a non-significant trend. Alternative<br />
phrases to describe this type of uncertainty include, “Fewer people died after<br />
taking treatment x but the difference was not significant” or “The difference was<br />
not significant but the confidence intervals covered the possibility of a large<br />
beneficial effect” or even, “The difference did not quite reach significance.”<br />
Number needed to harm (NNH)<br />
One measure of treatment harm. It is the average number of people from a<br />
defined population you would need to treat with a specific intervention for a<br />
given period of time to cause one additional adverse outcome. NNH can be<br />
calculated as 1/ARI (absolute risk increase).<br />
Number needed to treat (NNT)<br />
NNT is the number of patients who need to be treated to prevent one bad<br />
outcome. It is the inverse of the ARR:<br />
NNT=1/ARR.<br />
The number of patients who need to receive an intervention instead of the<br />
alternative in order for one additional patient to benefit. The NNT is calculated<br />
as 1/ARR. Example: if the ARR is 4 percent, the NNT = 1/4 percent = 1/0.04 = 25.<br />
NNT is one measure of treatment effectiveness. It is the average number of<br />
people who need to be treated with a specific intervention for a given period<br />
of time to prevent one additional adverse outcome or achieve one additional<br />
beneficial outcome. NNT can be calculated as 1/ARR :<br />
1. NNTs are easy to interpret, but they can only be applied at a given level of<br />
baseline risk.<br />
2. How do we calculate NNTs from meta-analysis data? The odds ratio (OR)<br />
(and its 95% CI), together with the AR in the control group, can be used to<br />
generate the absolute risk (AR) in the intervention group, and from there the<br />
NNT. This is a better measure than using the pooled data, which only uses<br />
trial size (not variance) and does not weight results (e.g. by trial quality). As<br />
people cannot be treated as fractions, we round NNTs up and numbers needed<br />
to harm (NNHs) down to the next whole number. This provides a conservative<br />
estimate of effect (it is most inaccurate for small numbers).<br />
3. NNTs should only be provided for significant effects because of the difficulty<br />
of interpreting the confidence intervals for non-significant results. Non-significant<br />
confidence intervals go from an NNT to an NNH by crossing<br />
infinity rather than zero.<br />
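Point 2 above can be sketched in Python. The figures below are invented; the algebra simply converts the control event rate to odds, applies the summary OR, converts back to an absolute risk, and rounds the NNT up as described:<br />

```python
import math

def nnt_from_or(odds_ratio, control_event_rate):
    """NNT from a summary odds ratio and a baseline (control) event rate."""
    cer = control_event_rate
    treated_odds = odds_ratio * cer / (1 - cer)   # apply OR to control odds
    eer = treated_odds / (1 + treated_odds)       # back to an absolute risk
    arr = cer - eer                               # absolute risk reduction
    return math.ceil(round(1 / arr, 6))           # round up; guard fp noise

# Hypothetical meta-analysis: summary OR 0.6, baseline risk 10%
nnt = nnt_from_or(0.6, 0.10)   # ARR = 0.10 - 0.0625 = 0.0375, so NNT = 27
```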
NNT for a meta-analysis<br />
Absolute measures are useful at describing the effort required to obtain a benefit,<br />
but are limited because they are influenced by both the treatment and also by<br />
the baseline risk of the individual. If a meta-analysis includes individuals with<br />
a range of baseline risks, then no single NNT will be applicable to the people<br />
in that meta-analysis, but a single relative measure (odds ratio or relative risk)<br />
may be applicable if there is no heterogeneity. In BMJ Clinical <strong>Evidence</strong>, an<br />
NNT is provided for a meta-analysis, based on a combination of the summary<br />
odds ratio (OR) and the mean baseline risk observed across the control<br />
groups.<br />
O<br />
Observational studies<br />
Observational studies may be included in the Harms section or in the Comment.<br />
Observational studies are the most appropriate form of evidence for the<br />
Prognosis, Aetiology, and Incidence/Prevalence sections. The minimum data<br />
set and methods requirements for observational studies have not been finalised.<br />
However, we always indicate the kind of observational study, whether a case<br />
series, a case-control study, or a prospective or retrospective cohort study.<br />
Odds<br />
The odds of an event happening are defined as the probability that an event will<br />
occur, expressed as a proportion of the probability that the event will not occur.<br />
Odds are a ratio of events to non-events; e.g. if the event rate for a disease is<br />
0.2 (20%), its non-event rate is 0.8 (80%), so its odds are 0.2/0.8 = 0.25 (see<br />
Odds Ratio).<br />
Odds ratio (OR)<br />
One measure of treatment effectiveness. It is the odds of an event happening<br />
in the experimental group expressed as a proportion of the odds of an event<br />
happening in the control group. Odds Ratio is the odds of an experimental<br />
patient suffering an event relative to the odds of a control patient. The closer<br />
the OR is to one, the smaller the difference in effect between the experimental<br />
intervention and the control intervention. If the OR is greater (or less) than one,<br />
then the effects of the treatment are more (or less) than those of the control<br />
treatment. Note that the effects being measured may be adverse (e.g. death or<br />
disability) or desirable (e.g. survival).<br />
When events are rare the OR is analogous to the relative risk (RR), but as event<br />
rates increase the OR and RR diverge.<br />
The ratio of events to non-events in the intervention group over the ratio of<br />
events to non-events in the control group.<br />
Odds reduction<br />
The complement of odds ratio (1-OR), similar to the relative risk reduction (RRR)<br />
when events are rare.<br />
Open label trial<br />
A trial in which both participant and assessor are aware of the intervention<br />
allocated.<br />
Outcomes<br />
This generally means mortality, morbidity, quality of life, ability to work, pain,<br />
etc. Laboratory outcomes are avoided if possible. Even if there is a strong<br />
relationship between a laboratory outcome marker and a clinical outcome it is<br />
not automatic that it will hold under new conditions. Outcomes that are markers<br />
for clinically important patient centred outcomes are often called surrogate<br />
outcomes (e.g. ALT concentrations are a proxy for liver damage following<br />
paracetamol overdose).<br />
P<br />
PICOt<br />
Population, intervention, comparison, and outcome, all with a time element<br />
(PICOt). The current reporting requirements of systematic reviews are: how<br />
many RCTs, how many participants in each, comparing what with what, in what<br />
type of people, with what results. Each variable needs a temporal element,<br />
(how old are the participants, how long is the treatment given for, when is the<br />
outcome measured). In the future, we hope to have a brief description in the<br />
text with full details accessible from the website.<br />
Placebo<br />
A substance given in the control group of a clinical trial, which is ideally identical<br />
in appearance and taste or feel to the experimental treatment and believed<br />
to lack any disease specific effects. In the context of non-pharmacological<br />
interventions, a placebo is usually referred to as a sham treatment.<br />
Placebo is not the same as giving no treatment and can induce real physiological<br />
changes. Whether it is appropriate to compare the experimental with placebo<br />
or no treatment depends on the question being asked. Where possible we<br />
report on the specific intervention given as a placebo. We include, if available,<br />
information on whether participants or clinicians could distinguish<br />
between placebo and the intervention.<br />
POEMs<br />
The acronym POEMs stands for Patient-Oriented <strong>Evidence</strong> that Matters, and<br />
refers to summaries of valid research that is relevant to physicians and their<br />
patients. POEMs are selected from research published in more than 100 clinical<br />
journals. Each month, a team of family physicians and educators reviews these<br />
journals and identifies research results that are important and can be applied<br />
to day-to-day practice. The valid POEMs are summarized, reviewed, revised,<br />
and compiled into InfoRetriever, part of InfoPOEMs, Inc. POEMs have to meet<br />
three criteria: they address a question that primary care physicians face in day-to-day<br />
practice; they measure outcomes important to physicians and patients,<br />
including symptoms, morbidity, quality of life, and mortality; and they have the<br />
potential to change the way physicians practice. Studies that do not meet<br />
these criteria cannot be POEMs.<br />
Types of Studies Selected:<br />
• Studies of treatments must be randomized, controlled trials.<br />
• Studies of diagnostic tests, whether performed in a laboratory or as part of<br />
the physical examination.<br />
• Only systematic reviews, including meta-analyses, are considered rather<br />
than nonsystematic reviews.<br />
• Studies of prognosis that identify patients before they have the outcome<br />
of importance and are able to follow-up at least 80 percent of the study<br />
population.<br />
• Decision analysis involves choosing an action after formally and<br />
logically weighing the risks and benefits of the alternatives.<br />
• Qualitative research findings are reported if they are highly relevant,<br />
although specific conclusions will not be drawn from the research.<br />
Positive likelihood ratio (LR+)<br />
The ratio of the probability that an individual with the target condition has a<br />
positive test result to the probability that an individual without the target<br />
condition has a positive test result. This is the same as the ratio (sensitivity/1-<br />
specificity).<br />
Positive predictive value (PPV)<br />
The chance of having a disease given a positive test result (not to be confused<br />
with sensitivity, which is the other way round).<br />
PPV is the proportion of the people with a positive test who have disease. Also<br />
called the post-test probability of disease after a positive test. See also SpPins<br />
and SnNouts.<br />
Post-test probability<br />
The proportion of patients with a particular test result who have the target<br />
disorder: post-test odds / (1 + post-test odds).<br />
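The odds form above makes the likelihood-ratio calculation mechanical: multiply the pre-test odds by the LR, then convert back to a probability. A small Python sketch with invented numbers:<br />

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Post-test probability via odds: post-test odds = pre-test odds x LR,
    then probability = odds / (1 + odds)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical: 20% pre-test probability, positive test with LR+ of 6
p = post_test_probability(0.20, 6)   # odds 0.25 -> 1.5 -> probability 0.6
```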
Power<br />
A study has adequate power if it can reliably detect a clinically important<br />
difference (i.e. between two treatments) if one actually exists. The power of a<br />
study is increased when it includes more events or when its measurement of<br />
outcomes is more precise.<br />
We do not generally include power calculations, but prefer to provide confidence<br />
intervals (CIs) and leave it to readers to say if this covers a clinically significant<br />
difference. If no CIs are available a power calculation can be included assuming<br />
it is adequately explained.<br />
Pragmatic study<br />
An RCT designed to provide results that are directly applicable to normal practice<br />
(compared with explanatory trials that are intended to clarify efficacy under<br />
ideal conditions). Pragmatic RCTs recruit a population that is representative<br />
of those who are normally treated, allow normal compliance with instructions<br />
(by avoiding incentives and by using oral instructions with advice to follow<br />
manufacturers’ instructions), and analyse results by “intention to treat” rather<br />
than by “on treatment” methods.<br />
Predictive value (positive and negative; PV+ and PV−)<br />
The percentage of patients with a positive or negative test for a disease who do<br />
or do not have the disease in question.<br />
Pre-test probability<br />
The probability of disease before a test is performed.<br />
Pre-test probability / prevalence<br />
The proportion of people with the target disorder in the population at risk at a<br />
specific time (point prevalence) or time interval (period prevalence).<br />
Prevalence<br />
The proportion of people with a finding or disease in a given population at a<br />
given time.<br />
Publication bias<br />
Occurs when the likelihood of a study being published varies with the results<br />
it finds. Usually, this occurs when studies that find a significant effect are<br />
more likely to be published than studies that do not find a significant effect, so<br />
making it appear from surveys of the published literature that treatments are<br />
more effective than is truly the case.<br />
Can occur through both preference for significant (positive) results by journals<br />
and selective releasing of results by interested parties. A systematic review can<br />
try and detect publication bias by a forest plot of size of trial against results.<br />
This assumes that larger trials are more likely to be published irrespective of<br />
the result. If a systematic review finds evidence of publication bias this should<br />
be reported. Often publication bias takes the form of slower or less prominent<br />
publication of trials with less interesting results.<br />
P value<br />
The P value is the probability of obtaining the observed or more extreme data, assuming<br />
the null hypothesis of no effect; P values are generally (but arbitrarily) considered<br />
significant if P &lt; 0.05.<br />
Quasi randomised<br />
A trial using a method of allocating participants to different forms of care that<br />
is not truly random; for example, allocation by date of birth, day of the week,<br />
medical record number, month of the year, or the order in which participants are<br />
included in the study (e.g. alternation).<br />
R<br />
Randomised<br />
Allocated to study groups by a chance process, so that allocation cannot be<br />
predicted or influenced. Where a trial is only quasi-randomised, we aim to<br />
explain how in the Comment section.<br />
Random effects<br />
The “random effects” model assumes a different underlying effect for each<br />
study and takes this into consideration as an additional source of variation,<br />
which leads to somewhat wider confidence intervals than the fixed effects<br />
model. Effects are assumed to be randomly distributed, and the central point of<br />
this distribution is the focus of the combined effect estimate.<br />
We prefer the random effects model because the fixed effects model is<br />
appropriate only when there is no heterogeneity (in which case the results of<br />
the two models will be very similar). A random effects model does not remove the effects of<br />
heterogeneity, which should be explained by differences in trial methods and<br />
populations.<br />
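To illustrate why the random effects interval is wider, the sketch below (our own Python, with invented study data) pools effects by inverse variance, using the DerSimonian-Laird moment estimate of the between-study variance tau²:<br />

```python
import math

def pool(effects, variances, tau_squared=0.0):
    """Inverse-variance pooled estimate with a 95% CI. tau_squared = 0
    gives the fixed effects model; tau_squared > 0, random effects."""
    w = [1.0 / (v + tau_squared) for v in variances]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

def dl_tau_squared(effects, variances):
    """DerSimonian-Laird (method of moments) between-study variance."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

effects, variances = [0.2, 0.6, 0.35], [0.02, 0.04, 0.03]  # hypothetical
tau2 = dl_tau_squared(effects, variances)
fixed_est, fixed_ci = pool(effects, variances)
rand_est, rand_ci = pool(effects, variances, tau2)
# the random effects CI is at least as wide as the fixed effects CI
```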
Randomised controlled trial (RCT)<br />
In an RCT, a group of patients is randomised into an experimental group and a<br />
control group. These groups are followed up for the variables/outcomes of<br />
interest. A trial in which participants are randomly assigned to two or more<br />
groups: at least one (the experimental group) receiving an intervention that<br />
is being tested and another (the comparison or control group) receiving an<br />
alternative treatment or placebo. This design allows assessment of the relative<br />
effects of interventions.<br />
Regression analysis<br />
Given data on a dependent variable and one or more independent variables,<br />
regression analysis involves finding the “best” mathematical model to describe<br />
or predict the dependent variable as a function of the independent variable(s).<br />
There are several regression models that suit different needs. Common forms<br />
are linear, logistic, and proportional hazards.<br />
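For the simplest case, one continuous independent variable, the "best" linear model minimises the squared residuals. A self-contained Python sketch with fabricated data:<br />

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit of y = a + b*x (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance of x and y divided by variance of x
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx                     # intercept through the means
    return a, b

# Fabricated dose (x) vs response (y) data, roughly y = 2x
a, b = linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.1, 7.9])
```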
Relative risk (RR)<br />
The number of times more likely (RR > 1) or less likely (RR < 1) an event is<br />
to happen in one group compared with another. It is the ratio of the absolute<br />
risk (AR) for each group. It is analogous to the odds ratio (OR) when events are<br />
rare.<br />
We define relative risk as the absolute risk (AR) in the intervention group<br />
divided by the AR in the control group. It is to be distinguished from odds ratio<br />
(OR) which is the ratio of events over non-events in the intervention group over<br />
the ratio of events over non-events in the control group. In the USA, odds ratios<br />
are sometimes known as rate ratios or relative risks.<br />
Relative risk increase (RRI)<br />
The proportional increase in risk between experimental and control participants<br />
in a trial.<br />
Relative risk reduction (RRR)<br />
The percentage reduction in events in the treated group event rate (EER) compared<br />
with the control group event rate (CER):<br />
RRR = (CER − EER)/CER × 100.<br />
The percentage difference in risk or outcomes between treatment and control<br />
groups. Example: if mortality is 30 percent in the control group and 20 percent with<br />
treatment, RRR is (30 − 20)/30 = 33 percent.<br />
The proportional reduction in risk between experimental and control participants<br />
in a trial. It is the complement of the relative risk (1-RR).<br />
Risk Ratio<br />
Is the ratio of risk in the treated group (EER) to the risk in the control group<br />
(CER): RR = EER/CER. RR is used in randomised trials and cohort studies.<br />
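The relative and absolute measures defined above all follow from the two event rates. The Python sketch below reproduces the worked example from the RRR entry (30% mortality in controls, 20% with treatment):<br />

```python
import math

def effect_measures(eer, cer):
    """Effect measures from experimental (EER) and control (CER) event
    rates, matching the formulas given in the surrounding entries."""
    rr = eer / cer                            # relative risk
    rrr = (cer - eer) / cer                   # relative risk reduction
    arr = cer - eer                           # absolute risk reduction
    nnt = math.ceil(round(1 / arr, 6))        # NNT, rounded up (fp guard)
    odds_ratio = (eer / (1 - eer)) / (cer / (1 - cer))
    return rr, rrr, arr, nnt, odds_ratio

rr, rrr, arr, nnt, odds_ratio = effect_measures(eer=0.20, cer=0.30)
# rr ~ 0.67, rrr ~ 0.33 (a 33% relative reduction), arr = 0.10, nnt = 10
```

Note that the OR here (about 0.58) is noticeably smaller than the RR (about 0.67): with event rates this high, OR and RR diverge, as the Odds ratio entry warns.<br />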
S<br />
Sensitivity (Sn)<br />
Percentage of patients with disease who have a positive test for the disease<br />
in question.<br />
Sensitivity is the proportion of people with disease who have a positive test.<br />
See also SpPins and SnNouts.<br />
Sn is the chance of having a positive test result given that you have a disease<br />
(not to be confused with positive predictive value (PPV), which is the other way<br />
around).<br />
Sensitivity analysis<br />
Analysis to test if results from meta-analysis are sensitive to restrictions on<br />
the data included. Common examples are large trials only, higher quality trials<br />
only, and more recent trials only. If results are consistent this provides stronger<br />
evidence of an effect and of generalisability.<br />
Sham treatment<br />
An intervention given in the control group of a clinical trial, which is ideally<br />
identical in appearance and feel to the experimental treatment and believed<br />
to lack any disease specific effects (e.g. detuned ultrasound or random<br />
biofeedback).<br />
Placebo is used for pills, whereas sham treatment is used for devices,<br />
psychological, and physical treatments. We always try and provide information<br />
on the specific sham treatment regimen.<br />
Significant<br />
Significance comes in two varieties.<br />
Statistical significance is when the P value is small enough to reject the null<br />
hypothesis of no effect, whereas clinical significance is when the effect size is large<br />
enough to be considered potentially worthwhile by patients. By convention, “significant” is<br />
taken to mean statistically significant at the 5% level. This is the same as a 95%<br />
confidence interval not including the value corresponding to no effect.<br />
SnNout<br />
When a sign/test has a high sensitivity, a negative result rules out the diagnosis;<br />
e.g. the sensitivity of a history of ankle swelling for diagnosing ascites is 92<br />
percent; therefore, if a person does not have a history of ankle swelling, it is<br />
highly unlikely that the person has ascites.<br />
Specificity (Sp)<br />
Percentage of patients ( or the proportion of people) without (free of) disease<br />
who have a negative test for the disease in question. See also SpPins and<br />
SnNouts.<br />
SpPin<br />
When a sign / test / symptom has a high Specificity, a Positive result rules<br />
in the diagnosis. For example, the specificity of the Western blot test for<br />
diagnosing HIV is 98%; therefore, if a person has a positive Western blot<br />
test, it rules in the diagnosis of HIV. Similarly, the specificity of a fluid wave for<br />
diagnosing ascites is 92 percent; therefore, if a person has a fluid wave, it is<br />
highly likely that the person has ascites.<br />
Standardised mean difference (SMD)<br />
A measure of effect size used when outcomes are continuous (such as<br />
height, weight, or symptom scores) rather than dichotomous (such as death or<br />
myocardial infarction).<br />
The mean differences in outcome between the groups being studied are<br />
standardised to account for differences in scoring methods (such as pain<br />
scores). The measure is a ratio; therefore, it has no units.<br />
SMDs are very difficult for non-statisticians to interpret, and combining<br />
heterogeneous scales provides statistical accuracy at the expense of clinical<br />
intelligibility. We prefer results reported qualitatively to reliance on effect sizes,<br />
although we recognise that this may not always be practical.<br />
Statistically significant<br />
Means that the findings of a study are unlikely to have arisen because of<br />
chance. Significance at the commonly cited 5% level (P < 0.05) means that the<br />
observed difference or greater difference would occur by chance in only 1/20<br />
similar cases. Where the word “significant” or “significance” is used without<br />
qualification in the text, it is being used in this statistical sense.<br />
Subgroup analysis<br />
Analysis of a part of the trial/meta-analysis population in which it is thought the<br />
effect may differ from the mean effect.<br />
Subgroup analysis should always be listed as such and generally only<br />
prespecified subgroup analysis should be included. Otherwise, they provide<br />
weak evidence and are more suited for hypothesis generation. If many tests are<br />
done on the same data this increases the chance of spurious correlation and<br />
some kind of correction is needed (e.g. Bonferroni). Given independent data,<br />
and no underlying effect, 1 time in 20 a significant result would be expected<br />
by chance.<br />
Surrogate outcomes<br />
Outcomes not directly of importance to patients and their carers but predictive<br />
of patient centred outcomes.<br />
Systematic review<br />
A type of review article that uses explicit methods to comprehensively analyze<br />
and qualitatively synthesize information from multiple studies. A systematic<br />
review is a literature review focused on a single question which tries to identify,<br />
appraise, select, and synthesise all high quality research evidence relevant to<br />
that question. A review in which specified and appropriate methods have<br />
been used to identify, appraise, and summarise studies addressing a defined<br />
question. It can, but need not, involve meta-analysis.<br />
The present requirements for reporting systematic reviews are search date,<br />
number of trials of the relevant option, number of trials that perform the<br />
appropriate comparisons, comparisons, details on the type of people, follow up<br />
period, and quantified results if available.<br />
T<br />
True negative<br />
A person without the target condition (defined by a gold standard) who has a<br />
negative test result.<br />
True positive<br />
A person with the target condition (defined by a gold standard) who also has a<br />
positive test result.<br />
V<br />
Validity<br />
The soundness or rigour of a study. A study is internally valid if the way it is<br />
designed and carried out means that the results are unbiased and it gives you<br />
an accurate estimate of the effect that is being measured. A study is externally<br />
valid if its results are applicable to people encountered in regular clinical<br />
practice.<br />
W<br />
Weighted mean difference (WMD)<br />
A measure of effect size used when outcomes are continuous (such as symptom<br />
scores or height) rather than dichotomous (such as death or myocardial<br />
infarction). The mean differences in outcome between the groups being studied<br />
are weighted to account for different sample sizes and differing precision<br />
between studies. The WMD is an absolute figure and so takes the units of the<br />
original outcome measure.<br />
A continuous outcome measure, similar to standardised mean differences but<br />
based on one scale so in the real units of that scale. Ideally it should be replaced<br />
by a discrete outcome and a relative risk; however, we use WMD if this is not<br />
possible.<br />
* * *<br />
* *<br />
*<br />
<strong>Evidence</strong>-<strong>Based</strong><br />
<strong>Medicine</strong> Resources<br />
A- Textbooks<br />
1. <strong>Evidence</strong>-based medicine: How to practice and teach EBM. Sackett DL<br />
et al. New York, Churchill Livingstone, 1997.<br />
2. <strong>Evidence</strong>-based medicine: How to practice and teach EBM. Straus et<br />
al. New York, Churchill Livingstone, 3rd ed.<br />
3. <strong>Evidence</strong>-based healthcare: How to make health policy and management<br />
decisions. Muir Gray JA. New York, Churchill Livingstone,1997.<br />
4. Towards evidence-based medicine in general practice. Rosser W.<br />
Blackwell Science Inc., 1997.<br />
5. <strong>Evidence</strong>-<strong>Based</strong> Family <strong>Medicine</strong>. Rosser WW, Shafir MS. Hamilton.<br />
B.C. Decker Inc. 1998. [Also available in CD-ROM].<br />
6. The evidence-based primary care handbook. Ed Mark Gabbay. Royal<br />
Society of <strong>Medicine</strong> Press, 2000.<br />
7. <strong>Evidence</strong>-<strong>Based</strong> Practice in Primary Care: Silagy C and Haines A. 2nd<br />
Ed BMJ Books 2001 (Also available in Arabic).<br />
8. <strong>Evidence</strong>-<strong>Based</strong> Practice in Primary Health Care (Arabic translation). Edited by<br />
Chris Silagy and Andrew Haines; translated by Lubna Al-Ansary. Scientific<br />
Publishing and Press, King Saud University, Riyadh, 1425 AH / 2004.<br />
9. <strong>Evidence</strong>-based public health. Brownson RC, Baker EA, Leet TL,<br />
Gillespie KN. Oxford University Press, New York 2002.<br />
10. Users' Guides to the Medical Literature. Essentials of <strong>Evidence</strong>-<strong>Based</strong><br />
Clinical Practice. Gordon Guyatt, MD. Drummond Rennie, MD. 2002.<br />
11. Users' Guides to the Medical Literature. A Manual for <strong>Evidence</strong>-<strong>Based</strong><br />
Clinical Practice. Gordon Guyatt, MD. Drummond Rennie, MD. 2002.<br />
12. Appraisal of Guidelines for research & evaluation. AGREE instrument<br />
training manual. The AGREE Collaboration, January 2003.<br />
13. Fundamentals of <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong> basic concepts in easy<br />
language. Prasad K. 1st Ed. Meeta Publishers, New Delhi, 2004.<br />
14. <strong>Evidence</strong>-<strong>Based</strong> Practice: A Primer for Health Care Professionals: Dawes<br />
M, Davies M and Gray A. 2nd Ed. Churchill Livingstone 2005.<br />
15. How to Read a Paper: The Basics of <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>:<br />
Greenhalgh T. 3rd Ed. BMJ Books & Blackwell Publishing London 2006.<br />
16. <strong>Evidence</strong>-based <strong>Medicine</strong> Toolkit: Heneghan C and Badenoch D. 2nd<br />
Ed. BMJ Books & Blackwell Publishing London 2006.<br />
B- Reappraised literature [Peer-reviewed publications<br />
which retrieve and appraise articles from prominent<br />
medical journals through rigorous criteria]:<br />
• The American College of Physicians Journal Club and <strong>Evidence</strong> <strong>Based</strong><br />
<strong>Medicine</strong>, a joint venture between ACP and BMJ.<br />
Published six times a year, ACP Journal Club is the critically acclaimed<br />
source to find the most important articles among the thousands<br />
published each year in peer-reviewed journals. ACP Journal Club's<br />
distinctive format facilitates rapid assessment of each study›s validity<br />
and relevance to your clinical practice.<br />
http://www.acponline.org/journals/acpjc/jcmenu.htm<br />
• <strong>Evidence</strong>-<strong>Based</strong> Nursing.<br />
http://ebn.bmjjournals.com/<br />
• Cochrane Collaboration - An international network of more than 4000<br />
scientists and clinical epidemiologists dedicated to “preparing,<br />
maintaining and disseminating systematic reviews of health care.”<br />
Cochrane Centers exist in Europe and North America, and have been<br />
founded on the principle that summary data in the form of scientifically<br />
conducted review articles (systematic overviews) represent the most<br />
efficient means by which clinicians can quickly access relevant<br />
information.<br />
The Cochrane Library is updated quarterly. It is formed of several separate<br />
databases, including The Cochrane Database of Systematic Reviews<br />
(CDSR), which contains the full text of specially compiled systematic<br />
reviews covering many branches of health care. It also contains the<br />
protocols and progress reports of systematic reviews that are currently<br />
being undertaken; The Database of Abstracts of Reviews of Effectiveness<br />
(DARE) which contains structured abstracts of good quality systematic<br />
reviews already published elsewhere; The Cochrane Controlled Trials<br />
Register (CCTR) which contains the bibliographic details and MEDLINE<br />
abstracts of about two orders of magnitude. A search of the Cochrane<br />
Library searches all the databases, so search results will not all be<br />
systematic reviews. CDSR and DARE form part of EBMR on Ovid Biomed<br />
(with ACP Journal Club). The Cochrane Library is a good source to try<br />
first.<br />
http://www.update-software.com/cochrane/cochrane-frame.html<br />
• Clinical <strong>Evidence</strong><br />
Clinical <strong>Evidence</strong> (CE) is the continually updated international source<br />
of the best available evidence on the effects of common clinical<br />
interventions, published by the BMJ publishing group. Topics are selected<br />
to cover common or important clinical conditions seen in primary care<br />
or ambulatory settings. It presents clear summaries of evidence, derived<br />
from systematic reviews and randomized controlled trials wherever<br />
possible. Each CE topic is developed following a rigorous process to<br />
ensure relevance and reliability. It is available online or in PDA format.<br />
http://www.clinicalevidence.com/<br />
• Essential <strong>Evidence</strong> Plus:<br />
Formerly InfoPOEMs/InfoRetriever, this is a powerful electronic resource<br />
packed with the content, tools, calculators, podcasts and daily email<br />
alerts to help clinicians deliver first-contact evidence-based patient<br />
care. InfoPOEMs (The Clinical Awareness System) is a<br />
searchable database of POEMs (Patient-Oriented <strong>Evidence</strong> that Matters)<br />
from the Journal of Family Practice. POEMs are summaries similar to<br />
ACP Journal Club articles in methodology and format, targeted at family<br />
practitioners. InfoRetriever simultaneously searches the complete<br />
POEMs database along with 6 additional evidence-based<br />
databases, plus the leading quick-reference tool, to enable rapid lookup<br />
and application of information and tools while you practice. In seconds,<br />
you search the complete POEMs database, 120 clinical decision<br />
rules, 1700+ diagnostic-test and H&PE calculators, the complete set of<br />
Cochrane systematic review abstracts, all USPSTF guidelines plus all<br />
evidence-based guidelines from the National Guidelines Clearinghouse<br />
(NGC), and the Five-Minute Clinical Consult. The information is organized<br />
and presented for immediate application to your practice. There is even<br />
basic drug information and an ICD-9 lookup tool within the application.<br />
http://www.essentialevidenceplus.com/index.cfm<br />
• <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong><br />
Available by subscription from the American College of Physicians, the<br />
Canadian Medical Association and the BMJ Publishing Group. Published<br />
bimonthly in printed format, and included with ACP Journal Club on a<br />
CD-ROM called Best <strong>Evidence</strong>. Very similar in approach and format to<br />
ACP Journal Club, but covering general practice, surgery, psychiatry,<br />
paediatrics, obstetrics, and gynaecology.<br />
http://www.bmjpg.com/data/ebm.htm<br />
• The <strong>Evidence</strong> <strong>Based</strong> <strong>Medicine</strong> Jeddah Working Group<br />
Provides EBM teaching materials, resources and links. Provides<br />
information on local, regional and international EBM events.<br />
P.O.Box: 15814 Jeddah 21454, KSA. Tel./ Fax: 00966 2 6725232<br />
E-mail: ebmjeddah@yahoogroups.com<br />
http://www.ebmjeddah.org/<br />
• National & Gulf Center for <strong>Evidence</strong> <strong>Based</strong> <strong>Medicine</strong> (NGCEBM)<br />
Post Graduate Training Center-National Guard Health Affairs, King<br />
Abdulaziz Medical City - Riyadh. P.O.Box: 22490 Riyadh 11426, KSA.<br />
E-mail: ebm@ngha.med.sa<br />
www.ngha.med.sa/internet/About-NGHA/Centers-noHP/NGCEBM/index.<br />
htm<br />
• Reference Gulf Center for EBM<br />
Arabian Gulf University, College of <strong>Medicine</strong> and Medical Sciences,<br />
P.O.Box: 22979 Manama, Kingdom of Bahrain. Tel.: 239999 - Fax:<br />
271090.<br />
• <strong>Evidence</strong>-based Mental Health<br />
<strong>Evidence</strong>-<strong>Based</strong> Mental Health is published quarterly by the BMJ<br />
Publishing Group. <strong>Evidence</strong>-<strong>Based</strong> Mental Health alerts clinicians<br />
to important advances in treatment, diagnosis, etiology, prognosis,<br />
continuing education, economic evaluation and qualitative research in<br />
mental health. It selects and summarizes the highest quality original and<br />
review articles. Experts in the field comment on the clinical relevance<br />
and context of each study, thereby integrating the best available clinical<br />
evidence with clinical experience.<br />
http://ebmh.bmjjournals.com/<br />
• <strong>Evidence</strong>-based Cardiovascular <strong>Medicine</strong><br />
http://www.harcourt-international.com/journals/ebm/<br />
-56-
C- Resources that will help you to learn more about EBM:<br />
• Centre for <strong>Evidence</strong> <strong>Based</strong> <strong>Medicine</strong> (Oxford University)<br />
http://www.cebm.net/ The site provides information on learning, doing<br />
and teaching EBM as well as the EBM toolbox.<br />
• Centre for <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong> (University of Toronto).<br />
http://www.cebm.utoronto.ca/ The goal of this website is to help develop,<br />
disseminate, and evaluate resources that can be used to practise and<br />
teach EBM for undergraduate, postgraduate and continuing education<br />
for health care professionals from a variety of clinical disciplines.<br />
• Critical Appraisal Skills Programme (CASP).<br />
CASP is a UK project that aims to help health service decision makers<br />
develop skills in the critical appraisal of evidence about effectiveness,<br />
in order to promote the delivery of evidence-based health care. The<br />
Open Learning Resource can be used by individuals or groups to<br />
develop knowledge and skills to implement evidence-based health care<br />
effectively in practice. The CD-ROM and workbook learning resource is<br />
aimed at people who do not have wide internet access but want to work<br />
in their own time using an interactive, supported educational tool. The<br />
5 modules introduce the concept of EBM and cover the 5 basic steps<br />
of EBM. Offprints of all the articles and source materials needed to<br />
complete the activities in the resource and a comprehensive glossary<br />
are provided.<br />
• Netting the <strong>Evidence</strong>:<br />
ScHARR Introduction to <strong>Evidence</strong> <strong>Based</strong> Practice on the Internet.<br />
http://www.shef.ac.uk/scharr/ir/netting/ An alphabetical list of databases,<br />
journals, software, organizations, resources for searching, appraising<br />
and implementing evidence.<br />
• An Introduction to <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>/ Information Mastery<br />
Course:<br />
A free web-based course (7 modules) prepared by Mark Ebell.<br />
http://www.poems.msu.edu/InfoMastery.<br />
• The <strong>Evidence</strong>-<strong>Based</strong>-Health Mailing list:<br />
Organized by the Centre for <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong> in Oxford.<br />
http://www.jiscmail.ac.uk/lists/evidence-based-health.html<br />
References<br />
For further reading<br />
1. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-based medicine: a new<br />
approach to teaching the practice of medicine. JAMA 1992;268(17):2420-<br />
2425.<br />
2. Silagy C, Lancaster T. The Cochrane Collaboration in primary health care.<br />
Fam Pract 1993;10:364-365.<br />
3. Guyatt GH, Rennie D. Users' guides to the medical literature. Editorial.<br />
JAMA 1993;270(17):2096-2097.<br />
4. Weatherall DJ. The Inhumanity of <strong>Medicine</strong>. British Medical Journal 1994;<br />
308: 1671-72.<br />
5. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-<strong>Based</strong> Care: 1. Setting<br />
Priorities: how important is this problem? Can Med Assoc J 1994;150:1249-<br />
1254.<br />
6. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-<strong>Based</strong> Care: 2. Setting<br />
guidelines: how should we manage this problem? Can Med Assoc J<br />
1994;150:1417-1423.<br />
7. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-<strong>Based</strong> Care: 3. Measuring<br />
performance: how are we managing this problem? Can Med Assoc J<br />
1994;150(11):1575-1579.<br />
8. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-<strong>Based</strong> Care: 4. Improving<br />
performance: how can we improve the way we manage this problem? Can<br />
Med Assoc J 1994;150(11):1793-1796.<br />
9. <strong>Evidence</strong>-<strong>Based</strong> Care Resource Group. <strong>Evidence</strong>-<strong>Based</strong> Care: 5. Lifelong<br />
learning: how can we learn to be more effective? Can Med Assoc J<br />
1994;150(12):1971-1973.<br />
10. Ridsdale L. <strong>Evidence</strong>-<strong>Based</strong> General Practice. A critical reader. W.B.<br />
Saunders Company Ltd. 1995.<br />
11. Richardson WS, Wilson MC, Nishikawa J, Hayward RSA. The well-built<br />
clinical question: a key to evidence-based decisions. Editorial. ACP J Club<br />
1995 Nov/Dec;123:A12-A13.<br />
12. Ham C, Hunter DJ, Robinson R. <strong>Evidence</strong> based policymaking. BMJ<br />
1995;310:71-72.<br />
13. Fahey T, Griffiths S, Peters TJ. <strong>Evidence</strong> based purchasing: Understanding<br />
results of clinical trials and systematic reviews. BMJ 1995;311:1056-1060.<br />
14. Guyatt GH, Cook DJ, Jaeschke R. How should clinicians use the results of<br />
randomized trials? Editorial. ACP J Club 1995 Jan/Feb;122:A12-A13.<br />
15. Guyatt GH, Jaeschke R, Cook DJ. Applying the findings of clinical trials to<br />
individual patients. Editorial. ACP J Club 1995 Mar/Apr;122:A12-A13.<br />
16. Cook RJ, Sackett DL. The number needed to treat: a clinically useful<br />
measure of treatment effect. Br Med J 1995;310:452-454.<br />
17. Bero L, Rennie D. The Cochrane Collaboration: preparing, maintaining<br />
and disseminating systematic reviews of the effects of health care. JAMA<br />
1995;274:1935-1938.<br />
18. Freemantle N, Grilli R, Grimshaw J et al. Implementing findings of medical<br />
research: the Cochrane collaboration on effective professional practice.<br />
Quality in Health Care 1995;4:45-47.<br />
19. <strong>Evidence</strong>-based medicine, in its place. Editorial. Lancet 1995;346:785.<br />
20. Davidoff F, Haynes B, Sackett D, Smith R. <strong>Evidence</strong>-based medicine.<br />
A new journal to help doctors identify the information they need. BMJ<br />
1995;310:1085-1086.<br />
21. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is<br />
evidence based. Lancet 1995;346:407-410.<br />
22. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. <strong>Evidence</strong><br />
based medicine: What it is and what it is not. BMJ 1996;312:71-72.<br />
23. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. <strong>Evidence</strong>-based<br />
medicine: how to practice and teach EBM. BMJ 1996; 313:1410 (30<br />
November).<br />
24. Meade MO, Richardson WS. Eds: Mulrow C, Cook DJ. Selecting and<br />
appraising studies for a systematic review. Ann Intern Med 1997; 127:531-<br />
537.<br />
25. Counsell C. Eds: Mulrow C, Cook DJ. Formulating questions and locating<br />
primary studies for inclusion in systematic reviews. Ann Intern Med<br />
1997;127:380-387.<br />
26. Bero LA, Jadad AR. How consumers and policymakers can use systematic<br />
reviews for decision making. Ann Intern Med 1997;127:37-42.<br />
27. Cook DJ, Greengold NL, Ellrodt AG, Weingarten SR. The relation between<br />
systematic reviews and practice guidelines. Ann Intern Med 1997;127:210-<br />
216.<br />
28. Mulrow C, Cook D. Integrating heterogeneous pieces of evidence in<br />
systematic reviews. Ann Intern Med 1997;127(11):989-995.<br />
29. Mulrow C, Langhorne P, Grimshaw J. Eds: Mulrow C, Cook DJ. Integrating<br />
heterogeneous pieces of evidence in systematic reviews. Ann Intern Med<br />
1997;127:989-995.<br />
30. Brouwers M, Haynes RB, Jadad A, Hayward RSA, Padunsky J, Yang<br />
J. <strong>Evidence</strong>-based health care and the Cochrane Collaboration. Clin<br />
Performance Quality Health Care 1997;5:195-201.<br />
31. Ellrodt G, Cook DJ, Lee J, Hunt D, Weingarten S. <strong>Evidence</strong> –based disease<br />
management. JAMA 1997;278(20):1687-1692.<br />
32. Cook DJ, Mulrow CD, Haynes RB: Synthesis of Best <strong>Evidence</strong> for Clinical<br />
Decisions. Ann Intern Med 1997;126:376-80.<br />
33. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: Synthesis of best<br />
evidence for clinical decisions. Ann Intern Med 1997;126:376-380.<br />
34. Hunt DL, McKibbon A. Locating and appraising systematic reviews. Ann<br />
Intern Med 1997;126:532-538.<br />
35. Rosser WW, Shafir MS. <strong>Evidence</strong>-based family medicine. Hamilton: B.C.<br />
Decker Inc., 1998.<br />
36. McColl A, Smith H, White P, Field J. General practitioners' perceptions of<br />
the route to evidence-based medicine: a questionnaire survey. BMJ 1998;<br />
316:361-5.<br />
37. Green L. Using evidence-based medicine in clinical practice. Prim Care<br />
1998; 25(2):391-400.<br />
38. Lohr KN, Eleazer K, Mauskopf J. Health policy issues and applications for<br />
evidence-based medicine and clinical practice guidelines. Health Policy<br />
1998;46:1-19.<br />
39. McColl A, Roderick P, Gabbay J, Smith H, Moore M. Performance<br />
indicators for primary care groups: an evidence based approach. BMJ<br />
1998;317:1354-1360.<br />
40. Campbell H, Hotchkiss R, Bradshaw N, Porteous M. Integrated care<br />
pathways. Education and Debate. BMJ 1998;316:133-137.<br />
41. Haynes B, Haines A. Barriers and bridges to evidence based clinical<br />
practice, BMJ 1998; 317:276-6.<br />
42. Salisbury C, Bosanquet N, Wilkinson E, Bosanquet A, Hasler J. The<br />
implementation of evidence-based medicine in general practice prescribing.<br />
British Journal of General Practice 1998; 48: 1849-1852.<br />
43. Jefferson Health System: From the editor. Higher quality at lower cost: Is<br />
evidence-based <strong>Medicine</strong> the answer? Health Policy Newsletter January<br />
1999; 12 (1): 1-7.<br />
44. Stewart A. <strong>Evidence</strong>-based medicine: a new paradigm for the teaching and<br />
practice of medicine. Annals of Saudi <strong>Medicine</strong> 1999; 19(1):32-36.<br />
45. Smeeth L, Haines A, Ebrahim S. Numbers needed to treat derived from<br />
meta-analyses: sometimes informative, usually misleading. BMJ 1999;<br />
318:1548-1551.<br />
46. Dawes M, Davies P, Gray A., et al. <strong>Evidence</strong>-based Practice. A primer for<br />
health care professionals. London, Churchill Livingstone 1999.<br />
47. Gabbay M(ed). The <strong>Evidence</strong>-<strong>Based</strong> Primary Care Handbook.<br />
London; The Royal Society of Medicine Press Ltd. 1999.<br />
48. Hannay DR. The evidence-based primary care handbook: Review. BMJ<br />
2000; 321:576 (2 September).<br />
49. Wilkinson EK, McColl A, Exworthy M, Roderick P, Smith H, Moore M,<br />
Gabbay J. Reactions to the use of evidence-based performance indicators<br />
in primary care: a qualitative study. Quality in Health Care 2000;9:166-174.<br />
50. Greenhalgh T. How to read a paper: the basics of evidence-based medicine.<br />
2nd edition. London: BMJ Books, 2000.<br />
51. Rodrigues RJ. Information systems: the key to evidence-based<br />
health practice. Bull of the WHO, 2000; 78(11): 1344-1351.<br />
52. Dawes M. How I would manage <strong>Evidence</strong>-based medicine: The RCGP<br />
Members Reference Book 2000/2001: 330-331.<br />
53. Quick J. Maintaining the integrity of the clinical evidence base. Bull of the<br />
WHO, 2001,79(12):1093.<br />
54. Quick J. Editorials. Maintaining the integrity of the clinical evidence base.<br />
Bull of the WHO 2001;79(12): 1093.<br />
55. Silagy C, Haines A (ed). <strong>Evidence</strong>-based practice in primary care. 2nd<br />
edition. London; BMJ books, 2001.<br />
56. Gray JAM. <strong>Evidence</strong>-based healthcare: how to make health policy and<br />
management decisions. 2nd edition London: Churchill Livingstone, 2001.<br />
57. Guyatt G, Rennie D (eds), for the evidence-based medicine working group.<br />
Users' guides to the medical literature: a manual for evidence-based clinical<br />
practice. Chicago: AMA Press, 2002.<br />
58. Khoja TA. Glossary of Health Care Quality "Interpretations of Terms".<br />
Executive Board of the Health Ministers' Council for GCC States, Riyadh,<br />
Saudi Arabia, 1st ed 2002; 79-80.<br />
59. Watson MC, Bond CM, Grimshaw JM, Mollison J, Ludbrook A, Walker<br />
AE. Educational strategies to promote evidence-based community<br />
pharmacy practice: a cluster randomized controlled trial. Family<br />
Practice, Oxford University Press 2002, Vol. 19, No. 5: 529-536.<br />
60. Brownson RC, Baker EA, Leet TL, Gillespie KN.<br />
<strong>Evidence</strong>-based Public Health. Oxford University Press, New York, 2002.<br />
61. Mansoor I. Online Electronic Medical Journals. Journal of the Bahrain<br />
Medical Society, 2002;Vol.14(3):96-100.<br />
62. Badenoch D, Heneghan C. <strong>Evidence</strong>-based medicine toolkit. BMJ Publishing<br />
Group, 2002. ISBN 0-7279-1601-7.<br />
63. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short<br />
courses in evidence-based medicine improve knowledge and skills?<br />
Validation of Berlin questionnaire and before and after study of courses in<br />
evidence based medicine. BMJ 2002;325:1338-1341.<br />
64. Al-Ansary L, Khoja T. The place of evidence-based medicine among primary<br />
health care physicians in Riyadh region, Saudi Arabia. Family Practice 2002;<br />
Vol. 19 (5): 537-542.<br />
65. Khoja TA, Akerele TM. <strong>Evidence</strong>-<strong>Based</strong> <strong>Medicine</strong>: A challenge to medical<br />
and health practice. Middle East Paediatrics 2003; Vol. 8 (1): 24-27.<br />
66. Alper BS. Practical <strong>Evidence</strong>-<strong>Based</strong> Internet Resources. Fam<br />
Pract Management, 2003: 49-52.<br />
67. Burgers J, Grol R, Zaat J, et al. Characteristics of effective clinical guidelines<br />
for general practice. Brit J of Gen Prac, 2003; 53:15-19.<br />
68. Alper B. Practical <strong>Evidence</strong>-<strong>Based</strong> Internet Resources. Fam Pract<br />
Management, July-Aug 2003: 49-52.<br />
69. Doig GS, Simpson F. Efficient literature searching: a core skill for the<br />
practice of evidence-based medicine. Intensive Care Med 2003; 29:2119-2127.<br />
70. Schumacher DN, Stock JR, Richards JK. A model structure for an EBM<br />
program in a multi hospital system. Jour Healthcare Quality, July/Aug 2003;<br />
Vol.25 (4): 10-12.<br />
71. Doig GS, Simpson F. Efficient literature searching: a core skill<br />
for the practice of evidence-based medicine. Intensive Care Med (2003)<br />
29:2119-2127. DOI 10.1007/s00134-003-1942-5.<br />
72. Schumacher DN, Stock JR, Richards JK. A Model Structure for an EBM<br />
Program in a Multihospital System. Journal for Healthcare Quality, Vol. 25,<br />
No. 4, July/August 2003.<br />
73. Mainz J. Developing evidence-based clinical indicators: a state of the art<br />
methods primer. International Journal for Quality in Health Care 2003; Vol.<br />
15, Supplement 1: pp i5-i23.<br />
74. Waters E. Doyle J. and Jackson N. <strong>Evidence</strong>-based public health: improving<br />
the relevance of Cochrane Collaboration systematic reviews to global<br />
public health priorities. Journal of Public Health <strong>Medicine</strong> 2003; Vol. 25 (3):<br />
263-266.<br />
75. Aino H, Yanagisawa S, Kamae I. The number needed to treat needs an<br />
associated odds estimation. Journal of Public Health 2004; 26(1): 84-87.<br />
76. Howes F, Doyle J, Jackson N, Waters E. Cochrane Update. <strong>Evidence</strong>-based<br />
public health: the importance of finding 'difficult to locate' public health and<br />
health promotion intervention studies for systematic reviews. Journal of<br />
Public Health, 2004; 26(1):101-104.<br />
77. Aino H, Yanagisawa S, Kamae I. The number needed to treat needs an<br />
associated odds estimation. Journal of Public Health, 2004; Vol. 26, No. 1:<br />
84-87.<br />
78. Al-Ansary L, Alkhenizan A. Towards evidence-based clinical practice<br />
guidelines in Saudi Arabia. Saudi Med J 2004;vol.25(11):1555-1558.<br />
79. Ross C. Brownson, Elizabeth A. Baker, Terry L. Leet, Kathleen N. Gillespie,<br />
<strong>Evidence</strong>-based public health, Oxford University Press, New York; 2002,<br />
Bulletin of the World Health Organization, April 2004, 82 (4).<br />
80. Grol R, Wensing M. What drives change? Barriers to and incentives for<br />
achieving evidence-based practice. Med J Aust, 15 March 2004; Vol. 180: S57-S60.<br />
81. Saan H. The road to evidence: the European path. IUHPE – Promotion &<br />
Education, Suppl 1, 2005: 1-7.<br />
82. Molleman GR, Bouwens GM. Building the evidence base: from tool<br />
development to agenda-setting and defining a joint programme for health<br />
promotion in Europe. IUHPE – Promotion & Education, Suppl 1, 2005: 8-9.<br />
83. Speller V, Winbush E, Morgan A. <strong>Evidence</strong>-based health promotion practice:<br />
how to make it work. IUHPE – Promotion & Education, Suppl 1, 2005: 15-20.<br />
84. Jané-Llopis E. From evidence to practice: mental health promotion<br />
effectiveness. IUHPE – Promotion & Education, Suppl 1, 2005: 21-27.<br />
85. Lorenz KA, Ryan GW, Morton CS, Chan KS, Wang S, Shekelle PG. A<br />
qualitative examination of primary care providers' and physician managers'<br />
uses and views of research evidence. International Journal for Quality in<br />
Health Care 2005; vol. 17 (5): 409-414.<br />
86. Hall SE, Holman CJ, Finn J and Semmens JB. International Journal for<br />
Quality in Health Care 2005; vol. 17 (5): 415-420.<br />
87. Siddiqi K, Newell J, Robinson M. Getting evidence into practice: what works<br />
in developing countries? International Journal for Quality in Health Care<br />
2005; 17(5): 447-453.<br />
88. Al-Shahri MZ, Alkhenizan A. Palliative care for Muslim patients. The Journal<br />
of Supportive Oncology 2005;3:432-436.<br />
89. Strite S, Stuart ME. What is an <strong>Evidence</strong>-<strong>Based</strong>, Value-<strong>Based</strong><br />
Health Care System? (Part 1). The Physician Executive, January-February<br />
2005; Vol. 31 (1): 50-54.<br />
90. Amin FA, Fedorowicz Z, Montgomery AJ. A study of knowledge and attitudes<br />
towards the use of evidence-based medicine among primary health care<br />
physicians in Bahrain. Saudi Med J, 2006; Vol.27(9):1394-1396.<br />
91. American Family Physician. www.aafp.org. Vol. 74 (8), October 15, 2006.<br />
92. Alfaris E, Abdulgader A, Alkhenizan A. Towards <strong>Evidence</strong>-based<br />
Medical Education in Saudi Medical Schools. Ann Saudi Med Nov-Dec<br />
2006: 26(6): 429-432.<br />
93. Tugwell P, Robinson V, Grimshaw J, and Santesso N. Systematic reviews<br />
and knowledge translation. Bull WHO, Aug. 2006; 84(8): 643-651.<br />
94. Soltani A, Moayyeri A. Towards evidence-based diagnosis in developing<br />
countries: The use of likelihood ratios for robust quick diagnosis. Ann Saudi<br />
Med. May-June 2006; 26(3): 211-215.<br />
95. World Health Organization, Health <strong>Evidence</strong> Network - <strong>Evidence</strong> for<br />
decision makers, Europe. What is the evidence on school health promotion<br />
in improving health or preventing disease and, specifically, what is the<br />
effectiveness of the health promoting schools approach? March 2006: 2-26.<br />
96. Perkins N et al. The last thing the world needs is another website: the role of<br />
evidence in integrating information and communication into development<br />
policy. Health Link, October 2006.<br />
97. Promoting evidence-based sexual and reproductive health care. Progress<br />
in Reproductive Health Research. No. 71.<br />
98. Shaneyfelt T, Baum KD, et al. Instruments for evaluating education in<br />
evidence-based practice. JAMA 2006; 296(9): 1116-1127.<br />
99. Petrova M, Dale J, Fulford B. Values-based practice in primary care: easing<br />
the tensions between individual values, ethical principles and best evidence.<br />
Brit J. of Gen Pract, 2006;56:703-709.<br />
100. Mossa SY. From Best <strong>Evidence</strong> to Distinguished Practice. SGH Med Jour<br />
2006; Vol. 1 (2) : 84-85.<br />
101. Al-Omari FK, Al-Asmary SM. Attitude, awareness and<br />
practice of evidence based medicine among consultant physicians in<br />
Western region of Saudi Arabia. Saudi Med J 2006; Vol. 27 (12): 1887-<br />
1893.<br />
102. Khoja TA, Al-Ansary LA. Attitudes to evidence-based medicine of<br />
primary care physicians in Asir region, Saudi Arabia. Eastern Mediterranean<br />
Health Journal. 2007; Vol. 13 (2): 408-419.<br />
103. Brownson RC, et al. Training practitioners in evidence-based chronic<br />
disease prevention for global health, IUHPE- promotion & education. 2007;<br />
Vol. XIV, (3):159-163.<br />
104. Nay R, Fetherstonhaugh D. <strong>Evidence</strong>-<strong>Based</strong> Practice: Limitations and<br />
Successful Implementation. Ann N Y Acad Sci 2007; 1114: 456-463.<br />
105. Kresse MR, Kuklinski MA, Joseph G. An <strong>Evidence</strong>-based template<br />
for implementation of multidisciplinary evidence-based practices in a<br />
tertiary hospital setting. American Journal of Medical Quality. May/June<br />
2007; Vol. 22 (3): 148-163.<br />
* * * * *<br />