Medical Reporting

Is This Claim Valid?

To determine this, a reporter should figure out whether the promised effects or danger signals seem at all realistic. Here, the golden rule is that the greater the claim, the more reason to be skeptical. The stronger the claim, the more determined ought to be the demand for evidence. For example, if researchers claim they’ve developed a drug which cures Alzheimer’s disease, the demand for evidence should be greater than if the claim centers on a drug that changes the behavior of brain-damaged rats.

Exposing unrealistic claims also calls for the ability to penetrate the rhetorical techniques some medical experts use to be persuasive. Be on the lookout, for example, when medical sources try to impress a reporter with the use of unnecessary technical jargon or excessively precise figures (4.86 percent instead of 5 percent). And, sometimes, untested medical technologies are successfully launched amid false accusations of a “conspiracy” against the technology. Such rhetoric can whet the appetite of a naive reporter, drawn to a more dramatic slant. And good medical reporters shine a light on vague hypotheses that are not supported by scientific evidence. For example, when a disease is shown to be associated with a particular genetic disorder, solid medical reporting tells readers that this does not support the hypothesis that a cure has been found.

Another key issue is whether an expert who makes a claim is sufficiently knowledgeable about the topic. In what specific field is he or she an expert? Since medical research is so highly specialized, reporters cannot assume that an expert on gastrointestinal cancer is sufficiently knowledgeable to also address gynecological tumors. What is the reputation of this expert among his or her colleagues? What type of research has he or she published? Especially important to find out—and report—is information about sponsoring organizations, companies or other important affiliations.
Are factors such as potential research grants or media exposure playing a role in the release of this information? Will the source gain from publicity? Some naive reporters might regard doctors and other medical experts as objective seekers of truth. However, figuring out if there is a hidden agenda is as essential in medical reporting as it is on any other beat.

Where Is the Evidence?

When medical reporters face tight deadlines, finding the evidence means acquiring at least a rough idea whether relevant studies are available to support the claim. Sweeping statements by experts, such as “breakthrough” or “research shows,” should not be accepted or quoted unless evidence can be produced. Sources should be able to back up their claims by peer-reviewed articles in recognized journals. If they can’t, then the absence of such evidence should be reported. Reporters should always ask to see the articles or references that experts cite.

There are questions medical journalists should be asking to get at this evidence. A few of these include:

1. Where is the evidence? (Ask to see articles or references. Are the journals well known?)

2. Who has been studied and who is affected? (What was the status of their disease, their age and gender, social/cultural background, and follow-up?)

3. Are the research methods reliable? (With regard to treatment methods, retrospective studies are generally weaker than prospective, uncontrolled studies generally weaker than controlled, and nonrandomized trials generally weaker than randomized.)

4. How great were the effects? (Changes should be reported not only in percentages, but also in absolute numbers. How many patients underwent treatment as compared with the number of successful cases?)

5. How precise are the results? (What is the margin of error? Are the results statistically significant or not? Beware of statements such as “up to” or “as little as” if they are not presented together.)

6. How well do the conclusions concur with other studies?
(Ask if other studies point in the same direction. If so, the results are probably more reliable. Small, single studies can be unreliable. Systematic reviews of many studies, sometimes including meta-analyses, are often more reliable.)

Reporters can use a handbook (such as “Clinical Evidence,” published regularly by BMJ Publishing Group) as a starting point for important questions. Systematic reviews, such as Cochrane reviews (www.cochrane.org), cover a wide range of verified information and identify beneficial or harmful interventions. Other helpful resources include health technology assessments, including economic, social and ethical analyses (see, for example, http://agatha.york.ac.uk).

Is the Evidence Strong and Relevant?

Medical reporters are flooded with published research findings from sources who want to promote their products and ideas. Given the time pressure under which most journalists work, a complete assessment of scientific quality is unrealistic. However, a skeptical attitude and a few basic principles go a long way.

For example, good reporters realize that weak findings about treatments often emerge from studies that do not use control groups, have not been randomized, or are based on few observations or a narrow sample. Similarly, a high dropout rate among trial subjects often leads to false conclusions, as does an excessively short follow-up time. Many researchers draw conclusions about a method’s benefits based solely on changes in lab values and test results, so-called surrogate endpoints. However, as a rule, special studies of hard endpoints—patients’ symptoms, quality of life, and survival—are necessary to back up claims about the benefits and risks of an intervention. For
example, a study showing that a treatment reduces tumor size in cancer patients does not necessarily mean it also saves lives. It might, in fact, do more harm than good.

Judging whether or not the evidence is relevant to a larger group of patients involves asking who has been studied and who is affected by the condition. Therefore, the basic questions include: Do these results really apply to other patients? How do you know?

When looking for clues about what is weak scientific evidence for treatments, what follows is a list of familiar characteristics:

• Preliminary results (often presented at conferences and said to be “based on my experience”)

• No control group (only before and after measurements)

• No randomization (often resulting in systematic errors)

• Few observations (often making it impossible to draw conclusions)

• Biased samples (particularly sick/healthy or old/young, or narrow subgroups)

• Major dropout (resulting in systematic error)

• No use of blinding (allowing expectations to influence the results and how they are interpreted)

• Short follow-up (leading to premature judgments of treatment success or failure)

• Lab values only (rather than symptoms, quality of life, and survival, which matter the most to patients).

How Can the News Be Reported Fairly and Accurately?

Balance is often considered a hallmark of fair reporting. In the medical beat, this means, for example, reporting the effects and the side effects, as well as the benefits and harm. Thus, when an expert discusses treatment from a single point of view, a good medical reporter will inquire about the other side of the issue and ask for such evidence.

Balance also means conveying important ambiguity and controversy. Both sides of an argument should be presented.
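Much of this balance comes down to simple arithmetic: the same trial result can sound dramatic when framed as a relative change yet modest in absolute terms. A minimal sketch of the two framings, using entirely invented figures:

```python
# Hypothetical trial, 1,000 patients per arm (all figures invented for illustration).
n = 1000
control_events = 20   # say, 20 of 1,000 untreated patients suffered a heart attack
treated_events = 10   # versus 10 of 1,000 treated patients

control_risk = control_events / n   # 2 percent baseline risk
treated_risk = treated_events / n   # 1 percent risk on treatment

# The headline-friendly framing: risk falls by half.
relative_reduction = (control_risk - treated_risk) / control_risk

# The framing readers also need: one percentage point, i.e., roughly
# one patient spared per hundred treated.
absolute_reduction = control_risk - treated_risk
number_needed_to_treat = 1 / absolute_reduction

print(f"Relative risk reduction: {relative_reduction:.0%}")      # prints "50%"
print(f"Absolute risk reduction: {absolute_reduction:.1%}")      # prints "1.0%"
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")  # prints "100"
```

Both framings are true; the difference is emphasis. A story that reports only “cuts the risk in half” without the absolute numbers invites readers to overestimate the benefit.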
More specifically, exposing a lack of scientific support for either side of an argument is equally important.

In fair medical reporting, it is also important to learn how to choose typical examples. At times, medicine offers examples of odd phenomena—incurable diseases that mysteriously disappear after a treatment that has been shown to be ineffective, or perfectly healthy people who die suddenly from a chemical that has been proven quite harmless. Given their rarity, these cases attract journalistic attention. But when reporting such events, journalists must make it clear that these are exceptions to the rule. And when interviewing a patient with a particular disease, the public needs to know whether the patient is a typical or an exceptional case.

Accurate reporting also entails helping the audience distinguish between correlation and cause. When two events occur at the same time—for example, a patient’s symptoms improve when a new treatment is started—this does not necessarily imply that one causes the other. Correlation is not causation.

In reporting about a particular health risk, it may be helpful to give the odds, but also to compare them with the odds of other risks, so the public has a basis for comparison. For example, the risk of acquiring cancer from a particular food can be compared to the risk of acquiring cancer from smoking. Finally, good medical reporters return to important topics and follow up on their reports. They might reevaluate claims by approaching the subject from new angles.

Seasoned medical reporters are distinguished from gullible ones by their ability to remain skeptical toward unproven claims—whether in interviews, in press releases, at conferences, in journal supplements, or on the Internet. While doing research for my textbook on medical journalism, I interviewed many excellent medical journalists. The lesson they had learned was clear: It does not take a medical degree to be a good medical reporter.
What it requires is basic knowledge of a few scientific ground rules (many of which I describe in my book) and, above all, common sense and a whole lot of healthy skepticism.

As psychiatrist Thomas Szasz said, “Formerly, when religion was strong and science weak, men mistook magic for medicine; now, when science is strong and religion weak, men mistake medicine for magic.” Let us not add to the confusion, but try to help the audience by sorting the wheat from the chaff. ■

Ragnar Levi, M.D., is an award-winning medical editor with a background in both medicine and journalism. Since 1992, Levi has been the executive editor of “Science & Practice,” published by SBU, Sweden. He has written “Medical Journalism—Exposing Fact, Fiction, Fraud” (Iowa State Press, 2001) and also authored a monograph on evidence-based medicine.

levi@pi.se

Nieman Reports / Summer 2003