Measuring Exposure to Political Advertising in Surveys

Polit Behav

market without competitive races may see less political advertising than an infrequent television watcher in a market with competitive races. These estimates thus remedy the main deficiency of survey measures of ad exposure, the lack of clear knowledge of exposure, while easily trumping experiments when it comes to external validity.

Freedman and Goldstein (and their collaborators), in particular, have written several carefully argued articles using CMAG data (Freedman, Franz, & Goldstein, 2004; Freedman & Goldstein, 1999; Freedman, Goldstein, & Granato, 2000; Goldstein & Freedman, 2002a, 2002b; Ridout et al., 2004). Other authors interested in questions of political advertising also now employ these data (e.g., Kahn & Kenney, 2004; Martin, 2004). Importantly, Freedman and Goldstein acknowledge possible weaknesses, pointing out that their estimates are probably the upper bound of true exposure because individuals are unlikely to have seen or paid attention to all the ads aired at a certain time. Overall, however, they argue that they should get the relative volume of exposure among individuals about right.

To be sure, measurement error is endemic to almost all survey measures, particularly those that rely on self-report and recall. There is an extensive literature that documents exactly this, including reports of behavior with reference to television (Chang & Krosnick, 2003; Price & Zaller, 1993; Tourangeau, Rips, & Rasinski, 2000). Chang and Krosnick, for example, show that answers about media use in the "typical" week differ from answers pertaining to the "past" week, with the former having greater predictive validity. Intriguingly, they also find that the differences are largest among the most educated respondents.
They theorize that for these respondents small changes in question wording affect their memory search, whereas less educated respondents are more likely to draw on the same information regardless.

The motivation for this paper is that too little is known about the measurement properties of the key variable of exposure used in estimates that combine CMAG data and self-reported television viewing habits. The objective is different in critical ways from Ridout et al.'s (2004) examination of CMAG-based measures. They compare the construct validity of estimates of exposure using CMAG data to other methods of estimation and conclude that CMAG measures have greater validity because they are more reliably tied to when and what individuals were exposed to. However, Ridout et al. do not examine measurement error and they do not provide the rationale discussed here for using logged measures of exposure. In other words, this paper asks a more fundamental question about CMAG measures. Its purpose is not to argue that CMAG data should not be used, but to urge more caution in using appropriate measures of ad exposure.

Data

The data I use to examine these issues come from two sources: an experiment and the 1998 ANES pilot survey, in which respondents were asked about their television viewing habits. The experiment was conducted at a southern university in two classes of undergraduates in the spring of 2004 and the spring of 2005. There were 95 respondents in total, of whom 91 completed both stages of the experiment. The stated purpose of the research was to gather material for discussion in a later class, and participants were assured of their anonymity. In the first stage subjects were asked to keep a diary of the television they watched over a four-week period. The task was not onerous; participants simply noted the times they watched television each day. In the 2005 study, the diaries also included boxes for subjects to check each day if they watched network news or any edition of the local news. Subjects received credit for maintaining the diaries and were consistently reminded about them. Two weeks after they handed in their diaries the same students were given an ostensibly unrelated survey that included questions about television viewing habits in the different formats used by the ANES. Participants were then debriefed about the true purpose of the two stages of research; none of the students indicated they were aware of a connection between the two.

The object of the experiment was to examine the discrepancies between the amount of television subjects typically watched, according to their diaries, and the amount they claimed to watch when answering survey questions. The survey items also allowed me to examine responses to three different ANES measures of television viewing habits that have been combined with CMAG data to estimate ad exposure in the studies cited earlier (see Appendix for question wording). The three measures are:

The Daypart Method

The method divides the day into chunks or "dayparts" (Freedman & Goldstein, 1999) and asks, "Thinking about this past week, about how many hours did you personally watch television on a typical weekday morning, from 6am to 10am?", and about the other 20 hours of the day.
Questions about weekend viewing are asked separately.

The Shows Method

The second method asks respondents how often they watch particular programs (e.g., from the ANES 2000 survey, "How many times in the last week have you watched Jeopardy?") or types of programs, such as daytime soap operas, and constructs a scale of the overall extent of television viewing (Freedman et al., 2000). Sometimes the frequency of watching specific shows is first aggregated into the frequency of watching shows in a particular genre, such as game shows, from which the overall extent of television viewing is then calculated (Goldstein & Freedman, 2002a). This method of calculating exposure from specific shows and types of shows is most similar to Ridout et al.'s (2004) "genre-based measure." To illustrate, if an individual watches Jeopardy and Wheel of Fortune almost every day, and daytime talk shows regularly, but almost never watches morning news, evening news, or late evening news programs, she might be at .5 on a 1-point scale of television viewing. If 1,000 ads aired in her market during the campaign she would be estimated to have seen 500 of them. Another individual in the same market, who watched news programs more often but never watched game shows or talk shows, might be at .25 on the 1-point scale of television viewing and therefore be estimated to have seen 250 ads.

The Ads within Shows Method

Rather than trying to build a scale of television watching and then multiplying it by the total number of ads aired in a market, the "ads within shows" method is based on the fact that candidates tend to concentrate their advertising during particular programs such as news broadcasts. An avid watcher of news broadcasts, during which 294,376 ads were aired in the 2000 election, for example (Freedman et al., 2004), is likely to see a larger number of ads than a regular viewer of Judge Judy, during which 10,036 ads were aired. The "ads within shows" measure is based on the ads that were aired during particular programs and how often respondents claim to watch those shows. As with the shows method, these are a combination of specific programs such as Judge Judy and types of programs such as daytime television talk shows. An individual who watched news programs seven days a week but never watched Judge Judy would be estimated to have seen 294,376 ads, 4 whereas an individual who watched Judge Judy every day but never watched the news would be estimated to have seen 10,036 ads. According to Freedman et al. (2004), in 2000 roughly two-thirds of all ads were aired during the shows about which the ANES asked. 5 They calculated likely exposure to the other third using the shows method (i.e., multiplying the total number of ads that were not aired during the specified shows by a measure of mean television viewing). 6

I compare the diaries with the daypart and shows methods. The comparison of the diaries with the ads within shows method is less comprehensive because it is limited to how often subjects claimed to watch national and local news rather than all the shows the ANES asks about.
Nevertheless, about 44 percent of ads are aired during news programs, making the accuracy of reports of news watching more consequential to ads within shows estimates than how accurately, for example, an individual recalls how often he or she watches Jeopardy. Discrepancies between the diary and survey measures of news watching thus have important implications for the ads within shows method.

The second data source is the 1998 ANES pilot study. This survey took place in three states: California, Georgia, and Illinois. All respondents were first asked how many hours of television they watch on a typical weekday morning, afternoon, and evening. Later, a random half of the sample was also asked how many hours of television they watched during five segments, or "dayparts," of the past week; the

4 In fact, the calculation is slightly more complicated because the ads on news programs are the total across the three networks. The estimate is therefore divided by three.

5 The shows were "Jeopardy," "Wheel of Fortune," morning news programs such as "Today," "Good Morning America," or "The Early Show," daytime television talk shows such as "Oprah Winfrey," "Rosie O'Donnell," or "Jerry Springer," network news programs in the late afternoon or early evening such as "World News Tonight" on ABC, "NBC Nightly News," "The CBS Evening News," or some other network news, and local TV news shows in the late afternoon or early evening, such as "Eyewitness News" or "Action News."

6 The ads within shows method is similar to Ridout et al.'s (2004) "five program measure."
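The arithmetic of the shows and ads within shows methods can be sketched as follows. The function names, the 0-1 frequency coding, and the toy viewing profiles are illustrative assumptions, not the actual ANES coding:

```python
def shows_method(viewing_freqs, total_ads):
    """Shows method: a 0-1 scale of overall viewing (here simply the mean
    of per-show viewing frequencies) multiplied by all ads in the market."""
    scale = sum(viewing_freqs.values()) / len(viewing_freqs)
    return scale * total_ads

def ads_within_shows(ads_by_show, viewing_freqs, other_ads, mean_viewing):
    """Ads within shows method: ads aired during each show weighted by how
    often the respondent watches it, plus the ads aired outside the listed
    shows allocated by mean viewing (i.e., the shows method)."""
    within = sum(ads_by_show[s] * viewing_freqs.get(s, 0.0) for s in ads_by_show)
    return within + other_ads * mean_viewing

# The shows-method example from the text: a viewer at .5 on the scale in a
# market where 1,000 ads aired is estimated to have seen 500 of them.
print(shows_method({"Jeopardy": 1.0, "Wheel of Fortune": 1.0,
                    "daytime talk": 1.0, "morning news": 0.0,
                    "evening news": 0.0, "late news": 0.0}, 1000))  # → 500.0

# The ads within shows example: a seven-day news watcher who never watches
# Judge Judy is estimated to have seen all 294,376 news ads.
print(ads_within_shows({"news": 294_376, "Judge Judy": 10_036},
                       {"news": 1.0}, other_ads=0, mean_viewing=0.0))  # → 294376.0
```

As footnote 4 notes, the actual calculation further divides the news total across the three networks; the sketch omits that detail.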

[Fig. 1 Hours of television watched from diaries; y-axis: hours watched per week; x-axis: respondents 1-91; series: weeks 1-4 and overall average. Data from: Student diaries and surveys]

slightly stronger correlations between adjacent weeks. Thus, from all appearances the average over the four weeks was a valid measure of a typical week's viewing.

Figure 2 compares the average number of hours watched according to the diaries with subjects' estimates of how much television they watch using the ANES daypart question format (the x-axis represents respondents and the order is the same as in Fig. 1). 7 The fact that subjects had kept diaries for four weeks only two weeks prior to the survey should mean that, if anything, awareness of television viewing habits was heightened. Figure 2 makes it clear that subjects vastly overestimated how much television they watch each week. The average amount of television watched by these respondents according to the survey estimates was 27.9 h rather than 10.4. Because the daypart questions ask about the "past week" I cannot be certain that everyone in the sample did not watch much more television than usual, but it seems a remote possibility and the timing of the study aimed to avoid periods during which viewing habits were likely to change. Only three respondents estimated that they watched fewer hours of television than their diaries suggested. If one treats the average number of hours watched per week from the diaries as the "true score," the reliability of the daypart questions as measures of television viewing (the variance of the true score divided by the variance of the measure) is only .18.

7 The daypart questions were phrased identically to the ANES 1998 pilot (see Appendix). In 2000 the ANES asked questions about specific programs. In 2002 and 2004 the ANES asked only about news programs. For the questions about programs, I used the phrasing of the 2000 ANES.

[Fig. 2 Average hours of television watched according to diaries and daypart questions; y-axis: average hours watched; x-axis: respondents 1-91; series: average from diary, daypart estimate. Data from: Student diaries and surveys]

What explains these discrepancies? Chang and Krosnick (2003) argue that such overestimates of typical behavior are routine. It looks here as though when asked the daypart questions subjects think of the number of hours of television they are ever likely to watch at certain times rather than the hours they typically watch or are likely to have watched in the past week. Alternatively, Allen (1965) showed that about 40 percent of the time during which the television was switched on either no one was watching (i.e., there was no audience), or the audience was inattentive. Perhaps the daypart questions better capture times during which the television is on than when a respondent is actually watching. The discrepancies are also in keeping with Robinson and Godbey's (1997) finding that relative to time diary data individuals are prone to overestimate the amount of time engaged in household work in a typical week. However, their comparison was with diaries of the past 24 hours' activity, whereas the comparison here is with behavior over a four-week period.

My experiment suggests that estimates of exposure to advertising based on daypart questions are likely to be much too high, and that this is not, as generally claimed, an "upper bound" resting on the assumption that individuals were viewing and paying attention to ads at the times they were watching television; it is, rather, an overestimate because the other part of the equation (the estimation of hours watched) is inflated. Nevertheless, the argument has been that the relative frequency of viewing captured by survey questions such as in the ANES is about right, so perhaps this does not matter. Indeed, the correlations between the diaries and the self-reports of television watching from daypart questions were .64 (Pearson's) and .62 (Spearman's).
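The reliability calculation treats the diary average as the true score and divides its variance by the variance of the survey measure. A minimal sketch, with invented hour figures (the experiment's actual data yield .18):

```python
import statistics

def reliability(true_scores, measure):
    """Reliability: variance of the true score (diary average) divided by
    the variance of the measure (daypart self-report)."""
    return statistics.pvariance(true_scores) / statistics.pvariance(measure)

diary = [8, 10, 12, 9, 11]     # hypothetical weekly hours from diaries
survey = [20, 35, 30, 18, 37]  # hypothetical inflated daypart self-reports
print(round(reliability(diary, survey), 2))  # → 0.03
```

Inflated, noisy self-reports blow up the denominator, which is how a measure can correlate reasonably with the truth yet still have very low reliability in this sense.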
Table 1 presents Pearson's correlations for the daypart and other two methods of estimating exposure discussed above. There were reasonably strong correlations between the diaries and the daypart questions (the daypart questions captured about half the variance in average television watching from the diaries) but they were not overwhelming. Even given typical levels of measurement error, if one accepts the accuracy of the diaries there was genuine error in the relative amounts of television watching elicited by the daypart questions.

The second method, based on total shows, does not look as valid. The correlation between this index and the average number of hours of television watched per week from the diaries was only .33. The correlation between typical television viewing according to the survey questions and the index of total shows was slightly higher at .38, so there is a relationship, but if one imagines multiplying this index by CMAG data (e.g., Freedman et al., 2000) and using the product to estimate what are generally thought to be small influences of advertising, it is no wonder that there are conflicting findings in the literature. It could be argued that the low correlations arise because students are less likely to be viewers of Wheel of Fortune and Jeopardy, but then one could counter that they are more likely to be viewers of talk shows; indeed the premise of these questions is that the balance of shows captures overall viewing habits at all ages.

On the other hand, correlations between local and national news viewing in the diaries (so crucial to the ads within shows method) and from the answers given in the surveys were at the higher level of the daypart questions, .69 and .61. It should be remembered, though, that even if 44 percent of ads are aired during the news (Freedman et al., 2004), 56 percent are not, and one-third of the ads in 2000 were aired during programs the ANES did not ask about. Exposure to those ads is estimated from the total shows method.
The diaries suggest that this will reduce the correlation between these estimates and true television watching, particularly for individuals who watch a lot of television other than the shows asked about.

Perhaps we can afford to be sanguine; after all, it could be argued, the measurement error in survey measures of exposure to advertising may be greater than acknowledged, but it is just random error in an independent variable. In the bivariate case, this implies that the estimated slope of the impact of exposure to advertising will be attenuated, particularly if the reliability is as low as .18, and we are never as concerned by error that makes our estimates more conservative. The multivariate case is more complicated, however: "In the most general case [however] all one knows is that estimated regression coefficients will be biased when measurement error is present. The direction and magnitude of the error is usually unpredictable" (Berry & Feldman, 1985, 30). In other words, in multivariate models of the type routinely used in the political advertising literature, any kind of measurement error may present a problem. Nonrandom error is particularly problematic, however, and the ANES pilot data will suggest that the error in self-reported television viewing habits may indeed be nonrandom.

Table 1 Correlations between average television watching in diaries and from ANES questions in student surveys (n = 91)

Overall diary average per week with:
  Daypart method       .64
  Total shows method   .33
  Local news           .69
  National news        .61

Data from: Student diaries and surveys

To recap, in the ANES 1998 pilot respondents were first asked how many hours of television they watched on a typical weekday morning and afternoon, on a typical weekday evening, and on a typical weekend morning and afternoon. Later in the same survey half the sample were asked how many hours of television they watched during five weekday dayparts and between 6 am and 7 pm at the weekend during the past week. The other half of the sample was asked about specific shows. The estimates of weekly television viewing that result from the ANES 1998 pilot study echo those from the diary study. First, the daypart questions yield higher estimates than the typical weekday and weekend questions; almost three-quarters of the sample claimed to watch more television when the questions were asked in daypart form. The 1998 ANES pilot, like the diary study, suggests that daypart questions lead to higher estimates of television viewing and thus of exposure to advertising. Second, the correlation between the measures is, however, reasonably high at .67. And third, the estimate using the total shows method has a weaker correlation with the typical weekday and weekend questions of .47. 8

While this evidence reinforces the experimental results, the ANES pilot data provide an additional opportunity to examine the individual-level correlates of discrepancies in reported television watching. I created a dependent variable of the discrepancy, in hours, by subtracting the weekly hours of television viewing implied by the typical weekday and weekend questions from the weekly hours implied by the daypart question format.
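The discrepancy variable is a simple difference in implied weekly hours; a sketch, with hypothetical figures:

```python
def weekly_discrepancy(daypart_hours, typical_hours):
    """Discrepancy in implied weekly viewing: the daypart-format estimate
    minus the typical weekday/weekend estimate. Positive values mean the
    daypart format yields the higher figure."""
    return daypart_hours - typical_hours

# Hypothetical respondent: 28 implied weekly hours under the daypart
# format versus 18 under the typical day format.
print(weekly_discrepancy(28, 18))  # → 10
```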
Table 2 shows the results of regressing this discrepancy variable on key respondent characteristics from the advertising and voting behavior literature: strength of party identification (from 0, Independent, to 3, strong identifier), internal and external efficacy (1 to 5 scales where 5 represents the strongest sense of efficacy), political knowledge (a 0 to 4 scale based on factual questions), mobilization by a party or candidate (a 0 to 3 scale based on types of contact), sex (female = 1), and age.

I also look in more detail at the properties of the daypart measure by including it as a control variable. It may be that the discrepancies between the daypart and typical weekday and weekend measures are constant (e.g., a respondent who estimates 10 h given the latter format says 20 with the former, a respondent who estimates 40 h given the latter format says 50 with the former, and so on), in which case the coefficient on the variable will not be statistically different from zero. It is also possible, however, that individuals who watch the least television according to the daypart measures have the largest discrepancies because they watch much less according to the typical day measures (the coefficient would be negative), or that the largest discrepancies are characteristic of those who watch the most television according to the daypart questions (the coefficient would be positive). 9

8 This echoes the diary study (i.e., the shows method has the lowest correlation), but I cannot calculate the correlation with the daypart questions because the daypart and shows questions were asked of different halves of the sample.

9 I excluded 12 respondents who, in answer to the typical weekday day, evening, or weekend questions, said they watched more than 10 h a day, because they were all coded as an "11" in the ANES survey rather than by the exact number of hours. Because the hours they watch may exceed 11, the discrepancy with the daypart questions could be exaggerated. This is not a conventional case of censoring for which tobit estimation would be appropriate. The censoring affects a component of the dependent variable (the discrepancy), stopping us from knowing whether the two methods of self-report offer very similar answers for these 12 respondents, rather than there being censoring of the dependent variable itself at its upper or lower levels.

Table 2 Regression of discrepancies between ANES questions on individual level characteristics

Variable                           Coefficient (standard error)
Total daypart estimate             .45 (.03)**
Political knowledge                .96 (.47)*
Mobilized by a party/candidate     1.66 (.68)*
Strength of party identification   .47 (.61)
Internal efficacy                  .21 (.57)
External efficacy                  .39 (.44)
Age                                .05 (.04)
Sex                                .83 (1.18)
Constant                           10.91 (2.85)**
N                                  555
Adjusted R2                        .35

** p < .01, * p < .05, # p < .10 (two-tailed)
Data from: ANES 1998 pilot study

Table 2 illustrates that several individual characteristics are associated with greater sensitivity to question format, that is, with larger discrepancies in estimated television watching. In addition, the positive and statistically significant coefficient on the daypart estimate shows that the discrepancies with the "typical day" questions are not constant but grow larger as the daypart estimates grow larger. It is the relationships with political knowledge and mobilization by a party or candidate that are most interesting, however. Politically knowledgeable individuals and those subject to the most intense mobilization efforts, who we also know are likely to have the greatest resources of time and money and to be politically engaged, are the most sensitive to question format (i.e., the discrepancies in their answers tend to be greatest). This echoes Chang and Krosnick's (2003) finding for highly educated respondents, and the explanation may well be similar: differences in question wording prompt different memory searches for these individuals but do not for those who lack political knowledge or are disengaged. 10

10 Indeed, replacing political knowledge with level of education in Table 2 shows the same robust, positive relationship. With the inclusion of both political knowledge and education in the same model, however, the coefficients for each are reduced and political knowledge drifts to statistical insignificance; they share variance because educated individuals tend to be more politically informed. They each indicate that political sophistication is associated with sensitivity to question wording. In the remainder of the paper I continue to focus on political knowledge because it is the more common indicator of political sophistication in this literature (e.g., Freedman et al., 2004; Kahn & Kenney, 1999).

More importantly, however, these nonrandom discrepancies are greatest among precisely those individuals most interested in campaigns, most likely to vote, and so on. The daypart questions inflate estimates of television watching generally, but because discrepancies with other measures are systematically more pronounced among these individuals, the relationships between, for example, political knowledge, exposure to negative advertising, and attitudes and behavior will be sensitive to the questions used to construct the exposure estimates, varying in sign and size (Berry & Feldman, 1985). Consistent with this, the literature using CMAG data has been inconsistent, suggesting both that the least politically knowledgeable are unaffected or confused by exposure to (negative) advertising (Stevens, 2005) and that they benefit the most from exposure to advertising (Freedman et al., 2004). In either case the normative implications are profound, implying either that, all else equal, the modern campaign exacerbates cynicism and inequality in political participation or that campaign ads "represent the multivitamins of American politics" (Freedman et al., 2004, 725).

The 1998 ANES pilot data allow a more explicit analysis of how question formats may contribute to the confusion. I estimated identical models of the relationship between exposure to negative advertising, political knowledge, and four dependent variables typical of the literature: engagement with the campaign (frequency of discussing politics during the past week and awareness of the issues the candidates for governor had been discussing during the campaign), views of government (external efficacy), and the probability of voting (see Appendix for details). 11 One set of models operationalized total exposure to negative advertising using the daypart method, another calculated total exposure from the "typical day" questions, while a third constructed total exposure using the shows method.
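The specification shared by each set of models, exposure and political knowledge entered alongside their interaction, can be sketched with simulated data; the variable names and values below are stand-ins, not the ANES pilot variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated stand-ins for the survey variables.
exposure = rng.uniform(0, 1000, n)               # estimated negative ads seen
knowledge = rng.integers(0, 5, n).astype(float)  # 0-4 political knowledge
mobilized = rng.integers(0, 4, n).astype(float)  # 0-3 mobilization scale
talk_days = rng.integers(0, 8, n).astype(float)  # days talked politics

# Design matrix: intercept, main effects, and the exposure x knowledge
# interaction, whose coefficient lets the effect of exposure differ between
# the least and most politically sophisticated.
X = np.column_stack([np.ones(n), exposure, knowledge,
                     exposure * knowledge, mobilized])
beta, *_ = np.linalg.lstsq(X, talk_days, rcond=None)
print(beta.shape)  # → (5,)
```

The full models add the remaining controls as further columns; only the construction of the interaction term is the point here.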
In addition I controlled for standard variables in the literature (e.g., Freedman et al., 2004): the total amount of negative advertising in the respondent's market (a measure of how intense the campaign was in that locale 12), dummy variables for two of the three states (Georgia and Illinois), strength of party identification, the extent of mobilization from the parties, education, race, age, and income. The key variables of interest are exposure to negative advertising and the interaction between exposure to negative advertising and political knowledge. Table 3 presents the results using the daypart and typical day questions side by side 13 (estimates using the shows method are available on request).

11 The CMAG data for 1998 do not include information about gubernatorial advertising. However, Stevens (2005) argues that because both the gubernatorial and Senate elections in California, Georgia, and Illinois shared similar characteristics, such as competitiveness, and because candidates tend to air ads at the same time, it is a reasonable assumption that exposure to advertising in the gubernatorial races was highly correlated with exposure to the Senate races. In Tables 3 and 4 I include one dependent variable that is specific to the gubernatorial races in these states, the number of issues that respondents recognize the candidates have talked about: if exposure to negative advertising increases awareness of issues, and individuals who saw a lot of Senate ads also saw a lot of gubernatorial ads, we would expect exposure to negative advertising to have a positive relationship with recognition of issues.

12 Total negative advertising in a television market is arguably a better measure of campaign intensity than total advertising because we tend to see more advertising, and more negative advertising, in competitive races. I also estimated all the models in Tables 3 and 4 with total advertising as a proxy for campaign intensity. It made no difference to the results.

13 The relatively small sample sizes in Table 3, for an ANES survey, arise because, first, the daypart questions were asked of a half sample and, second, CMAG data cover only the top 75 television markets, containing about three-quarters of the U.S. population, meaning there is no information about advertising where many of the respondents lived (which is why there are roughly one-third fewer respondents in Table 3 than in Table 2).

Table 3 The impact of exposure to negative advertising and political knowledge using different methods of estimating exposure

Entries are coefficients (standard errors). For each dependent variable, the first column uses the daypart exposure measure and the second the typical day measure. Dependent variables: (1) # days in past week talked about politics; (2) # issues recognize that candidates have talked about; (3) external efficacy; (4) intention to vote.

Variable | (1) Daypart | (1) Typical day | (2) Daypart | (2) Typical day | (3) Daypart | (3) Typical day | (4) Daypart | (4) Typical day
Political knowledge | .359 (.137)* | .354 (.125)** | .270 (.178)# | .106 (.162) | .171 (.088)# | .092 (.079) | .200 (.055)** | .140 (.050)**
Exposure to negative advertising (daypart method) | .0003 (.0014) | | .0043 (.0019)* | | .0015 (.0009)# | | .0011 (.0006)# |
Exposure (daypart) × Political knowledge | .0011 (.0005)* | | .0006 (.0006) | | .0005 (.0003)# | | .0003 (.0002)# |
Exposure to negative advertising (typical day) | | .0004 (.0014) | | .0014 (.0016) | | .0004 (.0008) | | .0004 (.0005)
Exposure (typical day) × Political knowledge | | .0010 (.0004)** | | .0003 (.0005) | | .0000 (.0002) | | .0001 (.0001)
Mobilized by parties | .659 (.146)** | .655 (.147)** | .708 (.193)** | .713 (.194)** | .073 (.095) | .081 (.095) | .212 (.059)** | .218 (.060)**
Strength of party identification | .229 (.128)# | .230 (.128)# | .240 (.161)# | .253 (.162)# | .266 (.079)** | .266 (.079)** | .234 (.050)** | .232 (.051)**
Total negative spots in market | .0004 (.0003) | .0004 (.0003) | .0004 (.0005) | .0002 (.0005) | .0001 (.0002) | .0003 (.0002)# | .0002 (.0001)# | .0001 (.0001)
Georgia | .246 (.324) | .253 (.324) | .389 (.417) | .413 (.420) | .134 (.205) | .133 (.206) | .194 (.129)# | .193 (.129)#
Illinois | .498 (.373) | .489 (.375) | .842 (.477)# | .849 (.491)# | .004 (.234) | .018 (.236) | .171 (.147) | .149 (.148)
Education | .120 (.245) | .119 (.245) | .266 (.303) | .285 (.305) | .302 (.149)* | .291 (.150)# | .052 (.094) | .061 (.094)
African-American | .045 (.391) | .046 (.391) | .320 (.502) | .273 (.505) | .241 (.248) | .219 (.249) | .233 (.158)# | .247 (.159)#
Income | .013 (.149) | .008 (.150) | .090 (.188) | .096 (.191) | .151 (.093)# | .139 (.095)# | .064 (.058) | .052 (.049)
Age | .024 (.009)** | .024 (.009)** | .003 (.011) | .003 (.011) | .007 (.005) | .007 (.005) | .007 (.003)* | .007 (.003)*
Constant | .237 (.668) | .251 (.642) | 4.162 (.789)** | 4.573 (.772)** | 1.368 (.391)** | 1.592 (.381)** | .946 (.247)** | 1.121 (.241)**
N | 320 | 320 | 377 | 377 | 373 | 373 | 372 | 372
Adjusted R2 | .15 | .15 | .07 | .06 | .08 | .07 | .18 | .18

** p < .01, * p < .05, # p < .15 (two-tailed test)
Data from: ANES 1998 pilot study

Focusing first on the relationships from estimates based on the daypart method, we see some influence of exposure to negative advertising in all four models, and interaction coefficients between exposure to negative advertising and political knowledge that are statistically significant at conventional levels, or close to it, in three of the four models. In each of these models the sign on the main effect is positive while the interaction term is negative. The implication, echoing recent CMAG-based findings (Freedman et al., 2004), is that it is the least politically sophisticated who derive the greatest benefit from exposure to negative advertising. The daypart estimates in Table 3 suggest that as a result of exposure to negative advertising, relative to political sophisticates, the least politically sophisticated become more likely to talk about politics, have an enhanced sense of governmental responsiveness to its citizens, and are more certain that they will vote. The estimates using the typical day measures of television viewing in Table 3 are, however, quite different in implication. While the result is the same for the relationship between exposure to negative advertising and discussion of politics, the other relationships are overwhelmingly insignificant, suggesting neither an influence of exposure to negative advertising nor any moderating impact of political knowledge. In addition, estimates using the shows method indicate no influence of exposure to negative advertising on any of the dependent variables.

It therefore appears that the relationships, and the conclusions one would draw about the impact of advertising and the moderating influence of political knowledge on the relationship between exposure to negative advertising and campaign learning, attitudes toward government, and voting behavior, are highly sensitive to question wording.

Perhaps it is not a startling claim that different operationalizations of independent variables produce different results. However, other literatures are more settled both theoretically and empirically. There is relatively little controversy about what party identification or trust in government is, or how to measure them, nor about key variables such as vote choice or turnout in the area of voting behavior. The field of political advertising is not so fortunate; there is no settled approach to the operationalization of exposure in survey research.

So how should survey researchers deal with the measurement problems I have outlined? As always, one should begin with theory. Fortunately the theoretically most defensible specification of exposure also alleviates some of these problems of sensitivity to question wording. In Table 3 I adopted the approach of some research in the field (e.g., Goldstein & Freedman, 2002a) by specifying a relationship in which the marginal effects of exposure to advertising are constant; the impact of exposure to the first ad is assumed to be the same as the impact of exposure to the one hundred and first. This seems unrealistic, however. Much of the qualitative
Polit Behavevidence about negative advertising indicates a growing weariness and weakenedimpact with increased exposure (e.g., Patterson & McClure, 1976, 150). Moreimportantly from a theoretical perspective, a wealth of psychological research onmessage repetition (Cacioppo & Petty, 1989) and primacy effects (Holbrook,Krosnick, Visser, Gardner, & Cacioppo, 2001) indicates that the impact ofcommunications is non linear; it tends to decline over time. 14There are two principal ways of operationalizing nonlinear relationships. Thefirst is to add a quadratic term that allows not only for a decline in the marginaleffects of exposure but also for their possible reversal. However, one would notexpect the reversal of marginal effects across the entire range of commonlyanalyzed dependent variables. With levels of information, for example, decliningmarginal effects seem likely but a reversal of marginal effects—the notion thatindividuals start to lose information at higher levels of exposure—does not.Moreover, the empirical evidence for such a reversal is weak; it only appears atlevels of negativity higher than those actually observed (Lau & Pomper, 2001).The most theoretically defensible operationalization of exposure is the second,taking the log of the estimate, which accounts for diminishing marginal effects ofexposure. Some research has done this (Freedman et al., 2004; Freedman andGoldstein 1999; Ridout et al. 2004) but what is new in this paper is the argumentthat this operationalization may also have a payoff in measurement terms. Thereason is because, first, taking the log of estimated exposure compresses much of thevariation that is an artifact of question wording and, second, overestimates ofexposure that are a consequence of overestimates of television viewing in thedaypart format in particular are rendered less consequential. 
Taking the log of thedaypart and typical day questions increases the correlation between the twomeasures to .96; to all intents and purposes the variation is the same, while thecorrelation between estimates using the typical day and shows questions is .90, notas strong as Ridout et al. (2004) find but stronger than the correlations between the‘‘raw’’ estimates of exposure. 15Table 4 shows the model results using the daypart, typical day, and show basedoperationalizations. Two aspects are noteworthy: there is greater consistency in theestimates across the different methods, with one important exception, and the resultsof the models using the more realistic logged measures of exposure are somewhatdifferent than those assuming a linear impact of exposure. They suggest a morelimited influence of exposure, with no impact on perceptions of external efficacy orlikelihood to vote, and few differences that are a result of political knowledge; theyare confined to the frequency of discussing the campaign.But there is still some inconsistency. The signs on the coefficients for the daypartand typical day questions indicate that exposure to negative advertising stimulates14 On-line models of attitude formation and updating also imply that the capacity of new information toalter impressions diminishes.15 Using the log of their estimates is likely the reason why Ridout et al. (2004) find high correlationsbetween their three estimates of exposure using CMAG data. It is not, as they imply, because daypart andshow methods provide essentially the same information about television viewing habits but because thecorrelations are between logged estimates of exposure, meaning the variation due to discrepancies hasbeen reduced.123

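The compression argument can be illustrated with a small simulation: if two question formats capture the same underlying viewing time but inflate it multiplicatively to different degrees, logging the estimates raises the correlation between them. This is a hypothetical sketch with simulated data, not the ANES measures; the distributions and inflation factors are assumptions chosen only to mimic skewed, over-reported exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" weekly ad exposure: right-skewed, like viewing-based estimates
true_exposure = rng.gamma(shape=2.0, scale=30.0, size=500)

# Two question formats measuring the same quantity with multiplicative
# reporting error; the daypart-style format is assumed to inflate reports more
daypart = true_exposure * rng.lognormal(mean=0.5, sigma=0.4, size=500)
typical_day = true_exposure * rng.lognormal(mean=0.1, sigma=0.4, size=500)

# Correlation between the raw estimates vs. the logged estimates
r_raw = np.corrcoef(daypart, typical_day)[0, 1]
r_log = np.corrcoef(np.log1p(daypart), np.log1p(typical_day))[0, 1]
```

Under these assumptions the logged estimates correlate more strongly than the raw ones, because logging turns the format-specific multiplicative inflation into an additive shift that correlations ignore; the remaining variation is the shared underlying viewing time.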
Table 4 The impact of exposure to negative advertising and political knowledge using logged measures of exposure

Cell entries are coefficients (standard errors). Dependent variables: (1) # days in past week talked about politics; (2) # issues recognize that candidates have talked about.

| Independent variable | (1) Daypart | (1) Typical day | (1) Shows | (2) Daypart | (2) Typical day | (2) Shows |
|---|---|---|---|---|---|---|
| Political knowledge | .526 (.196)** | .499 (.193)** | .112 (.150) | .268 (.247) | .148 (.243) | .263 (.202) |
| Logged exposure to negative advertising (daypart method) | .206 (.139)# | | | .314 (.170)# | | |
| Logged exposure (daypart method) × Political knowledge | .089 (.041)* | | | .027 (.051) | | |
| Logged exposure to negative advertising (typical day) | | .246 (.146)# | | | .223 (.179) | |
| Logged exposure (typical day) × Political knowledge | | .089 (.043)* | | | .006 (.054) | |
| Logged exposure to negative advertising (shows) | | | .094 (.115) | | | .284 (.147)# |
| Logged exposure (shows) × Political knowledge | | | .064 (.035)# | | | .034 (.046) |
| Mobilized by parties | .640 (.147)** | .635 (.148)** | .388 (.145)** | .722 (.193)** | .717 (.193)** | 1.149 (.194)** |
| Strength of party identification | .226 (.130)# | .239 (.129)# | .116 (.120) | .245 (.162)# | .248 (.161)# | .222 (.165) |
| Total negative spots in market | .0000 (.0004) | .0001 (.0004) | .0004 (.0003) | .0004 (.0005) | .0004 (.0005) | .0004 (.0004) |
| Georgia | .183 (.332) | .205 (.335) | .433 (.341) | .583 (.427) | .602 (.430) | 1.013 (.466)* |
| Illinois | .468 (.379) | .485 (.377) | .191 (.383) | 1.010 (.485)* | .969 (.482)* | 1.516 (.519)** |
| Education | .203 (.245) | .206 (.245) | .278 (.213) | .290 (.302) | .299 (.302) | .430 (.272)# |
| African-American | .204 (.389) | .215 (.389) | .340 (.333) | .219 (.497) | .206 (.497) | .532 (.427) |
| Income | .024 (.151) | .009 (.152) | .018 (.138) | .090 (.188) | .103 (.189) | .489 (.190)* |
| Age | .024 (.009)** | .023 (.009)** | .012 (.008) | .004 (.011) | .004 (.011) | .007 (.011) |
| Constant | .270 (.787) | .291 (.780) | 1.985 (.637)** | 3.787 (.920)** | 4.173 (.918)** | 3.133 (.804)** |
| N | 320 | 320 | 340 | 377 | 377 | 416 |
| Adjusted R² | .14 | .13 | .08 | .07 | .07 | .14 |

Table 4 (continued)

Dependent variables: (3) External efficacy; (4) Intention to vote.

| Independent variable | (3) Daypart | (3) Typical day | (3) Shows | (4) Daypart | (4) Typical day | (4) Shows |
|---|---|---|---|---|---|---|
| Political knowledge | .036 (.123) | .038 (.120) | .131 (.097) | .184 (.077)* | .150 (.075)* | .128 (.060)* |
| Logged exposure to negative advertising (daypart method) | .055 (.083) | | | .038 (.053) | | |
| Logged exposure (daypart method) × Political knowledge | .012 (.025) | | | .009 (.016) | | |
| Logged exposure to negative advertising (typical day) | | .106 (.087) | | | .016 (.056) | |
| Logged exposure (typical day) × Political knowledge | | .013 (.027) | | | .000 (.017) | |
| Logged exposure to negative advertising (shows) | | | .021 (.071) | | | .013 (.044) |
| Logged exposure (shows) × Political knowledge | | | .008 (.022) | | | .006 (.014) |
| Mobilized by parties | .076 (.095) | .081 (.095) | .187 (.093)* | .215 (.060)** | .217 (.060)** | .313 (.058)** |
| Strength of party identification | .271 (.079)** | .268 (.079)** | .111 (.080) | .234 (.051)** | .233 (.051)** | .159 (.049)** |
| Total negative spots in market | .0003 (.0002) | .0004 (.0002)# | .0000 (.0002) | .0002 (.0001) | .0001 (.0001) | .0002 (.0001)# |
| Georgia | .122 (.210) | .080 (.211) | .321 (.224) | .180 (.132) | .204 (.133)# | .133 (.140) |
| Illinois | .006 (.238) | .008 (.236) | .115 (.249) | .156 (.150) | .168 (.149) | .172 (.156) |
| Education | .299 (.149)* | .293 (.148)* | .010 (.131) | .056 (.094) | .062 (.094) | .134 (.082)# |
| African-American | .238 (.246) | .233 (.245) | .103 (.205) | .243 (.157)# | .248 (.157)# | .143 (.127) |
| Income | .145 (.094)# | .134 (.094) | .000 (.092) | .061 (.059) | .057 (.059) | .139 (.057)* |
| Age | .007 (.005) | .007 (.005) | .007 (.005) | .007 (.003)* | .007 (.003)* | .009 (.003)** |
| Constant | 1.757 (.457)** | 1.830 (.455)** | 2.848 (.388)** | .965 (.289)** | 1.111 (.288)** | .934 (.244)** |
| N | 373 | 373 | 415 | 372 | 372 | 412 |
| Adjusted R² | .07 | .08 | .02 | .18 | .17 | .21 |

** p < .01, * p < .05, # p < .15 (two-tailed test)

Data from: ANES 1998 pilot study

discussion of the campaign (the first column of results in Table 4), while the negative interaction with political knowledge implies that the effects are strongest on those with the least political knowledge. Simulations based on these estimates (in which all control variables were set at their mean or mode, while knowledge was allowed to vary from its lowest to its highest value and exposure to negative advertising from one standard deviation below to one standard deviation above its mean) suggest that more exposure to negative advertising increases the frequency of discussion of the campaign from about two days a week to three days a week among those lowest in political knowledge. The highly politically knowledgeable, meanwhile, are unaffected, and continue to discuss the campaign roughly three days a week regardless of exposure to negative advertising.16 In other words, the implication would be that exposure to negative advertising benefits those who know the least about politics by making them more like those who know the most: it leads them to discuss the campaign more frequently.

However, the shows-based estimates of exposure imply that negative advertising hinders discussion of the campaign, especially among the least politically knowledgeable. This is not only the reverse relationship but one with entirely different normative implications. Instead of exposure to negative advertising reducing the differences in frequency of discussion, similar simulations suggest that it exacerbates them. According to simulations from this model, at high levels of exposure those lowest in political knowledge discuss the campaign an average of one and a half days a week, compared to slightly over three days for the most politically knowledgeable (i.e., low sophisticates behave less and less like high sophisticates when exposed to more negative advertising).

Discussion and Conclusion

The impact of exposure to political advertising has aroused great interest in academia and beyond; that interest has only increased as advertising campaigns grow more negative (Geer, 2006). Researchers using survey estimates of ad exposure that draw on CMAG data have presented a rosy image of advertising effects. Exposure to advertising, they argue, especially negative advertising, informs, stimulates, and ultimately enhances political participation. They suggest that less politically sophisticated voters may even benefit the most from exposure, gaining information, growing more interested in the campaign, and voting in larger numbers. My findings indicate that we should be far less sanguine about advertising effects because the measures of ad exposure on which these conclusions are based contain error that is both large and nonrandom.

I have demonstrated that while CMAG data offer a remarkably comprehensive picture of the ads that were aired in major television markets in the United States, the estimates of individual ad exposure derived from these data depend on self-reports of television viewing that are riddled with measurement problems.

16 The conditional effects of exposure for high sophisticates, the combination of main effect and interaction, are statistically insignificant.

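The simulation logic described above can be sketched with a simple linear predictor containing an exposure × knowledge interaction. All coefficients below are hypothetical, chosen only to reproduce the pattern reported in the text (low-knowledge respondents moving from about two to three discussion days, high-knowledge respondents flat at about three); they are not the estimates in Table 4.

```python
def predicted_discussion_days(log_exposure, knowledge,
                              b0=2.0, b_exposure=0.5,
                              b_knowledge=0.25, b_interaction=-0.125):
    """Predicted days per week discussing politics from a linear model
    with an exposure x knowledge interaction (hypothetical coefficients)."""
    return (b0 + b_exposure * log_exposure + b_knowledge * knowledge
            + b_interaction * log_exposure * knowledge)

low_k, high_k = 0, 4      # 0-4 political knowledge scale, as coded in the Appendix
low_e, high_e = 0.0, 2.0  # logged exposure one SD below/above its mean (hypothetical)

# Least knowledgeable: discussion rises from 2 to 3 days as exposure grows.
# Most knowledgeable: flat at 3 days, because the negative interaction
# exactly offsets the main effect of exposure at the top of the knowledge scale.
```

The design choice to hold controls at their mean or mode while varying only knowledge and exposure is what isolates the interaction: any remaining difference between the four predicted values comes from the two variables of interest.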
An experiment in which students kept diaries of the television programming they watched and later answered standard ANES survey questions about their television viewing habits revealed not only a pervasive tendency to overestimate in survey responses the time spent watching television, but also large discrepancies in estimates of television watching among different survey methods. I then showed a similar pattern of differences in a large random sample of adults, among estimates of television viewing of the kind commonly used in survey research incorporating CMAG data (the 1998 ANES pilot survey, which asked about television viewing habits with three different question wordings). These discrepancies are not random; indicators of political sophistication, such as political knowledge, are systematically associated with larger discrepancies. As a result, estimates of the relationship between exposure to ads, political sophistication, and political behavior are unstable and hinge on the questions used to gauge television viewing habits.

I have also offered two potential, partial solutions. First, by taking the log of estimated exposure (which has the two advantages of accounting for the decreasing marginal effects of additional exposure to advertising and, by ''compressing'' estimated exposure, reducing the inflated estimates of exposure that appear endemic to these questions), we can diminish those differences in results that are artifacts primarily of question wording. This approach will not eliminate the problem, however. Even using logged estimates, I have shown that researchers can draw sharply contrasting normative inferences about how exposure to negative advertising influences the propensity to talk about politics. My analysis could be used to support one picture in which exposure to negative ads makes low political sophisticates behave like high political sophisticates, but also the opposite view in which exposure to negative ads exacerbates the differences between low and high political sophisticates. In either case, the interpretation is purely an artifact of the questions used to gauge television viewing habits. A second potential solution uses multiple measures to gauge ad exposure. Bartels (1996, p. 2), who is often cited in support of the ''shows'' method of gauging viewing habits, is similarly circumspect about relying on a single set of measures; he suggests that we should weigh the net benefit of investing ''entirely in specific exposure items'' against the advantages of using ''some combination of specific exposure items, general exposure items, and quiz items.''

The implications for research on the impact of advertising are profound. The mixture of findings in experiments and surveys may be the result of much more than basic differences in research design. Survey estimates of the effects of ad exposure are themselves highly unstable. Any attempt to estimate exposure to television should be wary of individual sensitivity to even the most subtle changes in question wording, which can have vast effects on inferences. It is no wonder that carefully conducted studies offer the conflicting interpretations that negative advertising is a boon or a burden to American democracy. Perhaps a combination of approaches, in which we return to multiple measures of ad exposure in order to be more certain of the stability of relationships, while also evaluating effects based on a single common operationalization of exposure, such as logged estimates, will point the way forward.

Acknowledgements Thanks to Barbara Allen, Andrew Seligsohn, and the editors for helpful comments and suggestions.

Appendix

Coding of Variables

Daypart Questions. Question Wording: Thinking about this past week, about how many hours did you personally watch television on a typical weekday [from 6:00 to 10:00 AM / 10:00 AM to 4:00 PM / 4:00 PM to 8:00 PM / 8:00 PM to 11:00 PM / 11:00 PM to 1:00 AM]? Thinking about this past weekend, about how many hours did you personally watch television from 6:00 AM to 7:00 PM? Coding: The total number of weekday hours (multiplied by 5) was combined with the total number of weekend hours to estimate the total number of hours of TV watched per week.

Typical Week Questions (from ANES 1998 Pilot). Question Wording: On a typical weekday, about how many hours of television do you watch during the morning and afternoon? About how many hours of television do you watch on a typical weekday evening? On a typical weekend day, about how many hours of television do you watch during the morning and afternoon? Coding: The total number of weekday hours (multiplied by 5) was combined with the total number of weekend day hours (multiplied by 2).

Show Questions (ANES 1998 Pilot). Question Wording: How many days/times in the past week have you watched [The Today Show / The Rosie O'Donnell Show / daytime soap operas like General Hospital or Days of Our Lives / Jeopardy or Wheel of Fortune / a sports event / local news]? Coding: The sum of all six genres (each genre rescaled from zero to one), divided by six.

Show Questions (Experiment). Question Wording: How many times in a typical week do you watch [Jeopardy / Wheel of Fortune / morning news programs such as Today, Good Morning America, or The Early Show / daytime television shows such as Oprah Winfrey or Jerry Springer / national network news / local TV news shows, either in the late afternoon or early evening]?

Efficacy. Question Wording: Please tell me how much you agree or disagree with these statements ... agree strongly, agree somewhat, neither agree nor disagree, disagree somewhat, disagree strongly, don't know? Public officials don't care what people like me think; Sometimes politics seems so complicated that a person like me can't really understand what's going on; People like me don't have any say about what the government does. Coding: The average response on the 1 to 5 scale.

Number of Days in the Past Week Talked About Politics. Question Wording: How many days in the past week did you talk about politics with family or friends?

Number of Issues Recognize that Candidates Have Talked About. Question Wording: For each issue we would like to know if you think either one of the candidates, both, or neither is talking about these issues (private school vouchers, abortion, gun-related crimes, campaign contributions from PACs, protecting the quality of the air and water, improving discipline in schools). Coding: Total number of issues each candidate is talking about.

Intention to Vote. Question Wording: (Half sample 1) So far as you know, do you expect to vote in the elections this coming November? Would you say that you are definitely going to vote, probably going to vote, or are you just leaning towards
voting? (Half sample 2) Please rate the probability that you will vote in the elections this coming November (on a 0 to 100 scale). Coding (Half sample 1): Not going to vote = 0, leaning = 1, probably = 2, definitely = 3. Coding (Half sample 2): 0–19 = 0, 20–50 = 1, 51–80 = 2, 81–100 = 3.

Contacted by a Party/Candidate. Question Wording: Thus far in the campaign, have you received any mail from a candidate or political party about the election? How about door-to-door campaigning? Thus far in the campaign, have any candidates or party workers made any phone calls to you about the election? Coding: 1 for each contact, for a range of 0 to 3 (mean = .8).

Party Identification. Question Wording: Generally speaking, do you consider yourself to be a Republican, a Democrat, an Independent, or what? [If Republican or Democrat] Would you call yourself a strong [Republican or Democrat] or a not very strong [Republican or Democrat]? [If Independent] Do you think of yourself as closer to the Republican or the Democratic party? Coding: Strong identifiers with either party were coded as 3, those saying they considered themselves a not very strong Republican or Democrat as 2, those claiming to be Independent but closer to one of the parties as 1, and those Independent and closer to neither party, or Other, as 0.

Political Knowledge. Question Wording: Who has the final responsibility to decide if a law is constitutional or not ... is it the President, Congress, or the Supreme Court? Whose responsibility is it to nominate judges to the Federal Courts ... the President, Congress, or the Supreme Court? Do you happen to know which party has the most members in the House of Representatives in Washington? Do you happen to know which party has the most members in the U.S. Senate? Coding: Each correct answer was coded 1, and answers to the four questions were combined to create a 0–4 scale.

Education. Question Wording: What is the highest grade of school or year of college you have completed? Did you get a high school diploma or pass a high school equivalency test (GED)? What is the highest degree that you have earned? Coding: 0 for 12 years or less and no high school diploma, 1 for 12 years or less with a high school diploma or GED, 2 for 13 or more years.

The Validity of the Diary Study

A student sample

A frequent objection to student samples is that college students are not ''real'' people. Indeed, Chang and Krosnick's (2003) research suggests that, as relatively educated individuals, students might be more sensitive to question wording about television viewing habits. However, there is no reason to believe that the differences in recall across the questions should differ between student and adult samples. Moreover, sampling educated students who had been keeping diaries for four weeks, and who were therefore atypically alert to their viewing habits, should, if anything, lessen the discrepancies between the diaries and surveys.

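For concreteness, the aggregation rules for the three viewing measures described in the Appendix above can be sketched as follows. The function names are mine, but the arithmetic is the coding given in the Appendix notes (weekday totals scaled to five days, weekend totals added, and the shows index averaged over six genres rescaled to 0–1).

```python
def daypart_weekly_hours(weekday_daypart_hours, weekend_hours):
    # Hours reported for each weekday daypart, summed and scaled to five
    # weekdays, plus the single weekend total (the 6:00 AM to 7:00 PM block)
    return sum(weekday_daypart_hours) * 5 + weekend_hours

def typical_week_hours(weekday_daytime, weekday_evening, weekend_day):
    # Typical-week format: five weekdays plus two weekend days
    return (weekday_daytime + weekday_evening) * 5 + weekend_day * 2

def shows_index(days_watched_per_genre, max_days=7):
    # Each of the six genres rescaled from 0 to 1, then averaged
    return sum(d / max_days for d in days_watched_per_genre) / len(days_watched_per_genre)
```

Note that the first two functions return weekly hours on comparable scales, while the shows measure is a bounded 0–1 index, which is one reason the raw measures correlate imperfectly with one another.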
Student subjects may alter their television viewing habits to impress an instructor, or simply lie about them to indicate watching less television or more serious programs

The initial instructions students were given strove to limit false reporting by stressing that they should not change their habits, that they would only be noting the times they watched television, not the programs they watched (with the exception of news in the second study), and that the instructor would form no judgments on the basis of how much or when they watched television. Empirically, the results do not suggest social desirability biases in the student diary entries. According to Student Monitor, for example, college students watch an average of 11 hours of television a week.17 The average amount of television subjects watched per week over the four weeks, according to their diaries, was 10.4 hours, with a range from 9.6 hours in Week 3 to 11.0 hours in Week 4. The average number of times their diaries said they watched national and/or local news a week was .8 times each (i.e., less than once a week), which would not impress many instructors. Finally, I asked members of the Spring 2005 class, after they had received course credit for maintaining the diaries, to let me know whether or not they had kept the diaries accurately.18 Roughly 50 percent of the class responded and, without exception, said that their entries had been accurate; some subjects even went to some length to describe the methods by which they had ensured accuracy. I compared the discrepancies between diaries and surveys for this subsample of avowedly accurate diary keepers to those for the rest of the class. One might think that this subsample would show smaller discrepancies, but there was no statistically significant difference in the size of the discrepancies; in fact, if anything they were larger for those subjects who testified to the accuracy of the diaries.

In a four-week period subjects may have grown increasingly weary of keeping the diary, implying growing rather than constant inaccuracy

Again, the consistent reminders subjects received were intended to guard against this, but it is a possibility that can also be tested empirically. If students were increasingly inaccurate in their diary entries, the correlations between the typical viewing habits they gave in the surveys and the earlier weeks of the diaries should be stronger than those with later weeks. However, the correlations were very consistent: .57, .59, .57, and .60 in weeks 1 through 4, respectively.

17 See www.studentmonitor.com

18 There would not have been concerns about future classes with me because I was in the throes of leaving the university.

References

Allen, C. (1965). Photographing the TV audience. Journal of Advertising Research, 5, 2–8.

Ansolabehere, S., Iyengar, S., & Simon, A. (1999). Replicating experiments using aggregate and survey data: The case of negative advertising and turnout. American Political Science Review, 93, 901–909.

Bartels, L. (1996). Entertainment television items on 1995 pilot study. Report to the National Election Studies Board of Overseers.

Berry, W., & Feldman, S. (1985). Multiple regression in practice. Newbury Park: Sage.

Brooks, D. (2006). The resilient voter: Moving toward closure in the debate over negative campaigning and turnout. Journal of Politics, 68, 684–697.

Cacioppo, J., & Petty, R. (1989). Effects of message repetition and position on argument processing, recall, and persuasion. Journal of Personality and Social Psychology, 107, 3–12.

Chang, L., & Krosnick, J. (2003). Measuring the frequency of regular behaviors: Comparing the 'typical week' to the 'past week'. Sociological Methodology, 33, 55–80.

Clinton, J., & Lapinski, J. (2004). 'Targeted' advertising and voter turnout: An experimental study of the 2000 presidential election. Journal of Politics, 66, 69–96.

Finkel, S., & Geer, J. (1998). A spot check: Casting doubt on the demobilizing effect of attack advertising. American Journal of Political Science, 42, 573–595.

Freedman, P., Franz, M., & Goldstein, K. (2004). Campaign advertising and democratic citizenship. American Journal of Political Science, 48, 723–741.

Freedman, P., & Goldstein, K. (1999). Measuring media exposure and the effects of negative ads. American Journal of Political Science, 43, 1189–1208.

Freedman, P., Goldstein, K., & Granato, J. (2000). Learning, expectations, and the effect of political advertising. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

Geer, J. (2006). In defense of negativity: Attack ads in presidential campaigns. Chicago: University of Chicago Press.

Goldstein, K., & Freedman, P. (2002a). Campaign advertising and voter turnout: New evidence for a stimulation effect. Journal of Politics, 64, 721–740.

Goldstein, K., & Freedman, P. (2002b). Lessons learned: Campaign advertising in the 2000 elections. Political Communication, 19, 5–28.

Holbrook, A., Krosnick, J., Visser, P., Gardner, W., & Cacioppo, J. (2001). Attitudes toward presidential candidates and political parties: Initial optimism, inertial first impressions, and a focus on flaws. American Journal of Political Science, 45, 930–950.

Kahn, K. F., & Kenney, P. (1999). Do negative campaigns mobilize or suppress turnout? Clarifying the relationship between negativity and participation. American Political Science Review, 93, 877–890.

Kahn, K. F., & Kenney, P. (2004). No holds barred: Negativity in U.S. Senate campaigns. Upper Saddle River: Prentice Hall.

Kan, M. Y., & Gershuny, J. (2006). Infusing time diary evidence into panel data: An exercise in calibrating time-use estimates for the BHPS. ISER Working Paper 2006-19. Colchester: University of Essex.

Lau, R., & Pomper, G. (2001). Effects of negative campaigning on turnout in U.S. Senate elections, 1988–1998. Journal of Politics, 63, 804–819.

Lau, R., Sigelman, L., Heldman, C., & Babbitt, P. (1999). The effects of negative political advertisements: A meta-analytic assessment. American Political Science Review, 93, 851–875.

Martin, P. (2004). Inside the black box of negative campaign effects: Three reasons why negative campaigns mobilize. Political Psychology, 25, 545–562.

Patterson, T., & McClure, R. (1976). Political advertising: Voter reaction to televised political commercials. Princeton: Citizens' Research Foundation.

Price, V., & Zaller, J. (1993). Who gets the news? Alternative measures of news reception and their implications for research. Public Opinion Quarterly, 57, 133–164.

Ridout, T., Shah, D., Goldstein, K., & Franz, M. (2004). Evaluating measures of campaign advertising exposure on political learning. Political Behavior, 26, 201–225.

Robinson, J., & Godbey, G. (1997). Time for life: The surprising ways Americans use their time. University Park: Pennsylvania State University Press.

Stevens, D. (2005). Separate and unequal effects: Information, political sophistication and negative advertising in American elections. Political Research Quarterly, 58, 413–426.

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.

Wattenberg, M., & Brians, C. (1999). Negative campaign advertising: Demobilizer or mobilizer? American Political Science Review, 93, 891–899.

West, D. (1994). Political advertising and news coverage in the 1992 California U.S. Senate campaigns. Journal of Politics, 56, 1053–1075.