
Belch: Advertising and Promotion, Sixth Edition
VI. Monitoring, Evaluation, and Control
19. Measuring the Effectiveness of the Promotional Program

environment as well as one in which teenagers were comfortable (teens are online 50 percent more than adults). The website was supported by an economical media plan designed to drive traffic to the site. Radio (95 percent of teens listen 10 or more hours a week) and other Internet venues received a lot of the resources. TV was used both in a more traditional sense (e.g., on MTV) and as a direct-response medium to generate leads to be followed up on by Navy personnel.

In the first few months of the campaign, the Life Accelerator had more than 200 million hits, which were in turn referred to recruitment tables. Potential prospects spent 50 percent more time at the site and viewed more pages, and recruitment offices reported an increase in candidates who came in with Life Accelerator pages in hand. Overall, 225,318 leads were generated, 13 percent above the goal. The goal of 53,000 enlistments was easily achieved.

Reasons Not to Measure Effectiveness

Companies give a number of reasons for not measuring the effectiveness of advertising and promotion strategies:

1. Cost. Perhaps the most commonly cited reason for not testing (particularly among smaller firms) is the expense. Good research can be expensive, in terms of both time and money. Many managers decide that time is critical and they must implement the program while the opportunity is available. Many believe the monies spent on research could be better spent on improved production of the ad, additional media buys, and the like.

While the first argument may have some merit, the second does not. Imagine what would happen if a poor campaign were developed or the incentive program did not motivate the target audience. Not only would you be spending money without the desired effects, but the effort could do more harm than good. Spending more money to buy media does not remedy a poor message or substitute for an improper promotional mix. For example, one of the nation's leading brewers watched its test-market sales for a new brand of beer fall short of expectations. The problem, it thought, was an insufficient media buy. The solution, it decided, was to buy all the TV time available that matched its target audience. After two months sales had not improved, and the product was abandoned in the test market. Analysis showed the problem was not in the media but rather in the message, which communicated no reason to buy. Research would have identified the problem, and millions of dollars and a brand might have been saved. The moral: Spending monies to gain increased exposure to the wrong message is not a sound management decision.

2. Research problems. A second reason cited for not measuring effectiveness is that it is difficult to isolate the effects of promotional elements. Each variable in the marketing mix affects the success of a product or service. Because it is often difficult to measure the contribution of each marketing element directly, some managers become frustrated and decide not to test at all. They say, "If I can't determine the specific effects, why spend the money?"

This argument also suffers from weak logic. While we agree that it is not always possible to determine the dollar amount of sales contributed by promotions, research can provide useful results. As demonstrated by the introduction and examples in IMC Perspective 19-1, communications effectiveness can be measured and may carry over to sales or other behaviors.

3. Disagreement on what to test. The objectives sought in the promotional program may differ by industry, by stage of the product life cycle, or even for different people

© The McGraw-Hill Companies, 2003

Not all of the Ogilvy Awards are for services or nonprofits (in the chapter we will discuss some winners in the other categories), and not all of the winners employ the same research methods or media strategies. All do, however, use research to guide their strategies, as well as to measure their effectiveness.

What is really interesting is that the Ogilvy Awards, started decades ago as an advertising award in which the ad campaign was based on research, have now become much broader in scope. The above two examples demonstrate that the award is really now more of an IMC award, with an emphasis on the overall campaign, whether or not advertising is a major component.

Sources: "2002 ARF David Ogilvy Research Award Winners," www.arfsite.org, Nov. 4, 2002; "The 2002 ARF David Ogilvy Research Awards," Journal of Advertising Research, July–August 2002 (special section).

