by Magid Abraham and Leonard M. Lodish
Until recently, believing in the effectiveness of advertising and promotion has largely been a matter of faith. Marketing departments might collect voluminous statistics on television program ratings and on coupon redemptions and carefully compare the costs of marketing with total sales. But none of this data measures what is really important: the incremental sales of a product over and above those that would have happened without the advertising or promotion.
Thanks to a new kind of marketing data, that situation is changing. The data correlate information on actual consumer purchases (available from universal-product-code scanners used in supermarkets and drugstores) with information on the kind of television advertising those consumers receive or the frequency and type of promotion events they see. Armed with this consumer products data from a “single source,” managers can measure the incremental impact of marketing-mix variables such as advertising, merchandising, and pricing.
Forward-looking senior managers are beginning to realize that single-source data provide an unparalleled opportunity to increase their company’s marketing productivity—if they know how to take advantage of it. Doing so requires developing new marketing strategies and radically redefining the responsibilities of a company’s sales force.
At the strategic level, managers must evaluate marketing data differently and put incremental sales and profits into management objectives. This means continually examining the appropriate balance between advertising and promotion, based on marginal-productivity analysis.
The search for fresh, innovative television advertising to boost sales of established products should be constant. Until such advertising is found, it may pay to cut back on advertising spending. Using single-source test markets as “lead markets” for national advertising campaigns can substantially lower the risk of this approach.
Managers must also cut back on unproductive promotions in favor of hard-to-imitate promotion events that directly contribute to incremental profitability. And they must use the new data to shape distinctive promotional efforts for specific local markets and key accounts.
In this dynamic marketing environment, the sales force will have a different and extremely important job: to demonstrate to retailers the consumer pull of its company’s advertising and promotion programs, as well as the effect these programs have on retailer profitability. New strategies that benefit both retailer and manufacturer must replace the traditional practice of using advertising and promotion as inducements to carry a product.
Above all, senior managers must throw out much of the conventional wisdom about advertising and promotion that has formed over the years. Replacing these widely held but unsupported beliefs with marketing strategies based on hard data is the key to attaining a new kind of market power.
What’s Wrong with the Conventional Wisdom
Because they have been unable to measure the incremental sales generated by advertising and promotion until now, marketing managers have had to rely on a number of unexamined assumptions. For example, those who believe advertising works also tend to assume that in all cases, more of it is better than less. This assumption is frequently justified by another: that advertising takes a long time—many months or, sometimes, even years—to increase sales.
Another by-product of the traditional lack of data on incremental sales is the common belief that once advertising does start producing sales, its impact is short term. A popular rule of thumb is that if increased advertising spending does not generate enough sales to pay for the incremental expense within a year, then a company shouldn’t implement the advertising.
Finally, many marketing managers will tell you that even if advertising is not directly boosting sales, it still serves an important function. When salespeople can point to a big ad budget, this convinces retailers that the manufacturer supports the product, thus assuring its distribution in the stores.
So too with promotions. Traditionally, the focus has been on gross rather than incremental sales. The conventional wisdom is that a successful promotion is one where a company sells a lot of goods to the trade and that a promotion for an established brand can be used to attract and retain new users of the brand. In fact, promotions have become so popular that they now account for more than 65% of typical marketing budgets.
Our research challenges all of these beliefs. Since 1982, we have been using single-source data to examine the productivity of the marketing dollar spent on advertising and promotions for consumer packaged goods. The results are striking:
- In 360 tests in which the only variable was advertising weight—the amount of television advertising to which consumers are exposed—increased advertising led to more sales only about half the time.
- Analyses of trade promotions for all brands in 65 different product categories suggest that the productivity of promotion spending is even worse. Only 16% of the trade promotion events we studied were profitable, based on incremental sales of brands distributed through retailer warehouses. For many promotions, the cost of selling an incremental dollar of sales was greater than one dollar.
- Judging from our aggregate statistics, managers have been spending too much of their marketing budgets on promotion (in lieu of advertising). Many companies could reduce their total advertising and promotion budgets and improve profitability.
To measure the productivity of television advertising, we use a technique known as a “split cable” market test. About 3,000 households in test markets receive ID cards that household members show when they purchase goods at scanner-equipped supermarkets. These supermarkets typically account for more than 90% of the total volume of all products sold in the area.
The test markets are far enough away from television stations that residents’ only choice for good reception is cable TV. By agreement with the cable company and the advertiser, we intercept the cable signal before it reaches each household and send different advertisements to different households. To test advertising copy, some households receive advertisement A, while others simultaneously receive advertisement B. To test advertising weight, households receive different amounts of advertising for the same brand.
Split-cable tests typically run for one year, and we have conducted them for both new and established products. We control for variables such as past brand and category purchases and statistically adjust the sales data to account for the impact of promotions for the test brand or for competing brands. This instrumented test environment provides the ultimate degree of experimental control and is well suited for isolating the sales effect of advertising.
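At its core, the measurement reduces to comparing adjusted purchasing between matched groups of households. The sketch below illustrates that comparison with hypothetical per-household figures standing in for the adjusted panel data; it is not our actual estimation procedure.

```python
from statistics import mean

def advertising_lift(test_sales, control_sales):
    """Percentage difference in average per-household purchases between
    households receiving the heavier (or new) advertising and the control group.

    Both lists are assumed to hold purchase totals already adjusted for past
    buying behavior and promotion activity, as described above.
    """
    return (mean(test_sales) - mean(control_sales)) / mean(control_sales)

# Hypothetical adjusted purchase totals (units per household over the test year).
heavier_weight = [14.2, 11.8, 15.0, 12.4, 13.6]
normal_weight = [11.9, 10.7, 12.3, 11.1, 12.0]
print(f"Lift from the extra weight: {advertising_lift(heavier_weight, normal_weight):.1%}")
```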
Some of the findings from our 360 split-cable tests conducted over the past decade support traditional assumptions. For example, most people believe that advertising is more effective for new brands than for established brands, and this turns out to be the case. We found that 59% of new-product advertising tests showed a positive impact on sales, compared with only 46% of the tests for established brands. Furthermore, when advertising showed a significant effect on a new product, the increase in sales averaged 21% across all new-product tests.
But in most respects, our findings clearly contradict conventional wisdom. In more than half of the established-brand experiments, increased advertising did not result in more sales.
Nor does advertising take a long time to work. When a particular advertising weight or copy is effective, it works relatively rapidly. Incremental sales begin to occur within six months. The converse of this finding is even more important. If advertising changes do not show an effect in six months, then they will not have any impact, even if continued for a year.
When advertising does boost sales, the extra profits often do not cover the increased media costs—at least in the short term. Company payout analyses are highly sensitive, so we have only partial payout statistics on a subset of our test database. They show that only about 20% of advertising-weight tests pay out for established brands during the first year. For new products, profitable advertising ranges from 40% to 50%, reflecting the higher productivity of advertising spending on new products.
However, the long-term effect of advertising is at least as substantial as its short-term effect. This is the good news to set against the finding that advertising works only about half the time. Even if increased advertising returns only half the money spent over the course of one year, it will break even on average once the long-term effects are taken into account.
We have evaluated the sales effect of advertising over the long term by analyzing 15 market tests up to two years after they ended. In these experiments, the test group viewed more advertisements than the control group during the test year. We then stopped the extra advertising and sent both groups the same amount. Across 15 cases, there was a demonstrable carryover effect. The sales increase for the groups receiving more advertising averaged 22% in the test year, 17% in the second year, and 6% in the third year. Although the carryover effect declined on average, in six cases it actually widened.
More important than the pattern is the magnitude of the carryover. On average, 76% of the difference observed in the test year persisted one year after the advertising increase was rolled back. Over a three-year period, the cumulative sales increase was at least twice the sales increase observed in the test year.
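A back-of-envelope check shows how this arithmetic works. Dividing the reported yearly averages only approximates the 76% figure, which is an average across individual cases, but the pattern is the same:

```python
# Back-of-envelope check of the carryover figures reported above,
# treating the yearly numbers as average sales increases.
test_year = 0.22    # average sales increase during the test year
year_two = 0.17     # average increase in the year after the extra advertising stopped
year_three = 0.06   # average increase two years after

persistence = year_two / test_year
print(f"Carryover into the following year: {persistence:.0%}")  # about 77%

cumulative = (test_year + year_two + year_three) / test_year
print(f"Cumulative lift relative to the test year alone: {cumulative:.2f}x")  # about 2x
```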
Why Most Promotions Lose Money
To evaluate trade promotions, we have developed computer programs that measure the marginal productivity of promotional events.1 Anywhere from 30% to 90% of the time, a consumer product is not on promotion in a particular store. Using sales data from individual stores, the programs compare sales from these nonpromotion weeks with those from promotion weeks. Algorithms then project what the sales of the product would have been during the promotion week if the promotion had not taken place. This provides a baseline against which we can measure the incremental impact of the promotion. The only possible bias is that our programs may overestimate the incremental sales of a particular event since promotions tend to accelerate purchases by consumers. Thus we may mistakenly count purchases borrowed from a later period’s normal sales as incremental sales caused by the promotion.
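The logic of the baseline calculation can be sketched in a few lines. The simple median baseline below is only an illustration, not the actual algorithm behind our programs, but it conveys the idea of projecting nonpromotion-week sales into promotion weeks:

```python
from statistics import median

def incremental_sales(weekly_sales, on_promotion):
    """Estimate incremental units sold during promotion weeks for one store and item.

    weekly_sales -- unit sales, one entry per week
    on_promotion -- True for weeks in which the item was promoted
    """
    # Baseline: what the store would normally sell in a nonpromotion week.
    # A simple median stands in here for the projection our programs actually perform.
    baseline = median(s for s, promo in zip(weekly_sales, on_promotion) if not promo)

    # Incremental sales: promotion-week sales over and above that baseline.
    # Note the bias discussed above: purchases pulled forward from later weeks
    # are counted as incremental, which can overstate the promotion's effect.
    return sum(s - baseline for s, promo in zip(weekly_sales, on_promotion) if promo)

# Hypothetical example: four promotion weeks against a baseline near 100 units per week.
sales = [98, 102, 100, 250, 240, 230, 220, 60, 70, 99, 101]
promo = [False, False, False, True, True, True, True, False, False, False, False]
print(incremental_sales(sales, promo))  # roughly 540 incremental units
```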
At first glance, our finding that only 16% of the promotions studied were profitable may seem surprising. But when you consider the economics underlying promotions, it is easy to see why. Consider the hypothetical example of a brand with very good support from retailers (see the exhibit “The Unprofitable Economics of Trade Promotions”). The brand promotes to the trade at a 15% price discount over a four-week period. Assume that all the stores in the market feature the brand for one week in their weekly newspaper advertising supplement. What’s more, half the stores support the brand with three weeks of in-store display and consumer price reductions, while the other half only reduce the price but for the full four weeks. These are excellent trade-support statistics that would be hard to achieve in reality.
The Unprofitable Economics of Trade Promotions
Despite the ideal conditions of this hypothetical example, the promotion ends up costing the manufacturer 64 cents for each incremental dollar it generates. Unless the product’s gross margin is greater than 64%, the promotion will lose money.
Nevertheless, when we compute the incremental sales generated from this excellent trade activity (also assuming above-average consumer response), the promotion ends up costing 64 cents for each incremental dollar it generates. In other words, unless the product’s gross margin is more than 64%, the promotion will lose money. The reason is that the manufacturer has to sell an extraordinarily high number of cases at the discounted price to cover the normal base sales that would have taken place without the promotion. What’s more, the manufacturer must cover the practice of retailer “forward buying”—accumulating discounted inventory in the warehouse during the time window of the promotion and selling it later at the regular price. In fact, only about 23% of the cases sold on promotion are incremental in this example.
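The 64-cent figure follows from simple arithmetic: the manufacturer pays the discount on every case bought at deal prices, including forward-bought inventory, while only about 23% of those cases are incremental. A rough sketch with hypothetical case volumes and prices:

```python
# Hypothetical sketch of the economics described above. The discount rate (15%)
# and the share of deal cases that are truly incremental (~23%) follow the
# example; the case volume and list price are invented for illustration.
discount_rate = 0.15
list_price = 10.00              # hypothetical price per case
cases_bought_on_deal = 10_000   # everything the trade buys at the discount,
                                # including forward-bought inventory
incremental_cases = 2_300       # only ~23% of deal cases are incremental

promotion_cost = discount_rate * list_price * cases_bought_on_deal
incremental_revenue = list_price * incremental_cases

print(f"Cost per incremental dollar: ${promotion_cost / incremental_revenue:.2f}")
# about $0.65 -- close to the 64 cents in the exhibit. Unless the gross margin
# exceeds that figure, the promotion loses money.
```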
Forward buying helps explain why promotions often have a dramatic—and highly misleading—impact on a manufacturer’s shipments. Typically, a retailer will take in thousands of cases during a promotion. But after a promotion, shipments will halt for several weeks while the retailer depletes its forward-buying inventory. Normally, that inventory has no benefit to the manufacturer. On the contrary, it substantially raises the costs of promotions and makes them unprofitable.
Another disadvantage of promotions is that unlike advertising, they almost never have a positive long-term effect on established brands. Promotions for new products may be quite productive because they encourage consumers to try an unfamiliar product. But the probability that consumers who buy an established brand on promotion will purchase it the next time is about the same as their likelihood of doing so even if no promotion had taken place. In fact, promotions for established brands usually attract either current users who would buy the product anyway or brand switchers who bounce between brands on deal.
Another hidden cost of promotions is competitive escalation. The advantage of running an extra promotion or offering higher incentives is usually short-lived. Competitors retaliate with promotions of their own, neutralizing whatever incremental volume is generated. The most insidious escalation is that of trade promotion discounts. When retailers are offered higher discounts once, they come to expect them regularly.
The flip side is de-escalation—a cycle where competitors refrain from undercutting each other’s profits through promotions. Discontinuing a money-losing promotion not only stops a manufacturer’s losses; it also sends a de-escalation signal, which, if heeded by competitors (and chances are higher if the manufacturer’s brand is a market leader), ends up improving profits even more. However, if de-escalation doesn’t take place, then cutting promotions will cost sales and market share even as it increases profits. Only if de-escalation works can profits be enhanced without losing sales or share.
Fact-Based Strategies and Tactics
With single-source data, managers can balance investments in advertising and promotion to improve the contribution of each to long-term profit. Intelligent use of the data can help the ad manager determine not only when and where to increase spending but also when and where to decrease it.
The idea is to start with a zero budget and allocate money incrementally to various advertising and promotion options. The goal is to identify the option that marginally contributes most to the long-term profitability of the product. Allocations should continue on this incremental basis until all options that provide a suitable return on the incremental investment are found.
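In spirit, this is a greedy marginal-allocation procedure. The sketch below illustrates the idea with hypothetical options, productivity curves, and a one-dollar hurdle rate; in practice, the marginal-productivity estimates would come from single-source tests rather than assumed curves.

```python
def allocate_budget(options, total_budget, step, hurdle):
    """Greedy marginal allocation: give each increment of budget to the option
    with the highest estimated long-term profit per incremental dollar,
    stopping when no option clears the hurdle rate.

    options -- dict mapping an option name to a function that returns the
               marginal profit per extra dollar at a given level of spending
    """
    spend = {name: 0.0 for name in options}
    remaining = total_budget
    while remaining >= step:
        marginal = {name: curve(spend[name]) for name, curve in options.items()}
        best = max(marginal, key=marginal.get)
        if marginal[best] < hurdle:
            break  # no remaining option earns a suitable return on the next increment
        spend[best] += step
        remaining -= step
    return spend

# Hypothetical diminishing-returns curves (profit per extra dollar spent).
options = {
    "new_copy_tv": lambda s: 1.8 - s / 2_000_000,      # productive at first, then declining
    "trade_promotion": lambda s: 0.9 - s / 1_000_000,  # never clears a $1.00 hurdle
}
print(allocate_budget(options, total_budget=3_000_000, step=100_000, hurdle=1.0))
```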
Since advertising doesn’t always work, the first challenge is to maximize the chances of getting productive campaigns. Advertising managers should increase spending as long as a particular campaign remains productive and cut back as soon as market tests show its productivity declining significantly. Meanwhile, they should constantly search for new, more compelling advertising and test it against the old.
For new products, advertising can provide significant help when it fulfills its primary role of communicating product news. Increasing weight behind effective new-product advertising is a productive strategy. Because new-product advertising primarily influences trial, which may lead to repeat purchases, its effectiveness is likely to be long term. The combination of a successful new product and successful advertising is rare. When this happens, it is no time for skimping.
To determine whether a particular new-product advertisement is working, test it at different weight levels in test markets before the national rollout. If the new product sells as well in those groups with low exposure as in those with high exposure, then heavy spending is not necessary. Conversely, if the higher weight groups try the product faster or more frequently, then higher levels of advertising make sense—if the company considers the long-term value of the new triers greater than the advertising cost. Thus testing “How high is up?” is an important tactic for new products.
Once new-product advertising has generated trials and positioned the new product in the market, continuing with the same large advertising budgets may not be necessary. In fact, without compelling new copy, approximately one-half of established-brand advertising does not produce any incremental sales. On the other hand, fresh copy for established products can prove extremely productive. Positive advertising effects on sales will continue long after the advertising has stopped, generally for at least one year.
These findings imply a very different form of “pulsing” for many established products. Current practice is to pulse in short bursts of two to four weeks, on and off, using the same advertising each time. We would recommend pulses of at least six months, carried out over several years and using different advertising campaigns.
When advertising cannot demonstrate that it is incrementally contributing to sales of an established product (as shown by tests comparing the current advertising level with lower budgets), cut it back to some lower maintenance level—perhaps even to zero. Do not increase spending until a new campaign has demonstrated greater productivity. It is possible to estimate the likely incremental effect of a new campaign by showing both the old campaign at the old weight and the new campaign at several different weights to matched groups in test markets.
Having identified an effective new campaign, a company should run it at a high level nationally until it no longer shows any incremental sales effect, measured by comparing it with no advertising in a test market. As soon as this new campaign’s incremental sales effect stops, the advertiser should cut it back until yet another effective campaign can be developed.
Because of the risk associated with radical decreases in advertising, an even safer approach is to use the single-source test markets as “lead markets” for national advertising. For example, conduct a six- to nine-month test comparing a lower advertising weight with the current national weight. If the lower weight does not harm sales in the test markets, implement it nationally. In the test markets, however, continue sending the “normal” weight advertising to the group that has been receiving it. That way, if sales to the test households exposed to the lower advertising weight begin to decline compared with sales to normal-weight households, the national advertising budget can be immediately returned to the higher levels.
This strategy gives the decision maker a cushion so that decreasing national advertising poses little risk. Should the original decision prove to be a mistake, the test markets will signal the problem early, giving the company some six to nine months to return the national campaign to normal levels before national sales begin to decline. Of course, a continuous search for new and more effective campaigns should occur simultaneously with this lower advertising.
Companies can use similar techniques to identify productive promotions. In promotions as in advertising, there is a premium on ingenuity and creativity. An effective promotion idea can be three or four times as efficient as the typical prior promotion. A company should spend significant resources to develop creative, hard-to-imitate promotion events, then use single-source data to test the idea. Not all ideas will make it past the test, but those that do will enhance profits. And depending on diminishing returns and competitive response, a company may be able to use the new event or idea more than once, helping further to amortize the investment in promotion development.
Finally, marketing managers can also apply the same analytical concepts to promotion and advertising decisions for particular local areas and for a manufacturer’s key accounts—if they use the single-source data carefully. For example, the data can provide market-by-market estimates of promotional response and retailer support that may offer insights for allocating promotion funds and making necessary tactical changes.
The exhibit “Customizing Promotions for Local Markets” divides geographic markets according to their levels of promotion response and trade-promotion support for a particular product, then summarizes suggested actions. We index each market to national averages for the number of weeks (weighted by store volume) that the brand was on some type of promotion, as well as by the markets’ average response (incremental sales per week of feature or display activity) and the weeks the brand was on price reduction only and not supported by feature or display (“unsupported price reductions”).
Customizing Promotions for Local Markets
The numbers index promotion activity and consumer response in local markets to the national average (= 100) and suggest ways to improve future promotions in each market.
The decision rules behind these suggested actions are admittedly somewhat crude, but they point management in a generally more profitable direction. Those markets with above-average unsupported price reductions might need greater featuring and display support from the retailer. Those low in promotion response probably require higher quality promotions—larger newspaper features, say, or better display locations.
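Mechanically, the indexing behind the exhibit is straightforward. A brief sketch with hypothetical market figures and field names:

```python
def index_to_national(markets, national):
    """Express each market's promotion statistics as an index to the national average (= 100)."""
    return {
        market: {metric: round(100 * value / national[metric]) for metric, value in stats.items()}
        for market, stats in markets.items()
    }

# Hypothetical figures: volume-weighted weeks on promotion, incremental sales per
# week of feature or display activity, and weeks of unsupported price reductions.
national = {"promo_weeks": 20, "response": 500, "unsupported_weeks": 6}
markets = {
    "Market A": {"promo_weeks": 26, "response": 350, "unsupported_weeks": 9},
    "Market B": {"promo_weeks": 14, "response": 700, "unsupported_weeks": 3},
}
print(index_to_national(markets, national))
# Market A indexes above 100 on activity but below 100 on response -- a candidate
# for higher quality promotions; Market B is the reverse.
```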
Similarly, companies can use single-source data to target key accounts and isolate mutually beneficial situations for the retailer and the manufacturer. The exhibit “Identifying Mutually Beneficial Promotions” shows how Jewel Food Stores could have almost doubled its profits from a particular promotion event (here called the XYZ event) by adding one more week of feature and display. The result would be good for the manufacturer as well because incremental cases sold would have increased from 933 to 1,633 without any additional investment.
Identifying Mutually Beneficial Promotions
By comparing the results of the XYZ event with single-source data from other promotions in the same market, we determined that with only one more week of feature and display, Jewel would have increased its profit by $18,688, and the manufacturer would have sold an extra 700 cases.
A New Role for the Sales Force
One final caveat about these new marketing strategies merits discussion. Single-source data measure the effect of advertising and promotions on consumers, not on the distribution of a given product by retailers. One of the traditional uses of both advertising and promotion has been to convince retailers that the manufacturer supports the product and that the brand will pull consumers into the stores. Thus if a company cuts back on unproductive advertising between pulses or discontinues ineffective promotions, it runs the risk that retailers will interpret the move as a lack of support and therefore cut back on distribution.
To avoid this predicament, the sales force has a new and extremely important job to do. It must communicate to retailers that unproductive advertising with no consumer pull has no value to the retailer or the manufacturer. Likewise, smart retailers will begin demanding hard evidence on the consumer pull of advertising instead of merely being impressed with large media budgets.
The role of the sales force in promotion also will change. Instead of viewing trade promotions as a competitive payment to make sure the brand has distribution, sales personnel have to demonstrate to retailers how specific promotions will increase their incremental profits.
Taking advantage of this opportunity will require salespeople to have greater analytical abilities than they have needed in the past. In effect, they will have to become marketers in partnership with retailers. The retailers, like the manufacturers, now know what items are moving because they are seeing the same single-source data. As more retailers become sophisticated users of this information, it will be more difficult for manufacturers to get them to execute promotion programs that are not in retailers’ best interests. Over time, there will be a bigger and bigger productivity difference between simply giving a retailer a price discount and hoping for the best and giving a price discount to support a well-documented, mutually beneficial promotion program.
1. For the technical details of these programs, see our “Promoter: An Automated Promotion Evaluation System,” Marketing Science, Spring 1987, p. 101; and “Promotionscan: A System for Improving Promotion Productivity for Retailers and Manufacturers Using Scanner Store and Household Panel Data,” Wharton School Marketing Department Working Paper (Philadelphia: University of Pennsylvania, February 1990).