New Product Sales Forecasting
The development and introduction of a new product is an inherently risky venture.
Many corporate executives’ careers have foundered on the rocks and shoals of new product launches. In an effort to reduce the risks associated with new products, the forecasting of year-one sales has become an established practice within the marketing research industry.
Despite many claims of high precision, forecasting sales of new products is fraught with risks, and estimates can often be off the mark. The risk of great error is particularly high for new products that represent a paradigm shift; that is, something fundamentally new and different. Also, the forecasting of new durable goods and of services is more daunting than the forecasting of new consumer packaged goods. The goal of this article is to take a bit of the mystery out of the methods used to derive year-one sales forecasts for new consumer packaged goods (durable goods and services will be addressed in subsequent articles).
Typically, the objective is to predict year-one “depletions”; that is, the actual volume of goods that consumers will buy in retail stores (hence, the use of the term “volumetric forecasting” as a description of new product sales forecasting). The term “depletions” excludes new products in the factory, in warehouses, on trucks, or in the retailer’s distribution system (i.e., all inventory build is excluded). Most often, these sales estimates are in retail dollars, not the manufacturer’s selling prices. So, after receiving a retail depletions estimate of new product sales, the manufacturer must discount the retail sales numbers to arrive at the manufacturer’s actual sales (or actual depletions) in year one.
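As a hypothetical illustration of that final conversion step (the depletions figure and the 30% average retail margin below are invented for this example, not drawn from any actual forecast), the arithmetic is simply:

```python
# Hypothetical conversion of a year-one retail depletions forecast
# into the manufacturer's own sales estimate.
retail_depletions = 50_000_000   # forecast retail dollars, year one (assumed)
retail_margin = 0.30             # retailer's average margin (assumed)

manufacturer_sales = retail_depletions * (1 - retail_margin)
print(f"${manufacturer_sales:,.0f}")  # → $35,000,000
```

In practice the discount would vary by retailer and trade terms, but the principle is the same: the retail depletions number must be stepped down to the manufacturer's selling prices.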
The first (and perhaps most common) method of forecasting new product depletions is historical review. If a company has introduced similar new products into similar markets in the past, these histories can often be good predictors of future outcomes. If a company has no such history, then histories of similar new products introduced by competitors or other companies can serve as historical guidelines to help derive a new product sales forecast. The historical approach has limitations, however. History is not always a good predictor of the future; it is often difficult to find accurate historical data relevant to the new product under consideration; and what other companies have been able to do does not necessarily tell us what the next company can do. That is, different companies have varying levels of ability when it comes to successfully introducing new products. Lastly, histories of two new products may look the same on the surface, but actually be driven by completely different underlying variables (trial rates, repeat purchase rates, purchase cycle lengths, etc.).
A second method of forecasting new product success is the test market. The new product is developed and introduced into one or more test markets. Actual store sales and market shares are tracked via Nielsen or IRI or, in some instances, data from retailers. Often this sales tracking is supplemented by survey-based tracking of consumer awareness, trial, usage, and repeat purchase patterns. In some instances, consumer diary panels or purchase panels are used to track consumer trial, repeat purchases, and share.
The test-market approach has much to offer. It is a real-world experiment. No variables are excluded from the test. Success in test markets is highly predictive of success nationwide (especially if multiple test markets are used). The test market gives the manufacturer the opportunity to work the “bugs” out of the new product, its packaging, its shipping, its display on store shelves, etc., so that a national rollout later is likely to be relatively trouble-free.
The greatest downside of test markets is the risk that competitors will read the test market with their own marketing research and be ready to “go national” at about the same time you are. Another risk is the possibility that competitors will take marketing actions to distort or destroy the reliability of the test market. For example, a major competitor might run a deep discount promotion so that category users will stock up with the competition’s product, and this could effectively block or delay trial of your new product.
A third method of forecasting is before-after retail simulation. A sample of target-audience consumers is presented with simulations showing the in-store retail environment and a realistic display of all the major brands in the category. The consumer is asked to choose or “purchase” brands as they normally would, or as they might on their next 10 purchases. The new product is missing from the simulation during this “before” measurement. Then the consumer is exposed to the new product concept and/or advertising that conveys the new product concept. Later, the consumer sees the exact same simulated display (except now it contains the new product) and is asked to make the same choices or purchase decisions as before. So, we have market shares for all brands in the category before the new product is introduced, and the same data after the new brand is introduced.
The market share achieved by the new brand is a predictor of the brand’s “instantaneous sales rate” at retail (or “running sales rate”) at the end of year one, once discounted by predicted awareness, trial, and distribution levels for the new brand (and assuming the product itself is at parity with major competitors). Note, however, that year-one trial volume is only partly captured by this forecast, and the estimate is not of total year-one sales but of the “going rate” at the end of year one.
This approach tends to overstate the true market potential of a new product, so the results must be discounted to compensate for this tendency. With a few other adjustments (discounting for expected advertising awareness, distribution levels, pricing resistance, and adjusting for trial volume, rate of share build by month, etc.), a reasonably good estimate of year-one depletions can be derived. This method is conceptually sound and can yield good estimates of year-one sales volume. With the inclusion of 3D simulation to create realistic in-store purchase experiences, this method will become more accurate and play a more important role in the future.
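A rough sketch of this discounting logic, with every parameter value invented for illustration (real systems apply more adjustments, and differently), might look like this:

```python
# Sketch of discounting a simulated "after" market share by predicted
# awareness and distribution, plus an overstatement adjustment, to
# estimate the brand's running share at the end of year one.
# All numbers below are assumed for illustration only.
simulated_share = 0.12        # new brand's share in the "after" simulation
awareness = 0.55              # predicted year-end advertising awareness
distribution = 0.70           # predicted distribution (share of stores stocking)
overstatement_factor = 0.75   # discount for the method's tendency to overstate

adjusted_share = simulated_share * awareness * distribution * overstatement_factor
print(f"Adjusted end-of-year share: {adjusted_share:.1%}")
```

The multiplicative form reflects the logic in the text: the simulated share only materializes to the extent that consumers become aware of the brand and can find it on the shelf, and even then the raw simulation result is discounted.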
A fourth technique for forecasting new product sales is the “normative” approach. A database of historical norms for new products is assembled (trial rates, repeat purchase rates, purchase cycles, and so on) by product category. A mathematical model sits atop the normative database and includes the marketing plan variables that might cause a new brand to perform above, at, or below the norms.
For example, if the new brand has outstanding television advertising (based on advertising research), then the model would bump the trial rate higher within the normative distribution. If an in-home usage product test has proven that the new brand is better than its major competitors, then the model would adjust the repeat purchase rate higher within the distribution of historical norms. Therefore, based on inputs from concept testing, product testing, and copy testing, the model decides where (within the normative possibilities) this new brand will fall. Then the model simply combines all of this into predicting a trial curve and a repeat purchase curve, which yields a year-one forecast of sales or retail depletions. This method can produce accurate forecasts, depending upon the accuracy of the normative data, the quality of the model, and the accuracy of the marketing inputs.
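The core idea can be shown in a simplified sketch (the norm ranges, scores, and interpolation rule below are illustrative, not any vendor’s actual model):

```python
# Simplified sketch of the normative approach: category norms define a
# range for each key variable, and research inputs (concept, product,
# and copy testing) position the new brand within that range.
# All norms and scores below are invented for illustration.
norms = {
    "trial_rate": (0.05, 0.10, 0.18),    # (low, median, high) category norms
    "repeat_rate": (0.20, 0.35, 0.50),
}

def position_in_range(low: float, median: float, high: float,
                      score: float) -> float:
    """Interpolate within the norm range. score is in [-1, 1]:
    0 = median norm, +1 = high norm, -1 = low norm."""
    if score >= 0:
        return median + score * (high - median)
    return median + score * (median - low)

# Outstanding copy test bumps trial above the median norm;
# a parity product test leaves repeat at the median.
trial = position_in_range(*norms["trial_rate"], score=0.5)
repeat = position_in_range(*norms["repeat_rate"], score=0.0)
print(f"Predicted trial rate: {trial:.2f}, repeat rate: {repeat:.2f}")
```

The model would then feed these positioned rates into trial and repeat purchase curves to build the year-one forecast, as described above.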
The last method is the traditional awareness-trial-repeat purchase model. We might refer to these as the orthodox, legacy systems of new product forecasting. Most of these legacy models were developed 30 or so years ago, when the world of marketing was a much different place. Whether these old forecasting models remain viable in today’s media and marketing environment is a fair question, but that is a debate for another day.
Here is how the traditional model works. Awareness is forecast based on year-one advertising and media plans. All media advertising (including print, radio, Internet, etc.) is converted into television GRP (gross rating point) equivalents. These GRP equivalents are fed into a mathematical model to forecast awareness week by week during the brand’s first year of life. The model converts this awareness into a cumulative trial rate, week by week, based on predicted distribution levels, promotional plans, and inputs from concept research and advertising testing.
Samples of the new product are placed in homes for consumers to use under normal conditions for a period of days or weeks (i.e., an in-home usage product test). The results of the product test are used to predict the repeat purchase curve and the purchase cycle. Then the model combines the trial predictions and the repeat purchase curve into a forecast of first-year retail depletions. These types of models, in the hands of experienced analysts working in familiar product categories, can often generate accurate new product forecasts for consumer packaged goods (within 10% to 15% of actual depletions, plus or minus).
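A stripped-down sketch of the trial-plus-repeat arithmetic follows. All inputs are invented for illustration, and real models work week by week with far more elaborate curves; this shows only how the pieces combine into a depletions estimate:

```python
# Minimal awareness-trial-repeat sketch (illustrative inputs, not the
# proprietary models described in the text). Triers buy once; a share
# of them repeat, at some average number of repeat purchases.
target_households = 40_000_000
awareness = 0.60             # predicted year-one advertising awareness
trial_given_aware = 0.25     # trial rate among aware households (concept test)
distribution = 0.70          # average distribution over the year
repeat_rate = 0.40           # share of triers who repeat (product test)
repeats_per_repeater = 2.5   # average repeat purchases in the rest of year one
retail_price = 3.49          # retail price per unit

triers = target_households * awareness * trial_given_aware * distribution
trial_units = triers                                  # one unit per trier
repeat_units = triers * repeat_rate * repeats_per_repeater
year_one_depletions = (trial_units + repeat_units) * retail_price
print(f"Year-one retail depletions: ${year_one_depletions:,.0f}")
```

In a real system, awareness, trial, and distribution are forecast as weekly curves rather than single annual averages, and the repeat purchase curve decays over successive purchase cycles; but the forecast is still, at bottom, trial volume plus repeat volume priced out at retail.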
None of the foregoing models or systems of forecasting is perfect. All are based on hidden assumptions and include human judgment, and if these underlying assumptions or judgments are off the mark, then the corresponding forecast can be inaccurate (that is, error greater than 15%). Nevertheless, volumetric forecasting is typically accurate enough to be a valuable ally in new product decision-making, and is very economical compared to the cost of a major test market, or the cost and embarrassment of new-product failure in the marketplace.
Author
Jerry W. Thomas
Chief Executive Officer
Jerry founded Decision Analyst in September 1978. The firm has grown over the years and is now one of the largest privately held, employee-owned research agencies in North America. The firm prides itself on mastery of advanced analytics, predictive modeling, and choice modeling to optimize marketing decisions, and is deeply involved in the development of leading-edge analytic software. Jerry plays a key role in the development of Decision Analyst’s proprietary research services and related mathematical models.
Jerry graduated from the University of Texas at Arlington, earned his MBA at the University of Texas at Austin, and studied graduate economics at SMU.
Copyright © 2020 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.