Choice Modeling for New Product Sales Forecasting


Over the last decade, choice modeling has gained credibility as a viable technique for forecasting sales of new products and new services.


The movement toward choice modeling took wings in 2000, when American economist Daniel McFadden was awarded the Nobel Prize in Economics for his pioneering work in applying choice modeling (discrete choice analysis) to economic decision making. Choice modeling's popularity has also been fueled by software advances, more powerful computers, and improvements in 3D-animation technology.

But why is choice modeling increasingly applied to new product sales forecasting? The main reason is that choice modeling provides more and better information than legacy forecasting systems. Consumers must make brand “choices” when they go to the grocery store, the drugstore, or the car dealership. Choice modeling makes it possible to simulate this shopping and decision-making process, with all of the important variables carefully controlled by rigorous experimental design, so that the new product’s sales revenue can be accurately predicted. Equally important, choice modeling helps marketers understand the many variables that underlie that forecast.

An example might make this easier to understand. Let’s suppose that a new brand of peanut butter is ready to enter the market, and the manufacturer wishes to forecast first-year sales (i.e., retail depletions) before moving ahead. The product formulation is ready, but there are unresolved marketing issues: four package designs, three pricing levels, five vitamin additives, and four claim possibilities for the package. These unresolved issues add up to 240 unique possibilities (4 x 3 x 5 x 4 = 240). This is where choice modeling comes to the rescue. By testing only a small subset of these possibilities, chosen according to an experimental design, choice modeling permits the results for all 240 possible combinations of variables to be accurately estimated.
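As a rough illustration of the combinatorics (the attribute names and levels below are invented for this sketch; a real study would generate the fraction with specialized design software, not by simple slicing):

```python
from itertools import product

# Hypothetical attribute levels for the peanut-butter example
packages = ["A", "B", "C", "D"]            # 4 package designs
prices   = [2.99, 3.49, 3.99]              # 3 pricing levels
vitamins = ["none", "A", "D", "E", "B12"]  # 5 vitamin additives
claims   = ["c1", "c2", "c3", "c4"]        # 4 package claims

# Full factorial: every combination of every level
full_design = list(product(packages, prices, vitamins, claims))
print(len(full_design))  # 4 * 3 * 5 * 4 = 240

# A naive fraction for illustration only: every 10th combination.
# Real choice studies select the subset so that attribute levels are
# balanced and uncorrelated (orthogonal or D-optimal designs).
fraction = full_design[::10]
print(len(fraction))  # 24 profiles actually shown to respondents
```

The point of the experimental design is exactly this leverage: respondents see only the small fraction, yet the model can estimate results for all 240 combinations.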

Here’s how it works. Once the experimental design is determined, each respondent sees and responds to a number of shopping scenarios. Think of a scenario as one shopping trip. The whole shopping experience can be simulated online via 3D animation, or just the “shelf set” itself can be simulated by 3D animation. Each participant sees a typical retail display, with all of the products in a category shown (the new product plus the major competitive brands). The respondent is asked to look at the shelf display and indicate how many of each brand she is likely to buy in the next week or next 30 days. That completes one scenario. Then, the online “shelf set” systematically changes—prices change, claims change, package designs change, etc. The respondent is then asked to shop the peanut butter category again and choose exactly what she would buy in the next week or next 30 days, given the new shelf set. This completes the second scenario, and the process continues. Each respondent typically completes 6 to 10 scenarios, and in each scenario the marketing variables are different.
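The data-collection flow above can be sketched in a few lines (everything here is hypothetical and greatly simplified; in practice the prices and claims shown in each scenario come from the experimental design, not from random draws):

```python
import random

# Hypothetical shelf set: the new brand plus major competitive brands
BRANDS = ["NewBrand", "Jif", "Skippy", "PeterPan", "StoreBrand"]

def build_scenario(price_levels, claims, rng):
    """One shopping trip: every brand gets a price; the new brand gets a claim."""
    return {
        brand: {"price": rng.choice(price_levels),
                "claim": rng.choice(claims) if brand == "NewBrand" else None}
        for brand in BRANDS
    }

def scenarios_for_respondent(n_scenarios, rng):
    """Each respondent shops the category several times, attributes varying each trip."""
    return [build_scenario([2.99, 3.49, 3.99], ["c1", "c2", "c3", "c4"], rng)
            for _ in range(n_scenarios)]

rng = random.Random(42)
tasks = scenarios_for_respondent(8, rng)  # typically 6 to 10 scenarios
print(len(tasks))
```

For each scenario, the respondent's recorded answer would be the quantity of each brand she would buy in the next week or 30 days.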

Since all of the marketing variables are carefully manipulated following an experimental design, the predicted market share (and sales forecast) for the new peanut butter can be calculated for all 240 combinations. The equations that underlie the model are then used to build a forecasting simulator, so that a research analyst or brand manager can play "what if" games by changing one or more marketing variables, to see the effects on market share (and year one sales).
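The equations underlying such a model are typically multinomial-logit share equations. A bare-bones "what if" sketch follows; the brand constants and price coefficient are invented for illustration, whereas a real model estimates them from the respondents' choices:

```python
import math

# Invented part-worth utilities: a brand constant plus a price coefficient
BRAND_CONSTANT = {"NewBrand": 0.2, "Jif": 0.5, "Skippy": 0.4, "StoreBrand": 0.0}
PRICE_COEF = -1.1  # utility falls as price rises

def predicted_shares(prices):
    """Multinomial logit: share_i = exp(U_i) / sum_j exp(U_j)."""
    utilities = {b: BRAND_CONSTANT[b] + PRICE_COEF * p for b, p in prices.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    return {b: math.exp(u) / denom for b, u in utilities.items()}

# "What if" game: cut the new brand's price and watch its share respond
base = predicted_shares({"NewBrand": 3.49, "Jif": 3.29, "Skippy": 3.19, "StoreBrand": 2.79})
cut  = predicted_shares({"NewBrand": 2.99, "Jif": 3.29, "Skippy": 3.19, "StoreBrand": 2.79})
print(round(base["NewBrand"], 3), round(cut["NewBrand"], 3))
```

A forecasting simulator wraps equations like these behind an interface, so an analyst can change any marketing variable and immediately see the predicted share move.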

Now, this elementary choice model is not quite ready for volumetric forecasting. Several variables must first be added:

  • Product quality. The quality or performance of the new product itself largely determines the repeat purchase rate, so it must be incorporated into the choice model based on objective product-testing data.
  • Distribution build. The percent of stores that will sell the new product month by month after introduction, weighted by store sales volume, is a major factor in the success of new products.
  • Awareness build. How fast will awareness of the new brand grow during its first year, and what level will it attain by the end of year one?
  • Calibration. The final step is calibrating the choice model to actual category size, trends, and market shares.

With these enhancements, the choice model is ready for sales forecasting. A number of additional refinements to the model are possible, but these extra bells and whistles don’t add much predictive power.
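Putting those pieces together, a heavily simplified year-one volume calculation might look like the following. All numbers are invented, and real volumetric models treat trial and repeat purchasing separately and calibrate against actual category sales; this only shows how awareness and distribution builds discount the model's equilibrium share month by month:

```python
def year_one_volume(category_units_per_month, equilibrium_share,
                    awareness_by_month, distribution_by_month):
    """Scale the model's predicted share by the fraction of shoppers who are
    aware of the brand and can find it on the shelf, month by month."""
    return sum(category_units_per_month * equilibrium_share * aw * dist
               for aw, dist in zip(awareness_by_month, distribution_by_month))

# Invented builds: awareness and weighted distribution grow over 12 months
awareness    = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.34, 0.38, 0.41, 0.44, 0.46, 0.48]
distribution = [0.20, 0.35, 0.45, 0.55, 0.62, 0.68, 0.72, 0.75, 0.78, 0.80, 0.82, 0.83]

units = year_one_volume(category_units_per_month=10_000_000,
                        equilibrium_share=0.06,
                        awareness_by_month=awareness,
                        distribution_by_month=distribution)
print(f"{units:,.0f} units in year one")
```

Because awareness and distribution start low and build gradually, year-one volume comes in well below what the equilibrium share alone would suggest, which is exactly why these builds must be in the model.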

Choice modeling offers a number of advantages over the old legacy methods of new product sales forecasting:

  • Choice modeling is more “shopper” focused, or more focused on the “retail” shopping experience. This is relevant because new products today are more distribution or retail driven than new products in the past. That is, new brands today are seldom supported with media advertising to the degree they were 30 or 40 years ago. Thus, what happens at point-of-purchase in the retail store is now all-important. Choice modeling is superior to legacy methods in simulating this in-store shopping experience.
  • Choice modeling measures cause and effect precisely, so that analysts and brand managers know which marketing levers to pull to achieve specific business outcomes.
  • Choice modeling permits hundreds of marketing scenarios to be explored in real time at no extra cost, in contrast to the inflexibility of the legacy systems (and the costs they impose for each additional scenario the client wants to evaluate). Most often, plans and budgets change as a new product is introduced, and the forecasting simulator (and underlying choice model) makes it easy to analyze and optimize these on-the-fly changes.
  • Choice modeling is calibration-based, not norm-based like the legacy systems. That is, the legacy forecasting systems rely heavily on normative data from previous new product forecasts, whereas choice models are calibrated to each product category based on current market size, brand shares, and trends. Norms tend to decay in relevance over time (older norms don’t tell us much about present day realities), but calibration is always based on the most current and most relevant data—actual brand shares and sales volumes in the narrowly defined product category.
  • Choice modeling can be applied to almost any product or service category with equal accuracy, since it is not based on normative data. None of the legacy systems have accurate norms for more than a few product categories. Choice models are based on calibration to the category, so lack of historical norms is not a limitation.
  • Choice modeling lends itself to 3D animation, so that a virtual shopping experience with a realistic shelf set can be created. This added realism increases the accuracy of the sales forecasts.

Developing and introducing new products or new services is an inherently risky venture. No forecasting system can guarantee success 100% of the time. Choice modeling, however, can reduce the risks associated with the introduction of new products by providing more accurate forecasts, especially for products without massive advertising support. Choice modeling also improves the marketer’s chances of success during the new product introduction, because the important marketing variables are incorporated into the forecasting simulator. This allows the new products team to evaluate and respond quickly to changing circumstances as the new product rolls out. This simulator-based flexibility to respond to changes in market conditions and competitive actions is often the difference between success and failure.


Jerry W. Thomas


Chief Executive Officer

Jerry founded Decision Analyst in September 1978. The firm has grown over the years and is now one of the largest privately held, employee-owned research agencies in North America. The firm prides itself on mastery of advanced analytics, predictive modeling to maximize learning from research studies, and the development of leading-edge analytic software.


Jerry is deeply involved in the firm’s development of new research methods and techniques and in the design of new software systems. He plays a key role in the development of Decision Analyst’s proprietary research services and related mathematical models.

Jerry describes himself as a student of marketing strategy, new product development, mathematical modeling, business survival, and economic growth. In his spare time, he likes to work on his farm in East Texas where he grows grapes, apples, pears, pecans, plums, and peaches; a forest of native trees, grasses, and insects; and wild plants of many types.

He graduated from the University of Texas at Arlington, earned his MBA at the University of Texas at Austin, and studied graduate economics at SMU.

Copyright © 2009 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.
