Strategic Marketing Tracking
Upheaval. Revolution. Transformation.
These are the words that characterize the nature and magnitude of changes swirling through the marketing world. These changes include the rise of digital media, the smartphone, apps galore, virtual reality, social media, blogs, and nonstop social communication that bypasses traditional fact-checking and filtering by professional news organizations; the decline of traditional media advertising; the rise of artificial intelligence, machine learning, gaming, and sports marketing; and the rise of online shopping and distribution systems. All of these changes signal a new era in marketing.
The late-20th-century world of three major television networks and relatively stable retail distribution channels has given way to a proliferation of TV networks and massive shifts in distribution systems. The simple world of supermarkets and Nielsen ratings is gone forever. So what’s a marketing executive to do? How can she keep track of the effects of marketing actions in the midst of upheaval, revolution, and transformation?
In contrast to media and technology, marketing fundamentals (strategy, positioning, awareness, continuity, product quality, concentration, message communication, image projection, etc.) remain as constant and as important as ever. The fundamentals do not change because of the digital revolution, social media, and multiplicity of television channels. Marketing fundamentals must remain as the lodestars in the marketing universe to guide marketers through the cosmic confusion of changing media, changing technology, and changing competitive forces.
We have found that strategic tracking of consumers’ awareness, perceptions, and behavior delivers essential marketing intelligence to help guide marketers through the turbulence and helter-skelter of rapid changes in marketing technology, media, and distribution channels. The ultimate goal of marketing is to influence and control the ultimate consumer. Therefore, if the perceptions, attitudes, and behavior of that ultimate consumer are monitored over time, we will know whether the cumulative force of all marketing activities is influencing the ultimate consumer. If we track consistently, it is possible to monitor the effects of specific marketing programs as they are implemented.
Strategic tracking answers a number of important questions:
How is your brand’s awareness trending over time, relative to competition? Awareness is the single most important marketing variable in many product categories. (A brief sketch after this list of questions shows how such a trend might be tabulated from wave-level data.)
How is your brand’s image evolving over time? Think of “image” as the character or personality of a brand’s awareness. The strategic management of brand image is one of the most important goals of marketing.
What advertising messages do your consumers remember about your brand, and how do these messages change over time? Advertising messages tend to undergo learning and memory “distortion” as they are interpreted and remembered by consumers. Therefore, the only way to know for sure the “net, net” communication of your advertising is to track advertising message recall.
What variables define your optimum target market? Who are your brand’s heavy users, nonusers, light users? The identification and monitoring of your brand’s optimum target market is one of the easily calculated outputs of good tracking research. What are the demographics (and the correlates) that define the optimal target market for your brand? Which market segments should you focus upon?
What impact are your competitors having in the marketplace, and how are competitive activities influencing your brand? Overreaction and underreaction to competitive initiatives constitute some of the greatest marketing mistakes historically. It’s really important to know, as early as possible, whether a new competitive product or new competitive advertising campaign is a real threat, or just smoke and vapors. Good tracking research allows you to monitor and assess competitive threats—before it’s too late to react.
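To make the awareness question above concrete, here is a minimal sketch, in Python with pandas, of how respondent-level tracking data might be rolled up into a trend line for your brand versus a competitor. The column names (wave, brand, unaided_aware) and the tiny data set are hypothetical illustrations, not a prescription for any particular tracking system.

```python
import pandas as pd

# Hypothetical respondent-level tracking data: one row per respondent-brand observation.
# Columns assumed for illustration only: wave, brand, unaided_aware (1 = aware, 0 = not).
responses = pd.DataFrame({
    "wave": ["2017-Q1"] * 6 + ["2017-Q2"] * 6,
    "brand": ["Our Brand", "Competitor A"] * 6,
    "unaided_aware": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1],
})

# Percent of respondents aware of each brand, by wave.
trend = (
    responses.groupby(["wave", "brand"])["unaided_aware"]
    .mean()
    .mul(100)
    .round(1)
    .unstack("brand")   # one column per brand, one row per wave
)

print(trend)  # compare your brand's column to the competitor's column across waves
```

The same roll-up, applied to each new wave of data, produces the longitudinal trend lines discussed throughout this article.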
If you should decide to pursue strategic tracking research for your brand, here are some suggestions to keep in mind. Tracking research, like everything else, can be good or bad depending upon how you design and execute it.
Telephone surveys and online surveys are typically the best way to track awareness, image, and advertising message recall. Telephone surveys can be continuous (i.e., conducted every day) or pulsed (conducted at a point in time, such as the last week of each quarter). Some types of tracking research can be conducted by mail (e.g., recognition tracking, image tracking, or brand share tracking), and the quality of the data from mail surveys can be high. Mail surveys, however, are not very good at measuring awareness (because respondents can ask other household members or Google the answers).
Good sampling is essential. The greatest (and often least visible) mistakes in tracking research are usually sampling errors. The sampling plan and management of the sample are absolutely crucial to consistently accurate tracking data. The samples from month to month and year to year must be identical in every way or else the resulting data will not be comparable. Here are some common sampling errors to avoid:
- Sample definition too narrow. If your target audience is females aged 21 to 29, that’s fine for guiding media placement. All too often, however, the target audience becomes the sampling plan for tracking. Therefore, only females aged 21 to 29 are interviewed in the tracking research. Suppose your advertising turns out to be really effective among women aged 34 to 54 instead of women aged 21 to 29. You might have canceled a very effective campaign because it appeared to be failing among the target audience. Also, it’s possible your advertising is working among 21- to 29-year-olds, but driving all other age groups away. If you sampled only the 21 to 29 age segment, you would overlook this critical failure.
Remember, always define the sample for tracking research very broadly and inclusively. The purpose of tracking is to tell us what’s happening in the marketplace, and a too-narrow sample almost always defeats this objective.
- Variable definition of sample. Never allow the things you want to measure to be a part of the screening criteria that admit someone into the survey. For example, you would never want awareness of a product category or awareness of a brand to be part of the screening criteria for a tracking survey if one of the purposes of the tracking research is to measure awareness. Likewise, you would never want “past-30-day usage” of a category or brand to be a part of the sampling criteria, if the purpose of the study is to measure changes in usage over time. Awareness and product usage are variables that can change as a result of your marketing activities or competitive initiatives, or they can change from season to season. As these variables change, they change the composition of the tracking sample and destroy the comparability of the survey data across time.
- Sampling without replacement. If the universe is limited (say you are tracking attitudes among your 1,000 dealers) and you take dealers out of the sample as they are surveyed, then the composition of your sample is gradually changing as interviewing progresses—and this makes the surveys from one time period incomparable to surveys in another time period.
Remember: if the universe is small and limited, then sample with replacement. That is, once a respondent is surveyed, put that respondent back into the sample for the next wave of data collection. An alternative solution is to divide the original sample into discrete, matched subsamples, and then use one of these subsamples for each subsequent wave of surveying. (The first sketch following this list illustrates both approaches.)
- Randomize sample within quota groups. Even though most projects begin with a random sample, things can happen that destroy randomness. For example, most samples are organized by time zone (so that households across the United States have equal time to respond). Sometimes, as part of this processing to organize the sample, the sample is put into some type of order (area code, prefix, or alphabet). As a final quality-control procedure, always randomize the final sample within each quota group (see the second sketch following this list). Then, no matter how the sample is worked, you will end up with a random sample.
- Limit sample to force “callbacks”. The research company must limit the size of the original sample so that the “callback” cycle is properly triggered. If too many households are put in the initial sample, it is likely that no “callbacks” will ever be made; the study is completed before the original sample is expended. The recommended policy is to release sample in a controlled, metered fashion (also shown in the second sketch below), so that every household has an equal opportunity to complete a survey.
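Here is the first sketch referenced above: a minimal illustration, in Python, of the two remedies for a small, limited universe. The 1,000 dealer IDs and the fixed random seed are hypothetical; a real tracking sample would be managed with more structure, but the principle is the same: nothing about the eligible universe changes from wave to wave.

```python
import random

# Hypothetical limited universe: 1,000 dealer IDs.
dealers = [f"dealer_{i:04d}" for i in range(1000)]
rng = random.Random(2017)   # fixed seed so the illustration is reproducible

# Option 1: keep every dealer eligible for every wave ("with replacement" across waves).
# Each wave is drawn from the full universe, so surveying a dealer in one wave
# never changes who can be drawn in the next wave.
wave_1 = rng.sample(dealers, 200)
wave_2 = rng.sample(dealers, 200)   # dealers surveyed in wave 1 go back into the pool

# Option 2: split the universe once into discrete, matched subsamples,
# then use one subsample per wave of surveying.
shuffled = dealers[:]
rng.shuffle(shuffled)
subsamples = [shuffled[i::5] for i in range(5)]   # five subsamples of 200 dealers each
wave_a, wave_b = subsamples[0], subsamples[1]     # e.g., one subsample per quarter
```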
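And the second sketch: a hedged illustration of randomizing the final sample within each quota group, then releasing it in controlled, metered batches so the callback cycle is completed. The quota groups, phone numbers, and batch size are invented for the example; in a real study the batch sizes and callback rules would come from the field plan.

```python
import random
from collections import defaultdict

rng = random.Random(42)

# Hypothetical sample records, already organized into quota groups (here, time zones).
sample = [
    {"phone": f"555-{group[:2].upper()}-{i:03d}", "quota_group": group}
    for group, count in [("Eastern", 40), ("Central", 30), ("Mountain", 10), ("Pacific", 20)]
    for i in range(count)
]

# Randomize the final sample WITHIN each quota group, so any ordering introduced
# while the sample was processed (area code, prefix, alphabet) cannot bias
# which records get worked first.
by_group = defaultdict(list)
for record in sample:
    by_group[record["quota_group"]].append(record)
for records in by_group.values():
    rng.shuffle(records)

# Release the sample in controlled, metered batches, so interviewers must finish
# (and call back) one batch before fresh numbers become available.
def release_batches(records, batch_size):
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

for group, records in by_group.items():
    for batch in release_batches(records, batch_size=10):
        pass  # field this batch and complete its callback cycle before releasing the next
```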
The questionnaire must remain the same from month to month and year to year. Changes in the questionnaire (even something as seemingly innocent as a change in question order) can create unexpected changes in the results. Simply changing one word in a question can change the results. Therefore, keep the questionnaire constant over time. If you want to modify, add, or delete questions in a tracking study, do it toward the end of the questionnaire—so that the changes will not distort the key measures in the first 80% of the questionnaire.
All data collection procedures and controls must remain constant over time. Changes in the minutiae of scheduling, sampling, email reminders, or “callbacks” can inject unplanned changes into tracking study results. The instructions and procedures for each specific tracking study must remain unchanged over time.
Editing, coding, data cleaning, and tabulation must remain constant over time. Changes in the way “no answers” or “blanks” are handled, changes in how many multiple responses are accepted, and a hundred other “minor” tabulation details can change tracking results over time.
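As one small, hedged illustration of this point, the sketch below freezes the “minor” tabulation decisions (how blanks are coded, how many multiple responses are kept) in a single specification that every wave reuses. The column names and codes are hypothetical.

```python
import pandas as pd

# Frozen tabulation rules, reused verbatim for every wave of data.
TAB_SPEC = {
    "no_answer_code": "No answer",   # how blanks / "no answers" are always coded
    "max_mentions": 3,               # how many multiple responses are always accepted
}

def clean_wave(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the same editing and coding rules to each wave, wave after wave."""
    cleaned = raw.copy()
    # Blanks are recoded the same way every time.
    cleaned["ad_message_recall"] = cleaned["ad_message_recall"].fillna(TAB_SPEC["no_answer_code"])
    # Multiple responses are truncated to the same maximum every time.
    cleaned["brands_mentioned"] = cleaned["brands_mentioned"].apply(
        lambda mentions: mentions[: TAB_SPEC["max_mentions"]]
    )
    return cleaned

# Example: the identical function is applied to each wave's raw data.
wave = pd.DataFrame({
    "ad_message_recall": ["Low price", None, "Quality"],
    "brands_mentioned": [["A", "B", "C", "D"], ["A"], []],
})
print(clean_wave(wave))
```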
A great long-term threat to the accuracy and integrity of a strategic tracking study is gradualism. That is, small incremental changes in methods and procedures accumulate over time and gradually destroy the comparability of the tracking data.
For ongoing tracking studies, it is recommended that monthly meetings be held with everyone in operations working on the project, to review and reinforce exactly how the study is to be executed. Likewise, specific quality-control guidelines and standards must be developed and maintained for each long-term tracking project.
Needless to say, once you choose a research company to do a tracking project, you should stick with that one company (unless that company’s performance is unsatisfactory). Changing research companies every year or two on a long-term project almost always guarantees that the data will not be comparable.
The true strategic value of tracking research is fully realized only after several years of consistent measurement of your ultimate consumers. Several years of longitudinal data really tell a story, but it’s a strategic story, a grand panorama of your performance in the marketplace compared to your competitors, as played out during the different phases of the business cycle. With this strategic road map, it is possible to plot grand strategy, and monitor your successes and failures in pursuing that strategy, regardless of the day-to-day confusion and chaos in the world of upheaval, revolution, and transformation.
Author
Jerry W. Thomas
Chief Executive Officer
Jerry founded Decision Analyst in September 1978. The firm has grown over the years and is now one of the largest privately held, employee-owned research agencies in North America. The firm prides itself on mastery of advanced analytics, predictive modeling, and choice modeling to optimize marketing decisions, and is deeply involved in the development of leading-edge analytic software. Jerry plays a key role in the development of Decision Analyst’s proprietary research services and related mathematical models.
Jerry graduated from the University of Texas at Arlington, earned his MBA at the University of Texas at Austin, and studied graduate economics at SMU.
Copyright © 2017 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.