Can You Hear Me Now?: Embracing Advertising Testing
by Tom Allen

  • Advertising Copy Testing

    You may remember the series of ads that ran for years in the early 2000s featuring a bespectacled phone technician (played by actor Paul Marcarelli) who wandered through various locales asking, “Can you hear me now?” into his cell phone to an unseen colleague.

    The campaign ran for nearly a decade. As is the way with most things in our fast-paced, information-overloaded, disposable world, the television audience quickly forgot about the Verizon guy once he was gone. Or did they?

    Then, starting in 2016, audiences once again started to see Paul’s familiar face on their television screens. He even repeated his old, familiar catchphrase, “Can you hear me now?” to jog our faded memories and help us identify him. If you looked no further, you might think Verizon had reinstated him. But he was instead pitching for Sprint, as emphasized by his new yellow shirt and modified catchphrase (“Can you hear that?”). Did consumers notice? Were they happy to see him again? Were they motivated to try Sprint’s service? Did they understand Paul’s primary message?
 

These questions fall within the purview of advertising copy testing. Some brands rigorously test all of their creative executions, before or while their ads are in market, as part of an evaluation system that helps them optimize their messaging. Other brands strategically test certain executions that stray from their traditional messaging to see how those executions affect their brands. Still other brands go with their gut and have no process in place for testing advertising. In some cases it is the ad agency (rather than the brand) that tests the creative, but the testing is generally driven by the brand.

My suggestion for both agencies and brands is to embrace testing among the target audience before ads go to market. Television advertising stimuli can be storyboards, near-finished ads, or finished ads. Testing early-stage creative avoids the expense of producing finished ads that may turn out to be less than stellar, but such stimuli are harder for consumers to grasp and react to. Testing finished ads is ideal, but comes at a greater price because of the investment needed to produce ads that may or may not hold up to scrutiny. Digital, print, outdoor, and radio advertising is usually tested with finished or near-finished ads. Online consumer testing is almost universally favored over other, more expensive methodologies, and most online surveys are “platform agnostic,” meaning they work equally well on mobile phones, laptops, and desktops.

It is important that ads be tested monadically (each respondent evaluates a single ad) in order to remove exposure and order biases. At the same time, it is financially beneficial to test multiple ads at once to gain economies of scale, and doing so allows key metrics for complementary ads or campaigns to be aggregated in a single report. In any case, it is important to build a database of evaluated ads (both your own and key competitors’ ads) in order to establish a set of normative benchmarks for comparison.
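To make the monadic idea concrete, here is a minimal sketch of how respondents might be routed so each one sees exactly one ad while cell sizes stay balanced. The ad names, quota, and respondent IDs are hypothetical and illustrative only, not any particular vendor’s system.

```python
import random

# Hypothetical monadic cell assignment: each respondent sees exactly one ad.
# Ad names, quota, and respondent IDs are illustrative assumptions.
ads = ["ad_a", "ad_b", "ad_c"]   # test cells (one creative execution each)
quota_per_cell = 200             # target completes per cell

cells = {ad: [] for ad in ads}

def assign_respondent(respondent_id):
    """Assign a respondent to one least-filled open cell (balanced monadic design)."""
    open_cells = [ad for ad in ads if len(cells[ad]) < quota_per_cell]
    if not open_cells:
        return None              # all quotas are full
    # Randomize among the least-filled cells to avoid systematic order bias.
    min_fill = min(len(cells[ad]) for ad in open_cells)
    candidates = [ad for ad in open_cells if len(cells[ad]) == min_fill]
    chosen = random.choice(candidates)
    cells[chosen].append(respondent_id)
    return chosen

# Example: route 600 respondents, one ad each.
for i in range(600):
    assign_respondent(f"resp_{i:04d}")

print({ad: len(ids) for ad, ids in cells.items()})  # roughly equal cell sizes
```

Because every respondent reacts to only one execution, differences between cells can be attributed to the ads themselves rather than to exposure order.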

Two popular methods of testing TV creative are “clutter reel” tests (which measure breakout from clutter) and “forced exposure” tests. Clutter-reel tests show respondents a mix of television content and advertisements before surveying them on awareness, recall, messaging, and other key factors related to the ad in question; the method attempts to simulate a real-world environment by embedding the test ad among other content. Forced-exposure tests show respondents the test ad two or more times to ensure full exposure; respondents then answer a number of questions about the brand, the message, and the ad to determine whether the ad would succeed in the marketplace or whether it could be improved before it goes to market. Most advertising tests involve proprietary algorithms that merge the results into a single, composite metric of expected ad success. Comparing key metrics across the ads in the database gives agencies and brands the information they need to decide whether an ad should be aired, modified, or shelved.
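The actual algorithms are proprietary, but the general idea can be sketched simply: standardize each key metric against the normative database, then combine the standardized scores into a weighted index. The metric names, norms, and weights below are entirely hypothetical assumptions for illustration.

```python
# Hypothetical composite ad-success score: z-score each key metric against
# normative benchmarks from an ads database, then take a weighted average.
# All metric names, norms, and weights here are illustrative assumptions.

norms = {                      # (mean, std dev) from a hypothetical norms database
    "unaided_recall":  (0.22, 0.08),
    "message_comm":    (0.55, 0.12),
    "brand_linkage":   (0.48, 0.10),
    "purchase_intent": (0.35, 0.09),
}

weights = {                    # illustrative weights; real weightings are proprietary
    "unaided_recall":  0.30,
    "message_comm":    0.25,
    "brand_linkage":   0.20,
    "purchase_intent": 0.25,
}

def composite_score(results):
    """Weighted sum of z-scores, rescaled so that 100 equals the normative average."""
    z_total = sum(
        weights[m] * (results[m] - mean) / sd
        for m, (mean, sd) in norms.items()
    )
    return 100 + 10 * z_total  # index: 100 = norm, each 10 points = 1 weighted SD

# Example: one tested ad's survey results (hypothetical proportions).
test_ad = {
    "unaided_recall":  0.28,
    "message_comm":    0.60,
    "brand_linkage":   0.45,
    "purchase_intent": 0.40,
}
print(round(composite_score(test_ad), 1))  # above 100 → beats the norm
```

Indexing against norms is what makes the database valuable: an ad’s raw recall score means little on its own, but a score well above the category norm is a strong signal.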

Time will tell whether Paul’s switch from Verizon to Sprint pays off for his new employer. Sprint has continued the campaign for over a year, though, which suggests it is effective, at least for the time being. We can hope that the ads were tested, improved, and then launched to great success. Continually testing and improving ads is a strategy that wise companies will adopt. Like Paul, we ask, “Can you hear that?”

About the Author

Tom Allen (tallen@decisionanalyst.com) is Senior Vice President at Decision Analyst. He may be reached at 1-800-262-5974 or 1-817-640-6166.

 

Copyright © 2020 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.