Before joining Seed Strategy, I worked as an analyst at Nielsen BASES where I learned countless lessons about testing, forecasting and launching new products. Without a doubt, one of the biggest lessons I learned was that research results never guarantee in-market success. Even the best-performing concepts can crash and burn in the real world. But why?
I’ve observed six common pitfalls that often derail initiatives after they achieve strong concept scores. By becoming aware of what they are and how to avoid them, marketers can significantly improve their chances of translating a strong concept into in-market success.
1. Too Much Information on the Concept Board
In concept testing, concepts serve as a substitute for awareness-generating advertising in order to simulate what will happen if the new product or service is launched into the real world. Because of this relationship, it is critical that the amount of information on the concept board mirror the level of advertising that will take place in-market. If the concept includes more information than would typically be communicated via marketing, it can lead to overstated purchase intent and inflated volume forecasts—setting the stage for heartbreak when in-market performance fails to live up to the test results.
So, if your brand plans to run primarily 30-second commercials (or online video ads), your concept board should not contain any more information than could realistically be communicated in a 30-second spot. Likewise, if your marketing plan calls for mostly 15-second ads, the amount of information in the concept should accurately reflect that timing. Plan on primarily running print ads or banner ads? Consider going with an adcept—a type of concept board that is similar to these vehicles in appearance and amount of communication. Know that you won’t be running any advertising at all? Then test a packcept—a concept board that features a visual of the package, with minimal, if any, additional copy.
At this point, you may be wondering how to tell if your concept board has too much information. A good rule of thumb is to read through your concept out loud and see how long it takes you to get through it. Then, compare this timing to whatever your primary advertising vehicle will be. For instance, if you know you will be running mostly 15-second spots, it should take you roughly 15 seconds to get through the concept.
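The read-aloud rule of thumb above can be sketched as a quick word-count check. This is a minimal illustration with an assumed speaking pace of roughly 2.5 words per second (a hypothetical figure; timing yourself reading aloud is the real test):

```python
# Rough check: does the concept board fit the planned ad length?
# Assumes ~2.5 spoken words per second (hypothetical average pace).

WORDS_PER_SECOND = 2.5

def estimated_read_time(concept_text: str) -> float:
    """Approximate seconds needed to read the concept aloud."""
    return len(concept_text.split()) / WORDS_PER_SECOND

def fits_ad_length(concept_text: str, ad_seconds: int, tolerance: float = 0.2) -> bool:
    """True if read-aloud time is within the ad length plus a small tolerance."""
    return estimated_read_time(concept_text) <= ad_seconds * (1 + tolerance)

concept = " ".join(["word"] * 40)              # stand-in for a 40-word concept board
print(round(estimated_read_time(concept), 1))  # 16.0 seconds
print(fits_ad_length(concept, ad_seconds=15))  # True (within 20% tolerance)
```

A 40-word concept reads in about 16 seconds, so it would pass for a 15-second spot but clearly fail for much shorter formats.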
2. Lineup Changes
There are many reasons a brand may launch a product lineup that’s different from the one that was tested. Sometimes retailers aren’t willing to stock as many SKUs as expected. Other times production challenges or material costs can cause a shift from one variety to another. There are even instances when teams purposely test far more varieties than they’ll launch and then narrow down to the final lineup based on research results. (NOTE: If you find yourself in this last situation, you’d be much better off running a TURF analysis as part of your concept test rather than loading up your concept board with varieties.)
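The TURF analysis mentioned in the note above boils down to finding the smallest lineup that reaches the most unduplicated buyers. Here is a minimal sketch of the greedy version using entirely hypothetical respondent data; a real study would build the matrix from respondents’ purchase-intent answers per variety:

```python
# Greedy TURF sketch: pick the k varieties that reach the most respondents
# at least once. Respondent data below is hypothetical.

respondent_likes = {              # respondent -> varieties they'd buy
    "r1": {"vanilla", "mint"},
    "r2": {"vanilla"},
    "r3": {"mint", "berry"},
    "r4": {"berry"},
    "r5": {"vanilla", "berry"},
}

def greedy_turf(likes: dict, k: int) -> list:
    """Return k varieties chosen greedily to maximize unduplicated reach."""
    varieties = set().union(*likes.values())
    chosen, reached = [], set()
    for _ in range(k):
        # Pick the variety adding the most not-yet-reached respondents
        # (sorted for a deterministic tie-break).
        best = max(
            sorted(varieties - set(chosen)),
            key=lambda v: len({r for r, s in likes.items() if v in s} - reached),
        )
        chosen.append(best)
        reached |= {r for r, s in likes.items() if best in s}
    return chosen

lineup = greedy_turf(respondent_likes, k=2)
reach = len({r for r, s in respondent_likes.items() if s & set(lineup)})
print(lineup, f"reach: {reach}/{len(respondent_likes)}")  # ['berry', 'vanilla'] reach: 5/5
```

In this toy example, two varieties already reach every respondent, so the third variety adds no incremental buyers and is a candidate to cut.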
Whatever the reason, making unchecked changes to a successfully tested product lineup can be fatal to a new product launch. Respondents’ answers on purchase intent, frequency and how many units they will buy are heavily driven by how well they like the varieties offered in the concept. If you start changing those varieties at launch time, people may buy fewer units and make purchases less frequently than concept testing predicted. You also risk taking away the only variety that some people liked, turning potentially loyal customers into automatic non-buyers.
Sometimes lineup changes are inevitable, but unless a specific variety flops in testing—making it obvious that a change will only improve things—it’s best to try to stick with the varieties that were tested in the concept. If a change is necessary prior to launch, consider retesting the proposition or check with your research partner to see if it’s possible to run a re-simulation of the forecast based on the changes. (A re-simulation is much faster and cheaper than a retest, but can usually only be done if you are dropping varieties rather than adding or swapping in completely new ones.) That way you’ll know ahead of time what impact changing the lineup will have rather than being surprised about it after it’s too late.
3. Price Changes
Here’s a shocker—consumers are price sensitive. The price on the concept board impacts respondents’ answers to purchase intent, perceived value, the number of units they say they will buy and how often they’ll repurchase—pretty much every single factor that determines a concept’s success. Launching at a higher price than what was tested will have a negative impact on sales volume, causing the initiative to fall short of the volume estimate calculated from the concept test.
Despite this fact, there are many times when taking a price increase is unavoidable. In some instances, the price on the concept board is a best-guess estimate that proves to be inaccurate once actual costs are formulated. Other times, raw material prices can fluctuate, making it impossible to turn a profit at the price originally listed on the concept board. And still other times, internal margin goals can shift, forcing a price hike.
Because of this, many companies find it worthwhile to add a Price Advisor or similar type of study to their concept tests. This kind of research can produce a curve that shows how different price points impact a concept’s sales volume. Not only does it pinpoint the optimal price point for maximizing volume, it also shows volume estimates for a range of different price points so you’ll always know how price changes will impact an initiative without jumping into the market blindly or having to retest the concept.
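Reading a volume estimate off a price curve like the one described above can be sketched with simple interpolation. The price points and volumes below are hypothetical, and a real Price Advisor-style study would fit a proper demand curve rather than connect the dots linearly:

```python
# Sketch: estimate volume at an untested launch price from a few tested
# price points (hypothetical numbers), using linear interpolation.

tested = [(2.99, 120_000), (3.49, 100_000), (3.99, 70_000)]  # (price, units/yr)

def interpolate_volume(price: float, curve: list) -> float:
    """Linearly interpolate expected volume at a given price."""
    curve = sorted(curve)
    if price <= curve[0][0]:     # below the tested range: clamp to the endpoint
        return curve[0][1]
    if price >= curve[-1][0]:    # above the tested range: clamp to the endpoint
        return curve[-1][1]
    for (p0, v0), (p1, v1) in zip(curve, curve[1:]):
        if p0 <= price <= p1:
            frac = (price - p0) / (p1 - p0)
            return v0 + frac * (v1 - v0)

print(round(interpolate_volume(3.79, tested)))  # 82000
```

If margin pressures push the launch price from the tested $3.49 to $3.79, the curve suggests roughly an 18% volume hit, which is far better to know before launch than after.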
4. Inconsistency Between Concept and Advertising Communication
When I was at BASES, my client was one of the top CPG companies in the world. I remember a specific project where one of the brand teams spent a hefty sum of money testing a series of new shampoo concepts. One of the concepts rose to the top, performing extremely well in testing only to flop in-market. Needless to say, the client team was very disappointed and came back to us asking how our forecast could have been so inaccurate. We did a complete forecast validation and landed on a simple answer—we found that the message communicated in their advertising was completely different from what was on the concept board. No wonder the results were different than forecasted!
Making this mistake can have huge ramifications, but fortunately, it’s an easy one to avoid. The most important thing is to make sure that the creative brief aligns with the concept board and that the advertising holds true to the brief. (It sounds simple enough, but I can’t tell you how many times I’ve seen very smart teams successfully shepherd an initiative through testing only to completely abandon everything that worked in research once they moved on to advertising.) Want to go above and beyond to avoid this pitfall? Be sure to give the ad agency a copy of the final concept and consider having an official hand-off meeting between your innovation agency and the ad team to ensure nothing critical gets lost in the transition from concept to advertising.
5. Differences Between Distribution Inputs and Actual Distribution
Distribution is the number one driver of volume—you simply cannot buy what you can’t find. That’s why most concept tests use distribution as a major piece of the volume-forecasting puzzle. Some less sophisticated tests use very simplified assumptions (above average distribution, average distribution, below average distribution, etc.) while others factor in even the smallest details like distribution build by month and specific outlet, packaging design impact, shelf position and number of facings. No matter how simple or sophisticated the test may be, when distribution inputs for the concept test differ from what actually happens in-market, it’s almost guaranteed to result in inaccurate volume estimates.
No doubt, figuring out the specific distribution inputs can be difficult and the forms can often be tedious to fill out, but it’s very important to get these details right. Resist the urge to rush through them at the last minute or to simply reuse inputs from past studies. This is not an area where you want to skimp on thinking; the accuracy of your forecast and the success of your initiative likely depend on it. And, by all means, if the distribution factors change after testing, ask the testing company to run a re-simulation of the forecast based on the new distribution inputs.
6. Product Performance Fails to Meet Expectations
Concept test results often single-handedly determine whether or not an initiative moves forward, so it’s no wonder that teams spend a lot of time, money and energy making their concepts the strongest they can be. But there’s a very fine line between making a concept sound as appealing as possible and promising an experience that the product simply can’t deliver. While overpromising (consciously or not) may very well help a concept earn higher test scores, those scores will be inflated, leading to inaccurate projections about how successful the product will be once it gets to market. Trial volume may end up being in line with test results, but once real-world consumers actually try the product and are underwhelmed by what it brings to the table, repeat volume will prove to be much lower than forecasted.
This is exactly why Concept & Use (C&U) tests are the most accurate type of concept test you can run. By gauging respondents’ purchase interest based on the concept and then seeing how that interest changes after actually trying the product, these tests measure whether or not a product’s performance is in line with respondents’ expectations. This after-use purchase intent score has consistently proven to be the single best predictor of long-term in-market success.
Don’t have the budget or time needed to run a C&U test? You’re definitely not alone. Just remember that no matter what kind of concept test you are running, honesty is always the best policy. Go ahead and shout from the mountaintops about your product’s features and benefits, but be realistic with yourself and respondents about what your proposition will actually deliver—because the goal isn’t just to make it to market, it’s to stay there.
Have you seen any other pitfalls derail a successful concept? Do you have any more tips for ensuring that strong concepts translate into successful product launches? I’d love to continue the conversation—please comment below or connect with me on LinkedIn.
Adam Siegel is an Associate Creative Director at Seed Strategy where he draws upon his diverse experience in advertising, research and innovation to craft breakthrough creative and winning concept copy.