Food Quality and Preference 14 (2003) 37–38 www.elsevier.com/locate/foodqual
Discussion
Solicited commentary to Garber et al.: Measuring consumer response to food products
Dominic Buck*
Product Perceptions Ltd., St. George's House, Yattendon Road, Horley, Surrey RH6 7BS, UK
E-mail address: [email protected] (D. Buck).
The paper makes enjoyable reading and makes many good points about the importance of variables often ignored by food scientists in product testing. I agree with the authors that realism needs to be brought into the testing process in order to achieve a greater measure of external validity. There are, however, several issues where our experience of product testing would qualify the authors' conclusions.

The authors state: "For food scientists to perform tests that will accurately predict consumer behaviour at the point of purchase, it is necessary that they include...marketing variables...to assure that consumers are responding to a food product as they would in an actual store setting." I agree with this sentiment as stated, but its implications raise some concerns.

Firstly, I would contend that most product tests are not set up to predict consumer behaviour at the point of purchase. Test markets and STMs (simulated test markets) have this objective, and aim to achieve it precisely by including all the marketing variables of brand, distribution, price, positioning, heritage, etc., that the authors describe. By contrast, most product tests attempt to understand the true product effect: either the relative standing between competitors' products without the influence of brand, or, for product improvement, the incremental effect on consumer liking of changes to formulation, manufacturing process, packaging, consumer expectation, etc.

Our preferred practice in meeting this objective would be, initially, to test with products unbranded and unpriced, but with a general context provided to respondents (e.g. "a toothpaste to provide all-day protection against plaque"). Then, should significant (in the pragmatic rather than statistical sense) improvements be identified, we would conduct a subsequent confirmatory stage in which the products are tested in branded form under usual conditions of use and consumption. This reinforces the validity of the unbranded finding, allows it to be understood in a market context, and ensures that the 'improved' product's perception is consonant with the brand proposition.
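As an aside on the "pragmatic rather than statistical" distinction: one possible way to operationalise it, sketched below purely for illustration (the action standard, sample size, and ratings are all invented, not drawn from any real test), is to require that the mean uplift on the 9-point hedonic scale both exceed a pre-agreed minimum meaningful difference and be statistically distinguishable from zero.

```python
# Illustrative sketch only: one way to check whether an unbranded reformulation
# shows a "pragmatically significant" improvement over the current product,
# i.e. a mean uplift that clears a pre-agreed minimum meaningful difference on
# the 9-point hedonic scale, not merely a statistically detectable one.
from statistics import mean, stdev
from math import sqrt

MIN_MEANINGFUL_DIFF = 0.3  # hypothetical action standard, in scale points

# Hypothetical paired ratings from the same respondents (current vs. reformulated)
current      = [6, 5, 7, 6, 6, 5, 7, 6, 5, 6, 7, 6]
reformulated = [7, 6, 7, 7, 6, 6, 8, 6, 6, 7, 7, 7]

diffs = [r - c for c, r in zip(current, reformulated)]
n = len(diffs)
uplift = mean(diffs)                                 # average liking gain
se = stdev(diffs) / sqrt(n)                          # standard error of the gain
ci_low, ci_high = uplift - 2 * se, uplift + 2 * se   # rough ~95% interval

statistically_detectable = ci_low > 0
pragmatically_significant = uplift >= MIN_MEANINGFUL_DIFF and statistically_detectable

print(f"mean uplift = {uplift:.2f} (rough 95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"statistically detectable:  {statistically_detectable}")
print(f"pragmatically significant: {pragmatically_significant}")
```

In practice the minimum meaningful difference would be agreed with the client in advance; the point of the check is simply that a tiny but statistically reliable uplift would not, on its own, justify a confirmatory branded stage.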
Our experience is that a product generally liked under blind conditions is rarely disliked when tested branded. The opposite can be true (a product disliked when tested blind, but quite acceptable when tested branded) for brands offering some unique benefit; interestingly, though, this generally only holds until a better-tasting product offering the same benefit is launched.

Secondly, the focus on consumers' response to the food product in an actual store setting can be unrealistic, or even invalid, for many products. The in-store context is of particular relevance only to the buyer, who may not be the sole consumer. Indeed, at any stage apart from the first purchase, the effect of product consumption (which usually happens at another time, and in a location other than the store) can be the defining motivation to repeat-purchase the product. While I agree entirely that all food product testing should be conducted by respondents under an appropriate context for assessment, to set that context purely on in-store variables may simply obscure a valid response.

Again, in their section on the Marketing Mix the authors imply that tests should be able "...to validly predict consumer choice in an actual market environment." But there are many objectives for which "...a taste-test experiment in which brand identity is withheld..." is exactly appropriate. We recently conducted a quantitative product test in which the same respondents tested a number of competing products: first blind, and then, on the next day, the same products branded. Tested blind, the brand leader scored 5.46 on a 9-point scale against 6.41 for the best own-label product. The brand leader was also down-rated significantly on consistency and taste. These diagnostics were confirmed by a sensory panel, and it subsequently turned out that an inferior batch had been selected for the test.
When the same products were tested branded, the brand leader scored 7.17 compared with the best own-label product's 7.15, and its product attribute ratings were also at parity with the best own-label product's. So, blind, the brand leader is much less liked; branded, the two products are at parity. Does this mean that product quality counts for nothing? Of course not. Luckily, having seen the blind results, we also asked respondents about their branded scores as they were leaving the hall. The following comments were typical: "...I rated it well because it's my usual brand, but frankly I was disappointed..."; "...I know I like this brand, but this time it wasn't all that good. If the next jar I bought wasn't any better, I'd try a different brand..."

Our conclusion was that the branding could overcome an infrequent and short-term drop in quality, but that the brand's equity could not be sustained unless the product quickly returned to its usual high standard. Here, the objective was to conduct a quality benchmarking exercise. Had the test been carried out only under branded conditions, information vital to our client would have been lost.
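To show how data from such a blind-then-branded design can be summarised, the minimal sketch below uses invented ratings for a handful of respondents (not the raw data behind the means quoted above) and computes the brand-leader-versus-own-label gap under each condition; in the test described here that gap went from roughly a point in the own-label product's favour when blind to parity when branded.

```python
# Illustrative sketch only (invented ratings, not the study's raw data):
# summarising a blind-then-branded test in which the same respondents rate a
# brand leader and the best own-label product on a 9-point hedonic scale,
# to see how branding shifts the gap between the two products.
from statistics import mean

# Hypothetical ratings per respondent: (brand leader, own label)
blind   = [(5, 6), (6, 7), (5, 6), (6, 6), (5, 7), (6, 6), (5, 7), (6, 6)]
branded = [(7, 7), (8, 7), (7, 7), (7, 8), (7, 7), (8, 7), (7, 7), (7, 7)]

def summarise(label, pairs):
    leader = mean(p[0] for p in pairs)
    own    = mean(p[1] for p in pairs)
    print(f"{label:8s} leader = {leader:.2f}, own label = {own:.2f}, "
          f"gap = {leader - own:+.2f}")

summarise("blind", blind)      # the leader trails the own-label product
summarise("branded", branded)  # branding pulls the two back to parity
```

Of course, as the exit comments above make clear, the summary numbers alone would not reveal why the branded scores held up, or how fragile that branded parity might be.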
I do take strong issue with the authors' contention that "...all commercial food products are carefully designed and tested to assure that they taste good, not bad, to consumers, or they are never even considered for launch." Any researcher who has conducted market appraisals or multi-product optimisation studies will tell you that in most markets there are commercially available products that some consumers strongly dislike, even if others find them delicious. I therefore believe that the authors' appended questionnaire seriously understates the reality of some consumers' perceptions, since even at the worst end of the scale they can only report their lack of appreciation of a test product as 'indifferent'.

The authors' suggested means of disentangling and separating the sensory and cognitive effects of a stimulus are interesting, but it is difficult to see such extensive pre-testing as a viable option in most commercial market-research and product-testing applications.

In conclusion, I welcome the timely reminder to food scientists to create, among research respondents, an appropriate context for product assessment. I believe that including information on store/buying variables in every test, regardless of the objectives, may be inappropriate, particularly when one needs to understand the true product effect. At the extreme, if a valid prediction of consumer choice at the point of purchase is required, then we should be conducting a full STM, with consideration of promotional scenarios and perceived risk as well as brand, distribution, positioning, advertising, etc. Otherwise, store/buying variables should be included selectively, so as to maximise external realism without obscuring the information needed to meet the test objectives.