Best practices in identifying customer-driven improvement opportunities




Industrial Marketing Management 32 (2003) 455 – 466

Michael S. Garver*
Department of Marketing, Central Michigan University, Mount Pleasant, MI 48859, USA
Received 12 June 2001; accepted 28 May 2002

Abstract

This article makes a significant contribution to the literature by modifying traditional performance–importance analysis and by expanding customer-driven improvement models. The model significantly advances theory, addressing many important research issues and gaps. It is extremely useful to practitioners, providing a guide for identifying continuous and breakthrough improvement opportunities.
© 2002 Elsevier Science Inc. All rights reserved.

Keywords: Customer satisfaction; Customer value; Loyalty; Service quality; Continuous improvement; Total quality management; Customer-focused; Market orientation; Marketing strategy; Performance–importance analysis

1. Introduction

Delivering superior customer value and satisfaction is critical to a firm's competitive advantage [1,2]. Yet in today's competitive environment, marketplace advantages are often short-lived. With this in mind, many practitioners and academic researchers are touting continuous improvement strategies to stay ahead of the competition [3]. To drive continuous improvement, academic researchers are placing more importance on measuring organizational performance from the customer's perspective [4,5]. As a result, customer value and satisfaction research is the most prevalent type of research conducted by companies today [6,7]. Customers decide who has the best offering, and they are the ultimate judge of quality products and services.

Researchers have put forth the performance–importance matrix (i.e., quadrant analysis) as the primary tool for identifying improvement opportunities from customer satisfaction data [8]. Performance–importance analysis is extremely valuable to practitioners, yet many important research issues and gaps currently exist. For example, Oliver [7] suggests that there is confusion concerning the most appropriate method for calculating and distributing performance and importance scores along their respective axes.

Brandt [9] suggests that practitioners are unaware of limitations associated with performance–importance analysis, and shows that implementing different methods of distributing performance scores results in very different conclusions. Implementing an inappropriate method to calculate performance–importance scores will likely result in committing scarce resources to improving the wrong process. Furthermore, researchers and practitioners often discuss performance–importance analysis as the sole tool for identifying continuous improvement opportunities, leading to a myopic view of improvement strategies [10]. In short, research attention needs to address appropriate methods for calculating performance–importance scores and examine new variables to complement current improvement frameworks.

The purpose of this article is to examine (1) performance–importance analysis and (2) customer-driven improvement models. To fulfill this purpose, a literature review of performance–importance analysis and improvement models will be presented. Then, the research methodology will be discussed, followed by the results of this study. The contribution of this research rests in examining the strengths and limitations of performance–importance analysis and in developing a more comprehensive customer-driven improvement model.

2. Literature review

The literature review will discuss the following concepts: (1) performance–importance analysis, (2) methods of calculating performance scores, (3) methods of calculating importance scores, and (4) customer-driven improvement models.




2.1. Performance–importance analysis

For decades, performance–importance analysis has been discussed as a tool to evaluate a firm's competitive position in the market, to identify improvement opportunities, and to guide strategic planning efforts [8,11–13]. Using customer satisfaction data, product and service attributes (i.e., characteristics of the firm's offering such as quality, price, and delivery service) are plotted on a 2 × 2 matrix (see Fig. 1). Typically, customer perceptions of performance represent the horizontal axis and customer perceptions of importance are distributed on the vertical axis. Depending on cell location, customer satisfaction attributes are deemed major or minor strengths and weaknesses. Attributes representing high importance and low relative performance are considered major weaknesses. Attributes that are low in importance and low in relative performance are considered minor weaknesses. In contrast, attributes representing high importance and high relative performance are major strengths. Attributes with low importance and high relative performance are considered minor strengths.

From this analysis, certain improvement opportunities can be suggested. For example, researchers often suggest that major weaknesses should receive top priority and be targeted for immediate improvement efforts [11]. In contrast, attributes deemed major strengths should be maintained, leveraged, and heavily promoted [14]. Essentially, these attributes represent the firm's competitive advantage in the marketplace. Attributes with high relative performance and low importance may be overkill, suggesting that too many resources are being diverted from other, more important customer service attributes. Finally, minor weaknesses, attributes with low relative performance and importance, are given lower priority for improvement opportunities.

Performance–importance analysis is widely used in practice, yet research issues surround its development, interpretation, and role in identifying continuous improvement opportunities [9].

Fig. 1. Traditional performance–importance analysis.
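To make the quadrant logic concrete, the following Python sketch classifies a handful of hypothetical attributes into the four cells of Fig. 1. The attribute names, ratings, and the choice of splitting each axis at the attribute mean are illustrative assumptions, not data from the article.

```python
# Hypothetical sketch of traditional performance-importance (quadrant) analysis.
# Attribute names, scores, and the midpoint rules are illustrative assumptions.

from statistics import mean

# (attribute, mean performance rating, importance rating), e.g. on a 1-10 scale
attributes = [
    ("product quality", 8.7, 9.1),
    ("price", 7.2, 8.8),
    ("delivery speed", 6.1, 7.9),
    ("packaging", 8.9, 4.2),
    ("invoice accuracy", 5.8, 3.9),
]

# One common (and debated) choice: split each axis at the mean across attributes.
perf_split = mean(p for _, p, _ in attributes)
imp_split = mean(i for _, _, i in attributes)

def quadrant(performance, importance):
    """Label an attribute as a major/minor strength or weakness."""
    if importance >= imp_split:
        return "major strength" if performance >= perf_split else "major weakness"
    return "minor strength" if performance >= perf_split else "minor weakness"

for name, perf, imp in attributes:
    print(f"{name:18s} performance={perf:.1f} importance={imp:.1f} -> {quadrant(perf, imp)}")
```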

Oliver [7] suggests that traditional performance–importance analysis has serious limitations that are often overlooked by both practitioners and academic researchers. For example, the literature discusses different methods of calculating and plotting performance and importance scores, and research suggests that implementing different methods will likely result in identifying dramatically different attributes for improvement [9]. Our discussion now turns to examining performance and importance scores.

2.1.1. Performance

Typically, customer satisfaction surveys have attribute questions that ask participants to rate the performance (i.e., satisfaction) of the firm's products and services, for example, "Please rate the performance of our product quality." Once the data are collected, researchers have traditionally examined both actual and relative performance as input into performance–importance analysis [6]. This section presents both methods of examining customer satisfaction performance scores along with their advantages and limitations.

2.1.2. Relative performance

Typically, practitioners are more concerned with relative performance than with actual performance. Examining relative performance, researchers often calculate performance scores by comparing their firm's performance to that of a best competitor. Relative performance methods include: (1) gap analysis, (2) performance ratios, and (3) comparative scales.

2.1.2.1. Gap analysis. Some business-to-business researchers have suggested calculating a gap analysis, subtracting the best competitor's performance from the focal firm's performance [14,15]. A positive gap means the firm is winning on the attribute, while a negative gap represents a disadvantage. The performance break in performance–importance analysis (the midline) would be at zero, where the competition and the focal firm are equal in performance. Implementing this technique, researchers have traditionally interpreted a difference of one in attribute mean performance as a significant difference in performance [14,15]. For example, if the mean performance for delivery speed is 6 and the best competitor's mean performance is 4.8, then the gap is 1.2, and the firm would be judged to possess a significant advantage in delivery speed.

While this method is intuitively appealing, it possesses many limitations. One limitation is that the majority of customer satisfaction attributes are deemed "equal to the competition" when significant differences in performance may actually exist. The problem rests in the size of the interpreted gap and its lack of standardization across a different number of scale points.


Reexamining data from a previously published academic study that used this method, researchers interpreted a customer service attribute with a relative performance difference of .83 as parity with the competition [14]. Closer examination shows that the difference between the firm and its best competitor is statistically significant at an alpha level below .01. Using this method, the relative performance of all attributes in that study was interpreted as "equal to the competition," even though some attributes showed statistically significant differences between best-competitor and firm performance. This approach also lacks standardization across a different number of scale points. For example, a one-point difference in performance is relatively smaller on a 10-point scale than on a five-point scale. A gap of one means something dramatically different across scale lengths, and thus has serious implications for interpreting a firm's relative performance.

2.1.2.2. Performance ratios. Performance ratios take a slightly different approach and were first introduced by Gale [3]. Performance ratios are calculated by dividing the focal firm's mean performance by the best competitor's mean performance. If the result is greater than 1, the focal firm has a market advantage, whereas a score of less than 1 represents a market disadvantage [16]. The "parity zone" equals 1, plus or minus some fraction of 1; Gale suggests that 1 ± .03 constitutes the parity zone. If a firm's attributes fall in the parity zone, then customers consider those attributes relatively equal in performance to the competition. The performance break in performance–importance analysis (the midline) occurs at 1, where the competition is equal to the focal firm. Calculating performance ratios overcomes limitations associated with traditional gap analysis. Most importantly, it more accurately captures real differences in performance and is standardized across a varying number of scale points. The key issue with performance ratios is: What is the true size of the parity zone? Is 1 ± .03 truly the parity zone? Should the parity zone equate to statistical significance?

2.1.2.3. Comparative scales. The third approach to capturing relative performance is to measure comparative performance directly on the survey [10]. For example, for each customer satisfaction attribute question, response categories may include "much better than competition," "equal to competition," and "much worse than competition." The performance break in performance–importance analysis (the midline) occurs at "equal to competition." The advantage of this method is that it directly captures relative performance with a minimum number of survey questions. The major limitation is that it does not capture performance deficiencies shared by all competitors in the industry. For example, airline satisfaction scores are substantially below those of other industries. Using this method in the airline industry, performance that is above the competition is still likely to be well below customer expectations. Yet examining comparative performance scores would suggest that the leading airline is doing well with customers.


This method raises the following issue: Are customers happy with industry performance, or do all suppliers deliver subpar performance? Implementing this method, actual firm performance could be poor yet receive high relative performance scores.

2.1.3. Actual performance

Actual performance scores are also obtained from customer satisfaction surveys, yet these performance scores are not compared to the competition. Instead, actual performance or satisfaction scores (typically means) are simply plotted on the performance axis. Two methods are typically used to split the performance axis: either the midpoint of the measurement scale or the mean performance across all attributes [9]. Implementing the midpoint of the measurement scale, a "4" on a seven-point rating scale would be used to split the performance axis. Attributes with means above the midpoint are placed on the high end of the performance axis, while attributes with means below the midpoint are placed on the low end. The major limitation of this method is that for most firms, a performance score of 5 or below may actually represent subpar performance. Peterson and Wilson [17] suggest that satisfaction scores are often inflated and that midrange performance scores actually signify poor performance. Using the midpoint method, it would not be surprising for all attributes to land in the "high-performance" quadrant.

The average mean performance rating across all attributes is also used to split the performance axis. Attributes with means above the average are placed on the high end of the performance axis, while attributes with means below this average are placed on the low end. The limitation of this method is that the results are guaranteed to produce both attribute strengths and weaknesses: the attributes deemed "weaknesses" are weak only because their performance is below that of other attributes, not because of any market-based comparison standard. Yet customers buy products and services in a competitive market. The focal firm can display outstanding performance, but if that performance is below the competition, then the competition will likely obtain the sale.

Jones and Sasser [18] suggest that relying either on the midpoint-of-the-scale method or the attribute-mean method results in a biased and misleading interpretation of performance. Their research shows that "top-box" performance scores (i.e., 9–10 on a 10-point rating scale) are the only performance scores that result in loyal customers, those who are safe from defection and those who spend more of their wallet with the firm. This would suggest that satisfaction researchers use top-box scores as the split in the performance axis. If a 10-point scale is used, 9 would be the split: scores of 9 and above would represent a strength, with scores below 9 representing areas needing improvement. This clearly overcomes the problem of making comparisons to poor-performing companies throughout the industry.
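The differences among these methods are easiest to see side by side. The short Python sketch below computes, for one attribute, a gap, a Gale-style performance ratio with a ± .03 parity zone, a rough large-sample significance check, and a top-box split; all ratings are invented for illustration.

```python
# Illustrative sketch contrasting the relative- and actual-performance methods
# discussed above: gap analysis, performance ratios with a parity zone, and a
# "top-box" split. The ratings below are made-up 10-point scores.

from statistics import mean, stdev

firm =       [9, 8, 9, 7, 10, 8, 9, 9, 8, 9, 10, 8]   # focal firm ratings
competitor = [8, 7, 8, 7, 9, 7, 8, 8, 7, 8, 9, 7]     # best-competitor ratings

gap = mean(firm) - mean(competitor)          # gap analysis: >0 means an advantage
ratio = mean(firm) / mean(competitor)        # performance ratio: >1 means an advantage
parity = abs(ratio - 1) <= 0.03              # Gale's 1 +/- .03 parity zone

# Rough large-sample Welch t-statistic, instead of a fixed "gap of one" rule.
se = (stdev(firm) ** 2 / len(firm) + stdev(competitor) ** 2 / len(competitor)) ** 0.5
t_stat = gap / se

# Actual-performance view: top-box split (9-10 on a 10-point scale).
top_box_share = sum(r >= 9 for r in firm) / len(firm)

print(f"gap={gap:.2f}, ratio={ratio:.3f}, in parity zone={parity}")
print(f"Welch t ~ {t_stat:.2f} (|t| > ~2 suggests a real difference at large n)")
print(f"top-box share={top_box_share:.0%} (scores of 9-10)")
```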



2.1.4. Importance

To determine importance ratings for attributes, practitioners and researchers debate the merits of stated and statistically inferred methods [19]. Both have merit, as well as significant limitations.

Many business-to-business researchers have typically relied on stated importance ratings [10,14,15]. Implementing this method, researchers ask survey respondents to rate the importance of each customer satisfaction attribute, typically on a scale ranging from "not at all important" to "very important." Although commonly used, this method has major limitations. First, asking stated importance questions significantly adds to survey length. In the current environment, response rates continue to decline, and increased survey length only contributes to this problem. Second, research has shown that importance questions are often misinterpreted by respondents and researchers [7]. The problem often lies in the word "importance," and in the variance in attribute performance between key competitors. For example, survey respondents may state that product quality is the most important attribute of a firm's offering. Yet, if product quality is equal between main competitors, then it may not be important for selecting and retaining vendors [14]. Is it important for a refrigerator to maintain cold temperatures? Is this not the most important function of a refrigerator? Yet most consumers take this attribute as a given, and differentiating attributes such as styling, size, and color may actually become much more important in the selection of a refrigerator.

Stated importance ratings also often display a lack of discriminating power between customer satisfaction attributes [12]. In short, everything is often "very important" to customers. Reexamining prior business-to-business research, analysis reveals that 78% of customer service attributes were rated "very important," with little variance in importance between these attributes [10]. The purpose of performance–importance analysis is to determine the relative importance of attributes and to prioritize improvement opportunities accordingly; if everything is "very important," that purpose is defeated. While rank-order methods may overcome this limitation, they are seldom employed because of the demands placed on respondents. Not only is survey length increased, but rank-ordering 20 variables is often too difficult for respondents.

To overcome these problems, satisfaction researchers often use statistically inferred importance ratings [7,20]. Implementing this technique, researchers simply regress attribute performance ratings (independent variables) against an overall performance measure such as overall satisfaction (dependent variable). As a result, survey length is dramatically reduced, and respondent bias associated with importance questions is eliminated. Furthermore, statistically inferred importance ratings display much more variance between attributes, allowing researchers to identify relative differences in attribute importance [12].
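As a hedged illustration of the statistically inferred approach, the sketch below regresses simulated attribute ratings on a simulated overall satisfaction score and reads the coefficients as relative importance weights. The attribute names and data are assumptions; real satisfaction data often violate the regression assumptions discussed next.

```python
# Minimal sketch of statistically inferred importance: regress attribute
# performance ratings on an overall satisfaction rating and treat the
# coefficients as relative importance weights. The data are simulated.

import numpy as np

rng = np.random.default_rng(0)
n = 200
quality = rng.integers(5, 11, n)       # attribute ratings on a 10-point scale
delivery = rng.integers(4, 11, n)
price = rng.integers(3, 11, n)

# Simulated overall satisfaction driven mostly by quality and delivery.
overall = 0.6 * quality + 0.3 * delivery + 0.1 * price + rng.normal(0, 0.8, n)

X = np.column_stack([np.ones(n), quality, delivery, price])
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)

for name, b in zip(["intercept", "quality", "delivery", "price"], coefs):
    print(f"{name:10s} beta = {b:.2f}")
```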

Calculating statistically inferred importance ratings has distinct advantages, yet this technique also has limitations. Many researchers and practitioners use multiple regression or structural equation modeling to determine attribute importance ratings. Both statistical techniques assume the following: (1) the data are relatively normal; (2) the relationships between independent and dependent variables are linear; and (3) multicollinearity between independent variables is relatively low [21]. In customer satisfaction research, these assumptions are almost always violated. Data distributions are often highly skewed; typically, performance ratings are inflated, with scores congregating at the upper end of the scale [17,22]. Furthermore, recent research suggests that bivariate relationships are often nonlinear [22,23]. Finally, multicollinearity among customer satisfaction attributes is typically very high [22]. If regression or structural equation modeling is implemented, β coefficients may be biased, and interpreting attribute importance ratings may be misleading.

The method chosen to determine importance ratings is critical because stated and statistically inferred ratings often produce different results. For example, it is common for price to have high stated importance ratings, yet possess much lower statistically inferred importance ratings [7,12]. While statistically inferred importance ratings have limitations, satisfaction researchers generally find them more useful in gauging the relative importance of attributes [20].

2.1.5. Continuous improvement frameworks

To this point, the discussion has focused solely on performance–importance analysis and its role in identifying attributes for improvement opportunities. In far too many situations, practitioners rely almost exclusively on this analysis to identify improvement opportunities [24]. Given the limitations of determining both performance and importance scores, overreliance on this analysis is dangerous and may result in organizations investing limited resources to improve the "wrong" process. In turn, processes that need improvement may be neglected. To complement performance–importance analysis, business-to-business researchers have developed improvement models that include: (1) the cost–time matrix and (2) the return on quality (ROQ) methodology.

2.1.5.1. Cost–time matrix. In a world of limited resources (i.e., employee time and financial resources), practitioners have scarce resources to fix problems. The cost–time matrix often works in conjunction with performance–importance analysis and simply examines the cost and time to improve a given customer satisfaction attribute [10].



For example, what are the costs to improve certain attributes selected from performance–importance analysis? How much time is required to improve performance on those attributes? Can the attribute be improved in 6 months, or will it require 3 years? Assuming that two attributes have similar positions on the performance–importance matrix, researchers have suggested that attributes that can be improved with fewer resources and in a shorter time horizon would be preferred, more feasible, and more profitable to improve.
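A minimal sketch of such a cost-time screen is shown below; the candidate attributes, cost figures, time estimates, and resource constraints are all hypothetical.

```python
# Hedged sketch of a cost-time screen applied after performance-importance
# analysis: among attributes flagged for improvement, prefer those that are
# cheaper and faster to fix. All figures are hypothetical.

candidates = [
    # (attribute, estimated cost in $K, estimated time in months)
    ("order accuracy", 150, 6),
    ("lead-time reliability", 800, 30),
    ("invoice accuracy", 60, 4),
]

budget_k, horizon_months = 300, 12  # illustrative resource constraints

feasible = [(a, c, t) for a, c, t in candidates if c <= budget_k and t <= horizon_months]
# Rank the feasible initiatives, cheapest and quickest first.
for attr, cost, months in sorted(feasible, key=lambda x: (x[1], x[2])):
    print(f"{attr}: ~${cost}K, ~{months} months")
```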


2.1.5.2. ROQ. Firms often undertake initiatives that ensure the long-term viability of the firm and provide the highest return on investment (ROI). Rust et al. [25] introduced the ROQ method, a technique for measuring the impact of attribute performance on ROI. The ROQ method identifies the optimal level of customer service attribute performance and estimates the projected ROI, given that level of attribute performance. Researchers recommend that attributes with the highest ROI be given higher priority for improvement [25]. As an alternative, some researchers conduct this analysis with other important business measures such as market share, sales revenue, profitability, and customer retention [9].

Both the cost–time matrix and the ROQ method make contributions to the literature. Yet it is uncertain whether these techniques have been widely adopted by practitioners. In short, do practitioners use these techniques? Furthermore, what other variables, techniques, and methods do leading-edge practitioners use to identify improvement opportunities from the customer's perspective? Overall, there is scant research attention given to identifying customer-driven improvement opportunities. The following qualitative method was employed to explore these issues and to fill these gaps in the literature.


3. Research methodology

The purpose of this study is to examine how leading-edge practitioners identify improvement opportunities from their customer satisfaction data. How do they gain insight into identifying improvement opportunities? Once improvement opportunities are identified from data analysis, how are certain initiatives chosen over others? Are the methods used by practitioners similar to or distinct from those discussed in the literature?

Qualitative research was chosen to explore this phenomenon because it is especially proficient at allowing new concepts to emerge from the data [26]. Qualitative research is an excellent tool for gaining deep insight into the nature of constructs and their roles in theoretical models, making it well suited to the purpose of this study. Even as research matures, qualitative research is a valuable tool able to confirm, contrast, and contribute unique findings to the literature [6,27]. This section discusses: (1) selecting a sample of best practice companies, (2) collecting qualitative data, (3) analyzing qualitative data, and (4) testing for data and interpretation quality.

3.1. Selecting and recruiting a sample of best practice companies

Selecting appropriate companies was an important part of the research process. Best practice companies were selected on the following criteria:

- The company has won a quality award.
- The company has been discussed as a best practice company in a respected publication.
- The company has presented best practices at a recent practitioner conference.

Each company selected for this study meets at least one of the criteria, and most meet all three. Participants from these companies were selected and recruited for the study, resulting in participants from the following companies: Hewlett-Packard, Sun Microsystems, Federal Express, Subaru, New Holland, AT&T WorldNet Services, US West, and Eastman Chemical.

3.2. Qualitative data collection

To collect data for this study, in-depth interviews were conducted with the person responsible for leading and managing the collection, analysis, and use of customer-driven data. Often, participants had titles such as Director of Customer Satisfaction, Vice President of Quality, or Manager of Customer Loyalty. An interview guide was sent to some participants prior to the interview, although the actual interviews were loosely structured and free flowing. Some interviews were conducted at the participant's workplace to preserve context, while others were conducted over the phone because of the participant's location. Interviews began with general background statements (e.g., "Tell me about your responsibilities."). Then, open-ended statements allowed participants to discuss whatever they felt was important about their program (e.g., "Tell me about your customer satisfaction program."). The goal was to let participants tell their story from their own perspective. Probing statements were used to clarify responses and to dig deeper into the participant's thought processes (e.g., "Tell me more about that" or "What exactly do you mean by that?").

Each interview ranged from 1.5 to 2.5 h. All interviews were audio-taped and then transcribed for data analysis, resulting in over 300 single-spaced pages of text. The researcher also attended each participant's presentation at a customer satisfaction and continuous improvement conference. Notes from these presentations, along with other supporting documentation, were used as data for this study. On average, the researcher collected 3 h of verbal reports from each participant.



3.3. Qualitative data analysis

Grounded theory analysis techniques were used to examine the data. Grounded theory uses a systematic set of procedures to develop constructs, themes, and theoretical models [28] and is quickly gaining acceptance in the research community as a rigorous approach to conducting qualitative research [2,29]. Primarily, two analysis techniques were employed: open and axial coding.

3.3.1. Open coding

Open coding is an analysis technique in which the researcher breaks apart qualitative data to assess its characteristics, properties, and conceptual dimensions [28]. Open coding techniques were employed with the raw data to discover how participants identify customer service attributes for improvement opportunities. In this stage of analysis, the researcher asked numerous questions of the data, such as "What is going on here?" and "What does it mean?" At first, the analysis was conducted at a holistic level; for example, the major themes of each interview were examined. The process then continued at more specific and detailed levels (i.e., passages, paragraphs, and even particular sentences, phrases, and words). This analysis was conducted within each interview, then across interviews. Results were stringently challenged and tested across different interviews, looking for both similarities and differences, and were communicated back to some participants for clarification and appropriate interpretation.

3.3.2. Axial coding

Axial coding focuses on developing relationships between the themes or constructs developed from open coding [28]. During this process, the researcher developed working hypotheses between constructs, particularly those associated with identifying improvement opportunities. Negative case analysis was used across interview transcripts in an attempt to disconfirm working hypotheses; if a proposed hypothesis holds, there is stronger evidence for the relationship. The result of open and axial coding is an improvement model that is grounded in data.

3.4. Assessing trustworthiness of qualitative research

In qualitative studies, it is important to ensure the quality of the data and of their interpretation. To assess the trustworthiness of the data and analysis, the framework proposed by Lincoln and Guba [30] was applied. Within this framework, different techniques were employed to ensure and test for (1) data integrity, (2) credibility, (3) dependability, and (4) confirmability. For more information about these categories and their corresponding techniques, see Appendix A.

Concerning the analysis techniques used to ensure trustworthiness of the findings, the interviewer first promised to safeguard each participant's identity.

This promise, along with the development of rapport, gave participants the psychological freedom to discuss a wide range of issues. Judging from the candor of their statements, participants seemed quite comfortable discussing important issues. Next, triangulation across sources was used to test for redundant results, ensuring that findings did not come from a single participant but from the majority of participants. Redundant results among participants were quite evident. Negative case analysis (discussed earlier) is at the heart of the constant comparative process in grounded theory and was used in all data analysis activities [28]. This process was used in conjunction with theoretical sensitivity, a technique that uses familiarity with the literature to help researchers become more exacting in their analysis; theoretically sensitive researchers may take working hypotheses from the literature and test them in the data. This was done throughout the analysis.

Once preliminary findings were generated, a confirmability audit was conducted with an independent auditor. The auditor was given several transcripts and the preliminary findings from data analysis, and assessed whether or not the findings were represented in the data. As a result, some modifications were made, yet the auditor confirmed the researcher's conceptualization. Finally, member checks were conducted with a subset of participants. After data collection and analysis for individual interviews, the researcher provided a subset of participants with the preliminary findings from their interviews, and they confirmed the findings as valid representations of their interviews. Once the entire study was completed, a report was sent to participants and their feedback was requested concerning the report's accuracy. While not all participants responded to this request, confirmation was given by those who did. While no research is without limitations, the author has confidence in the research results.

4. Research results

The research results first discuss practitioners' use of performance–importance analysis, followed by its role in a customer-driven improvement model (see Table 1). Throughout this section, participant quotes are supplied as displayed quotations to add insight into the results.

4.1. Performance–importance analysis

Consistent with the literature, performance–importance analysis is at the core of identifying improvement opportunities for best practice companies. The following methods are used to assess the performance and importance of customer satisfaction attributes.

Table 1
Customer-driven improvement model

Analyze performance: Performance ratios are typically used to assess performance relative to standards such as top-box performance, the best competitor, and satisfaction goals.

Analyze importance: Employing a variety of statistical tools, practitioners in this study tend to rely on statistically inferred importance ratings, with some companies examining both stated and inferred ratings simultaneously.

Use multiple customer listening tools: Performance–importance analysis is conducted using data from multiple customer listening tools: (1) critical incident survey, (2) relationship survey, (3) benchmark survey, (4) customer complaints, (5) won–lost and why survey, and (6) customer contact employees.

Conduct complementary improvement analysis: In addition to performance–importance analysis, best practice suggests examining trends, verbatim responses, and delta charts for each customer listening tool.

Assess firm capabilities: Once potential improvement opportunities are identified, firms look internally to assess their expertise, capabilities, and resources to determine whether it is feasible to undertake the improvement initiative.

Examine improvement costs: Best practice companies focus on the costs of each improvement initiative, assessing whether the firm can afford to improve the selected attributes.

Assess ROI: More important than costs, firms want to evaluate the financial impact of improving attribute performance, assessing the ROI of each improvement initiative.

Select attributes, set goals, and monitor improvements: Once attributes are selected for improvement, best practice companies set reasonable customer satisfaction goals. Future satisfaction is assessed to measure the success of the improvement initiative.

4.1.1. Performance

Consistent with the literature, the data show that best practice firms measure performance relative to a best competitor. Rather than measuring comparative performance directly on surveys, best practice firms measure the actual performance of the firm, followed or preceded by the performance of, typically, their best competitor. Most best practice firms use the performance ratios put forth by Gale [3] (dividing firm performance by best-competitor performance), yet two firms also implemented a gap analysis, subtracting the performance of a best competitor from that of the firm. In contrast to the literature, these firms did not adhere to a "difference of one or more" as the threshold for a significant difference in performance; instead, they used statistical significance to identify differences. No company measured comparative performance scores directly on their satisfaction surveys.

Consistent with the literature, best practice companies also examine actual performance to find shortfalls in industry performance.


This method identifies attributes on which industry performance is low, an area that represents a significant opportunity to gain a competitive advantage. Data from this study confirm this approach and suggest that best practice companies also use "top-box" scores to identify excellence in performance. To interpret actual performance, best practice companies use top-box performance as the standard of excellence. For example, on a 10-point performance scale, mean performance scores below 9 are considered deficiencies; the split in the performance axis would occur at 9, with scores under 9 being likely candidates for improvement.

Best practice companies also treat top-box customers as one of their primary goals, often examining both the number and percentage of top-box customers. Much of their effort is centered on moving customers into the top box, because from their perspective, top-box customers are the only "safe-from-defection" customers. Additionally, participants showed data displaying a curvilinear relationship between sales revenue and top-box performance: moving a customer from an 8 to a 9, a 10% increase in satisfaction, may result in a 25–40% increase in sales revenue. Moving "merely satisfied" customers to top-box customers often results in a dramatic increase in revenue:

Our job is to continually increase the number and percentage of customers who we classify as top-box customers. Employees seem to rally around this concept, it just makes sense to them. And we know that top-box buy more and buy over a longer time frame from us.

Making a contribution to the literature, companies are also concerned with customer satisfaction performance relative to internal goals. Customer satisfaction goals are often set in conjunction with the strategic direction of the firm, and employees often develop customer performance goals for attributes and processes under their direct control. For example, logisticians may set goals for customer satisfaction attributes such as on-time delivery, availability, and order cycle time. Depending on performance levels and the strategic direction of the firm, specific customer satisfaction performance goals would be set for each attribute.

Performance relative to internal goals can also be used as an input to performance–importance analysis. Best practice companies develop performance ratios, yet employ satisfaction goals instead of best-competitor performance as the standard. Satisfaction attributes with performance levels below goal (i.e., a performance ratio below 1) would be identified for improvement. Utilizing relative performance to satisfaction goals as an input to performance–importance analysis is new to both the literature and to many satisfaction practitioners, thus making a contribution to the literature. Furthermore, assessing performance relative to three different standards (a best competitor, top-box performance, and satisfaction goals) and using these scores as inputs into a series of performance–importance analyses also makes a significant contribution to the literature.
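A small sketch of this practice, computing performance ratios for each attribute against the three standards, is shown below; the attribute names, scores, goals, and the 10-point top-box threshold are illustrative assumptions.

```python
# Sketch of the practice described above: performance ratios for each attribute
# against three standards (best competitor, top-box, and an internal goal).
# Attribute names and scores are invented for illustration.

attributes = {
    # attribute: (firm mean, best-competitor mean, internal goal), 10-point scale
    "on-time delivery": (8.6, 8.2, 9.0),
    "availability":     (7.9, 8.4, 8.5),
    "order cycle time": (8.1, 7.6, 8.0),
}
TOP_BOX = 9.0  # assumed standard of excellence on a 10-point scale

for name, (firm, competitor, goal) in attributes.items():
    ratios = {
        "vs competitor": firm / competitor,
        "vs top-box": firm / TOP_BOX,
        "vs goal": firm / goal,
    }
    flags = [k for k, r in ratios.items() if r < 1]   # ratios below 1 signal shortfalls
    print(name, {k: round(r, 2) for k, r in ratios.items()}, "shortfall:", flags or "none")
```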



4.1.2. Importance

In contrast to prior business-to-business research [10,14,15], all participating companies in this study rely on statistically inferred importance ratings. Most of this work was conducted by outside marketing research firms; thus, exact details were not obtained on the specific techniques used to determine relative importance. However, most research firms were using regression, structural equation modeling, or bivariate correlation analysis.

One firm used both stated and statistically derived attribute importance ratings, implementing a matrix that examines the two together. Attributes that receive high importance ratings from both methods are confirmed as most important and are labeled "key drivers." Similarly, attributes that receive low importance ratings from both methods are confirmed as least important and are labeled "secondary issues." Attributes that receive high stated importance ratings and low statistically inferred ratings are deemed "basic requirements": customers say these attributes are important, but performance on them does little to affect overall satisfaction, likely because performance is similar across a number of firms. Attributes that receive low stated importance ratings and high statistically inferred ratings are labeled "enhancers": customers may not ask for these attributes, yet they have a significant impact on satisfaction. This technique is new to both the literature and the majority of customer satisfaction practitioners, thus making a contribution. Given the research issues and limitations surrounding statistically inferred importance ratings, future research needs to explore new methods to overcome these limitations. Additionally, research attention should study the implications of using both stated and statistically inferred importance ratings.
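The sketch below illustrates how such a matrix might be operationalized; the labels follow the description above, while the normalization to a 0-1 range, the cut-off values, and the example attributes are assumptions for illustration.

```python
# Illustrative sketch of a matrix combining stated and statistically inferred
# importance ratings. Cut-offs and scores are assumptions, not study values.

def classify(stated, inferred, stated_cut=0.5, inferred_cut=0.5):
    """Label an attribute from normalized (0-1) stated and inferred importance."""
    if stated >= stated_cut and inferred >= inferred_cut:
        return "key driver"          # confirmed most important
    if stated >= stated_cut:
        return "basic requirement"   # customers say it matters, little effect on satisfaction
    if inferred >= inferred_cut:
        return "enhancer"            # unspoken, but strongly tied to satisfaction
    return "secondary issue"

examples = {"price": (0.9, 0.3), "product quality": (0.95, 0.8),
            "web ordering": (0.3, 0.7), "packaging": (0.2, 0.1)}
for attr, (stated, inferred) in examples.items():
    print(f"{attr:15s} -> {classify(stated, inferred)}")
```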

4.2. Customer-driven improvement model

Traditionally, performance–importance analysis is the primary method for identifying improvement opportunities. Research suggests that practitioners use performance–importance analysis, yet leading-edge practitioners go well beyond this analysis to identify and select customer-driven improvement opportunities. Best practice companies utilize the following improvement model (see Fig. 2): (1) examine performance–importance analysis with multiple customer listening tools; (2) conduct complementary improvement analysis; (3) assess firm capabilities; (4) examine improvement costs; (5) estimate the ROI of improvements; and (6) select attributes, set goals, and monitor improvement performance.

4.2.1. Examine multiple customer listening tools

Data from this study confirm recent research and suggest that best practice companies use multiple customer listening tools to better understand customers and to identify customer service attributes for improvement [29]. This study clearly shows that best practice companies use data from different listening tools to conduct performance–importance analysis from different perspectives. In these analyses, they look for some degree of convergence among listening tools to identify continuous improvement opportunities. Customer listening tools include: (1) critical incident survey, (2) relationship survey, (3) benchmark survey, (4) customer complaints, (5) won–lost and why survey, and (6) customer contact employees.

The data revealed that critical incident surveys are transaction-specific surveys, administered immediately following a certain type of customer service interaction. For example, assume that "product return" is a critical incident from the customers' perspective: a sample of customers who returned products would receive a survey soon after this interaction. Participants noted that critical incident surveys are excellent at quickly identifying service problems and are used primarily at a tactical level, guiding continuous improvement of specific processes.

Relationship surveys are administered on a periodic basis, and their purpose is to capture the customer's overall perception of the supplier's performance across many different interactions. Relationship surveys are viewed as traditional customer satisfaction surveys and are the core foundation of any program designed to listen to customers. In contrast to critical incident surveys, relationship surveys tend to be used at more strategic levels.

Data analysis revealed that benchmark surveys are periodic measurements that capture perceptions of the performance of all major competitors in the marketplace. Whereas relationship surveys primarily sample a firm's current, regular customers, benchmark surveys gather perceptions of performance from the entire market. These surveys usually gather customer perceptions of performance for the top competitors in an industry, allowing the firm to examine its strengths and weaknesses in the overall marketplace, not just with its own customers. Benchmark surveys are used in the strategic planning process by senior management and executives and are the most effective tool for accurately identifying the firm's competitive advantage in the marketplace. While continuous improvement may be a result of this tool, its real value should be breakthrough thinking to gain a sustainable advantage.



Fig. 2. Customer-driven improvement model.

Gathering customer complaints is standard practice for many companies, yet integrating complaint data with customer satisfaction data to identify improvement opportunities is rarely done. Best practice companies carefully monitor complaints and track the performance deficiencies that cause them. Additionally, they gather data concerning the severity of complaints and their ability to satisfy the customer and fix the complaint. Most important, complaint data are used in conjunction with other listening tools to identify customer service attributes needing improvement.

Data analysis revealed that won–lost and why surveys are an excellent tool for gathering the perceptions of recently won or lost customers. This tool captures actual customer behaviors, yet the real value is the customer's reasoning behind the behavior. For example, are we losing customers because of order accuracy, damaged product, or lead-time reliability? Why are we losing strategic customers? What element of customer service is retaining or winning customers? In the last quarter, how many customers were lost due to on-time delivery failures?

Finally, best practice companies listen to their customer contact employees, typically customer service representatives, service technicians, and salespeople. These employees are often in contact with customers every day and are the first to recognize problems experienced by their customers. While employee perspectives will likely be biased, outstanding representatives understand their customers and are often the first to recognize important customer issues.

Each customer listening tool has certain strengths and is most appropriate in certain situations. Yet data analysis revealed that best practice companies use multiple customer listening tools to identify improvement opportunities. Data from each of these sources are used to conduct performance–importance analyses. Similar to the principles of research triangulation, firms look for convergence among listening tools to identify improvement opportunities:

Before I recommend improvement opportunities, I want to see convergence on what the customers are saying from all our sources of customer data. If all our measurements are saying fix this, then let's go for it!
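One way to operationalize this convergence principle is a simple majority rule across tools, as in the hypothetical sketch below; the flagged attributes and the majority threshold are illustrative assumptions.

```python
# Sketch of a simple convergence check across listening tools: an attribute is
# recommended for improvement only if most tools independently flag it. Tool
# names follow the list above; the flagged attributes are hypothetical.

flagged = {
    "critical incident survey": {"order accuracy", "damage-free delivery"},
    "relationship survey":      {"order accuracy", "lead-time reliability"},
    "benchmark survey":         {"order accuracy"},
    "customer complaints":      {"order accuracy", "damage-free delivery"},
    "won-lost survey":          {"lead-time reliability", "order accuracy"},
    "contact employees":        {"order accuracy"},
}

threshold = len(flagged) // 2 + 1   # require a majority of tools to agree
counts = {}
for attrs in flagged.values():
    for a in attrs:
        counts[a] = counts.get(a, 0) + 1

converged = [a for a, c in counts.items() if c >= threshold]
print("Improvement candidates with convergence:", converged)
```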

4.2.2. Conduct complementary improvement analysis

To identify continuous improvement opportunities, the data suggest that practitioners complement performance–importance analysis with other analysis tools. The data suggest that (1) trend analysis, (2) verbatim analysis, and (3) delta chart analysis are extremely relevant.

Trend analysis examines performance over a period of time and projects performance trends into the future. Declining company performance and/or improving key-competitor performance on certain attributes may alert practitioners to take preemptive action to stay ahead of the competition. Whereas performance–importance analysis has a current orientation, trend analysis projects performance into the future. Thus, trend analysis is an excellent tool for anticipating changes in performance and importance that are not accounted for in performance–importance analysis:

We really look at the trends in performance to see where we have been and where we may be going. Are we improving, getting worse, and at what rate of change? When looking for attributes to improve, we often look at trends in performance, relative performance, and importance.
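A minimal trend-analysis sketch along these lines is shown below: it fits a linear trend to quarterly scores for the firm and a key competitor and projects both forward. The quarterly scores and the one-year horizon are invented for illustration.

```python
# Minimal trend-analysis sketch: fit a linear trend to quarterly performance
# scores and project them forward. The quarterly scores are illustrative.

import numpy as np

quarters = np.arange(8)                                # last eight quarters
firm = np.array([8.6, 8.5, 8.5, 8.4, 8.3, 8.3, 8.2, 8.1])
competitor = np.array([7.8, 7.9, 8.0, 8.0, 8.1, 8.2, 8.3, 8.4])

firm_slope, firm_b = np.polyfit(quarters, firm, 1)
comp_slope, comp_b = np.polyfit(quarters, competitor, 1)

horizon = 12                                           # project one year ahead
firm_proj = firm_slope * horizon + firm_b
comp_proj = comp_slope * horizon + comp_b

print(f"firm trend {firm_slope:+.2f}/qtr, competitor {comp_slope:+.2f}/qtr")
if firm_proj < comp_proj:
    print("Projected crossover: competitor overtakes the firm; consider preemptive action.")
```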

The data revealed that best practice companies often ask open-ended questions (e.g., "How can we improve?") on their customer satisfaction surveys.



Performance–importance analysis quantitatively infers which customer service attributes should be improved, whereas open-ended improvement questions directly ask customers which attributes should be improved. Because of this difference, verbatim comments often provide insight from a different perspective. In addition to identifying what should be improved, verbatim analysis sheds light on how, and to what level, customer service should be improved. Finally, the data revealed that internal employees connect with and truly listen to customer verbatim comments:

We have a system set up now with our surveys, where we can take all of the verbatim comments. We gather these up and publish them. We send them out to the key people within the company, so they can see what the customer is saying. Now we're about a month away from posting all these verbatim comments from customers on our Intranet. The vision is a guy in a factory in Pennsylvania, on his lunch hour, he has been going to the computer terminal and asked the computer what is the customer saying these days about this product that I build? It is amazing for individual employees once they see what the actual customer is saying, they can act on it. They are really tied into the verbatims, which blows me away. But they are words, and they understand them.

Because performance–importance analysis is plotted on a matrix, precisely examining the size of performance gaps may be difficult [9]. Delta charts help practitioners interpret actual and relative performance scores more precisely and accurately than performance–importance analysis. Delta charts contain all customer service attributes in descending order of attribute importance, with the most important attributes at the far left of the chart, and for each attribute, the performance gap or ratio is graphed. Displaying attributes this way allows practitioners to examine performance while simultaneously examining the relative importance of attributes. Best practice suggests that practitioners conduct this analysis with both relative and actual performance. This analysis is often conducted immediately after or in conjunction with performance–importance analysis:

Because performance – importance analysis is plotted on a matrix, precisely examining the distance in performance gaps may be difficult [9]. Delta charts help practitioners interpret actual and relative performance scores more precisely and accurately than performance – importance analysis. Delta charts contain all customer service attributes in descending order of attribute importance, with the most important attributes starting at the far left of the chart. For each attribute, the performance gap or ratio is graphed. Delta charts display attributes in descending order of importance along with the direction and magnitude of the performance gap or ratio. This allows practitioners to examine performance while simultaneously examining the relative importance of attributes. Best practice suggests that practitioners conduct this analysis with both relative and actual performance. This analysis is often conducted immediately after or in conjunction with performance – importance analysis: Delta charts are a great way to look at this data, more precise than quadrant analysis (performance – importance analysis) and just easier to look at, it just makes more sense to me.

4.2.3. Assess firm capabilities

To this point, data analysis tools have identified potential customer service attributes for improvement. Data from this study suggest that these attributes are then examined qualitatively. In the current environment, companies are often undergoing numerous strategic initiatives. In some situations, the organization may be stretched too thin, and some improvements may be necessary yet not feasible:

Once you have analyzed the data, you have to ask yourself, can we fix this? Do we have the expertise to fix this? Does our strategy say we should fix this? Is now a good time to fix this? Do we have the capabilities to do it right? We simply cannot do everything the customer may want us to do.

In this assessment, the feasibility of undertaking such improvements should be considered along with the current and potential capabilities of the firm. The following questions should be asked:

- All things considered, is it currently feasible to undertake this improvement initiative?
- Do we currently have expertise in this area?
- Do we want to develop expertise in this area?
- Do we have the capabilities to make the necessary improvements?
- Can we outsource the necessary capabilities and expertise?
- Concerning outsourcing, what level of performance can we expect to obtain?

4.2.4. Cost, not time, analysis

Consistent with the literature, practitioners examine the cost of conducting certain improvement initiatives [24]. In an environment of limited resources, firms need to examine the resources and costs necessary to improve performance. Internal competition for resources is often intense. During downturns in financial performance, cash flow may be constrained and competition for resources increases; regardless of an initiative's return, it may not receive funding in tough financial times. During data collection for this study, one best practice company was experiencing a downturn in financial performance, and many necessary improvements were put on hold due to scarce resources. Although the initiatives were projected to deliver a significant ROI, the cash flow required to make these investments was too high:

Can we afford to make the improvement? You always have to look at the costs involved with improving performance.

In contrast to the literature, best practice companies did not treat the time required to make an improvement as a deciding factor. While they examine the time necessary to make an improvement, long time frames did not seem to exclude an initiative from being undertaken.

4.2.5. ROQ

While the cost to improve a customer service attribute is relevant and important, the data show that in most circumstances, the financial return on that investment is a more compelling measure.


In this study, the ROQ method was not currently utilized, yet participants were moving in this direction. Participants discussed implementing ROQ and similar methodologies as a goal to achieve in the future. In short, participants wanted to be able to examine the ROI of any and all improvement efforts:

We're not there yet, but that is where we are trying to go. We want to know if we improve this attribute by 10%, what will the impact be on sales, profit, and retention. That's powerful.
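As a hedged illustration of the kind of calculation participants were aiming for, the sketch below computes a simple ROI for one improvement initiative; every input (revenue lift, margin, cost, horizon) is a hypothetical planning assumption rather than a figure from the study.

```python
# Hedged sketch of an ROI-style calculation for one improvement initiative.
# Every number below is a hypothetical planning input.

improvement_cost = 400_000          # cost of the improvement initiative ($)
current_revenue = 20_000_000        # annual revenue from affected customers ($)
expected_revenue_lift = 0.03        # e.g., 3% revenue gain from higher retention
contribution_margin = 0.25          # margin earned on the incremental revenue
years = 3                           # horizon over which the gain is counted

incremental_profit = current_revenue * expected_revenue_lift * contribution_margin * years
roi = (incremental_profit - improvement_cost) / improvement_cost
print(f"Incremental profit over {years} years: ${incremental_profit:,.0f}")
print(f"Simple ROI: {roi:.0%}")
```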

4.2.6. Select attributes, set goals, and monitor performance

Taking the prior steps into consideration, attributes and internal processes are selected for improvement initiatives. Once attributes are selected, practitioners set performance goals: given the improvement initiatives, what level of customer-perceived performance can be expected? As with most management frameworks, it is imperative to monitor performance and compare it to goals. In short, this allows practitioners to know whether the initiative was successful and the degree of its success.

5. Implications and future research

The customer-driven improvement model makes a significant contribution to the literature by refining, modifying, and expanding current customer-driven improvement models. Traditional performance–importance analysis is modified and incorporated into an integrative framework for identifying customer-driven improvement opportunities. In short, practitioners need to conduct performance–importance analysis appropriately, yet go well beyond this technique in identifying improvement opportunities.

While the model makes a significant contribution, future research is needed. Determining importance ratings for customer service attributes remains an important issue: while this article examines many relevant issues, consensus among academic researchers and practitioners is far from being reached [21]. Future research needs to address the issue of determining customer importance ratings and develop new methods for doing so. Data mining techniques, such as neural networks and decision trees, overcome limitations of traditional statistics and may offer promise in this area [22]. Furthermore, latent class regression may be an important tool for calculating statistically inferred importance ratings; it essentially develops segments of customers based on similar regression coefficients, yielding a customer segmentation driven by attribute importance ratings.

Data from this study also suggest that integrating customer performance measures with internal performance measures (internal quality, productivity, etc.) to identify improvement opportunities is critical.


Future research needs to explore how different types of performance measures can be integrated into an early warning system that is capable of quickly identifying areas needing improvement.

Appendix A. Summary of trustworthiness analyses

Data integrity (How do we know whether the findings are based on false information from the informants?)
Technique: Safeguarding participant identities.
Result: Participants had the psychological freedom to discuss a wide range of possibly sensitive issues.

Credibility (How do we know whether or not to have confidence in the findings?)
Techniques: Triangulation across participants; negative case analysis.
Result: The researcher looked for redundant results across participants; a process used throughout data analysis stringently tested working hypotheses identified in the data.

Dependability (How do we limit interpretation instability?)
Technique: Member checks.
Result: A subset of participants was presented with preliminary findings from the study to make sure that the researcher's interpretation was reasonable.

Confirmability (How do we know the degree to which the findings emerge from the context and the participants and not solely from the researcher?)
Technique: Confirmability audit.
Result: Independent auditors examined both the data (i.e., transcripts) and preliminary findings communicated in a rough draft.

References

[1] Kotler P, Armstrong G. Marketing: an introduction. New York (NY): Prentice-Hall, 1998.
[2] Weitz BA, Jap SD. Relationship marketing and distribution channels. J Acad Mark Sci 1995;23:305–20.
[3] Gale BT. Managing customer value. New York: The Free Press, 1994.
[4] Daugherty PJ, Stank TP, Ellinger AE. Leveraging logistics/distribution capabilities: the effect of logistics service on market share. J Bus Logist 1998;19(2):35–51.
[5] Reichheld FF. Loyalty-based management. Harv Bus Rev 1993;64–73.
[6] Garver MS, Cook RL. Best practice customer value and satisfaction cultures. Mid-Am J Bus 2001;16(1):11–21.
[7] Oliver RL. Satisfaction: a behavioral perspective on the consumer. New York (NY): McGraw-Hill, 1997.
[8] Hawes JM, Rao CP. Using importance–performance analysis to develop health care marketing strategies. J Health Care Mark 1985;5:19–25.
[9] Brandt R. Customer satisfaction management frontiers, vol. II. Fairfax (VA): Quality Univ. Press, 1998.
[10] Harding FE. Logistics service provider quality: private measurement, evaluation, and improvement. J Bus Logist 1998;19(1):103–20.
[11] Martilla JA, James JC. Importance–performance analysis. J Mark 1977;41:77–9.
[12] Myers J. Measuring customer satisfaction: hot buttons and other measurement issues. Chicago (IL): American Marketing Association, 2001.
[13] Swinyard WR. Strategy development with importance/performance analysis. J Bank Res 1980;228–34.
[14] Lambert DM, Sharma A. A customer-based competitive analysis for logistics decisions. Int J Phys Distrib Logist Manage 1990;20(1):17–24.
[15] Lambert DM, Stock JR. Strategic logistics management. New York (NY): McGraw-Hill, 1993.
[16] Higgins KT. The value of customer value analysis. Mark Res Mag Manage Appl 1998;10(4):39–48.
[17] Peterson RA, Wilson WR. Measuring customer satisfaction: fact and artifact. J Acad Mark Sci 1992;20(1):61–71.
[18] Jones TO, Sasser WE. Why satisfied customers defect. Harv Bus Rev 1995;88–99.
[19] Di Paula A. Assessing customer values: stated importance versus derived importance. Mark News 1999;33(12):39.
[20] Woodruff RB, Gardial SF. Know your customer: new approaches to understanding customer value and satisfaction. Cambridge (MA): Blackwell, 1996.
[21] Taylor SA. Assessing regression-based importance weights for quality perceptions and satisfaction judgments in the presence of higher order and/or interaction effects. J Retail 1997;73(1):135–59.
[22] Garver MS. Data mining applications in customer satisfaction research. Mark Res 2002;14(1):8–12.
[23] Mittal V, Ross WT, Baldasare PM. The asymmetric impact of negative and positive attribute-level performance on overall satisfaction and repurchase intentions. J Mark 1998;62(1):33–47.
[24] Sethna BN. Extensions and testing of importance–performance analysis. Bus Econ 1982;17:28–31.
[25] Rust RT, Zahorik AJ, Keiningham TL. Return on quality (ROQ): making service quality financially accountable. J Mark 1995;59:58–70.
[26] Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. New York: Aldine Publishing, 1967.
[27] Garver MS, Mentzer JT. Salesperson logistics expertise: a contingency framework. J Bus Logist 2000;21(2):113–32.
[28] Strauss A, Corbin J. Basics of qualitative research. Newbury Park: Sage Publications, 1990.
[29] Garver MS. Listening to customers. Mid-Am J Bus 2001;16(2):41–54.
[30] Lincoln YS, Guba EG. Naturalistic inquiry. Beverly Hills (CA): Sage Publications, 1985.

Michael S. Garver is an assistant professor of marketing at Central Michigan University and a best practice researcher in customer value and satisfaction measurement and management.