New service development competence in retail banking: Construct development and measurement validation


Journal of Operations Management 25 (2007) 825–846 www.elsevier.com/locate/jom

Larry J. Menor a,*, Aleda V. Roth b,1

a Richard Ivey School of Business, The University of Western Ontario, 1151 Richmond Street North, London, Ont. N6A 3K7, Canada
b College of Business and Behavioral Science, Clemson University, 343A Sirrine Hall, Clemson, SC 29634, United States

Received 7 September 2004; received in revised form 9 June 2006; accepted 21 July 2006
Available online 7 November 2006

Abstract

New service development (NSD) has emerged as an important area of research in service operations management. However, NSD empirical investigations have been hindered by the lack of psychometrically sound measurement items and scales. This paper reports a two-stage approach for the development and validation of new multi-item measurement scales reflecting a multidimensional construct called NSD competence. NSD competence reflects an organization’s expertise in deploying resources and routines, usually in combination, to achieve a desired new service outcome. This competence is operationalized as a multidimensional construct reflected by five complementary dimensions: NSD process focus, market acuity, NSD strategy, NSD culture, and information technology experience. In the first stage of measure development, we analyse judgment-based, nominal-scaled data collected through an iterative item-sorting process to assess the tentative reliability and validity of the proposed measurement items. Our results demonstrate that a reduced set of measurement items have reasonable psychometric properties and, therefore, are useful inputs for multi-item measurement scale development. In the second stage of measurement development, we conduct a confirmatory factor analysis of the five NSD competence dimensions using survey data collected from a sample of retail bank key informants and confirm the unidimensionality, reliability, and validity of the proposed five multi-item scales. The NSD competence scales developed in this research may be used to advance scholarly understanding and theory in NSD. Further, these NSD scales may provide a useful diagnostic and benchmarking tool for managers seeking to assess and/or improve their firm’s service innovation expertise.

© 2006 Elsevier B.V. All rights reserved.

Keywords: New service development; Scale development; Empirical measurement methodology

* Corresponding author. Tel.: +1 519 661 2103; fax: +1 519 661 3959. E-mail addresses: [email protected] (L.J. Menor), [email protected] (A.V. Roth).
1 Tel.: +1 864 656 2011; fax: +1 864 656 2015.
0272-6963/$ – see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.jom.2006.07.004

1. Introduction

New service development (NSD) has emerged as an important research topic in service operations management (Menor et al., 2002; Fitzsimmons and Fitzsimmons, 2000). While the development of new services has long

been considered by scholars and managers as an important competitive necessity in many service industries (Miles, 2005; eBRC, 2005; Tidd and Hull, 2003; Meyer and DeTore, 1999; Gallouj and Weinstein, 1997), it has remained among the least understood topics in the service management and innovations literature (Drejer, 2004; de Jong et al., 2003; Johnson et al., 2000; Tax and Stuart, 1997). As a result, current theory and understanding of the strategies and tactics for developing new services is inadequate, especially given the conventional wisdom that service innovations are among the critical drivers of competitiveness for most service


firms (Berry et al., 2006; Gustafsson and Johnson, 2003; Cooper and Edgett, 1999; Karmarkar and Pitbladdo, 1995; Bharadwaj et al., 1993). Additional NSD understanding and theory that advances managerial insights and practices is required given the prevailing anecdotal evidence suggesting that service innovation efforts are typically carried out non-systematically (Thomke, 2003a; Griffin, 1997; Roth et al., 1997). Retail banks, for example, ‘‘have traditionally downplayed product and service development, which was reflected by a near universal absence of R&D departments’’ (Thomke, 2003b: 115); this partly explains why the current innovation success rate of financial services hovers around 3% (Business Week, 2005).

We define a new service as an offering not previously available to the firm’s customers that results from either an addition to the current mix of services or from changes made to the service delivery process. Hence, our definition reflects both service concept and service delivery system innovations. However, the need to align service concepts and delivery systems with target market requirements for effective service encounters (Roth and Menor, 2003) complicates most NSD efforts.

For the longest time, the commonly accepted view in NSD was that service innovations ‘‘happen’’ as a result of intuition, flair, and luck (Langeard et al., 1986). Scholars have begun to seriously question this assertion and now suggest that the effective development of new services requires formal processes and practices like those typically found in new product development (NPD) (Fitzsimmons and Fitzsimmons, 2000). This has sparked new debate about what constitutes, and contributes to, effective NSD (Menor et al., 2002; Johne and Storey, 1998).

The objective of this research is to advance NSD theory and understanding through conceptually and empirically examining the critical managerial dimensions underlying service innovation. We present a two-stage procedure for the development and validation of new multi-item measurement scales in NSD, specifically those reflecting the NSD competence construct. A careful review of the literature reveals that empirical research in NSD has largely been the domain of services marketing scholars (see Johne and Storey, 1998) and that, for the most part, understanding and theory has been hindered by the lack of psychometrically sound or generally accepted multi-item measurement scales. Thus, the development of theoretically and psychometrically sound metrics reflecting NSD competence would be a valuable research contribution. This study, while specifically utilizing retail banking data, contributes to the measurement effort of theoretically important constructs related to innovation (e.g.,

Gatignon et al., 2002), and is among the first of its kind in the literature to adopt a multidimensional, complementary conceptualization of NSD resources and practices.

The remainder of this paper is organized as follows. First, we introduce the NSD competence construct and review the relevant literature relating to each of its five complementary dimensions. Second, we illustrate a two-stage approach for evaluating the psychometric properties of the measurement items posited to reflect theoretically important constructs representing each of the five NSD competence dimensions. In the first, or ‘‘front-end’’, stage of measurement item and scale development, we establish tentative measurement item reliability and validity using multiple rounds of item sorting to obtain responses from independent panels of informed judges. In the second, or ‘‘back-end’’, stage we further demonstrate the measurement properties of the items and new multi-item scales by applying confirmatory analyses on survey data collected from NSD key informants. Finally, we offer research and practice implications associated with the measurement of NSD competence before concluding.

2. New service development (NSD) competence

The contribution of operations management scholars to the study of NSD has been limited to conceptual frameworks (e.g., Bitran and Pedrosa, 1998; Voss et al., 1992), a few field-based studies of NSD practices (e.g., Froehle et al., 2000; Noori et al., 1997), and several volumes of papers on NSD (e.g., Verma et al., 2002; Fitzsimmons and Fitzsimmons, 2000). We draw on the extant service management and innovation literatures, and a series of interviews conducted with service professionals involved in NSD-related activities, to identify five critical, complementary dimensions reflecting a firm’s competence in NSD: NSD process focus, market acuity, NSD strategy, NSD culture, and information technology (IT) experience. Operational definitions as well as related measurement items for each dimension are examined in this paper.

We posit that an NSD competence reflects an expertise that enables an organization to deploy resources and routines, usually in combination, to achieve a desired new service end. Our conceptualization of ‘‘competence’’ is consistent with the business strategy literature, where a firm’s competence includes the portfolio of skills and resources it possesses along with the way those skills and resources are used to produce outcomes (Sanchez et al., 1996; Fiol, 1991). Our conceptualization is also consistent with the


prevailing operations management literature, where a competence is viewed as a ‘‘bundle of aptitudes, skills, and technologies that the firm performs better than its competitors, that is difficult to imitate and provides an advantage in the marketplace’’ (Coates and McDermott, 2002: 436).

We further conceptualize NSD competence as a critical antecedent to innovation performance. However, an organization’s NSD competence – like most multidimensional constructs – is not directly observable, and its study requires scrutiny of the underlying dimensions that reflect such a competence. The innovation literature identifies a common set of factors – strategic, process, market/environment, and organizational – generally having an impact on NPD performance (Schilling and Hill, 1998; Brown and Eisenhardt, 1995; Montoya-Weiss and Calantone, 1994) and NSD performance (Cooper and Edgett, 1999; Meyer and DeTore, 1999). Previous NSD studies have typically explored the individual effects of one or more of these factors on performance (Menor et al., 2002; Johne and Storey, 1998).

Our conceptual research model is depicted in Fig. 1. As diagrammed in Fig. 1, and verified through our interviews with NSD practitioners, five generally agreed upon factors jointly reflect the firm’s NSD competence or, more generally, its ability to plan, analyse and implement a new service efficiently and effectively. The NSD competence construct that we propose is multidimensional, and it is the covariation, or complementarity, among these underlying dimensions that is critical to the consistent and replicable development of new services. Our modeling of the complementarity among innovation elements is consistent with arguments underlying recent studies of innovation strategies (Cassiman and Veugelers, 2006; Miravete and Pernias, 2006). Further, the multidimensionality – and underlying complementarity – of the NSD competence construct is represented by a second-order factor, which is a parsimonious representation of the covariation of the five competence factors (Edwards, 2001). We specifically model our NSD competence construct as a multidimensional reflective indicator construct such that any changes to this set of competence indicators, being manifestations of the construct, should not alter the conceptual domain of the NSD competence construct (Jarvis et al., 2003).

2.1. New service development competence dimensions

While each dimension has been independently examined as a ‘‘best practice’’ in prior research and, in turn, linked to new service development success, this research suggests that the examination of the

Fig. 1. Conceptual model of NSD competence dimensions.


complementarity of several best practices – parsimoniously represented by NSD competence – is required to improve understanding and theory on the relevant antecedents of NSD performance. We provide below an operational definition of each NSD competence dimension and review the relevant scholarly literature. The initial set of representative items tapping each construct is summarized in Appendix A.

2.1.1. New service development process focus

An NSD process focus indicates the existence of a formalized process for conducting NSD efforts, which allows for simplicity and repetition that fosters greater NSD efficiency and effectiveness. Our definition is consistent with the work of Bowers (1985) and Cooper et al. (1994), who were among the first scholars to recognize the importance of breaking down the NSD process into stages. According to Bowers (1985), an NSD process comprises those activities, tasks, and information flows required of service firms to conceptualize, develop, evaluate, and prepare for market new intangible performances of value to customers. Cooper et al. (1994) hold that the new product process in services includes the activities, actions, tasks and evaluations (such as project screening, market research, product development, and test marketing) that move the new service from the idea stage to launch.

More recently, support for NSD processes has been advocated by a number of researchers who propose that a performance advantage likely accrues to service firms with formalized processes in place used specifically for developing new services (see Johnson et al., 2000 for a review of this literature). Such processes specify the critical activities and participants in the NSD effort. While the development activities and outcomes may vary based on the level of investment or newness of the service (Johnson et al., 2000), what remains important is this: process-focused service firms possess a systematic means of transforming an idea into a new offering. Griffin (1997) notes that NSD processes tend to be less formal than those found in NPD. However, the ‘‘best’’ service firms identified in Griffin’s benchmarking study were found to possess more formal NSD processes across the program of development projects than did the less innovative service firms. Similarly, formal NSD processes were central to services identified as ‘‘world class’’ by Roth et al. (1997).

2.1.2. Market acuity

In this research, market acuity is defined as ‘‘the ability of the service organization to see the competitive environment clearly, and thus to anticipate and respond

to customers’ evolving needs and wants’’ (Roth, 1993: 22). The basic notions behind market acuity are echoed by Cooper et al. (1994: 295), who claim that being market-driven is the ‘‘dominant success ingredient for top performing new service products.’’ Sundbo (1997) finds that consideration of customers, competitors, and market possibilities is the foundation for successful innovation efforts. Market acuity is valuable to NSD success because it requires that the organization continuously collect information on customer needs and competitor capabilities, and use this information to create new services that deliver superior customer value; it contributes to, and facilitates, the service innovation effort (Lucas and Ferrell, 2000).

The inclusion of market acuity as a reflective indicator of a firm’s NSD competence represents our interdisciplinary NSD research focus that integrates marketing and operational insights (Roth and van der Velde, 1991). Indeed, market orientation – which refers to the organization-wide creation of market intelligence, the dissemination of this intelligence across departments, and an organization-wide responsiveness to it (Kohli and Jaworski, 1990) – has been shown to have an impact on performance (Kirca et al., 2005; Slater and Narver, 1999). Similarly, in OM, market acuity has been linked to business performance (Menor et al., 2001).

2.1.3. New service development strategy

Perhaps the most consistently held prescription for development success is that the firm’s new product or new service strategy must be related to the overall business strategy (Cooper and Edgett, 1999; Griffin, 1997; Sundbo, 1997). An NSD strategy aligns the overall business strategy with new services/products and service design/delivery decisions. As such, an NSD strategy enables management to plan for and make available the appropriate resources for specific new service development efforts. An NSD strategy also contributes to distinguishing a service firm’s ‘‘strategic vision’’ (Heskett, 1986)—an understanding of what the firm and its offerings should be.

Fitzsimmons and Fitzsimmons (2004) discuss two prominent strategic frameworks within which a firm can evaluate the alignment between its business strategy and its new service offerings decisions: Heskett’s (1986) notion of the strategic service vision and Chase and Hayes’ (1991) four stages of service firm competitiveness. The common link between these frameworks is found through equating the strategies of ‘‘breakthrough’’ services with the practices of world-class service providers. Roth and van der Velde (1992: 3) define world-class service providers as


‘‘[having] created dynamic processes that provide distinctive value-added products and services, competitive advantage and delight to internal and external customers, stakeholders, suppliers, and partners. World class organizations have internal core capabilities that foster accelerated improvements in human assets, technology, methods, and information flows. These capabilities are synergistic with the total business, and provide a sustainable competitive position in the firm’s target market, given a global economy.’’

NSD competence represents such a core capability. Accordingly, an effective NSD strategy reflects a firm’s NSD competence because it ensures that the appropriate resources and practices necessary to develop services are in keeping with the overall business strategy, and because it ensures that the new service offering’s characteristics, and its delivery, match customer expectations and demands.

2.1.4. New service development culture

NSD culture captures the values and beliefs fostered by the service organization that indicate a willingness and desire to innovate. A positive NSD culture, in theory, facilitates a climate for new service development activity. While the literature is filled with diverse definitions of culture (Kotter and Heskett, 1992), our definition integrates the general notions of an organizational culture and a service culture within the context of NSD.

Numerous definitions of organizational culture exist. Mintzberg (1979: 98) defines organizational culture as ‘‘the traditions and beliefs of an organization that distinguish it from other organizations and infuse a certain life into the skeleton of structure.’’ According to Schein (1985: 9), organizational culture is ‘‘a pattern of basic assumptions – invented, discovered or developed by a given group as it learns to cope with its problems of external adaptation and internal integration – that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.’’ Deshpande and Webster (1990: 4), in turn, characterize organizational culture as ‘‘the pattern of shared values and beliefs that help individuals understand organizational functioning and thus provide them with norms for behavior in the organization.’’ Thwaites (1992) notes that various organizational influences have an impact on the efficiency and effectiveness of NSD processes. Many of these organizational influences, such as risk management, management style, and organization structure, fall under the overarching construct of organizational culture.

Schlesinger and Heskett’s (1991) portrayal of the service-driven service company highlights the managerial importance of a service culture, which Grönroos (1990: 244) describes as ‘‘a culture where an appreciation for good service exists, and where giving good service to internal as well as ultimate, external customers is considered a natural way of life and one of the most important norms by everyone.’’ A service culture plays a crucial role in service management success, as it allows contact personnel to act with considerable autonomy in the delivery of effective service encounters and the design and delivery of new services (Chase et al., 2000; Roth et al., 1997). Griffin (1997) notes that a positive culture and climate for new product development (NPD) is a necessity for innovation success. Just as in NPD, we propose that NSD culture is an important dimension of NSD competence. Yet despite culture’s theoretical importance to business success, the NSD culture construct has received limited attention in the literature to date, in part – we feel – due to the lack of reliable and valid metrics.

2.1.5. Information technology experience

A central research theme emanating from studies in information systems has been the role of information technology (IT) in creating and sustaining competitive advantage (Price, 1998; Ross et al., 1998). Fiedler et al. (1996) observe that IT utilization enhances information processing and coordination activities in operating environments characterized by uncertainty. Additionally, Keller’s (1994) empirical analysis of the performance of research and development project groups supports the hypothesis that a fit between technology and information processing needs in non-routine activities is a predictor of project quality. Keller’s findings also support Daft and Lengel’s (1986) proposition that the amount of information processing and the communication media used should be appropriate for the uncertainty and ambiguity inherent in the task. These studies provide explanatory insights on earlier research such as Quinn and Pacquette (1990), who show that technology deployment can facilitate innovation, especially when systems are synergistic and supportive.

As a dimension of NSD competence, and consistent with the within-firm IT knowledge and utilization arguments of Tippins and Sohi (2003), a firm’s IT experience refers to the use of IT for facilitating or improving inter- and intra-organizational coordination of activities and information processing in the NSD process. It enables the creation of services that are more


responsive to customer needs (Sasser and Fulmer, 1990). Indeed, Heskett et al. (1990) note that most successful service organizations are information-centered largely through the adoption of IT systems. Therefore, we posit that a firm’s scope and intensity of IT use for NSD purposes also reflects greater NSD competence. As such, IT is posited to be an important component of the portfolio of NSD resources since, in development activities, levels of uncertainty are often high and actions non-routine. For NSD activities, IT experience makes greater NSD competence possible through improved information processing (Goodhue et al., 1992). IT experience is posited to be highly related to the other NSD competence dimensions previously discussed, such as NSD process focus and market acuity.

2.2. NSD competence construct summary

Perhaps the most significant takeaway from the NSD literature reviewed here is the likely existence of complementarities between the five NSD competence dimensions identified in Fig. 1. Complementarity exists when the marginal return to an activity/resource is increased by the presence of another activity/resource (Powell and Dent-Micallef, 1997), and is consistent with the system perspective currently being advocated in the study of innovation practices and strategies (Edquist, 2005). The vast majority of NSD empirical studies published in the service management literature to date have focused on the independent effects of NSD practices (Menor et al., 2002), which has resulted in mixed findings on what constitutes the critical antecedents for service innovation success. However, empirical studies by Griffin (1997) and Roth et al. (1997) have reinforced the importance of looking at NSD holistically. Roth et al. (1997), for example, found that the NSD process is one component of several interrelated ‘‘practice’’ constellations that requires systematic management for improved services performance.
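This definition of complementarity can be stated formally. As an illustration only (the notation is ours, not the paper's): for a payoff function that is smooth in the levels of the NSD activities and resources, a pair of activities is complementary when the cross-partial derivative of payoff is non-negative,

```latex
\frac{\partial^{2} \pi(x_1,\ldots,x_5)}{\partial x_i \, \partial x_j} \;\ge\; 0, \qquad i \ne j,
```

where \(x_1,\ldots,x_5\) denote the levels of the five NSD competence dimensions and \(\pi\) denotes NSD performance. Increasing one dimension then weakly raises the marginal return to the others, which is why the dimensions are expected to covary and can be parsimoniously captured by a single second-order factor.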
We contend that the NSD competence dimensions are complementary and encompass facets of new service development planning, analysis and implementation (cf. Krishnan and Ulrich, 2001). This complementarity, in turn, is associated with higher NSD performance (see Fig. 1). Our arguments are consistent with Van de Ven (1986), who notes that the interdependence and management of ideas, people, transactions, and context are critical to the management of successful innovation. In order to further managerial understanding and scholarly theory in the management of NSD complementarities, we next report on a two-stage approach used in the development and validation of a multidimensional measure of NSD competence that reflects the complementary system of innovation practices and organizational traits discussed earlier.

3. Development and validation of NSD competence measurement items and scales

Good measurement is a prerequisite for good empirical science; however, multi-item measurement and scale development must be preceded by sound conceptual development of the theoretically important construct(s) being defined (see Hinkin, 1998; Churchill, 1979). The new multi-item measurement scales that we develop and validate, posited to reflect NSD competence, were derived from measurement items either previously cited in – or motivated by – existing theoretical and empirical studies (see Appendix A). Our review of the literature was complemented by in-depth discussions with executives familiar with NSD practices in their organizations. Hence, the constructs and associated measurement items for each of the five dimensions of NSD competence were deemed to be content valid at the outset of stage one.

Given the increasing use of survey-based empirical research in OM (Rungtusanatham et al., 2003; Scudder and Hill, 1998), several scholars have begun to question and reassess the methods employed in OM scale development (Hensley, 1999; Malhotra and Grover, 1998). Two ongoing challenges in the development of new multi-item scales in OM, which are intended to reduce measurement error by providing a more robust representation of complex variables (Drolet and Morrison, 2001), are the selection of appropriate measurement items (Little et al., 1999) and the coverage of the construct domain with the desired reliability and validity. To address these challenges, we apply a two-stage approach for developing and validating measurement items and their associated multi-item scales (see Fig. 2). Specifically, we first employ an item-to-construct sorting analysis to establish tentative item reliability and validity at the front-end stage. Then, at the back-end stage, we apply confirmatory analyses to derive stronger assessments of the psychometric properties of our multi-item scales reflecting the NSD competence dimensions.

3.1. Stage one: item-sorting analyses

In the front-end stage, we subject to rigorous empirical scrutiny the perceived adequacy of the NSD competence


Fig. 2. Two-stage approach for new measurement development.

measurement items and construct definitions through four rounds of item-sorting exercises. Each item-sorting iteration was administered to an independent sample of judges. In the last two rounds, we used expert judges with the appropriate knowledge, skill, experience, and motivation to evaluate NSD competence in practice. Since our target population was financial services, we selected retail-banking professionals who were the most knowledgeable about their organization’s NSD efforts to be expert judges.

The instrument used for item sorting consisted of a definition of each of the five NSD competence dimensions, a related NSD performance dimension, and a randomized listing of all measurement items (see Hinkin, 1998). Our approach is a modified version of Q-sorting (McKeown and Thomas, 1988), in which respondents are asked to classify items based on their similarity with definitions and descriptions of underlying construct categories. For each item-sorting round, judges were directed to carefully read the descriptions of each of the five construct dimensions and to match each item to the one dimension that they felt was the best fit. Item-sorting analysis, which has not been commonly employed or reported in the OM literature (Hensley, 1999), has long been advocated as an important approach


for assessing face validity when developing new measurement items and scales. However, its underutilization even in more empirically mature disciplines like marketing continues to be a source of concern regarding measurement quality assessment (Hardesty and Bearden, 2004).

Each round of item sorting produced independent samples of judgment-based, nominal-scaled data, in the form of item-to-construct definition classifications. The resulting judgment-based, nominal-scaled data were then used to assess the interrater reliability, substantive validity, and construct validity of the measurement items. Following Hinkin’s (1998) prescriptions, this analysis focused on careful attention to the manner in which items were created and scrutinized prior to their utilization in a back-end stage survey instrument. Fig. 3 further illustrates the front-end framework and specifies the wide array of statistical approaches that can be employed in evaluating judgment-based data (cf. Nahm et al., 2002). The statistical results of the front-end stage are reported in Section 3.1.1.

Due to the design of this item-sorting analysis, which called for the simultaneous scrutiny of measurement items tapping NSD competence dimensions and NSD performance, we also included – solely for this stage of item purification – an assessment of the tentative reliability and validity of the measurement items corresponding to a construct labeled NSD performance. We do, however, examine NSD performance factors based on these items in our examination of nomological validity in Section 3.2.3. This allowed us to further distinguish our NSD

Fig. 3. Front-end stage: measurement item sorting analyses.
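Item-sorting data of the kind produced by these exercises is often summarized, item by item, with Anderson and Gerbing's (1991) substantive-validity indices: the proportion of substantive agreement and the substantive-validity coefficient. A minimal sketch of that calculation follows; the judge votes shown are hypothetical and are not the study's actual items or panel responses.

```python
from collections import Counter


def substantive_validity(assignments, intended):
    """Anderson and Gerbing's (1991) item-sorting indices for one item.

    assignments: construct label each judge assigned the item to
    intended:    the construct the item was written to tap
    Returns (psa, csv), where psa = n_c / N is the proportion of
    substantive agreement and csv = (n_c - n_o) / N is the
    substantive-validity coefficient, with n_o the count for the
    most frequently chosen competing construct.
    """
    n = len(assignments)
    counts = Counter(assignments)
    nc = counts[intended]
    no = max((c for label, c in counts.items() if label != intended), default=0)
    return nc / n, (nc - no) / n


# Hypothetical round-4 panel of 17 judges sorting one NSD process focus item
votes = ["process focus"] * 13 + ["NSD culture"] * 3 + ["NSD strategy"]
psa, csv = substantive_validity(votes, "process focus")
print(round(psa, 3), round(csv, 3))  # 0.765 0.588
```

Values of csv near 1 indicate that judges overwhelmingly matched the item to its intended construct; values near 0 flag items whose wording blurs two construct definitions and therefore warrant revision before the back-end survey stage.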


Table 1
Comparison of interrater reliability: averages over the representative sample of interjudge combinations (Ca/b) evaluated, by sorting round

                                          Round 1   Round 2   Round 3   Round 4
Interjudge agreement percentage, A (%)       78        78        60        75
Cohen’s k                                   .73       .74       .53       .70
Perreault and Leigh’s Ir                    .86       .86       .70       .83

Ca/b denotes the interjudge combination between judges a and b in a given round; the representative combinations evaluated include C1/2, C1/3, C1/4, C2/3, C9/10, and C16/17, as available per round. The number of judgments for each interjudge combination per round corresponds to the number of measurement items classified; the total number of pairwise judgments evaluated in rounds 1, 2, 3, and 4 are 90, 72, 72 and 64, respectively. Independent samples of n judges per sorting round: round 1, n = 3; round 2, n = 3; round 3, n = 10; round 4, n = 17.
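The three interrater statistics reported in Table 1 can each be computed from a pair of judges' item-to-construct classifications. The sketch below uses hypothetical sorting data with k = 6 categories, as in the exercise's five competence dimensions plus NSD performance; the labels and votes are illustrative, not the study's data.

```python
import math
from collections import Counter


def agreement_pct(j1, j2):
    """Raw interjudge agreement percentage, A: share of items both judges
    sorted into the same construct category."""
    return 100.0 * sum(a == b for a, b in zip(j1, j2)) / len(j1)


def cohens_kappa(j1, j2):
    """Cohen's (1960) kappa: observed agreement corrected for the chance
    agreement implied by each judge's marginal category proportions."""
    n = len(j1)
    po = sum(a == b for a, b in zip(j1, j2)) / n
    m1, m2 = Counter(j1), Counter(j2)
    pe = sum(m1[c] * m2[c] for c in m1) / n ** 2
    return (po - pe) / (1 - pe)


def perreault_leigh_ir(j1, j2, k):
    """Perreault and Leigh's (1989) reliability index Ir for k coding
    categories; assumes a uniform chance model across the k categories."""
    f = sum(a == b for a, b in zip(j1, j2)) / len(j1)
    if f < 1.0 / k:
        return 0.0
    return math.sqrt((f - 1.0 / k) * k / (k - 1))


# Example: two judges sorting 10 items into k = 6 categories
j1 = ["PF", "MA", "ST", "CU", "IT", "PF", "MA", "PERF", "CU", "IT"]
j2 = ["PF", "MA", "ST", "CU", "IT", "ST", "MA", "PERF", "CU", "PERF"]
print(agreement_pct(j1, j2))                     # 80.0
print(round(cohens_kappa(j1, j2), 2))            # 0.76
print(round(perreault_leigh_ir(j1, j2, 6), 2))   # 0.87
```

On the same data, kappa is typically the more conservative of the two chance-corrected indices because it penalizes agreement attributable to skewed marginal proportions, which is consistent with how the two statistics are interpreted in the text.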

competence measurement items from those reflecting NSD performance.

3.1.1. Item-to-factor sorting results
In the first two rounds of the four item-sorting iterations, we employed convenience samples of OM graduate students and professors, and in the last two, we used practitioner judges with NSD expertise. For each round, we first scrutinized the measurement items in terms of interrater reliability. Listed in Table 1 are the raw interjudge agreement percentages, Cohen's κ (Cohen, 1960), and Perreault and Leigh's Ir (Perreault and Leigh, 1989). To provide a rough-cut assessment of the items and constructs in rounds 1 and 2, we used a total of six judges (three per round). Hence, in each of the first two rounds, there were only three interjudge combinations to assess: C1/2, C1/3, and C2/3. The number of interjudge combinations to assess in rounds 3 and 4, which had 10 and 17 judges, respectively, increased dramatically to 45 and 136. Results from a representative sample of interjudge combinations are reported in Table 1. The interjudge agreement percentage (A) is the ratio of pairwise agreements in item classifications made between judges to the total number of pairwise judgments possible in each round. The average interjudge agreement percentages were 78, 78, 60, and 75% for rounds 1-4, respectively (Table 1). Because of the simplicity of this measure of interrater reliability, there are no established standards for assessing adequate percentages of agreement. The implied level of reliability may be overstated due to

chance and the number of construct categories; however, this statistic is generally reported as a baseline and is used in conjunction with other measures of reliability, such as in the computation of Ir (Perreault and Leigh, 1989). Perreault and Leigh (1989: 146) note that ''if the interjudge reliability is low (for example, Ir < .8, or perhaps .7 in more exploratory work) for a subsample of responses initially coded and evaluated, corrective adjustments can be made early in the research process.'' Consequently, round 3 results indicate that several of the measurement items warrant further improvement (see Table 1). This assertion is further supported by Cohen's κ, which is generally regarded as a conservative estimator of interrater reliability. Except in sorting round 3, the values of Cohen's κ appear acceptable (Table 1). Moore and Benbasat (1991) note that a Cohen's κ greater than .65 indicates adequate interjudge agreement occurring beyond chance. As in the case of Ir, the marked diminution of item scores obtained in item-sorting round 3 indicated that we needed to review the measurement items and/or construct definitions. Our results also suggest that practitioner judges may differ significantly from academic judges when it comes to this specific type of cognitive exercise; therefore, the use of expert judges who are most familiar with the subject matter in practice provides a more stringent test of the adequacy of the construct definitions and measurement items. Rust and Cooil (1994) extend Ir from the two-judge case to the multiple-judge case. Their reliability measure for qualitative judgments from multiple judges


is called the proportion reduction of loss (PRL), which is inversely proportional to the loss expected from using a combined rating. Focusing specifically on sorting round 4, the use of responses from 17 judges and an interjudge agreement of .75 for six construct categories leads to a calculated PRL of 1.0, which provides evidence of the reliability of the judges' classifications. Given a specific number of distinct classification categories and a desired level of interjudge agreement, Rust and Cooil's tables and formulae can also be used to determine the appropriate number of judges needed to increase the likelihood of obtaining a prespecified level of internal consistency. Having confidence that our measures were reliable, we next scrutinized the substantive, or face, validity of these items. As noted in the discussion of interrater reliability above, the low values of Ir and Cohen's κ suggested the need to review measurement items and/or construct definitions. In refining our measurement scales, the determination of which items or definitions to review depended upon the agreement between the judges' item classifications and the intended construct and was facilitated by an analysis of the items' substantive validity (Anderson and Gerbing, 1991). We utilized two substantive validity measures: the proportion of substantive validity (psa), which ranges from 0 to 1.0, with larger values indicating greater substantive validity; and the coefficient of substantive validity (csv), which ranges from -1.0 to 1.0, with more positive values indicating greater substantive validity and larger negative values indicating that an item more strongly taps an unintended construct. Those items with acceptable psa (>.70) and csv (>.41) were retained after the fourth sorting round. For parsimony, the specific psa and csv values – for each item in each round – are not reported in this paper but are available upon request.
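As a concrete illustration, the judge-classification statistics used in this stage – the interjudge agreement percentage, Cohen's κ, and Anderson and Gerbing's psa and csv indices – can be computed as in the following sketch. The judge data shown are hypothetical, not the study's:

```python
from collections import Counter

def interjudge_agreement(judge_a, judge_b):
    """Raw agreement A: proportion of items both judges sort into the same category."""
    return sum(a == b for a, b in zip(judge_a, judge_b)) / len(judge_a)

def cohens_kappa(judge_a, judge_b):
    """Cohen's (1960) kappa: observed agreement corrected for chance agreement."""
    n = len(judge_a)
    p_o = interjudge_agreement(judge_a, judge_b)
    counts_a, counts_b = Counter(judge_a), Counter(judge_b)
    # Chance agreement from each judge's marginal category proportions.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(judge_a) | set(judge_b))
    return (p_o - p_e) / (1 - p_e)

def substantive_validity(assignments, intended):
    """Anderson and Gerbing's (1991) psa and csv for a single item.

    psa = n_c / N, where n_c is the number of judges assigning the item to
    its intended construct; csv = (n_c - n_o) / N, where n_o is the highest
    number of assignments to any other single construct.
    """
    counts = Counter(assignments)
    n_c = counts[intended]
    n_o = max((v for k, v in counts.items() if k != intended), default=0)
    return n_c / len(assignments), (n_c - n_o) / len(assignments)

# Hypothetical sorts: two judges classify ten items into construct categories.
judge_1 = ["PF", "PF", "MA", "ST", "CU", "IT", "PF", "MA", "ST", "IT"]
judge_2 = ["PF", "MA", "MA", "ST", "CU", "IT", "PF", "MA", "CU", "IT"]
print(interjudge_agreement(judge_1, judge_2))      # 0.8
print(round(cohens_kappa(judge_1, judge_2), 2))    # 0.75

# Hypothetical sorts of one NSD strategy item by ten judges.
sorts = ["ST"] * 8 + ["CU", "PF"]
print(substantive_validity(sorts, "ST"))  # (0.8, 0.7) -> passes psa > .70, csv > .41
```

Note that κ discounts the agreement expected by chance, which is why it runs below the raw agreement percentage, consistent with the pattern in Table 1.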
Finally, we assessed the convergent and discriminant validity of the NSD measurement items (see Table 2). All actual item classifications were cross-tabulated against their theoretical construct classifications and were considered simultaneously using Moore and Benbasat's (1991) overall placement ratio (OPR). The OPR is a summary statistic that provides evidence of item misclassifications and, therefore, is useful to the researcher in detecting sources of measurement item error. This facilitated decisions about which items to review for revision or deletion and which construct definitions to refine. The details of individual round misclassifications are worth scrutinizing when the OPRs for individual construct dimensions are less than 75% (Moore and Benbasat, 1991). The fact that the average OPRs for


Table 2
Overall placement ratios

Theoretical construct        Overall placement ratio (%) (by sorting round a)
                              1      2      3      4
NSD process focus            49     71     60     88
Market acuity                79    100     84     90
NSD strategy                 67     52     56     69
NSD culture                  80     79     65     78
IT experience                92     94     85     94
NSD program performance      96     93     87     88
Sorting round average        76     83     74     83

a Sorting round samples: round 1, n = 3; round 2, n = 3; round 3, n = 10; round 4, n = 17.

sorting round 4 exceed this cut-off (see Table 2) provides statistical evidence for the content validity of the scales' items; that is, they form a representative sample of the theoretical construct domains (Churchill, 1979). As a result of our front-end analyses and item purification, 47 measurement items capturing the five dimensions of NSD competence were retained for stage two of this study.

3.2. Stage two: survey analyses
In order to confirm measurement reliability and validity using a large sample, we moved to the back-end stage of our multi-item scale development process, which involved an analysis of survey data (Ahire and Devaraj, 2001; O'Leary-Kelly and Vokurka, 1998). We chose to survey respondents from a single industry in an effort to control for potential contextual influences that might be associated with an interindustry sample (Gerbing et al., 1994). The sampling frame consisted of 696 retail banks selected from the Atlanta Region Federal Deposit Insurance Corporation Institutions database listing of financial institutions. Banks have proven useful in service operations strategy research, as they provide an excellent laboratory for studying service competitiveness issues (Menor et al., 2001). Additionally, a number of the extant empirical studies in new service development utilize data collected from the financial services sector (e.g., Sundbo, 1997; Cooper et al., 1994; Bowers, 1985). Hence, our results could be used to advance the overall understanding of the NSD phenomenon, especially as it applies to financial services. The unit of analysis in this research is the NSD program (cf. Johne and Storey, 1998), which we define as the portfolio of NSD projects the service organization has initiated within the last 3 years.


Attempts were made to contact each of the institutions in the sample to identify the appropriate key informant (Huber and Power, 1985) who could accurately portray the NSD efforts of his or her institution. In instances where these phone calls were not successful, senior operations/marketing executives identified in individual state banking directories were chosen as informants who we felt could identify the correct key informant for their institution. The survey questionnaire containing these measurement items, in addition to other NSD-related questions, was mailed to these key informants; over an 8-week period, 168 surveys were received, representing a 24% response rate. However, two surveys with non-random missing responses were deleted from the data sample for the present investigation. Non-response bias was assessed through comparisons of early and late respondents on variables such as the strategic importance of NSD, total assets, and total annual revenue (Moore and Tarnai, 2002; Armstrong and Overton, 1977); no statistically significant differences were detected.

3.2.1. Additional measurement item refinement
The moderate size of the survey sample (Hoyle, 1999) could have a negative impact on the statistical power of a confirmatory factor analysis (CFA) (Marsh and Hau, 1999). Given the results of our item-sorting analysis, we felt that it was not necessary to retain all the measurement items, as fewer items would still allow for adequate domain sampling for the five NSD competence dimensions (cf. Drolet and Morrison, 2001). The list of items was further reviewed by NSD practitioners and, where redundant, items were removed (e.g., the NSD strategy items). Through this process, each of the construct dimensions was represented by at least four measurement items, which is desirable from a structural equations modeling

standpoint (Hinkin, 1998). The remaining 21 items were further analysed for violations of multivariate normality and kurtosis (West et al., 1995); no violations were detected.

3.2.2. Measurement scale refinement results
CFA was then employed on the 21 items to assess measurement scale unidimensionality, reliability, and convergent and discriminant validity for the five NSD dimensions. A scale is said to be unidimensional if its items measure a single construct. Measurement models for each scale were estimated, with errors allowed to freely correlate with each other; the overall fit of each model was then examined (see Table 3 for a listing of these fit indices for each of the NSD competence dimensions). Other than for the IT experience multi-item scale, all χ² values were non-significant, suggesting that these measurement models are consistent with the data and exhibit acceptable overall goodness of fit. Given the numerous problems associated with the sole use of the χ²-statistic in evaluating overall model fit (MacCallum, 1990), we also examined several absolute and incremental fit measures. All of these fit indices for the multi-item scales for the NSD competence dimensions were greater than .90, indicating that they meet generally accepted criteria for unidimensionality. We assessed the reliability of each multi-item scale using the CFA standardized factor loadings (see Table 3). Composite construct reliability assesses whether measurement items sufficiently represent their respective constructs (Bagozzi and Yi, 1988). All the composite reliability values, except that for NSD culture, exceeded the suggested .70 standard, indicating that these indicators are sufficient in their representation of their respective constructs. The average variance extracted values (Fornell and Larcker, 1981), which assess the amount of variance that is captured by the construct in

Table 3
Unidimensionality and reliability analyses of NSD competence scales

NSD competence          Items   χ² (p-value)        GFI a   NNFI a   CFI a   Composite       Average variance
construct dimension                                                          reliability b   extracted c
NSD process focus         4      1.08 (p = .58)      .99     1.01    1.00       .79              .50
Market acuity             4      3.99 (p = .14)      .99      .97     .99       .77              .52
NSD strategy              4      1.07 (p = .58)      .99     1.01    1.00       .78              .53
NSD culture               4      0.09 (p = .95)     1.00     1.06    1.00       .69              .37
IT experience             5     19.76 (p < .001)     .96      .92     .96       .85              .56

a Goodness-of-fit index (GFI), non-normed fit index (NNFI), and comparative fit index (CFI) values equal or exceeding .90 indicate strong scale unidimensionality.
b Composite reliability values equal or exceeding .70 indicate strong scale reliability.
c Average variance extracted values equal or exceeding .50 indicate that the measures are reflective of the construct.


relation to the amount of variance due to measurement error, are reported in Table 3. Except for the NSD culture construct, all values exceeded .50, indicating that a large amount of the variance is captured by each construct rather than being due to measurement error. Given the ongoing, iterative nature generally associated with the process of improving construct validity (Edwards, 2003), the low composite reliability and average variance extracted values for NSD culture suggest that this multi-item scale requires additional item and scale re-examination and refinement (see Fig. 2). Absent the ability to further refine the NSD culture measurement model with additional survey data, as is the case in this study, we considered the removal of this construct as we continued our assessment of the measurement properties of our NSD competence operationalization. The multidimensional reflective indicator specification employed here suggests that dropping NSD culture should not alter the conceptual domain (Jarvis et al., 2003), or the complementary nature, of the NSD competence construct. However, given our earlier discussion of the importance of having an NSD culture, we decided to retain our measures of this first-order factor while assessing convergent, discriminant, and nomological validity. Future research should address the need to further empirically refine a measure of NSD culture, as the lack of a reliable and valid measure may be the primary reason behind the limited study of this construct. Convergent validity was assessed by the magnitude and sign of the factor loadings of the measurement items (see Table 4). Inspection of the standardized loadings indicated that each was in its anticipated direction (i.e., positive correspondences between first-order constructs and their posited indicators) and was statistically significant at p < .05.
For example, a higher level of market acuity is reflected by greater collection and use of competitive and customer (internal and external) information. A higher level of NSD competence, in turn, would be reflected on average by higher levels of NSD process focus, market acuity, NSD strategy, NSD culture, and IT experience. In summary, these findings corroborate the substantive validity results obtained from the item sorts. Due to the complementary nature of our NSD competence dimensions, discriminant validity of the multi-item scales was assessed through the estimation of 20 models (10 constrained, 10 unconstrained) and 10 χ² difference tests of nested models (see Table 5). Specifically, CFA was run on pairs of constructs in which the latent factors were allowed to


freely correlate (i.e., the unconstrained model). CFAs were also run on each pair with the correlation between the latent factors constrained to one (i.e., the constrained model). A χ² difference that is statistically significant at p < .001 suggests that the scales being modeled capture unique constructs. All the χ² differences in Table 5 are statistically significant, establishing the discriminant validity of each of the multi-item scales. Further, we found that the magnitude and statistical significance of the correlations between these first-order factors empirically support our view of the complementarity of these NSD competence dimensions. This complementarity, from a statistical standpoint, can be parsimoniously represented by a second-order factor (Venkatraman, 1989)—namely, NSD competence.

3.2.3. Examination of nomological validity
We examined the nomological validity of our NSD competence measures in order to provide evidence, consistent with the literature we reviewed and in accordance with theory, that NSD competence is positively associated with NSD performance. To simultaneously assess and evaluate the nomological validity (Carmines and Zeller, 1979) – and the complementary nature – of our NSD competence measures, we conducted a set correlation analysis (Cohen, 1982). Set correlation analysis is a generalization of simple and multiple correlation analysis that is applicable to measuring multivariate association (R²Y,X), and allows for a multiplicative decomposition of association in terms of squared (multiple) partial correlations (van den Burg and Lewis, 1988; Cohen and Nee, 1984). Specifically, using the SETCOR procedure in SYSTAT, we calculated the multivariate R²Y,X between the NSD competence first-order factors – constituting a set of independent variables (X) – and a set of dependent factors (Y) measuring NSD performance (i.e., NSD competitiveness and NSD effectiveness).
The NSD competitiveness (composite reliability = .79) and NSD effectiveness (composite reliability = .77) factors resulted from a subsequent, and separate, confirmatory factor analysis of the NSD performance measurement items that emerged from the item-sorting analysis (see Appendix A for the items constituting each factor). Given our primary research interest in the assessment of our NSD competence measures, and for parsimony, we do not report the CFAs associated with these NSD performance factors; this analysis of retail banking survey data is available from the authors. Table 6 reports our set correlation analysis.


Table 4
Standardized path loadings from CFA and descriptive statistics

For each measurement item, the four values are: standardized path loading; critical ratio; mean a; standard deviation.

NSD process focus
  Our new service/product development efforts are comprised of formal stages of development activities: .745; –; 2.30; .99
  Our service firm employs standard resources and routines in all new service development projects: .493; 6.45; 2.56; 1.00
  Our service firm employs formalized processes for all new service development projects: .904; 9.08; 2.10; .78
  All new service development projects are planned based on a fixed sequence of development activities: .476; 6.22; 2.24; .84

Market acuity
  Our service firm actively seeks out information about our company's business environment: .484; 6.32; 3.05; 1.04
  New offerings are designed based on information actively collected on evolving market shifts and customer demands for these offerings: .843; 8.85; 3.28; .83
  Our service firm uses collected information to respond quickly to changes in the competitive environment: .793; 8.77; 2.90; .96
  Customers, both internal and external, are viewed as potential and valuable sources of new offering ideas and opportunities: .716; –; 3.55; .96

NSD strategy
  Current service capabilities are critical factors in determining the ''go/no go'' decision for the development of new services/products: .577; 6.51; 3.76; .96
  Ideas for new service/product development are largely driven by the service's overall business strategy: .737; –; 3.56; .99
  Our firm's new service development strategy and new offerings decisions are always formulated with the overall business strategy in mind: .821; 7.98; 3.55; 1.13
  Senior managers are always willing to commit resources to promising new service/product development projects: .589; 6.63; 3.45; 1.15

NSD culture
  Our firm encourages entrepreneurial efforts and is accepting of risk-taking efforts: .548; 5.07; 2.81; .86
  The glue that holds our organization together is a commitment to innovation and new service/product development: .766; –; 2.83; .91
  Our firm emphasizes its human resources and places a premium on high cohesion and morale in its new service development activities: .486; 4.70; 3.27; .99
  Non-monetary rewards are employed in new service/product development projects as a means of recognizing employee efforts: .581; 5.22; 2.81; 1.15

IT experience
  Information technologies are used to speed up the introduction of new services and products: .471; –; 3.45; 1.19
  IT is used to identify and diagnose customer needs: .635; 4.40; 2.77; .91
  IT is used to share information that coordinates new service/product development activities: .858; 4.75; 3.08; 1.02
  Communication flow within the new service development project groups is facilitated through IT-based channels: .911; 4.79; 3.11; 1.09
  Our service firm utilizes technology to facilitate the flow of information to people participating in the new service development process: .819; 4.70; 3.13; 1.12

a Likert-scale responses from 1 (strongly disagree) to 5 (strongly agree). Key informants were asked the following question: please indicate the extent to which you agree or disagree with the following statements as they pertain to your retail banking unit's new service/product development activities.
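As a worked check, the composite reliability and average variance extracted statistics reported in Table 3 follow standard formulas. The sketch below applies them to the NSD process focus loadings from Table 4 under the simplifying assumption that each item's error variance equals 1 − λ²; the published values (.79 and .50) were presumably computed from the fitted error variances, so the results here differ slightly:

```python
def composite_reliability(loadings):
    """Bagozzi and Yi's (1988) composite reliability, assuming standardized
    loadings and item error variances of 1 - loading**2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """Fornell and Larcker's (1981) AVE under the same assumption: the mean
    squared standardized loading."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Standardized loadings for the four NSD process focus items (Table 4).
pf_loadings = [0.745, 0.493, 0.904, 0.476]
print(round(composite_reliability(pf_loadings), 2))       # 0.76
print(round(average_variance_extracted(pf_loadings), 2))  # 0.46
```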

The set of NSD competence measures accounted for .188 of the generalized variance of the NSD performance set. This whole R²Y,X value was highly statistically significant (Rao F = 3.482, p < .001). Also statistically significant were the Y-semipartial R²Y,X values where the NSD competence set was associated, after partialling, with NSD competitiveness (R²Y,X = .074, Rao F = 2.610, p < .05) and NSD effectiveness (R²Y,X = .066, Rao


Table 5
Results of discriminant validity pairwise tests for NSD competence constructs

Test                     Correlation   Critical   Constrained     Unconstrained   χ²
                         estimate      ratio      model, χ² a     model, χ² a     difference b

NSD process focus with
  Market acuity             .33         3.28      144.78 (20)      90.23 (19)      54.55
  NSD strategy              .04          .443     152.65 (20)      60.18 (19)      92.47
  NSD culture               .24         2.45      135.45 (20)      76.46 (19)      58.99
  IT experience             .07          .78      200.90 (27)      87.85 (26)     113.05
Market acuity with
  NSD strategy              .53         4.54      169.93 (20)     130.69 (19)      39.24
  NSD culture               .59         4.87      160.48 (20)     123.44 (19)      37.04
  IT experience             .40         3.15      195.17 (27)     126.68 (26)      68.49
NSD strategy with
  NSD culture               .76         5.12      184.87 (20)     107.46 (19)      77.41
  IT experience             .32         2.78      289.49 (27)     215.67 (26)      73.82
NSD culture with
  IT experience             .56         3.63      220.42 (27)     163.01 (26)      57.41

a The values in parentheses denote degrees of freedom.
b All χ² differences are statistically significant at the p < .001 level.
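The nested-model comparisons in Table 5 reduce to a simple computation; a minimal sketch, using the NSD process focus/market acuity pair from Table 5 and the df = 1, p = .001 chi-square critical value of 10.828:

```python
def chi_square_difference_test(chi2_constrained, df_constrained,
                               chi2_unconstrained, df_unconstrained,
                               critical_value=10.828):
    """Nested-model chi-square difference test for discriminant validity.

    Fixing the factor correlation to 1 frees one parameter fewer, so the
    constrained model has one more degree of freedom; 10.828 is the
    chi-square critical value for df = 1 at p = .001.
    """
    difference = chi2_constrained - chi2_unconstrained
    df_difference = df_constrained - df_unconstrained
    return difference, df_difference, difference > critical_value

# NSD process focus with market acuity (Table 5).
diff, df_diff, distinct = chi_square_difference_test(144.78, 20, 90.23, 19)
print(round(diff, 2), df_diff, distinct)  # 54.55 1 True
```

A significant difference means constraining the two factors to be perfectly correlated worsens fit substantially, so the scales capture distinct constructs.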

Table 6
Set correlation analysis (R²Y,X) for association between NSD competence (and its unique components) and NSD performance variables

Each cell gives the R²Y,X measure of multivariate association (whole, semipartial, or bipartial), with its Rao F, degrees of freedom, and p-value.

Dependent variable set: NSD performance (COMP, EFF) b
  PF | MA, ST, CU, IT:   0.005, Rao F = 0.503 (d.f.: 2, 159), p = .605
  MA | PF, ST, CU, IT:   0.027, Rao F = 2.473 (d.f.: 2, 159), p = .088
  ST | PF, MA, CU, IT:   0.047, Rao F = 4.101 (d.f.: 2, 159), p = .018
  CU | PF, MA, ST, IT:   0.006, Rao F = 0.585 (d.f.: 2, 159), p = .558
  IT | PF, MA, ST, CU:   0.014, Rao F = 1.196 (d.f.: 2, 159), p = .305
  NSD competence (PF, MA, ST, CU, IT): 0.188, Rao F = 3.482 (d.f.: 10, 318), p = .000

Unique NSD performance component: COMP | EFF
  PF | MA, ST, CU, IT:   0.000, Rao F = 0.032 (d.f.: 1, 160), p = .858
  MA | PF, ST, CU, IT:   0.000, Rao F = 0.079 (d.f.: 1, 160), p = .797
  ST | PF, MA, CU, IT:   0.047, Rao F = 8.183 (d.f.: 1, 160), p = .005
  CU | PF, MA, ST, IT:   0.000, Rao F = 0.036 (d.f.: 1, 160), p = .851
  IT | PF, MA, ST, CU:   0.011, Rao F = 1.947 (d.f.: 1, 160), p = .605
  NSD competence (PF, MA, ST, CU, IT): 0.074, Rao F = 2.610 (d.f.: 5, 159), p = .027

Unique NSD performance component: EFF | COMP
  PF | MA, ST, CU, IT:   0.002, Rao F = 0.408 (d.f.: 1, 160), p = .524
  MA | PF, ST, CU, IT:   0.013, Rao F = 2.263 (d.f.: 1, 160), p = .134
  ST | PF, MA, CU, IT:   0.016, Rao F = 2.721 (d.f.: 1, 160), p = .101
  CU | PF, MA, ST, IT:   0.003, Rao F = 0.479 (d.f.: 1, 160), p = .490
  IT | PF, MA, ST, CU:   0.012, Rao F = 2.019 (d.f.: 1, 160), p = .157
  NSD competence (PF, MA, ST, CU, IT): 0.066, Rao F = 2.278 (d.f.: 5, 159), p = .049

a NSD competence components: NSD process focus (PF); market acuity (MA); NSD strategy (ST); NSD culture (CU); IT experience (IT); components are treated as factor-based scores.
b NSD performance components: NSD competitiveness (COMP); NSD effectiveness (EFF); components are treated as factor-based scores.

F = 2.278, p = .05), respectively. These statistically significant multivariate association values provide support for the nomological validity and complementary nature of the NSD competence measures. Further examination of the X-semipartial R²Y,X values (see the X | Xp rows in Table 6) indicates that, after partialling for the four other NSD competence variables, only the market acuity (Rao F = 2.473, p = .088) and NSD strategy (Rao F = 4.101, p = .018) factors were uniquely (i.e., controlling for spurious relationships) associated with the NSD performance set. This basic comparative analysis of the X-semipartial R²Y,X values further reinforces the benefits of considering the complementarities, as opposed to the independent effects, associated with the NSD competence dimensions.
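Cohen's whole set correlation can be recovered from the canonical correlations between the two variable sets, R²Y,X = 1 − Π(1 − ρᵢ²), i.e., one minus Wilks' Λ. The following sketch illustrates this on simulated factor scores; the data are synthetic, not the study's:

```python
import numpy as np

def set_correlation(X, Y):
    """Cohen's (1982) whole set correlation: R2 = 1 - prod(1 - rho_i**2),
    where rho_i are the canonical correlations between the column-centered
    variable sets X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Orthonormal bases for each set; the singular values of Qx' Qy are
    # the canonical correlations between the two column spaces.
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    rho = np.linalg.svd(qx.T @ qy, compute_uv=False)
    rho = np.clip(rho, 0.0, 1.0)
    return 1.0 - np.prod(1.0 - rho ** 2)

# Synthetic factor scores: five competence components (X) and two
# performance components (Y) for 166 hypothetical banks.
rng = np.random.default_rng(0)
X = rng.normal(size=(166, 5))
Y = 0.4 * X[:, :2] + rng.normal(size=(166, 2))
print(round(set_correlation(X, Y), 3))
```

The semipartial variants reported in Table 6 apply the same idea after residualizing one set on the partialled variables; SETCOR in SYSTAT automates those decompositions.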


4. Conclusion
How are service organizations likely to improve their innovative capability? Addressing this question requires, first, valid and reliable measures of key service innovation constructs. The present study furthers theory development and understanding in NSD through the conceptual development and empirical validation of a set of multi-item scales that reflect NSD competence and, in so doing, provides a likely answer to the above question. While our measures were derived using retail-banking data, we know of no similar empirical operationalization of development competence in the general innovation literature. This research reports the development and validation of new multi-item measurement scales for NSD competence. NSD competence, which captures a service organization's expertise in deploying resources and routines to achieve a desired new service outcome, is conceptualized as a multidimensional construct reflected by the following complementary first-order dimensions: NSD process focus, market acuity, NSD strategy, NSD culture, and information technology experience. Each of these dimensions, in turn, is represented by a unidimensional multi-item scale. In the first stage of our two-stage approach (see Fig. 2), we analysed judgment-based, nominal-scaled data collected through an iterative item-sorting process to assess the tentative reliability and validity of our proposed measurement items. In the second stage, we estimated NSD competence measurement models using CFA, which allowed for the confirmation of scale unidimensionality, reliability, and convergent and discriminant validity. This two-stage approach, utilizing two different data samples, allowed for rigorous statistical analyses to refine both individual measurement items and multi-item scales. By utilizing two distinct stages and data samples, we were able to monitor the quality of measurement items at an early stage in the research process.

Our resulting multi-item NSD competence measurement scales offer the potential for promising insights in the study and practice of NSD. As described earlier in this paper, research in NSD has largely been exploratory in nature, and the understanding of NSD practices, processes, and theory has not advanced as far as that in NPD. The extant literature examining NSD performance has yielded inconclusive findings regarding the importance of particular development factors and their relationship with performance (Menor et al., 2002). One potential explanation for these findings is the misspecification of the antecedents of NSD performance. Unlike earlier research, we argue for the importance of the complementary effect of development factors. Examination of this complementary effect has, to date, been hindered by the lack of reliable and valid NSD measurement scales specifically developed with such a holistic conceptualization in mind. This research – as it applies to the retail-banking context – is especially relevant today, as improving NSD efforts is widely viewed as important to the creation of future value in financial services (Melnick et al., 2000). Service managers, such as those in financial services, can apply the NSD competence measures developed in this study as a diagnostic tool to assess their organizations' innovative capability—no easy task given the multiple resources and routines typically involved in NSD. These NSD competence measures can also be used profitably for competitive benchmarking. While we believe that similar uses of this NSD competence measure are to be found in other service sectors, future research should examine the generalizability of this measure and our multidimensional, complementary conceptualization of NSD competence. Ongoing research applying our conceptualization of NSD competence is needed to further understanding and theory in this evolving, and important, area of study in service operations management.


Appendix A. Construct items and supporting literature

NSD process focus represents the availability and use of systematic service development practices and routines. An NSD process focus indicates that a firm has a formalized process for conducting NSD efforts. It also allows for simplicity and repetition in the NSD process, which fosters greater NSD efficiency and effectiveness.

Measurement items, with supporting literature (a) in brackets:

- Our new service/product development efforts are comprised of formal stages of development activities (c) [Griffin (1997), Cooper and Kleinschmidt (1995), Montoya-Weiss and Calantone (1994) and Cooper et al. (1994)]
- All new service development projects are planned based on a fixed sequence of development activities (c) [Cooper and Kleinschmidt (1995)]
- Standard new service/product development processes allow our firm to engage in multiple development projects concurrently (b) [Schilling and Hill (1998), Mitchell Madison Group (1995) and Cooper (1994)]
- Our service firm utilizes systematic routines for screening and selecting new service development ideas (b) [Roth et al. (1997), de Brentani (1995) and Cooper and de Brentani (1991)]
- NSD personnel are formally trained as new service/product specialists (b) [Montoya-Weiss and Calantone (1994) and Kuczmarski (1993)]
- Our service firm employs formalized processes for all new service development projects (c) [Griffin (1997), Roth et al. (1997) and Cooper et al. (1994)]
- Our service firm employs standard resources and routines in all new service development projects (c) [Schilling and Hill (1998) and Griffin (1997)]
- Our service firm continuously embarks on new service/product development projects [Noori et al. (1997) and Cooper (1985)]
- Our service firm customizes its new service/product development activities based on the particular characteristics of the NSD project [Cooper and Kleinschmidt (1995)]
- Our new service/product development projects are comprised of cross-functional teams [Cooper (1999), Noori et al. (1997), Cooper and Kleinschmidt (1995) and Page (1993)]
- Our service firm has an organized new service/product development activities department [Griffin (1997) and Noori et al. (1997)]
- New service/product development personnel are technically proficient with NSD tools [Noori et al. (1997)]
- Standard performance metrics are utilized in managing all new service/product development projects [Noori et al. (1997)]

(a) Note that for parsimony, not all the citations in Appendix A are given in the reference list for this paper. A complete listing is available upon request. (b) These items were retained after the item-sorting analysis. (c) These items were retained for the confirmatory factor analysis.
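The item-sorting note above refers to the paper's first measurement stage, in which judges assigned candidate items to construct categories and inter-judge agreement on the resulting nominal data was assessed; Cohen (1960), cited in the reference list, is the standard chance-corrected index for such data. The following is a minimal illustrative sketch, not the authors' implementation, and the item-to-construct sorts below are hypothetical:

```python
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Chance-corrected agreement (Cohen, 1960) between two nominal classifications."""
    n = len(judge_a)
    # Raw proportion of items on which the two judges agree.
    observed = sum(x == y for x, y in zip(judge_a, judge_b)) / n
    # Agreement expected by chance, from each judge's category frequencies.
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Two hypothetical judges sort eight candidate items into construct bins.
judge_1 = ["process", "process", "acuity", "strategy", "culture", "IT", "IT", "acuity"]
judge_2 = ["process", "acuity", "acuity", "strategy", "culture", "IT", "IT", "strategy"]
print(round(cohens_kappa(judge_1, judge_2), 3))  # 0.686
```

Kappa corrects the raw percent agreement (here 6/8) for the agreement expected by chance given each judge's marginal category frequencies.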

Market acuity describes the ability of the service organization to see the competitive environment clearly and to anticipate and respond to customers' evolving needs and wants. Market acuity is valuable because it requires that the organization (1) continuously collect information on customer needs and competitor capabilities and (2) use this information to create new services that deliver superior customer value.

Measurement items, with supporting literature in brackets:

- Our service firm actively seeks out information about our company's business environment (b) [Griffin (1997), de Brentani (1995), Roth (1993) and Cooper (1985)]
- Information collected on competitors, customers, and markets is distributed through structured channels and mechanisms [Storey and Easingwood (1996)]
- Our service firm details customers' needs so that we can offer customized products and services (a, b) [Cooper and Kleinschmidt (1995) and Cooper et al. (1994)]
- New offerings are designed based on information actively collected on evolving market shifts and customer demands for these offerings (b) [de Brentani (1995)]
- Our service firm collects information on our competitors' service/product offerings (a) [de Brentani (1995) and Roth (1993)]
- Customers, both internal and external, are viewed as potential and valuable sources of new offering ideas and opportunities (b) [Schilling and Hill (1998), Brown and Eisenhardt (1995), Griffin and Hauser (1993) and Cooper et al. (1994)]
- Our service firm uses collected information to respond quickly to changes in the competitive environment (b) [Storey and Easingwood (1996) and de Brentani (1995)]
- We utilize the "voice of the customer" throughout our new service development process (a) [Cooper (1999), Bitran and Pedrosa (1998), Noori et al. (1997), Griffin and Hauser (1993) and Behara and Chase (1993)]
- New service development projects are started only when we have a complete understanding of customer needs (a) [Cooper (1999), Bitran and Pedrosa (1998) and de Brentani (1995, 1989)]
- Our service personnel freely share customer demand or market-shift information across functional boundaries [de Brentani (1995)]
- New services and products typically are developed only for markets with high growth potential [Cooper (1985)]

(a) These items were retained after the item-sorting analysis. (b) These items were retained for the confirmatory factor analysis.

New service development strategy defines the role of development within the overall business strategy. It integrates the overall business strategy with the new services/products strategy and with service design/delivery decisions. Thus, an NSD strategy enables management to plan for and make available adequate resources for specific new service development efforts while keeping in mind the fit between service operations capabilities, delivery processes and procedures, and the needs of the market.

Measurement items, with supporting literature in brackets:

- New services have end uses similar to that of the firm's existing services (a) [Cooper and de Brentani (1991) and Cooper et al. (1985)]
- New services/products are designed to complement the firm's current services offering (a) [Tax and Stuart (1997)]
- Our service firm has a documented new service development strategy that is broadly communicated (a) [Cooper and Kleinschmidt (1995)]
- Resource availability is a critical factor in determining the "go/no go" decision for the development of new services/products (a) [Cooper and de Brentani (1991) and de Brentani (1989)]
- New service/product ideas are assessed in light of our firm's current service delivery processes and systems capabilities (a) [Bitran and Pedrosa (1998), Tax and Stuart (1997), Easingwood and Storey (1993) and Cooper and de Brentani (1991)]
- Adequate resources are always deployed for promising projects (a) [Noori et al. (1997), Cooper and Kleinschmidt (1995), Cooper et al. (1994), Edgett and Parkinson (1994) and Wheelwright and Clark (1992)]
- Introducing new services and products is critical to creating a dominant position in markets we compete in (a) [Voss et al. (1992) and Cooper (1985)]
- New services/products introduced typically fit with the firm's existing contact personnel skills and resources (a) [Tax and Stuart (1997), Cooper and de Brentani (1991) and Cooper (1985)]
- Our firm's new service development strategy and new offerings decisions are always formulated with the overall business strategy in mind (b) [Khurana and Rosenthal (1998), Noori et al. (1997), Roth and van der Velde (1991) and Utterback (1982)]
- New services/products introduced typically fit into the existing product mix (a) [Easingwood and Storey (1993) and Cooper (1985)]
- Current service capabilities are critical factors in determining the "go/no go" decision for the development of new services/products (b) [Brown and Eisenhardt (1995) and Cooper and Kleinschmidt (1995)]
- A critical component in our new service/product development effort is an audit of the existing service delivery processes and system [Bitran and Pedrosa (1998) and Tax and Stuart (1997)]
- Ideas for new service/product development are largely driven by the service's overall business strategy (b) [Noori et al. (1997)]
- Senior managers are always willing to commit resources to promising new service/product development projects (b) [Cooper and Kleinschmidt (1995) and Wheelwright and Clark (1992)]
- The firm and its personnel have a clear view about how to achieve its new service development goals [Bates et al. (1995)]
- New service/product ideas are evaluated based upon the firm's current technological skills and resources [Bitran and Pedrosa (1998), Cooper and Kleinschmidt (1995), de Brentani (1995), Cooper and de Brentani (1991), de Brentani (1989) and Cooper (1985)]
- Senior management provides consistent leadership in integrating new information and decisions [Khurana and Rosenthal (1998), Schilling and Hill (1998) and Wheelwright and Clark (1992)]
- New service/product development is pursued with the aim of gaining market share [Brown and Eisenhardt (1995) and Cooper (1985)]
- Our service firm is at the cutting edge of development of technologically based new service development [Roth and van der Velde (1992)]
- New service/product development projects are targeted to specific groups of consumers [Cooper (1985)]
- New service/product development activities employ design technologies familiar to the firm [Bitran and Pedrosa (1998) and Cooper (1985)]
- The focus of our service firm is designing services that build the loyalty of existing customers while also attracting new ones [Heskett et al. (1990)]
- Our service firm continuously undertakes new service/product development projects in order to build capabilities in advance of its needs [Tushman and Nadler (1986) and Van de Ven (1986)]
- Market requirement decisions supersede any service operations design decisions [Tax and Stuart (1997) and Roth and van der Velde (1991)]
- New services/products typically are developed only for markets with high growth potential [Cooper (1985)]
- Existing facilities are typically redesigned to complement new service offerings [Tax and Stuart (1997)]
- Our new service/product development efforts are proactively pursued in order to generate innovation opportunities [Voss et al. (1992), Chase and Hayes (1991), Tushman and Nadler (1986) and Van de Ven (1986)]
- Our firm designs its new service concepts in terms of results produced for customers [Cooper et al. (1994) and Heskett et al. (1990)]
- Technology-based new services are combined with non-technology-based service offerings [Dabholkar (1994)]
- Our service firm competes primarily on service offerings differentiation [Roth and van der Velde (1992)]

(a) These items were retained after the item-sorting analysis. (b) These items were retained for the confirmatory factor analysis.

New service development culture captures the values and beliefs fostered by the service organization that indicate a willingness and desire to innovate. A positive NSD culture facilitates a climate for new service development and is a necessity for NSD success.

Measurement items, with supporting literature in brackets:

- Employing multi-functional teams facilitates our firm's new service/product development efforts (a) [Schilling and Hill (1998), Noori et al. (1997), Ittner and Larcker (1997), Brown and Eisenhardt (1995), Cooper and Kleinschmidt (1995) and Page (1993)]
- Our firm encourages entrepreneurial efforts and is accepting of risk-taking efforts (b) [Denison and Mishra (1995) and Cooper and Kleinschmidt (1995)]
- Our firm emphasizes its human resources and places a premium on high cohesion and morale in its new service development activities (b) [de Brentani (1995)]
- Supervisors generally encourage people who work with them to exchange opinions and ideas (a) [de Brentani (1995)]
- New service development efforts are proactively pursued in order to generate innovation opportunities (a) [Voss et al. (1992), Chase and Hayes (1991), Tushman and Nadler (1986) and Van de Ven (1986)]
- The firm uses teams to solve new service development problems [Noori et al. (1997) and Cooper and Kleinschmidt (1995)]
- Individuals involved in new service development are empowered to make decisions that advance a project's progress without having to consult with their bosses (a) [Bates et al. (1995)]
- The glue that holds our organization together is a commitment to innovation and new service/product development (b) [Deshpande and Webster (1990)]
- Non-monetary rewards are employed in new service/product development projects as a means of recognizing employee efforts (b) [Page (1993)]
- The head of our service firm is generally considered to be an innovator and a risk-taker [Deshpande and Webster (1990)]
- The firm and its personnel have a clear view about how to achieve goals [Bates et al. (1995)]
- The firm has a definite idea about how new service/product development activities should be done [Bates et al. (1995)]

(a) These items were retained after the item-sorting analysis. (b) These items were retained for the confirmatory factor analysis.

IT experience refers to the use of IT for facilitating or improving interorganizational coordination of activities and information processing in the NSD process. It enables the creation of services that are more responsive to customer needs.

Measurement items, with supporting literature in brackets:

- Databases with customer-related information are utilized in planning new service/product development activities (a) [Fitzsimmons and Fitzsimmons (1998)]
- IT is used to design new service encounters (a) [Bitran and Pedrosa (1998), Cooper et al. (1994), Bitran and Lojo (1993) and Sasser and Fulmer (1990)]
- Individual computers (including PCs) in the service organization are networked and can communicate with other computers (a) [Fiedler et al. (1996)]
- Communication flow outside of the NSD project group is accomplished through IT-based channels (a) [Roth et al. (1997), Brown and Eisenhardt (1995) and Keller (1994)]
- Communication flow within the new service development project groups is facilitated through IT-based channels (b) [Roth et al. (1997), Brown and Eisenhardt (1995) and Keller (1994)]
- Information technologies are used to speed up the introduction of new services and products (b) [Froehle et al. (2000) and Ittner and Larcker (1997)]
- Our service firm utilizes technology to facilitate the flow of information to people participating in the new service development process (b) [Ittner and Larcker (1997), Cooper et al. (1994) and de Brentani (1995)]
- Individual computers (including PCs) in the service organization are capable of sharing common data and applications through a network (a) [Fiedler et al. (1996)]
- Transferring and processing information is facilitated in new service/product development projects through information technology (IT) (a) [Noori et al. (1997)]
- IT is used to share information that coordinates new service/product development activities (b) [Noori et al. (1997) and Fiedler et al. (1996)]
- IT is used to identify and diagnose customer needs (b) [Sasser and Fulmer (1990)]
- Our service firm focuses on developing new information-based services/products [Heskett et al. (1990)]

(a) These items were retained after the item-sorting analysis. (b) These items were retained for the confirmatory factor analysis.

New service development performance is multidimensional. Instead of focusing on the performance of individual NSD projects, this construct focuses on the performance of the NSD program. The NSD program refers to the portfolio of NSD projects the service organization has initiated within the last 3 years.

Measurement items (a), with supporting literature in brackets:

- Overall speed-to-market performance of new service/product development projects for services introduced over the past 3 years (EFF) (b) [Kessler and Chakrabarti (1999, 1996), Roth et al. (1997), Brown and Eisenhardt (1995), Cooper and Kleinschmidt (1994), Wheelwright and Clark (1992) and Blackburn (1991)]
- The degree to which the company's new service/product development program has been successful in meeting customer requirements for new offerings (EFF) (b) [Cooper (1999), Brown and Eisenhardt (1995) and Voss et al. (1992)]
- Percentage of new service development projects launched within the past 3 years that achieved marketplace success (EFF) (b) [Cooper and Kleinschmidt (1995)]
- The degree to which the company's new service/product development program has been successful in meeting corporate profit objectives for new offerings (on average) (COMP) (b) [Wind and Mahajan (1997), Cooper and Kleinschmidt (1995)]
- Success/failure rate (# offerings launched/total projects started) for new service/product development efforts over the past 3 years (b) [Griffin and Page (1996)]
- Overall performance of the new service/product development program relative to competitors over the past 3 years (EFF) (b) [Roth et al. (1997), Cooper and Kleinschmidt (1995)]
- Degree to which new offerings developed over the past 3 years fit the overall business strategy objective (b) [Griffin and Page (1996)]
- The profitability of the firm's new service/product development program, initiatives, or activities relative to its competitors over the past 3 years (COMP) (b) [Wind and Mahajan (1997), Cooper and Kleinschmidt (1995)]
- ROI for the new service/product development program, initiatives, or activities for the past 3 years (COMP) (b) [Griffin and Page (1996)]
- The degree to which the company's new service/product development program has been successful in meeting the corporate sales objectives for new services (b) [Cooper and Kleinschmidt (1995)]
- Percentage of profits provided by new offerings less than 3 years old (COMP) (b) [Wind and Mahajan (1997)]
- Percentage of sales represented by new offerings less than 3 years old (b) [Mendelson and Pillai (1999)]
- The technical success of the new service/product development program relative to spending [Cooper and Kleinschmidt (1995)]
- Degree to which new service offerings lead to future opportunities [Griffin and Page (1996)]
- Percentage of company sales represented by new services introduced during the previous 3 years [Cooper and Kleinschmidt (1995)]
- Degree to which new service/product development hit the organization's 3-year new services objective [Griffin and Page (1996)]

(a) Items designated as COMP (or EFF) were included in the NSD competitiveness (NSD effectiveness) factor-based scores that were scrutinized in the set correlation analysis reported in Table 6. (b) These items were retained after the item-sorting analysis.
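The retained multi-item scales above are assessed for unidimensionality, reliability, and validity in the paper's confirmatory stage. As an illustrative sketch only (not the authors' code), coefficient (Cronbach's) alpha for one scale can be computed from a hypothetical matrix of Likert-type responses, where rows are respondents and columns are the items of a single scale:

```python
def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)."""
    k = len(rows[0])  # number of items in the scale

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([r[j] for r in rows]) for j in range(k)]
    total_var = pvar([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point Likert responses: 5 respondents x 4 items of one scale.
responses = [
    [5, 6, 5, 6],
    [3, 4, 3, 3],
    [6, 6, 7, 6],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
]
print(round(cronbach_alpha(responses), 3))  # 0.971
```

By convention, alpha of roughly 0.70 or higher is taken as acceptable scale reliability.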

References

Ahire, S.L., Devaraj, S., 2001. An empirical comparison of statistical construct validation approaches. IEEE Transactions on Engineering Management 48 (3), 319–329.
Anderson, J.C., Gerbing, D.W., 1991. Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities. Journal of Applied Psychology 76 (5), 732–740.
Armstrong, J.S., Overton, T.S., 1977. Estimating nonresponse bias in mail surveys. Journal of Marketing Research 14, 396–402.
Bagozzi, R.P., Yi, Y., 1988. On the evaluation of structural equation models. Journal of the Academy of Marketing Science 16 (1), 74–94.
Berry, L.L., Shankar, V., Parish, J.T., Cadwallader, S., Dotzel, T., 2006. Creating new markets through service innovation. MIT Sloan Management Review 47 (2), 56–63.
Bharadwaj, S.G., Varadarajan, P.R., Fahy, J., 1993. Sustainable competitive advantage in service industries: a conceptual model and research propositions. Journal of Marketing 57, 83–99.
Bitran, G., Pedrosa, L., 1998. A structured product development perspective for service operations. European Management Journal 16 (2), 169–189.
Bowers, M.R., 1985. An exploration into new service development: process, structure and organization. Unpublished Ph.D. Dissertation, Texas A&M University.

Brown, S.L., Eisenhardt, K.M., 1995. Product development: past research, present findings, and future directions. Academy of Management Review 20, 343–378.
Business Week, 2005. A creative corporation toolbox. Business Week 72–74.
Carmines, E.G., Zeller, R.A., 1979. Reliability and Validity Assessment. Sage Publications, Newbury Park, CA.
Cassiman, B., Veugelers, R., 2006. In search of complementarity in innovation strategy: internal R&D and external knowledge acquisition. Management Science 52 (1), 68–82.
Chase, R.B., Hayes, R.H., 1991. Beefing up operations in service firms. Sloan Management Review 15–26.
Chase, R.B., Roth, A.V., Voss, C., 2000. How do financial services stack up? Findings from a benchmarking study of the US financial service sector. In: Melnick, E.L., Nayyar, P.R., Pinedo, M.L., Seshadri, S. (Eds.), Creating Value in Financial Services—Strategies, Operations and Technologies. Kluwer Academic Publishers, Boston, MA.
Churchill, G.A., 1979. A paradigm for developing better measures of marketing constructs. Journal of Marketing Research 6, 64–73.
Coates, T.T., McDermott, C.M., 2002. An exploratory analysis of new competencies: a resource-based view perspective. Journal of Operations Management 20, 435–450.
Cohen, J., 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46.
Cohen, J., 1982. Set correlation as a general multivariate data-analytic method. Multivariate Behavioral Research 17, 301–341.


Cohen, J., Nee, J.C.M., 1984. Estimators for two measures of association for set correlation. Educational and Psychological Measurement 44, 907–917.
Cooper, R.G., Easingwood, C.J., Edgett, S., Kleinschmidt, E.J., Storey, C., 1994. What distinguishes the top performing new products in financial services. Journal of Product Innovation Management 11, 281–299.
Cooper, R.G., Edgett, S.J., 1999. Product Development for the Service Sector—Lessons from Market Leaders. Perseus Books, Cambridge, MA.
Daft, R.L., Lengel, R.H., 1986. Organizational information requirements, media richness, and structural design. Management Science 32, 554–571.
de Jong, J.P.J., Bruins, A., Dolfsma, W., Meijaard, J., 2003. Innovation in service firms explored: what, how and why? Strategic Study B200205, EIM Business & Policy Research, Zoetermeer, The Netherlands.
Deshpande, R., Webster, F.E., 1990. Organizational culture and marketing: defining the research agenda. Journal of Marketing 53, 3–15.
Drejer, I., 2004. Identifying innovation in surveys of services: a Schumpeterian perspective. Research Policy 33, 551–562.
Drolet, A.L., Morrison, D.G., 2001. Do we really need multiple-item measures in service research? Journal of Service Research 3 (3), 196–204.
Edquist, C., 2005. Systems of innovation: perspectives and challenges. In: Fagerberg, J., Mowery, D.C., Nelson, R.R. (Eds.), The Oxford Handbook of Innovation. Oxford University Press, Oxford, UK.
Edwards, J.R., 2001. Multidimensional constructs in organizational behavior research: an integrative analytical framework. Organizational Research Methods 4 (2), 144–192.
Edwards, J.R., 2003. Construct validation in organizational behavior research. In: Greenberg, J. (Ed.), Organizational Behavior—The State of the Science, 2nd ed. Lawrence Erlbaum Associates, Mahwah, NJ.
eBRC, 2005. Service innovations and new service business models. eBRC White Paper, Pennsylvania State University, University Park, PA (see www.smeal.psu.edu/ebrc).
Fiedler, K.D., Grover, V., Teng, J.T.C., 1996. An empirically derived taxonomy of information technology structure and its relationship to organizational structure. Journal of Management Information Systems 13 (1), 9–34.
Fiol, C.M., 1991. Managing culture as a competitive resource: an identity-based view of sustainable competitive advantage. Journal of Management 17 (1), 191–211.
Fitzsimmons, J., Fitzsimmons, M., 2000. New Service Development—Creating Memorable Experiences. Sage Publications, Thousand Oaks, CA.
Fitzsimmons, J.A., Fitzsimmons, M.J., 2004. Service Management—Operations, Strategy, and Information Technology, 4th ed. McGraw-Hill Inc., New York.
Fornell, C., Larcker, D.F., 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18, 39–50.
Froehle, C.M., Roth, A.V., Chase, R.B., Voss, C.A., 2000. Antecedents of new service development effectiveness: an exploratory examination of strategic operations choices. Journal of Service Research 3 (1), 3–17.
Gallouj, F., Weinstein, O., 1997. Innovation in services. Research Policy 26, 537–556.
Gatignon, H., Tushman, M.L., Smith, W., Anderson, P., 2002. A structural approach to assessing innovation: construct development of innovation locus, type, and characteristics. Management Science 48 (9), 1103–1122.
Gerbing, D.W., Hamilton, J.G., Freeman, E.B., 1994. A large-scale second-order structural equation model of the influence of management participation on organizational planning. Journal of Management 20 (4), 859–885.
Goodhue, D.L., Wybo, M.D., Kirsch, L.J., 1992. The impact of data integration on the costs and benefits of information systems. MIS Quarterly 293–311.
Griffin, A., 1997. PDMA research on new product development practices: updating trends and benchmarking best practices. Journal of Product Innovation Management 14, 429–458.
Grönroos, C., 1990. Service Management and Marketing: Managing the Moments of Truth in Service Competition. Lexington Books, Lexington, MA.
Gustafsson, A., Johnson, M.D., 2003. Competing in a Service Economy. Jossey-Bass, San Francisco, CA.
Hardesty, D.M., Bearden, W.O., 2004. The use of expert judges in scale development: implications for improving face validity of measures of unobservable constructs. Journal of Business Research 57, 98–107.
Hensley, R.L., 1999. A review of operations management studies using scale development techniques. Journal of Operations Management 17, 343–358.
Heskett, J.L., 1986. Managing in the Service Economy. Harvard Business School Press, Boston, MA.
Heskett, J.L., Sasser, W.E., Hart, C.W.L., 1990. Service Breakthroughs: Changing the Rules of the Game. The Free Press, New York.
Hinkin, T.R., 1998. A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods 1 (1), 104–121.
Hoyle, R.H., 1999. Statistical Strategies for Small Sample Research. Sage Publications, Thousand Oaks, CA.
Huber, G.P., Power, D.J., 1985. Retrospective reports of strategic-level managers: guidelines for increasing their accuracy. Strategic Management Journal 171–180.
Jarvis, C.B., Mackenzie, S.B., Podsakoff, P.M., 2003. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 30, 199–218.
Johne, A., Storey, C., 1998. New service development: a review of the literature and annotated bibliography. European Journal of Marketing 32 (3/4), 184–251.
Johnson, S.P., Menor, L.J., Roth, A.V., Chase, R.B., 2000. A critical evaluation of the new service development process: integrating service innovation and service design. In: Fitzsimmons, J.A., Fitzsimmons, M.J. (Eds.), New Service Development—Creating Memorable Experiences. Sage Publications, Thousand Oaks, CA.
Karmarkar, U.S., Pitbladdo, R., 1995. Service markets and competition. Journal of Operations Management 12, 397–411.
Keller, R.T., 1994. Technology-information processing fit and the performance of R&D project groups: a test of contingency theory. Academy of Management Journal 37 (1), 167–179.
Kirca, A.H., Jayachandran, S., Bearden, W.O., 2005. Market orientation: a meta-analytic review and assessment of its antecedents and impact on performance. Journal of Marketing 69, 24–41.
Kohli, A.K., Jaworski, B.J., 1990. Market orientation: the construct, research propositions, and managerial implications. Journal of Marketing 54, 1–18.
Kotter, J.P., Heskett, J.L., 1992. Corporate Culture and Performance. The Free Press, New York, NY.

Krishnan, V., Ulrich, K.T., 2001. Product development decisions: a review of the literature. Management Science 47 (1), 1–21.
Langeard, E., Reffiat, P., Eiglier, P., 1986. Developing new services. In: Venkatesan, M., Schmalennee, D.M., Marshall, C. (Eds.), Creativity in Services Marketing: What's New, What Works, What's Developing? American Marketing Association, Chicago, IL.
Little, T.D., Lindenberger, U., Nesselroade, J.R., 1999. On selecting indicators for multivariate measurement and modeling with latent variables: when 'good' indicators are bad and 'bad' indicators are good. Psychological Methods 4 (2), 192–211.
Lucas, B.A., Ferrell, O.C., 2000. The effect of market orientation on product innovation. Journal of the Academy of Marketing Science 28 (2), 239–247.
MacCallum, R.C., 1990. The need for alternative measures of fit in covariance structural modeling. Multivariate Behavioral Research 38 (1), 113–139.
Malhotra, M.K., Grover, V., 1998. An assessment of survey research in OM: from constructs to theory. Journal of Operations Management 16, 407–425.
Marsh, H.W., Hau, K.T., 1999. Confirmatory factor analysis: strategies for small sample sizes. In: Hoyle, R.H. (Ed.), Statistical Strategies for Small Sample Research. Sage Publications, Thousand Oaks, CA.
McKeown, B., Thomas, D., 1988. Q Methodology. Sage Publications, Beverly Hills, CA.
Melnick, E.L., Nayyar, P.R., Pinedo, M.L., Seshadri, S., 2000. Creating value in financial services. In: Melnick, E.L., Nayyar, P.R., Pinedo, M.L., Seshadri, S. (Eds.), Creating Value in Financial Services: Strategies, Operations and Technologies. Kluwer Academic Publishers, Norwell, MA.
Menor, L.J., Roth, A.V., Mason, C.H., 2001. Agility in retail banking: a numerical taxonomy of strategic service groups. Manufacturing & Service Operations Management 3 (4), 273–292.
Menor, L.J., Tatikonda, M.V., Sampson, S.E., 2002. New service development: areas for exploitation and exploration. Journal of Operations Management 20, 135–157.
Meyer, M.H., DeTore, A., 1999. Product development for services. Academy of Management Executive 13 (3), 64–76.
Miles, I., 2005. Innovation in services. In: Fagerberg, J., Mowery, D.C., Nelson, R.R. (Eds.), The Oxford Handbook of Innovation. Oxford University Press, Oxford, UK.
Mintzberg, H., 1979. The Structuring of Organizations. Prentice-Hall, Englewood Cliffs, NJ.
Miravete, E.J., Pernias, J.C., 2006. Innovation complementarity and scale of production. The Journal of Industrial Economics 54 (1), 1–29.
Montoya-Weiss, M.M., Calantone, R., 1994. Determinants of new product performance: a review and meta-analysis. Journal of Product Innovation Management 11, 397–417.
Moore, D.L., Tarnai, J., 2002. Evaluating nonresponse error in mail surveys. In: Groves, R.M., Dillman, D.A., Eltinge, J.L., Little, R.J.A. (Eds.), Survey Nonresponse. John Wiley & Sons, New York, NY.
Moore, G.C., Benbasat, I., 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research 2 (3), 192–222.
Nahm, A.Y., Solis-Galvan, L.E., Rao, S.S., Ragu-Nathan, T.S., 2002. The Q-sort method: assessing reliability and construct validity of questionnaire items at a pre-testing stage. Journal of Modern Applied Statistical Methods 1 (1), 114–125.


Noori, H., Munro, H., Deszca, G., Cohen, M., 1997. Managing the P/SDI process: best-in-class principles and leading practices. International Journal of Technology Management 13 (3), 245–268.
O'Leary-Kelly, S.W., Vokurka, R.J., 1998. The empirical assessment of construct validity. Journal of Operations Management 16, 387–405.
Perreault, W.D., Leigh, L.E., 1989. Reliability of nominal data based on qualitative judgments. Journal of Marketing Research 135–148.
Powell, T.C., Dent-Micallef, A., 1997. Information technology as competitive advantage: the role of human, business, and technology resources. Strategic Management Journal 18 (5), 375–405.
Price, R.M., 1998. Technology and strategic advantage. IEEE Engineering Management Review 26–36.
Quinn, J.B., Pacquette, P.C., 1990. Technology in services: creating organizational revolutions. Sloan Management Review 32, 67–87.
Ross, J.W., Beath, C.M., Goodhue, D.L., 1998. Develop long-term competitiveness through IT assets. IEEE Engineering Management Review 37–47.
Roth, A.V., 1993. Performance dimensions in services: an empirical investigation of strategic performance. Advances in Services Marketing and Management 2, 1–47.
Roth, A.V., Chase, R.B., Voss, C., 1997. Service in the US: Progress Towards Global Service Leadership. Severn Trent PLC, London, UK.
Roth, A.V., Menor, L.J., 2003. Insights into service operations management: a research agenda. Production and Operations Management 12 (2), 145–164.
Roth, A.V., van der Velde, M., 1991. Operations as marketing: a competitive service strategy. Journal of Operations Management 10 (3), 303–328.
Roth, A.V., van der Velde, M., 1992. World Class Banking. Bank Administration Institute, Chicago, IL.
Rungtusanatham, M.J., Choi, T.Y., Hollingworth, D.G., Wu, Z., Forza, C., 2003. Survey research in operations management: historical analyses. Journal of Operations Management 21 (4), 475–488.
Rust, R.T., Cooil, B., 1994. Reliability measures for qualitative data: theory and implications. Journal of Marketing Research 1–14.
Sanchez, R., Heene, A., Thomas, H., 1996. Towards the theory and practice of competence-based competition. In: Sanchez, R., Heene, A., Thomas, H. (Eds.), Dynamics of Competence-based Competition—Theory and Practice in the New Strategic Management. Elsevier Science Inc., Tarrytown, NY.
Sasser, W.E., Fulmer, W.E., 1990. Personalized service delivery systems. In: Bowen, D.E., Chase, R.B., Cummings, T.G. (Eds.), Service Management Effectiveness. Jossey-Bass, San Francisco, CA.
Schein, E.H., 1985. Organizational Culture and Leadership. Jossey-Bass, San Francisco, CA.
Schilling, M.A., Hill, C.W.L., 1998. Managing the new product development process: strategic imperatives. Academy of Management Executive 12 (3), 67–81.
Schlesinger, L.A., Heskett, J.L., 1991. The service-driven service company. Harvard Business Review 71–81.
Scudder, G.D., Hill, C.A., 1998. A review and classification of empirical research in operations management. Journal of Operations Management 16, 91–101.
Slater, S.F., Narver, J.C., 1999. Market-oriented is more than being customer-led. Strategic Management Journal 20 (12), 1165–1168.
Sundbo, J., 1997. Management of innovation in services. The Service Industries Journal 17 (3), 432–455.


Tax, S.S., Stuart, I., 1997. Designing and implementing new services: the challenges of integrating service systems. Journal of Retailing 73 (1), 105–134.
Thomke, S.H., 2003a. R&D comes to services. Harvard Business Review 3–11.
Thomke, S.H., 2003b. Experimentation Matters. Harvard Business School Press, Boston, MA.
Thwaites, D., 1992. Organizational influences on the new product development process in financial services. Journal of Product Innovation Management 9, 303–313.
Tidd, J., Hull, F.M., 2003. Service Innovation. Imperial College Press, London, UK.
Tippins, M.J., Sohi, R.S., 2003. IT competency and firm performance: is organizational learning a missing link? Strategic Management Journal 24, 745–761.
Van de Ven, A.H., 1986. Central problems in the management of innovation. Management Science 32 (5), 590–607.

van den Burg, W., Lewis, C., 1988. Some properties of two measures of multivariate association. Psychometrika 53 (1), 109–122.
Venkatraman, N., 1989. Strategic orientation of business enterprises: the construct, dimensionality, and measurement. Management Science 35 (8), 942–962.
Verma, R., Fitzsimmons, J., Heineke, J., Davis, M., 2002. New issues and opportunities in service design research. Journal of Operations Management 20, 117–120.
Voss, C., Johnston, R., Silvestro, R., Fitzgerald, L., Brignall, T., 1992. Measurement of innovation and design performance in services. Design Management Journal 40–46.
West, S.G., Finch, J.F., Curran, P.J., 1995. Structural equation models with nonnormal variables—problems and remedies. In: Hoyle, R.H. (Ed.), Structural Equation Modeling Concepts, Issues, and Applications. Sage Publications, Thousand Oaks, CA.