An action-research based instrument for monitoring continuous quality improvement


European Journal of Operational Research 166 (2005) 293–309 www.elsevier.com/locate/dsw

Production, Manufacturing and Logistics

Victor R. Prybutok a, Ranga Ramasesh b,*

a Center for Quality and Productivity, College of Business Administration, University of North Texas, Denton, TX 76203, USA
b M.J. Neeley School of Business, Texas Christian University, P.O. Box 298530, Fort Worth, TX 76129, USA

* Corresponding author. Tel.: +1-817-257-7194; fax: +1-817-257-7227. E-mail addresses: [email protected] (V.R. Prybutok), [email protected] (R. Ramasesh).

Received 18 March 2003; accepted 11 February 2004
Available online 7 May 2004

Abstract

Despite containing an extensive body of normative or prescriptive studies, quality management literature offers little by way of generally applicable guidance concerning how to measure or monitor the critical factors underlying strategic quality management initiatives, such as total quality management and continuous quality improvement. Although several studies have relied on survey data from a multiple set of sources to unearth models of such factors, they do not offer general guidelines to select factors appropriate in a specific setting. In this paper, in contradistinction to the multiple-source survey methodology, we take an action-research approach and present the findings of a contextually specific, single-site empirical research that we carried out at Lockheed Martin Tactical Aircraft Systems, in Fort Worth, Texas. We discuss the implications of our findings for extending our empirical understanding of the factors underlying strategic quality management programs and for the development of reliable and valid instruments to monitor them.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Quality management; Continuous quality improvement; Survey instrument design; Measurement of perception of quality practices

1. Introduction

Continuous quality improvement (CQI) has emerged as a dominant theme for survival and growth in today's fiercely competitive business environment.


CQI is the culmination of a progressive transformation of quality management themes that has evolved through the stages of "quality by inspection," "statistical quality control (SQC)," "quality assurance (QA)," and "total quality management (TQM)." This quality transformation points to a fundamental shift that goes beyond the quality of products or services and focuses on quality improvement as a day-to-day mindset. CQI is a never-ending process that seeks to achieve defect-free, high-quality products or services. Because CQI is an ongoing process, it is imperative that firms monitor the CQI program on a regular basis to ensure that it is working well and to continually identify areas for improvement.



In order to effectively monitor CQI, one needs a reliable and valid instrument to collect data on the factors underlying CQI. However, a review of the extensive literature on quality management reveals that there is currently no theory to guide the selection of such factors. Consequently, a number of studies have endeavored to identify the critical factors of quality management and TQM using data collected from surveys (Saraph et al., 1989; Flynn et al., 1994; Black and Porter, 1996; among others). These are excellent survey-based studies, and careful attention has been given to ensure the reliability and the validity of the items included in the survey instruments used in these studies. Yet, the findings of the surveys have limited general applicability to guide the selection of the factors underlying quality management initiatives such as TQM and CQI, for several reasons. First, the basis for the choice of the preconceived factors is different across surveys. The survey in Saraph et al. (1989) was based on the normative prescriptions of the acknowledged quality experts or "gurus," while the survey by Flynn et al. (1994) focused on the practitioner and empirical literature on quality practice in the US and Japan. Second, the different surveys have focused on respondents at different levels. For example, while the survey by Saraph et al. (1989) focused on administrative and quality managers at the business unit level, the survey by Flynn et al. (1994) focused on respondents at the plant level. Taking a slightly different approach, Black and Porter (1996) relied exclusively on the Malcolm Baldrige National Quality Award (MBNQA) framework in selecting the critical factors for their survey of a sample of the members of the European Foundation for Quality Management. Third, surveys are based on the perceptions and the experiences of the respondents, which vary widely across industries, firms within industries, and functional responsibilities of respondents within the firms. However, since the adequacy of a CQI program depends on the perceptions and the experiences of personnel at all levels and across a variety of functional responsibilities within a single firm, the findings of the surveys have limited applicability in specific settings.

Following the development of a very comprehensive survey-based instrument for use at the plant level, Flynn et al. (1994) expressed this concern succinctly. They state: "Although we believe this to be a strength of the instrument, it also limits its usefulness, not permitting assessment of quality management strategy at the corporate and division levels, nor a comparison of the initiatives between various levels" (Flynn et al., 1994, p. 361). It is thus clear that, since we do not as yet have a well-founded theory of quality management, it is not feasible at this stage to develop a universally applicable instrument for monitoring quality management initiatives such as TQM and CQI. Nor can we find in the survey-based studies a single "model" (i.e., a set of factors) that has established itself as a generally acceptable basis for CQI. This has meant a lack of easily applicable methods for identifying key factors that are theoretically sound and empirically valid for monitoring a CQI program. Further, since it is impossible to generalize the findings of surveys conducted in diverse settings, we are in a dilemma that is aptly described by Miles (1979). Miles describes the frustrations of trying to codify diverse situations and concludes by asking: "what are the possible conceptual and organizational solutions to the steady tension between the unique, contextually specific nature of single sites, and the need to make sense across a number of sites? Must we trade close-up descriptive validity for accurate but 'thin' generalization?" (Miles, 1979, p. 599).

Embracing the spirit of this observation, in this paper we present the findings of a context-specific, single-site "action-research" project we carried out at Lockheed Martin Tactical Aircraft Systems (LMTAS) in Fort Worth, Texas, that led to the development of a reliable and valid survey instrument to monitor the firm's CQI program. In presenting our action research conducted at a single site, which is a departure from the traditional survey methodology, we are further motivated by the assertive stance of the organizational theorist Mintzberg (1979): "Organizational theory has, I believe, paid dearly for the obsession with rigor. Too many of the results have been significant only in the statistical sense. What, for example, is wrong with samples of one? Should Piaget apologize for studying his own children, a physicist for splitting only one atom?" (Mintzberg, 1979, p. 583).


"Action research" can be seen as a variant of "case research," but it goes beyond case research. Whereas a case researcher is an independent observer, an action researcher is a participant in the study who implements a system but simultaneously wants to evaluate a certain intervention technique. "The action researcher is not an independent observer, but becomes a participant, and the process of change or development becomes the subject of research..." (Benbasat et al., 1987, p. 371). In the spirit of action research, we became part of a team that included production managers and quality control specialists from LMTAS. The research project was approved by the top management at LMTAS and was assured of its full support. Nonetheless, it was not a contract to provide a specific service as in a consultancy engagement. Nor was it a survey to collect and disseminate data. The focus of the research was to understand the key factors underlying the CQI initiative at LMTAS and to develop an instrument to monitor the CQI program.

The goal of this paper is to present key insights from this single-site, context-specific research and to make meaningful contributions to the quality management literature and the practice of quality management by accomplishing two objectives. First, we describe the "process" based on the rigorous action-research methodology that we used for the development of the survey instrument. Although action-research methodology is thought to be most effective for technique development and theory building, it has not yet seen much application in production and operations management; it is time to expand our limited set of worn-out paradigms and consider new research methods from paradigms used in our sister fields (Meredith et al., 1989). In describing the process, besides emphasizing the attempts made to ensure validity and reliability at all stages, we also explore the paths and obstacles to success and the need to recognize the uniqueness of the firm's operating environment.


Second, we present a comparison of the "product," i.e., the model of the critical factors underlying the CQI program that emerges from our study, with the models of critical factors reported in three earlier studies, two related to quality management in general and the other related to TQM. Through this comparison, we endeavor to make a contribution toward the development of an empirical understanding of the critical factors of the self-assessment frameworks in quality management: "The availability of hard evidence based on a sound research methodology should enable the development of reliable and valid diagnostic instruments for organizations to use in the development and improvement of their TQM systems. Only by this route can a positive, meaningful and sustainable step be made towards filling current gaps in our knowledge and understanding of TQM" (Black and Porter, 1996, p. 4).

The rest of the paper is organized as follows. We first provide an overview of the related literature. We next present our research by describing the setting, the details of our methodology for data collection, and the statistical techniques we used for data analysis. Then, we discuss the results of our study. Finally, we conclude by summarizing the implications of our study for policy implementation and monitoring decisions pertaining to strategic quality management initiatives.

2. Overview of the literature

The literature on quality management related to CQI is extensive and has appeared in a variety of contexts. Our intent in this section is not to engage in a review of this extensive literature, but to provide the readers with an overview of the literature that we reviewed as a precursor to our action research. We present this overview by classifying the extensive body of literature into three categories: (A) anecdotal and normative or prescriptive literature, (B) certification or awards guidelines, and (C) survey-based research.

2.1. Anecdotal and normative or prescriptive literature

CQI literature consists mainly of case studies, anecdotal evidence, and the prescriptive measures attributed to the recognized experts in the field of quality.


CQI finds its philosophical underpinnings in the works of Deming (1975), Juran (1962), Taguchi (1986), and Crosby (1979). Some popular prescriptions of the quality experts are the "14 points" of Deming (1982), the "10 steps" of Juran (1962), and the "14 steps" of Crosby (1979). References to CQI may be found in the literature on a wide range of management approaches for which CQI is a focal point. These approaches include:
• benchmarking (Camp, 1989; Zairi, 1992),
• customer feedback (Destatnik, 1992; N.I.S.T., 1992, 1997, 2000),
• just-in-time management (Saraph et al., 1989; Monden, 1983; Schonberger, 1982),
• leadership (Crosby, 1979; Deming, 1982; Oakland, 1993),
• planning (Groocock, 1986; Juran, 1962),
• process management (Juran, 1962; Deming, 1982; Oakland, 1993; Shewhart, 1931),
• process control (Deming, 1975; Ishikawa, 1985; Juran, 1962; Shewhart, 1931),
• quality policies and systems (Crosby, 1979; Feigenbaum, 1961; Juran, 1962; Oakland, 1993),
• reengineering (Hammer and Champy, 1993),
• supplier management (Crosby, 1979; Deming, 1982; Feigenbaum, 1961; Juran, 1962),
• teamwork (Ishikawa, 1985; Joiner, 1986; Juran, 1962; Kanji, 1990; Oakland, 1993),
• training (Deming, 1982; Feigenbaum, 1961; Ishikawa, 1985; Juran, 1962), and
• zero defects (Crosby, 1979).

There is no consensus as to which prescriptive framework should form the basis for CQI, and the result is a lack of a practical model that is useful for monitoring a firm's CQI program.

2.2. Certifications and awards guidelines

Various national and multinational standards of quality were developed in the quality systems arena for commercial, industrial, and military use. The publication of the ISO 9000 series in 1987 provided a firm set of criteria for adopting CQI and brought the CQI transformation process onto an international scale. ISO 9000 has been widely adopted by many nations and regional bodies. To enhance global competitiveness, many companies today are striving to achieve ISO 9000 certification. Early publications in this arena were of an anecdotal nature and include descriptions such as how Florida Power and Light successfully received the Deming Award from the Japanese Union of Scientists and Engineers (Weinberg, 1993). More recently, we see theory-building research approaches that include comparison and validation of award and certification criteria. The resulting trend among organizations has been toward adapting TQM/CQI frameworks developed using the criteria associated with key quality awards and certifications, such as ISO 9000 and the MBNQA, for internal assessment. Rao Tummala and Tang (1996) describe the conceptual framework for the core concepts of strategic quality management and the relationships among these quality management concepts and the Malcolm Baldrige criteria, the European Quality Award criteria, and ISO 9001 certification requirements. Other research has addressed quality management from an award or certification framework in conjunction with survey research. For example, Curkovic and Handfield (1996) report on the use of ISO 9000 and the Baldrige Award criteria in supplier evaluation and validate the comparison via survey methods. There exists an evolving body of literature that shows the development of survey instruments based on award and certification criteria.

2.3. Survey-based research

Many studies have relied on a questionnaire-survey methodology to capture the perceptions and experience of a range of practitioners from industry in an attempt to develop models of quality management in terms of a limited set of critical factors. In a seminal article in this stream, Saraph et al. (1989) first identified eight critical factors based on a synthesis of the quality management literature. They then developed operational measures of these factors using data collected from a survey of 162 general managers and quality managers of 89 divisions of 20 companies. Flynn et al. (1994) present a comprehensive framework for quality management research and a proposed quality measurement instrument based on a survey at the plant level. In a later study, Black and Porter (1996) relied on the MBNQA framework to identify the critical factors of TQM. Based on survey data from a sample of the members of the European Foundation for Quality Management (EFQM), they identified 10 critical factors. Powell (1995) examined the competitive advantage of TQM. Ahire et al. (1996) examined the development and validation of TQM implementation constructs.

Still, a need exists for instrument development and research that empirically contributes to the development of TQM practices and self-assessment frameworks for organizations (Black and Porter, 1996; Wu et al., 1997). To this end, Prybutok and Spink (1999) developed a quality practices assessment instrument by taking the content of each of the seven sets of MBNQA criteria and writing items requiring either a Likert-style response (strongly disagree to strongly agree) or the option to leave the item blank if it is not applicable. The seven MBNQA sets of criteria are leadership, strategic planning, customer and market focus, information and analysis, human resource focus, process management, and business results (Prybutok and Spink, 1999; Wilson and Collier, 2000). The approach in this work parallels the development of Prybutok and Spink's (1999) MBNQA instrument for healthcare, which assessed the criteria in the MBNQA, with some notable exceptions. First, the context of the application is not healthcare but rather defense manufacturing. Second, this work has a theory-building component related to the question of how to modify the measures specific to the development and monitoring of an organization's quality programs. Further, while the selection of which criteria to emphasize is organization specific, the process of criteria selection and modification of items for the purpose of providing internal feedback is quite general.

Wilson and Collier (2000) examined the MBNQA performance criteria and found support for the causal relationships in the MBNQA model. Flynn and Saladin (2001) examined the MBNQA modifications that took place in 1992 and 1997.


Their path analysis of the survey data supports the appropriateness of the modifications. Evans and Jack (2001) provide further support for the relationship between the MBNQA criteria and key results. However, these studies have assessed and/or compared the validity of the MBNQA criteria from 1987, 1992, and 1997. We found no published research that examines the validity and reliability of the 2000 MBNQA, which represents the most recent changes in the underlying philosophy and intent of the award.

With a view to making contributions toward the development of theoretical frameworks for developing and monitoring quality management programs such as TQM and CQI, we provide a comparison of our 5-factor model of CQI with the 8-factor model proposed by Saraph et al. (1989), the 11-factor model proposed by Flynn et al. (1994), and the 10-factor model proposed by Black and Porter (1996). These factor models are presented in Table 1. While there is considerable diversity among these studies, all are concerned with strategic quality management. Thus, by recognizing the common dimensions or factors underlying these models, we make a contribution toward the empirical development of a unifying framework for understanding the key factors underlying strategic quality management. Also, by recognizing the unique findings of our study, which was based on our action-research methodology, we hope to derive some important implications for policy implementation within specific practical applications.

The factor labeled "Corporate quality emphasis" in our study is comparable to factors that in other studies have been labeled "Top management leadership," "Corporate quality culture," "Strategic quality management," "Quality leadership," and "Quality improvement rewards." However, in our study, the items on this factor are much broader in scope and relate to a "sense of concern for quality over production output" rather than procedural items, such as rewards and incentives for quality improvement, that constituted the factors in other studies.


The factor labeled "Employee empowerment and responsibility" in our study would compare with factors that in other studies have been labeled "Employee relations," "People and customer management," "Teamwork structures," "Customer satisfaction orientation," and "Labor skills." In LMTAS, the items that constituted this factor emphasized that employees must be empowered to place quality above productivity and, at the same time, that full responsibility for quality must be placed on them. Once again, the items on this factor in our study involve broader concepts such as empowerment and responsibility rather than relatively narrower and more specific concepts such as training, labor skills, and teamwork. The factor labeled "Communication of corporate goals" compares with factors that in other studies have been labeled "Communication of improvement information" and "Feedback." The factor labeled "Quality auditing" in our study would compare with factors that in other studies have been labeled "Quality data and reporting," "Quality improvement measurement systems," and "Process control and feedback." The factor labeled "Long range quality program commitment" in our study is unique in the sense that we do not find comparable factors in the other models.

Table 1
Comparison of the critical factor models

LMTAS 5-factor solution (final): Corporate quality emphasis; Employee empowerment and responsibility; Communication of corporate goals; Quality auditing; Long range quality program commitment.

Saraph's 8-factor model: Top management leadership; Role of the quality department; Training; Employee relations; Quality data and reporting; Process management; Product/service design; Supplier quality management.

Black and Porter's 10-factor model: Corporate quality culture; Strategic quality management; Quality improvement measurement systems; People and customer management; Teamwork structures; Customer satisfaction orientation; Communication of improvement information; Operational quality planning; External interface management; Supplier partnerships.

Flynn et al. 11-factor model: Quality leadership; Quality improvement rewards; Feedback; Process control; New product quality; Selection for teamwork potential; Teamwork; Customer interaction; Inter-functional design process; Supplier relationship; Cleanliness and organization.

3. Details of the study

3.1. Industry and company background

This action-research study was carried out at the Tactical Aircraft Systems facility (located in Fort Worth, Texas) of the Lockheed Martin Company, a major defense contractor. Defense contractors must maintain the trust of American taxpayers in terms of the cost effectiveness and the quality of their weapons systems. Recent changes in government procurement procedures have forced these organizations to achieve globally competitive quality standards in order to compete successfully with other aerospace manufacturers. Successful global competition allows aerospace product firms to minimize the effect of downsizing due to federal spending restrictions and reductions by supplementing domestic military sales with sales of aerospace equipment in foreign markets. As a proactive measure, Lockheed Martin Tactical Aircraft Systems (LMTAS) sought certification, which would allow it to maintain US government contracts and to move toward a higher quality standard.


Following an intensive audit process conducted at the LMTAS plant in Ft. Worth, Texas, during April 1996, LMTAS received ISO 9001 certification from the British Standards Institution. It became the first major US aircraft manufacturer to receive ISO 9000 or ISO 9001 registration from an independent, external source. While ISO 9001 registration is a significant achievement, it is seen only as a first step in the process of attaining and maintaining a quality-oriented culture and developing public trust in military aircraft standards and the quality commitment of LMTAS. In order to fully maximize the gains achieved, LMTAS recognized the need to engineer a CQI process that would ensure a solid foundation capable of supporting the weight of the cultural change within the company. In developing such a process, it was also deemed essential not to overlook the importance of maintaining an acceptable level of continuing public trust in the company and its products.

Furthermore, due primarily to declining orders from the Pentagon, LMTAS has had to reduce employment at the study site by more than 20,000 employees within the last decade. As a result, LMTAS must increase productivity and quality with a reduced work force while maintaining not only the public trust, but also the trust of its employees. Maintaining employee trust is directly linked to employees' belief in the security of their employment. Through the implementation of a CQI program, LMTAS management seeks to enhance productivity and quality while concurrently right-sizing the firm to match the constraints of the emerging aerospace market. Identification of the factors underlying CQI, so that these factors may be effectively monitored on a regular basis, is the central focus of this action-research study.

3.2. Research methodology

In the tradition of action research, the academic researchers in this study became part of an independent research team that included researchers from the Center for Quality and Productivity at the University of North Texas and LMTAS personnel who are actively engaged in quality management and production.


Our study was carried out in three distinct phases. In the first phase, the active involvement of the firm's top management was sought to clarify the goals of the study. In the second phase, a pilot study was undertaken in which, based on the knowledge base currently available in the literature, seven critical factors were identified a priori to constitute a representative factor model of CQI. A preliminary survey questionnaire was designed and refined using a Delphi panel to collect sample data. An exploratory factor analysis was done to check the adequacy of the factors selected. In the third phase, based on the results of the pilot study, a detailed survey questionnaire was designed and an exploratory factor analysis was carried out to identify the set of most appropriate factors that meet the criteria of content validity and measurement reliability.

3.3. Phase 1: Clarifying the goals of the study

Recognizing that the key to a successful implementation of the CQI program is a reliable instrument for regularly monitoring the company's current quality status and the quality perceptions among employees, the goal of this study was to develop a reliable and valid survey instrument. The top management of the firm participated in several interview sessions and helped identify the key issues and objectives of the study. These key issues and objectives are based on LMTAS' mission statement, with a specific focus on its quality mission. Since the mission statement of an organization is the foundation of TQM and CQI, the inclusion of these issues and objectives is not only germane to the study and to the survey objectives, but critical to the development of a successful CQI process. These issues and objectives serve as the foundation for the survey instrument. Based on interviews with the top management, quality managers, and production managers, it was decided that the survey instrument must be designed to measure the perception of employees about issues that were deemed by top management as consistent with the company's quality mission of assuring conforming products and services:


1. To identify the strengths and the limitations of the firm's quality organization,
2. To identify areas with potential for continuous quality improvement,
3. To identify the characteristics of the firm's quality culture,
4. To determine the employees' perceptions of the company's quality organization, and
5. To ascertain the perceptions of the employees as to the relative importance of quality issues among the firm's critical success factors.

3.4. Phase 2: Pilot study

We present the research effort involved in the pilot study under the following headings: (1) selection of an a priori set of factors, (2) development of the preliminary questionnaire, (3) refinement of the questionnaire, (4) pilot survey data collection, and (5) analysis and findings of the pilot survey data.

(1) Selection of an a priori set of factors: Focusing on the objectives set by LMTAS management, and based on a review of the extensive literature, the following factors were identified as critical for CQI. While the factors below are based on the MBNQA, there is a unique focus on the need to develop and monitor quality programs. For example, unlike Evans and Jack (2001), who also address product quality, our Factor 1 is focused on the internal responsibility for product quality. Evolution of the perception about this responsibility provides executive leadership with feedback about the effectiveness of their programs. In a similar manner, Factors 2–6 measure MBNQA criteria or dimensions of the CQI framework with a focus on the effectiveness of the programs that the executive leadership established. Our work strives at theory building in its unique attempt to identify the leadership activities that drive CQI.

(a) Factor 1: Responsibility for product quality: This factor measures employee perception about the responsibility for their own work, the responsibility for product-quality improvement, the company's attitude towards individual contributions to quality improvement, and the necessity of management oversight. This factor is important because it provides information regarding the most fundamental question about quality improvement: "Who is responsible for quality?"

(b) Factor 2: General quality: This factor relates to general quality practices and measures employee perception about the overall emphasis on quality within the organization, the importance of quality as compared to productivity, the perceived relationship between quality and sales, and the importance of employees' contribution to quality. Questions related to this factor are general and not directly related to any particular products or quality improvement programs.

(c) Factor 3: Perception of the quality department: This factor captures the employees' perceptions about the quality department at LMTAS. The responses to questions related to this factor would indicate the perceived performance, communication, and acceptance of the quality department and the programs it initiates.

(d) Factor 4: Opinion on existing quality improvement programs: This factor captures the employee attitude toward quality programs in general and provides some indication of the readiness of employees for new programs. The questions related to this factor were designed to find out employees' perception of the effectiveness of current and past quality improvement programs.

(e) Factor 5: Willingness/need for new quality improvement programs: This factor captures the management's concern in ensuring whether the employees are ready for another quality improvement program. In the questions relating to this factor, employees are asked to give their opinion on whether LMTAS needs new strategies or programs to improve product and service quality.

(f) Factor 6: Training: This factor measures the strength of the relationship between training and quality improvement efforts and helps in determining the future direction of the training programs. In the questions related to this factor, employees were asked their opinion on whether the training they receive meets the quality goals of the company.

(g) Factor 7: Communication of corporate goals: This factor addresses the reality that the effectiveness of quality programs partially depends on how authentically the goals are communicated to employees.


Questions related to this factor seek information on the effectiveness of the communication with respect to the company's quality, financial, and productivity goals.

(2) Development of the preliminary questionnaire: A survey questionnaire was developed for administration to a randomly selected target sample that would be drawn from all union and non-union employees and all business units. As the first step in the questionnaire-development process, the research team conducted a thorough literature review on existing instruments for monitoring quality and its improvement. Special attention was devoted to the key quality issues relevant to the aircraft/aerospace industry, and to the management's concern that common industry practices may not fit the unique company culture given its special operating environment. In identifying the key quality issues, the company's historical record of project management and relevant documents pertaining to the company's quality practices were reviewed.

In developing the preliminary questionnaire, the research team was guided by a number of principles. Multiple questions were used to assess each key issue. Each individual question was designed such that it measured only a single dimension of an issue. Similar questions were asked multiple times to facilitate reliability checks. The number of questions was limited such that the respondents could complete the questionnaire in about 20 minutes.

The preliminary version of the questionnaire consisted of three related parts. The first part was designed to elicit closed-ended responses to 36 statements. Respondents were asked to express the degree to which they agreed or disagreed with each statement using a 5-point Likert scale, where 1 meant the respondent strongly agreed with the statement and 5 meant the respondent strongly disagreed. The second part included seven open-ended questions, giving the respondents an opportunity to provide suggestions, opinions, and perceptions in their own words. This part was intended to obtain feedback on issues that might not have been covered by the closed-ended questions in the first part of the questionnaire.


The third part, called the "Critical Success Factor Survey," contained 28 indicators of business success, many of which were general factors for success in any business, while others were specific to the firm's operating environment. Respondents were asked to rate these factors on a scale of one to five, based on the perceived importance of each factor to the company's long-term success.

(3) Refinement of the questionnaire using a Delphi panel: Next, the preliminary version of the questionnaire was refined using the Delphi technique (expert panel, as per Dalkey et al., 1972) with a panel of 10 experts. With the help of the project directors, these experts were carefully selected based on several criteria such as experience, education, communication skills, and commitment to the company's goals. The experts represented a wide range of functional specializations, and they were perceived as both highly motivated and dedicated to the long-term future of LMTAS. The experts agreed to serve on the panel voluntarily and allocated their time exclusively for the study. No incentives were given for their participation. The Delphi panel discussed each question/statement individually in terms of its appropriateness, validity, possible biases, terminology used, semantic noise, redundancy, and other relevant factors. The panel was also encouraged to make suggestions not only regarding the relevance of each question but also regarding the elimination of irrelevant questions. With the inputs from the Delphi panel, the questionnaire was revised to include only 25 questions (instead of 36) in the first part, while the other two parts were unchanged.

(4) Pilot survey data collection: To pilot test the survey instrument, members of the Delphi panel distributed five copies of the revised questionnaire to employees within their functional departments whose opinion they respected. This was done for two reasons. First, the experts already understood the purpose and content of the questionnaire, so no additional detailed explanation or instruction was needed. Second, since the experts represented a wide variety of functional departments, the pilot-test sample also would consist of a comparable variety of employees, with different backgrounds, opinions, interests, and perspectives. This broad selection ensured the pilot test would be a rich source of information.


Most of the pilot-test questionnaires were returned within two business days, and all were returned within seven business days.

(5) Analysis and findings of pilot survey data: Survey instruments are designed to collect data about subjects of interest to the surveyors. The results are valid only if the instrument is indeed collecting the information it purports to collect. To determine whether the instrument designed for this study is valid and the results obtained can be relied upon, we analyzed the data from the first part of the pilot survey using factor analysis (Hair et al., 1998). Factor analysis is used to determine the groupings (factors) of the items on the instrument. If the instrument is valid, the items should group (load) onto factors identified in the same manner as the perceived or a priori factors on the basis of which the items/questions were selected. If the items do not load onto factors in the same groupings as the perceived factors, it suggests that the instrument is measuring factors other than the ones that were originally conceived in the design of the survey instrument.

Factor analysis of the data from the first part of the questionnaire was carried out using a varimax rotation and forcing seven factors to match the number of perceived factors. However, the factor loadings revealed that the questions did not load onto factors in the same groupings as the perceived factors. To verify the results of the factor analysis, a unidimensionality test was run on the group of questions for each perceived factor. This test consists of running a factor analysis on each subgroup, using the varimax rotation without forcing a limit on the number of factors. If the questions are all explaining the same underlying dimension, only one factor will be found for each group. With one exception (willingness/need), the results of this test generally confirmed the finding from the factor analysis of the entire data set that the perceived factors and the actual factors of the questionnaire are different. The most surprising outcome was in relation to the factor labeled "responsibility for product quality." This factor was designed to measure employee perception as to who is responsible for product quality. The factor loadings showed that the six questions that were intended to relate to this factor generated very diverse responses.
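To make this analysis step concrete, the sketch below shows how a forced seven-factor extraction with varimax rotation, followed by the per-group unidimensionality check described above, could be carried out in Python. The data file, column names, item grouping, and the use of the factor_analyzer package are illustrative assumptions; they are not taken from the original study, which reports only the analysis results.

```python
# Illustrative sketch (assumptions: pandas-readable response file with columns
# q1..q25 of 1-5 Likert responses; the factor_analyzer package is installed).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("pilot_survey.csv")           # hypothetical data file
items = [c for c in responses.columns if c.startswith("q")]

# Force seven factors to match the a priori factor model, varimax rotation.
fa = FactorAnalyzer(n_factors=7, rotation="varimax", method="principal")
fa.fit(responses[items])
loadings = pd.DataFrame(fa.loadings_, index=items)
print(loadings.round(2))                               # inspect item groupings

# Unidimensionality check: within each a priori item group, count how many
# eigenvalues of the correlation matrix exceed 1.0 (Kaiser criterion); a
# unidimensional group should retain a single factor.
a_priori_groups = {
    "responsibility_for_product_quality": ["q1", "q5", "q8", "q12", "q15", "q21"],  # illustrative
}
for name, group in a_priori_groups.items():
    corr = np.corrcoef(responses[group].to_numpy(), rowvar=False)
    n_retained = int((np.linalg.eigvalsh(corr) > 1.0).sum())
    print(name, "factors retained:", n_retained)
```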

The results of the factor analysis of the data from the preliminary survey were quite surprising. The preparation of the pilot survey had followed accepted research techniques, i.e., a thorough literature search to identify relevant variables and techniques, construction of a preliminary instrument, use of a Delphi panel to review and critique the preliminary instrument, and revision of the instrument based on the panel's recommendations. Given that our methodology was rigorous and consistently validated by a panel of experts in the firm, the results of the factor analysis of the pilot data show that the perceived 7-factor construct as a model of CQI, and the questionnaire-survey items associated with these factors, are not adequate. This finding underscores the difficulty in applying the findings of large surveys and the normative prescriptions found in the quality management literature to specific applications. We therefore undertook a more elaborate large-scale study to refine our questionnaire and develop an adequate model of CQI.

3.5. Phase 3: Large-scale study

(1) Development of a large-scale survey questionnaire: The results of the factor analysis of the pilot survey and the comments made by LMTAS employees were taken into account in the revision of the survey instrument. These revisions included the removal of seven questions, the rearrangement of one question to a new location, and the addition of nineteen new questions. In the spirit of action research, these questions were initiated by the LMTAS managers and reviewed by the members of the research team, which included both academics and practitioners. These modifications were undertaken with the expectation of developing a better survey instrument. The revised final questionnaire again consisted of three related parts. However, the first part was augmented to contain a total of 37 items.

The design of the pilot survey questionnaire was based on seven perceived factors or dimensions. In the large-scale survey, one more dimension was perceived to capture the relationship between production and quality.


Since the extra effort directed at improving quality may be perceived as an impediment to meeting the production schedules upon which the company's livelihood depends, new questions were included to seek the employees' perception of the relationship between quality and production at LMTAS. As before, each item was designed to elicit a closed-ended response to a statement and asked subjects to express the degree to which they agreed or disagreed with each statement using a 5-point Likert scale. The questionnaire items developed for the large-scale study are presented in Table 2.

Four thousand seven hundred large-scale survey questionnaires were sent out and 1003 completed responses were received, for a response rate of 21%. After recoding of the reverse-scored items, the questions were all scored in a manner such that agreement with the statement indicated a positive response to the firm and/or its programs. The mean response scores for each item are also shown in Table 2. The mean rating across all questions was 3.15. Examination of the clustering of the means showed some interesting results. The questions showing the highest agreement all relate to the ultimate responsibility of the individual employee for the quality of the goods and services produced at LMTAS. Three questions reflect this attitude: "employees should be responsible for the quality of their own work" (4.68), "individual employees should be accountable for the quality of their own work" (4.56), and "I am responsible for the quality of my own work" (4.41). This indicates that the respondents feel they should be, and are, responsible for quality.
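As a concrete illustration of the recoding and descriptive step just described, the following sketch recodes the reverse-scored items and computes the item means; the file name and column layout are assumptions made for illustration (Table 2 flags Questions 6 and 25 as reverse scored).

```python
# Illustrative sketch: recoding reverse-scored items and summarizing responses.
# Assumes a DataFrame with columns q1..q37 holding 1-5 Likert responses.
import pandas as pd

survey = pd.read_csv("large_scale_survey.csv")          # hypothetical data file
item_cols = [f"q{i}" for i in range(1, 38)]

# Recode the reverse-scored items (1<->5, 2<->4) so that, for every item,
# agreement indicates a positive response to the firm and/or its programs.
reverse_scored = ["q6", "q25"]
survey[reverse_scored] = 6 - survey[reverse_scored]

item_means = survey[item_cols].mean().round(2)
print(item_means.sort_values(ascending=False).head(3))   # highest-agreement items
print("mean rating across all questions:", round(item_means.mean(), 2))
print("response rate:", round(1003 / 4700, 3))            # about 0.21, i.e., 21%
```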


(2) Factor analysis of the large-scale survey data: In order to identify the appropriate number of factors that would serve as an adequate factor model of CQI, we performed a series of exploratory factor analyses. Of the 19 questions added to the original instrument, only six loaded cleanly onto a single factor. Of the remaining 13 additional questions, six loaded as noise. The seven questions that loaded on more than one factor show that these items are not measuring the single construct each was theorized to measure. Additionally, a significant number of questions that had loaded cleanly in the pilot sample did not respond in the same manner in the large-scale sample. Eight of these questions loaded as noise, while another question that had previously loaded on a single factor now loaded on two.

During the course of the factor analyses, refinements were made to the survey instrument. Questions that did not load cleanly on even one factor were dropped from further analysis. Any item that failed to exhibit a factor loading of 0.50 on any dimension, or that exhibited a factor loading of 0.30 on more than one dimension, was eliminated. This procedure was repeated until each item loaded with a factor loading greater than 0.50 on a single dimension (Hair et al., 1998). Also, Question 6 on the large-scale survey questionnaire was dropped because it measured the same information as Question 18 and was intended as a validity check. Question 20 from the large-scale survey was eliminated because it was found to load with noise on multiple factors. With all the refinements, the final factor analysis solution for the large-scale survey resulted in a 5-factor construct with 12 survey items, all of which loaded cleanly. Differences between this final large-scale solution and the originally hypothesized factors in the pilot study were anticipated because of the numerous iterations and changes that had taken place to the items within the instrument. In reviewing and comparing the results of the two surveys, it is apparent that the revisions made after the pilot study had an effect on the factor structure. The factors, the items in each factor, and the labels for the factors are presented in Table 3 and briefly discussed here.
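The iterative loading-based elimination rule described above (retain an item only if it loads at least 0.50 on exactly one factor and below 0.30 on all others, refitting after each round of deletions) can be sketched as follows. The helper function, its parameters, and the use of the factor_analyzer package are illustrative assumptions, not the authors' actual procedure or software.

```python
# Illustrative sketch of the iterative 0.50/0.30 item-elimination rule.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer


def prune_items(data: pd.DataFrame, n_factors: int,
                primary: float = 0.50, cross: float = 0.30) -> pd.DataFrame:
    """Return varimax loadings for the items surviving the 0.50/0.30 rule."""
    items = list(data.columns)
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                            method="principal")
        fa.fit(data[items])
        loadings = np.abs(fa.loadings_)
        keep = []
        for i, item in enumerate(items):
            strong = (loadings[i] >= primary).sum() == 1   # one dominant loading
            clean = (loadings[i] >= cross).sum() <= 1      # no cross-loadings
            if strong and clean:
                keep.append(item)
        if keep == items or not keep:      # solution is stable (or nothing survives)
            return pd.DataFrame(fa.loadings_, index=items,
                                columns=[f"F{j + 1}" for j in range(n_factors)])
        items = keep                       # drop offending items and refit


# Usage with hypothetical data: loadings = prune_items(survey[item_cols], n_factors=5)
```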


Table 2
List of large-scale survey questions and response means (response mean shown in parentheses)

1. Employees at LMTAS should be responsible for the quality of their own work (4.68)
2. Quality is emphasized consistently throughout LMTAS (3.12)
3. The quality department of LMTAS contributes to improvement in product quality (3.24)
4. At LMTAS, the systems that ensure quality are adequate (2.81)
5. Employees should be periodically monitored to ensure the quality of their work (3.91)
6. LMTAS needs a completely new product-quality improvement strategy (3.21 a)
7. I receive adequate formal training required for me to meet the quality objectives of my position (3.02)
8. I am responsible for the quality of my own work (4.41)
9. LMTAS has an effective quality improvement program (2.88)
10. Products and services at LMTAS adjust quickly to changing market conditions (2.48)
11. At LMTAS, quality is more important than productivity (2.50)
12. Employees should not be monitored to ensure the quality of their work (3.82)
13. At LMTAS, quality improvement programs contribute to improvement in productivity (3.01)
14. Product and service quality can be ensured under the current quality systems at LMTAS (2.87)
15. Individual employees should be accountable for the quality of their own work (4.56)
16. LMTAS places the appropriate emphasis on quality (2.99)
17. Quality is designed into the products and services at LMTAS (3.12)
18. LMTAS does not need a completely new product-quality improvement strategy (2.93)
19. At LMTAS, training meets the quality goals (2.93)
20. At LMTAS, our internal customers feel we produce quality products and services (3.23)
21. Quality programs contribute to improved quality in LMTAS products and services (3.15)
22. LMTAS' quality goals are adequately communicated to me (2.99)
23. LMTAS' financial goals are adequately communicated to me (2.86)
24. LMTAS' productivity goals are adequately communicated to me (3.10)
25. Product and service schedules should be more important than quality of output (1.80 a)
26. At LMTAS, the systems that ensure quality are adequate (2.76)
27. At LMTAS, the main purpose of the quality organization is to monitor compliance with quality standards (3.33)
28. LMTAS management is committed to the success of quality programs (2.84)
29. I am recognized and rewarded for the quality of my work at LMTAS (2.41)
30. At LMTAS, product and service quality is more important than production schedules (2.46)
31. LMTAS encourages individual employee contributions to product quality (2.99)
32. Employees at LMTAS are accountable for the quality of their own work (2.92)
33. At LMTAS, communication between management and non-management employees on quality issues is adequate (2.47)
34. Functional barriers between departments hurt the success of quality programs at LMTAS (2.98)
35. LMTAS has satisfactory plans to reach its quality goals (3.62)
36. LMTAS has appropriate procedures for determining the quality of products and services (2.88)
37. LMTAS' procedures and operations focus on the company goals (financial, productivity, and quality) (3.24)

a Denotes that this item was reverse scored.



Table 3
Factors, questions, and description (question numbers refer to Table 2)

Factor 1: Corporate quality emphasis (Cronbach's alpha = 0.7884)
2 a  Quality is emphasized consistently throughout LMTAS
11   At LMTAS, quality is more important than productivity
18   LMTAS does not need a completely new product-quality improvement strategy
30   At LMTAS, product quality is more important than product schedules

Factor 2: Employee empowerment and responsibility (Cronbach's alpha = 0.7071)
1 b  Employees at LMTAS should be responsible for the quality of their own work
8    I am responsible for the quality of my own work
15   Individual employees should be accountable for the quality of their own work

Factor 3: Communication of corporate goals (Cronbach's alpha = 0.8029)
23   LMTAS' financial goals are adequately communicated to me
24   LMTAS' productivity goals are adequately communicated to me

Factor 4: Quality auditing (Cronbach's alpha = 0.7430)
5    Employees should be periodically monitored to ensure the quality of their work
12   Employees should not be monitored to ensure the quality of their work

Factor 5: Long range quality program commitment (Cronbach's alpha = 1.0000)
25   Product and service schedules should be more important than quality of output

a Italics indicates the question also appeared in the pilot survey.
b In the pilot survey, this question loaded on a different factor.

(a) Factor 1––Corporate quality emphasis: Four questions, all of which focus on the importance of quality to the success of an organization, load onto this factor. Question 2 (Quality is emphasized consistently throughout LMTAS) seeks information about how highly quality is valued by LMTAS as an organization. Question 11 (At LMTAS, quality is more important than productivity) and Question 30 (At LMTAS, product and service quality is more important than production schedules) seek information about the relative importance of quality versus production. Question 18 seeks information on the need for new ways to meet product-quality objectives, whether or not the current system is meeting the need. Taken together, the questions relating to this factor provide a global view of the organization's commitment to quality.

(b) Factor 2––Employee empowerment and responsibility: The three questions in this factor deal with who is responsible for the quality of the products produced at LMTAS. All three questions, i.e., "Employees at LMTAS should be responsible for the quality of their own work," "I am responsible for the quality of my own work," and "Individual employees should be accountable for the quality of their own work," focus on the individual and the individual's impact on the quality of the end product of the firm.

(c) Factor 3––Communication of corporate goals: The two questions that relate to this factor underscore the role of effective communication of the firm's short-term and long-term goals to the employees. One of the questions (LMTAS' financial goals are adequately communicated to me) looks at financial-goal communication, and the other (LMTAS' productivity goals are adequately communicated to me) looks at productivity-goal communication. The importance and relationship of these goals is apparently clear to the respondents.

(d) Factor 4––Quality auditing: The two questions that relate to this factor emphasize the need for active monitoring of the employees in ensuring quality. Given that Factor 2 emphasizes the role of individual responsibility and accountability, the emergence of a separate factor underscoring the need to regularly monitor the employees is counter-intuitive, but insightful.


This may be indicative of the fact that, while the individual employees believe that they are responsible for, and most likely capable of, producing quality products, they have concerns about whether their fellow workers will demonstrate the same level of commitment to quality. It is also possible that they believe that the criticality of the products (aircraft, jet fighters, etc.) warrants close monitoring of the employees' work, even though the employees may be highly responsible.

(e) Factor 5––Long range quality program commitment: This factor is interesting in that it consists of only one item, Question 25 (Product and service schedules should be more important than quality of output). For analysis purposes, this item was reverse scored because its wording is the opposite of Question 30 (At LMTAS, product quality is more important than product schedules), which loaded on Factor 1. Conceptually, therefore, Question 25 should have loaded on the same factor, i.e., Factor 1. During the exploratory factor analysis, when a 4-factor solution was forced, Question 25 loaded onto the same factor as Question 30. However, the loading was weak and showed an inverse relationship to the factor. It also loaded with noise onto other factors, indicating that a 4-factor solution was not a good one from a statistical point of view. A 5-factor solution in which Question 25 (which is simply the opposite of Question 30) constitutes its own factor is not only good from a statistical point of view, but is also practically meaningful. The meaning behind the separate loading of this question becomes clear when we look at the mean score for Question 30, which is quite low at 2.46. This suggests that the respondents feel that the organization does not fulfill the commitment to quality it professes. By loading Question 25 onto a separate factor, emphasis is clearly and appropriately placed on the employees' perception of the firm's role in carrying through its commitments.

The survey instrument with the questionnaire items constituting the five factors satisfied the criteria of reliability and validity (Nunnally and Bernstein, 1994). Reliability is assessed by checking the intercorrelation among the items that constitute a factor and by determining Cronbach's alpha for each factor (Cronbach, 1951). A value of 0.7 or above is considered acceptable.

In our 5-factor solution, the alpha values for the first four factors were 0.7884, 0.7071, 0.8029, and 0.7430. Since the fifth factor had only one item, its alpha would be, trivially, equal to 1.0000. Content validity is assured naturally, as the experts and practitioners were actively involved in the process of developing the instrument. Construct validity was ensured through the use of principal components factor analysis with varimax rotation and the derivation of a solution in which the items loaded cleanly on a single factor. Based on the foregoing analysis, the 5-factor model was chosen to monitor the CQI program at LMTAS.
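For readers who wish to reproduce the reliability check, Cronbach's alpha can be computed directly from the item scores, as in the minimal sketch below. The data layout is assumed, the item groupings are taken from Table 3, and the single-item fifth factor is omitted because the formula requires at least two items (the text treats its alpha as trivially 1.0).

```python
# Minimal sketch: Cronbach's alpha for each factor's item set (Cronbach, 1951).
# Assumes recoded 1-5 responses in columns q1..q37 of a hypothetical data file.
import pandas as pd

survey = pd.read_csv("large_scale_survey.csv")


def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


factor_items = {                      # item groupings reported in Table 3
    "Corporate quality emphasis": ["q2", "q11", "q18", "q30"],
    "Employee empowerment and responsibility": ["q1", "q8", "q15"],
    "Communication of corporate goals": ["q23", "q24"],
    "Quality auditing": ["q5", "q12"],
}
for label, cols in factor_items.items():
    print(label, round(cronbach_alpha(survey[cols]), 4))   # expect values >= 0.70
```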

4. Discussion of the results

By recognizing the unique findings of our study, which was based on our action-research methodology, we derive some important implications for policy implementation in specific practical applications. In our study at LMTAS, the items that load on the "Corporate quality emphasis" factor are much broader in scope and relate to a "sense of concern for quality over production output" rather than procedural items, such as rewards and incentives for quality improvement, which constituted the factors in other studies. While this factor was discovered independently as part of this action research, theoretical support for the factor is found in the recent publication by Terziovski et al. (2003), who examine the link between employee attitudes and quality culture. The insight from our study is that in technologically complex or sophisticated operations, in which the consequences of less than absolute quality are severe, top management must emphasize and monitor the "quality infrastructure," or the way people feel about quality as a way of life or day-to-day mindset.

In LMTAS, the items that constituted the "Employee empowerment and responsibility" factor emphasized that employees must be empowered to place quality above productivity and, at the same time, that full responsibility for quality must be placed on them.

V.R. Prybutok, R. Ramasesh / European Journal of Operational Research 166 (2005) 293–309

specific concepts such as training, labor skills and teamwork. In our study the items that constitute the ‘‘Quality auditing’’ factor provide an important, but somewhat counter-intuitive insight. Even though the respondents underscored the importance of employee empowerment and assumption of responsibility by the employees in one of the factors, in another factor they indicated a need for their work as well as their colleagues’ work to be carefully audited. This has important implications from a policy implementation perspective. It points out the need to carefully accomplish two seemingly opposite acts: one, empowering the employees and trusting them to be responsible for quality, and the other, closely auditing the quality of their work. The factor labeled as ‘‘Long range quality program commitment,’’ in our study is unique in the sense that we do not find comparable factors in the other models. Furthermore, only one item, which brought into focus the balance between quality and production output, constituted this factor. This unique finding points to the fact that in industries where there is a great deal of product safety and liability considerations, the single most dominant factor is the management of pressures to push production on one hand and not sacrifice quality on the other. This factor is also unique in that a single item, i.e., Item 25 (which is in essence the opposite of Item 30 that loads on a different factor) loads on it to yield a statistically sound factor solution. We view this fifth factor as underscoring the importance of the employees’ perception of the firm’s role in carrying through its commitment to quality programs. In the light of the fact that LMTAS had already gone through such initiatives as TQM certification for its vendors and ISO 9000, the respondents’ concern for long-term commitment as an important factor for CQI is well placed. From a practical point of view, this implies that long-term commitment is imperative to the success of any strategic quality improvement program.

5. Conclusions

Despite containing an extensive body of normative or prescriptive studies, the quality management literature offers little by way of generally applicable guidance on how to measure or monitor the critical factors underlying strategic quality management initiatives. Although several studies have relied on survey data from multiple sources to unearth models of such factors, they do not offer general guidelines for selecting the factors appropriate to a specific setting. In this paper, instead of the multiple-source survey methodology, we have adopted an action-research methodology and presented the findings of a contextually specific, single-site empirical study carried out at Lockheed Martin Tactical Aircraft Systems (LMTAS) in Fort Worth, Texas, that led to the development of a reliable and valid survey instrument to monitor the firm's continuous quality improvement (CQI) program. While no claims can be made for the generality of the monitoring instrument we have developed or of the 5-factor model underlying it, the methodology that worked for us and the insights we derive from comparing our model with three other survey-based models have useful implications for extending the empirical understanding and the monitoring of strategic quality management initiatives.

Our action-research methodology included a number of key features that ensured the validity and reliability of the monitoring instrument: (i) active top management involvement, (ii) a Delphi panel of in-house quality experts and production specialists, (iii) confirmatory factor analysis based on preliminary factors conceived from an extensive review and analysis of the literature, (iv) exploratory factor analysis to identify the final factors (a brief illustrative sketch of this step follows these conclusions), and (v) detailed statistical analysis in line with the well-established tradition of factor analysis. Our methodology builds in the ingredients that are essential for the successful development of a reliable and valid instrument, namely, top management commitment and the active participation of the in-house specialists who were part of the research team.

By examining the unique findings of our study and relating them to the characteristics of the specific site, we derived some important implications for policy implementation in practice.



Although these implications do not have universal applicability, they provide a compelling argument for a careful methodology, like the one presented here, for unearthing the significant factors that are unique to a given setting. We feel that identifying the set of significant factors specific to a given setting would eliminate the need for a number of routine factors and result in a parsimonious, yet reliable, factor model. Our study also demonstrates that it is important, from a practical perspective, to recognize the subtle differences in the way the same generic notion (for example, leadership or communication) is perceived in a specific setting. Finally, we note that this study represents a modest first step toward the empirical understanding of strategic quality management in the tradition of "action research," which is new to operations management. Our paper also represents an important step toward reconnecting academia with the "real world" by regaining the relevance of academic research. We recognize that the generalizability of our findings to other organizations and industries remains an open question at this stage. However, we hope that our study will motivate further work in firms in other industries, both similar and different, and thus serve toward the development of a knowledge base for designing reliable instruments for monitoring and self-assessment.
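As a concrete illustration of the exploratory step mentioned above, the following Python sketch runs a principal components factor analysis with Kaiser's varimax rotation on hypothetical Likert-scale responses. It is a minimal sketch under assumed data, item counts, and a textbook varimax routine; it is not the analysis carried out at LMTAS.

import numpy as np

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Kaiser's varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Target matrix for the orthogonal Procrustes step of the varimax criterion.
        target = rotated ** 3 - rotated @ np.diag(np.sum(rotated ** 2, axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        d_new = s.sum()
        if d_old != 0 and d_new / d_old < 1 + tol:
            break
        d_old = d_new
    return loadings @ rotation

def pca_loadings(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Unrotated principal-component loadings obtained from the item correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]        # largest eigenvalues first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Hypothetical responses: 100 respondents, 12 items on a 1-5 scale (stand-ins, not survey data).
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(100, 12)).astype(float)

loadings = varimax(pca_loadings(responses, n_factors=5))
print(np.round(loadings, 2))   # inspect which items load cleanly on which factor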

Acknowledgements

The authors gratefully acknowledge the comments and suggestions for improvement provided by two anonymous referees on an earlier version of this paper.

References

Ahire, S.L., Golhar, D.Y., Waller, M.A., 1996. Development and validation of TQM implementation constructs. Decision Sciences 27 (1), 23–55.
Benbasat, I., Goldstein, D.K., Mead, M., 1987. The case research strategy in studies of information systems. MIS Quarterly (Sept.), 369–386.
Black, S.A., Porter, L.J., 1996. Identification of the critical factors of TQM. Decision Sciences 27 (1), 1–21.

Camp, R.C., 1989. Benchmarking: The Search for Industry Best Practices. ASQC Quality Press, White Plains, NY.
Cronbach, L.J., 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334.
Crosby, P.B., 1979. Quality is Free. McGraw-Hill, New York.
Curkovic, S., Handfield, R., 1996. Use of ISO 9000 and Baldrige Award criteria in supplier quality evaluation. International Journal of Purchasing and Materials Management 32 (2), 2–12, Spring.
Dalkey, N.C., Rourke, D.L., Lewis, R., Snyder, D., 1972. Studies in the Quality of Life: Delphi and Decision-Making. Lexington Books, Lexington, MA.
Deming, W.E., 1975. On some statistical aids toward economic production. Interfaces 5 (4), 1–15.
Deming, W.E., 1982. Out of the Crisis. Cambridge University Press, Cambridge.
Destatnik, R.L., 1992. Inside the Baldrige Award guidelines, category 7: Customer focus and satisfaction. Quality Progress 25 (12), 69–74.
Evans, J.R., Jack, E.P., 2001. Validating key results linkages in the Baldrige performance excellence model. Quality Management Journal 10 (2), 7–24.
Feigenbaum, A.V., 1961. Total Quality Control. McGraw-Hill, New York.
Flynn, B.B., Saladin, B., 2001. Further evidence on the validity of the theoretical models underlying the Baldrige criteria. Journal of Operations Management 19, 617–652.
Flynn, B.B., Schroeder, R.G., Sakakibara, S., 1994. A framework for quality management: An associated measurement instrument. Journal of Operations Management 11, 339–366.
Groocock, J.M., 1986. The Chain of Quality. John Wiley, Chichester.
Hair, J.F., Anderson, R.E., Tatham, R.L., 1998. Multivariate Data Analysis, fifth ed. McMillan, New York.
Hammer, M., Champy, J., 1993. Reengineering the Corporation––A Manifesto for Business Revolution. Harper Collins Publishers, Inc., New York.
Ishikawa, K., 1985. What is Total Quality Control: The Japanese Way. Prentice-Hall, London.
Joiner, B.L., 1986. The quality manager's new job. Quality Progress 19 (10), 52–56.
Juran, J.M., 1962. Quality Control Handbook, second ed. McGraw-Hill, London.
Kanji, G.K., 1990. Total quality management: The second industrial revolution. Total Quality Management 1 (1), 3–11.
Meredith, J.R., Raturi, A., Gyampah, A.K., Kaplan, B., 1989. Alternative research paradigms in operations. Journal of Operations Management 8 (4), 297–320.
Miles, M.B., 1979. Qualitative data as an attractive nuisance: The problem of analysis. Administrative Science Quarterly 24 (Dec.), 590–660.
Mintzberg, H., 1979. An emerging strategy of 'direct' research. Administrative Science Quarterly 24 (Dec.), 582–589.
Monden, Y., 1983. Toyota Production System––A Practical Approach to Production Management. Industrial Engineering and Management Press, Norcross, GA.

N.I.S.T., 1992. Malcolm Baldrige National Quality Award Criteria. Department of Commerce, National Institute of Standards and Technology.
N.I.S.T., 1997. Malcolm Baldrige National Quality Award Criteria. Department of Commerce, National Institute of Standards and Technology.
N.I.S.T., 2000. Malcolm Baldrige National Quality Award Criteria. Department of Commerce, National Institute of Standards and Technology.
Nunnally, J., Bernstein, I.H., 1994. Psychometric Theory. McGraw-Hill, New York.
Oakland, J.S., 1993. Total Quality Management, second ed. Butterworth–Heinemann, Oxford.
Powell, T.C., 1995. Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal 16, 15–37.
Prybutok, V.R., Spink, A., 1999. Transformation of a healthcare information system: A self-assessment survey. IEEE Transactions on Engineering Management 46 (3), 299–310.
Rao Tummala, V.M., Tang, C.L., 1996. Strategic quality management, Malcolm Baldrige and European quality awards and ISO 9000 certification: Core concepts and comparative analysis. The International Journal of Quality and Reliability Management 13 (4), 8–38.


Saraph, J.V., Benson, P.G., Schroeder, R.G., 1989. An instrument for measuring the critical factors of quality management. Decision Sciences 20, 810–829.
Schonberger, R.J., 1982. Japanese Manufacturing Techniques. McMillan, New York.
Shewhart, W.A., 1931. Economic Control of Quality of Manufactured Product. Van Nostrand Company, New York.
Taguchi, G., 1986. Introduction to Quality Engineering. Asian Productivity Organization, Tokyo.
Terziovski, M., Power, D., Sohal, A.S., 2003. The longitudinal effects of the ISO 9000 certification process on business performance. European Journal of Operational Research 146 (3), 580–595.
Weinberg, I.M., 1993. Is TQM really so hard? The Journal for Quality and Participation 16 (Oct./Nov.), 48–49.
Wilson, D.D., Collier, D.A., 2000. An empirical investigation of the Malcolm Baldrige National Quality Award causal model. Decision Sciences 31 (2), 361–390.
Wu, H., Wiebe, H.A., Politi, J., 1997. Self-assessment of total quality management programs. Engineering Management Journal 9 (1), 25–31.
Zairi, M., 1992. The art of benchmarking: Using customer feedback to establish a performance gap. Total Quality Management 3, 177–188.