Omega 38 (2010) 167–178
Behavior-based analysis of knowledge dissemination channels in operations management

Clyde W. Holsapple∗, Anita Lee-Post

School of Management, Gatton College of Business and Economics, University of Kentucky, Lexington, KY 40506-0034, USA
ARTICLE INFO

Article history: Received 18 September 2008; accepted 16 August 2009; available online 20 August 2009. This manuscript was processed by Associate Editor Teo.

Keywords: Bibliometrics; Knowledge dissemination; Operations management journals; Operations management researchers

ABSTRACT
One essential requirement for the development and vitality of a discipline is a network of channels for knowledge dissemination. These channels, such as scholarly journals, furnish not only a means for knowledge sharing, but also for knowledge generation by the discipline's community of researchers. In the field of operations management (OM), there have been several studies that have sought to rank journals relevant to OM research, using opinion surveys, citation analyses, and author affiliations. However, each of these methods has some limitations. This paper adopts a new approach for discerning journal publication patterns in the OM field. It is based on an examination of the actual publishing behaviors of all full-time, tenured OM researchers at a sizable set of leading research universities in the US. This behavior-based methodology provides three metrics that individually, and in tandem, give a basis for rating publication outlets for OM research in terms of their relative importance. The ratings can be used by scholars and administrators to assist in monitoring, disseminating, and evaluating OM research outlets. © 2009 Elsevier Ltd. All rights reserved.
1. Introduction

A perennial topic of interest and importance for any academic discipline is the nature of knowledge dissemination by and to its stakeholders: principally researchers and educators, but also including practitioners, students, and administrators. This is evident from the volume of publications in OMEGA that have sought to define and evaluate channels of knowledge dissemination for various management disciplines in the past 15 years [1–9]. Here, we focus on rating journals that publish operations management research, and do so by adopting a new methodology that is based on actual behaviors of operations management researchers.

Knowledge dissemination is crucial for progress in a discipline. It allows ideas, perspectives, and findings to collide—often spawning new advances that would not otherwise exist. It also allows ideas, perspectives, and findings to coalesce—as part of fulfilling the discipline's mission of making sense out of phenomena of interest. This sense-making yields consensual foundations (i.e., instances of critical mass) that can underpin continued advances in the discipline's knowledge base.

In considering the role of dissemination, several issues arise that can affect dissemination's value within a discipline. These include timing, concentration/dispersion patterns, target audiences, source options, content (e.g., its accuracy or utility), and
∗ Corresponding author. E-mail address: [email protected] (C.W. Holsapple).
doi:10.1016/j.omega.2009.08.002
channels for knowledge dissemination. Here, we focus on the issue of dissemination channels in the discipline of operations management (OM). Research in the knowledge management (KM) field suggests that there are two basic types of dissemination: one as an aspect of emitting knowledge outside of a community (e.g., of a discipline's researchers), and the other as an aspect of assimilating knowledge within the community [10]. KM research has also found that, in either case, dissemination activity can be performed in ways that result in the community being more productive (increasing the ratio of output to input), more agile (increasing alertness and response ability), more innovative (structuring resources and the processes that use them in new, value-increasing ways), and/or more reputable (increasing perceptions of trust and quality). This is known as the PAIR model, which links Productivity, Agility, Innovation, and Reputation to the performance and competitiveness of an organization [10].1 It follows that dissemination channels for researchers who comprise an academic community, such as OM, can influence whether the community is more or less productive, agile, innovative, and/or reputable—depending on the collective nature of those channels.
1 As an interesting side note, there are examples of OM analogs to each PAIR concept, including lean, just-in-time, process design/re-engineering, and total quality management, respectively.
Among the various channels for disseminating OM research, there are books, monographs, dissertations, Web postings, working papers, conference presentations and proceedings, video recordings, and journals. Here, we focus on peer-reviewed journals that OM researchers use as conduits for supplying knowledge to others. Secondarily, these journals tend also to be those sought by OM knowledge consumers as knowledge sources. The central purpose of this paper is to identify important OM-publishing venues—those where experienced OM researchers, representative of high-stature research universities, collectively tend to concentrate their journal publications. Using actual behaviors of OM faculty members, we gauge the importance (and relative importance) of various journals as channels for knowledge dissemination.

Resultant insights into the actual structure of OM dissemination channels offer several benefits for understanding the OM discipline. First, the pattern of dissemination behaviors gives individual OM scholars a sense of what have historically been the most important journals—as potential targets for placing their own research. Second, it gives OM researchers and teachers an indication of the most important journals to monitor as archives of progress in the OM field. Third, it gives administrators guidance in understanding the relative importance of alternative dissemination channels—as one possible basis for making decisions related to hiring, promotion, and merit review of OM faculty members. A fourth benefit is that the identification and rating of OM journal channels is grounded on actual dissemination behaviors, rather than the more conventional approach of aggregating subjective opinions. Finally, the resultant pattern of dissemination channels can shed some light on the substance of OM as a discipline.

Section 2 furnishes a brief review and critique of prior efforts to understand dissemination channels for OM research. In Section 3, we describe the behavior-based methodology used here and how we apply it. Results appear in Section 4 along with a discussion of their implications. We conclude by pointing out limitations and possible variations of the study presented here.

2. Background

Evolving from manufacturing roots stretching back into the 19th century, today's OM discipline embraces both manufacturing and service operations [11]. Moreover, it has reached out as a major contributor to the rapidly growing field of supply chain management. Chopra et al. [12] see the OM evolution as being a response by researchers to changing industrial reality, in particular, a shift of research focus from tactical issues of a single decision maker to strategic concerns of entire organizations.

Dissemination of knowledge has been crucial to this growth. Continued growth can benefit from a map of available dissemination channels to guide OM educators, researchers, administrators who evaluate them, practitioners who seek applicable research ideas, and librarians who need to make sound journal acquisition decisions. In short, the journal map helps make sense of the increasingly complex and diverse array of OM channels as we navigate through knowledge dissemination possibilities.

Prior mappings of OM journals have used one of three types of approaches: (1) a subjective approach relying on opinions of survey respondents, (2) a citation-based approach based on citation counts or impact factors, and (3) an author-based approach using a measure called the author affiliation index.
Although studies using these approaches have certainly advanced our understanding of the OM landscape, they are not without their limitations—which we endeavor to avoid in the new approach adopted later in this paper.

2.1. Subjective approach

The subjective approach of using surveys dates back over two decades [13]. Since then, three major survey studies have been
conducted to assess OM-related journals in terms of perceived relevance and quality [14–16]. These studies call attention to the importance of conducting research that identifies and rates OM-specific publication outlets.

In general, the subjective approach typically involves asking some set of stakeholders in a field (e.g., researchers, deans, practitioners) to give their perceptions of journal outlets that publish research in that field. Surveys differ in terms of the criterion (or criteria) respondents are asked to apply as they evaluate the journals. For instance, they tend to be asked for their perceptions of "top" or "leading" journals, of journal "quality," about the relevance of a journal's articles to the field, about journal influence on the field, or about desirability of publishing in various journals. Surveys also differ in whether respondents are asked about a pre-specified list of journals or asked to furnish perceptions of their own respective lists of journals. Profiles of the OM surveys are shown in Table 1.

There are several issues, based on the nature of survey methodology, that need to be considered when interpreting results from these surveys. The first methodological issue is related to the choice and nature of survey respondents whose perceptions lead to resultant ratings. Respectively, studies conducted by Saladin [13], Barman et al. [14], Soteriou et al. [15], and Barman et al. [16] survey: (a) members of the now defunct Operations Management Association, (b) US members of the Decision Sciences Institute, (c) European members of the Institute for Operations Research and Management Sciences and the European Operations Management Association, and (d) US members of the Production and Operations Management Society. Soteriou et al. [15] find that members of a particular association tend to rank their official journal more favorably—resulting in International Journal of Operations & Production Management being ranked highly in Soteriou et al. [15] versus Decision Sciences in Barman et al. [14]. Indeed, society membership and geographical location are identified as factors causing biases in perceived rankings of journals [17]. Other respondent factors found to influence journal rankings include such respondent characteristics as editorial board membership for specific journals, authorship in specific journals, methodological nature of respondents' research work, and OM-specific research experience (administrators versus researchers) [16,17]. Can respondents claim comparable familiarity (or at least pass some familiarity threshold) with all journals that they evaluate?

Furthermore, there are respondent qualification factors to consider. These boil down to a matter of expertise. Are all respondents experts, or at least very experienced, in the OM field? Some respondents (e.g., non-tenured faculty members, deans) may tend to reply based on what they have been told by others, rather than in-depth, first-hand experience. Some respondents may have comparatively little knowledge of OM. For instance, many members of DSI and INFORMS focus on information systems or quantitative methods. Are they really sufficiently well informed to comment on individual OM publishing channels?

A second methodological issue concerns two facets of the survey instrument design. First, the criterion employed for ranking is not always defined—meaning that it may not be interpreted uniformly by all respondents, which could confound the results. For instance, Barman et al. [14] and Soteriou et al.
[15] have respondents rank a list of journals by judging the "quality" of each. Ostensibly, no definition of "quality" is given in the studies. To various respondents, it may mean rigor, validity, conformance to a particular style or methodology, "strong" editorial board, high rejection rate, conformance to a target list for achieving promotion in a respondent's university, conformance to opinions of persons deemed to be leaders in the field, general popularity, some combination of these, or something else. We suggest that, while it may be an interesting dimension to ponder, "quality" does not tell the full story. The relevance dimension included in this opinion survey helps round out the story. However, there are other possibly interesting dimensions for understanding knowledge dissemination channels.
Table 1
Factors affecting survey-based ratings of journals.

Opinion survey | Respondent pool | Geographical location | Rating criteria | Survey anchors
[13] | US members of OMA | US | Category I/class A | None
[14] | US members of DSI | US | Relevance and quality | 20 journals
[15] | European members of INFORMS & EuroOMA | Europe | Relevance and quality | 35 journals
[16] | US members of POMS | US | Relevance and quality | 21 journals
For instance, there is the extent to which innovation is permitted, or even encouraged, by a channel—versus conformance to an extant paradigm [18]. There is the degree of actual influence that a channel has on research, practice, education, and so forth. That is, a particular journal regarded as being of high relevance and/or high "quality" may publish large numbers of articles that have little or no ostensible influence on the field, while other journals can provide articles that are highly influential [19]. There is also the importance of the channel to the development of the field. As described later, we focus on the importance dimension. We contend that journals of higher importance to a field will tend to be higher in "quality" (however one may define that notion), higher in relevance, higher in innovation, and higher in influence than journals that are less important to the field.

The other instrument design facet for surveys is identification of journals to be evaluated—either anchor responses to a prescribed list, or allow each respondent to specify his/her own list for evaluation. For instance, Saladin [13] does not use an anchor set of journals. While this gives respondents freedom to include all journals they individually perceive to be relevant, it also relies entirely on each respondent's recall capacity. Each of the other studies gives respondents a pre-selected list of 20–35 journals to offer opinions about, thus anchoring results to that list. While this helps respondents avoid overlooking listed journals, it gives little opportunity for a respondent to evaluate journals not included on the anchor list (e.g., specialized or new journals).

In sum, because of the subjective nature and methodological issues of survey-based ratings of journals, they may not provide a complete and definitive picture of the OM-publication landscape. Stated another way, can opinions from subjective evaluations be confirmed by other methods that attempt to avoid limitations of the survey approach?

2.2. Citation-based approach

Citation analysis is touted as a more objective approach to journal assessments, thus avoiding the many problems of surveys. There are three reported studies of OM-related journals that rely on the volumes of citations to a journal as a measure of its influence among researchers [7,20,21]. A typical citation study tabulates the number of citations to each journal included in the reference lists of articles published in some base set. The base set is often comprised of the reference lists of all articles published during a specific time period in a selected list of journals. The citation data are then analyzed in various ways to determine a ranking or rating of journals. In its simplest form, the most cited journal is ranked highest, but citation counts can be adjusted in various ways (e.g., normalizing based on journal age).

Citation-based approaches have their own limitations. First, their basic assumption that every citation in an article's reference list is equally important for the development of that article is debatable. For instance, one cited paper could be a linchpin for the article's development, while another cited paper is only mentioned in an incidental way. Second, some critics have a concern about "self-citation bias," where the author of an article cites some of his/her earlier publications or the article cites other previously published articles in the same journal. This concern presupposes that such citations are gratuitous, rather than substantive (a dubious supposition in many, if not most, instances). Third, there can be a subjective element involved in this approach. In particular, the base set of articles tends to be grounded on the researchers' arguments for their inclusion in the base. For instance, [20] subjectively reviews each article published in three journals to determine its OM relevancy. Those articles deemed to be OM-relevant are included in the base set used to tabulate citations. Goh et al. [21] use all articles appearing in a list of "top" OM-specific journals independently identified via survey means; this involves a much larger base set of articles and avoids subjective researcher interpretation of each article's content. Fourth, journal rankings can differ depending on how the citation counts are analyzed (total, articles, words, normalized, age-adjustment, etc.). This can cause confusion and difficulty in discerning a definitive measure of a journal's influence, as well as questions about the consistency of their application.

Instead of directly developing citation data, Olson [22] relies on the Institute for Scientific Information's (ISI) Journal Citation Reports (JCR) data to compute a measure called a five-year "impact factor" to rate 29 journals. It is important to note that this measure is different from ISI's traditional (and quite problematic) two-year "impact factor," which is sometimes used [44] (inappropriately in the view of ISI) as an indicator of a journal's stature. In addition, she used two surveys sent two years apart to selected faculty members of the "top-25" US business schools listed in the 2001 and 2002 editions of the Best Graduate Schools section of US News and World Report. The respondents were asked to rate journal quality and visibility for two different sets of selected anchor journals, with no specific mention of the journal's relevance to OM. Olson's [22] study confirms that survey-based journal rankings are sensitive to numerous factors including different time periods, respondents' research fields, different sets and numbers of anchor journals, and ranking criteria.

Problems related to using impact factors (based on JCR data) for journal rankings also abound. She points out that her five-year impact factors cannot be computed consistently, as 10 journals were not found in the 2003 JCR data. In addition, three of the journals (Journal on Computing, Production and Operations Management, and Journal of Scheduling) were too new for inclusion in the JCR database to allow a five-year impact factor to be computed. More importantly, her analyses of the resulting ratings of journals show that there is not a close correlation between perceptions of journal quality and five-year impact factors.

2.3. Author-based approach

As an alternative to opinion surveys and citation analyses for journal assessments, Gorman and Kanet [23] propose using a measure originally conceived by Harless and Reilly [24], called the author affiliation index (AAI), to evaluate a journal's "quality." A journal's AAI is the percentage of its US authors that are from "leading" research universities. Its fundamental assumption is that the "quality" of a journal is attributed to its authors, who are inspired to publish in journals where scholars from leading universities publish. Gorman and Kanet [23] compute the AAI for 27 OM-related journals using
Harless and Reilly's [24] 60-university set. The resulting AAI scores are used to rank the 27 journals with respect to this university set. This approach is repeated to calculate AAI rankings of journals for other high "quality" university sets. Gorman and Kanet find that journal ranking using AAI is sensitive to several parameters, such as the size and composition of the "top-quality" university set, the selection criteria for the university set, and the number of articles examined per journal. However, these parameters are not found to yield rankings that are different to a statistically significant degree. For instance, for every university set used, Transportation Science is found to be the highest "quality" journal for OM research (i.e., highest AAI score). The next four highest "quality" journals are also the same (but in different orders) regardless of the university set: Operations Research, IIE Transactions, Manufacturing & Service Operations Management, and Management Science.

Gorman and Kanet [23] show statistically significant positive correlations between AAI rankings and the survey-based rankings of Soteriou et al. [15] and Barman et al. [16], but do not find a significant correlation between AAI rankings and the citation-based results of either Goh et al. [21] or Olson [22]—the latter two reported as correlated with each other at > 0.9 and significant at the 0.0000 level. Gorman and Kanet [25] find that when AAI-based rankings are obtained via the same university set and anchor journals surveyed in Olson's [22] study, results are comparable to those of Olson. They conclude that AAI and opinion surveys give similar "quality" assessments.

There are several limitations to the AAI approach. First, the resultant journal rankings are limited to the particular journals for which AAI is calculated. A chosen journal could have little relevance to the OM field, yet end up being highly ranked because a large percentage of those who publish in it are faculty members at "high-quality" universities. On the other hand, a highly relevant journal may not be chosen for ranking, even though a large percentage of those who publish in it are faculty members at the same "high-quality" universities.

Second, it is simply assumed that AAI is a measure of "quality" but, as in the case of surveys, "quality" is not defined. Not knowing what the term means in this context makes it difficult to assess the extent to which such rankings are useful in understanding knowledge distribution channels that foster progress in the OM field. Is it referring to rigor, popularity, timeliness, or something else?

(a) It seems not to be referring to relevance, which may possibly help us to understand contributions of OM knowledge distribution channels. Surveys that distinguish between "quality" and relevance show very different rankings; the AAI approach correlates with the former.

(b) It seems not to be referring to influence, which may possibly help us to understand contributions of OM knowledge distribution channels. Citation-based studies, whose rankings are based on measures of influence, do not tend to correlate with AAI results.

(c) It seems not to be referring to importance, which may possibly help us to understand contributions of OM knowledge distribution channels. The notion of importance, covering a host of dimensions, is discussed and measured in succeeding sections of this paper.
Third, “top” or “quality” universities are sometimes defined, in part at least, based on quantity of publications their faculties have produced in a pre-selected set of journals thought of (e.g., by Business Week writers) as being the highest “quality” journals. This could introduce some bias into the results, as it should not be surprising that journals will have high percentages of authors from schools whose faculty members author the highest quantities of papers in those journals. Thus, caution needs to be exercised when identifying a university set for the AAI approach.
Fourth, the authors advocate using a larger, rather than smaller, university set to diminish the possibility of overlooking a major contributor to a journal and to reduce the possibility of a journal gaming the AAI score to enhance its standing. However, the precise size of such a set is unclear. If it is too small (e.g., five universities), then a bias in the results should not be surprising. If it is too large (e.g., 200 universities), then it is likely that many journals will have much higher AAI scores and that the ability to differentiate among them will decrease. AAI adopters must grapple with the issue of identifying a suitably sized set of the highest “quality” universities. In sum, while complementing the other two approaches, the AAI approach does not seem to flesh out a full picture of knowledge dissemination in relation to progress of the OM field, or in relation to the relative importance of journal outlets for research.
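Before moving on, the AAI computation itself can be made concrete with a minimal sketch in Python. The university set and author affiliations below are hypothetical placeholders, not data from Harless and Reilly [24] or Gorman and Kanet [23]; the sketch simply illustrates the percentage calculation defined above.

```python
# Author affiliation index (AAI): the percentage of a journal's US authors
# who are affiliated with a chosen set of "leading" research universities.
# All names below are hypothetical, for illustration only.

LEADING_UNIVERSITIES = {"University A", "University B", "University C"}

# Affiliations of the US authors of one journal's articles (one entry per author).
us_author_affiliations = [
    "University A", "University D", "University B",
    "University A", "University E", "University C",
]

def author_affiliation_index(affiliations, leading_set):
    """Return the AAI as a percentage; 0.0 if there are no US authors."""
    if not affiliations:
        return 0.0
    hits = sum(1 for school in affiliations if school in leading_set)
    return 100.0 * hits / len(affiliations)

print(round(author_affiliation_index(us_author_affiliations, LEADING_UNIVERSITIES), 1))  # 66.7
```

Journals would then be ranked by their AAI values with respect to the chosen university set, which is why, as noted above, the results are sensitive to the size and composition of that set.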
3. A behavior-based approach for assessing OM journal importance

Past studies have pointed out the inefficacy of using perceived opinion, citation count, and author affiliation as an indicator of journal quality [23,26–28]. Here, we adopt an approach to studying OM knowledge dissemination channels that circumvents several previously noted limitations, as summarized in Table 2. More importantly, rather than pursuing a perfect or the "best" measure of journal quality, we suggest using a behavior-based bibliometric technique to assess a journal's importance in advancing the OM discipline. We examine the actual publishing behaviors consistent with holding tenure in the OM field at research universities in the US. We argue that, by having achieved tenure at a research university, a faculty member has been judged as publishing research of (a) sufficient clarity and rigor, (b) substantial relevance to the OM discipline, (c) sufficiently high visibility, and (d) sufficient innovativeness and influence
to be regarded by OM peers and experts as making an important contribution to the OM discipline. The collective research publications of these tenured faculty members define major loci of concentration in the OM field's literature. Over time, this publication record encompasses and reveals major channels used for knowledge dissemination in the OM discipline—the most important channels for communicating new ideas and findings, and for provoking and stimulating the generation of new advances. They are the strongest contributors to productivity, agility, innovation, and reputation of the OM discipline. By examining the collective publishing behaviors of tenured OM faculty members at research universities, we can get a sense of the most important journals that have advanced the OM discipline. We do so without relying on opinion surveys, citation analyses, or affiliations of authors publishing in specific journals.

The behavior-based approach for identifying important journal outlets in a field was recently introduced as a new bibliometric technique and has been applied to determine the most important journals for the information systems (IS) discipline over a 25-year period [29–31]. Basic postulates underlying use of this approach in the case of OM are as follows. The collective publication record of tenured OM faculty members at research universities is representative of the best OM research. These faculty members tend to publish their research in journals that they (and their evaluators) regard as making the greatest contributions to progress in the OM field. The journals containing the heaviest concentrations of the collective publication record can be fairly regarded as the most important journals for the overall progress of the OM discipline.
Table 2
Limitations of prior approaches in assessing OM journals.

Subjective approach (studies [13–16]):
• Factors such as society membership, geographical location, respondent characteristics, survey design, academic training/heritage, etc. can introduce biases in perceived rankings of journals
• Journal ranking criteria are not always defined or interpreted uniformly by all respondents
• A pre-specified set of journals for ranking is not always given or objectively selected

Citation-based approach (studies [7,20–22]):
• The basic assumption that every citation in an article's reference list is equally important for the development of that article is difficult to justify
• Possible self-citation bias
• The base set of articles tends to be grounded on the researchers' arguments for their inclusion in the base
• Journal rankings can differ depending on how the citation counts are analyzed
• There is not a close correlation between perceptions of journal quality and 5-year impact factors

Author-based approach (study [23]):
• The resultant journal rankings are limited to the particular journals for which the author affiliation index is calculated
• The author affiliation index is proposed as an indicator of journal quality but no definition of quality is given
• Journal quality ranking using the author affiliation index is sensitive to the size and composition of the university set as well as the number of articles examined per journal
3.1. Operational parameters

Applying the behavior-based approach requires decisions to be made about two parameters. First, what set of research universities should be used to identify the specific tenured OM researchers considered in the study? We might, for instance, use the set of all universities that the Carnegie Foundation has designated as research universities. Not only would data collection for these hundred-plus universities be daunting, but a dilution would also occur in the sense that tenure standards at some of these universities may not be as high as at other universities. We seek a set of universities representative of those having the highest tenure thresholds. Three selection criteria for determining such a set have been recommended [29]: (1) the set should be large enough to avoid being sensitive to publishing patterns of any single university, (2) the composition of the set should be broad enough to be representative of methods and topics used in the OM field, and (3) membership in the set should be based on analysis by a disinterested, independent agency that rates the research prowess of research universities without considering the publication patterns of their faculties in any pre-specified journals.

Here, we use the same selection criteria as the prior IS study [30]. This yields the set of 31 highest-rated U.S. public research universities (there is a tie for the thirtieth position) identified in the 2005 Annual Report of the Center for Measuring University Performance. An alphabetical list of these universities appears in Appendix A. We contend that this set satisfies the three selection criteria, without being so large that dilution may come into play. It is important to note that the OM faculties at the 31 universities are not claimed to include all of the "best" OM faculties, nor that the OM faculties at the 31 universities are somehow "superior" to all other OM faculties. We only assume that the 31 OM faculties at the institutions shown in Appendix A are representative of the leading OM faculties in the U.S., and the 90 tenured OM faculty members at these universities are representative of leading tenured OM researchers.

The second parameter is concerned with what time period should be used in collecting data about publication activities of all tenured OM faculty members at the 31 universities. The time period needs to be long enough to capture major developments of the field and to avoid biases towards short-term phenomena [29]. Following this guideline, we adopt a time period from 1980 through 2006, as was done in a prior study for the IS field. The start year seems to roughly coincide with the emergence of OM as a discipline—having substantial critical mass and beginning to develop its own set of journals devoted to OM research.

Following the decisions made about the two parameters, as discussed above, we gathered data to assemble the collective publication record of the 90 tenured OM faculty members2 at the 31 research universities as of 2006. We do not include part-time, emeritus, or clinical (instructional) faculty members. As far as possible, data were collected from online vitas and publication lists. Google Scholar was used to check the completeness of these in cases where a vita did not cover the full range of 1980–2006. If additional journal publications were found in this way, they were used to supplement those found in the vita. Where a publication list was not exhaustive (e.g., described as "selected" or "recent") or did not cover the full 1980–2006 period, we again consulted Google Scholar to supplement the list as warranted. We do not treat introductions to special issues, postscripts, errata, replies, notes, or book reviews as being research articles. Henceforth, we refer to the 90 faculty members as the benchmark faculty.

Following the previous IS study, we report results for only those journals with at least 10 instances of having published articles by the benchmark faculty members. There are 27 journals that meet or surpass this threshold. Some are journals devoted to OM topics—either in general or specializing in a special topic. Others are journals that publish articles from multiple disciplines, including OM. Still others are devoted to reference disciplines (e.g., quantitative methods, industrial engineering).

3.2. Applying the behavior-based approach

We need to adopt one or more metrics for rating the relative importance of the 27 journals identified by overt behaviors of the benchmark faculty and tacit behaviors of those who evaluate their performance. Here, we examine three metrics of journal importance: (1) publishing breadth, (2) publishing intensity, and (3) publishing mode.

3.2.1. The publishing breadth metric

The first of these indicates how widespread the contributions to a journal are among the benchmark faculty. If, collectively, a relatively large proportion of benchmark faculty members has published in a particular journal, then that journal is deemed to be more important as a dissemination channel fostering progress in the discipline than a journal in which a relatively small proportion has published. Here, we report a breadth score for each journal: the total number of benchmark faculty members who have published at least one article in the journal, divided by the total number of benchmark faculty members.
2 An OM faculty member is one who specifically identifies OM in his/her vita/bio as an area of research focus, or one for whom the preponderance of publications have titles that deal with traditional OM topics like those found in OM textbooks.
Table 3
Breadth scores for OM-related journals.

Breadth rank | Journal name | Breadth score
1 | European Journal of Operational Research | 0.67
2 | Management Science | 0.62
3 | Production and Operations Management | 0.51
4 | IIE Transactions | 0.46
5 | Journal of Operations Management | 0.44
6 | Operations Research | 0.41
7 | International Journal of Production Research | 0.40
8 | Decision Sciences | 0.39
9 | Naval Research Logistics | 0.37
10 | Interfaces | 0.36
11 | Manufacturing & Service Operations Management | 0.28
12 | Annals of Operations Research | 0.24
13 | International Journal of Operations and Production Management | 0.22
14 | Journal of the Operational Research Society | 0.20
15 | International Journal of Production Economics | 0.18
16 | Production and Inventory Management | 0.17
17 | Omega | 0.16
17 | Computers and Operations Research | 0.16
17 | Operations Research Letters | 0.16
20 | IEEE Transactions on Engineering Management | 0.13
20 | Business Horizons | 0.13
22 | International Journal of Purchasing and Materials Management (a) | 0.10

(a) This journal has undergone several name changes over the years. Although it is currently called Journal of Supply Chain Management, we report the name under which the most benchmark faculty publications appear.
Thus, a breadth score of 0.4 means that 40% of benchmark faculty members have authored articles in the journal. Using this measure of journal importance, we compute the breadth score for each of the 27 journals. Table 3 shows an ordered list of the 27 journals' relative importance based on breadth scores, with OM-devoted journals shown in boldface. The highest-breadth journal devoted to the OM discipline is Production and Operations Management. It is the only OM journal in which over one-half of the benchmark faculty members have published. The average breadth score across all journals is 0.26, with a 0.20 median. The journals fall into several clusters. We suggest that there are not huge differences in importance between adjacent journals within each cluster—at least in terms of the breadth metric.

The breadth ranking can serve to set a target for young faculty members to aim at as they strive to publish in journals that are important for the OM field. One might argue that they need to produce research that is publishable in the same forums wherein large portions of tenured OM faculty members tend to publish. Doing so puts them in good company. Evaluators of a tenure case are well advised to consider, as one ingredient, the extent to which it includes publications in journals shown in Table 3—particularly OM journals with the highest breadths.

3.2.2. The publishing intensity metric

A second metric indicates how intensely the benchmark faculty has published in each journal. If, collectively, the benchmark faculty has published a relatively large number of articles in a particular journal, then that journal is deemed to have been more important as a dissemination channel fostering progress in the discipline than a journal having published a relatively small number of articles. Here, we report an intensity score for each journal: the total number of articles the benchmark faculty has published in the journal, divided by the total number of benchmark faculty members. Thus, an intensity score of 1.0 means that, on average, every benchmark faculty member has authored one article in the journal.

Using this measure of journal importance, we compute the intensity score for each of the 27 journals. Table 4 shows an ordered list of relative journal importance based on intensity scores. Observe that there are three journals devoted to the OM discipline that have intensities exceeding 1.0: Production and Operations Management, International Journal of Production Research, and Journal of Operations Management. The average intensity score across all journals is 0.64, with a 0.36 median. Interestingly, the 11 highest intensity journals are identical to the 11 highest breadth journals, although the ordering is shuffled. This finding suggests that, in the OM discipline, a journal's high intensity tends to be related to fairly widespread publishing in that journal, rather than extremely high publishing levels by a very small number of researchers in the benchmark set.

The eight journals whose intensity scores exceed 1.0 fall into three clusters. We suggest that there is essentially no difference in importance among journals within each cluster—at least in terms of the intensity metric. Three of these highest intensity journals are devoted to the OM discipline, two to reference disciplines of quantitative methods and industrial engineering, and three to journals whose stated editorial scopes cover multiple disciplines. So, several types of journals have been very important to the development of the OM field. Persons seeking to learn about, and to stay current on, OM are well advised to include journals from Table 4 in their reading lists. Administrators aiming to evaluate a researcher's record are well advised to consider, as one ingredient, the extent to which it includes publications in journals shown in Table 4—particularly OM journals with the highest intensities. A healthy overlap suggests that the researcher is in good company.

Even at the lower levels of Table 4, the journals have substantial intensities relative to the myriad journals that do not pass the threshold for inclusion. Unlisted journals (those containing fewer than 10 cases of the 90 tenured faculty members having published articles within their pages during the 1980–2006 period) may be too new or too specialized to pass the threshold.

Consider the highest cluster of three journals. There is Journal of Operations Management, which began publishing in 1980 and is devoted to OM research. There is Management Science, which began in 1954 and since 1980 has published far more articles concerned with OM than any of the other disciplines that it covers [18]. Launched in 1977, the European Journal of Operational Research has published exceptionally large numbers of papers (exceeding 180 volumes), many of which deal with OM and its quantitative methods reference discipline, among other disciplines.
Table 4
Intensity scores for OM-related journals.

Intensity rank | Journal name | Journal type | Intensity score
1 | Journal of Operations Management | OM | 1.93
2 | Management Science | Multiple | 1.92
3 | European Journal of Operational Research | Multiple | 1.90
4 | International Journal of Production Research | OM | 1.34
5 | Operations Research | QM | 1.32
6 | IIE Transactions | Engineering | 1.14
7 | Decision Sciences | Multiple | 1.07
8 | Production and Operations Management | OM | 1.03
9 | Naval Research Logistics | QM | 0.81
10 | Interfaces | Multiple | 0.53
11 | International Journal of Operations and Production Management | OM | 0.51
12 | Annals of Operations Research | QM | 0.43
12 | Manufacturing & Service Operations Management | OM | 0.43
14 | Production and Inventory Management | OM | 0.36
15 | International Journal of Production Economics | OM | 0.30
15 | Journal of the Operational Research Society | QM | 0.30
17 | International Journal of Purchasing and Materials Management (a) | OM | 0.28
18 | Omega | Multiple | 0.24
18 | Computers and Operations Research | Information Sys | 0.24
20 | Operations Research Letters | QM | 0.22
21 | Business Horizons | Multiple | 0.19
22 | Transportation Science | OM | 0.18
23 | IEEE Transactions on Engineering Management | Engineering | 0.17
24 | Journal of Flexible Manufacturing Systems | OM/Engineering | 0.14
25 | Quality Management Journal | OM | 0.13
26 | Computers and Industrial Engineering | Engineering | 0.12
26 | Academy of Management Review | Management | 0.12

(a) This journal has undergone several name changes over the years. Although it is currently called Journal of Supply Chain Management, we report the name under which the most benchmark faculty publications appear.
Such differences suggest that there are multiple ways to contribute to the development of a discipline. Importance can be associated with factors such as full devotion to the discipline, emphasis on the discipline by a long-established journal, and publishing relatively large volumes of articles dealing with the discipline and its sub-disciplines. By themselves, intensity scores do not differentiate among these factors.

The question arises as to how to employ the two measures of importance: that is, how to combine the breadth and intensity aspects. One answer is to use intensity only, or breadth only, and ignore the other metric. Another answer is to use one as the primary indicator of importance, and the other as a tie-breaker within a primary cluster. For instance, if intensity is primary and breadth is secondary, then Production and Operations Management would be regarded as somewhat more important than Decision Sciences, which has practically the same intensity, but with a thinner base of benchmark faculty authorship. Yet another answer is to combine them with equal weight, such as multiplying a journal's intensity with its breadth to yield a power score as a ranking basis [29]; a worked illustration appears below.

Another question arises concerning how to treat the different journal types. One answer is simply to adopt the rating clusters displayed in Tables 3 and/or 4 without regard for the different journal types. The result is not a rating of OM journals, but rather of journals related to OM. Another is to focus only on OM-devoted journals, extracting them in order from the tables. Yet another is to examine only multidiscipline journals that are notable outlets for the set of OM researchers. Aside from OM, the disciplines covered by these journals can be quite diverse. For instance, an examination of recent volumes of OMEGA shows coverage of such disciplines as operations management (e.g., [32,33]), operations research (e.g., [34,35]), supply chain management (e.g., [36,37]), information systems (e.g., Mirchandani and Lederer [38], Premkumar and Bhattacherjee [39]), and knowledge management (e.g., [40,41]).
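As a worked illustration of the power-score option just mentioned, the following sketch combines the breadth and intensity scores reported in Tables 3 and 4 for the two journals discussed above. It is a sketch of the equal-weight combination only, not a ranking advocated in this paper.

```python
# Power score = breadth score x intensity score [29], using the values
# reported in Tables 3 and 4 for two journals with nearly identical intensities.
journals = {
    "Production and Operations Management": {"breadth": 0.51, "intensity": 1.03},
    "Decision Sciences": {"breadth": 0.39, "intensity": 1.07},
}

for name, scores in journals.items():
    power = scores["breadth"] * scores["intensity"]
    print(f"{name}: power score = {power:.3f}")
# Production and Operations Management: power score = 0.525
# Decision Sciences: power score = 0.417
```

The wider authorship base of Production and Operations Management gives it the higher power score despite the two journals' practically identical intensities.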
Breadth and intensity can also be applied to understand the journals belonging to a particular type. For instance, consider the universe of all multidiscipline journals (e.g., Harvard Business Review, Management Science, Omega, Sloan Management Review, Decision Sciences, Organization Science). Tables 3 and 4 show that, within this universe, there are six journals of notable importance as knowledge dissemination channels for operations management. These are extracted into Table 5. Observe that the six journals belong to two categories: those that emphasize research having implications for managers and those that emphasize research having implications for researchers. Of course, content of journals in the first category can also have implications for researchers, and those in the latter category can also have implications for practitioners. That is, there is a complementary interplay between the two categories, while each has its own primary editorial emphasis in terms of the immediacy and scope of its appeal to two different audiences.

3.2.3. The publishing mode metric

A third metric gauges a journal's importance in terms of the extent to which individual members of the benchmark set tend to publish their highest quantities of research in it. If a journal is the publishing mode for a disproportionally high segment of tenured OM faculty members belonging to a sizable set of prominent research universities, then it may be regarded as more important than a journal that is not the publishing mode for any of the benchmark faculty members. The former, with its high-pressure knowledge flows, may be more important than the latter, where the force of knowledge dissemination is not as pronounced. Here, we report a mode score for each journal: the total number of benchmark faculty members for whom that journal is the publishing mode, divided by the total number of benchmark faculty members. Thus, a mode score of 0.2 means that 20% of the benchmark faculty members have authored more articles in that journal than in any other journal.
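For readers who wish to reproduce the three metrics on their own data, the following is a minimal sketch of the breadth, intensity, and mode computations defined above. The publication record shown is hypothetical and tiny; the faculty and journal names are placeholders, and the sketch omits refinements applied in this study (the 10-article reporting threshold and the tie-handling rules described below alongside Table 6).

```python
# Breadth, intensity, and mode scores computed over a hypothetical
# benchmark publication record (one journal entry per article).
from collections import Counter

publication_record = {
    "Faculty 1": ["Journal X", "Journal X", "Journal Y"],
    "Faculty 2": ["Journal X", "Journal Z"],
    "Faculty 3": ["Journal Y", "Journal Y", "Journal Y", "Journal X"],
}

n_faculty = len(publication_record)
breadth = Counter()    # faculty members with at least one article in the journal
intensity = Counter()  # total articles the benchmark faculty published in the journal
mode = Counter()       # faculty members for whom the journal is the modal outlet

for faculty, journals in publication_record.items():
    counts = Counter(journals)
    for journal in counts:
        breadth[journal] += 1
    intensity.update(journals)
    # Publishing mode: the single journal in which this person published most often
    # (ties are simply skipped in this sketch).
    top = max(counts.values())
    modal = [j for j, c in counts.items() if c == top]
    if len(modal) == 1:
        mode[modal[0]] += 1

for journal in sorted(intensity):
    print(journal,
          "breadth:", round(breadth[journal] / n_faculty, 2),
          "intensity:", round(intensity[journal] / n_faculty, 2),
          "mode:", round(mode[journal] / n_faculty, 2))
```

Applied to the benchmark faculty's actual record, these ratios yield the scores reported in Tables 3, 4, and 6.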
Table 5
The six multidiscipline journals of notable importance for OM knowledge distribution.

Editorial emphasis | Journal name | Breadth rank | Intensity rank
Manager implications | Interfaces | 1 | 1
Manager implications | Omega | 2 | 2
Manager implications | Business Horizons | 3 | 3
Researcher implications | Management Science | 2 | 1
Researcher implications | European Journal of Operational Research | 1 | 2
Researcher implications | Decision Sciences | 3 | 3
Table 6
Mode scores for OM-related journals.

Mode rank | Journal name | Mode score
1 | Management Science | 0.21
2 | Journal of Operations Management | 0.19
3 | European Journal of Operational Research | 0.16
4 | International Journal of Production Research | 0.09
4 | IIE Transactions | 0.09
6 | Production and Operations Management | 0.08
6 | Operations Research | 0.08
8 | Decision Sciences | 0.06
9–27 | Average for remaining 19 journals (including six OM journals with mode scores of 0.01 or 0.02) | 0.009
For all journals with a publishing mode score in excess of 0.05, Table 6 shows an ordered list of the journals' relative importance based on mode scores. The median for all 27 mode scores is 0.01. In the mode tabulations, we consider only modes that exceed 1 article, and in the case of an author having multiple modes of 2, we consider only those that are strictly bimodal. There are two clusters of high mode scores. There appears to be little difference in importance among journals within each cluster—at least in terms of the mode metric. Interestingly, the eight highest mode journals are identical to the eight highest intensity journals and the eight highest breadth journals, although the orderings are shuffled. This finding suggests that, in the OM discipline, a journal's high intensity tends to be related not only to fairly widespread participation in publishing in that journal, but also to relatively high-pressure knowledge flows from substantial proportions of these participants.

4. Discussion

The three importance metrics supplement prior approaches that have been followed to study publication patterns in the OM field. Together, the four kinds of studies give us multiple vantage points for understanding knowledge dissemination in the OM discipline, as shown in Table 7:

• ratings of journal "quality" and relevance based on opinions;
• ratings of journal influence based on various analyses of citations to it;
• ratings of journal "quality" based on the proportion of its articles authored by persons affiliated with "high quality" universities;
• ratings of journal importance based on behaviors that have been judged as tenure-able, at an independently determined set of prominent research universities in the U.S.

The notion of "importance" is something of an umbrella concept, as it seems that an "important" journal to the OM field is likely to be one with some suitable mix of influence, relevance, "quality," and perhaps other factors as well—such as longevity or narrowness of focus. "Important" journals are those to which concentrations of research tend to be attracted—for a variety of reasons, which in their totality are consistent with holding tenured positions in OM at prominent research universities. These define the most important journal conduits for the knowledge dissemination that has shaped the OM discipline.

4.1. Applying the results

The importance ratings can be applied in several ways. Amid the multitude of publication alternatives, they can be used by scholars to construct a focused reading list for staying up-to-date on OM developments. They can be used by educators when constructing reading lists for doctoral seminars. Library acquisitions supervisors can use the importance ratings to ensure the most important OM journals are in their collections. Researchers, who are interested in publishing their research articles in the OM field's most important journals, can use the ratings to be clear about their options.

Individual OM faculty members are inevitably subject to review in terms of their research performance. For an untenured faculty member, interim reviews and the eventual review for promotion and tenure are crucial. These happen within time frames that are relatively short. Even for tenured faculty members, there can be short-run evaluations of research performance, such as annual or biennial merit reviews. Although citation tracking (e.g., via Google Scholar) can be used to get a sense of an article's impact over the longer term, such tracking usually gives little insight in the first few years after publication. Here is where an understanding of journal importance is especially useful—in short-run evaluations where the publications are too new (e.g., published within the past 1–4 years) to have had an opportunity to be heavily cited.

In each evaluation situation, research performance in terms of journal publications is of prime importance. To the extent that an evaluator has the time and experience to study an individual's portfolio of journal articles in detail, a good sense of the relevance, clarity, rigor, and originality of that person's research accomplishments can be ascertained. The evaluator may even be able to predict the ultimate influence, visibility, and importance of that work on the OM discipline. However, it often happens that an evaluator does not have the time or background to perform detailed analyses of individual articles. In such cases, it is common practice to use an article's placement as a halo indicator, based on the rating of its host journal.3
Table 7
Multiple vantage points for understanding knowledge dissemination in OM.

Subjective, US perspective [15] (ranks 1–10): JOM, POM, MS, DS, OR, IIE, HBR, IJPR, Interfaces, IJOPM
Subjective, European perspective [16] (ranks 1–10): JOM, IJOPM, MS, POM, IJPR, OR, EJOR, IJPE, HBR, SMR
Citation-based, citation counts [21] (top-tier journals, in alphabetical order): DS, HBR, JOM, IJOPM, IJPR, MS, PIM
Citation-based, 5-year impact factor [22] (ranks 1–10): JASA, JOM, MS, DS, MOR, OR, EJOR, IJPR, OMEGA, COR
Author-based [23] (ranks 1–10): TS, MOR, OR, MSOM, IIE, POM, JOS, Interfaces, JOM, LTR
Behavior-based (Tables 3–5; journals with the highest scores, in alphabetical order): DS, EJOR, IIE, IJPR, JOM, MS, OR, POM

Notation: COR, Computers and Operations Research; DS, Decision Sciences; EJOR, European Journal of Operational Research; HBR, Harvard Business Review; IIE, IIE Transactions; IJOPM, International Journal of Operations and Production Management; IJPE, International Journal of Production Economics; IJPR, International Journal of Production Research; JASA, Journal of the American Statistical Association; JOM, Journal of Operations Management; JOS, Journal of Scheduling; LTR, Logistics and Transportation Review; MOR, Mathematics of Operations Research; MS, Management Science; MSOM, Manufacturing and Service Operations Management; OR, Operations Research; PIM, Production and Inventory Management; POM, Production and Operations Management; SMR, Sloan Management Review; TS, Transportation Science.
This practice presupposes an acceptance of the methodology (and limitations) of the journal rating scheme that is adopted. Oftentimes, however, there is little or no consideration of the underlying methodology. We contend that the behavior-based approach for rating OM journals should be considered as a viable candidate for making evaluations of research performance (especially for publication records that have occurred in the preceding four years). That is, if an OM researcher's article has been published in a journal where tenured OM professors collectively tend to concentrate publication of their own research articles over a period of many years, then it is fair to regard this as excellent research performance.

As a caveat, we must keep in mind that whatever rating is applied for halo evaluations, there is (in the longer run) a difference between an article in a "top" journal and a "top" article [19]. Many articles in a "top" journal end up having little or no visible influence over a period of many years. How much do such articles contribute to the discipline's progress? Conversely, many articles that do not appear in the very "top" handful of journals end up being substantially cited over a period of many years. It would seem that these individual articles make a substantial contribution to the development of knowledge in the discipline.

4.2. Limitations

Although the importance metrics use a methodology that does not incorporate the limitations of opinion surveys, citation analyses, or proportions based on author affiliation, they do have some limitations in their own right. First, the results can be sensitive to the time window employed: its length and timing. Clearly, a window open for the early 1980s would show different journal importance ratings than a window for the last few years. Many of today's OM journals were not mature or did not exist in the early 1980s. It seems likely that researchers of that era collectively tended to concentrate their best OM work in well-established journals (especially multidiscipline and reference discipline journals) that would most enhance their promotion cases (e.g., Management Science and Operations Research).
3 When using a ranked list of important journals for halo evaluations, it is important to be mindful that relatively new journals and some specialized journals may not be on the list. In such cases, the list can be supplemented by new or more specialized journals. For instance, this would account for heavy emphasis on researching a specialty that is not heavily studied by the benchmark set.
In contrast, as we today have an array of mature OM journals, they are now more likely to experience notable concentrations of OM researchers' best work. At the extreme, we might expect single-year windows to show considerable variation due to the vagaries of publishing and special issues of journals. Accordingly, we have used an extended window to ascertain journal importance, thereby covering a major portion of the field's development and maturation.

Second, the importance results may be sensitive to the size and composition of the benchmark set. A very small set (say, the first five universities on TheCenter's list) may well be unrepresentative of OM publishing behaviors, yielding importance scores that unduly emphasize some journals and discount others. On the other hand, a set that is too large may well dilute the results, elevating journals that would not otherwise appear in the rankings (unless there is an increase of the reporting threshold). For instance, incorporating the next 60 research universities on TheCenter's list may begin to encompass universities whose research standards for tenure are not as stringent—thereby introducing substantial volumes of publications from journals that would not otherwise appear in the importance metrics. As for composition of a benchmark set, it may well be that a set drawn from prominent European or Asia-Pacific universities could yield different importance ratings from those reported here (e.g., reflecting different tenure conventions, different native languages).

Third, change happens. Faculty members retire, become tenured, and move from one university to another. In the short run, we can expect a benchmark set for 31 universities to be fairly stable. However, in the longer run (say, 10 years from now), there might be a considerably different membership in the benchmark set. This may affect the picture of journal importance ratings in OM because the collective publishing behavior could change from what we observe today. Over the long run, change can also occur with respect to individual journals. The pace at which a specific journal publishes OM-related articles can grow or diminish. Its editorial scope or board can shift. New, OM-relevant journals can appear and gain "market share" relative to more established journals.

Fourth, there may be a concern that the importance metrics are too all-encompassing for some purposes. Wide-ranging measures are needed for a wide-ranging understanding of OM dissemination channels. But it is possible, in some quarters, that more narrow characteristics that focus only on relevance or "quality" or influence may be desired for evaluating research performance.
we contend that positive performance with respect to the most important publishing outlets should be viewed favorably. Relevance does not ensure importance. “Quality” does not ensure importance. Influence would seem to predict importance.

Fifth, factors such as journal characteristics (e.g., acceptance rate and number of papers published per year) and joint publications may affect the importance results. At present, our importance metrics directly measure the publishing behavior of the benchmark OM faculties. It would be interesting to extend the current study to investigate the factors that influence these publishing patterns. For instance, EJOR has published far more papers over the past decade than Management Science. Since both are multidiscipline journals, we might begin by determining the numbers (or proportions) of articles in each that are OM articles. This could support a metric that normalizes their practically identical intensities, although such a measure would need to be accompanied by a strong justification for the normalization method and a compelling explanation of how to interpret the results.
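To make the normalization idea concrete, the following minimal sketch (in Python) shows one candidate adjustment. It is not part of the study's methodology: the intensity scores and the journals' OM output volumes below are illustrative placeholders, and dividing by OM output volume is only one of several defensible choices.

# Illustrative sketch of one candidate normalization; all figures are
# placeholders, not data from this study.

raw_intensity = {            # behavior-based intensity scores (placeholder values)
    "EJOR": 1.9,
    "Management Science": 1.9,
}

om_articles_per_year = {     # assumed average number of OM articles each journal publishes per year
    "EJOR": 250.0,
    "Management Science": 60.0,
}

def size_adjusted_intensity(journal: str) -> float:
    """Divide raw intensity by the journal's OM output volume.

    The quotient reads as benchmark-faculty concentration per unit of the
    journal's OM publishing capacity: a journal that reaches the same raw
    intensity while publishing far fewer OM articles scores higher. Whether
    this is the right way to normalize, and how the adjusted numbers should
    be interpreted, is precisely the open question raised in the text.
    """
    return raw_intensity[journal] / om_articles_per_year[journal]

for j in raw_intensity:
    print(f"{j}: raw = {raw_intensity[j]:.2f}, "
          f"size-adjusted = {size_adjusted_intensity(j):.4f}")

Under these invented figures, the two journals' identical raw intensities separate once journal output is taken into account, which illustrates why any normalization method would require the justification and interpretation guidance called for above.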
4.3. The nature of operations management

The journal importance results of this study offer insights into the nature of the OM discipline itself. The finding that journals devoted to OM scholarship are among the most important OM knowledge dissemination channels clearly indicates that OM is successfully established as an academic discipline, with its own distinct theoretical models and analytical tools. Consider Table 8, which shows how the most important journals for OM over the past 25+ years are apportioned by journal type.

Table 8
Apportionment of important OM dissemination channels by journal type. (OM, Multi, QM, Eng, and Other give the number of journals in the indicated category.)

Metric      Listed   OM   Multi   QM   Eng   Other   OM makes up    OM ≥ median
Intensity   27       11   6       5    3     2       2 of top 5     7 of top 16
Breadth     22       8    6       5    2     1       2 of top 5     7 of top 16
Mode        8        3    3       1    1     0       3 of top 7     3 of top 7
Total       57       22   15      11   6     3       7 of top 17    17 of top 39

Relative to other types of journals, a preponderance of the most important journals for development of the OM discipline are devoted to OM. Journals that include OM among the multiple disciplines they cover comprise the next-most important channels for OM knowledge dissemination, followed by journals devoted to reference disciplines, with quantitative methods and industrial engineering being the most prominent of these cognate channels.

For more than 20 years, questions have been raised about OM's identity: whether OM has a distinct intellectual boundary that separates it from quantitative methods (operations research, management science) and industrial engineering [11,13,42,43]. Some attribute the root cause of this identity confusion to the field's early emphasis on examining tactical issues of a single decision maker using optimization methods [12]. It has been argued that, with this initial emphasis, OM failed to develop its own body of literature, as its researchers preferred to publish their best work in established journals that are not OM-specific, such as Management Science and Operations Research [14,43]. The growing presence of prominent OM-specific research outlets among the most important channels for OM research counters this view. Our findings show that OM-centric journals are very prominent among the journals that have been most important for the discipline's progressive expansion. This gives credence to the position that OM is an academic discipline with its own body of literature
and its own intellectual boundaries and frontiers. Mature disciplines tend to be enriched by and draw on other disciplines, and our results indicate that quantitative methods and industrial engineering are particularly important reference disciplines for progress in the OM field. Furthermore, the consistently overriding emphasis on OM articles in the flagship journal Management Science over the past quarter century [18] is a related sign that OM has achieved a distinct identity and prominence within the multidiscipline world.
4.4. Comparison with another discipline

To gain a larger perspective, it is interesting to compare the pattern of important dissemination channels that infuse OM progress with the patterns existing in other disciplines. In particular, the IS discipline has been analyzed via the same behavior-based methodology employed here in order to identify and rate the most important IS journals. The rise of the OM discipline, with strong roots in the quantitative methods field, roughly paralleled the rise of the IS discipline, with strong roots in the computer science field.

Two such studies of IS journal importance are reported in the literature. One uses a benchmark set comprised of all tenured IS faculty members (as of 2006) at 20 of the 31 universities shown in Appendix A [29]. The other study's benchmark set contains all tenured IS faculty members (as of 2006) at all 31 universities [30]. Table 9 compares these two studies with the one reported here.

The two IS studies yield results that are very similar to each other, even though the second uses a benchmark set having 45% more IS faculty members drawn from 55% more universities. Because the threshold does not change, a few more journals are included in the lower echelons of the second study's ratings. With one exception, the intensity metric yields the same 10 most-important journals in each study; no journal among the five highest shifts by more than one position, and the highest-ranked journal is the same in both. The dozen highest-ranked journals in terms of publishing breadth are also the same for the two studies, with no journal shifting more than one position and the three highest positions being identical.

Compared to the IS discipline, the OM discipline relies on a smaller proportion of discipline-specific journals and a higher proportion of multi-discipline journals. Nevertheless, discipline-specific journals are the most numerous channels used for knowledge dissemination in each field. The number of publications in important journals per tenured faculty member is essentially the same for the two disciplines. On the other hand, the number of journals surpassing the importance threshold is considerably larger for the IS discipline. This manifests as more diffusion among IS channels versus heavier concentration within OM channels, as seen by comparing the proportions and scores of high-intensity (and high-breadth and high-mode) journals for IS versus OM. At the same time, we see greater compaction of the most important discipline-specific journals for IS than for OM, in terms of intensity, breadth, and mode ratings. We conclude that the knowledge dissemination channels for the OM discipline have their own unique characteristics, while not being grossly different from those of another discipline that has developed over a roughly comparable time period.
Table 9
Cross-disciplinary comparison.

Study characteristics                                        OM study              IS study 1            IS study 2
Number of universities                                       31                    20                    31
Type of universities                                         US, public, research  US, public, research  US, public, research
Benchmark set size                                           90                    73                    106
Time period covered                                          1980–2006             1980–2006             1980–2006
Threshold for analysis                                       10                    10                    10
Journals above threshold                                     27                    37                    45
  Discipline-specific                                        11 (41%)              20 (54%)              25 (56%)
  Reference discipline                                       10 (37%)              12 (32%)              14 (31%)
  Multiple discipline                                        6 (22%)               5 (14%)               6 (13%)
Ave. articles per faculty member (above threshold)           17                    16                    16
Journals with intensity > 1                                  8 (30%)               3 (8%)                4 (9%)
Journals with intensity > 0.5                                11 (41%)              11 (30%)              10 (22%)
Journals with breadth > 0.5                                  3 (11%)               1 (3%)                1 (2%)
Journals with breadth > 0.25                                 11 (41%)              9 (24%)               8 (18%)
Journals with mode > 0.1                                     3 (11%)               na                    3 (7%)
Journals with mode > 0.05                                    8 (30%)               na                    7 (16%)
Three highest intensity scores                               1.93, 1.92, 1.90      1.86, 1.33, 1.30      1.75, 1.48, 1.24
Three highest breadth scores                                 0.67, 0.62, 0.51      0.58, 0.47, 0.47      0.59, 0.48, 0.47
Three highest mode scores                                    0.21, 0.19, 0.16      na                    0.17, 0.15, 0.10
Rank range of 5 discipline journals of greatest intensity    1–12                  1–8                   1–7
Rank range of 5 discipline journals of greatest breadth      3–13                  2–7                   2–7
Rank range of 3 discipline journals of greatest mode         2–6                   na                    1–3
Discipline journals at/above intensity median                6                     13                    11
Discipline journals at/above breadth median                  5                     9                     11
Discipline journals at/above mode median                     11                    na                    13
Journals in common with the OM study reported here           –                     6                     7
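The kind of cross-study agreement described in Section 4.4 and summarized in Table 9 (overlap among the most important journals, the largest position shift, whether the top-ranked journal coincides) can be computed mechanically for any pair of ranked lists. The following Python sketch is purely illustrative and uses invented journal labels rather than the actual rankings from these studies.

# Illustrative comparison of two ranked journal lists, in the spirit of the
# cross-study comparison above. The journal labels are invented placeholders.

ranking_a = ["J1", "J2", "J3", "J4", "J5", "J6", "J7", "J8", "J9", "J10"]
ranking_b = ["J1", "J3", "J2", "J4", "J5", "J6", "J8", "J7", "J9", "J11"]

def compare_rankings(a, b, top=10):
    """Report the overlap of the two top-N lists, the largest position shift
    among journals common to both, and whether the top-ranked journal is the
    same in each list."""
    top_a, top_b = a[:top], b[:top]
    common = set(top_a) & set(top_b)
    shifts = [abs(top_a.index(j) - top_b.index(j)) for j in common]
    return {
        "overlap": len(common),
        "max_shift": max(shifts) if shifts else 0,
        "same_leader": top_a[0] == top_b[0],
    }

print(compare_rankings(ranking_a, ranking_b))
# With the placeholder lists above: {'overlap': 9, 'max_shift': 1, 'same_leader': True}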
5. Conclusion

Understanding the important channels for OM knowledge sharing and for fostering generation of new knowledge about OM is not an idle historical exercise. It contributes to the gravitas of OM as a discipline. It provides a map of the major channels available for scholarly participation in the OM field. This map shows key routes to travel in order to learn about OM advances of today and yesteryear. A well-grounded understanding of the most important sources, among the many possible sources, offers the prospect of more focused searches, a likelihood of more efficient traversal of the OM knowledge space, and some assurance that major avenues in that space will not be overlooked.

This map helps budding researchers navigate the increasingly complex and diverse OM field as they contemplate important venues for disseminating their work. One strategy is to aim for channels that are established as being the most important for the discipline. Such targeting provides feedback on what it takes to join in the knowledge flows that mark major progress in the field. Successes in these directions give the young researcher encouragement and confidence that his/her work is deemed by peers to be worthy of inclusion among the discipline's most important streams of research expression. Thus, a well-grounded understanding of the most important dissemination channels is vital for implementing such a strategy.

This map also guides administrators (and others, such as funding agencies) who evaluate the performance of individual or departmental contributions to the advance of OM. It does so by providing a picture of the most important, most visible outlets for contributions. This picture is easily comprehended by persons who may well lack OM training and who may have never conducted OM research. When contributions must be evaluated in the short run (say, 1–4 years since acceptance or publication), meaningful citations to each individual contribution tend to be scarce: there has not yet been time for the contribution's eventual degree of impact to become evident. In such evaluation situations, a well-grounded understanding of the most important journals in the OM field can guide an evaluator toward an assessment of the extent to which a researcher/department
is in a position to make impacts via the major journals related to the field.

Past efforts at rating or ranking OM research outlets give insights into certain aspects of journal importance, such as relevance, “quality,” or influence. In this paper, we directly examine the issue of journal importance in the context of OM, yielding a map that embodies a well-grounded understanding of OM knowledge dissemination channels. The adopted behavior-based methodology gives a multidimensional metric of journal importance in fostering progress in the OM discipline. This methodology sidesteps limitations inherent in prior rating studies. It identifies the publication outlets in which 90 full-time, tenured OM faculty members from (independently selected) leading research universities in the U.S. collectively tend to concentrate their journal publications. These persons are deemed to be representative of leading researchers in the OM field. We find that the greatest share of journal outlets used by the tenured OM researchers over an extended time period consists of journals wholly devoted to OM topics. This offers compelling confirmation that OM does not stand in the shadows of other disciplines, but is a strong discipline in its own right.

Appendix A. TheCenter's leading public research universities having business schools (alphabetical)

Arizona, California-Berkeley, California-Davis, California-Irvine, UCLA, California-San Diego, Cincinnati, Colorado-Boulder, Florida, Georgia, Georgia Tech, Illinois, Iowa, Maryland, Michigan, Michigan State, Minnesota, North Carolina, North Carolina State, Ohio State, Penn State, Pittsburgh, Purdue, Rutgers-New Brunswick, SUNY-Buffalo, Texas-Austin, Texas A&M, Utah, Virginia, Washington-Seattle, Wisconsin-Madison.

References

[1] Biehl M, Kim H, Wade M. Relationships among the academic business disciplines: a multi-method citation analysis. Omega 2006;34(4):359–71.
[2] Forgionne GA, Kohli R, Jennings D. An AHP analysis of quality in AI and DSS journals. Omega 2002;30(3):171–83.
[3] Vastag G, Montabon F. Journal characteristics, rankings and social acculturation in operations management. Omega 2002;30(2):109–26.
[4] Donohue JM, Fox JB. A multi-method evaluation of journals in the decision and management sciences by US academics. Omega 2000;28(1):17–36.
[5] Jones MJ. Critically evaluating an applications vs theory framework for research quality. Omega 1999;27(3):397–401.
[6] Ormerod RJ. An observation on publication habits based on the analysis of MS/OR journals. Omega 1997;25(5):599–603.
[7] Goh CH, Holsapple CW, Johnson LE, Tanner J. An empirical assessment of influences on POM research. Omega 1996;24(3):337–45.
[8] Doyle JR, Arthurs AJ. Judging the quality of research in business schools: the UK as a case study. Omega 1995;23(3):257–70.
[9] Holsapple CW, Johnson LE, Manakyan H, Tanner J. Business computing system research: structuring the field. Omega 1994;22(1):69–81.
[10] Holsapple CW, Jones K. Exploring primary activities of the knowledge chain. Knowledge and Process Management 2004;11(3):155–74; Holsapple CW, Singh M. The knowledge chain model: activities for competitiveness. Expert Systems with Applications 2001;20(1):77–98.
[11] Sprague LG. Evolution of the field of operations management. Journal of Operations Management 2007;25(2):219–38.
[12] Chopra S, Lovejoy W, Yano C. Five decades of operations management and the prospects ahead. Management Science 2004;50(1):8–14.
[13] Saladin B. Operations management research: where should we publish? Operations Management Review 1985;3(4):3–9.
[14] Barman S, Tersine RJ, Buckley MR. An empirical assessment of the perceived relevance and quality of POM-related journals by academicians. Journal of Operations Management 1991;10(2):194–212.
[15] Soteriou AC, Hadjinicola GC, Patsia K. Assessing production and operations management related journals: the European perspective. Journal of Operations Management 1999;17(2):225–38.
[16] Barman S, Hanna MD, LaForge RL. Perceived relevance and quality of POM journals: a decade later. Journal of Operations Management 2001;19(3):367–85.
[17] Theoharakis V, Voss C, Hadjinicola G, Soteriou AC. Insights into factors affecting production and operations management journal evaluation. Journal of Operations Management 2007;25(4):932–55.
[18] Hopp WJ. Fifty years of management science. Management Science 2004;50(1):1–7.
[19] Smith SD. Is an article in a top journal a top article? Financial Management 2004;33(4):133–49.
[20] Vokurka RJ. The relative importance of journals used in operations management research: a citation analysis. Journal of Operations Management 1996;14(4):345–55.
[21] Goh CH, Holsapple CW, Johnson LE, Tanner J. Evaluating and classifying POM journals. Journal of Operations Management 1997;15(1):123–38.
[22] Olson JE. Top-25-business-school professors rate journals in operations management and related fields. Interfaces 2005;35(4):323–38.
[23] Gorman MF, Kanet JJ. Evaluating operations management-related journals via the author affiliation index. Journal of Manufacturing and Service Operations Management 2005;7(1):3–19.
[24] Harless D, Reilly R. Revision of the journal list for doctoral designation. Unpublished report, Virginia Commonwealth University, Richmond, VA; 1998. http://www.bus.vcu.edu/economics/harless/jourrep4.doc.
[25] Gorman MF, Kanet JJ. OM forum: evaluating operations management-related journals via the author affiliation index—do professors at top U.S. business schools do what they say? Journal of Manufacturing and Service Operations Management 2007;9(1):51–3.
[26] Brinn T, Jones MJ, Pendlebury M. Measuring research quality: peer review 1, citation indices 0. Omega 2000;28(2):237–9.
[27] Jones MJ, Brinn T, Pendlebury M. Journal evaluation methodologies: a balanced response. Omega 1996;24(5):607–12.
[28] Doyle JR, Arthurs AJ, Mcaulay L, Osborne PG. Citation as effortful voting: a reply to Jones, Brinn and Pendlebury. Omega 1996;24(5):603–6.
[29] Holsapple CW. A publication power approach for identifying premier information systems journals. Journal of the American Society for Information Science and Technology 2008;59(2):166–85.
[30] Holsapple CW. A new map of the information systems publishing landscape. Communications of the ACM 2009;52(3):117–25.
[31] Holsapple CW, O'Leary D. How much and where? Private vs. public universities' publication patterns in the information systems discipline. Journal of the American Society for Information Science and Technology 2009;60(2):318–31.
[32] Akinc U, Meredith J. Modeling the manager's match-or-wait dilemma in a make-to-forecast production situation. Omega 2009;37(1):300–11.
[33] Tang L, Zhao Y. Scheduling a single semi-continuous batching machine. Omega 2008;36(6):992–1004.
[34] Arsham H, Adlakha V, Lev B. A simplified algebraic method for system of linear inequalities with LP applications. Omega 2009;37(4):876–82.
[35] Kao C. A linear formulation of the two-level DEA model. Omega 2008;36(6):958–62.
[36] Yao D-Q, Yue X, Liu J. Vertical cost information sharing in a supply chain with value-adding retailers. Omega 2008;36(5):838–51.
[37] Ding D, Chen J. Coordinating a three level supply chain with flexible return policies. Omega 2008;36(5):865–76.
[38] Mirchandani DA, Lederer AL. The impact of autonomy on information systems planning effectiveness. Omega 2008;36(5):789–807.
[39] Premkumar G, Bhattacherjee A. Explaining information technology usage: a test of competing models. Omega 2008;36(1):64–75.
[40] King WR, Marks Jr PV. Motivating knowledge sharing through a knowledge management system. Omega 2008;36(1):131–46.
[41] Choi B, Poon SK, Davis JG. Effects of knowledge management strategy on organizational performance: a complementarity theory-based approach. Omega 2008;36(2):235–51.
[42] Buffa ES. Research in operations management. Journal of Operations Management 1980;1(1):1–7.
[43] Pilkington A, Liston-Heyes C. Is production and operations management a discipline? A citation/co-citation study. International Journal of Operations and Production Management 1999;19:7–20.
[44] Amin M, Mabe M. Impact factors: use and abuse. Perspectives in Publishing 2000;1:1–6.