Research Policy 49 (2020) 103910
Universities’ commitment to interdisciplinary research: To what end?

Erin Leahey (University of Arizona, United States) and Sondra N. Barringer (Southern Methodist University, United States)
Keywords: Research universities; Interdisciplinarity; Research centers; Departments; Productivity; Grant funding

ABSTRACT
In recent decades, research universities have been engaged in fostering interdisciplinary research (IDR) in an attempt to support high-impact research that can benefit not only the greater good, but also their bottom line. A common way to enhance “interdisciplinary momentum” and foster IDR is to alter the organizational structure and its basic units: departments and centers. What are the consequences of such structural changes? In short, we do not know. To date there has been no large-scale quantitative assessment of whether and how universities’ commitments to interdisciplinary research are successful in fostering interdisciplinary research. To address this gap within the literature, we collect a wealth of numeric and textual data on 156 research universities nationwide to assess whether structural commitments to IDR influence general research activity (e.g., publications, external grants) as well as interdisciplinary research activity. Our results suggest that structural commitment to IDR does indeed produce some of its intended effects. We found that universities’ commitment to IDR, as manifested in their organizational structure (i.e., the number and interdisciplinary nature of key research units: departments and centers), spurs both scholarly research and NIH grant activity in general, and interdisciplinary research and NIH grant activity in particular. These results suggest that efforts to develop and reorganize academic units are not futile; rather, when value commitments are made tangible via foundational research units like departments and centers, they can have their intended consequences.
1. Introduction
In the modern day, few would dispute the main goal of scientific research: to achieve a deep understanding of physical, biological, and social phenomena in order to contribute to the greater good: a sustainable environment; healthier and more prosperous lives; as well as new discoveries and technologies (National Academies of Science, 2005:1). Scientific progress may organically migrate toward these topics, but science policy also hopes to foster groundbreaking, transformative, and high-impact science that will help us solve complex problems of global importance. Universities typically have these same goals in mind. As critical actors in the scientific endeavor,1 research universities are devising ways to support research that can benefit not only the greater good, but also their status and their bottom line. To stay at (or climb to) the top of the rankings and also reap financial rewards, universities try to support cutting-edge research that is competitive for external funding from federal agencies (Berman, 2012) and leads to ideas or products that can be licensed or patented or otherwise facilitates connections to industry (Owen-Smith, 2003). In short, universities hope to promote science for its own sake, but also to obtain the social and financial rewards from the research activity taking place on their campuses. This tendency has only been exacerbated by recent changes in the political, financial, and funding environment (Berman, 2012; Geiger and Sá, 2008), which have compelled all universities, and especially research universities, to pursue a diverse set of revenue streams (Barringer, 2016; Hearn, 2006; McClure et al., 2019; Pfeffer and Salancik, 2003; Tolbert, 1985).

In recent decades, a common way to pursue these valued ends has been to foster interdisciplinary research (IDR). Interdisciplinary research integrates “perspectives, information, data, techniques, tools, concepts, and/or theories from two or more disciplines” (National Academies of Science, 2005:188).2 Such domain-spanning scholarship is thought to be an important source of novelty and innovation (Hargadon, 2002; Weitzman, 1998), and the best way to advance science and address pressing societal concerns (Fleming, 2001;
⁎ Corresponding author.
1 Research universities account for more than 80% of academic R&D in the U.S. (Adams and Clemmons, 2011) and are critical to industry R&D as well (Laursen and Salter, 2004).
2 Like Holley (2009c), Jacobs (2013), and Wagner et al. (2011), we acknowledge the related concepts of multi- and trans-disciplinarity and recognize that the science policy community is converging on the comprehensive term “interdisciplinarity.”
https://doi.org/10.1016/j.respol.2019.103910. Received 16 January 2019; received in revised form 27 August 2019; accepted 30 November 2019; available online 3 February 2020. © 2019 Elsevier B.V. All rights reserved.
Leahey et al., 2017; Lo and Kennedy, 2015; Schilling and Green, 2011; Singh and Fleming, 2010; Uzzi et al., 2013). Thus, it is no surprise that resources are flowing disproportionately toward interdisciplinary pursuits. This is the case for federal funding from NSF and NIH (Hackett, 2000; Rhoten and Parker, 2004), but also for state grants, private foundations, and internal grants that universities provide as seed money to their faculty (Brint, 2005; Holley, 2009c; Rhoten and Parker, 2004; Sá, 2008). By bridging disconnected knowledge spaces and thereby fostering innovation, IDR should succeed in the marketplace for ideas as well as the market for financial support (National Academies of Science, 2005; National Research Council, 2014).

A common way to enhance “interdisciplinary momentum” (Turner et al., 2015) is to alter a university's organizational structure and its basic units: departments and centers. This approach, when undertaken with the explicit aim of creating new interdisciplinary units, has the added benefit of “addressing institutional conflicts between interdisciplinary activities and department structures” (Turner et al., 2015:653). A number of universities have been modifying the traditional departmental structure to be more responsive to new economic and political environments (Biancani et al., 2018:544). They have done this not by eliminating departments, which is rare, but by creating, promoting, differentiating, evolving, and consolidating them (Brint et al., 2011; Gumport and Snydman, 2002; Hearn and Belasco, 2015), leading to an increase in both the number of departments overall and the number of interdisciplinary departments (Brint et al., 2009). We've also witnessed a rapid increase in university research centers (Geiger, 1990; Jacobs and Frickel, 2009), which offer universities a way to adapt to new realities without doing away with the traditional departmental structure (Holley, 2009b; Lee, 2007; Sá, 2008). Indeed, centers have become an important “organizational strategy for universities seeking both to reach into intellectual domains that hold the promise of new kinds of knowledge and to tap lucrative public and private funding sources” (Biancani et al., 2018:545). Although few scholars distinguish interdisciplinary centers from other centers, the former may be even more effective at fostering cutting-edge scholarship, grant activity, and interdisciplinary momentum.

What are the consequences of such structural changes? In short, we do not know (Gumport, 2000). Organizational research tends to focus on the origins, antecedents, and spread of structural change, to the neglect of the consequences of such change (Scott and Davis, 2007). Research on higher education organizations has recently begun to rectify this (e.g., Jaquette and Curs, 2015; Kraatz and Zajac, 2001; Mathies and Slaughter, 2013), but studies of the “consequences of structural and material resource shifts across academic areas,” such as disciplines, remain rare (see also Gumport, 2000:68; Volk et al., 2001). This is especially true for studies of IDR. The few studies that examine outcomes of investments in (and commitments to) IDR tend to be qualitative case studies, or smaller quantitative studies focused on a specific program, discipline, or university (Biancani et al., 2014; Carr et al., 2017; Mitrany and Stokols, 2005).3 Moreover, they focus on consequences like scholarly productivity, collaboration, and grant activity overall, to the neglect of IDR specifically.
To date there has been no large-scale quantitative assessment of whether and how universities’ commitments to interdisciplinary research are successful in fostering
interdisciplinary research. We build upon and extend scholarship in this area by focusing on IDR explicitly. We do this at two points in our analysis. First, our key explanatory variable is universities’ commitment to IDR. Tapping organizational commitment to IDR is not easy. While recognizing that universities often commit to interdisciplinarity in many ways, we focus on structural commitment. This especially durable form of commitment is evidenced by both the number and (interdisciplinary) nature of their key research units: centers and departments. Previous scholarship on research centers (Biancani et al., 2018; Jacobs and Frickel, 2009) presumes that most are interdisciplinary (Bozeman and Boardman, 2013; Sabharwal and Hu, 2013). Previous scholarship on departments tends to presume that departments are mono-disciplinary (Brint et al., 2009; Lee, 2007; see Jacobs, 2013 for exceptions). In contrast, we distinguish interdisciplinary units from others, and use both to develop a comprehensive measure of universities’ structural commitment to IDR. Second, our key outcome variables capture the actual conduct of IDR. In this part of our conceptual model, we are tapping individual faculty engagement with, rather than organizational commitment to, IDR. We measure scholarly research productivity and grant funding generally, and the amount of interdisciplinary research and interdisciplinary grant funding in particular. In sum, IDR is critical to both sides of our conceptual model (depicted in Fig. 1), but in distinct ways. We aim to assess whether and how organizational commitment to IDR may enhance faculty members’ interdisciplinary research conduct.

These advancements allow us to answer an important and timely question: To what extent, if at all, do universities’ structural commitments to IDR actually bolster research, and interdisciplinary research in particular? Below we detail how greater structural commitment should, in theory, spur productivity, and how it may prompt a rise in both the number and dollar amount of external grants. We also expect that structural commitment to IDR will promote interdisciplinary activity in terms of publications and grants. We might even expect an increase in high-impact, transformative science – the kind of science IDR is meant to unleash. To test these expectations and explore these possibilities, we collect a wealth of both numeric and textual data on 156 research universities nationwide to assess whether structural commitments to IDR influence general research activity (e.g., publications, external grants) and interdisciplinary research activity in particular.

2. Structural commitment to IDR

Previous research has documented a wide variety of strategies that universities have used to enhance “interdisciplinary momentum” (Turner et al., 2015). These include making greater use of joint appointments and cluster hiring initiatives (Jacobs, 2013), hosting competitions for internal research grants that cover interdisciplinary topics or are submitted by multidisciplinary teams (Sá, 2008), revising evaluation guidelines for promotion and tenure to be more attuned to the unique challenges of IDR (Holley, 2009b), allocating physical space conducive to IDR (e.g., Harris and Holley, 2008), and sponsoring interdisciplinary graduate education and training programs (Hackett and Rhoten, 2009; Holley, 2009a; Newswander and Borrego, 2009).
We see evidence of such initiatives when perusing university websites, and in universities’ strategic plans and presidential speeches (Harris, 2010; Stensaker et al., 2019). We, like Louvel (2016), are interested in the potentially more durable strategies of altering the university's organizational structure and its core units: departments and centers. Indeed, universities have supported the establishment and development of both departments and centers (Clausen et al., 2012). For centuries, universities were organizationally arranged around the “basic building blocks” of disciplinary departments (Gumport and Snydman, 2002:380; Holley, 2009b), and have been loath to give up or alter this fundamental arrangement. No university has done away with departments, with the possible exception of Arizona State University (Jacobs, 2013), and this appears to have been
3 For example, Biancani et al. (2014, 2018) study a few interdisciplinary research centers at a single university, and examine how affiliation affects the quantity and quality of scholarship and grant activity generally, not interdisciplinary research specifically. Carr et al. (2017) studied one interdisciplinary graduate program in water conservation policy, and found that “the programme is leading to a substantial portion of research that is cross-disciplinary in nature” (p. 471). Mitrany and Stokols (2005) also studied a single interdisciplinary graduate training program, and found that smaller and more multidisciplinary departments were most effective in helping students produce dissertations with interdisciplinary qualities.
Fig. 1. Conceptual model.
temporary. Indeed, “the complete elimination of structural units is rare” (Gumport and Snydman, 2002:376). But numerous universities have been modifying and enhancing the traditional organizational arrangement in order to be more flexible and responsive to new economic and political environments (Biancani et al., 2018:544). As Geiger and Sá (2008:3) note, “deliberate steps to restructure academic programs on an interdisciplinary basis are now ubiquitous, as is the proliferation of special institutes.” We have seen an increase in the sheer number of departments, as new interdisciplinary areas like Mexican American Studies and Cognitive Science have developed (Brint et al., 2009). We have also witnessed a rapid increase in university research centers (Geiger, 1990; Jacobs and Frickel, 2009), which are typically founded around practical problems and interdisciplinary themes like sustainability (Berman, 2012; Boardman and Bozeman, 2007). Indeed, Clausen et al. (2012:1252) learned that the “need for cross-disciplinary work” was critical to the establishment of 57% of the hundreds of research units they studied. Because they typically do not have their own faculty lines, disband with greater readiness than departments, and allow faculty to work together across departmental lines, centers offer universities a way to be flexible without doing away with or challenging the centrality of their disciplinary core (Sá, 2008).

We agree with Gumport and Snydman (2002:377) that the “multidimensionality of formal organizational structure has been neglected in higher education research,” and thus examine the complementarity of two focal units that are rarely studied simultaneously: departments and research centers. Our approach stands in stark contrast to previous research. Gumport and Snydman (2002) focus on departments (and programs) but not centers, and Biancani et al. (2014, 2018) focus on centers (and only interdisciplinary centers) but not departments. However, Sá (2008) argues that centers ratchet up and expand faculty research activity only when centers are prevented from competing with departments. To grasp this synergy between traditional disciplinary units (departments) and newer and often interdisciplinary units (research centers), it is imperative to study them together. Information about centers, departments, and their interdisciplinary nature is all relevant to a university's structural commitment to IDR. In a previous companion paper (Leahey et al., 2019), we incorporated such information as indicators of the construct of interest to us: universities’ structural commitment to IDR. In this way we build upon and extend previous research.

Clearly, our construct of interest – universities’ structural commitment to a type of knowledge (i.e., IDR) – connects easily with related research on universities’ strategies. Chandler (1962:13) defined strategy as “the determination of the basic long-range goals and objectives of an enterprise, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals.” Although some of the scholarship on university strategies to promote IDR focuses on courses of action and allocation of resources (e.g., cluster hiring (Brint, 2017), graduate training (Hackett and Rhoten, 2009; Newswander and Borrego, 2009), and physical space (Harris and Holley, 2008)), much of
it is restricted to goals and objectives as expressed discursively in university documents.4 We thus prefer the term commitment because it implies not only value commitments and visions, but also more tangible and immediate resource commitments. As Gumport and Snydman (2002:377) note, modifications of academic structure (e.g., departments and centers) “accomplish a critical organizational function for universities and colleges, providing a powerful symbolic mechanism that signals the intention, if not the actual ability, to reconcile competing expectations from the external environment.” Indeed, tangible resource commitments to knowledge areas have the potential to reshape the academic landscape (Gumport, 2000), and it is the impact of these more structural efforts that interests us here.

3. The consequences of structural commitment to IDR

Like most organizational research, the literature on higher education organizations (i.e., colleges and universities) has tended to focus on the origins and antecedents of certain commitments, policies, and programs. Scott and Davis (2007:310) note that “organization theorists in general are more interested in why firms choose the strategies and structures they do, and how industries come to be structured the way they are, rather than the performance consequences of these processes.” Lounsbury (2001), for instance, examined how student environmental groups and connections to national social movement organizations helped explain the adoption and staffing of recycling programs at colleges and universities. Similarly, Brint et al. (2009) discussed how employer demands as well as social movements of the 1960s and 1970s changed university course offerings, including growth in studies of non-Western cultures, gender, and ethnic/racial minorities. Focusing on liberal arts colleges, Kraatz et al. (2010) explored internal factors (e.g., faculty power, leadership, organizational history) and external factors (e.g., competition, prestige) that made colleges more or less susceptible to adopting enrollment management structures and practices. The focus of these studies is not the impact of these programs, but their origins and the factors that encouraged their adoption.

Recently, scholars have begun to study the consequences of university policies, programs, and commitments,5 especially those pertaining to IDR. For example, a number of studies have investigated how other forms of commitment to IDR influence university output. Harris (2010) examines university documents like strategic plans,

4 Documents that have been studied include strategic plans, institutional reports, mission statements, promotional materials, and promotion and tenure guidelines (Harris, 2010; Holley, 2009a,c; Stensaker et al., 2019).
5 Of course, there's a large body of literature exploring and comparing university-level outputs (Caldera and Debande, 2010; Carayol and Matt, 2004; Gómez et al., 2009; Goodall, 2009; Mathies and Slaughter, 2013; Moed et al., 1998; Slaughter et al., 2014), but few of these assess the effects or consequences of specific policies, program initiatives, or commitments.
reports, and speeches – which provide a sense of commitment to interdisciplinary pursuits – in order to understand how universities aspire to foster interdisciplinary collaboration; but interdisciplinary collaboration is not measured explicitly. And several studies by Borrego and colleagues (Boden and Borrego, 2011; Newswander and Borrego, 2009) examine the impact of NSF-funded Integrative Graduate Education and Research Traineeship (IGERT) programs that are intended to train students in interdisciplinary domains, though the outcomes of interest are student satisfaction, participation, engagement, and connections, rather than IDR per se.

But the consequences of universities’ organizational arrangements are rarely examined, especially on university-level outcomes. This is surprising given the rise of interdisciplinary departments and research centers, and the popularity of these organizational rearrangements within university settings (Clausen et al., 2012). In his study of 21 universities, Harris (2010:29) found that they commonly pursued structural strategies like eliminating “barriers that prohibited collaboration” and creating “functional units to support interdisciplinary teams.” We know that over the past few decades, universities have increased the number of departments and the number of interdisciplinary departments and programs in particular (Brint et al., 2009). For example, in their case study of San Jose State University, Gumport and Snydman (2002) found that 25 departments existed in 1957, and this rose to 61 departments by 1997. And since the late 1970s, universities and funding agencies have supported the development of research centers (Berman, 2012). We've also witnessed a proliferation of research centers, especially those that span sectors and disciplines (Boardman and Bozeman, 2007). Indeed, “[c]enters are a principal means by which NSF fosters interdisciplinary research” (NSF, 2017:1). According to Geiger (1990), the number of research centers almost doubled between 1980 and 1985, and as of 2009, elite research universities housed an average of 95 centers each (Jacobs and Frickel, 2009).6

To our knowledge, no one has studied how interdisciplinary departments influence IDR activity, but a number of individual-level studies have examined how (presumably interdisciplinary) research center affiliation influences faculty scholarship. We draw upon these studies to develop expectations about how structural commitment to IDR (gauged broadly by considering research centers as well as departments, especially interdisciplinary ones) may influence university-level scholarly output. For example, a number of studies have documented that research center affiliation promotes the establishment of new ties and research collaborations (Biancani et al., 2014, 2018; Kabo et al., 2014), boosts the quality of research (Smith et al., 2016), and garners more attention from the broad scientific community (Rogers et al., 2012). Research center affiliation has also been shown to improve scholarly productivity as well as grant activity – some of the outcomes of interest to us. Studies of NSF-funded multidisciplinary science centers in biotechnology (Gaughan and Bozeman, 2002), ecological synthesis centers (Hampton and Parker, 2011), and NIH-funded Transdisciplinary Tobacco Use Research Centers (Hall et al., 2012) found that center affiliation boosts productivity, though in the field of learning science, this effect only holds for senior faculty (Sabharwal and Hu, 2013).
This is likely the case because centers tend to select productive researchers and are founded and funded in order to spur research development (Biancani et al., 2018:548). Other studies of specific centers have found that they ratchet up applications for external research funding and improve grant funding success. How? In addition to drawing in talent, centers pool
resources (Biancani et al., 2014, 2018) and foster physical proximity (Kabo et al., 2014), thereby making their members more eligible for funding (which increasingly requires multidisciplinary teams) and more competitive for funding. Although these studies only focus on centers – just one aspect of universities’ structural commitment to IDR – their consistent results lead us to expect that structural commitment to IDR likely improves levels of both scholarly productivity and external funding at the university level.

H1: Universities’ structural commitment to IDR will be positively associated with scholarly productivity.

H2: Universities’ structural commitment to IDR will be positively associated with grant activity.

Surprisingly, few studies assess whether (some form of) commitment to IDR actually promotes interdisciplinary research activity (Leahey, 2018). Indeed, the typical outcome of these studies is overall levels of productivity, grant activity, patenting, or collaboration, not interdisciplinary research specifically. We identified three exceptions that examine IDR as an outcome, but they are all case studies of a single center and/or analyze only individual-level (not university-level) outcomes. For example, Bishop et al. (2014) studied the National Institute for Mathematical and Biological Synthesis to understand the extent to which affiliation with the center affects the interdisciplinarity of the members’ research. Focusing on the 46 scholars affiliated with this center, they collected bibliometric data and conducted qualitative interviews to compare the variety of fields scholars publish in both before and after center affiliation. Although their quantitative analyses reveal that center affiliation has no significant effect on interdisciplinary research (i.e., the variety of fields one publishes in), the interviewees report some shift in focus over time, toward more mathematical fields. Yang and Heo (2014) assess whether the first government-funded university research center program in Korea has been successful in fostering interdisciplinary collaborations. Their network analysis of a sample of research publications (n = 102) resulting from one center funded by the program reveals that the program fosters the dissemination of knowledge across a broad range of fields. Basner and colleagues (2013) employ a prospective design to study center-affiliated cancer researchers. They find that not all interdisciplinary collaborations within the center result in interdisciplinary publications; in fact, just over half (56%) do. Results from these studies lead us to expect that universities’ structural commitment to IDR will have a positive but mild relationship with members’ interdisciplinary research activity, whether in terms of publications or grants.

H3: Universities’ structural commitment to IDR will have a positive but mild association with interdisciplinary productivity.

H4: Universities’ structural commitment to IDR will have a positive but mild association with interdisciplinary grant activity.

Our study thus makes multiple contributions to the literature. First, we examine structural commitment to IDR in a holistic way by focusing on two types of relevant organizational units – research centers and departments – and by measuring rather than assuming their interdisciplinary nature. Second, we focus on the consequences of university commitment to IDR, rather than exploring its origins and antecedents.
Third, we investigate whether such commitment is related not only to productivity and grant activity in general, but also to interdisciplinary activity in particular. Fourth, we conduct a university-level study and measure outcomes at the university level, rather than examine how affiliation with a certain kind of unit (e.g., an interdisciplinary research center) affects individual career outcomes. And last, although we draw from case studies of particular centers, programs, and universities, we scale up our analysis to study over 150 research universities nationwide, providing a stronger foundation for generalization.
6 Funding from federal agencies was critical to this proliferation (Geiger and Sá, 2005). In 2016, the National Institutes of Health spent about 9% of its budget, or $2.64 billion, specifically to support research centers (NIH, 2016). That same year, the National Science Foundation invested over $210 million to support 68 research centers (NSF, 2017:1).
4. Data and methods
4.1. Sample

To study the relationship between (university) commitment to IDR and (faculty) research conduct – and interdisciplinary research in particular – we study the population of top research universities across the United States. Research universities are an ideal site to examine commitment to IDR given their research-oriented mission and their support of research activity in a wide variety of fields. Like Brint and colleagues (2009:162), we focus on large and prestigious research universities because they “have the wealth and organizational capacity to respond to new developments,” such as shifts in science policy initiatives to foster IDR. We define top research universities as those included in the “very high research activity” category of colleges and universities listed in the 2010 or 2005 Basic Carnegie Classifications and those classified as research extensive institutions in the 2000 Basic Carnegie Classifications (N = 156).7 This approach comprises longstanding core research universities, as well as more peripheral members of this elite group. Given current interest and ongoing investments in IDR (Jacobs, 2013; National Academies of Science, 2005; National Research Council, 2014), we focus on recent years. To empirically assess whether and how (university) commitment to IDR influences (faculty) engagement in IDR, we move beyond a simple cross-sectional design and measure variables in different years, depending on their location in our conceptual model (see Fig. 1 above). We measure the key independent variable (universities’ structural commitment to IDR) and relevant control variables during the 2012–13 academic year, and we measure the outcomes of interest in 2015.8

4.2. Measures

Although our interest in IDR is infused throughout the analysis, and is evident on both sides of our conceptual model, the concepts – and thus the measures – are distinct. Our key explanatory variable – structural commitment to IDR – is a novel and inherently latent construct; accordingly, we develop and implement our own measurement approach, which relies on traditional content analysis, computational techniques for textual data, and statistical measurement modeling. In contrast, our key outcome variables (interdisciplinary publications and grants) have been used in earlier research, and can be measured more directly, relying more on meta-data (like keywords) than gigabytes of raw textual data, and using fewer analytic steps.9 We discuss each in turn below.

4.2.1. Structural commitment to IDR

For our key explanatory variable, we use a measure we developed previously. We summarize our measurement process here; further details can be found in Leahey et al. (2019). The measure relies on data we collected on over 20,000 units – 9,211 research centers and 12,323 departments – that existed at these 156 universities in 2012–13.10 For each university, we determined the number of centers, the number of departments, and the ratio of centers to departments, which is informative if – as previous research assumes – centers are interdisciplinary and departments are monodisciplinary. Questioning this assumption, we also classified each unit as interdisciplinary or not. We did this by developing a coding scheme (see the abbreviated coding guidelines in Appendix A), which we based on approaches that others have used to identify interdisciplinary text (Bache et al., 2013; Brint et al., 2009; Evans, 2016; Nichols, 2014), on extant definitions of interdisciplinarity (Wagner et al., 2011), and on disciplinary classifications made by NSF and the Classification of Instructional Programs (CIP). The coding scheme delineates indications of interdisciplinarity that tend to appear in the names of departments, and in the names and subject headings of research centers. These include direct reference to the term ‘interdisciplinarity’ or related terms like ‘integration’ and ‘synthesis’; reference to two or more disciplines or disciplinary stems (like ‘Bio’ and ‘Chem,’ as in ‘Geophysics’ and ‘Department of Sociology & Anthropology’); and reference to one of the area/global/ethnic studies fields identified as interdisciplinary by Brint and colleagues (2009). After manually coding example departments and centers (to serve as ‘training’ data from which the ‘machine’ can ‘learn’), we developed a classifier using Python's scikit-learn package and applied it to the data of interest (the names and subject headings of all departments and centers at the 156 universities under study). The output contains two variables: the unit name/identifier, and a binary variable indicating whether, based on our manually coded data and the computer's ability to learn from it, the unit is considered interdisciplinary. Our classifier attained a high (90%) precision rate, and we validated its output in multiple ways, all of which are documented in Leahey et al. (2019).

After aggregating the counts to the university level and implementing a necessary correction (Hopkins and King, 2010),11 we specified a confirmatory factor analysis (CFA) model to measure our construct of interest: structural commitment to IDR. CFA is a multivariate statistical technique used to determine the best way to measure a latent construct. It is appropriate (as in this case) when one has multiple possible indicators of the construct in mind (Bollen, 1989).12 Results from the CFA reveal that universities’ structural commitment to IDR can be captured best with four indicators: the number of departments; the ratio of (the number of) centers to (the number of) departments; the fraction of centers that are interdisciplinary; and the fraction of departments that are interdisciplinary. This CFA model is depicted in Appendix B. From this best-fitting model (SRMR = 0.03; RMSEA = 0.03; TLI = 0.96; CFI = 0.99),13 we derived a continuous and normally distributed factor score (essentially a variable whose values are akin to predicted values based on the CFA factor loadings).
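For readers less familiar with CFA notation, the one-factor measurement model just described (and depicted in Appendix B) can be written in standard form; the symbols below are generic illustrations, not the authors' own notation.

```latex
% One-factor CFA: eta_u is university u's latent structural commitment to IDR;
% x_{1u},...,x_{4u} are the four observed indicators (number of departments,
% center-to-department ratio, fraction of centers that are ID, fraction of
% departments that are ID).
\begin{align*}
  x_{ju} &= \tau_j + \lambda_j\,\eta_u + \varepsilon_{ju}, \qquad j = 1,\dots,4, \\
  \hat{\eta}_u &= \sum_{j=1}^{4} w_j\,\bigl(x_{ju} - \bar{x}_j\bigr),
\end{align*}
% where the weights w_j are derived from the estimated loadings lambda_j;
% the factor score \hat{\eta}_u is the continuous explanatory variable used below.
```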
7 We excluded Teachers’ College because it had missing values on all the key variables in our analysis.
8 We had measures of all outcomes in both earlier and later years and were thus able to explore the best lag structure. We replicated analyses with outcomes measured in 2014 (available upon request) and results were similar to those reported here. When we examined outcomes measured in 2016 and 2017, the effects reported here were attenuated, suggesting that university commitment is associated with gains in productivity in the subsequent two to three years, but not thereafter.
9 While impressed with the careful work of Bache et al. (2013) and Nichols (2014), it is difficult to adapt their measures of diversity to our work, as they rely on topics derived inductively via topic modeling of extensive textual data (e.g., titles, abstracts, full text), which – for the number of universities we study here (156) – would be prohibitive to collect. Because topic modeling is inductive, there would also be no guarantee that the topic set (and thus the diversity measure based on topics) would replicate across studies.
10 We thus examine the stock (not the age) of units in existence at that time. In our conceptualization and corresponding calculations, new interdisciplinary centers ‘count’ just as much as older ones, and are no less integral to gauging a university's level of structural commitment to IDR in 2012–13. Subsequent research interest in the longevity of units would be illuminating but is beyond the scope of this paper.
11 Because our goal is not to classify any single piece of text (e.g., a department name) correctly, but to aggregate the classification to the university level, we implement the correction developed by Hopkins and King (2010). This correction ensures that errors in classifying a given text do not bias the aggregate university-level measures of interest.
12 Exploratory factor analysis is better suited for situations in which the researcher has multiple constructs of interest and multiple possible indicators, but does not have a theoretical sense of which indicators tap which constructs. That is not the case here.
13 For the SRMR (Standardized Root Mean Square Residual) and RMSEA (Root Mean Square Error of Approximation), small values close to 0 are ideal. For the TLI (Tucker-Lewis Index) and the Comparative Fit Index (CFI), large values close to 1 are ideal.
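To make the classification step in Section 4.2.1 concrete, the sketch below shows one way such a unit-name classifier could be built in scikit-learn. It is a minimal illustration under our own assumptions (hypothetical file and column names, and a TF-IDF plus logistic regression model); the paper reports only that a scikit-learn classifier was trained on manually coded unit names and subject headings and attained roughly 90% precision.

```python
# Minimal sketch of a unit-name classifier; file/column names and model choice are assumptions.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Manually coded training data: unit name/subject heading plus a human label (1 = interdisciplinary).
train = pd.read_csv("coded_units.csv")    # columns: "unit_text", "is_idr" (hypothetical)
units = pd.read_csv("all_units.csv")      # all department and center records for the 156 universities (hypothetical)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("model", LogisticRegression(max_iter=1000)),
])

# Precision is the validation criterion emphasized in the text (the paper reports ~90%).
print(cross_val_score(clf, train["unit_text"], train["is_idr"], scoring="precision", cv=5).mean())

clf.fit(train["unit_text"], train["is_idr"])
units["is_idr_pred"] = clf.predict(units["unit_text"])

# Aggregate to the university level; the Hopkins-King (2010) correction would be applied afterwards.
shares = units.groupby(["university", "unit_type"])["is_idr_pred"].mean().unstack()
```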
4.2.2. Productivity

Like most studies of research productivity across a range of disciplines (Azoulay et al., 2017; Biancani et al., 2018; Foster et al., 2015; Uzzi et al., 2013), we rely on articles published in peer-reviewed scholarly journals, to the exclusion of books.14 From the Center for Science and Technology Studies at Leiden University in the Netherlands, we obtained publication counts from the Web of Science (WoS) for each university in 2015. Relative to other databases like Scopus, Google Scholar, and JSTOR, WoS is the gold standard in terms of its comprehensiveness and representativeness. It indexes articles published in over 20,000 peer-reviewed journals that represent all academic fields, even the typically underrepresented humanities. We include in our tally only research and review articles; other types of publications, such as corrections, letters, editorials, and book reviews, are excluded as they do not represent original research. Each article contributes fully (as “1” article in the tally) to each university represented in the list of authors’ affiliations;15 we do not use fractional publication counts weighted by the number of co-authors.

4.2.3. Interdisciplinary productivity

We also obtain a count of interdisciplinary publications to measure how much interdisciplinary research is conducted within these universities. Following Wu et al. (2019), we consider papers to be interdisciplinary if they are published in a journal classified as “multidisciplinary,” one of the roughly 250 subject categories that the WoS uses to classify journals. Because interdisciplinary papers can be published in journals not classified as multidisciplinary, our approach under-represents interdisciplinary productivity, and our estimates will thus be conservative.

4.2.4. Grant activity

We downloaded grant award data from the National Institutes of Health (NIH) as well as the National Science Foundation (NSF) for each of the universities under study. For NIH, we were able to obtain not only the number of active grants in the year of interest (2015), but also the dollar amount. For NSF, we were able to obtain the number of grants each university was awarded in the year of interest (2015), but not the dollar amount.

4.2.5. Interdisciplinary grant activity

Given that award databases and disciplinary classifications vary across federal agencies, our approach to identifying interdisciplinary grants differs for NIH and NSF. To identify the subset of NIH grants that support interdisciplinary research, we relied on NIH's 239 activity codes, one of which is assigned to each grant. Activity codes describe the kinds of research funded. For example, activity code ‘DP2’ is “To support highly innovative research projects by new investigators in all areas of biomedical and behavioral research,” and activity code ‘F37’ is “To provide training to individuals in the synthesis, organization, and management of knowledge. The training should be interdisciplinary - involving medicine, biotechnology, and cognitive sciences, information science, and computer science.”16 After all team members developed coding guidelines, one author and her graduate research assistant classified each activity code as either interdisciplinary or not;17 the other author helped reconcile disagreements. Once codes were finalized and applied, we calculated the fraction of NIH grants that were interdisciplinary for each university.

To identify the subset of NSF grants that support interdisciplinary research, we considered several options, including grants that are co-funded by two or more NSF programs, awards that are funded by the Office of Multidisciplinary Activities, and awards from cross-cutting (XCUT) programs. NSF staff suggested that the latter would be most valid, so for each university we calculated the fraction of all grants active in 2015 that were awarded by a cross-cutting program.18 We understand that funded interdisciplinary research is also conducted outside the NSF XCUT programs and the NIH activity codes we have identified. However, we took this approach for several reasons: it is consistent with our (conservative) measures of interdisciplinary productivity (i.e., articles); it was recommended by federal agency staff; and it was most consistent across agencies. Thus, if we find that commitment to IDR, as hypothesized, has a mild but positive effect on interdisciplinary grant activity, then we can be quite certain that its effect would be stronger with more generous measures of interdisciplinary grant activity.

4.2.6. Control variables

We control for potentially extraneous variables that could influence both structural commitment to IDR and the outcomes of interest to us. Brint et al. (2009) suggest that commitment to IDR may be greater at larger, arts and science-oriented universities on either coast and at wealthier universities, and such institutions may also have higher research activity. Urban universities have a greater potential for research collaboration with other universities as well as local companies, which may spur research productivity (Lee and Bozeman, 2005). The subset of elite universities (e.g., AAU institutions) is particularly eager to keep up with the research frontier by promoting cutting-edge interdisciplinary science. Technical institutions (e.g., Georgia Tech or MIT) may be more research active than other institutions, and their focus on technical and applied work (which spans industry and academe) may also be more interdisciplinary. It is also important to control for research capacity, land grant status, sector, and the presence of a medical school to capture variation in capacity generally, as well as research capacity specifically, that may influence research productivity generally and IDR specifically.

We obtain measures of these variables from the Integrated Postsecondary Education Data System (IPEDS), the Higher Education Research and Development Survey (HERD), and university websites for the 2012–13 academic year. Categorical variables capture location, including region of the U.S. (West, Northeast, South, with reference category Midwest) and urban setting (1 = yes). We control for attributes of the university itself, like whether the university is a land grant institution (binary), an elite AAU university, a technical institution (i.e., has ‘tech’ in its name), or an R1 (“very high research activity” according to the 2010 Carnegie Classification) university, which proxies research capacity and institutional focus. We measure whether the university has an affiliated medical school, and whether it is a public or private institution. We use continuous measures of total spending on R&D (in inflation-adjusted $1000s) and total revenues (in inflation-adjusted $1000s) per FTE student to capture the research capacity and resources of these institutions, respectively. We opt to count faculty (specifically, the number of tenured and tenure-track faculty) rather than students in order to proxy the size of the research-active faculty, which is more relevant to our analysis than the number of students.

14 This underrepresents total productivity, and most likely interdisciplinary productivity, given that books tend to be more interdisciplinary in nature (Clemens et al., 1995). However, previous studies have found that article productivity correlates strongly with total productivity that includes books, book reviews, and contributions to edited volumes (Clemens et al., 1995; Leahey, 2007).
15 Although this means that some articles are represented multiple times in the sample of papers across all universities in our sample, we only analyze university-specific values.
16 A description of each activity code can be found at https://grants.nih.gov/grants/funding/ac_search_results.htm.
17 The inter-coder reliability rate was 93.25.
18 Current and past lists were obtained from https://www.nsf.gov/funding/pgm_list.jsp?type=xcut using the Internet Archive's WayBack Machine.
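To illustrate how the NIH-based measures in Section 4.2.5 could be assembled once the activity codes are hand-coded, the sketch below computes each university's percentage of interdisciplinary awards and award dollars. File and column names are hypothetical; this is not the authors' actual code.

```python
# Sketch: university-level share of NIH awards with interdisciplinary activity codes.
# File and column names are hypothetical.
import pandas as pd

awards = pd.read_csv("nih_awards_2015.csv")   # one row per active award: "university", "activity_code", "amount"
id_codes = set(
    pd.read_csv("coded_activity_codes.csv")
      .query("is_interdisciplinary == 1")["activity_code"]
)

awards["is_id"] = awards["activity_code"].isin(id_codes)

by_univ = awards.groupby("university").agg(
    n_grants=("is_id", "size"),
    pct_id_grants=("is_id", "mean"),          # fraction of awards coded as interdisciplinary
    total_dollars=("amount", "sum"),
)

# Dollar-weighted analogue (share of grant dollars that are interdisciplinary).
id_dollars = awards.loc[awards["is_id"]].groupby("university")["amount"].sum()
by_univ["pct_id_dollars"] = id_dollars.reindex(by_univ.index, fill_value=0) / by_univ["total_dollars"]

# Express both as percentages, as in Table 1.
by_univ["pct_id_grants"] = 100 * by_univ["pct_id_grants"]
by_univ["pct_id_dollars"] = 100 * by_univ["pct_id_dollars"]
```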
4.3. Analytic approach

We use descriptive and multivariate statistics to test the hypotheses of interest. All of our outcomes – even those that could, conceivably, be considered count variables (e.g., the number of grants received from NIH) – are normally distributed once we take the natural log, so we use ordinary least squares regression techniques. All analyses are conducted in Stata 15.

We understand that “in the absence of experimental control, the problem of determining which strategies and structures improve organizational performance is a thorny one” (Scott and Davis, 2007:313). For this reason, we are careful to include a comprehensive set of control variables that may influence both structural commitment and the outcomes of interest. Furthermore, we incorporate a lag structure into our design, measuring the explanatory variable (and all controls) in 2012–13, and each outcome in 2015. This helps reduce concerns about endogeneity and reverse causality. However, we cannot eliminate such concerns because commitment to IDR cannot be randomly assigned, and because we lack panel data and thus the ability to specify fixed effects (Biancani et al., 2018). Thus, we interpret our results cautiously, with the understanding that our theorized causal order cannot be tested empirically.
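As a concrete illustration of this specification: the paper estimates the models in Stata 15, but an equivalent log-linear OLS could be sketched as below. Variable and file names are hypothetical, and the control set is abbreviated.

```python
# Sketch of the log-linear OLS specification with a 2012-13 -> 2015 lag (hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("university_panel.csv")   # one row per university

# Outcome measured in 2015; explanatory variable and controls measured in 2012-13.
df["ln_nih_grants_2015"] = np.log(df["nih_grants_2015"])

model = smf.ols(
    "ln_nih_grants_2015 ~ idr_commitment + revenue_per_fte + rd_spending_per_fte"
    " + tt_faculty + medical_school + landgrant + aau + r1 + urban + public"
    " + C(region) + technical",
    data=df,
).fit()
print(model.summary())
```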
Fig. 2. Distribution of universities’ structural commitment to IDR.
5. Results

We note that there is much variation in universities’ structural commitment to IDR during the 2012–13 academic year. The continuous factor score, derived from the confirmatory factor analyses presented in an earlier paper (Leahey et al., 2019), is normally distributed (0.0, 13.1) with a minimum of −45 and a maximum of 38.5 (Table 1). Although the absolute value of the factor score is not informative, the relative values are; they let us know which universities are more structurally committed (i.e., have a higher value on the factor score) than others. As depicted in Fig. 2, some universities are more structurally committed to IDR than others, and this kind of commitment – indicated by both the number and interdisciplinary nature of research centers and departments – is distinct from other forms of commitment that are highlighted in the literature. Often, universities like ASU, Stanford, and Berkeley are chosen as exemplars of interdisciplinarity in action, but they fall only in the middle of the continuum in terms of their structural commitment. It is universities like Harvard, UC Riverside, and Brandeis that have most clearly demonstrated their commitment to IDR through structural reorganization of relevant units: centers and departments.

Universities also vary widely in the outcomes of interest to us, which are all measured in the 2015 calendar year (Table 1). Scholarly productivity varies across even the top research universities we study: in the Web of Science, some universities are listed as an author affiliation on as few as 113 articles, whereas other universities are named on over 20,000 articles. There is also substantial variation in grant activity, especially grants awarded by NIH. In terms of the number of NIH grants awarded to an institution, the standard deviation (268.9) exceeds the mean value (204.1), and this is also the case for NIH award dollars, with a mean of $78,847 and a standard deviation of $106,345. Some universities received as few as four grants from NSF, whereas other universities received as many as 880.

Table 1
Descriptive statistics for N = 156 research universities.

Variable                                                                Mean        Std. dev.   Min        Max
Key explanatory variable (2012–13)
  Structural Commitment to IDR                                          0.00        13.11       -45.56     38.49
Outcome variables (2015)
  Publications (# articles)                                             2,785       2,643       113        20,281
  NIH Grant Activity (# grants)                                         204.07      268.90      1          1,271
  NIH Grant Activity ($k amount)                                        $78,847     $106,345    0          $478,993
  NSF Grant Activity (# grants)                                         215.90      185.90      4          880
  Interdisciplinary Publications (#)                                    139.80      167.40      1          1,399
  Interdisciplinary NIH Grants (% of awards that are ID)                7.70        8.64        0          51.02
  Interdisciplinary NIH Grant Dollars (% of grant dollars that are ID)  14.26       14.88       0          84.07
  Interdisciplinary NSF Grants (% of awards that are ID)                77.98       7.86        23         100
Control variables (2012–13)
  Total current funds revenue (cpi adj $)                               $101,891    $214,763    $16,180    $2,404,000
  Total R&D spending (cpi adj $)                                        26.8        123.6       1          1,539
  Tenured & tenure-track faculty (#)                                    978.7       494.3       54         2,682
  R1 university (1=yes, 0=otherwise)                                    0.69        0.47        0          1
  AAU member (1=yes, 0=otherwise)                                       0.38        0.48        0          1
  Medical school affiliated (1=yes, 0=otherwise)                        0.58        0.49        0          1
  Landgrant Univ (1=yes, 0=otherwise)                                   0.32        0.47        0          1
  Urban Univ (1=yes, 0=otherwise)                                       0.75        0.44        0          1
  Public Univ (1=yes, 0=otherwise)                                      0.67        0.47        0          1
  West region (1=yes, 0=otherwise)                                      0.20        0.40        0          1
  Northeast region (1=yes, 0=otherwise)                                 0.25        0.43        0          1
  Southern region (1=yes, 0=otherwise)                                  0.34        0.48        0          1
  Technical Univ (1=yes, 0=otherwise)                                   0.04        0.19        0          1

As expected, universities’ structural commitment to IDR is positively associated with scholarly productivity and especially with NIH grant activity, lending some support to hypotheses 1 and 2 (Table 2). Even after controlling for resources, the number of research-active faculty, and relevant university characteristics (R1 status, AAU membership, land grant status, public/private control, region, urban setting, and an affiliated medical school), we find that universities with greater structural commitment to IDR have higher levels of scholarly productivity relative to those with less structural commitment (Model A). This effect is significant at p = .07 (one-tailed test for a directional hypothesis), but not large: the coefficient of .004 suggests that as a university's structural commitment increases 10 points (from, say, −30 to −20), its scholarly productivity increases by 4%. The effects on NIH grant activity are larger and reach greater levels of statistical significance. Results from Models B and C reveal that as a
university's structural commitment to IDR increases 10 points, the number of NIH grants increases by 13% and the amount of NIH grant dollars received increases by 11%, and these effects are significant at p = .006 (one-tailed). On its own, university structural commitment to IDR has a positive and statistically significant relationship with the number of NSF grants awarded (results not shown), but once we add controls (Model D), this significant relationship disappears, suggesting that resources, the number of research-active faculty, and other university characteristics influence both structural commitment and NSF grant activity. Overall, the models account for 70–82% of the variation in scholarly productivity and grant activity.19

But does commitment to IDR actually foster interdisciplinary research? An empirical answer to this rarely asked question can be found in Table 3, Model E. We find that universities that house, create, or reorganize units (centers, departments) to manifest their commitment to IDR tend to produce more interdisciplinary scholarship. The coefficient of 0.008 suggests that as a university's commitment to IDR increases 10 points (from, say, −30 to −20), its interdisciplinary productivity increases by 8% (Table 3, Model E). Note that this coefficient is twice the size of the coefficient for productivity generally (Table 2, Model A), suggesting that higher levels of structural commitment to IDR are positively associated with overall productivity, and with interdisciplinary productivity in particular. The association is even greater when we look at NIH grant activity. A 10-point increase in structural commitment to IDR is associated with a 22% increase in the number of NIH grants awarded (Model F), and a 22% increase in the dollar amount of NIH awards (Model G). Although structural commitment has a larger effect on interdisciplinary grant activity than on interdisciplinary publications, the R-square value for the latter is much higher, suggesting that unmeasured university characteristics (perhaps the extent of grant-writing support and incentives) are likely relevant. As we found for NSF grant activity generally, structural commitment to IDR has no significant relationship with interdisciplinary NSF grant activity (Model H).

Given that we study 156 universities, we also examined whether and how the effects we've documented vary by university status, a key sociological concept operationalized here as membership in the AAU. We created an interaction term by multiplying structural commitment to IDR by AAU status, and included it in the models presented in Tables 2 and 3. For two measures of general grant activity (total # NIH grants, total # NSF grants) and one measure of interdisciplinary productivity (# ID publications), we find that the coefficients for the main effects are positive, and the interaction term is negative and statistically significant (see Table 4). This suggests some sort of substitution effect: it is beneficial (in terms of grant activity and interdisciplinary publications) to be a member of the AAU and to be structurally committed to IDR, but there is no additional benefit (and indeed a small penalty) to being both.

Although our main interest lies in how universities’ commitment to IDR influences research productivity and grant activity, and previous research allowed us to develop hypotheses about such effects, we can't help being curious about research quality and impact.
Indeed, we motivated our work by noting that research policy is eager to promote groundbreaking, transformative, and high-impact science to help solve complex global problems. According to the National Academies of Science (2005:39), “the potential power of IDR to promote novel and even revolutionary insights is generally accepted”. And Leahey et al. (2017) found that interdisciplinary research papers receive a boost in their visibility and impact, as assessed by citations. But does this effect scale up to the university level? Does a university's structural commitment to IDR prompt more high-impact research? The results of our modeling efforts suggest not. In Appendix C we show that universities’ structural commitment to IDR has no statistically
Table 2
Effect of structural commitment to IDR on productivity & grant activity in 2015 (OLS coefficients, with SEs in parentheses).

                                                      Model A             Model B             Model C             Model D
                                                      Publications (#)    NIH Grants (#)      NIH Grants ($)      NSF Grants (#)
Key explanatory variable
  Structural Commitment to IDR(a)                     0.004+ (0.00)       0.013* (0.01)       0.011* (0.01)       0.0003 (0.00)
Control variables
  Total current funds revenue per FTE student
    (cpi adj $)                                       1.36e-06* (0.00)    2.16e-06* (0.00)    2.25e-06* (0.00)    3.75e-07 (0.00)
  Total R&D spending per FTE student (cpi adj $)      -0.0020* (0.0009)   -0.002 (0.00)       -0.0017 (0.00)      -0.0018 (0.00)
  Tenured & tenure-track faculty (#)                  0.0009** (0.00)     0.001** (0.00)      0.001** (0.00)      0.0008** (0.00)
  Medical School affiliated                           0.279** (0.08)      0.829** (0.13)      0.894** (0.14)      -0.189+ (0.11)
  Landgrant University                                -0.101 (0.086)      -0.547** (0.14)     -0.674** (0.15)     0.237* (0.12)
  AAU member                                          0.315** (0.10)      0.418* (0.17)       0.437* (0.18)       0.600** (0.14)
  R1 university                                       0.590** (0.0932)    0.661** (0.15)      0.818** (0.16)      0.572** (0.13)
  Urban University                                    -0.113 (0.08)       -0.154 (0.14)       -0.198 (0.14)       -0.108 (0.12)
  Public University                                   0.116 (0.10)        -0.0473 (0.16)      0.006 (0.17)        0.297* (0.14)
  West region                                         0.246* (0.111)      0.432* (0.18)       0.473* (0.19)       0.443** (0.15)
  Northeast region                                    0.276* (0.11)       0.359* (0.18)       0.384* (0.19)       0.331* (0.15)
  Southern region                                     0.100 (0.097)       0.116 (0.16)        0.21 (0.17)         0.09 (0.13)
  Technical Univ                                      0.411* (0.203)      0.235 (0.33)        0.223 (0.35)        0.781** (0.28)
  Constant                                            5.685** (0.145)     2.112** (0.23)      14.79** (0.25)      3.197** (0.20)
Observations                                          156                 156                 155(b)              156
R-squared                                             0.82                0.793               0.787               0.705

Note: All four outcomes are logged to alleviate skewness. ** p < 0.01, * p < 0.05, + p < .10 (two-tailed tests unless otherwise noted).
(a) One-tailed test.
(b) One university was not awarded any dollars, so the log value is undefined and it was dropped from analyses.
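To clarify how the percentage interpretations reported in the text follow from the logged outcomes, a worked example (our own illustration, not part of the original tables):

```latex
% Log-linear interpretation: with ln(y) regressed on structural commitment x
% (plus controls z), a 10-point increase in x multiplies y by e^{10*beta}.
\ln(y_u) = \beta\, x_u + \mathbf{z}_u'\boldsymbol{\gamma} + \varepsilon_u
\qquad\Longrightarrow\qquad
\frac{\Delta y}{y} = e^{10\beta} - 1 \;\approx\; 10\beta .
% Example (Model B, beta = 0.013): e^{0.13} - 1 = 0.139, i.e., roughly the 13%
% reported in the text, which uses the linear approximation 10*beta.
```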
19 These results hold when we limit the sample to universities with some minimal number (5) of grants.
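As a worked example of the log-linear interpretation used above, a 10-point increase in structural commitment implies

\Delta \log(\text{Publications}) = 0.004 \times 10 = 0.04, \qquad e^{0.04} - 1 \approx 0.041,

that is, roughly a 4% increase in publications, consistent with reading small coefficients on a logged outcome directly as proportional changes.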
Table 3
Effect of structural commitment to IDR on interdisciplinary productivity & grant activity in 2015 (OLS coefficients and SEs).
Outcomes: Model E, Interdisciplinary Publications (#); Model F, Interdisciplinary NIH Grants (percentage of awards that are ID); Model G, Interdisciplinary NIH Grant Dollars (percentage of grant dollars that are ID); Model H, Interdisciplinary NSF Grants (percentage of awards that are ID).

                                                   Model E             Model F             Model G             Model H
Key explanatory variable
  Structural Commitment to IDR a                   0.008* (0.004)      0.217** (0.06)      0.220* (0.10)       0.037 (0.06)
Control variables
  Total current funds revenue per FTE student
  (cpi adj $)                                      2.03e-06* (0.00)    1.77e-05 (0.00)     4.35e-05* (0.00)    7.78e-06 (0.00)
  Total R&D spending per FTE student (cpi adj $)   −0.002 (0.001)      −0.014 (0.02)       −0.05 (0.03)        −0.0184 (0.02)
  Tenured & tenure-track faculty (#)               0.0010** (0.00)     −0.002 (0.00)       −0.005 (0.00)       −0.0006 (0.00)
  Medical School affiliated                        0.425** (0.12)      1.784 (1.62)        5.805* (2.74)       −2.143 (1.51)
  Land-grant University                            0.032 (0.12)        2.872+ (1.67)       8.898** (2.83)      1.066 (1.56)
  AAU member                                       0.36* (0.15)        −0.58 (2.00)        −1.19 (3.39)        −2.89 (1.87)
  R1 university                                    0.81** (0.134)      0.89 (1.81)         2.25 (3.06)         2.13 (1.69)
  Urban University                                 −0.146 (0.12)       1.531 (1.63)        3.078 (2.77)        −0.023 (1.53)
  Public University                                0.087 (0.143)       −1.87 (1.93)        1.759 (3.27)        3.103+ (1.80)
  West region                                      0.421** (0.16)      1.84 (2.15)         2.468 (3.65)        0.94 (2.01)
  Northeast region                                 0.45** (0.160)      0.35 (2.13)         2.66 (3.62)         −0.86 (1.99)
  Southern region                                  0.038 (0.14)        1.127 (1.89)        5.711+ (3.20)       0.749 (1.76)
  Technical Univ                                   0.58+ (0.292)       −0.24 (3.93)        −6.11 (6.67)        −5.49 (3.67)
  Constant                                         2.10** (0.21)       5.40+ (2.82)        2.23 (4.78)         76.71** (2.63)
Observations                                       156                 156                 156                 156
R-squared                                          0.77                0.18                0.21                0.14

Note: Standard errors in parentheses.
** p < 0.01, * p < 0.05, + p < .10 (two-tailed tests unless otherwise noted).
a One-tailed test.
Table 4
How AAU status modifies the effect of structural commitment to IDR on three outcomes (OLS coefficients and SEs).

                                                   Model I            Model J            Model K
                                                   NIH Grants (#)     NSF Grants (#)     Interdisciplinary Publications (#)
Structural Commitment to IDR a                     0.019* (0.006)     0.007+ (0.005)     0.014** (0.005)
AAU member                                         0.43* (0.164)      0.612+ (0.138)     0.372* (0.147)
Structural Commitment to IDR × AAU member          −0.016** (0.009)   −0.018* (0.007)    −0.016* (0.008)
Observations                                       156                156                156
R-squared                                          0.77               0.69               0.78

Note: Standard errors in parentheses. Coefficients for the intercept and control variables are estimated but not shown.
** p < 0.01, * p < 0.05, + p < .10 (two-tailed tests unless otherwise noted).
a One-tailed test.
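A back-of-the-envelope reading of the reported Model I coefficients (our arithmetic, not an additional estimate) illustrates how the negative interaction term tempers the main effect:

\frac{\partial \log(\text{NIH grants})}{\partial \mathrm{IDR}} \approx 0.019 \ \text{(non-AAU universities)}, \qquad 0.019 - 0.016 = 0.003 \ \text{(AAU members)},

suggesting that the association between structural commitment and NIH grant counts is concentrated among non-AAU universities.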
6. Discussion and conclusion

In the effort to maintain their status and remain economically relevant while continuing to push the frontiers of science, universities nationwide have taken steps to promote interdisciplinary research to solve complex problems, make connections outside academia, and even generate alternative sources of revenue. To what extent have such efforts been successful? Verbal commitments to interdisciplinary scholarship (evident in, for example, strategic plans) may be telling, but they are less likely to influence scholarship and grant activity unless they are accompanied by structural change. We thus focus on the tangible commitments manifested in a university's organizational structure: the units – both departments and centers – where research takes place. Such structural commitments have the potential to reshape the academic
landscape (Gumport, 2000) by fostering research productivity and grant activity in general, and in interdisciplinary areas specifically. To examine whether this is the case, we collected qualitative and quantitative data on 156 universities, used our novel measure of universities’ structural commitment to IDR (Leahey et al., 2019), and specified time-sensitive, multivariate models to analyze whether such structural commitments spur research and grant activity as intended.

Our results suggest that commitment to IDR is positively associated with actual research activity, especially interdisciplinary research activity. We found that universities with greater structural commitment to IDR experienced, in subsequent years, greater levels of productivity and (surprisingly, given the anticipated mild effects) especially interdisciplinary productivity, as measured by the number of journal articles indexed in the Web of Science. The strongest associations were seen between structural commitment to IDR and NIH grant activity: our multivariate results reveal a strong, positive, and highly significant relationship with the number of NIH grants received and even the dollar amount awarded. Beyond our expectations (which anticipated mild effects), these relationships were even stronger when we examined interdisciplinary NIH grants. The significant bivariate association between commitment to IDR and NSF grant activity disappeared once we controlled for resource availability and number of faculty.

Why might structural commitment to IDR have a positive association with NIH grant activity but not NSF grant activity? As Brint (2005) notes, it is challenging to make direct comparisons between NSF and NIH, but we offer two possibilities. First, NIH does not have disciplinary programs; rather, its institutes and centers tend to be problem-focused, oriented toward particular diseases or body systems. Thus, its components are more closely related than NSF's programs, which tend to be disciplinary and range from Ecology to Sociology to Physics to Environmental Engineering. Second, NIH grant activity may have a stronger relationship with university commitments to IDR because a greater proportion of NIH grants are institutional grants that involve multiple (and often interdisciplinary) departments, like “maternal and child health” and “demography,” from the same institution, and/or are awarded to institutions with Population Centers.21 NSF grant activity levels are not dependent upon the way in which a university organizes its research activity across departments and centers (i.e., structural commitment to IDR), perhaps because a large fraction of NSF grants are awarded to single PIs or to investigators with collaborators at other universities, rather than to large teams housed within the same university.22

Although efforts to promote IDR are driven, at least in part, by a desire to stimulate groundbreaking, innovative, and transformative research, we find that structural commitment to IDR has no effect on research quality. We examined five indicators of research quality: the number of Highly Cited Researchers affiliated with each university; the number of articles published in the top general science journals Nature and Science; whether any faculty member has won a Nobel Prize or Fields Medal; the number of faculty who are members of the National Academy of Sciences; and the number of faculty who have won prestigious external awards.
Because it is possible that structural commitment to IDR (measured in 2012–13) takes longer to influence research
quality, we also modeled each indicator of research quality in later years – 2016 and 2017 – but again we find no significant relationship between structural commitment to IDR and these indicators of research quality. Although universities’ efforts to foster IDR are positively associated with the quantity of research (e.g., publications and NIH grant activity), they have no relationship to the quality of research as measured here. This suggests that universities seeking to promote innovative, high-impact, and potentially transformative research may need to move beyond structural commitment to IDR and pursue strategies other than those studied here.

Although our paper advances the literature in many ways, it is, like all research, subject to limitations. First, our time-sensitive design, in which the outcomes of interest are measured several years after the explanatory variables, assuages simultaneity and endogeneity concerns but cannot eradicate them. Indeed, NIH may be more likely to award grants to institutions that already have interdisciplinary centers for the study of population and demography, many of which owe their existence to previous NIH funding. Second, we were eager to focus on structural commitments to IDR, which likely have more staying power than the verbal commitments that have been the focus of study to date. But certainly, universities are trying to foster IDR in ways we have not measured here, such as hiring faculty in clusters, hiring interdisciplinary faculty, and promoting multidisciplinary graduate training programs, certificates, and the like. Because these efforts may be most conducive to NSF grant activity, we are eager in future work to connect data sources and study universities’ portfolios of activities intended to foster IDR.

Much of the science and research policy literature focuses, understandably, on regulations and guidelines at the federal level. In this paper, we moved down to the organizational level to examine the ways in which key actors in the realm of scientific research – research universities (Owen-Smith, 2018) – are trying to stimulate anticipated outcomes like heightened productivity and grant activity, especially interdisciplinary productivity and grant activity. Indeed, in a recent policy report funded by the NSF SciSIP program, Leahey (2018) noted a dearth of studies that examine IDR as an outcome in itself; the few studies that do exist focus on a particular program or university but do not make comparisons across many universities. Moreover, much previous work on universities examines what universities say, be it in strategic plans or in promotion and tenure guidelines (Harris and Holley, 2016; Harris, 2010; Stensaker et al., 2019). Like others who have studied initiatives like cluster hiring (Brint, 2017), research centers (Biancani et al., 2014, 2018; Bozeman and Boardman, 2013), and interdisciplinary graduate training (Borrego et al., 2014; Hackett and Rhoten, 2009), we moved beyond stated value commitments and focused on the commitment of resources to structural initiatives. We found that universities’ commitment to IDR, as manifested in their organizational structure (i.e., the number and interdisciplinary nature of key research units: departments and centers), is related to, and potentially spurs, both scholarly research and NIH grant activity in general, and interdisciplinary research and NIH grant activity in particular.
These results suggest that efforts to develop and reorganize academic units are not futile; rather, when value commitments are made concrete via research units like departments and centers, they can work as intended.
21 In 2015 (the year in which our outcome variables are measured), 20% of NIH grants were likely to be grants for infrastructure, as indicated by activity codes beginning with the letter U, F, or Z. There is no comparable way to identify NSF grants dedicated to infrastructure. However, we can compare the two funding agencies on grants intended for research centers, and here we find support for our conjecture: 2.6% of NIH grants have the word “center” in the title, more than double the percentage (1.3%) of NSF grants.
22 In 2015, 28% of NSF grants were collaborative (that is, cross-institutional) in nature, and the percentage of NIH grants that are collaborative is unknown (NIH award data do not include all investigators’ institutions). However, we do see a difference in multi-investigator projects: 37% of NSF grants have multiple investigators, compared to only 15% for NIH.
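The tallies in footnotes 21 and 22 are simple shares over 2015 award records. A minimal sketch of how such shares might be computed is shown below; it assumes hypothetical award-level tables with illustrative column names ("activity_code", "title", "num_investigators"), which differ from the actual NIH and NSF field names.

import pandas as pd

# Hypothetical award-level tables; column names are illustrative only.
nih = pd.read_csv("nih_awards_2015.csv")   # columns: activity_code, title, num_investigators
nsf = pd.read_csv("nsf_awards_2015.csv")   # columns: title, num_investigators

# Share of NIH awards whose activity code starts with U, F, or Z (infrastructure proxy).
infra_share = nih["activity_code"].str.upper().str[0].isin(["U", "F", "Z"]).mean()

# Share of awards with "center" in the title, per agency.
nih_center = nih["title"].str.contains("center", case=False, na=False).mean()
nsf_center = nsf["title"].str.contains("center", case=False, na=False).mean()

# Share of multi-investigator awards, per agency.
nih_multi = (nih["num_investigators"] > 1).mean()
nsf_multi = (nsf["num_investigators"] > 1).mean()

print(infra_share, nih_center, nsf_center, nih_multi, nsf_multi)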
Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research was supported by NSF SciSIP Collaborative grants to Erin Leahey and Sondra N. Barringer (award #s 1461989 and 1461846). Any opinions, findings, conclusions or recommendations
expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We are grateful to Steven Brint, Scott Frickel, and Jerry Jacobs for their
foundational work, to Mark Suchman for comments, and to Misty Ring-Ramirez, Karina Salazar, Esme Middaugh, and Benjamin Gaska for impeccable research assistance.
Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.respol.2019.103910.

Appendix A. Coding guidelines

These abbreviated manual coding guidelines were developed based on NSF's disciplinary classifications (https://www.nsf.gov/statistics/nsf13327/pdf/tabb1.pdf), CIP codes (https://nces.ed.gov/ipeds/cipcode/browse.aspx?y=55), extant definitions of interdisciplinarity (Wagner et al., 2011), and the foundational work of other scholars, especially Steven Brint and colleagues (2009). The full version is available upon request.

The text (e.g., the name of a department, or the name and subject heading of a research center) is likely interdisciplinary if it references….
► interdisciplinarity, including terms like: “interdisciplinary,” “multidisciplinary,” “trans-disciplinary,” “integrative,” “synthesis,” “applied,” “cross-disciplinary,” and “integration”
► two or more disciplines (i.e., CIP and NSF broad categories), or their stems, like: “Center for Pharmacology and Physiology,” “Geophysics,” “Bioengineering,” “Department of Sociology & Criminology”
► environmental or earth sciences, like: “Institute of Environmental Policy,” “Atmospheric and Oceanic Sciences”
► any of the following stem words in combination with “studies”: America-, biblic-, cultural, Islam-, sustain-, community, Slavic, rehab-, peace…
► professional schools, like: Medicine, Nursing, Social Work, Education, Public Health, Law, Business, Public Policy/Administration
► inherently interdisciplinary fields, like: space science, demography, gerontology, criminal justice, ethics
► sexual minorities or women, such as: “women,” “gender,” “feminist,” “sexuality”
► ethnic/racial minorities, such as: “African American,” “Chicano,” “Hispanic,” “American Indian,” “Asian American”
► area/region/period/religion studies, like: “Institute of Africana/African Studies,” “Department of Latin American Studies”
► an international or global orientation, like: “International Relations” or “Center for Global Studies”
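For illustration only, the flavor of these rules can be captured in a simple keyword matcher. The sketch below is ours, uses an abbreviated cue list, and is not the study's actual (manual) coding procedure.

import re

# Abbreviated cue lists drawn from the guidelines above; the full code book is longer.
IDR_CUES = [
    "interdisciplinary", "multidisciplinary", "trans-disciplinary", "integrative",
    "synthesis", "applied", "cross-disciplinary", "integration",
    "environmental", "atmospheric", "oceanic",
    "women", "gender", "feminist", "sexuality",
    "african american", "chicano", "hispanic", "american indian", "asian american",
    "global", "international",
]
STUDIES_STEMS = ["america", "biblic", "cultural", "islam", "sustain", "community",
                 "slavic", "rehab", "peace"]

def looks_interdisciplinary(unit_name: str) -> bool:
    """Rough, illustrative check of whether a unit name matches the coding cues."""
    name = unit_name.lower()
    if any(cue in name for cue in IDR_CUES):
        return True
    # "<stem> ... studies" pattern, e.g., "Latin American Studies"
    if "studies" in name and any(stem in name for stem in STUDIES_STEMS):
        return True
    # Two (or more) fields joined by "and"/"&" in a department or center name
    if re.search(r"(\band\b|&)", name) and ("department" in name or "center" in name):
        return True
    return False

print(looks_interdisciplinary("Department of Sociology & Criminology"))  # True
print(looks_interdisciplinary("Department of Physics"))                  # False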
Appendix B. Best-fitting Confirmatory Factor Analysis (CFA) model
Note: N = 156; SRMR = 0.03; RMSEA = 0.03; TLI = 0.96; CFI = 0.99.

Appendix C. Supplementary analyses of research impact

We obtain measures of research impact from two data sources. The first is the 2015 Academic Ranking of World Universities (http://www.shanghairanking.com), which rates universities on three relevant indicators: the number of Highly Cited Researchers affiliated with each university; the number of articles published in Nature and Science; and the number of affiliated faculty who have won a Nobel Prize or a Fields Medal (which we dichotomize due to rarity). Each indicator ranges from 0 to 100, with 100 being the best possible score. The second is the Center for Measuring University Performance, which provides university-level data on the number of faculty who are members of the National Academies of Science, and on the number of faculty who received a wider variety of awards in 2015 (including Fulbright, Guggenheim, Howard Hughes, and NSF CAREER awards; a full list can be obtained at https://mup.umass.edu/DataSourceLinks). We take the square root of this latter variable to alleviate skewness.

Panel A. Descriptive statistics

Variable                                               Mean     Std. dev.   Min    Max
# Highly Cited Researchers                             26.3     17.4        3.6    100
# Articles published in Nature & Science               23.3     16.7        3.3    100
Any faculty member won Nobel Prize / Fields Medal?     28%      0.45        0      1
# faculty who are members of NAS d                     27.22    53.9        0      371
# Faculty Awards d                                     11.8     12.4        0      84
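A minimal sketch of the two transformations described above (dichotomizing the rare Nobel/Fields indicator and square-root transforming the awards count), assuming a hypothetical data frame with illustrative column names:

import numpy as np
import pandas as pd

# Hypothetical university-level data; column names are illustrative, not the source files' names.
impact = pd.DataFrame({
    "nobel_fields_score": [0.0, 12.4, 0.0],   # ARWU award indicator (0-100)
    "faculty_awards": [4, 25, 81],            # count of prestigious faculty awards
})

# Dichotomize the rare Nobel/Fields indicator: any nonzero score counts as "yes".
impact["any_nobel_fields"] = (impact["nobel_fields_score"] > 0).astype(int)

# Square-root transform the awards count to alleviate skewness.
impact["faculty_awards_sqrt"] = np.sqrt(impact["faculty_awards"])

print(impact)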
Panel B. Effect of structural commitment to IDR on research impact in 2015 (OLS or logit coefficients and SEs)
Outcomes: (1) # Highly Cited Researchers; (2) # Articles published in Nature & Science; (3) Any faculty member won Nobel Prize or Fields Medal? e; (4) # faculty who are members of NAS d; (5) # Faculty Awards d.

                                          (1)                  (2)                  (3)                         (4)                      (5)
Key explanatory variable
  Structural Commitment to IDR a          −0.001 (0.004)       0.003 (0.003)        0.0226 (0.025)              4.10e-05 (0.0145)        0.00614 (0.0071)
Control variables
  Total current funds revenue c           1.50e-06* (0.000)    1.36e-06** (0.000)   −5.74e-06 (0.000)           1.04e-05** (2.56e-06)    9.07e-07 (1.26e-06)
  Total R&D spending c                    −0.002 (0.001)       −0.001 (0.001)       0.107** (0.041)             −0.0120** (0.004)        −0.0642 (0.002)
  Tenured & tenure-track faculty (#)      0.0006** (0.000)     0.0004** (0.000)     0.001 (0.0009)              0.0028** (0.0005)        0.0015** (0.0002)
  Medical School affiliated               −0.168 (0.111)       −0.103 (0.082)       −1.326+ (0.734)             −0.20 (0.39)             0.027 (0.19)
  Land-grant University                   0.005 (0.113)        0.052 (0.083)        −0.877 (0.761)              −0.31 (0.398)            −0.09 (0.195)
  AAU member                              0.410** (0.123)      0.467** (0.091)      2.279** (0.723)             2.488** (0.477)          1.102** (0.234)
  R1 university                           0.495** (0.147)      0.435** (0.108)      predicts failure perfectly  0.792+ (0.43)            0.491* (0.211)
  Urban University                        −0.158 (0.117)       −0.002 (0.086)       −0.506 (0.749)              −0.221 (0.389)           −0.244 (0.191)
  Public University                       −0.077 (0.138)       −0.191+ (0.102)      −1.279 (0.873)              −0.773+ (0.46)           −0.568* (0.226)
  West region                             0.468** (0.152)      0.481** (0.112)      1.662+ (0.889)              2.207** (0.512)          0.590* (0.251)
  Northeast region                        0.300+ (0.153)       0.158 (0.113)        0.233 (0.863)               1.485** (0.508)          0.447+ (0.249)
  Southern region                         0.044 (0.141)        −0.099 (0.104)       0.327 (0.853)               0.375 (0.45)             0.0122 (0.221)
  Technical Univ                          −0.092 (0.244)       0.001 (0.180)        −3.755 (2.695)              1.615+ (0.937)           0.849+ (0.46)
  Constant                                1.701** (0.239)      1.796** (0.176)      −3.266** (1.242)            −1.380* (0.672)          1.033** (0.329)
Observations                              116                  116                  156                         156                      156
R-squared                                 0.61                 0.75                 0.53 b                      0.745                    0.695

Note: Standard errors in parentheses.
** p < 0.01, * p < 0.05, + p < .10 (two-tailed tests unless otherwise noted).
a One-tailed test.
b Pseudo R-squared (via logistic regression).
c Per FTE student (cpi adj $).
d Transformed by taking the square root to alleviate skewness.
e This outcome is dichotomous, so we use logistic regression.
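A minimal sketch of the estimation approach used in Panel B (OLS for the continuous impact measures, logistic regression for the dichotomous Nobel/Fields outcome), assuming a hypothetical university-level data frame with illustrative variable names:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical university-level data frame; variable names are illustrative only.
df = pd.read_csv("university_panel.csv")

controls = ("revenue_per_fte + rd_per_fte + tt_faculty + med_school + land_grant + "
            "aau + r1 + urban + public + west + northeast + south + technical")

# OLS for a continuous impact measure (e.g., number of Highly Cited Researchers).
ols_fit = smf.ols(f"highly_cited ~ idr_commitment + {controls}", data=df).fit()

# Logistic regression for the dichotomous Nobel/Fields outcome.
logit_fit = smf.logit(f"any_nobel_fields ~ idr_commitment + {controls}", data=df).fit()

print(ols_fit.summary())
print(logit_fit.summary())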
References

Adams, J.D., Roger Clemmons, J., 2011. The role of search in university productivity: inside, outside, and interdisciplinary dimensions. Ind. Corp. Change 20 (1), 215–251. https://doi.org/10.1093/icc/dtq071.
Azoulay, P., Liu, C.C., Stuart, T.E., 2017. Social influence given (partially) deliberate matching: career imprints in the creation of academic entrepreneurs. Am. J. Sociol. 122 (4), 1223–1271.
Bache, K., Newman, D., Smyth, P., 2013. Text-based measures of document diversity. In: KDD '13 Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, Illinois, USA, pp. 23–31.
Barringer, S.N., 2016. The changing finances of public higher education organizations: diversity, change and discontinuity. Research in the Sociology of Organizations 46, 223–263.
Basner, J.E., Theisz, K.I., Jensen, U.S., Jones, Cid., Ponomarev, I., Sulima, P., Jo, K., Eljanne, M., Espey, M.G., Franca-Koh, J., Hanlon, S.E., Kuhn, N.Z., Nagahara, L.A., Schnell, J.D., Moore, N.M., 2013. Measuring the evolution and output of cross-disciplinary collaborations within the NCI Physical Sciences-Oncology Centers network. Res. Eval. 22 (5), 285–297.
Berman, E.P., 2012. Creating the Market University: How Academic Science Became an Economic Engine. Princeton University Press, Princeton.
Biancani, S., McFarland, D.A., Dahlander, L., 2014. The semiformal organization. Org. Sci. 25 (5), 1306–1324.
Biancani, S., Dahlander, L., McFarland, D.A., Smith, S., 2018. Superstars in the making? The broad effects of interdisciplinary centers. Res. Policy 47 (3), 543–577.
Bishop, P.R., Huck, S.W., Ownley, B.H., Richards, J.K., Skolits, G.J., 2014. Impacts of an interdisciplinary research center on participant publication and collaboration patterns: a case study of the National Institute for Mathematical and Biological Synthesis. Res. Eval. 23 (4), 327–340.
Boardman, C., Bozeman, B., 2007. Role strain in university research centers. J. Higher Educ. 78 (4), 430–463.
Boden, D., Borrego, M., 2011. Academic departments and related organizational barriers to interdisciplinary research. Higher Educ. Rev. 8, 41–64.
Bollen, K.A., 1989. Structural Equations with Latent Variables. Wiley, New York.
Borrego, M., Boden, D., Newswander, L.K., 2014. Sustained change: institutionalizing interdisciplinary graduate education. J. Higher Educ. 85 (6), 858–885.
Bozeman, B., Craig Boardman, P., 2013. Academic faculty in university research centers: neither capitalism's slaves nor teaching fugitives. J. Higher Educ. 84 (1), 88–120. https://doi.org/10.1353/jhe.2013.0003.
Brint, S., Proctor, K., Hanneman, R.A., Mulligan, K., Rotondi, M.B., Murphy, S.P., 2011. Who are the early adopters of new academic fields? Comparing four perspectives on the institutionalization of degree granting programs in US four-year colleges and universities, 1970–2005. Higher Educ. 61 (5), 563–585. https://doi.org/10.1007/s10734-010-9349-z.
Brint, S., 2005. Creating the future: ‘new directions’ in American research universities. Minerva 43 (1), 23–50. https://doi.org/10.1007/s11024-004-6620-4.
Brint, S., 2017. Cluster Hiring Initiatives at US Research Universities: An Analysis of Productivity and Variation in Outcomes. University of California-Riverside / National Science Foundation.
Brint, S.G., Turk-Bicakci, L., Proctor, K., Murphy, S.P., 2009. Expanding the social frame of knowledge: interdisciplinary, degree-granting fields in American colleges and universities, 1975–2000. Rev. Higher Educ. 32 (2), 155–183.
Caldera, A., Debande, O., 2010. Performance of Spanish universities in technology transfer: an empirical analysis. Res. Policy 39 (9), 1160–1173.
Carayol, N., Matt, M., 2004. Does research organization influence academic production? Laboratory level evidence from a large European university. Res. Policy 33 (8), 1081–1102.
Carr, G., Blanch, A.R., Blaschke, A.P., Brouwer, R., Bucher, C., Farnleitner, A.H., Fürnkranz-Prskawetz, A., Loucks, D.P., Morgenroth, E., Parajka, J., Pfeifer, N., Rechberger, H., Wagner, W., Zessner, M., Blöschl, G., 2017. Emerging outcomes from a cross-disciplinary doctoral programme on water resource systems. Water Policy 19 (3), 463–478.
Chandler, A.D., 1962. Strategy and Structure: Chapters in the History of the American Enterprise. The MIT Press, Cambridge, MA.
Clausen, T., Fagerberg, J., Gulbrandsen, M., 2012. Mobilizing for change: a study of research units in emerging scientific fields. Res. Policy 41 (7), 1249–1261. https://doi.org/10.1016/j.respol.2012.03.014.
Clemens, E.S., Powell, W.W., McIlwaine, K., Okamoto, D., 1995. Careers in print: books, journals, and scholarly reputation. Am. J. Sociol. 101 (2), 433–494.
Evans, E.D., 2016. Measuring interdisciplinarity using text. Socius 2, 1–18.
Fleming, L., 2001. Recombinant uncertainty in technological search. Manag. Sci. 47 (1), 117–132.
Foster, J.G., Rzhetsky, A., Evans, J.A., 2015. Tradition and innovation in scientists’ research strategies. Am. Sociol. Rev. 80 (5), 875–908. https://doi.org/10.1177/0003122415601618.
Gaughan, M., Bozeman, B., 2002. Using curriculum vitae to compare some impacts of NSF research grants with research center funding. Res. Eval. 11 (1), 17–26.
Geiger, R.L., 1990. Organized research units – their role in the development of university research. J. Higher Educ. 61 (1), 1–19.
Geiger, R.L., Sá, C., 2005. Beyond technology transfer: US state policies to harness university research for economic development. Minerva 43 (1), 1–21. https://doi.org/10.1007/s11024-004-6623-1.
Geiger, R.L., Sá, C., 2008. Universities and the two paths to innovation. In: Tapping the Riches of Science. Harvard University Press, pp. 54–71.
Gómez, I., Bordons, M., Fernández, M.T., Morillo, F., 2009. Structure and research performance of Spanish universities. Scientometrics 79 (1), 131–146.
Goodall, A.H., 2009. Highly cited leaders and the performance of research universities. Res. Policy 38 (7), 1079–1092.
Gumport, P.J., 2000. Academic restructuring: organizational change and institutional imperatives. Higher Educ. 39 (1), 67–91.
Gumport, P.J., Snydman, S.K., 2002. The formal organization of knowledge: an analysis of academic structure. J. Higher Educ. 73 (3), 375–408.
Hackett, E., 2000. Interdisciplinary research initiatives at the U.S. National Science Foundation. In: Weingart, P., Stehr, N. (Eds.), Practising Interdisciplinarity. University of Toronto Press, pp. 249–259.
Hackett, E., Rhoten, D., 2009. The Snowbird Charrette: integrative interdisciplinary collaboration in environmental research design. Minerva 47 (4), 407–440. https://doi.org/10.1007/s11024-009-9136-0.
Hall, K.L., Stokols, D., Stipelman, B.A., Vogel, A.L., Feng, A., Masimore, B., Morgan, G., Moser, R.P., Marcus, S.E., Berrigan, D., 2012. Assessing the value of team science: a study comparing center- and investigator-initiated grants. Am. J. Prev. Med. 42 (2), 157–163.
Hampton, S.E., Parker, J.N., 2011. Collaboration and productivity in scientific synthesis. Bioscience 61 (11), 900–910.
Hargadon, A.B., 2002. Brokering knowledge: linking learning and innovation. Res. Org. Behav. 24, 41–85.
Harris, M., Holley, K., 2016. Universities as anchor institutions: economic and social potential for urban development. pp. 393–439.
Harris, M.S., Holley, K.A., 2008. Constructing the interdisciplinary ivory tower: the planning of interdisciplinary spaces on university campuses. Plan. Higher Educ. 36 (3), 34–43.
Harris, M.S., 2010. Interdisciplinary strategy and collaboration: a case study of American research universities. J. Res. Adm. XLI (1), 22–34.
Hearn, J.C., 2006. Alternative revenue sources. In: Priest, D.M., St. John, E.P. (Eds.), Privatization and Public Universities. Indiana University Press, Bloomington, IN, pp. 87–108.
Hearn, J.C., Belasco, A.S., 2015. Commitment to the core: a longitudinal analysis of humanities degree production in four-year colleges. J. Higher Educ. 86 (3), 387–416. https://doi.org/10.1353/jhe.2015.0016.
Holley, K.A., 2009a. Interdisciplinary strategies as transformative change in higher education. Innov. Higher Educ. 34, 331–344.
Holley, K.A., 2009b. The challenge of an interdisciplinary curriculum: a cultural analysis of a doctoral-degree program in neuroscience. Higher Educ. 58 (2), 241–255. https://doi.org/10.1007/s10734-008-9193-6.
Holley, K.A., 2009c. Understanding Interdisciplinary Challenges and Opportunities in Higher Education, Vol. 35. Jossey-Bass, San Francisco, CA.
Hopkins, D.J., King, G., 2010. A method of automated nonparametric content analysis for social science. Am. J. Polit. Sci. 54 (1), 229–247. https://doi.org/10.1111/j.1540-5907.2009.00428.x.
Jacobs, J.A., Frickel, S., 2009. Interdisciplinarity: a critical assessment. Annu. Rev. Sociol. 35 (1), 43–65. https://doi.org/10.1146/annurev-soc-070308-115954.
Jacobs, J.A., 2013. In Defense of Disciplines: Interdisciplinarity and Specialization in the Research University. University of Chicago Press, Chicago, IL.
Jaquette, O., Curs, B.R., 2015. Creating the out-of-state university: do public universities increase nonresident freshman enrollment in response to declining state appropriations? Res. Higher Educ. 56 (6), 535–565. https://doi.org/10.1007/s11162-015-9362-2.
Kabo, F.W., Cotton-Nessler, N., Hwang, Y., Levenstein, M.C., Owen-Smith, J., 2014. Proximity effects on the dynamics and outcomes of scientific collaborations. Res. Policy 43 (9), 1469–1485.
Kraatz, M.S., Zajac, E.J., 2001. How organizational resources affect strategic change and performance in turbulent environments: theory and evidence. Org. Sci. 12 (5), 632–657.
Kraatz, M.S., Ventresca, M.J., Deng, L., 2010. Precarious values and mundane innovations: enrollment management in American liberal arts colleges. Acad. Manag. J. 53 (6), 1521–1545.
Laursen, K., Salter, A., 2004. Searching high and low: what types of firms use universities as a source of innovation? Res. Policy 33 (8), 1201–1215.
Leahey, E., Barringer, S., Ring, M., 2019. Universities’ structural commitment to interdisciplinary research. Scientometrics 118 (3), 891–919. https://doi.org/10.1007/s11192-018-2992-3.
Leahey, E., Beckman, C.M., Stanko, T.L., 2017. Prominent but less productive: the impact of interdisciplinarity on scientists’ research. Adm. Sci. Q. 62 (1), 105–139. https://doi.org/10.1177/0001839216665364.
Leahey, E., 2018. Science Policy Research Report: Infrastructure for Interdisciplinarity. National Science Foundation SciSIP Program, Award #1723536.
Lee, J.J., 2007. The shaping of the departmental culture: measuring the relative influences of the institution and discipline. J. Higher Educ. Policy Manag. 29 (1), 41–55.
Lee, S., Bozeman, B., 2005. The impact of research collaboration on scientific productivity. Soc. Stud. Sci. 35 (5), 673–702.
Lo, J.Y.-C., Kennedy, M.T., 2015. Approval in nanotechnology patents: micro and macro factors that affect reactions to category blending. Org. Sci. 26 (1), 119–139. https://doi.org/10.1287/orsc.2014.0933.
Lounsbury, M., 2001. Institutional sources of practice variation: staffing college and university recycling programs. Adm. Sci. Q. 46 (1), 29–56.
Louvel, S., 2016. Going interdisciplinary in French and US universities: organizational change and university policies. Research in the Sociology of Organizations 46, 329–359. https://doi.org/10.1108/s0733-558x20160000046011.
Mathies, C., Slaughter, S., 2013. University trustees as channels between academe and industry: toward an understanding of the executive science network. Res. Policy 42 (6–7), 1286–1300. https://doi.org/10.1016/j.respol.2013.03.003.
McClure, K.R., Barringer, S.N., Brown, J.T., 2020, forthcoming. Privatization as the “New Normal” in higher education: synthesizing literature and reinvigorating research through a multi-level framework. Higher Educ. Handb. Theory Res. 35.
Mitrany, M., Stokols, D., 2005. Gauging the transdisciplinary qualities and outcomes of doctoral training programs. J. Plan. Educ. Res. 24 (4), 437–449.
Moed, H.F., Luwel, M., Houben, J.A., Spruyt, E., Van Den Berghe, H., 1998. The effects of changes in the funding structure of the Flemish universities on their research capacity, productivity and impact during the 1980′s and early 1990′s. Scientometrics 43 (2), 231–255.
National Academies of Science, 2005. Facilitating Interdisciplinary Research. The National Academies Press, Washington, DC.
National Research Council, 2014. Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond. The National Academies Press, Washington, DC.
Newswander, L.K., Borrego, M., 2009. Engagement in two interdisciplinary graduate programs. Higher Educ. 58 (4), 551–562.
Nichols, L.G., 2014. A topic model approach to measuring interdisciplinarity at the National Science Foundation. Scientometrics 100 (3), 741–754.
NSF, 2017. National Science Foundation Centers. National Science Foundation.
Owen-Smith, J., 2003. From separate systems to hybrid order: accumulative advantage across public and private science at Research One universities. Res. Policy 32, 1081–1104.
Owen-Smith, J., 2018. Research Universities and the Public Good: Discovery for an Uncertain Future. Stanford University Press, Stanford, CA.
Pfeffer, J., Salancik, G.R., 2003. The External Control of Organizations: A Resource Dependence Perspective. Stanford University Press, Stanford.
Rhoten, D., Parker, A., 2004. Risks and rewards of an interdisciplinary path. Science 306 (5704), 2046.
Rogers, J.D., Youtie, J., Kay, L., 2012. Program-level assessment of research centers: contribution of Nanoscale Science and Engineering Centers to US Nanotechnology National Initiative goals. Res. Eval. 21 (5), 368–380.
Sá, C.M., 2008. ‘Interdisciplinary strategies’ in US research universities. Higher Educ. 55 (5), 537–552.
Sabharwal, M., Hu, Q., 2013. Participation in university-based research centers: is it helping or hurting researchers? Res. Policy 42 (6–7), 1301–1311. https://doi.org/10.1016/j.respol.2013.03.005.
Schilling, M.A., Green, E., 2011. Recombinant search and breakthrough idea generation: an analysis of high impact papers in the social sciences. Res. Policy 40, 1321–1331.
Scott, W.R., Davis, G.F., 2007. Organizations and Organizing: Rational, Natural, and Open System Perspectives. Pearson Prentice Hall, Upper Saddle River, NJ.
Singh, J., Fleming, L., 2010. Lone inventors as sources of breakthroughs: myth or reality? Manag. Sci. 56 (1), 41–56.
Slaughter, S., Thomas, S.L., Johnson, D.R., Barringer, S.N., 2014. Institutional conflict of interest: the role of interlocking directorates in the scientific relationships between universities and the corporate sector. J. Higher Educ. 85 (1), 1–35. https://doi.org/10.1353/jhe.2014.0000.
Smith, A.M., Lai, S.Y., Bea-Taylor, J., Hill, R.B.M., Kleinhenz, N., 2016. Collaboration and change in the research networks of five Energy Frontier Research Centers. Res. Eval. 25 (4), 472–485.
Stensaker, B., Lee, J., Rhoades, G., Ghosh, S., Castiello, S., Vance, H., Calikoglu, A., Kramer, V., Lu, S., O'Toole, L., Pavluytkin, I., Peel, C., Sayed, M., 2019. Stratified university strategies: the shaping of institutional legitimacy in a global perspective. J. Higher Educ. 90 (4), 539–562.
Tolbert, P.S., 1985. Institutional environments and resource dependence: sources of administrative structure in institutions of higher education. Adm. Sci. Q. 30 (1), 1–13.
Turner, V.K., Benessaiah, K., Warren, S., Iwaniec, D., 2015. Essential tensions in interdisciplinary scholarship: navigating challenges in affect, epistemologies, and structure in environment–society research centers. Higher Educ. 70, 649–665.
Uzzi, B., Mukherjee, S., Stringer, M., Jones, B., 2013. Atypical combinations and scientific impact. Science 342 (6157), 468–472. https://doi.org/10.1126/science.1240474.
Volk, C.S., Slaughter, S., Thomas, S.L., 2001. Models of institutional resource allocation: mission, market, and gender. J. Higher Educ. 72 (4), 387–413.
Wagner, C.S., David Roessner, J., Bobb, K., Klein, J.T., Boyack, K.W., Keyton, J., Rafols, I., Börner, K., 2011. Approaches to understanding and measuring interdisciplinary scientific research (IDR): a review of the literature. J. Informetr. 5 (1), 14–26.
Weitzman, M.L., 1998. Recombinant growth. Q. J. Econ. 113 (2), 331–360. https://doi.org/10.1162/003355398555595.
Wu, L., Wang, D., Evans, J.A., 2019. Large teams develop and small teams disrupt science and technology. Nature 566, 378–382.
Yang, C.H., Heo, J., 2014. Network analysis to evaluate cross-disciplinary research collaborations: the Human Sensing Research Center, Korea. Sci. Public Policy 41 (6), 734–749.