University Rankings Need Improvement
Ninghua Zhong, School of Economics and Management, Tongji University, China
[email protected] doi:10.1016/j.sheji.2017.03.003
In the paper “Can College Rankings Be Believed?,” Meredith Davis summarizes three aspects of college ranking systems—their history, the criteria and methods they use, and their impact. The author concludes that university rankings are never objective, especially holistic rankings, which only mislead the public and do nothing to improve the quality of education or of faculty. Moreover, Davis notes that rankings, on the whole, discourage the diversity of institutional missions.
There is no doubt that college rankings face tremendous challenges in theory and practice. However, as the author says, “While there is ongoing debate as to whether refinements in ranking strategies have resolved any of the issues related to comparing institutional quality and academic offerings, the consensus is that rankings are here to stay.” One major reason is that even though they are far from perfect, rankings satisfy the public’s need for information. It is almost impossible for parents and students to evaluate universities by themselves. Given the demands of a global student population, college rankings have increased in popularity despite some serious flaws.
Today, more and more students are going abroad in search of richer learning experiences. Thousands of students from developing countries go to the European Union or the United States every year to pursue their graduate degrees, while students from developed countries study at universities in emerging countries to learn about their cultures and peoples. Such students find it extremely difficult to gather reliable information about
universities on other continents. So the demand for a simple and straightforward ranking of global universities is massive. Although I agree with most of the author’s arguments—rankings, on the whole, are not reliable—I think that rather than dispensing with them altogether, it is more important to focus on improving the current ranking systems. There have been positive changes in recent years that deserve our—and Davis’s—attention.
First, as Davis mentions, holistic institutional rankings are particularly biased. I completely agree, and would argue that rankings should be established only for individual disciplines. Certain ranking providers already offer discipline-based measures. The author could explore and discuss the positive changes that such measures have brought about, as well as how to improve them further.
Second, although universities may offer programs in similar disciplines, the specializations of their faculties might range from the more practical to the more theoretical. For example, some institutions aim to guide students’ scientific analyses of economic policies, while others want to contribute more to an academic exploration of economic principles. One idea would be to divide universities into different categories according to their missions and have the rankings reflect to what extent each university has realized its purpose—or even whether the university has the capability to undertake its stated mission in the first place. Perhaps the author could put forward some professional suggestions regarding how to improve the rankings for design departments across the world.
Third, is oversight of the rankings possible? More and more global ranking providers are emerging. Can they themselves be ranked, and thus motivated to do a better job? Such supervision is especially important for disciplines that are subjective, like design. One initiative does exist. In 2004, the UNESCO European Centre for Higher Education, together with the Institute for Higher Education Policy in Washington, DC, co-sponsored the International Ranking Expert Group, a self-regulating body that proposed the Berlin Principles on Ranking of Higher Education Institutions. Although the group’s influence on the quality of university rankings is still very limited, it has laid some solid groundwork for the development of reliable global rankings of higher education institutions. In the future, these independent bodies could do more. For example, they could collect all the rankings of design institutions, send them to experts in the design field worldwide, and ask those experts whether particular rankings are largely consistent with their experience. Feedback could be
summarized and announced publicly. Such a process, though subjective, may help correct the biases generated by rankings that rely purely on database numbers. It is also worth mentioning that several rankings have been abandoned because of low credibility, indicating that some oversight already exists. That oversight needs strengthening.
Meredith Davis, “Can College Rankings Be Believed?,” She Ji: The Journal of Design, Economics, and Innovation 2, no. 3 (Autumn 2016).
For example, see https://www.timeshighereducation.com/world-university-rankings/by-subject and https://www.topuniversities.com/subject-rankings.
For more information, see the Institute for Higher Education Policy, accessed February, http://www.ihep.org/research/publications/berlin-principles-ranking-higher-education-institutions.
College Rankings: Can’t Love ’Em, Can’t Leave ’Em
Carma Gorman, Department of Art and Art History, The University of Texas at Austin, USA
[email protected] doi:10.1016/j.sheji.2017.03.004
In this issue of She Ji, Meredith Davis provides a brief history of college rankings and summarizes a number of damning scholarly critiques of their indicators, data, and weighting formulas. She concludes that the best-known ranking systems rest on shaky foundations. Davis concurs with those who argue that college rankings serve prospective students and their families poorly. If they rank anything accurately, it is faculty research productivity in STEM fields. But as Davis makes clear, faculty research productivity, “reputation,” and the other measures on which college rankings rely do not measure “educational quality.” In fact, many ranking organizations don’t even verify the accuracy of their data. Despite the failings of the best-known college ranking systems, Davis concurs with previous scholars that they are “here to stay,” precisely because they help government funding agencies and corporations predict which institutions are likely to yield the best return on research investments. Though she does not
state it in such blunt terms, the grim implication of Davis’s argument is that media executives, educators, legislators, bureaucrats, and industry leaders care more about ROI than about how well colleges educate students. If they did care about “educational quality,” they would rank colleges using different and more accurate data. Most important, they would stop using “reputation” as a central indicator.
Davis is not the first to make these arguments. She is, however, the first writer I have seen who uses the flaws of US News and World Report’s (USNWR’s) rankings of Master of Fine Arts (MFA) programs in design to demonstrate the unreliable nature of all reputation-based college rankings. Near the end of her essay, Davis observes that in the most recent USNWR ranking of MFA programs in design, “Art Center College of Design (California) earned a rating among the top ten master’s programs in graphic design. At the time, the institution had no graduate program in the discipline.” She follows with the observation that “A previous [USNWR] ranking included Rhode Island School of Design at the top of Digital Design programs before the institution had established a major in the field, and when digital work in the Graphic Design program was relatively new to the institution.” She clinches her point by noting that past editions of USNWR gave high ranks to the nonexistent school of architecture at Hong Kong Polytechnic and the nonexistent law school at Princeton. Davis notes that all four of these bloopers result from what Nobel laureate Daniel Kahneman and other psychologists call the “halo effect.” When asked to name the best programs in a particular field off the top of their heads, many people who lack reliable data or personal knowledge of programs take a mental shortcut. They list colleges that they believe have excellent programs in related fields, or they list colleges that they believe to be excellent all around. The high rank of nonexistent programs in reputational polls shows that the halo effect skews reputational rankings. More troubling, it shows how little the “experts” know about the quality of the programs they are ranking. As Davis puts it, “The more ranking systems rely on reputational data…that is, on top-of-the-mind impressions by peers and employers rather than on hard data, the higher the likelihood of biased and/or inaccurate results.”
Davis makes a second excellent point about the inadequacies of the USNWR rankings of MFA programs in design. USNWR ranks only MFA programs. The MFA rankings list, she points out, “does not include schools that award degrees titled Master of Design,