Error and the growth of experimental knowledge

1919 - one feels deflated at not knowing Newsholme’s personal feelings towards a contemporary who privately savaged him as ‘weak, vacillating, incompetent, untrustworthy and vain’ (p. 335). The second problem lies with Eyler’s engagement, or lack of it, with Newsholme’s life beyond his professional career, stating that, ‘There is no reason to think that Newsholme’s personal life would be particularly interesting to others’ (p. xii). But as Eyler also notes of his strict Wesleyan childhood, ‘Newsholme bore the marks of this upbringing throughout his life’ (p. 5). Perhaps never was this more apparent than 80 years later, in Newsholme’s active transatlantic retirement. Eyler deals fulsomely with the public aspects of this period and suggests that his apparently prudish writings of the time - concerning sexuality, marriage and alcohol, for example - should in fact be understood in the context of his liberal convictions, which emphasized society’s obligation to individuals (p. 394). Arguably, more of the private Newsholme is required to grasp completely the root of these seemingly contradictory beliefs. Despite these reservations, the final verdict on this book must undoubtedly be positive. Perhaps best described as a worthy and careful contribution rather than an inspirational one, the book uses Newsholme’s published works to fill an important gap in the biographic pantheon of public health luminaries of the nineteenth and early twentieth centuries. It is to be recommended for its adroit balance of the local and the national, of the detail and the grand overview, and of the individual person with the administrative machine.

G. Mooney

Error and the Growth of Experimental Knowledge by Deborah G. Mayo

University of Chicago Press, 1996. $74.00 hbk; $29.95 pbk (493 pages). ISBN 0 226 51197 9; 0 226 51198 7

At present, statistical practice in science is at odds with the dominant philosophy of experiment. On the one hand, the cornerstone of the current philosophy of experiment is one or another Bayesian method. All of these methods presuppose that one can use prior probability assignments to hypotheses, generally interpreted as an agent’s subjective degrees of belief. For Bayesians, a probability represents the degree of subjective confidence (usually varying with the evidence) in a proposition. On the other hand, statistical practice is based on classical and Neyman-Pearson (NP) statistics (e.g. statistical significance tests, confidence-interval methods) that eschew the use of prior probabilities when these cannot be based on actual frequencies. Deborah Mayo’s book attempts to show that her reinterpretation of NP statistics is a viable alternative to Bayesian approaches.
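
To make the contrast concrete, here is a minimal sketch (not from the book, and with all numbers invented for illustration): the same hypothetical coin-flip data analysed first with an NP significance test, which appeals only to sampling error probabilities, and then with a Bayesian posterior, which cannot get started without a prior.

```python
# Illustrative contrast, not from Mayo's book: hypothetical data,
# 61 heads in 100 flips, testing whether the coin favours heads.
from scipy.stats import norm, beta

n, k = 100, 61            # invented data
p0 = 0.5                  # null hypothesis H0: p = 0.5 (fair coin)

# --- NP route: no prior; only the type I error rate alpha is fixed ---
p_hat = k / n
se = (p0 * (1 - p0) / n) ** 0.5        # standard error under H0
z = (p_hat - p0) / se
p_value = norm.sf(z)                   # P(Z >= z | H0), one-sided
alpha = 0.05
print(f"NP test: z = {z:.2f}, p-value = {p_value:.4f}, "
      f"reject H0 at alpha={alpha}: {p_value < alpha}")

# --- Bayesian route: a prior degree of belief is indispensable ---
a0, b0 = 1, 1                          # flat Beta(1,1) prior on p
posterior = beta(a0 + k, b0 + n - k)   # conjugate Beta posterior
print(f"Bayes: P(p > 0.5 | data) = {posterior.sf(p0):.4f}")
```
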

She argues, for example, that defects of one (behavioral) model of NP tests have erroneously been taken as defects in NP methods themselves. When one uses her model of NP tests, as Mayo says Egon Pearson intended, then ‘accept H’ does not mean ‘take action A rather than B’ (as Neyman saw it) but rather ‘infer that a specific error is ruled out by the data’. If one accepts Mayo’s account of NP methods and interprets Peircean induction as severe testing, then it is possible to reconcile the philosophy of experiment with statistical practice in science.

Mayo’s approach is premised on what she calls the ‘error-statistical account’: learning piecemeal from our mistakes and focusing on a statistical procedure’s error probabilities in order to scrutinize objectively the inferences based on test results. According to her account, methodological rules for experimental learning are strategies that enable learning from common types of experimental mistakes. The rules systematize the day-to-day learning from mistakes. From the history of mistakes made in reaching a type of inference, one can develop a repertoire of errors and methodological rules (techniques for circumventing and uncovering errors). Some of the rules refer to before-trial experimental planning, others to after-trial analysis of the data. Although her approach is similar to that of Karl Popper, Mayo shows that a hypothesis that Popper would count as ‘best tested’ is not necessarily ‘well tested’ for her.

Apart from its theoretical contributions to new interpretations of Kuhn and Popper, as well as solutions to several important problems in philosophy of science (underdetermination, the nature of scientific progress, induction, objectivity, and the role of novel evidence), Mayo’s book has significant practical merits. All researchers in the physical, biological, medical and social sciences can profit from Mayo’s analyses of modelling patterns of irregularities that are useful for discovering errors. She discusses both philosophy of statistics and statistical methodology in an insightful way that is both accessible to laypeople and thought-provoking for experts. It is a major contribution to both science and philosophy.

Kristin S. Shrader-Frechette
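
As an editorial aside: the severity idea discussed above is often formalized, in Mayo’s later work rather than in this book’s own notation, roughly as follows - a claim passes a severe test when, were the claim false, the data would very probably have been less in accord with it. The sketch below illustrates that reading for a normal mean with known standard deviation; all numbers are hypothetical.

```python
# A sketch, under the assumption stated above, of a common formalization
# of severity: after observing a sample mean, ask how probable a *less*
# impressive result would have been at the boundary where "mu > mu1" fails.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity(x_bar: float, mu1: float, sigma: float, n: int) -> float:
    """Severity of the inference 'mu > mu1' given observed mean x_bar,
    for n i.i.d. N(mu, sigma^2) observations with sigma known:
    P(sample mean <= x_bar) evaluated at mu = mu1."""
    se = sigma / sqrt(n)
    return normal_cdf((x_bar - mu1) / se)

# Hypothetical numbers: n = 25 measurements, sigma = 2, observed mean 0.9.
for mu1 in (0.0, 0.5, 0.8):
    print(f"SEV(mu > {mu1}) = {severity(0.9, mu1, 2.0, 25):.3f}")
# The same data severely pass 'mu > 0' (about 0.99) but only weakly pass
# 'mu > 0.8' (about 0.60): error probabilities, not priors, do the work.
```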

Beyond Calculation: The Next Fifty Years of Computing by P.J. Denning and R.M. Metcalfe

Springer-Verlag, 1997. DM 44.00 hbk (xviii + 313 pages). ISBN 0 387 94932 1

Predictions about the future development of computers and the role that they might play in our society have usually been hopelessly wrong. Time and again we have been caught out by the astonishing impact that computers have had and are still having. It all really started in the 1940s, when the first antediluvian behemoths such as the Colossus machine began to be constructed. These were truly enormous contraptions that could easily contain 500 miles of wiring and take up a large room. Around this time, American mainframe manufacturers were predicting that the entire United States would be able to make use of about five computers in all, and that worldwide the demand would not exceed a dozen. Such predictions now make us smile and are consigned to the trashcan of history. But are much more modern predictions any better?

As we all know, computers were beginning to display some of their remarkable abilities in the 1940s and 1950s. For instance, it is well known that early computers were used for calculating the trajectories of ballistic missiles, something that was exceedingly difficult to do otherwise. There is also the saga of the breaking of the wartime secret code of the Germans by the British computer at Bletchley Park in 1943 under the guidance of Alan Turing. So perhaps it is not so surprising that by the mid-1950s the computer manufacturers IBM and Univac had reached the conclusion that computers might just have a role to play in the running of big business. But even as late as 1977 it was predicted by Ken Olsen, the chief executive officer of Digital Equipment Corporation, that there would never be a market for home computers. It is thus little wonder that Bob Lucky, vice president of Bellcore, asked the rhetorical question in 1995: if we couldn’t predict the Web, what good are we? - an interesting question, and one that is explored in this book.

This work arose out of the golden jubilee celebrations of the United States-based Association for Computing Machinery (ACM). The overall theme was ‘The next fifty years of computing’, and the underlying intention was to explore likely developments and also to engage in the clearly very risky business of making predictions on the future role of the computer. In view of the past dismal record in this general area, the ACM wisely decided that it would ‘not so much make predictions as develop possibilities, raise issues, and enumerate some of the choices we will face about how information technology will affect us in the future’, and that in doing so ‘we ought to be humbled by our past incapacity to make predictions that were not only believable but actually came true’.

How well did they succeed? The work they have produced contains 20 chapters, all but two of which have been written by a single author. All of the authors are well known within the computing fraternity, and a fair number of them are members of the ACM. Their contributions are divided into three major sections that deal with (1) the coming revolution, (2) computers and human identity, and (3) business and innovation. In the first section it is pointed out, for instance,
