Seminars in Fetal & Neonatal Medicine (2005) 10, 23–30

www.elsevierhealth.com/journals/siny

Lessons from the past

Alex F. Robertson a,*, Jeffrey P. Baker b

a Department of Pediatrics, Brody School of Medicine, East Carolina University, 600 Moye Boulevard, Greenville, NC 27858-4354, USA
b Department of Pediatrics, Center for the Study of Medical Ethics and Humanities, Duke University Medical Center, Durham, NC, USA

KEYWORDS: History; Neonatology; Newborn care

Summary This article considers errors of care in neonatology. In the 19th century, errors that resulted in high infant mortality were shaped by the social environment, and in this setting the development of the incubator failed. In the early 20th century, with the emergence of the modern hospital as a technological, science-driven system, physicians had more control of patients' environments, and medical errors arising from systematic care could therefore affect larger numbers of patients. Later in the 20th century, the development of randomized controlled trials and systematic reviews began to improve care and to decrease the risks associated with new treatment methods. Large variations in practice still exist between individual physicians and between institutions. Considering these variations as risks has led to the use of institutional databases, benchmarking and clinical care guidelines. The efficacy and safety of these methods are unproven. Risks will never disappear from medicine. The question of what risks are 'acceptable' is, in general, unanswerable. © 2004 Elsevier Ltd. All rights reserved.

Introduction

There have been medical errors for as long as there have been physicians. The more interesting historical theme is how physicians have tried to minimize the chance of violating the Hippocratic 'do no harm' dictum. In many cases, particularly before medical education standards rose in the late 1800s, inadequate training or poor judgment was blamed. However, intelligent, caring, and skilful physicians have also demonstrated the capacity to make mistakes.

* Corresponding author. Tel.: +1 252 753 4513; fax: +1 252 744 3806. E-mail address: [email protected] (A.F. Robertson).

When discussing the care of newborn and young infants over the past 200 years, the history of medical errors can be divided into three periods. The first period, covering most of the 19th century, was characterized by errors reflecting the social environment. During that time physicians cared for infants in homes or in hospitals that were fundamentally custodial rather than medical institutions as we know them today. Scientifically based interventions, such as the incubator, failed due to extraneous factors. Such setbacks led to a second period in which doctors extensively re-conceptualized the hospital as a technological system: a world driven by technology and systematic protocols.


During this period physicians became much more confident that they could exclude the outside world and focus on caring for the patient as dictated by science rather than social necessity. Yet this period, which dominated the first half of the 20th century, gave rise to some of the most tragic medical errors of all time. Such errors included the use of inappropriately low incubator temperatures, high oxygen concentrations, thymic irradiation and the use of harmful antibacterial agents. Systematic protocols reduced errors from idiosyncratic judgment but created a context in which errors in medical practice could be propagated on an unprecedented scale. The iatrogenic disasters of this period gave rise to a third period, which emerged after the Second World War. This period was characterized by the use of randomized controlled trials to assess the costs and benefits of new therapies.

Mortality and the incubator

Neonatology arose from social and political priorities late in the 19th century. These priorities were related to the high infant mortality rate in European countries, which had become obvious in the 1800s. To understand the changes in newborn care that resulted in increased infant mortality, we must look back to events beginning with the Industrial Revolution in England, France and the United States. Infant mortality was influenced by the migration of people to larger industrial centers and the employment of mothers in factories. In the mid-1700s, an agricultural economy was predominant in England and the majority of the population was involved in subsistence farming. Between 1760 and 1830, the British Parliament passed the 'Enclosure Acts', which redistributed land into larger areas under the control of wealthier landowners. These Acts, and the development of new agricultural techniques, resulted in a marked increase in food production and drove the disenfranchised farmers into the cities.1 At the same time, technological developments in cotton cloth and steel production moved industry out of the family setting and into factories.2 The increased city populations provided cheap labor for industry but led to abysmal living conditions for lower-class workers. Infant mortality was affected by infection (related to unsanitary living conditions) and poor nutrition. Destitution and the high rate of female employment in the factories meant that many families could not care for their children; infanticide was common.

The 'Old Poor Law', enacted in 1601, required each parish to bring up unprotected children;3 as a result, children were left in the streets of the London parishes or handed over to the parish officers, who were paid a small 'settlement' to make arrangements for their care. The provisions of the Old Poor Law had been intended to reduce the vagrancy and disease that were so common in the surviving children. Three methods of infant care were used: 'baby farming', workhouses and foundling hospitals. In 'baby farming' the parish infants were assigned to a parish nurse, usually outside the city, for care. The care was either breast feeding or artificial feeding. Because there was payment for this care, and no effective regulation of the process, many nurses took in multiple infants and the care was often negligent. Workhouses were built in cities to house unemployed people and give them work, usually within the house. Mothers and their infants, as well as other children, were housed there, and the mortality rate of the infants was close to 100%. A vivid description of the plight of these infants and children is presented in the book London life in the 18th century.4 Another method of newborn care was provided by foundling hospitals, which took in infants left in the street or at the portal of the hospital. Established in 1741, the London Foundling Hospital transferred most infants to wet nurses in the countryside soon after admission and then brought them back to the hospital at 3–5 years of age. The mortality rate of these children reached 81% in the period 1758–1760. A thorough description of all aspects of a foundling hospital is presented in the book Coram's children: the London Foundling Hospital in the eighteenth century.5 The high mortality of infants in all these care situations was largely related to mothers not being available to breastfeed their own infants. In the mid-1800s about 30% of the total female population of England was employed, and these women were almost exclusively of the poorer classes.6 In pre-industrial Europe, wet nursing was most common among wealthy and noble families. With the development of large city populations and foundling hospitals, the supply of wet nurses was insufficient. This shortage led to the use of animals as wet nurses. For example, the foundling hospital in Aix, France, incorporated the use of goats for nursing. As described by Drake7: '…the cribs are arranged in a large room in two ranks. Each goat which comes to feed enters bleating and goes to hunt the infant which has been given to it, pushes back the covering with its horns and straddles the crib to give suck to the infant.'

The use of animals or artificial feedings was also necessary to prevent wet nurses from acquiring syphilis from infected infants. For these reasons, the use of artificial feeding became more common. The materials used for such feeding were paps and panadas, made chiefly with flour or bread cooked in water, milk or butter.8 In the Dublin Foundling Hospital, the mortality rate was 99% using such artificial feeding.9 The artificial feeding of infants progressed very little through the 1800s, and it was not until the 1890s, when studies of the chemical composition of milk were undertaken by Biedert in Germany and Meigs in the US, that feeds similar to modern-day formulas were developed.10,11 Another major development for successful artificial feeding was the acceptance of the need to provide clean milk by pasteurization or boiling.12 The motivation for change was both social and political. During the 18th century, humanitarianism was on the rise and there were active proponents for improving infant and child care. Among these were Jonas Hanway, a governor of the London Foundling Hospital,4 and Roussel, the French physician and legislator. Roussel introduced a bill for governmental oversight of paid wet nurses, who he felt were responsible for most of the ~20% mortality rate of infants under 1 year of age. Amazingly, wet nurses cared for more than half of all newborns in Paris at that time.13 In the early 19th century, Napoleon Bonaparte reorganized and centralized many French institutions, including medical schools, midwifery schools, maternity hospitals and foundling homes.14 His motivation was to provide manpower for the expanding French empire, especially in the military. A later motivation was the declining birth rate in France, which was recognized by the mid-19th century. This declining birth rate, along with infant mortality, was blamed for France's devastating and unexpected loss to the newly created German Empire in the Franco-Prussian War of 1870–1871, given the concern about Germany's ability to field a larger army. Reformers frequently spoke of motherhood as a patriotic duty and centered their efforts on exhorting, educating, and supporting women in their roles as child-nurturers. This maternal focus, particularly distinct to France in the late 1800s, encouraged obstetricians as well as pediatricians to take an interest in infant mortality. Obstetricians directed their efforts to the problems of premature infants, whose contribution to the overall infant mortality rate was second only to that of gastroenteritis.

At the Maternité, the largest maternity hospital in Paris, this mortality was all too evident every winter, when the great majority of infants born at 7–8 months' gestation died from hypothermia and other causes on the hospital's cold wards. Stéphane Tarnier, the obstetrician in charge, introduced incubators for their care in 1880. Although incubators of one fashion or another had been in use since 1835, the Tarnier incubator was the first to be enclosed, and it began being used in all Parisian maternity hospitals. This simple device involved little more than a warm box heated by a water reservoir and ventilated by convection. It was initially a success, and was associated with almost a 50% reduction in the mortality of infants between 1200 and 2000 g. In 1893 Madame Henry, the Maternité's chief midwife, opened a far more ambitious institution to rescue premature infants, the Service des Débiles ('weaklings'), as premature babies were popularly known. Unlike Tarnier's earlier efforts, Henry's premature infant nursery was directed at babies born at home. Wet nurses provided breast milk; from the standpoint of reducing infant mortality this made sense, because only a fraction of babies in Paris were born in hospital in the 1890s. But the Service des Débiles was a disaster; the chaotic world of the Parisian poor overwhelmed the hospital. Infants typically arrived hypothermic, malnourished, and often infected. Moreover, their parents brought them only as a last resort, after all efforts in the home had failed. To Pierre Budin, Tarnier's successor, the reasons were clear: he felt that parents used the service not so much as a hospital but as a 'mortuary depot', a place to take an infant on the verge of death. Doctors thought they had created a hospital service for newborns; parents viewed the service as a reincarnation of the foundling home. Budin gave up on the service and later restricted his efforts to caring for relatively large (2000–2500 g) infants born in hospital. Contemporary American pediatricians who introduced incubators onto the wards of children's hospitals fared no better. For example, in 1899 the New York City pediatrician Henry Chapin reported that only two of the 73 infants he had attempted to treat with the incubator survived. Echoing the French experience, he noted that most had arrived in a 'moribund condition' following prolonged exposure in the tenements. Unlike the French, however, Chapin and many American pediatricians blamed the incubator itself.

Accustomed to associating fresh air with good health, and stagnant, enclosed spaces (such as the tenements) with illness, they could not bring themselves to consign a malnourished infant to what looked like a box. The Tarnier incubator violated the common pediatric 'wisdom' of the time and dropped out of favor during the early 1900s; it was revived some years later.15–17 In retrospect, we can see that the incubator was prematurely dismissed as an ineffective technology. It was blamed for mortality that in fact reflected factors beyond doctors' control, namely the reluctance of parents to bring their premature infants to the hospital during the critical early window when intervention might still be of use. The pediatric hospital was all too vulnerable to outside influences. But less obvious is the extent to which pediatricians' perception of the incubator was also shaped by the outside world. The early Tarnier incubators, often made of wood and relying solely upon convection for air movement, appeared to provide exactly the same kind of environment that seemed to predispose young children to illness in the crowded city. Far from transcending the noxious influences of the outside world, the incubator seemed to reflect and even concentrate them.

Systematic technology and systematic errors

In 1889, Alexandre Lion of Nice, France, developed another type of incubator. Featuring a thermostat and a forced-air ventilation system, this incubator was designed to be much more independent of the quality of its environment, whether measured by risk of infection or nursing skill. The American obstetric leader Joseph DeLee called it a 'scientific apparatus' as opposed to a mere 'warm box'. Lion incubators represented the cornerstone of a new approach to premature infant care. This approach was grounded not in simple domestic technology that could be constructed in the tool shed but in complex technological systems. The American physician DeLee was instrumental in refining this idea. In the early 1900s, at the Chicago Lying-in Hospital, DeLee set up the first American 'incubator station', featuring four Lion incubators, trained nurses using standardized protocols, and even a transport system with a portable incubator. Incubator stations were tremendously expensive at that time, and DeLee had to close his down within 10 years. The new technology nonetheless held great fascination for the public.

The chief use of the Lion incubator in the US was its display, complete with live infants, in the elaborate 'incubator baby shows' that became the rage of world fairs and expositions in the early 1900s. Inspired by these exhibitions, and aided by a prominent Jewish philanthropist, the pediatrician Julius Hess created the first permanent American incubator station at Michael Reese Hospital in Chicago in 1922.15–17 DeLee and Hess's reliance upon systematic technology distinguished their approach to premature infant care. Premature infants were not managed by individual clinical judgment but by explicit and rational guidelines provided for highly trained nurses. Machines, charts, and quantitative data guided their management. Such a strategy doubtless improved overall care and reduced medical errors from carelessness or individual misjudgment. It provided an environment that seemed to embody the values of science and objectivity. Yet mistaken assumptions could still work their way into the relatively aseptic world of the 20th-century hospital nursery. This became especially apparent with respect to two iatrogenic misadventures involving the incubator: the use of inappropriately low incubator temperatures and the routine use of high oxygen concentrations. In 1938, Chapple reported a new incubator design that was built to circulate outdoor air and maintain a positive pressure to prevent air entering the incubator from the nursery. Balloon sleeves were used for access to the baby.18 After the Second World War, the Air-Shields Co., using the Chapple design, began manufacturing the Isolette. This improvement in incubator design, and the increasing involvement of pediatricians in the newborn nursery, promoted the development of neonatal intensive care units. Pediatricians began to exert increasing control over the infants' environment and physiology, and this control inevitably led to risks for the infants. From about 1900 to 1964, incubator temperatures were held at approximately 25 °C. Originally, this was considered safer than higher temperatures because early incubators were easily overheated. In 1933, a report from Harvard Medical School suggested that this temperature was best for keeping the infant's core temperature stable and that the resulting low body temperature was, in fact, a characteristic of prematurity that should be preserved.19 Seventeen years later, other studies revealed a lower mortality for premature infants at a higher ambient temperature.20 Beginning in 1900, oxygen was used in premature infants for cyanosis and later for severe apneic episodes. A report in 1942 showed that supplemental oxygen (70%) decreased periodic breathing in premature infants and corrected abnormal gas diffusion in the lung.

As Silverman points out in his book, Retrolental fibroplasia: a modern parable, the oxygen flow required to reach higher oxygen concentrations in the Isolette was excessive, and therefore an air–oxygen intake assembly was designed to limit the flow of air and increase the oxygen concentration in the incubator.21 This air-flow change was the beginning of an epidemic of retrolental fibroplasia. In 1954, the results of a randomized controlled trial showed clearly the relationship of high oxygen concentrations to retrolental fibroplasia. Another systematic error related to technological advances was the early 20th-century practice of irradiating the thymus gland for the condition called status thymicolymphaticus. This medical misadventure resulted from an unlikely procession of errors related to developments in anesthesia and radiology, as well as a mistaken notion about the causes of sudden infant death. From about 1910 to 1950, thousands of infants received thymic radiation for status thymicolymphaticus. The first indication that this treatment might lead to cancer was an article in 195022 describing cases of thyroid cancer in children. Jacobs et al.23 describe the historical perspective of thymic radiation: in 1889, Paltauf described the association of an enlarged thymus with sudden infant death. He proposed that the enlarged thymus was a sign of generalized enlargement of the body's lymphoid tissue and that this constitutional state predisposed to sudden infant death. The infants with this condition were '…well fed, pale, pasty, flabby, and rather inert and effeminate, with large tonsils and thymus.' The need for this diagnosis as an explanation for sudden infant death is clear: it absolved parents from a coroner's accusation of death due to overlaying or infanticide. Another need for the diagnosis was the occurrence of sudden infant death during anesthesia. As early as 1869, the British Medical Journal had an editorial entitled Chloroform accidents; the autopsy diagnosis of status thymicolymphaticus was to become useful to anesthesiologists. In addition, the introduction of diphtheria antitoxin in 189224 sometimes led to sudden death, which could also be excused by the thymic diagnosis. The general use of anesthesia for surgical procedures made tonsillectomy an acceptable treatment. In the early 1900s, chronic infection of the tonsils was thought to be the cause of many unexplained systemic diseases. The indications for tonsillectomy were quite broad, including 'reflex neuroses' seen in children (e.g. asthma, night-time coughing, bed-wetting and seizures) and, in adults, arthritis, muscle pain, nephritis, and various nervous symptoms.25

Tonsillectomies reached a peak in the 1930s and began declining in the 1940s and 1950s with the introduction of antibiotics. With the popularity of tonsillectomy came an increased number of deaths during surgery in otherwise seemingly well children. These deaths were likely related to anesthesia but were more easily accepted as being due to the constitutional problem of status thymicolymphaticus. What was needed was a way of diagnosing this predisposition and perhaps treating it before death occurred. By 1904, medical articles were defining the X-ray boundaries of an enlarged thymus, and the chest X-ray became the preferred method for diagnosis. In 1903, Heinecke26 showed that X-ray treatment of young animals decreased the size of the thymus and suggested that 'One might try this therapeutic measure in cases in which an abnormally large thymus is the basis of the trouble.' Subsequent papers in the radiology literature, and recommendations by eminent physicians, led to the practice of checking the thymus size by X-ray before surgery and treating all thymuses that were judged large. In addition, many hospitals began taking chest X-rays of all newborn infants and prophylactically treating those with an enlarged thymus with radiation. In 1938, Donaldson27 reported a series of 2000 newborn infants examined by chest X-ray within 24 h of birth. The thymus was felt to be enlarged in 18% of the infants, and these infants received prophylactic radiation. Undoubtedly, there were radiologists who routinely treated babies without even checking to see whether the thymus was enlarged. X-ray treatments of this intensity were generally thought to be harmless. In 1948, Conti and Patton28 reported their experience with 7400 chest X-rays taken in newborn infants and made the following comment: 'The obstetrician or pediatrician should accede to the wishes of parents who want neonatal roentgenograms of their children. It might even be wise to administer therapeutic dosage over the thymus. Whatever assurance is gained by this apparently harmless and perhaps beneficial procedure will aid in alleviating an anxiety which occasionally becomes a thymus phobia.' There is no estimate of how many children were ultimately affected. Subsequent studies of thyroid tumor risk showed a relative risk of 45 for thyroid carcinoma and 15 for thyroid adenoma.29

Hildreth et al.30 reported the risk of breast cancer as being 3.6 times greater. Increasing information about the long-term risks of radiation, and strong statements against thymic radiation (by Nelson in his pediatric textbook and by Caffey, the 'father' of pediatric radiology), resulted in the end of thymic radiation as a treatment method for children. Other examples of systematic errors in the carefully controlled hospital environment involve antibiotics. The concern about infection in premature infants led to the practice of treating most small babies with prophylactic antibiotics. Sulfa drugs were given to premature infants, beginning in Sweden in 194931 and in the US in 1950;32 no ill effects were reported. In 1953, sulfisoxazole was introduced into the nursery at Babies Hospital in New York. When oxytetracycline was suggested as an alternative to penicillin and sulfa, a randomized controlled study was begun. This study showed an increased death rate and kernicterus in the infants treated with sulfa and penicillin, a complication that was recognized only as a result of the randomized controlled study. Only later was the mechanism realized: the sulfa was displacing bilirubin from albumin, and the free bilirubin was deposited in the brain.33 In 1956, triple antibiotic treatment of all infants under 2000 g became popular, especially in those with prolonged rupture of the membranes. The drugs were chloramphenicol, erythromycin, and sulfadiazine. Although the danger of aplastic anemia in older children and adults was known, there had been no studies of the possible toxicity of chloramphenicol in newborn infants. By 1958, cases of cardiovascular collapse in treated infants were recognized. The overall impact of this iatrogenic disaster was huge. In Baltimore, a study of the vital statistics in 1957 suggested that an excess mortality of 118 infants was due to chloramphenicol.33 By 1961, hospital nurseries were the site of many staphylococcal infections. A bathing technique using hexachlorophene was begun and became popular around the world. In 1971, concern arose about the transcutaneous absorption of hexachlorophene and the cystic changes in the brains of monkeys washed each day with hexachlorophene. Finally, in 1973, the same changes were shown at autopsy in premature infants who had received hexachlorophene baths.33 The controlled, standardized care in hospitals meant that many infants were affected by each of these errors (improper incubator temperature, oxygen toxicity, radiation of the thymus, and antibiotic and hexachlorophene toxicity).

Clinical trials, systematic reviews, institutional databases and clinical care guidelines

Today, neonatology is characterized by a more cautious approach to instituting new treatments and changing established protocols. There is a greater realization that demonstrating efficacy and the absence of negative effects is absolutely necessary to avoid major risks in patient care. This basic information is best provided by the carefully designed randomized clinical trial. Matthews discusses the history of statistics in medicine and the development of clinical trials in the book Quantification and the quest for medical certainty.34 The concept of randomization was introduced in 1935 in agricultural productivity studies. In 1946, Hill designed a clinical trial for the Medical Research Council in England, studying the effect of streptomycin on tuberculosis; the results were published in 1948. This carefully designed trial was recognized throughout the world as a model of an objective drug trial. The first randomized controlled trial in American pediatric patients was carried out in 1949 by Doctors Silverman and Day at Babies Hospital in New York.35 A premature infant with retrolental fibroplasia was treated with adrenocorticotropic hormone and responded favorably. Subsequently, 31 other infants were treated, with encouraging but not uniform results. Rather than publish their results, Silverman and Day ran a randomized controlled trial, which showed little difference in the frequency of scarring retrolental fibroplasia between the treated and untreated patients. The mortality in the treated patients was higher, due to infections. Further evidence of the value of randomized controlled trials was seen in the 1953–1954 studies on oxygen and retrolental fibroplasia, the 1954 studies on antibiotics and kernicterus and, that same year, the study of humidity levels in incubators. This last study ultimately led to the realization that premature infant mortality was affected by the ambient temperature in the incubator. Silverman points out that there will always be new practices in neonatal medicine and that there will not always be the opportunity for randomized controlled trials. He summarizes with the plea, in these cases, to use concurrent controls:

Since knowledge in medicine is never complete, the use of concurrent controls in clinical trials of patient interventions cannot prevent all therapeutic catastrophes. But the precaution can always bring about a substantial reduction in the number of patients maimed and killed as a result of inevitable surprises.

Most neonatologists and perinatologists are involved in group practices. Each group must determine a method for deciding which new treatments can be introduced into the practice. All methods of care should be reviewed with respect to our knowledge (or lack of knowledge) of perinatal physiology. We should be skeptical of authoritative opinions, especially our own! We usually change practice habits after published reports on efficacy become available. But there is always the question of a report's adequacy, especially when considering conflicting reports in a literature review. Systematic reviews, as provided by The Cochrane Collaboration, are an immense aid in interpreting the available literature. In 1972, Cochrane published Effectiveness and efficiency: random reflections on health services.36 Drawing on his experiences as a prisoner-of-war medical officer, he questioned the efficacy of many medical interventions and advocated the use of randomized controlled trials to evaluate healthcare methods. He also suggested the systematic review of available controlled trials. In the 1970s the obstetrician Chalmers began a comprehensive registry of perinatal randomized clinical trials. The project was ultimately funded by the Cochrane Center as it expanded to other specialties and countries; it is now the Cochrane Collaboration.37 However, most practices in neonatology have not been studied rigorously, and many will remain untested because the treatment methods are so varied and the outcomes depend on so many factors. Also, technological changes outpace the ability to study each change. Benchmarking, the comparison of institutions as a method of determining best practices, has been used for the last decade in neonatology. Walsh38 suggests that benchmarking '…is one tool that may be used to learn from the existing natural experiment that is produced by variations in practice among institutions.' Benchmarking requires participation in a program of institutional databases such as the Vermont Oxford Network databases in the US.

The difficulties of using these databases to compare institutions and practice procedures are great, because not only will the patient populations vary but the information gathered might not be uniformly reported. If benchmarking is to be used to change practice methods, thereby involving risk, the project should be as carefully reviewed and supervised as any institutional research project. Another method used to decrease the variations in practice in neonatology is the use of 'evidence-based' practice guidelines. Evidence-based medicine is a 'concept of medical practice that integrates the best available evidence with other knowledge gained from experience, clinical judgment, and patient preferences'.37 The amount of 'hard evidence' available for practice guidelines is very limited, and often the guidelines are based more on expert opinion. In this sense they serve only to make practice more consistent and represent an extension of the 'systematic technology' phase of neonatology discussed above. These guidelines also need to be studied scientifically. Merritt et al.39 reviewed the effectiveness of guidelines in neonatal medicine and concluded that, with some exceptions, their value has not been demonstrated. They point out that the effects of guidelines are not always as anticipated and suggest that guidelines should have a clearly stated objective, which can then be measured in the medical setting.

Practice points

• Social factors strongly influence the success of care methods.
• Any systematic use of technology or therapeutic agents exposes large numbers of patients to their risks.
• Changes in methods of care must be validated by careful study before being recommended and used.
• Practice guidelines should be scientifically tested.
• When randomized controlled trials are not feasible, concurrent controls should be used.

Acknowledgements

We are grateful for the extensive grammatical review by Alex F. Robertson IV.


References

1. Enclosure Acts. <http://www.cssd.ab.ca/tech/social/tut9/lesson_2.htm>.
2. Hooker R. The industrial revolution. <http://users.ox.ac.uk/~peter/workhouse/poorlaws/poorlaws.html>.
3. Poor laws. <http://users.ox.ac.uk/~peter/workhouse/poorlaws/poorlaws.html>.
4. George MD. London life in the 18th century. New York: Alfred A. Knopf; 1925.
5. McClure RK. Coram's children: the London foundling hospital in the eighteenth century. New Haven & London: Yale University Press; 1981.
6. Crow D. The Victorian woman. New York: Stein & Day; 1972. p. 72.
7. Drake TGH. Infant feeding in England and in France from 1750 to 1800. Am J Dis Child 1930;34:1049–61.
8. Drake TGH. Pap and panada. Ann Med Hist 1931;3:289–95.
9. Fildes VA. Breasts, bottles and babies: a history of infant feeding. Edinburgh: Edinburgh University Press; 1986. p. 275.
10. Greer FR. Physicians, formula companies, and advertising. Am J Dis Child 1991;145:282–6.
11. Bracken FJ. The history of artificial feeding of infants. Maryland State Med J 1956;5:40–54.
12. Abt IA. Baby doctor. New York, London: McGraw-Hill Book Company, Inc; 1944.
13. Ellis JD. The physician-legislators of France: medicine and politics in the early Third Republic, 1870–1914. Cambridge: Cambridge University Press; 1990. p. 219.
14. Toubas PL, Nelson R. The role of French midwives in establishing the first special care units for sick newborns. J Perinatol 2002;22:75–7.
15. Baker JP. The incubator and the medical discovery of the premature infant. J Perinatol 2000;5:321–8.
16. Baker JP. The incubator controversy: pediatricians and the origins of premature infant technology in the United States, 1890 to 1910. Pediatrics 1991;87:654–62.
17. Baker JP. The machine in the nursery: incubator technology and the origins of newborn intensive care. Baltimore: The Johns Hopkins University Press; 1996.
18. Chapple CC. Controlling the external environment of premature infants in an incubator. Am J Dis Child 1938;50:459–60.
19. Robertson A. Reflections on errors in neonatology: I. The 'hands-off' years, 1920 to 1950. J Perinatol 2003;23:48–55.
20. Silverman WA. The future of clinical experimentation in neonatal medicine. Pediatrics 1994;94:932–8.
21. Silverman WA. Retrolental fibroplasia: a modern parable. New York: Grune and Stratton; 1980.
22. Duffy BJ, Fitzgerald PJ. Thyroid cancer in childhood and adolescence: a report on twenty-eight cases. Cancer 1950;3:1018–32.
23. Jacobs MT, Frush DP, Donnelly LF. The right place at the wrong time: historical perspective of the relation of the thymus gland and pediatric radiology. Radiology 1999;210:11–6.
24. Dally A. Status lymphaticus: sudden death in children from 'Visitation of God' to cot death. Medical History 1997;41:70–85.
25. Crowe SJ, Watkins SS, Roteholz AS. Relation of tonsillar and nasopharyngeal infections to general systemic disorders. Bull Johns Hopkins Hosp 1917;28:1–63.
26. Heinecke H. Über die Einwirkung der Röntgenstrahlen auf Tiere. Münchener Med Wochenschr 1903;50:2090–2.
27. Donaldson SW. A study of the relation between birth weight and size of the thymus shadow in 2000 newborns. Ohio State Med J 1938;34:538–41.
28. Conti EA, Patton GD. Study of the thymus in 7,400 consecutive newborn infants. Am J Obstet Gynecol 1948;56:884–92.
29. Shore RE, Woodard E, Hildreth N, Dvoretsky P, Hempelmann L, Pasternack B. Thyroid tumors following thymus irradiation. JNCI 1985;74:1177–84.
30. Hildreth NG, Shore RE, Dvoretsky PM. The risk of breast cancer after irradiation of the thymus in infancy. N Engl J Med 1989;321:1281–4.
31. Muhl G. On prophylactic and early treatment of infections in newborn infants, especially the premature. Acta Pediatr Scand 1949;37:221–36.
32. Clifford SH. Prevention and control of infection in nurseries for premature infants. Am J Dis Child 1950;79:377–83.
33. Robertson A. Reflections on errors in neonatology: II. The 'heroic' years, 1950 to 1970. J Perinatol 2003;23:154–61.
34. Matthews JR. Quantification and the quest for medical certainty. Princeton: Princeton University Press; 1995.
35. Silverman WA. Personal reflections on lessons learned from randomized trials involving newborn infants, 1951 to 1967. James Lind Library. <www.jameslindlibrary.org>.
36. Cochrane AL. Effectiveness and efficiency: random reflections on health services. London: Nuffield Provincial Hospitals Trust; 1972.
37. Dickersin K, Manheimer E. The Cochrane Collaboration: evaluation of health services using systematic reviews of the results of randomized controlled trials. Clin Obstet Gynecol 1998;41:315–31.
38. Walsh MC. Benchmarking techniques to improve neonatal care: uses and abuses. Clin Perinatol 2003;30:343–50.
39. Merritt TA, Gold M, Holland J. A critical evaluation of clinical practice guidelines in neonatal medicine: does their use improve quality and lower costs? J Eval Clin Pract 1999;5:169–77.