Journal of the Franklin Institute 348 (2011) 459–475 www.elsevier.com/locate/jfranklin
The 2007 Benjamin Franklin Medal in Electrical Engineering presented to Robert H. Dennard, Ph.D.

Lawrence W. Dobbins (a), Charles A. Kapps (b)
(a) Electrical Engineering, King of Prussia, PA, USA
(b) Computer & Information Science, Temple University, Philadelphia, PA, USA
Available online 16 May 2010
Abstract

For inventing the 1-transistor/1-capacitor dynamic random access memory that significantly reduced the cost of memory, and for contributing to the development of the metal oxide semiconductor scaling principle that guides the design of increasingly small and complex integrated circuits.
© 2010 Published by Elsevier Ltd. on behalf of The Franklin Institute.
1. Introduction

Nearly everywhere you go these days, ordinary people can be seen using highly sophisticated electronic equipment. Much of this equipment operates under the control of a programmed digital computer; this even includes such simple devices as digital watches. More and more, this equipment has "multimedia" functions: digital cameras, MP3 music players, camera cell phones, and virtually all personal computing equipment. Multimedia functions are not possible without high-speed processors, sophisticated display devices, and large amounts of high-speed memory. It was Dennard's invention of the Dynamic Random Access Memory (DRAM) that made these multimedia devices possible. Today, a typical notebook computer has 1000 times as many bytes of memory as the largest mainframe computers of 40 years ago, at a price that makes it available to virtually anybody who wants one. In addition, Dennard's scaling laws have helped reduce the size not only of memory, but also of processors and integrated circuits of all kinds. Reducing the size of integrated circuits not
only reduces cost and allows more complexity, but also allows for increased speed and lower power consumption.
2. Background

2.1. Use of the capacitor in history

Static electricity was discovered by the ancient Greeks but did not begin to be well understood until the beginning of the 18th century. In 1745, Pieter van Musschenbroek invented the "Leyden jar." This was the first capacitor, and it could store an electrostatic charge. Benjamin Franklin took an interest in Leyden jars and made significant improvements to the way they worked. Franklin used a Leyden jar capacitor as part of his famous kite experiment to store the electricity produced by lightning. One can only imagine what Benjamin Franklin would have thought of the idea of easily affordable personal computers having billions of independently chargeable capacitors in their memory chips.
2.2. A brief history of computer memory

One of the biggest problems that vexed computer development from the earliest days until very recently was the difficulty and expense of storing data for fast access by the central processing unit. As technology developed, many ingenious schemes were devised for implementing computer memory. In this section, a sample of these schemes is discussed, giving context for the development of the DRAM memories used in virtually all computers today, from hand-held devices to large servers. The discussion is limited to so-called "main memory," used for active data and program storage in the computer. There is a similar history for large-scale peripheral memories, but that is a separate topic, not closely related to DRAMs.
2.3. The ENIAC

The ENIAC was the first large-scale electronic computer.¹ It was developed at the University of Pennsylvania and completed in 1946. Its memory was constructed from vacuum tube ring counters, a technology that required ten vacuum tubes for each decimal digit of storage. The total storage consisted of 20 words, each a 10-digit signed decimal number, so the memory alone required over 2000 vacuum tubes. The entire ENIAC used 17,468 vacuum tubes and consumed 174 kW of electricity. Later, a core storage unit was added, which increased the size of the memory by 100 words. (Core storage is discussed later.) Later designs of vacuum tube storage supplanted ring counters with binary-coded decimal circuits requiring 4 vacuum tubes per digit rather than 10 (Illustration 1).

¹ There are claims that Alan Turing's Colossus and John Vincent Atanasoff's ABC computer predate the ENIAC, but little is known of the Colossus, since it was developed in secret and destroyed at the end of WWII, and the ABC was of much smaller scale than the ENIAC.
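As a rough check of the tube count (our arithmetic, counting only the digit counters themselves):

\[ 20\ \text{words} \times 10\ \tfrac{\text{digits}}{\text{word}} \times 10\ \tfrac{\text{tubes}}{\text{digit}} = 2000\ \text{tubes}, \]

with sign and control circuitry pushing the total "over 2000."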
Illustration 1. One-digit vacuum tube counter. Photo by C. Kapps.
Illustration 2. UNIVAC I acoustic delay line memory module. Photo from Wikipedia.
2.4. The UNIVAC I

The UNIVAC I was the first commercial computer; it was delivered in 1951. Memory on the UNIVAC I consisted of 1000 words, each containing 12 alphanumeric characters. The technology used was acoustic delay lines. In an acoustic delay line memory, the data are encoded into acoustic pulses that are entered into an acoustic medium via a transducer. The pulses travel down the medium at the speed of sound until they reach a receiving transducer, where they are amplified, reshaped, and finally re-entered into the medium. This serves as a memory because some amount of data can be entered into the medium before the first element of data reaches the receiving transducer (Illustration 2). The acoustic media for the UNIVAC I memory modules were two-foot-long tubes filled with mercury. Mercury was used because it has an acoustic impedance that is compatible with the piezoelectric crystals used in the transducers. The delay was sufficient to store 10 words of data in each mercury column. Each module had eighteen mercury columns, so that seven modules provided the 1000 words of memory. Additional modules served as I/O buffers and other temporary storage (Illustration 3). Acoustic delay lines were used for memory as recently as 1970. More recent acoustic delay lines used a coiled length of piano wire as the acoustic medium, with magnetostrictive transducers at each end.
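Functionally, a recirculating delay line is a fixed-length circular queue whose access time depends on where a word currently sits relative to the receiving transducer. The following sketch (our illustration in Python, not period code; the class name and sizes are invented for the example) models that behavior:

# Minimal model of a recirculating acoustic delay-line memory.
# Words circulate continuously; reading a word means waiting until
# it arrives at the receiving transducer.

class DelayLine:
    def __init__(self, n_words):
        self.slots = [0] * n_words      # words currently "in flight"
        self.head = 0                   # index now at the receiving transducer

    def tick(self):
        # One word period passes: the head word is received, amplified,
        # reshaped, and re-entered at the far end of the line.
        self.head = (self.head + 1) % len(self.slots)

    def access(self, addr):
        # Wait (in word times) until the addressed word reaches the head.
        latency = (addr - self.head) % len(self.slots)
        for _ in range(latency):
            self.tick()
        return self.slots[addr], latency

line = DelayLine(10)                    # ten words per mercury column
line.slots[7] = 0b101010
word, waited = line.access(7)
print(f"read {word:06b} after {waited} word times")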
Illustration 3. Acoustic delay line memory. Drawing by C. Kapps.
Illustration 4. Acoustic delay line CRT buffer—ca. 1969. Photo by C. Kapps.
One application for these memories was the refresh memory in CRT displays. A CRT display must repaint the screen 50-70 times a second in order to avoid flicker, so a memory is needed to hold the data displayed on the screen. At that time, character-cell displays typically showed 24 lines of 80 characters, requiring 1920 bytes of storage (Illustration 4).

2.5. IBM 650

The IBM 650 computer was first manufactured in 1953. It belonged to a family of computers that used a magnetic drum as main memory. A magnetic drum is a rotating cylinder coated with magnetic material, as on a magnetic tape or disk. Read/write heads similar to those used on tapes and disks are positioned over the drum in order to read and write tracks of data. Characteristically, the heads are fixed, with one head per track. The drum on the IBM 650 held 2000 words of data. Each word consisted of 10 digits stored in the bi-quinary number system (alternating base-2 and base-5 digits forming a decimal system) (Illustrations 5 and 6). The drum rotated at 12,500 RPM, requiring 4.8 milliseconds per revolution. The time necessary to access a piece of data could vary.
Illustration 5. Magnetic drum memory from the IBM 650 computer. Drawing from IBM Archives.
Illustration 6. 4×4 section of early core memory. Photo by C. Kapps.
If the data was just on the horizon, about to come under the read head, access was very fast; but if the data had just been missed, one had to wait the full 4.8 ms until it came around again. This delay is called latency. To minimize latency, provision was made so that both programs and data could be scattered irregularly around the drum. Programs could then be optimized so that data and code were just reaching the read heads when they were needed. The assembly language for the IBM 650 was called SOAP, for Symbolic Optimized Assembly Program. Instructions were not executed in sequence; rather, each instruction contained the address of the next instruction, and the optimizing assembler had a built-in algorithm that located instructions so as to minimize latency. Note that the acoustic delay line of the UNIVAC I also had latency problems; however, its instructions were executed in sequence (except for branching), so there was little opportunity to control latency. A toy version of SOAP-style placement is sketched below.
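This is a simplified sketch of the idea, not the actual SOAP algorithm; the track size and execution times are invented for the example. If an instruction at drum position p takes e word times to execute, its successor should sit near position (p + 1 + e) mod N so that it arrives under the heads just as it is needed:

# Toy SOAP-style optimizing placement on a drum track of N word slots.
# An instruction at slot p that executes in e word times wants its
# successor at (p + 1 + e) % N, so rotation hides the latency.

N = 50                                  # words per drum track (illustrative)

def place_program(exec_times, start=0):
    """Assign drum slots; fall back to the nearest free slot on collision."""
    free = set(range(N))
    addr, placement = start, []
    for e in exec_times:
        if addr not in free:            # preferred slot taken: take the
            addr = min(free, key=lambda a: (a - addr) % N)  # nearest ahead
        free.remove(addr)
        placement.append(addr)
        addr = (addr + 1 + e) % N       # ideal slot for the next instruction
    return placement

print(place_program([3, 3, 5, 2, 7]))   # -> [0, 4, 8, 14, 17]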
2.6. Magnetic core memories

As the 1960s approached, other memory schemes were developed using such devices as CRT storage tubes. However, magnetic core memories soon took over and remained the main memory technology well into the 1970s.

A magnetic core is a small bead of a magnetic ceramic material called ferrite. The beads are formed so that the crystal structure is aligned around the circumference of the toroidal bead; thus, the bead is always magnetized in either a clockwise or counterclockwise direction around the torus. One direction symbolizes 0, the other 1. An electric current passed through the opening in the torus can force the magnetization in one direction or the other. A sense wire passed through the opening can detect whether the state of the core changes from 1 to 0 or vice versa. To read the data in a core, a 0 is written. If the stored data is 1, a transition will be detected; if it is 0, there is no transition. In either case the core is now set to 0, and its original data is lost. This is known as destructive readout. Since computer programs generally loop, using data over and over, the data must be rewritten each time it is read so that it will be available the next time it is needed. This read/rewrite cycle often became the main factor controlling the design of computer operation. Notice that the data becomes available halfway through the memory cycle, and many computers took advantage of that by executing instructions during the rewrite phase of the cycle.

While core memory had the disadvantage of destructive readout, it had the advantage of non-volatility. Permanent magnets require no power to retain their magnetism, and therefore core memories require no power to retain their data contents. Some computer manufacturers would ship computers with preloaded startup programs so that they could just be plugged in and started up. (Self-booting disks had yet to be invented.)

To simplify addressing, the cores are normally arranged in a square array with two write wires running through each core, one vertical and one horizontal. Each wire carries one half of the current necessary to flip the core magnetization. Since the effect of current in pairs of wires is additive, only the one core at the intersection of the active vertical wire and the active horizontal wire is written; no other cores are affected. As a result, the complexity of the addressing logic is on the order of the square root of the number of cores.

During the two decades of core memory's popularity many variations were tried, including multi-aperture cores, magnetic rods, and thin-film magnetic surfaces. However, none of these supplanted core memory. It was not until about 1980 that semiconductor memories began to replace it.

Virtually all core memories, and most memory technologies that followed, arranged the bit cells into a two-dimensional array. The array is usually square, with the same number of rows and columns, so the number of cells is the square of the number of rows (or columns). As a consequence, the amount of row and column circuitry is on the order of the square root of the number of bit cells. Thus, a 16K (16,384) bit core memory has 128 rows and 128 columns (128² = 16,384), and hence only 128 repetitions of the row and column circuitry in each dimension. The benefit of this two-dimensional scheme is even greater for the much larger memories of today. For example, a 1-gigabit (1,073,741,824-bit) memory plane has 32,768 rows and columns, only 0.003% of the number of cells. Thus, even if the complexity of the row and column circuitry is 100 times that of the bit cells, the overhead is only 0.3%.
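The square-root relationship is easy to verify numerically; this short sketch (ours) reproduces the figures quoted above:

# Row/column periphery of a square memory plane grows only as the
# square root of the cell count.

import math

for bits in (16 * 1024, 1024 ** 3):
    side = math.isqrt(bits)             # rows (= columns) in a square array
    print(f"{bits:>13,} cells: {side:>6,} x {side:>6,} array, "
          f"rows/cells = {side / bits:.5%}")
# 16,384 cells  ->    128 x 128,    rows/cells = 0.78125%
# 1-Gbit plane  -> 32,768 x 32,768, rows/cells = 0.00305% (the text's 0.003%)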
Illustration 7 shows the 2-dimensional bit cell arrangement that has been used in most memories, from magnetic core to integrated circuit memories.

2.7. Integrated circuit memories

Vacuum tube flip-flops and ring counters were not very practical, since each bit required several cubic inches of space. The transistors that became prevalent in the
Illustration 7. Magnetic core. Drawing by C. Kapps.
Illustration 8. 4096-bit Core Plane. Photo by C. Kapps.
1950s and the 1960s helped a little, but each bit still required cubic centimeters instead of cubic inches. This was still too much volume for memories of any size larger than the register banks in a central processor. However, the development of large-scale integrated circuits in the late 1960s reduced the size of circuits by orders of magnitude and made it possible to create memories of a thousand bits or more on a chip of silicon 3/16th of an inch square. The earliest integrated circuit memories consisted of an array of flip-flops. Each flip-flop was constructed from two cross-coupled inverters, with two additional transistors used for read/write enabling, for a total of six transistors per bit cell. The bit cells were arranged in a square array with row addressing on the left side and read/write and selection logic along the bottom. For each row or column line, the addressing and selection logic required several transistors, but the number of lines needed was only the square root of the number of bit cells, so the overhead for addressing was typically only a percent or two of the overall chip area. This two-dimensional structure is similar to that described for core memories above. Illustrations 8-11 show the schematic layout of a single static memory cell and its arrangement into a 4×4 array. The circuits are shown using CMOS technology; in the older NMOS technology, the P-channel pull-up transistors would be replaced with N-channel depletion transistors with the gates tied to the sources rather than cross-coupled. A behavioral sketch of such a cell follows.
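At the logic level, the six-transistor cell is a pair of cross-coupled inverters behind two access transistors. The following behavioral model (ours; a real cell is an analog circuit in which a write works by electrically overpowering the feedback loop) captures the two properties that matter here, non-destructive readout and static retention:

# Behavioral model of a 6T SRAM bit cell: cross-coupled inverters hold
# the state; access transistors gated by the word line connect it to
# the bit lines for reading and writing.

class SramCell:
    def __init__(self):
        self.q = 0                      # the inverter pair holds q (and not-q)

    def write(self, word_line, bit):
        if word_line:                   # access transistors on: the bit lines
            self.q = bit                # overpower the feedback loop

    def read(self, word_line):
        # Non-destructive: the inverters keep driving the stored value.
        return self.q if word_line else None

cell = SramCell()
cell.write(word_line=1, bit=1)
print(cell.read(word_line=1))           # -> 1, and still 1 on every re-read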
Illustration 9. 2-Dimensional memory plane. Drawing by C. Kapps.
Illustration 10. Static memory element. Drawing by C. Kapps.
Illustration 11. 4×4 static memory array. Drawing by C. Kapps.
3. Contributions of Robert Dennard

The breakthrough in memory design for which Dennard is responsible makes use of capacitors for storing data. A capacitor consists of two conducting surfaces that are close together but insulated from each other. Placing a voltage between the conducting surfaces causes an
Illustration 12. Capacitor/switch storage. Drawing by C. Kapps.
electrostatic charge to be stored in the capacitor, which can be detected at a later time. Illustration 12 shows how a capacitor might store data. If switch "a" is pressed momentarily, the capacitor will be charged to the voltage of the voltage source. At a later time, switch "b" can be pressed, and the voltage from the capacitor will light the lamp briefly, until the charge on the capacitor is dissipated. Had switch "a" not been pressed, there would be no charge on the capacitor and the lamp would not flash. Thus, observing the lamp tells us whether switch "a" had previously been pressed. In other words, the capacitor "remembers" whether switch "a" was pressed, and we have a memory unit of sorts.²

There are two things to note. The first is that pressing switch "b" drains the capacitor's charge; it is thus necessary to rewrite the data back into the capacitor if the datum it represents is to be read more than once, as is the usual case with computer memory. The second is that the insulation of the circuit is not perfect, so the capacitor will slowly lose or accumulate charge, corrupting the data. As a result, memories built with capacitors must have their data read and rewritten continually in order for the data to remain valid.

The ability of a capacitor to hold charge is called capacitance; it is proportional to the surface area of the conductors and inversely proportional to the thickness of the insulation. Capacitance is measured in farads. A 1 F capacitor charged to 1 V can discharge at 1 A for 1 s, and a low-voltage 1 F capacitor might be as large as a tuna can. Since integrated circuits are so small, a typical on-chip capacitor might be 1/10 of a picofarad, or 10⁻¹³ F. As a result, integrated circuit memories of this sort must have their data read and rewritten about once every millisecond; a rough model of this leakage is sketched below. Because these memories require continual reading and rewriting, they are called dynamic, as opposed to the flip-flop memories described above, which are called static because, once written, the data remains valid without any active intervention.

Illustration 13 shows the 1-transistor/1-capacitor dynamic memory cell invented by Dennard [1]. The transistor acts as a switch that connects the storage capacitor to the column data line. The column data line connects to read/write logic that either forces charge into or out of the capacitor or detects the presence of charge. Illustration 14 shows how the dynamic memory cells can be arranged into a 4×4 array. Since each column data line itself has capacitance, connecting it to the storage capacitor has the effect of connecting two capacitors together. The result is a sharing of charge.
² Note that varying voltage sources can be used in this setup, charging the capacitor to varying voltages. This can be used to construct an analog memory, sometimes called a "sample and hold" circuit.
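The millisecond figure can be motivated with a simple RC decay model. The capacitance below is the 0.1 pF value from the text; the leakage resistance is an assumed, illustrative number, not a measured one:

# DRAM storage-capacitor leakage modeled as exponential RC decay.

import math

C = 0.1e-12                             # storage capacitance, farads (from text)
R_leak = 1e11                           # effective leakage resistance, ohms (assumed)
tau = R_leak * C                        # time constant: 10 ms with these values

v0, v_min = 2.5, 1.25                   # full level and minimum sensable level
t_limit = tau * math.log(v0 / v_min)
print(f"stored level halves after {t_limit * 1e3:.1f} ms; "
      f"refreshing every millisecond keeps well clear of that")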
Illustration 13. 1-Transistor/1-Capacitor dynamic memory cell. Drawing by C. Kapps.
Illustration 14. 4×4 DRAM array. Drawing by C. Kapps.
Since the column data line extends the width of the chip, it is large and has much more capacitance than the storage capacitor, so connecting the two produces only a minuscule voltage change, regardless of the charge on the storage capacitor. Techniques are therefore used to increase the capacitance of the storage capacitor and reduce the relative capacitance of the column data line. We do not want to make the physical size of the storage capacitor larger than necessary, since that would defeat the purpose of dynamic memory, which is to be very small and compact. However, as the overall size of the components on the chip is scaled down, getting enough capacitance becomes more and more of a problem. For example, if the component size is scaled in half, four times as many bits can be placed on a chip of a given size. The column wires are half as wide due to the scaling, but since the chip is the same size, they are the same length as before; the capacitance of the column wire is therefore cut in half. The storage capacitor, however, is half as wide and half as high, potentially cutting its capacitance by a factor of four. This would double the ratio of column wire capacitance to bit storage capacitance. To take advantage of reduced component size without losing data, some means of increasing the storage capacitance is needed that does not make the capacitor wider or higher; the scaling arithmetic is sketched below.
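The halving argument can be made concrete with a parallel-plate approximation (ours; the geometry factors are illustrative, normalized to 1):

# Effect of linear scaling s on the ratio of column-line capacitance to
# storage capacitance (parallel-plate approximation, constant chip size).

def ratio(s, w_line=1.0, l_line=1.0, w_cap=1.0, h_cap=1.0):
    c_line = (w_line * s) * l_line      # line width scales; its length does not
    c_cap = (w_cap * s) * (h_cap * s)   # both capacitor dimensions scale
    return c_line / c_cap

print(ratio(1.0), ratio(0.5))           # -> 1.0, then 2.0: the ratio doubles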
Illustration 15. V-cut storage capacitor. Drawing by C. Kapps.
Illustration 16. DRAM cell with deep-trench capacitor. Drawing from NEC Electronics.
One technique for increasing capacitance is to use a transistor that is turned on and has a floating drain as the capacitor. The depletion region between the channel/source/drain and the body of the transistor forms an extremely thin insulation layer; furthermore, the gate-to-channel capacitance is added to the data storage capacitance as well. Another technique is to etch a "V"-shaped trench for the capacitor, increasing the effective area of capacitance without increasing the surface area occupied on the chip (see Illustration 15). In later designs, all sorts of techniques have been used to increase capacitance in reduced areas, including the etching of extremely deep trenches. Illustration 16 shows how a DRAM cell can be built using a deep-trench capacitor: the trench is etched into the substrate, lined with an extremely thin insulation layer, and then lined with a conductor to form the capacitor. Illustration 17 shows a photomicrograph of an array of deep-trench capacitors [8].

Before reading the data in the storage capacitor, the column data line is "precharged" to a standard voltage that acts as a reference for measuring the slight change in voltage produced by the storage capacitor. A sense amplifier at the end of the column data line detects the slight change in voltage caused when the bit storage capacitor is connected. The sense amplifier sets or clears a latch that holds the data for rewriting and for presentation to the outside world.

Illustration 18 shows an implementation of a combination sense amplifier/data latch [5]. The sense amplifier is, in effect, an RS flip-flop. The column lines are paired off into odd
Illustration 17. Micrograph of an array of deep-trench capacitors. Photo from IEEE.
Illustration 18. Combination sense amplifier/data latch [5].
and even addresses. The even-numbered column is connected to the left side of the flip-flop and the odd-numbered column to the right side. Initially, both columns are forced, or precharged, to a mid-range voltage. This leaves the flip-flop in an unstable equilibrium, halfway between 0 and 1 on both sides; a slight nudge up or down on either side will cause it to flip to a 0 or a 1. A charge-conservation sketch of this nudge follows.
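The nudge is simply conservation of charge between the two connected capacitors. The sketch below (ours, with illustrative capacitances and the 2.25 V precharge from the text) uses fresh 0 and 1 levels; the leaked-level cases of Illustrations 19 and 20 additionally involve the amplifier's own transient behavior:

# Charge sharing when the access transistor connects the storage
# capacitor (C_cell) to the precharged column line (C_line).
# Capacitances are illustrative; the line is ~10x the cell.

C_cell, C_line = 0.1e-12, 1.0e-12       # farads
V_pre = 2.25                            # column precharge voltage (from text)

def column_voltage(v_cell):
    # Total charge is conserved across the two connected capacitors.
    return (C_line * V_pre + C_cell * v_cell) / (C_line + C_cell)

for v_cell, label in ((0.0, "stored 0"), (2.5, "stored 1")):
    v = column_voltage(v_cell)
    nudge = "down -> flip-flop reads 0" if v < V_pre else "up -> flip-flop reads 1"
    print(f"{label}: column settles at {v:.3f} V, nudged {nudge}")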
Illustration 19. Effect of reading 0. Simulation by C. Kapps.
Illustration 20. Effect of reading 1. Simulation by C. Kapps.
The same circuitry is used for two adjacent columns. This has two advantages: it saves circuitry, and, perhaps more importantly, it puts exactly the same load on both sides of the flip-flop, ensuring balance in the unstable equilibrium state and preventing a natural tendency to flip in a preferred direction. Illustrations 19 and 20 show the effect of using this sense amplifier to read a 0 and a 1, respectively. In Illustration 19, the bit storage capacitor is charged to 1.0 V; it was initially charged to 0.0 V to represent a Boolean 0, but leakage has allowed it to drift up to 1.0 V. The column line was precharged to 2.25 V, which is approximately the equilibrium voltage of the flip-flop/sense amplifier. When the even row select line is asserted, it turns on the bit selection transistor, which pulls the storage capacitor nearly up to the 2.25 V of the column line but also pulls the column line down slightly, enough to upset the equilibrium and cause the flip-flop to reset to 0. Since the bit selection transistor is still on, it drags the bit storage capacitor along with the column line down to 0.0 V, thus refreshing the charge level in the capacitor. In Illustration 20, the bit storage capacitor is initially charged to 1.5 V; the value for a Boolean 1 is about 2.5 V, but 1.0 V of leakage has lowered it to 1.5 V. In this case, the column line is nudged up enough that the flip-flop swings to a Boolean 1. The bit
Illustration 21. SRAM vs. DRAM 1-bit cell size. Drawing by C. Kapps.
Illustration 22. 4-bit SRAM array vs. 48-bit DRAM array. Drawing by C. Kapps.
storage capacitor is recharged back to 2.5 V in the process. Note that the storage capacitor is not charged all the way to the 5.0 V of Vdd; this is because the polarity of the bit selection transistor is now reversed, so that it operates as a source follower, which cannot pull up beyond a threshold drop below its gate voltage. Illustration 21 shows the physical layouts of a 1-bit six-transistor SRAM cell versus a 1-bit 1-transistor/1-capacitor DRAM cell; the size difference is apparent. Illustration 22 shows how these cells can be arranged into two-dimensional arrays. These particular layouts yield a 12:1 advantage for the DRAM over the SRAM in the number of bits that can occupy the same chip area; the actual advantage varies with layout and process parameters.

While DRAM has a significant advantage over SRAM in the number of bits that can be stored in a given space, there is a downside: DRAM must be continually refreshed, and every read must be followed by a rewrite of the data destroyed in the reading process. This requires additional circuitry, making DRAM a poor choice for simple systems such as smaller embedded processors. However, as embedded processors become more complex, on-chip DRAM memories are beginning to be used [4,7].
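To give a feel for the cost of refresh, the sketch below estimates the fraction of memory cycles it consumes; all three parameters are assumed, order-of-magnitude values, not taken from any particular device:

# Refresh overhead: every row must be read and rewritten once per
# retention period, stealing cycles from normal accesses.

rows = 32_768                           # rows in the plane (assumed)
t_row_cycle = 50e-9                     # seconds per row read/rewrite (assumed)
t_retention = 64e-3                     # refresh period, seconds (assumed)

overhead = rows * t_row_cycle / t_retention
print(f"refresh consumes ~{overhead:.2%} of all memory cycles")  # ~2.56%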
The rewriting and refreshing also slow the rate at which DRAM can operate. To alleviate this problem, most computers of any degree of complexity use small, fast SRAMs to store a portion of the data from a much larger but slower DRAM. The technique for doing this is called "caching." Caching algorithms can be quite complex; they are based on the principle of "locality," which presupposes that most computer programs are content to use only small amounts of memory for significant stretches of time. Thus it can pay to transfer these local pieces of memory to fast cache storage and limit the need to transfer data from the slower but cheaper memory that holds the large amount of data a program may need over its whole execution. It is this extensive use of cache storage that makes inexpensive DRAM so attractive for the bulk of a computer's active memory.

In addition, many modern DRAM chips have their own caches built right on the chip. When a row is selected in a DRAM array, every capacitor on that row is read, so there must be a data latch for every bit on the row. An extra set of latches can be added to serve as a cache, so that bits from successive addresses can be read without actually reading from the DRAM array. A separate set of latches is required because refresh must still continue, and the DRAM rows must still be read for that purpose.

4. Dennard's contribution to scaling theory

In 1972, Robert Dennard, along with F.H. Gaensslen, L. Kuhn, and H.N. Yu, presented a paper at the International Electron Devices Meeting. This paper outlined a method for scaling down the size of the transistors in an integrated circuit. Scaling involves reducing all three physical dimensions, but it also involves recalculating doping levels, operating voltages, and currents in order to avoid the deleterious effects usually associated with shortening the transistor channels. These formulations were later published in [2].

The January 2007 newsletter of the IEEE Solid-State Circuits Society [3] features Robert H. Dennard's contribution of the scaling principles. The following abstract, displayed on the society's website, describes Dennard's contributions as applied right up to the present time:

"In 1974 Robert Dennard, et al., wrote a paper that explored different methods of scaling MOS devices, and pointed out that if voltages were scaled with lithographic dimensions, one achieved the benefits we all now assume with scaling: faster, lower energy, and cheaper gates. The lower energy per switching event exactly matched the increased energy by having more gates and having them switch faster, so in theory the power per unit area would stay constant. This set of linear scaling principles of MOS technology has served as the treadmill on which the entire Semiconductor Industry has grown for the past three decades" [6].
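The constant-field scaling rules of [2] can be summarized programmatically. The table below (our sketch; k is the dimensionless shrink factor) shows why power density stays constant:

# Constant-field (Dennard) scaling: dimensions and voltage shrink by 1/k,
# and the derived quantities follow.

def dennard(k):
    return {
        "dimension (L, W, t_ox)": 1 / k,
        "voltage (V)":            1 / k,
        "current (I)":            1 / k,        # I tracks V at constant field
        "capacitance (C)":        1 / k,        # C ~ area / t_ox
        "delay (CV/I)":           1 / k,        # gates get faster
        "power per gate (VI)":    1 / k**2,
        "gates per area":         k**2,
        "power density":          1.0,          # (1/k^2) * k^2: constant
    }

for name, value in dennard(2.0).items():
    print(f"{name:<24} x {value:g}")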
5. List of honors, awards, fellowships, prizes and honorary degrees

Received the Lemelson-MIT Lifetime Achievement Award ($100,000), May 2005
Elected to the American Philosophical Society, 1997
Received the Ronald H. Brown American Innovator Award, 1997
Inducted into the National Inventors Hall of Fame, 1997
Awarded the Harvey Prize by Technion-Israel Institute of Technology, 1990
Awarded Doctor of Science (honoris causa), State University of New York, 1990
Received the Industrial Research Institute Achievement Award, 1989
Awarded the National Medal of Technology (from President Reagan), 1988
Elected to the National Academy of Engineering, 1982
Received the IEEE Cledo Brunetti Award, 1982
Elected IEEE Fellow, 1980
Appointed IBM Fellow, 1979

6. 2007 Benjamin Franklin medal in electrical engineering

Citation: For inventing the 1-transistor/1-capacitor dynamic random access memory that significantly reduced the cost of memory, and for contributing to the development of the metal oxide semiconductor scaling principle that guides the design of increasingly small and complex integrated circuits.

References

[1] R.H. Dennard, Field-Effect Transistor Memory, U.S. Patent 3,387,286, 1968.
[2] R.H. Dennard, F.H. Gaensslen, V.L. Rideout, E. Bassous, A.R. LeBlanc, Design of ion-implanted MOSFET's with very small physical dimensions, IEEE Journal of Solid-State Circuits 9 (5) (Oct. 1974) 256-268.
[3] Newsletter of the IEEE Solid-State Circuits Society, January 2007.
[4] S.S. Iyer, et al., Embedded DRAM: technology platform for the Blue Gene/L chip, IBM J. Res. Dev. 49 (2/3) (March/May 2005).
[5] K.S. Gray, Cross-coupled charge-transfer sense amplifier and latch sense scheme for high-density FET memories, IBM J. Res. Dev. 24 (3) (May 1980).
[6] J.A. Mandelman, R.H. Dennard, et al., Challenges and future directions for the scaling of dynamic random-access memory (DRAM), IBM J. Res. Dev. 46 (2/3) (March/May 2002).
[7] R.E. Matick, S.E. Schuster, Logic-based eDRAM: origins and rationale for use, IBM J. Res. Dev. 49 (1) (January 2005).
[8] R. Jammy, U. Schroeder, et al., Synthesis and characterization of TiO2 films for deep-trench capacitor applications, in: Proceedings of the 2000 12th IEEE International Symposium, vol. 1, 2000, pp. 147-150.
The Benjamin Franklin Medal in Electrical Engineering: Medal Legacy

Previous laureates in electrical engineering who share a common intellectual thread with Robert Dennard include:
1915  Thomas Edison, Franklin Medal
1929  Thomas Edison, Scott Medal
1936  Robert J. Van de Graaff, Cresson Medal
1941  Edwin H. Armstrong, Franklin Medal
1941  Harold Eugene Edgerton, Potts Medal
1947  Vladimir Kosma Zworykin, Potts Medal
1949  J. Presper Eckert, Jr., Potts Medal
1955  Claude Elwood Shannon, Ballantine Medal
1960  Harry Nyquist, Ballantine Medal
1961  J. Presper Eckert, Jr., Scott Medal
1961  Leo Esaki, Ballantine Medal
1966  Robert N. Noyce, Ballantine Medal
1966  Jack S. Kilby, Ballantine Medal
1973  Willard S. Boyle, Ballantine Medal
1975  Chi-Tang Sah, Certificate of Merit
1979  Seymour R. Cray, Potts Medal
1979  Marcian E. Hoff Jr., Ballantine Medal
2002  Shuji Nakamura, Benjamin Franklin Medal in Engineering
Peter J. Collings, Ph.D.
Chairman
The Committee on Science and the Arts