Computers and the implicate order: the ghost in the machine?

J. Social Biol. Struct. 1985 8, 39–49

Jim Hauser
Department of Physics, California Polytechnic State University, San Luis Obispo, CA 93407, USA
When the computer first appeared as an educational tool, the textbook quite naturally became a model for the first educational software. Although some computer graphics were used, these programs relied primarily on the computer's text processing capabilities. At the other end of the spectrum is the video game: although some text is used, these games rely primarily on the computer's graphics capabilities. Recent developments in educational software represent the best of both worlds. Imagine a textbook that has illustrations you can play with like an electronic Tinker Toy set, and you have ROCKY'S BOOTS (Robinett & Grimm, 1982). Imagine a typing manual in which the words and letters march relentlessly forward until you type them out of the sky, and you have TYPE ATTACK (Hauser & Brock, 1982).

Whereas educators originally programmed for the left side of the brain, now the right side is being considered. In a nutshell, the left side of the brain is logical, analytic, verbal, and text-oriented, while the right side is intuitive, synthetic, non-verbal, and graphics-oriented. To be creative in science, music, or everyday life, we have to have one foot in each realm. Theoretical physicist Richard Feynman has said that someone with perfect mathematical ability (that is, someone well-versed in logic and analysis) who has no prejudices (that is, no intuitions) would be a failure as a physicist. Although Feynman would perhaps use different terminology, he would agree that any steps that we can take toward developing both sides of the brain are steps in the direction of enhanced creativity and effectiveness.

But what terminology would Feynman use? We are somewhat uncomfortable with the terms left and right brain. Brain function may not be localized in this simple manner. Nonetheless, the brain appears to have two complementary information processing modes that have been conventionally labelled left and right brain. It is our thesis that this duality is more fundamental than physiology and psychology would lead us to believe, and that physics, the language of order and measure, can provide a conceptual foundation for the duality and can show that it exists even within the digital computer itself. When you go beneath FORTRAN, LOGO, and the other higher level languages, you find yourself manipulating ones and zeros in the memory of the host computer.†

† An elaboration of a talk given at the Southern California Chapter of the American Association of Physics Teachers (AAPT), 9 April 1983.


This one-dimensional closed universe is the most fundamental level of the computer, from a programming as well as from an information processing point of view. In fact, the computer display itself is just a window into memory: the video game creatures or the words that shuffle across the screen are simply numbers shuffling through memory. We will argue that computer memory has two distinct orders, an explicate order and an implicate order, and that these complementary orders express themselves in all higher level functions of the computer.
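To make the "window into memory" concrete, here is a minimal sketch in Python. The 64K size matches the microcomputers discussed below; the screen base address and width are invented for illustration and do not correspond to any particular machine.

```python
MEM_SIZE = 65536                        # a 64K one-dimensional universe
SCREEN_BASE, SCREEN_LEN = 0x0400, 40    # hypothetical displayed region

mem = bytearray(MEM_SIZE)
mem[SCREEN_BASE:SCREEN_BASE + SCREEN_LEN] = b" " * SCREEN_LEN

def refresh():
    # The display is a pure mapping from a fixed slice of memory.
    print(mem[SCREEN_BASE:SCREEN_BASE + SCREEN_LEN].decode("ascii"))

# "Numbers shuffling through memory" appear as characters on the screen.
for i, ch in enumerate(b"HELLO"):
    mem[SCREEN_BASE + i] = ch
refresh()                               # -> HELLO
```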

The explicate order

What is actual is actual only for one time
And only for one place
T. S. Eliot

Explicate order is a term used by theoretical physicist David Bohm to describe the cosmology of classical physics (Bohm, 1980). In particular, this world view is characterized by the use of Cartesian coordinates. The general characteristic of the explicate order is the analyzability of everything into separate parts (particles) that ultimately can be related to the Cartesian grid.

The Cartesian grid has become almost a symbol of the computer age. Everywhere we look, we are bombarded with High Tech interiors, video games, television commercials, magazines, and movies that feature grid after grid to the point of distraction. Anyone who has been disappointed by the jagged appearance of circles and curved lines on his or her home computer knows the reason why: the Cartesian structure of the computer screen provides an environment ideally suited to drawing grids, not curves. But the screen is only a reflection (technically, a mapping) of the underlying memory. So the Cartesian structure of the screen is a derivative of the Cartesian structure of the memory, to which we now turn our attention.

One important distinction should be made before we begin. The physical structure responsible for memory capability in a computer is the silicon chip. We are not talking about this physical unit when we refer to computer memory. Rather, we are speaking of a higher level: the logical structure of memory as it appears to the programmer, or more generally, to the outside world. Although it is possible to relate the logical memory to the underlying physical memory (the chips), we are not doing so at this time.

A microcomputer typically has 64K of memory, where 1K equals 1024 words. A word in this context is one byte (8 bits) of data. Each byte has an associated address that serves to order the entire memory into a one-dimensional Cartesian space. All programs and data within the computer are linear strings of bytes that inhabit this closed universe. Typically, different regions of this space are allocated for different uses. For example, in the Apple II, the higher regions (that is, memory associated with large addresses) are used to store the BASIC interpreter and the operating system programs, while the lower area of memory is available for user programs and data.

In a textbook, information is also presented along a one-dimensional Cartesian grid. In other words, the textbook is a textbook example of the explicate order. It is the corresponding explicate order of the computer memory that is chiefly responsible for the ability of the computer to impersonate a textbook. When a sentence appears on the screen, it is stored in a contiguous area of memory, one byte of data corresponding to one alpha-numeric character. This correspondence prompts us to associate the explicate order with the conventional left side of the brain. More specifically, the explicate order of computer memory provides a medium that can reflect the verbal, analytic aspect of our intelligence.
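A sketch of this explicate order, continuing the toy model above. The text region address is hypothetical, though the one-byte-per-character, consecutive-address storage is exactly as described.

```python
mem = bytearray(65536)

TEXT_BASE = 0x0800                      # hypothetical text region
sentence = b"TO BE OR NOT TO BE"
mem[TEXT_BASE:TEXT_BASE + len(sentence)] = sentence

# Each character sits at its own address on the one-dimensional grid,
# recoverable from that address alone: pure explicate order.
for offset in range(len(sentence)):
    assert mem[TEXT_BASE + offset] == sentence[offset]

print(mem[TEXT_BASE:TEXT_BASE + len(sentence)].decode("ascii"))
```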


The explicate order of the textbook and computer-as-textbook is responsible for certain intrinsic limitations of these ‘left brain’ media: all thoughts, concepts, and ideas are analyzed into words and ultimately into single characters or bytes and then strung out on a linear grid. More subtle and profound issues aside, this represents a bottleneck that is increasingly inadequate in the face of the present information explosion. More importantly, it is becoming increasingly clear that a textbook education which addresses only our ‘explicate side’ is incomplete. One of the main purposes of this paper is to provide additional supporting evidence for this perceived imbalance by showing that, unlike the textbook, the computer has another side that is capable of reflecting the integrative aspect of our intelligence.

The implicate order

To see a World in a Grain of Sand
. . . and Eternity in an hour
William Blake

Implicate order is a term used by David Bohm to describe a cosmology suggested by relativity and quantum theory. Whereas the distinguishing feature of explicate order is discreteness and separateness, implicate order is characterized by a particular holographic interrelatedness: the whole is enfolded into each part, and each part is enfolded into the whole. If the particle is a concept from the explicate order, then the wave, the medium responsible for the hologram, is closer to the implicate order. Although it is controversial to what extent spacetime and mass-energy exhibit implicate orders, that should not prevent us from recognizing that other structures may have implicate order. In particular, a computer memory has both an explicate order and an implicate order.

Consider two separate points, A and B, in the one-dimensional Cartesian space that is the memory of a computer. It is easy to write a simple program at point A that reads or changes what is stored at point B. Likewise, you can place a program at point B to read or change what is stored at point A. More generally, a program at any point in the memory has access to information from all other points in the memory. One could say that the whole memory is enfolded into each part, and each part is enfolded into the whole of memory. And so, by definition, the memory has an implicate order.

As we will presently show, the implicate order of computer memory is chiefly responsible for the ability of the computer to generate animated graphics. For this reason, we can associate the implicate order of computer memory with the conventional right side of the brain. More specifically, the implicate order of computer memory provides a medium that can reflect the visual, integrative aspect of our intelligence.
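The A-and-B argument can be played out directly in the toy model. PEEK and POKE are the traditional BASIC names for these primitives; the two addresses are arbitrary.

```python
mem = bytearray(65536)

def peek(addr):             # read any point in the one-dimensional universe
    return mem[addr]

def poke(addr, value):      # write any point in the one-dimensional universe
    mem[addr] = value & 0xFF

A, B = 0x1000, 0x9000       # two widely separated points

poke(B, 42)                 # code "at A" reaches across memory to B...
assert peek(B) == 42        # ...unaffected by anything stored in between
poke(A, peek(B) + 1)        # and B's contents can likewise flow back to A
print(peek(A))              # -> 43
```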


Consider a typical video game such as PAC-MAN. All of the action takes place in the memory of the computer. PAC-MAN himself is just a pack of numbers shuffling through memory. He is visible through the device of memory-mapped display: the glowing video screen is a window into memory. Furthermore, he is merely a puppet of the video game program that resides off-stage in an undisplayed area of memory. Any life that PAC-MAN exhibits must be attributed to the unseen game program, in the same way that any life exhibited by Punch and Judy must be attributed to the puppeteer who resides off-stage in an undisplayed area of the play house. The puppeteer is able to mimic self-locomotion by manipulating otherwise inanimate objects on the stage. In exactly the same way, the video game program is able to mimic self-locomotion by manipulating otherwise inanimate data in the graphics window. But the puppeteer uses strings, and PAC-MAN, like Pinocchio, has no strings attached. In other words, the puppeteer works through an explicate order, while the game program works through an implicate order.

It is important to note that both the puppeteer and the game program are working locally. The main difference is that the connection between puppet and puppeteer can be interrupted by objects or events in the intervening space (the strings can be cut, for example), while the connection between PAC-MAN and the game program is not affected by any objects (other programs and data) or events (reads and writes) in the intervening memory. A more technical way to describe the difference is to note that the effect of the puppeteer propagates from point-to-point within the explicate order of the imbedding Cartesian space (along the strings), while the effect of the game program can only be described as action-at-a-distance within the imbedding Cartesian space of memory. More specifically, the game program and PAC-MAN are synchronized: whereas the effect of the puppeteer on the puppet is delayed in direct proportion to the amount of intervening string, the effect of the game program on PAC-MAN is independent of the amount of intervening memory.
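A toy rendition of this puppetry, under the same assumptions as before (invented addresses, a 40-byte display window): the "game program" is code residing outside the window that animates a creature existing only as bytes inside it.

```python
mem = bytearray(65536)
BASE, WIDTH = 0x0400, 40            # the displayed window (hypothetical)
mem[BASE:BASE + WIDTH] = b"." * WIDTH

def show():
    print(mem[BASE:BASE + WIDTH].decode("ascii"))

def step(pos):
    """The 'game program': off-stage code animating on-stage bytes."""
    mem[BASE + pos] = ord(".")      # erase the creature...
    pos = (pos + 1) % WIDTH
    mem[BASE + pos] = ord("C")      # ...and redraw it one cell over
    return pos

pos = 0
mem[BASE + pos] = ord("C")
for _ in range(3):                  # three frames of mimicked self-locomotion
    show()
    pos = step(pos)
```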

The hardware

Talk of synchronous connection would be unnerving to a classical physicist. Steeped in the explicate order, he would probably begin a search for 'hidden variables', that is, for explicate mechanisms within the imbedding Cartesian space that would save the phenomena. But there are no such mechanisms that operate within the logical structure of memory.

A quantum physicist might approach the problem from a more sophisticated perspective. Familiar as she is with the notion of the 'collapse of the wave function' and other quantum concepts, she probably is somewhat less unnerved by talk of synchronous connection. Moreover, she probably would be less inclined to look for hidden variables because of the mandate of quantum theory to forsake such notions and to deal with the observables: the programs, data, and read-write events that are the sum total of substance and activity in the computer from an information processing perspective.

The key to reconciling these two perspectives is found in the notion of hidden context. A computer is a multilevel device. Each level has an autonomy and a self-consistency that is independent of the autonomous and self-consistent underlying levels. In the language of computation, we say that the underlying levels are transparent to the higher levels: they simply are not observable. For example, the BASIC programmer does not need to know anything of the underlying machine language. Likewise, the machine language programmer can be totally ignorant of the underlying hardware. One does not have to know anything about computers to play PAC-MAN. So when our hypothetical quantum physicist forsakes hidden variables to work with observables in the computer memory, she is, from the point of view of computer science, merely choosing to describe and understand an autonomous and self-consistent level within the computer. And if this level is characterized by synchronous connection operating through an implicate order, so be it.

Our hypothetical classical physicist also has a valid position. There are no local hidden variables, mechanisms operating within a given level that can reduce the synchronous connection to a causal chain of events that propagates from point-to-point within the explicate order of that level. But there are global hidden variables, mechanisms operating within the hidden context of a lower level that reduce the synchronous connection in a higher level to a causal chain of events that propagates from point-to-point within the explicate order of the lower level. More specifically, there are electrons and conductive wires within the hidden context of the underlying hardware that reduce the synchronous connection in the logical memory to a causal flow of electricity that propagates from point-to-point within the explicate order of the physical memory.


Each byte is stored at physical sites within the silicon chips, and each site is connected to all other sites by electronic pulses traveling along conductive channels in the main circuit board of the computer. In other words, the hardware is wired together in a one-to-many manner.

For an insight into the global character of the hidden hardware variables, consider the Apple IIe. If a single RAM memory chip is removed from the motherboard, one bit in each and every memory location is disabled. In other words, a local manipulation in the hardware domain produces a global effect in the logical memory domain.

Before discussing the implicate order of computer memory within wider contexts or levels, we should try to disentangle the levels within the computer itself: hardware, memory, and programs. Strictly speaking, the logical structure of computer memory, in and of itself, exhibits only an explicate order. Only by populating this space with 'living' programs does the implicate order of the space emerge. We are reminded of the Buddhist metaphor of Indra's Net, which becomes Indra's Necklace for the purposes of the present discussion. Imagine each memory location to be a rough, uncut stone in a necklace of similar stones. Such a system has only an explicate order: each stone is discrete and is separate from the other stones. On the hardware level, this corresponds to a set of memory chips and interconnections (called a memory board) before it has been plugged into the host computer. Returning to the necklace analogy, imagine each stone to be given an adamantine polish. At once, the whole necklace is reflected in each jewel, and each jewel is reflected throughout the whole necklace. And so we see the implicate order of Indra's Necklace. On the hardware level, this corresponds to plugging the memory board into the host computer. At this point programs can be introduced into the logical structure of memory, thereby 'polishing' each memory location, that is, allowing the whole of memory to be reflected (or read) by a program residing at any point.

The main difference between the polished necklace and the programmed memory is that the implicate order of Indra's Necklace derives from mechanisms (light waves) operating within the level inhabited by the jewels, while the implicate order of computer memory derives from mechanisms (electron flow) operating within a lower level. This is the difference between local and global hidden variables. These terms, to a certain extent, are misnomers. 'Local hidden variables' could be called 'overlooked variables', for they reside within the level in question, and 'global hidden variables' could be shortened to 'hidden variables', for they are the ones that hide in the hidden context of a lower level.

It is interesting that recent experimental work in quantum physics (Rohrlich, 1983) parallels the preceding discussion to a certain extent. These experiments provide conclusive evidence against local hidden variables and in favor of standard quantum mechanics. However, these experiments do not rule out global hidden variables. Nevertheless, most physicists find global hidden variables an unnecessary complication and tend to use Occam's Razor to remove them from discussion. This is appropriate within the present context of physics, if physics is defined as the study of a single level of perception or participation within the universe. But the universe of computation and information is a multi-leveled reality, and here the concept of hidden variables comes into its own as a clarifying and organizing principle.
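The Apple IIe observation above suggests a simple model, sketched here with no claim to circuit-level accuracy: physical memory as eight one-bit "chips", each holding one bit of every logical byte. The logical level sees only whole bytes; the chip level is its hidden context.

```python
SIZE = 65536
chips = [bytearray(SIZE) for _ in range(8)]   # chip i stores bit i everywhere

def write(addr, value):                       # logical byte -> physical bits
    for i in range(8):
        chips[i][addr] = (value >> i) & 1

def read(addr):                               # physical bits -> logical byte
    return sum(chips[i][addr] << i for i in range(8))

write(0x1234, 0b10101010)
chips[1] = bytearray(SIZE)                    # "pull one RAM chip": bit 1 dies
print(bin(read(0x1234)))                      # -> 0b10101000, bit 1 lost
# The same bit is now disabled at every address: a local act in the hardware
# domain is a global effect in the logical memory domain.
```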

Computation as metaphor

Not long after the advent of modern quantum theory, Sir James Jeans remarked that the universe was beginning to look more like a great thought than a machine. This is increasingly true of the computer as well. Although our discussion so far concerns levels of organization


within a digital computer, the fact that it is possible to express the arguments in the form of a dialogue between classical and quantum physicists is suggestive. The computational metaphor provides a theoretical framework for a discussion of process, and process is the subject matter of modern physical theory.

To make these ideas more concrete, we suggest the following metaphor. Perhaps, in the spirit of the Walt Disney film TRON (1982), we inhabit the memory of a 'cosmic computer', our perception restricted to a limited range of 'programming levels'. The laws of physics, in particular quantum theory, would emerge as the result of the structure and order of the cosmic computer. This analogy may appear a bit far-fetched and more than a little vague, but it can yield some interesting insights on further analysis. In an attempt to make sense of this metaphor, we might try to identify the physical universe with the cosmic computer, but it is more fruitful to consider the universe to be a hardware substrate on which we create a logical computer through our subjective definitions of space, time, matter and number - free creations of the human mind, as Einstein would call them. Having thus explicated the architecture of the logical computer (declarative knowledge), the laws of quantum mechanics would be implicated as procedural knowledge.

Something like this already has happened in the non-cosmic computer. Recent work in theoretical computer science by Manthey and Moret (1983) has 'shown how basic concepts of physics such as relativity, exclusion, uncertainty, nondeterminism, and conservation of momentum and quantum numbers are naturally encompassed by the computational metaphor'. These researchers suggest that this correspondence between physics and computation is capable of further elaboration. In particular, they are interested in refining the correspondence with respect to the dual nature (wave and particle) of quantum phenomena. We suggest that the implicate-explicate duality of computer memory may not be unrelated.

David Bohm has said that his work does not represent an idea, only an idea for an idea (a meta-idea). To the extent that the computational metaphor can provide a context for a discussion of some of the subtler issues of quantum theory, we are perhaps witnessing the emergence of an idea.

Information and the implicate order

Bohm developed his notion of the implicate order through his study of quantum mechanics. And the waves of quantum mechanics are essentially waves of information. Therefore it is not surprising that information itself has both explicate and implicate order.

The explicate order of information is the textbook order that we have discussed. Information can be strung out on a line, encoded into English, and transmitted in book form. Likewise, information can be strung out on a line, encoded onto a carrier wave, and transmitted through space. In the first example the information rides on the back of a material medium, and in the second example, it rides on the back of radiant energy. So it appears that the explicate order of information derives from the explicate order of the mass-energy medium that ferries it about the universe. But the medium is not the message. More specifically, although information rides on the back of matter, it is not the same as matter. Therefore, the orders, measures, and structures that we have found useful in our description of matter may be of limited value when we begin to study information.

The implicate order of information is related to the notion of meaning and context. Information transmitted by any message is highly dependent on context. For example, the word 'die' in and of itself does not transmit any specific message, or more accurately, it transmits all the different meanings that can be found in dictionaries.


Information theorists would say that this word has a high entropy. Placing 'die' in the sentence 'The die is cast' lowers the entropy considerably, for now there are only two possible meanings: either 'die' is a kind of tool for imparting a desired shape, or it is a small cubical gaming device, one of a pair of dice. Only by relating the sentence to the larger context of the foundry or casino can the entropy be lowered to the extent that the word 'die' has a meaning and carries information. (Actually, the meaning of the phrase 'The die is cast' emerges only by relating it to a larger cultural context in which the literal meaning of the words is transcended by metaphorical usage.)

So there is no information without context; all parts of a message contribute to the meaning of each part. Stated another way: the whole is enfolded into each part, and each part is enfolded into the whole. And so we see the implicate order of information. In contrast with the implicate order of computer memory, the implicate order of information appears to be intrinsic, that is, it is not reducible to hidden explicate mechanisms. The strings that bind together the meaning of a message are figures of speech, not features of space.

Notice that the implicate order of information is concerned with wholes rather than parts, with synthesis rather than analysis, with the forest rather than the trees. We could say, therefore, that the implicate (implicit) side of information is concerned with the 'big picture', with context, and so we see a strong connection to the conventional right side of the brain. At the other end of the spectrum is the explicate order of information, which is concerned with parts rather than wholes, with analysis rather than synthesis, with the trees rather than the forest. We could say, therefore, that the explicate (explicit) side of information is concerned with the 'bare facts', and so we see a strong connection to the conventional left side of the brain.
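The 'die' argument can be made quantitative. Assuming, purely for illustration, that each sense of a word is equally likely (and inventing a count of a dozen dictionary senses), the Shannon entropy H = -Σ p·log₂(p) shows how context lowers entropy:

```python
from math import log2

def entropy_bits(n_senses):
    # For n equally likely senses, -sum(p * log2(p)) reduces to log2(n).
    return log2(n_senses)

print(entropy_bits(12))  # 'die' in isolation: a dozen senses   -> ~3.6 bits
print(entropy_bits(2))   # 'The die is cast': tool or game cube -> 1.0 bit
print(entropy_bits(1))   # foundry/casino context: one meaning  -> 0.0 bits
```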

Self-reference and the implicate order

The term holographic is more a generic label than a precise term. From the point of view of physics, there are at least two separate meanings, both implying 'writing the whole'. But what is written, and where? If the whole of one domain is written to each part of another domain, then we have two-domain holographic order. This is the type that characterizes the hologram, in which all points in the scene (domain 1) illuminate each point on the photographic plate (domain 2). This explains how it is that each fragment of a shattered hologram can regenerate one view of the scene.

If the whole of one domain is written to each part within the same domain, we have one-domain holographic, or implicate, order. This is the type that characterizes both computer memory and information. Notice that implicate order also could be described as self-holographic, for a domain exhibiting implicate order is a self-referential domain: each part refers to all other parts. The self-referential character of computer memory is expressed in all higher level functions of the computer.
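A sketch of this one-domain, self-referential order in the toy memory model: the "program" below is itself bytes in the space it manipulates, so its own cells can be read and rewritten like any others. The three-byte instruction format is made up for the example.

```python
mem = bytearray(65536)
PROG = 0x2000                               # hypothetical program origin

# One instruction, stored as data: (opcode 1, page, value) means
# "write value to address page * 256".
mem[PROG:PROG + 3] = bytes([1, 0x04, ord("A")])

def run(pc):
    op, page, val = mem[pc], mem[pc + 1], mem[pc + 2]
    if op == 1:
        mem[page << 8] = val

run(PROG)
print(chr(mem[0x0400]))                     # -> A

# The program's own bytes are ordinary addressable data, so code at any
# point in memory (including this instruction itself) can be rewritten:
mem[PROG + 2] = ord("B")                    # patch the instruction's operand
run(PROG)
print(chr(mem[0x0400]))                     # -> B
```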


Self-reference is a key feature of the computational metaphor. This is one of the main themes of Douglas Hofstadter's Pulitzer Prize-winning book Gödel, Escher, Bach (1979). Hofstadter believes that the field of artificial intelligence will benefit from a more complete understanding of self-reference and the related subject of recursion. But what about natural intelligence? Hofstadter has said the trait that makes us different from lower animals is our ability to step outside the system. This ultimate creative act requires self-reference: we have to, in some sense, look at the system in which we are imbedded in order to transcend that system. Einstein looked closely at his Newtonian world view in order to step outside the system of absolute time. Self-reference is at the heart of all paradigm shifts. Paradoxically, although present-day digital computers are steeped in self-reference, they cannot step outside their own operating systems. Be that as it may, the important point is that the computer can help us to step outside of ours, that is, the computer can help us learn how to learn.

There is a paradigm shift that a novice must experience in order to begin to understand and effectively use the computer. Before the shift, the computer is confusing and alien. After the shift, everything starts to fall into place. The transition begins when the user discovers he can solve certain problems only by recognizing that he is interacting within the context of a particular system. Consider the following hypothetical but typical dialogue between two users:

Novice: I type 'LOAD PROGRAM' but all I get is a syntax error.
Hacker: What are you trying to do?
Novice: I'm trying to run my program, and the BASIC manual says first type 'LOAD PROGRAM'. I don't understand.
Hacker: But you're not in BASIC!
Novice: What do you mean?
Hacker: Apparently you didn't load in the system called BASIC, and so you find yourself interacting with the operating system. And the OS doesn't understand any of your BASIC commands.
Novice: What do I do now?
Hacker: You have to put away your BASIC manual and take out the OS manual. If you still want to run your program, the OS manual will tell you how to load BASIC. On the other hand, you might want to play around in the OS for a while before entering BASIC.

Novice: Thanks.
Hacker: That's okay, kid. Just remember that it's a jungle in there until you learn the System.

And this System is not any particular system; rather, it is a system that enables you to step outside any particular system. The experienced Hacker has learned to isolate, evaluate and transcend context. Just as a meta-theorem is a theorem about theorems (like Gödel's famous theorem), we have here an example of a meta-system: a system that refers to systems. Likewise, the threshold that separates the novice from the experienced user is a meta-paradigm shift, for it is a paradigm shift that can generate further paradigm shifts. It should be noted that the meta-shift does not happen all at once, nor is it ever finished. It is the nature of this meta-paradigm shift to be recursive and therefore evolutionary: if you have a system that allows you to step outside the system, then you can use that system to step outside itself.

The process begins as soon as you sit down at the keyboard. The computer by its very nature presents one problem after another whose solution requires the novice to step outside the system. This is reminiscent of the technique of the Zen master who presents riddles, or koans, whose solution requires the student to step outside his system of thought. The moment of insightful solution, or satori, is the goal of the exercise, not the solution itself.

The key to this process is interaction and motivation. Flipping from one television station to another is a form of stepping outside the system. Moreover, television can be self-referential and can present the viewer with koan-like puzzles. But educational television does not have the power of an educational computer because it is a non-interactive medium. Television forces us to be observers. A computer invites us to be participators. It is interesting that this is also the distinction between classical and quantum physics.

Mindstorms

Experts have predicted for decades that the computer revolution will change education in profound ways. On the other hand, the nature of the changes and the route to achieving these


changes have begun to emerge only recently. Perhaps the most important feature of this emerging paradigm is that children, in the normal course of events leading to computer literacy, can learn how to learn. And part of this process derives from the self-referential nature of the computer, which constantly invites the user to isolate, evaluate, and transcend context (the meta-shift).

Learning to learn is discussed with particular clarity and power in Mindstorms by Seymour Papert (1980), who also warns that some of the routes to computer literacy currently being adopted by the schools do not promote the learning of learning. Papert's unique statement derives from his unique perspective: he worked for many years with Jean Piaget at the Center for Genetic Epistemology in Geneva and with Marvin Minsky at the Artificial Intelligence Lab at MIT.

A culture which has learned how to learn is qualitatively different from a culture in which this perspective is found only infrequently. We are living in a transition period between the two cultures. We read in the newspapers of twelve-year-old 'computer geniuses' who have started their own companies. But we are seeing only the tip of the new culture. So far only a very small percentage of children have been interacting with computers, and only for a few years. Imagine the explosion of 'computer genius' that will occur when most children interact with these devices starting from infancy.

Implications: high tech-high touch

Arthur C. Clarke is a prophet of communications and information. He originated the notion of the communications satellite more than a decade before Sputnik. His intuitions, however, go beyond the hardware into the domain of cultural evolution. In his novel Childhood's End (1953) he tells of a generation of children that has powers of communication far beyond their parents'. Gradually, their numbers increase until one day they simply vanish from the face of the earth, moving on to the next level of evolution. The earth, in turn, having fulfilled its manifest destiny, vanishes in a blinding flash of self-annihilation.

Childhood's End was a major influence on Stanley Kubrick's film 2001: A Space Odyssey (1968). However, the theme of transformation is handled in a complementary way. Rather than dealing directly with the grand scheme of social evolution, the film depicts the transformation of the individual. The protagonist, through his encounter with a new world revealed by high technology, is transformed into the Starchild who returns to the awaiting earth, heralding a new order. Both of these 'scenarios' have something to say for the actual year 2001: although it is clearly the manifest destiny of the next generation to enter a world that we can only glimpse, the present generation can be transformed to a certain extent by the new high technology.

Business consultant John Naisbitt, author of Megatrends (1982), brings these ideas down to earth with his notion of 'high tech-high touch'. He sees the growth of high technology and the concurrent human potential movement as not unrelated: as technology dehumanizes our environment, we explore and try to enhance our own humanity.

There is another way to look at the high tech-high touch phenomenon. The dehumanizing aspect of technology is not something new. We have been surrounded by machines for hundreds of years, and so it is not surprising that we have emulated them. The important point is that in the latter half of the twentieth century our machines are developing a new character. Whereas a machine like the automobile can be engineered from the ground up entirely in terms of explicate orders, the computer and other information processing devices require both orders for a complete understanding. To paraphrase Jeans, our technology is beginning to look more like a great thought than a machine.

Marshall McLuhan (1964) has said that when a new medium emerges, it is used to do


conventional things. Case in point: the steam engine originally was used to pump water out of mines. Only later did it occur to someone to add wheels and lay down tracks; only then could the steam engine come into its own. And so it is with the computer. Originally our fragmented culture used (and still uses) the information technology to continue the conventional fragmentation. This is symbolized for me by the first educational software, which merely reflected the conventional, explicate textbook. But we have seen that the computer is capable of reflecting, in addition to fragments, a more ecological perspective. And it is the recognition of this aspect of the medium that gives the computer wheels, and enables us to lay down the tracks that will allow the computer to come into its (high touch) own.

We can see evidence of this 'high touch' in the number of computer clubs and networks that have sprung up in recent years. Although at this time these organizations are rather narrow in scope, they promise to grow and change to include a wider cross section of society (from Papert's 'samba clubs' to McLuhan's global village). As a result, we will see a greater effect of the community on the individual, and in turn, the individual will have a greater effect on the community. And so we see the implicate order of society.

This theme has been developed into a coherent and detailed vision of the future by Yoneji Masuda in his book The Information Society as Post-Industrial Society (1981). He sees all human activity taking place within the context of specific fields. An industrial society conducts its business within the field of geographical space, while a post-industrial, information-oriented society conducts much of its business within the field of information space, the abstract space of information networks (see Figure 1). Whereas geographical space is a medium exhibiting only an explicate order, information space is a medium exhibiting both orders. So Masuda's vision of a synergistic 'computopia' springs from his intuition that the implicate order of information space will produce a more implicate culture. We think that Masuda would agree that a similar intuition prompted McLuhan to say that electronic circuitry is orientalizing the West.

Fig. 1. (a) Field of geographical space. (b) Field of information space. This is taken from Masuda's figure 7.1. In (a) the dotted lines emphasize the separateness of geographical space by showing that the elements interact directly only with a limited number of nearest neighbors (explicate order). In (b), the lines emphasize the interconnectedness of information space by showing that each element is joined by equivalent linkages to all other elements (implicate order).
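Masuda's two fields can be sketched as graphs (the function names are ours, not Masuda's): geographical space links each element only to its nearest neighbors, while information space links every element to every other.

```python
def geographical(n):
    """Explicate order: element i interacts only with i-1 and i+1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def informational(n):
    """Implicate order: each element is joined to all other elements."""
    return {i: set(range(n)) - {i} for i in range(n)}

print(sorted(geographical(5)[2]))    # -> [1, 3]
print(sorted(informational(5)[2]))   # -> [0, 1, 3, 4]
```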

The computer as Rosetta stone

Science to a large extent has been a search for explicate orders in the flow of events. This has been true since the beginning of the modern era when Descartes invented his grid. And because science has made over society in its own image, we find ourselves living in an explicate world. But we are also living in an era in which we are discovering the utility of
looking for implicate orders in the flow of events (e.g. the ecology movement). And this is having, and will continue to have, profound implications for society. Just as the Renaissance and the discovery of the explicate order go hand-in-hand, we can expect another rebirth to be associated with the recognition of the implicate order. And so this paper has come full circle, for it is generally recognized that the computer is an integral component of a new wave of change and creativity that promises to lift society out of its collective doldrums. The Aquarian Conspiracy by Marilyn Ferguson (1980) and The Turning Point by Fritjof Capra (1982) are recent books that document this emerging social paradigm. Both authors point out that we are in a transition period in which an outmoded, fragmented (explicate) worldview is being replaced by a more balanced perspective that is characterized by a greater awareness of the ecological (implicate) side of things. And both authors agree that a more balanced educational system will be a part of the new order.

But we are not there yet. For the most part we see the world through explicate glasses. We speak the textbook language of the explicate order and are just now voicing some simple phrases in the language of the implicate order. We are like archeologists at the discovery of the Rosetta stone. They could understand Greek but could not decipher Egyptian hieroglyphics. The discovery of the Rosetta stone, a tablet on which the same message was inscribed in each of the languages, was the breakthrough that eventually made the archeologists bilingual. The computer is a kind of Rosetta stone, for inscribed into its memory are both the explicate and the implicate languages. An educational system based on the computer will, therefore, be more balanced than the present one, which is based on the explicate textbook. And so the computer will help us become bilingual: the left brain will see the explicate language of the computer and will learn of the right brain in his own explicate terms, while the right brain will see the implicate language of the computer and will learn of the left brain in her own implicate terms. Marshall McLuhan was right: the Medium is the Message.

References

Bohm, D. (1980). Wholeness and the Implicate Order. London: Routledge & Kegan Paul.
Capra, F. (1982). The Turning Point. New York: Simon and Schuster.
Clarke, A. C. (1953). Childhood's End. New York: Random House.
Ferguson, M. (1980). The Aquarian Conspiracy. Los Angeles: J. P. Tarcher.
Hauser, J. & Brock, E. (1982). TYPE ATTACK. Sacramento, CA: Sirius Software.
Hofstadter, D. (1979). Gödel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books.
Jeans, Sir J. (1930). The Mysterious Universe. Cambridge: Cambridge University Press.
Manthey, M. & Moret, B. M. E. (1983). The computational metaphor and quantum physics. Comm. Ass. Comput. Mach. 26(2), February, 137–145.
Masuda, Y. (1981). The Information Society as Post-Industrial Society. Institute for the Information Society.
McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Naisbitt, J. (1982). Megatrends: Ten New Directions Transforming our Lives. New York: Warner Books.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.
Robinett, W. & Grimm, L. (1982). ROCKY'S BOOTS. Palo Alto, CA: The Learning Company.
Rohrlich, F. (1983). Science, September 23.