
JOURNAL OF MATHEMATICAL BEHAVIOR 13, 241-253 (1994)

Consider the Particular Case*

ROBERT W. LAWLER
Purdue University

The analysis of particular problems for the application and illumination of principles has long been a central activity in the physical sciences. The attempt to take guidance for the human sciences from the physical sciences has often been unconvincing and subject to criticism. Instead of borrowing notions from the physical sciences, I reflect here on the process of problem solving in a particular case and from that process abstract objectives, methods, and values that can help us identify and solve our own problems and judge the value of those solutions. My aim is not to develop a single, universal method from this example. It is, rather, an analysis of how we can proceed to conclusions of interest in which we can have confidence. I begin with a focus on the importance of analyzing particular cases. I make use of a particularly illuminating description by Feynman of a complex physical effect as a concrete example of a specific form of analysis. I use that worked example to illuminate the meaning of research I have done. I believe that this comparison is useful in understanding epistemological analyses based on computational modeling.

This article is based on work and ideas developed over many years at diverse places. It was my privilege to meet Feynman in the past when I studied at Caltech. Years later, I went to MIT to study with Minsky, directed to him by hearing from a classmate of my student days of Feynman's interest in Minsky's work. I undertook case studies of learning in Papert's laboratory at MIT, guided by his appreciation of Piaget and the intellectual program represented now by Minsky's book The Society of Mind, earlier a joint effort of Minsky and Papert. The influence of Papert, as epistemologist, remains deep here in specific ideas. On the suggestion of Minsky, I worked with Oliver Selfridge as an apprentice at computational modeling. Oliver's contributions to this work are pervasive; the first publication about SLIM bears his name as well as my own. Bud Frawley showed me how to do the recursive game extension at the heart of SLIM. Sheldon White and Howard Gruber encouraged my case studies and showed me links between my own intellectual heroes and others (Vygotsky and Lewin). Discussions over several years with Howard Austin, Robert B. Davis, Wallace Feurzeig, and Joseph Psotka have left their imprint on my work. Arthur Miller's criticisms of an early draft and suggestions for revision were always cogent and often helpful. Jim Hood, Mary Hopper, Gretchen Lawler, and Mallory Selfridge were kind enough to read early versions of this work and suggest useful directions for further development. No surprise then that the climate of MIT's AI Lab, GTE's Fundamental Research Lab, the Smart Technology Group at the Army Research Institute, and now Purdue have each made their special contribution to this article.

*This article presents parts of a longer chapter, "On the Merits of the Particular Case," appearing in Lawler and Carley (in press). Correspondence and requests for reprints should be sent to Robert W. Lawler, Liberal Arts and Education, Purdue University, West Lafayette, IN 47907.


My primary source for the physics of this comparison is a series of popular lectures by Richard Feynman (1985) published in QED: The Strange Theory of Light and Matter.1 Feynman was a primary architect of Quantum Electrodynamics (QED) and was its advocate as the most thorough and profound of current physical theories. He approached his theme through a discussion of the reflection of light from a surface. Reflection is a familiar phenomenon generally believed to be well understood by educated laymen, both in terms of common sense ("the angle of reflection equals the angle of incidence") and at the academic level of classical physical theory. Feynman's analysis shows that classical theory does not account adequately for the phenomena known today; it even reveals what one might call "mysteries" unsolvable except through quantum analysis. Furthermore, the interpretation in terms of quantum electrodynamics directly permits the progressive deepening of layers of analysis to the level of atomic particle interactions.

There is a dilemma in classical electrodynamics. Light is both a wave and a particle. Evidence for the wave-like character of light is, first, interference phenomena where light from two sources cross; and second, the diffraction of light when passing through very small holes or slits. These experimental results, refined over hundreds of years from the time of Newton's original experiments, have been explainable only by the equations for waves interacting and "canceling out." The wave theory broke down in modern times when devices were developed sensitive enough to detect a single particle of light, a "photon." The wave theory predicted that the "clicks" of a detector would get softer as the light got dimmer, whereas the "clicks" actually stayed at full strength but occurred less frequently. No reasonable model could explain this fact. Feynman claims that quantum electrodynamics explains the phenomena of "wave-particle duality" without claiming to actually "resolve" the dilemma, that is, to reduce the explanation of the phenomena to an intuitive model based on experience in the everyday world. His description of this "resolution" characterizes his effort as typical of the practice of physics today:

The situation today is, we haven't got a good model to explain partial reflection by two surfaces; we just calculate the probability that a particular photomultiplier will be hit by a photon reflected from a sheet of glass. I am going to show you "how we count the beans"-what the physicists do to get the right answer. I am not going to explain how the photons actually "decide" whether to bounce back or go through; that is not known. (Probably the question has no meaning.) I will only show you how to calculate the correct probability that light will be reflected from glass of a given thickness, because that's the only thing physicists know how to do! What we do to get the answer to this problem is analogous to the things we have to do to get the answer to every other problem explained by quantum electrodynamics. (Feynman, 1985, p. 24)

1In the descriptions following, I attempt a succinct paraphrase of Feynman's text in his book.


Figure 1A.

A LITTLE MORE DETAIL

The probability that a particular photon will reflect off a glass surface is represented as a "probability amplitude" vector, which can be called an arrow. An arrow represents, for each photon, the probability that it will reflect from the glass surface. The "direction" of the arrow indicates in which direction the photon will move on reflection. This direction is a function of the time elapsed since the photon left the emitter; it spins around like a very quick clock hand. Feynman uses that representation in the analysis of reflection from a mirror represented in Figure 1A. A beam of light from the source (S) spreads across the surface of the mirror, and the photons that arrive at the photomultiplier (P) are represented by the paths in Figure 1A. Figure 1B graphs the time for a photon to travel the paths of Figure 1A. Figure 1C shows the vector addition of the amplitude vectors for all the possible reflection paths. What determines how long the final arrow is? Notice a number of things. First, the ends of the mirror are not important: there, the little arrows wander around and don't get anywhere. If one chopped off the ends of the mirror, it would hardly affect the length of the final arrow. The part of the mirror that gives the final arrow a substantial length is the part where the arrows are all pointing in nearly the same direction, because their time to reach the photomultiplier is almost the same.


Figure 1C.

The time is nearly the same from one path to the next at the bottom of the curve, where the time is least. Feynman concluded:

To summarize, where the time is least is also where the time for the nearby paths is nearly the same; that's where the little arrows point in nearly the same direction and add up to a substantial length; that's where the probability of a photon reflecting off a mirror is determined. And that's why, in approximation, we can get away with the crude picture of the world that says that light only goes where the time is least. So the theory of quantum electrodynamics gave the right answer-the middle of the mirror is the important part for reflection-but this correct result came out at the expense of believing that light reflects all over the mirror, and having to add a bunch of little arrows together whose sole purpose was to cancel out. All that might seem to you to be a waste of time-some silly game for mathematicians only. After all, it doesn't seem like "real physics" to have something there that only cancels out! (Feynman, 1985, p. xx)
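Feynman's procedure of "adding little arrows" can be imitated numerically, which makes the summary above concrete. The following sketch is mine, not Feynman's: the geometry, the number of paths, and the frequency constant are illustrative assumptions. One unit arrow (a complex phasor) is summed for each of many reflection points across a mirror, with each arrow's angle turning in proportion to the travel time of its path; comparing the resultant from the whole mirror with the resultants from its middle and its ends shows why only the neighborhood of the least-time path matters.

```python
import cmath
import math

def travel_time(x, source=(-1.0, 1.0), detector=(1.0, 1.0)):
    """Travel time (light speed taken as 1) for source -> mirror point (x, 0) -> detector."""
    sx, sy = source
    dx, dy = detector
    return math.hypot(x - sx, sy) + math.hypot(dx - x, dy)

def final_arrow(points, omega=150.0):
    """Add one unit 'arrow' per path; each arrow's angle turns in proportion to travel time."""
    return sum(cmath.exp(1j * omega * travel_time(x)) for x in points)

# Reflection points spaced evenly across the mirror from x = -1 to x = +1.
points = [-1.0 + 2.0 * k / 400 for k in range(401)]

whole = final_arrow(points)
middle = final_arrow([x for x in points if abs(x) < 0.25])   # around the least-time path
ends = final_arrow([x for x in points if abs(x) >= 0.25])    # the chopped-off ends

print(abs(whole), abs(middle), abs(ends))
# The roughly 100 arrows near the least-time path point in nearly the same direction
# and produce a resultant about as long as the whole mirror's, while the roughly 300
# arrows from the ends turn rapidly and largely cancel, adding up to much less.
```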

REFLECTIONS ON THE EXAMPLE FROM FEYNMAN'S ANALYSIS

Although there are limitations inherent in studies of particular cases, Feynman's analysis shows how solving such specific problems is central to the detailed analysis necessary for illuminating the explicit meaning of general principles. One can argue that the particular problems frequently come first, and that from their solutions general principles emerge, which in turn are finally comprehensible when applied through subsequent models. Particular cases provide guidance in both the formulation and comprehension of general laws. This process seems similar to Bourbaki's description of how a mathematician creates an axiom-based system.2

2"A mathematician who tries to carry out a proof thinks of a well-defined mathematical object, which he is studying just at this moment. If he now believes that he has found a proof, he notices then, as he carefully examines all the sequences of inference, that only very few of the special properties in the object at issue have really played any significant role in the proof. It is consequently possible to carry out the same proof also for other objects possessing only those properties which had to be used. Here lies the simple idea of the axiomatic method: instead of explaining which objects should be examined, one has to specify only the properties of the objects which are to be used. These properties are placed as axioms at the start. It is no longer necessary to explain what the objects that should be studied really are." (N. Bourbaki, quoted in Fang, 1970, p. 69)

LEARNING THROUGH INTERACTION

It seems remarkable that something so familiar as reflection from a mirror opens a pathway into solving the deepest puzzles of physics, as in the case of wave-particle duality. And yet, why be surprised? Insights are usually a reconceptualization of familiar affairs. Learning is familiar also, something we have all experienced personally and see in others all the time. There are psychological reasons to believe that a case-based approach is better suited to understanding the character of learning than laboratory methods.3 Specifically, if one sees learning as an adaptive developmental mechanism, then that adaptation must come from interaction with the everyday world. One should look where the phenomena occur. Furthermore, if learning is a process of changing one state of a cognitive system to another, then representations of that process in computing terms should be expected to be more apt than in other schemes where the procedural element of representation might be less important.4 In relating case study details to computing models, one might use those details much as one uses boundary conditions to specify the particular form of a general solution to a differential equation. Such is the use made of the psychological studies here, as a foundation for the representations used in the models, and as justification for focusing on key issues: the centrality of egocentricity in cognitive self-construction and the particularity of the naive agent's knowledge. The virtue of machine learning studies is that they allow us no miracles; they can completely and unambiguously cover some examples of learning with mechanisms simple enough to be comprehensible. Must we claim that learning happens in people the same way? Not necessarily. Building such models is an exploration of the possible, according to a specification of what dimensions of consideration might be important. The computer's aid in systematically generating sets of all possible conditions helps liberate our view of what possible experiences might serve as paths of learning. Feynman asked what happened at all those other places on the reflecting surface where "the angle of reflection" doesn't equal "the angle of incidence." Similarly, when we can generate all possible interactions through which learning might occur-including some we first imagine are not important-we can explore those alternate paths and the suite of relationships among elements of the ensemble.

CONSIDERING ALL THE POSSIBILITIES

The analysis I choose to contrast with Feynman's began as a verbally formulated analysis of one child's learning strategic play at tic-tac-toe (Lawler, 1985).

3Under the banner of "Situated Cognition," education psychologists are beginning to recognize the validity of techniques and arguments long advanced by anthropologists and case study analysts. See Brown, Collins, and Duguid (1992).

4This position, as well as many related ideas, is set forth in a combined developmental and computational discussion in Minsky's (1972) Turing Award paper.


It was continued in a constructive mode through developing a computer-embodied model, SLIM (Strategy Learner, Interactive Model; see Lawler & Selfridge, 1986). The latter was based on search through the space of possible interactions between one programmed agent (SLIM), having some of the characteristics of the psychological subject, and a second, REO, a programmed Reasonably Expert Opponent. REO is "expert" only in the sense of being able to apply uniformly a set of cell preference rules for tactical play.5

Strategies for achieving specific forks are the knowledge structures of SLIM. Each has three parts: a Goal, a sequence of Actions, and a set of Constraints on those actions (each triple is thereby a GAC). I simulated the operation of such structures in a program that plays tic-tac-toe against variations of REO. Applying these strategies leads to moves that often result in winning or losing; this in turn leads to the creation of new structures, by specific modifications of the current GACs. The modifications are controlled by a small set of rules, so that the GACs are interrelated by the ways modifications can map from one to another.6

In order to evaluate specific learning mechanisms in particular cases, one must go beyond counting outcomes; one must examine and specify which forks are learned from which predecessors, in which sequence, and under which conditions of opponent cell preferences. The simulation avoided abstraction, in order to explore learning based on the modification of fully explicit strategies learned through particular experiences.7 The results are, first, a catalog of specific experiences through which learning occurs within this system, and second, a description of networks of descent of specific strategies from one another. The catalog permits a specification of two desired results: first, which new forks may be learned when some predecessor is known; and second, which specific interaction gives rise to each fork learned. The results obviously also depend on the specific learning algorithm used by SLIM.

Consider how SLIM can learn the symmetrical variation to one particular fork. Suppose that SLIM begins with the objective of developing a fork represented by the Goal pattern {1 3 9} and will proceed with moves in the sequence of plan [1 9 3] (see Figure 2).

5REO's preferences are those common in tactical play: first, win if possible; next, block at need; finally, choose a free cell, preferring the center cell first, then any corner cell, and finally a side cell. See Human Problem Solving, by Newell and Simon (1972).

6Subject to limitations based on rotational symmetry and the prototypical kinds of strategies adopted by the psychological subject, the generation of possible games was exhaustive, focused on won games in which SLIM moved first in Cell 1; see Figure 2. The focus on corner opening play is based on the behavior of the subject and the relative variety of strategies such openings permit.

7This focus of the models is precisely where the egocentricity of naive thought and cognitive self-construction of the psychological subject are embodied. The models are "egocentric" in the specific sense that no consideration of the opponent is taken unless and until the current plan is blocked. The psychological subject played this way. When a plan is blocked, SLIM drops from strategy-driven play into tactical play based on preferences for cells valued by type (center, corner) and not by relation to others.
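The article gives no code for these structures, but their shape is easy to indicate. The sketch below is a minimal rendering of a GAC as described above: a Goal pattern, a sequence of Actions, and a set of Constraints. The cell numbering is the article's 1-9 grid, and the constant at the end records REO's tactical preferences from footnote 5; the field names and representation choices are mine, not the article's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GAC:
    """A strategy: a Goal pattern, a sequence of Actions, Constraints on those actions."""
    goal: frozenset          # the fork pattern to be achieved, e.g. frozenset({1, 3, 9})
    actions: tuple           # the order in which SLIM tries to take cells, e.g. (1, 9, 3)
    constraints: tuple = ()  # conditions on the actions; left abstract in this sketch

# REO's uniform cell preferences for tactical play (footnote 5): win if possible,
# block at need, otherwise take the center, then a corner, then a side.
CENTER, CORNERS, SIDES = (5,), (1, 3, 7, 9), (2, 4, 6, 8)

prototype = GAC(goal=frozenset({1, 3, 9}), actions=(1, 9, 3))
```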

Figure 2. (Panels: numbered cells; egocentric plan; accidental win.)

SLIM moves first to Cell 1. REO prefers the center cell (5) and moves there. SLIM moves in Cell 9. The plan is followed until REO's second move is to Cell 3.8 SLIM's plan is blocked. The strategic goal {1 3 9} is given over, but the game is not ended. SLIM, now playing tactically with the same set of rules as REO, moves into Cell 7, the only remaining corner cell. Unknowingly, SLIM has created a fork symmetrical to its fork-goal. SLIM can not recognize the fork; it has not the knowledge to do so. What happens? REO blocks one of SLIM's two ways to win, choosing Cell 4. SLIM, playing tactically, recognizes that it can win and moves into Cell 8. This is the key juncture. SLIM recognizes "winning without expecting to do so" as a special circumstance. Even more, SLIM assumes that it has won through creating an unrecognized fork (otherwise REO would have blocked the win). SLIM takes the pattern of its first three moves as a fork. That pattern {1 7 9} is made the goal of a new GAC. SLIM compares its known plans for creating a fork (there is one, [1 9 3]) with the list of its own moves executed in sequence before the winning move was made, [1 9 7]. The terminal step of the plan is the only difference between the two. SLIM modifies the prototype plan's terminal step to create a new plan, [1 9 7]. SLIM now has two GACs for future play.9

The complete set of results involves consideration of all paths of possible learning, even those deemed unlikely a priori, and concludes with the complete specification of all possible paths of learning every fork given any fork prototype. Consideration of all paths of learning I take to be comparable to consideration of all paths over which a photon might travel when striking a surface. For corner opening play, the first six GACs form a central collection of strategies.

8This is one of two possible games from this juncture; both are completed in the simulation.

9This form of learning, by modifying the last term of a plan, is one of two sorts; the other involves generating two possible plans based on deletion of the second term of a prototype plan. We know the forks achieved by plans [1 9 3] and [1 9 7] are symmetrical. SLIM has no knowledge of symmetry and no way of knowing that the forks are related other than through descent, that is, the derivation of the second from the first. This issue is discussed in the longer version of this text.
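The learning step just narrated can be written out directly. The sketch below is a hypothetical rendering of the terminal-step modification only (footnote 9's second sort, deletion of a plan's second term, is not shown): after a surprising win, the pattern of SLIM's first three moves becomes the goal of a new GAC, and the new plan is obtained by replacing the last step of the prototype plan. Function and variable names are mine.

```python
def learn_from_surprising_win(prototype_actions, own_moves_before_win):
    """Derive a new (goal, actions) pair from a win that the current plan did not produce.

    prototype_actions: the known plan, e.g. (1, 9, 3)
    own_moves_before_win: SLIM's own moves, in order, before the winning move, e.g. (1, 9, 7)
    Returns None when the difference is not confined to the terminal step, the only
    case this sketch covers.
    """
    if len(own_moves_before_win) != len(prototype_actions):
        return None
    differing = [i for i, (a, b) in enumerate(zip(prototype_actions, own_moves_before_win))
                 if a != b]
    if differing != [len(prototype_actions) - 1]:
        return None
    new_actions = prototype_actions[:-1] + (own_moves_before_win[-1],)
    new_goal = frozenset(own_moves_before_win)   # the pattern of the first three moves
    return new_goal, new_actions

# The game described above: prototype plan [1 9 3], moves actually made [1 9 7], surprise win.
print(learn_from_surprising_win((1, 9, 3), (1, 9, 7)))
# -> a goal of {1, 7, 9} with the new plan (1, 9, 7): SLIM now has two GACs for future play.
```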

Figure 3.

Their interrelations can be represented as trees of derivation or descent (as shown in Figure 3). The tree with strategy three as top node may be taken as typical. Play in five specific games beginning with only GAC 3 known generates the other five central GACs. The specialness of the six central nodes is a consequence of their co-generatability. Some of them can generate each other directly (such as GACs 1 and 2); they are reciprocally generatable. Some lead to each other through intermediaries (such as GACs 1 and 3); they are cyclically generatable. For these six central strategies, the trees of structure descent can fold together into a connected network of descent whose relations of co-generativity are shown in Figure 4.

Figure 4. Genetic Descent Network: Central Strategies for Corner Play.
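Co-generativity, the property that distinguishes the six central strategies, can be stated as mutual reachability in the directed network of descent: each central GAC can be derived, through some chain of learning steps, from any other. The sketch below assumes the derivation relation is given as a set of directed edges; the example edges are placeholders standing in for the simulation results summarized in Figures 3 and 4, not the article's actual data.

```python
from collections import defaultdict

def reachable(edges, start):
    """All strategies derivable from `start` by following directed derivation edges."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def co_generative_with(edges, prototype):
    """Strategies co-generative with `prototype`: each can be derived from the other."""
    forward = reachable(edges, prototype)
    return {n for n in forward if prototype in reachable(edges, n)} | {prototype}

# Placeholder derivation edges: strategies 1-6 form a mutually generatable core, while
# strategy 7 is peripheral (derivable from 3, but it generates nothing in return).
edges = {(1, 2), (2, 1), (1, 3), (3, 4), (4, 1), (3, 5), (5, 6), (6, 3), (3, 7)}
print(co_generative_with(edges, prototype=3))   # {1, 2, 3, 4, 5, 6}; 7 is excluded
```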

The form of these descent networks is related to symmetry among forking patterns. But they include more: They reflect the play of the opponent, the order in which the forks are learned, and the specific learning mechanisms permitted in the simulations. These descent networks are summaries of results.

COMPARISON AND CONTRAST

The apparent similarities relating Feynman's analysis of reflection and the exploratory epistemology of SLIM occur at different levels. They begin with a focus on detail in the analysis of specific cases, and in the analysis of the interaction of objects or agents with their context. The basic principle applied in both is to try all cases and construct an interpretation of them. To predict the

learning of a specific strategy by a human subject, one would need to know what strategies are already known, how the opponent's play would create opportunities for surprising wins for the subject, and what learning methods are in the subject's repertoire. Knowing exactly these things in the machine case is what permits examination of the epistemological space of learnable strategies. There are many paths of possible learning, some central and some peripheral. In QED, the criterion of centrality is near-uniform directionality of the photon arrows. In SLIM, the criterion of centrality is a different and a new one: co-generativity.

The basic method is to aggregate results of all possibilities in a fully explicit manner. The process of aggregation is where the differences become systematic and significant. In QED, the aggregation of individual results is formally analytic, that is, the solution of path-integral equations of functions of complex variables. In SLIM, one begins with lists of games won without a plan. One then reformulates the relations between prototypes and generated plans into trees of fork plan transformations, which are the trees of descent. The learning algorithms are the functional mechanisms effecting the transformations. In the analysis of SLIM, aggregation is systematic and constructive though not formal: one pulls together the empirical results of exhaustive exploration (trees of descent in Figure 3) into a new representation scheme (the genetic networks of descent in Figure 4). In general, the process followed with these data is similar to Weyl's (1952) use of reformulation in his general description of the development of theoretic knowledge in Symmetry and Bourbaki's description of the genesis of axiomatic systems in The Architecture of Mathematics.

Working with SLIM started with the general principle that learning happens through interaction. The model was constructed to represent the behavior of both the learner and the opponent in explicit detail, with specification of representations and learning algorithms giving the notion a precise meaning. Through the aggregation and reformulation of results, a new generality was achieved, which suggested a new idea: that co-generatability of related but variant knowledge forms is what makes learning possible in any particular domain. This principle would support stable knowledge in minds with reconstructive memories, such as Bartlett (1932) suggested humans have.

Furthermore, of course, QED claims completeness in covering all known interactions of photons and electrons. Strategy learning is important, but not so fundamental in cognition as electron-photon interactions are in physics. Moreover, quantum electrodynamics permits going further down in detail to the discussion of interactions of such particles in matter. In respect to ways of aggregating results, the similarities are superficial unless the consequences are similar or significant for some other reason. What are the results of Feynman's analysis? Two results at least are important in their use for us. Feynman says that one must regularly remind students of the goal of


the process: discovering "the final arrow," that is, the resultant vector of probability amplitudes, and understanding how the particularities of the interaction generate that "final arrow." Our objective is comparable. The learnability analysis of this article introduces two novelties: a new goal and a new principle.

The new goal transforms a psychological focus to an epistemological one. At this point one is not so much interested in what a particular child did as an individual. But the case provides some boundary conditions for modeling, with the question "If the details of at least one natural case have such characteristics, what kinds and paths of learning are possible?"10 The question is of general interest if one admits that particularity and egocentricity are common characteristics of novice thought. Where Feynman's analysis of reflection depends on the principle of least time to determine what paths contribute most to the observable "final arrow," SLIM points to co-generativity, represented by those connections in the central group (strategies 1-6) permitting each one to be learned no matter which is adopted as the prototype fork. Peripheral strategies11 are rarely learned because they can be learned in few ways. The conclusion is that one can characterize the learnability of a domain as a function of particular interactions among agents, based on the connectedness of possible paths of strategy learning. This is what the network of genetic descent does. That network is the equivalent of the "final arrow" of Feynman's analysis of reflection, despite its different appearance and different basis. Furthermore, these methods and representations reveal what makes it possible to learn in a problem domain, even what makes it possible to judge that knowledge of a given domain is more learnable in one context than in another.

The new principle is that the learnability of a domain is the result of all the possible cases of concrete learning through particular experience; those that contribute significantly to the learnability of a domain are those that are mutually reinforcing, in the sense that they are co-generative: A → B → C → A, and so on. This is directly comparable to what one finds in the addition of arrows: the final effect results from the aggregation of related components; isolated possibilities don't add up to a significant result. The new principle, identifying the learnability of a domain with its connectedness, is a direct consequence of the state transformations being the learning algorithms of the system. One may paraphrase the situation thus: You'll learn where Rome is, if all roads lead there.

10There are, of course, other reasons for taking seriously the detailed study of the particular case. Classic arguments for doing so appear in Lewin (1935) and are discussed in the longer version of this article (in Lawler & Carley, in press).

11Such are those defined by other GACs discussed in Lawler and Selfridge (1986) and in the longer version of this article.


A FEW FINAL WORDS

Feynman joked that it doesn't "seem like real physics" to add bunches of little arrows (representing the off-center parts of the mirror's reflection) because they will only cancel out in the end. What then is the "real physics"? Is it a compact and simple description that one can use and rely on? Or is it a thorough consideration of all the cases, even including the improbable ones that may in special circumstances yield decisive results? The former is the result one hopes for: confidence in a succinct description that is so broadly applicable we might call it a principle or law. The latter is a process that can increase our confidence in what we know, even if the result is not so general as to be judged a "law."

Whitehead noted that the value of a formalism is that it lets you apply a practical method without concentrating too much on it, so that one's attention can be given to inventing and applying methods to other problems. Knowing how to add, for example, permits us to ignore the process and to focus on the meaning or significance of the elements operated on. Whitehead asserted the same is true for the calculus. It is also important for models such as SLIM, which dependably generates a list of all relevant games playable given a prototype strategy. That list can be filtered to permit focus on some subset of games of high interest, such as those won by the player. Computational procedures are formal, but modeling as an activity is more constructive than analytic. That may make it more apt for representing what we know of learning than analytic methods.

Finally, even if one cannot explain human learning at a level comparable to that at which one can explain reflection, given the principle that co-generativity under specific algorithms provides the explanation of differential learnability, one should be able to articulate why learning is possible in specific domains on the basis of the internal relationships of schemes of representation and learning algorithms, the latter seen as transformations between the states of those relationships. This is a retreat from the psychology of case study to epistemology. Others have retreated before us and still made a contribution, as physicists did in order to "resolve" the wave-particle duality.

Throughout this article, the issue of whether or not simple descriptions can comprise "real science" has continually surfaced. Different people have different criteria for judging what work is scientific and what is not. Peirce (1957), for instance, argued that science is defined by a question of intent (generally based on a real objective of finding out what is the case in the world) and that method is derivative though still significant. Exploring epistemology through computation is clearly scientific in intent (thus, by Peirce's criterion). It is also arguably scientific in method, as marked by its similarity to Feynman's explication of QED. One should ask of such work not "Is it science?" as judged by some narrow criterion, but "Is it good science? Is it important science?" Ultimately, that judgment must be made on the merits of the particular case.
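To make the remark above about SLIM's game list concrete: below is a hedged sketch of the sort of generate-and-filter enumeration described, not the article's program. SLIM follows a prototype plan until it is blocked and then drops to tactical play; REO plays tactically throughout by the preferences of footnote 5; the enumeration branches over every equally preferred free cell (my assumption about how the exhaustive generation was organized), and the resulting list is filtered to the games SLIM wins.

```python
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
         (1, 4, 7), (2, 5, 8), (3, 6, 9),
         (1, 5, 9), (3, 5, 7)]
CENTER, CORNERS, SIDES = (5,), (1, 3, 7, 9), (2, 4, 6, 8)

def winner(board):
    for line in LINES:
        marks = {board.get(c) for c in line}
        if len(marks) == 1 and None not in marks:
            return marks.pop()
    return None

def completing_cells(board, player, free):
    """Free cells that would immediately complete a line for `player`."""
    return sorted({c for c in free for line in LINES
                   if c in line and all(board.get(x) == player for x in line if x != c)})

def tactical_choices(board, mover, opponent, free):
    """Footnote 5: win if possible, block at need, else center, then a corner, then a side.
    All equally preferred cells are returned so the enumeration can branch over them."""
    for cells in (completing_cells(board, mover, free),
                  completing_cells(board, opponent, free)):
        if cells:
            return cells
    for group in (CENTER, CORNERS, SIDES):
        cells = [c for c in group if c in free]
        if cells:
            return cells
    return []

def games(plan, board=None, planned=0, blocked=False, slim_to_move=True):
    """Yield (final_board, outcome) for every possible game; outcome: 'SLIM', 'REO', 'draw'."""
    board = dict(board or {})
    free = [c for c in range(1, 10) if c not in board]
    won = winner(board)
    if won or not free:
        yield board, {"S": "SLIM", "R": "REO", None: "draw"}[won]
        return
    if slim_to_move:
        if not blocked and planned < len(plan) and plan[planned] in free:
            choices, now_blocked = [plan[planned]], False      # still following the plan
        else:
            choices, now_blocked = tactical_choices(board, "S", "R", free), True
        for c in choices:
            yield from games(plan, {**board, c: "S"}, planned + 1, now_blocked, False)
    else:
        for c in tactical_choices(board, "R", "S", free):
            yield from games(plan, {**board, c: "R"}, planned, blocked, True)

all_games = list(games(plan=(1, 9, 3)))
won_by_slim = [b for b, outcome in all_games if outcome == "SLIM"]
print(len(all_games), "games generated;", len(won_by_slim), "won by SLIM")
```

Filtering the generated list to the won games is exactly the focus described above; the learning step sketched earlier could then be applied to each surprising win in that list.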


REFERENCES

Bartlett, Frederic C. (1932). Remembering. Cambridge: Cambridge University Press.
Brown, John S., Collins, Allan, & Duguid, Paul. (1992). Situated cognition and the culture of learning. Educational Researcher.
Fang, J. (1970). Towards a modern theory of mathematics (Paideia Series in Modern Mathematics, Vol. 1). Hauppauge, NY: Paideia.
Feynman, Richard. (1985). QED: The strange theory of light and matter. Princeton, NJ: Princeton University Press.
Lawler, Robert W. (1985). Computer experience and cognitive development. Chichester, UK: Ellis Horwood.
Lawler, Robert W., & Carley, Kathleen. (in press). Advanced qualitative methods in the study of human behavior: Computer supported analysis of case studies. Norwood, NJ: Ablex.
Lawler, Robert W., & Selfridge, Oliver G. (1986). Learning concrete strategies through interaction. In Proceedings of the Seventh Cognitive Science Conference. University of California at Irvine.
Lewin, Kurt. (1935). The conflict between Galilean and Aristotelian modes of thought in contemporary psychology. In Dynamic psychology: Selected essays of Kurt Lewin (D. Adams & K. Zener, Trans.). New York: McGraw-Hill.
Minsky, Marvin. (1972, January). Form and content in computer science. Journal of the Association for Computing Machinery.
Peirce, Charles S. (1957). Lessons from the history of science. In V. Tomas (Ed.), Essays in the philosophy of science. Liberal Arts Press.
Weyl, Hermann. (1952). Symmetry. Princeton, NJ: Princeton University Press.