European Journal of Operational Research 177 (2007) 1375–1384 www.elsevier.com/locate/ejor
The organizational learning curve

Guido Fioretti *

Department of Business Economics, University of Bologna, Italy
Department of Computer Science, University of Bologna, Italy

Available online 16 June 2005
Abstract

A very practical consequence of organizational learning is that the time required to produce a unit decreases with the total number of units produced. This rule of thumb, known as the learning curve, is potentially of great practical importance. Unfortunately, no procedure is available to predict the pace and extent at which production time will decrease. Aggregate models are able to fit empirical data with a few parameters, but they are unable to link these parameters to specific properties of an organization. This article links the parameters of the only available disaggregate model of the learning curve to measurable features of the component units of an organization. Unfortunately, no data are available to confirm that precisely these features are key to organizational learning. However, analytical results derived under simplifying assumptions yield meaningful dynamics that appear to fit empirical reports. This circumstance suggests that practical applications will be possible in the future.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Learning curve; Progress curve; Learning by doing; Organizational learning; Huberman; Communities of practice
1. Introduction

Newly assembled production lines are subject to a number of problems and time delays. Managers know that the first unit will be produced inefficiently. They also know that very soon a learning
* Present address: via di Corticella 23, I-40128 Bologna, Italy. E-mail address: [email protected]
process will set in. Sooner or later, production time will start to decrease. The learning curve captures this wisdom. Essentially, it states that production time decreases with cumulative production at a uniform rate [30]. It is also known as ‘‘progress curve’’ or, when it comes to its macroeconomic implications, as ‘‘learning by doing’’ [4]. Empirical learning curves are roughly described by the following equation:
0377-2217/$ - see front matter 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2005.04.009
G. Fioretti / European Journal of Operational Research 177 (2007) 1375–1384
t_n = t_1 N^(-a),   (1)

where t_n is the time required to produce the nth unit, t_1 is the time required to produce the first unit, N = 1 + 2 + ... + n = n(n + 1)/2 is cumulative production and a > 0, a ∈ R, is a parameter that is specific to the particular manufacturing process being observed.

The learning curve was discovered in the 1930s in the airframe industry [29]. Since then, it has attracted much attention from researchers and professionals alike [1,21,8]. However, its use for practical purposes remains problematic because it is very hard to predict at what speed learning will proceed. Cases have been observed where learning did not take place at all [6,13]. In other cases, those who engendered successful learning in a plant or department were unable to arouse this ability in other plants or departments [18]. Since we do not know what factors affect the parameter a in Eq. (1), no procedure is available for estimating it ex ante.

A remarkable amount of empirical research has been carried out on learning curves. For our purposes, the following stylized facts are crucial:

• Learning rates may differ across plants of the same firm that produce the same good with similar equipment in the same country [3,2].
• Interruptions of production, such as prolonged strikes or major restructuring, cause production time to increase once production is resumed [24,5].
• Variety of work teams makes production time decrease at a faster rate [19,22].
• Assembling operations make production time decrease at a faster rate than machining operations [9,10].

All these findings tell us, first of all, that learning curves do not depend on easily detectable macroscopic features. Rather, they must depend on microscopic arrangements that may vary across plants and that are easily disrupted if production is suspended. Furthermore, they tell us that learning curves are most pronounced in situations where a lot of different items can be combined in a number of
ways. Possibly, it is no accident that the learning curve was discovered in the airframe industry.

Obviously, it is always possible to fit learning curves with aggregate models of knowledge production, such as the exploration of novel technologies [20]. Such explanations may account for the observed variety of learning rates across firms. However, they are unable to explain the impact of interruptions of production or the role of variety of teams and operations.

Alternatively, one may view organizational learning as arising from the formation of novel interaction routines [27,15,16]. According to this view, learning rates reflect communities of practice that are strongly rooted in interpersonal relationships at a particular workplace [28]. These microsocieties are disrupted if the workplace disappears for too long a time, which may explain the bad performance of learning curves after prolonged strikes. They may be trapped in vicious circles, which might explain why on some occasions the learning curve failed to materialize. On the whole, it seems that this relational perspective may overcome the pitfalls of aggregate models.

Only one model has been proposed where the learning curve derives from the pattern of interactions between organizational units. It is the model of organizational learning of Huberman et al. [23,13], where productive activities are described as moving connections on a random graph. In this model, the nodes of the graph represent small organizational units such as single workers, or a few workers who operate a single machine or a set of closely connected machines. Each node is able to perform a set of operations that depend both on the capabilities of the workers and on the possibilities offered by the machines that they operate. The edges represent flows of goods between organizational units. A flow takes place as soon as a unit has finished the operations that it was called to carry out.
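The spirit of this model can be conveyed by a deliberately minimal sketch. The function below is not Huberman's actual specification: the function name, the reduction-by-one rule, and all parameter values are illustrative assumptions. Each period, a shorter source-to-sink path is discovered with probability p and adopted with probability r.

```python
import random

def toy_learning_curve(p, r, initial_len=20, min_len=3, periods=200, seed=0):
    # Production time = length of the current path from source to sink.
    # Each period a shorter path is found with probability p and
    # adopted with probability r; otherwise the old path is kept.
    rng = random.Random(seed)
    length = initial_len
    times = []
    for _ in range(periods):
        if length > min_len and rng.random() < p and rng.random() < r:
            length -= 1
        times.append(length)
    return times
```

With high p and r the curve falls quickly toward the shortest path; with r = 0 no improvement is ever adopted and the curve stays flat, mirroring the cases in which no learning was observed.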
The model assumes that there exist no constraints on the sequencing of operations. Thus, the structure of the graph changes with time, reflecting the changing arrangements between organizational units. Within this structure, a production process is described by a path from the
source node to the sink node. Thus, reducing production time means finding a shorter path.

The search for a shorter path may be more or less effective. A sequence of ever shorter paths may be found that converges toward the shortest path. In this case, the typical learning curve is observed. On the contrary, if shorter paths are found very seldom, the learning curve is very flat. In the limit, if shorter paths are never found, no learning takes place. Finally, if interruptions of production disrupt the graph, learning starts from a different configuration when production is resumed.

Huberman's model depends on two parameters, p and r. Parameter p represents the probability of establishing a link between two nodes. Thus the higher p, the more paths are explored. Parameter r represents the probability of exploring the right ones. Thus the higher r, the more effective the search. Organizational learning requires both a high p and a high r. There must be many possibilities of sequencing the operations of the various units (as is the case in assembling operations) for p to be high. However, plants with equal p may engender organizational learning to different degrees depending on r. The effectiveness of the search may reflect interaction routines and group cohesiveness. It may easily be degraded if the normal patterns of interaction are interrupted, as happens during prolonged strikes or major restructuring.

This article aims at expressing parameters p and r in terms of observable magnitudes. It does so under assumptions that are just as unrealistic as those of Huberman's model. It does not attempt to turn Huberman's model into a realistic model, but seeks to highlight which magnitudes we should look at when carrying out empirical studies. Eventually, agent-based simulations may arrive where analytical descriptions fail.

Section 2 characterizes organizational units in terms that have been borrowed from classifier systems.
Section 3 relates parameters p and r to observable features of these units. Thus, prediction of the shape of a learning curve becomes conceivable. Section 4 illustrates the influence of certain features of organizational units on the organizational learning curve by means of numerical examples. Finally, Section 5 concludes with an evaluation
of the potentialities and shortcomings of the model in the light of available empirical cases.
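To fix ideas, Eq. (1) can be evaluated numerically. The values of t_1 and a below are purely illustrative, not estimates from any real process.

```python
def production_time(n, t1=100.0, a=0.3):
    # Eq. (1): t_n = t1 * N^(-a), with N = 1 + 2 + ... + n = n(n + 1)/2.
    N = n * (n + 1) / 2
    return t1 * N ** (-a)

times = [production_time(n) for n in range(1, 6)]
```

The sequence decreases monotonically with cumulative production; a larger a yields a steeper decline.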
2. The organizational units

We want to characterize organizational units with respect to their capability to establish links with other units, as well as with the outer world. Organizational units may represent any compound of workers, machines, or both. Flows between them reflect the technology and structure of any particular organization and, in general, they depend on:

• the requests made by a unit;
• the ability of a unit to comply with the requests made by other units.

The requests issued by an organizational unit may be a query for a particular skill, a call for a routine operation by a worker to another worker, a signal issued by a machine that it needs raw materials, a signal issued by a machine that a certain amount of finished goods is ready, a demand for raw materials made by the operator of a machine, etc. In short, a request is any communication issued by an organizational unit that is expected to be met by at least one other unit.

In general, an organizational unit is able to meet several requests. Thus, a subordinate may be able to accomplish several tasks, or a machine may be able to process several types of raw materials. In general, the more generic the organizational units, the more possibilities for establishing links between them. However, specialized units are likely to be more proficient at a specific job [26]. Thus, there exists a trade-off between genericity, which allows flexibility, and specificity, which enhances proficiency.

Specificity and genericity are opposite terms that denote one single feature of organizational units. This feature comes in degrees, from minimum specificity (equivalently, maximum genericity) to maximum specificity (equivalently, minimum genericity). Let us characterize it in the following way.

Let organizational units be endowed with categories. With these categories, they classify the requests that they receive. Essentially, by
"categories" we mean containers that are able to encompass a subset of requests under a single label. Thus, all requests that are classified in a category produce the same reaction.

In the case of human operators, these categories are the mental categories through which they see the world. For instance, an engineer may recognize that a number of technical problems arise because of feedbacks and apply the same control theory to all of them. Since mental categories achieve economies of information processing, they may be viewed as instances of bounded rationality [25]. In the case of machines, categories represent the spectrum of raw materials that they are able to accept. For instance, a steel press may be able to work with steel sheets of a certain size range.

Let us frame this problem by means of a formalism derived from classifier systems [11,12]. Classifier systems are a classical tool of artificial intelligence where connections between certain components are established depending on the specificity of their categories. Following the formalism of classifier systems, let us encode requests as binary strings of a fixed length L. Let us call them information strings. On the contrary, let categories be represented by strings of length L whose elements can be either zeros, or ones, or "don't care" characters #. A category classifies all information strings that have zeros and ones in the same positions where it has zeros and ones, and whatever character where it has a "don't care". Fig. 1 illustrates a category of length four with two "don't care" characters. This category classifies the 2^2 = 4 different information strings shown in the figure, which implies that it makes no distinction between them.
Category:             # 1 # 0
Information strings:  0 1 0 0
                      0 1 1 0
                      1 1 0 0
                      1 1 1 0
Fig. 1. A category with two "don't care" characters and the four information strings that it classifies.
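The classification rule of Fig. 1 is straightforward to implement. The category # 1 # 0 below is an illustrative example of length four with two "don't care" characters.

```python
def classifies(category, string):
    # A category classifies a string iff every non-# position matches.
    return len(category) == len(string) and all(
        c == '#' or c == s for c, s in zip(category, string))

category = '#1#0'                                  # two don't-care characters
strings = [format(i, '04b') for i in range(16)]    # all 2^4 information strings
matched = [s for s in strings if classifies(category, s)]
# matched contains exactly 2^2 = 4 information strings
```

The two # characters leave two positions free, so exactly 2^2 = 4 of the 2^4 information strings fall under this category.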
In principle, information strings may represent communications of any sort, from signals issued by machines to messages exchanged between workers to orders and directives from superiors to subordinates. However, learning curves arise in contexts of relative equality of hierarchical levels. We are dealing with workers and teams striving to find the best arrangement with the advice of engineers and technicians. We are not concerned with top-down orders from a boss to subordinates. Thus, in this context decentralized rather than bureaucratic decision-making is the rule.

Let two indices h and k denote categories and information strings, respectively. Let p_hk denote the probability that a category h classifies an information string k. Note that these indices are not identification numbers of categories or information strings. So there might be several categories h, equal to one another, or several information strings k, equal to one another. Likewise, no hypothesis has been made regarding which categories are owned by which organizational units, or which information strings are issued by which organizational units. Consequently, p_hk does not, in general, denote the probability of establishing a link between two different organizational units. In general p_hk ≠ p_ij, where p_ij is the probability of establishing a connection between unit i and unit j.

Let us define the specificity of a category as the number of its non-# characters. Let us denote by s_h the specificity of category h, and let us require that ∂p_hk/∂s_h > 0, ∀h and ∀k ∈ H, where H denotes the set of information strings that can be classified by category h. The following functional form ensures that 0 ≤ p_hk ≤ 1:

p_hk = f(s_h) / Σ_h f(s_h)   if k ∈ H,
p_hk = 0                     otherwise,       (2)

where f ∈ C¹ is such that df(s_h)/ds_h > 0. If f = exp(s_h), expression (2) boils down to the logit model [7].

In principle, by means of Eq. (2) it is possible to run agent-based models of organizations in order to verify whether the learning curve really depends
on the categories and information strings defined above. In fact, knowledge of which categories are owned by which units, and of which information strings are issued by which units, would make it possible to evaluate p_ij from p_hk. The average of p_ij over i and j would be the p parameter in Huberman's model. The r parameter would result from running the model a large number of times with random initial conditions.

Unfortunately, this approach is not yet ripe. In fact, no data on categories and information strings have been collected as yet. However, we may consider a simplified situation where analytical results can be derived. Albeit too unrealistic to be applied directly, interesting results may prompt a quest for data.
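The logit case of Eq. (2), with f = exp, can be sketched as follows. The example categories are hypothetical, and the sum in the denominator is read here as running over all categories in the organization; both are illustrative assumptions.

```python
import math

def specificity(category):
    # Number of non-# characters (s_h in the text).
    return sum(ch != '#' for ch in category)

def classifies(category, string):
    return all(c == '#' or c == s for c, s in zip(category, string))

def p_hk(categories, k):
    # Eq. (2) with f = exp(s_h): a logit over the categories.
    total = sum(math.exp(specificity(h)) for h in categories)
    return [math.exp(specificity(h)) / total if classifies(h, k) else 0.0
            for h in categories]

# Three hypothetical categories of increasing specificity, one string.
probs = p_hk(['####', '#1##', '11#0'], '1100')
```

Since all three categories classify the string '1100', the probabilities sum to one, and the more specific a category, the more likely it is to capture the string, as required by ∂p_hk/∂s_h > 0.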
3. Diversity and flexibility

This section establishes a link between the parameters p and r (Section 1) and some features of the categories and information strings (Section 2). It does so under the following simplifying assumptions:

1. Each organizational unit is able to issue an unlimited number of instances of one single information string. This string is different for each unit. Thus, there are as many different information strings as organizational units.
2. Each organizational unit is endowed with one instance of all the different categories that are in the organization. Thus, the total number of all instances of all categories is the number of different categories × the number of organizational units.
3. All categories are able to classify all information strings. Apart from trivial examples, this assumption is meant as an approximation.
4. All categories have equal probability of classifying any particular information string. This is clearly different from Eq. (2).
5. Any particular category has equal probability of classifying all information strings. This assumption requires that all information strings are available any time a category is ready to select one of them.
The above assumptions correspond to the following procedure of interaction. Sequentially, every category of every organizational unit classifies one information string. Each time an information string is captured by a category, the string is immediately reproduced. Thus, at any step all information strings are available. After all categories of all units have had a chance to classify an information string, we evaluate the probability that a unit received at least one string from another unit. If this occurred, we say that the first unit connected to the second one.

Since all information strings were available at all steps, and since all categories had equal probability of classifying any string, connection probabilities depend only on the number of different categories and information strings. Let H ≤ 3^L denote the number of different categories available in the organization, where 3^L is the number of dispositions with repetition of three elements (0s, 1s and #s) of class L. Let K ≤ 2^L denote the number of different information strings produced in the organization, where 2^L is the number of dispositions with repetition of two elements (0s and 1s) of class L. Since categories are there in order to classify information, it must be that H < K.

Since there are as many information strings as organizational units, the probability that the category h owned by unit i classifies the information string issued by unit j ≠ i is p_h(i)j = 1/(K - 1). After all categories of unit i have attempted to classify the string issued by unit j, the probability that some category of unit i classifies the information string of unit j is p_ij = H/(K - 1). Since this value does not depend on the particular i and j, it is equivalent to Huberman's parameter p:

p = H / (K - 1),   (3)
with K - H ≥ 1.

Fig. 2 may help to explain the meaning of Eq. (3). Consider an organization composed of five units endowed with H = 2 different categories each. Since each unit issues a different string, K = 5. Any unit, for instance unit 1, is able to classify any string issued by any other unit. Information strings are always available. Thus, after establishing two connections by means of its two categories, unit 1 has a probability H/(K - 1) = 1/2 of having connected to another unit. After all HK trials have been made, some units have captured one string, others two strings by the same unit, others more. Since by "establishing a connection" we mean that at least one string has been captured during the whole procedure, on average any unit has connected to another unit with probability 1/2.

Fig. 2. With probability 1/4, each of the organizational units 2, 3, 4, 5 has issued at least one string that was captured by one of the two categories of unit 1.

Parameter r expresses the idea that the search for better arrangements is effective. In Huberman's model, getting it right means that one succeeds in connecting to the terminal unit. In the limit of infinite attempts to establish connections to other units, sooner or later the terminal unit is reached. On the contrary, if novel connections are no longer tried, the terminal unit may never be reached. In other words, the terminal unit is reached unless one gets stuck endlessly repeating the connection to one and the same unit. Thus, let us calculate the probability that one gets stuck connecting to a particular unit. Parameter r will be the complement to one of this probability.

Eq. (3) expresses the probability that, after all categories of unit i have attempted to capture the information string issued by j, at least one of them succeeds. The probability that this happens twice, if the whole procedure is carried out twice, is (H/(K - 1))^2, and so on. By dividing by K - 1 in order to share these probability masses among all j ≠ i, we obtain that the probability that i connects to j is H/(K - 1)^2, the probability that it connects twice is H^2/(K - 1)^3, and so on. Thus, the probability of repeating this endlessly is the sum H/(K - 1)^2 Σ_{i=0}^{∞} (H/(K - 1))^i, which is H/((K - 1)(K - H - 1)). Consequently, the probability of choosing the right path is 1 - H/((K - 1)(K - H - 1)), which can be written as

r = [(K - 1)^2 - KH] / [(K - 1)^2 - (K - 1)H],   (4)
face, which you cannot name. In these conditions, the unreadable face is a noise that disturbs the functioning of your die. Suppose that you want to assess its impact on the other faces.

Your measuring procedure may consist of throwing the die five times, asking each time whether "Mary" came up. You may repeat this procedure very many times in order to obtain reliable results. If your die had only the five readable faces you would find that, on average, during any measuring procedure "Mary" came up one time. You would say that "Mary" comes up with probability one. Since your die has six faces and you cannot obtain information on one of them, on average you obtain "Mary" with probability 1 - (1/6)/5 = 29/30. This corresponds to Eq. (3).

The ex ante probability is the probability of obtaining a particular face if you imagine throwing the die once. The ex ante probability of obtaining a particular face among the five good ones is (29/30)/5. The probability of obtaining the same face along two separate sequences of measurement procedures is (29/30)^2. The ex ante probability of obtaining a particular face, if we require that it is obtained twice by means of two separate sequences of measurement procedures, is (29/30)^2/5. The ex ante probability of obtaining a particular face by means of any number of separate sequences of measurement procedures is (29/30)/5 + (29/30)^2/5 + (29/30)^3/5 + ⋯. Finally, the ex ante probability of not obtaining one and the same face by any number of measurement procedures is the complement to one of the previous expression. This is Eq. (4).

The next section will illustrate the implications of Eqs. (3) and (4) by means of simple numerical examples.
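Before turning to those examples, Eqs. (3) and (4) can be evaluated directly in code; the numbers below reproduce the five-unit example of Fig. 2.

```python
def p_param(H, K):
    # Eq. (3): probability that a unit connects to another unit.
    return H / (K - 1)

def r_param(H, K):
    # Eq. (4): probability of not getting stuck endlessly on one and
    # the same unit; requires K - H > 1.
    return ((K - 1) ** 2 - K * H) / ((K - 1) ** 2 - (K - 1) * H)

# Five units with two categories each: H = 2, K = 5.
p = p_param(2, 5)   # 2/4 = 0.5
r = r_param(2, 5)   # (16 - 10)/(16 - 8) = 0.75

# Eq. (4) is algebraically equivalent to 1 - H/((K - 1)(K - H - 1)).
alt = 1 - 2 / ((5 - 1) * (5 - 2 - 1))
```

Note that r = 0.75 is the complement of the probability 1/8 + 1/16 + 1/32 + ⋯ = 1/4 of connecting to one and the same unit forever, as computed in the text.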
4. Numerical examples

The meaning of Eqs. (3) and (4) becomes clear if one plots them for various values of H and K. Just in order to include values in the proximity of the origin, let us keep K - H small. Larger differences do not have any qualitative impact on the outcomes. Eq. (3) is defined for K - H ≥ 1 but yields interesting values for K - H > 1. Eq. (4) is defined for K - H > 1. Thus, let us plot p and r for K - H = 2. Fig. 3 illustrates p and r for H ∈ [1, 10]. Correspondingly, K ∈ [3, 12].

Fig. 3. Parameters p (solid line) and r (dashed line) for K - H = 2.

Parameter p represents the possibilities of improving the current arrangement of an organization. Thus, Fig. 3 tells us that the greater the number of different categories and information strings, the more possibilities for improvement there are. Or, equivalently, the more there is to learn, the more can be learned. However, these possibilities are wasted if the search for a better arrangement is ineffective. Parameter r measures the effectiveness of this search. Thus, Fig. 3 also shows that the greater the number of different categories and information strings, the more likely it is that no improvement actually takes place. Or, equivalently, the more there is to learn, the more likely it is that nothing will be learned. Fig. 3 illustrates a trade-off between the possibility of improving the arrangement of an organization and the danger of getting lost in endless search. The more possibilities for improvement, the harder it is for these possibilities to be actually pursued.

Let us consider an organization with slightly fewer categories than the previous one. Let the difference between the number of information strings and the number of categories be K - H = 3. This is an organization with more generic categories than the previous one. We may think of an organization where workers are able to do several jobs and machines can be adapted to several
operations. In other words, it is more like an organization of open-minded problem-solvers.

Figs. 4 and 5 show the effect of switching from K - H = 2 to K - H = 3 on p and r, respectively. Even if the number of different categories decreased by only one unit, the differences are impressive. The possibilities for improvement (the parameter p) decreased slightly. On the contrary, the likelihood that better arrangements are found (the parameter r) increased dramatically. Furthermore, the greater H, the more pronounced these effects.

Figs. 4 and 5 suggest that, by limiting the number of different categories, a large gain in effectiveness can be attained at the expense of a small loss in the possibilities of improving on current arrangements. An organization of open-minded generalists may lose a fraction of the learning possibilities afforded by specialization, but it is unlikely to get stuck in stupid routines. The learning curve may not be the fastest possible one, but it will surely appear.

Note that the limit of K - H = 0, that is, having as many categories as information strings, is equivalent to having no categories at all. There is no simplification of information by classifying it into a manageable number of categories, but rather a perfect information processor that exploits all available information. In many respects, this is the idea of "rationality" that pervades economic theory. The above considerations suggest that such "rational" decision-makers may perform poorly when arranged in an organization. On the contrary, bounded rationality may constitute a basis for efficient decision-making.
Fig. 4. Parameter p when K - H = 2 (thin line) and K - H = 3 (thick line).

Fig. 5. Parameter r when K - H = 2 (thin line) and K - H = 3 (thick line).
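The curves of Figs. 3-5 can be regenerated directly from Eqs. (3) and (4); the tabulation below checks the trade-off just discussed.

```python
def p_param(H, K):
    # Eq. (3).
    return H / (K - 1)

def r_param(H, K):
    # Eq. (4); requires K - H > 1.
    return ((K - 1) ** 2 - K * H) / ((K - 1) ** 2 - (K - 1) * H)

rows = []
for H in range(1, 11):
    p2, r2 = p_param(H, H + 2), r_param(H, H + 2)   # K - H = 2 (Fig. 3)
    p3, r3 = p_param(H, H + 3), r_param(H, H + 3)   # K - H = 3 (Figs. 4, 5)
    rows.append((H, p2, r2, p3, r3))
```

For K - H = 2, p grows toward one while r shrinks toward zero as H increases; passing to K - H = 3 lowers p only slightly but raises r substantially at every H, which is the message of Figs. 4 and 5.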
5. Conclusions

The previous sections provided a foundation for a model of the learning curve in terms of two features of the units involved in organizational learning. These features are the number of different information strings that they produce, and the number of different categories that they employ. Information strings represent the output of organizational units in terms of requests and signals carried by various means. Categories express the way in which other units classify this information.

Since organizational units are compounds of workers and machines, we may distinguish between human categories, which depend on cognitive processes, and machine categories, which depend on technical specifications. Human categories are probably very difficult to observe. On the contrary, machine categories can be observed rather easily. The requests issued by organizational units are easily observed as well.

Suppose that we are contemplating an organization where machines have a paramount role, as is the case in modern manufacturing systems. If the parameters of the learning curve really depend on features that are to a large extent observable, then we may be able to make predictions about future reductions of production time. One should collect data regarding what information the organizational units generate and what information they are able to accept. These data may be used in order to specify the probability of a unit connecting to another one, as in Eq.
(2). In their turn, these probabilities may constitute the backbone of agent-based models of organizations that simulate the formation and improvement of different arrangements over time.

Since this perspective lies well in the future, at the present stage an analytical description has been developed and presented in the main body of the paper. Although its results were derived under a series of simplifying assumptions, qualitatively they seem to make sense. Essentially, we would expect that organizations where a large number of different things must be combined may learn more, but we would also expect that these organizations may occasionally be unable to learn. We would expect to find these features in previous studies of organizational learning, though these studies have been carried out for different purposes, so the information that they collected may not be directly relevant.

The bulk of the empirical literature on organizational learning curves focused on macroscopic features such as cumulative production, expense in R&D, capital/labor ratio, etc. A few studies are sufficiently detailed to suggest that our expectations may be right, though they are still very far from providing data upon which a model could be tested. They are listed below as suggestive stories, not as proofs:

• The operation of nuclear power plants follows a learning curve where yearly energy output increases with cumulative output, due to fewer and shorter interruptions with cumulative operating time. Interestingly, it has been noted that pressurized water reactors exhibit faster learning than boiling water reactors [17]. Pressurized water reactors improve on safety with respect to boiling water reactors because the water heated by the core does not flow directly into the turbines, but rather into a secondary loop. Pressurized water reactors are, in general, more complex than boiling water reactors.
Tentatively, we may suppose that there are many more interacting parts, which issue many more information strings and classify them into many more categories. We saw that, if both H and K increase, the learning potential increases. • Free-software developers arrange themselves in teams that may become quite large. Eventually, these teams constitute complex structures of sub-teams devoted to sub-projects. Since free-software developers are volunteers, it is not possible to measure anything like production time. However, the effectiveness of the debugging process may be taken as an indicator of learning. Two major free-software projects have been investigated from this point of view: the web server Apache and the web browser Mozilla [14]. Apache is widely admired for its stability and simplicity. In contrast, Mozilla is typically cited as extremely complex and prone to bugs. Up to about the 50th development week, the debugging effectiveness of Mozilla was slightly superior to that of Apache. Since then, the debugging effectiveness of Mozilla has been either stationary or decreasing, whereas that of Apache has continued to increase well above the level attained by Mozilla. Tentatively, one may guess that the complex Mozilla, having too many issues to coordinate (too large and too close H and K), fell into the trap of too low an r. • Semiconductor manufacturing exhibits a learning curve whose determinants have been studied in some detail. It has been found that the parameters of the learning curve depend on information handling, automated data analysis, automated scheduling, diversity of teams, co-location of teams within the main plant, wafer size, mask layers, and other indicators of technological complexity [19]. In particular, information handling and co-location of teams in the same facility appear to exert the strongest influence. Information handling includes various IT tools, such as automated download of process "recipes" into semiconductor processing equipment, automated capture of process and equipment performance data, and automated tracking of wafer lots. Apparently, these tools direct information toward the proper recipients by directly affecting Eq. (2). A similar comment may be made about the physical proximity of teams, which improves information exchange.
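The qualitative pattern running through these examples — more combinable parts (larger H and K) raise the learning potential, while a too low combination probability r stalls learning altogether — can be illustrated with a deliberately crude toy simulation. The update rule, the improvement factor, and all function names below are illustrative assumptions of this sketch, not the model derived in the paper:

```python
import random

def simulate_learning(H, K, r, periods=200, seed=0):
    """Toy caricature of organizational learning.

    H interacting units classify information strings into K categories.
    Each period, H*K candidate recombinations arise; each one succeeds
    with probability r, and each success shaves a small fraction off
    production time.  Returns the trajectory of production times.
    """
    rng = random.Random(seed)
    production_time = 100.0
    times = [production_time]
    for _ in range(periods):
        candidates = H * K  # opportunities grow with H and K
        successes = sum(1 for _ in range(candidates) if rng.random() < r)
        production_time *= (1 - 0.001) ** successes  # tiny gain per success
        times.append(production_time)
    return times

# Larger H and K yield a steeper learning curve...
small = simulate_learning(H=3, K=3, r=0.5)
large = simulate_learning(H=10, K=10, r=0.5)
# ...but a very low recombination probability r stalls learning.
stalled = simulate_learning(H=10, K=10, r=0.001)
```

Under these assumptions the large organization ends up far faster than the small one, while the large organization with a low r barely improves at all — echoing the contrast between Apache and Mozilla sketched above.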
These examples suggest that it may be worth investigating organizational learning in much more detail than has been done hitherto. Indeed, stimulating a search for such detailed data is the highest goal that the theoretical model presented in this paper may hope to attain.
It is necessary to carry out several comparative case studies examining organizations that learn how to use new machinery of different kinds. In each of these studies, very detailed data should be collected on the features of the machines and manpower employed, as well as on communications between organizational units at very short time intervals. If the case studies were sufficiently numerous, and if the observed differences in the features of organizational learning curves reflected differences in our H and K, then the theory would be corroborated. Clearly, this is much work. It could not be carried out at the same time the basic intuition was developed into the theory expounded herein, but it must be carried out if this theory is to become a practice. It is my firm intention to do so in the forthcoming years.
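In such case studies, the learning curve of each organization would be summarized by the exponent of the power law stated in the introduction, t_n = t_1 n^(-b), which can be estimated by ordinary least squares on logarithms. The following sketch shows the estimation step on illustrative, noiseless data; the function name and the sample values are assumptions of this example:

```python
import math

def fit_learning_curve(times):
    """Fit the classical power-law learning curve t_n = t1 * n**(-b)
    by least squares on log(t_n) = log(t1) - b * log(n).

    `times` holds the production time of units 1, 2, ..., N.
    Returns the fitted first-unit time t1 and the exponent b.
    """
    xs = [math.log(n) for n in range(1, len(times) + 1)]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = -sxy / sxx              # learning-curve exponent
    t1 = math.exp(my + b * mx)  # fitted time of the first unit
    return t1, b

# Illustrative data generated from t_n = 100 * n**(-0.32),
# roughly an 80% learning curve, since log2(1/0.8) ~ 0.322.
times = [100 * n ** -0.32 for n in range(1, 51)]
t1, b = fit_learning_curve(times)
```

Comparing the estimated exponents b across organizations with different H and K is precisely the kind of test the theory would require.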
References

[1] Armen Alchian, Reliability of progress curves in airframe production, Econometrica 31 (1963) 679–693.
[2] Linda Argote, Organizational Learning: Creating, Retaining and Transferring Knowledge, Kluwer Academic Publishers, Boston, 1999.
[3] Linda Argote, Dennis Epple, Learning curves in manufacturing, Science 247 (1990) 920–924.
[4] Kenneth J. Arrow, The economic implications of learning by doing, Review of Economic Studies 29 (1962) 155–173.
[5] Charles D. Bailey, Edward V. McIntyre, Using parameter prediction models to forecast post-interruption learning, IIE Transactions 35 (2003) 1077–1090.
[6] Nicholas Baloff, Startup management, IEEE Transactions on Engineering Management 17 (1970) 132–141.
[7] Jan S. Cramer, Logit Models from Economics and Other Fields, Cambridge University Press, Cambridge, 2003.
[8] John M. Dutton, Annie Thomas, John E. Butler, The history of progress functions as a managerial technology, Business History Review 58 (1984) 204–233.
[9] Werner Z. Hirsch, Manufacturing progress functions, The Review of Economics and Statistics 34 (1952) 143–155.
[10] Werner Z. Hirsch, Firm progress ratios, Econometrica 24 (1956) 136–143.
[11] John H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, 1975.
[12] John H. Holland, Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems, in: Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann Publishers, Los Altos, 1986.
[13] Bernardo A. Huberman, The dynamics of organizational learning, Computational and Mathematical Organization Theory 7 (2001) 145–153.
[14] Christopher L. Huntley, Organizational learning in open-source software projects: An analysis of debugging data, IEEE Transactions on Engineering Management 50 (2003) 485–493.
[15] Edwin Hutchins, Organizing work by adaptation, Organization Science 2 (1991) 14–39.
[16] Edwin Hutchins, Cognition in the Wild, The MIT Press, Cambridge, 1995.
[17] Paul L. Joskow, George A. Rozanski, The effects of learning by doing on nuclear plant operating reliability, The Review of Economics and Statistics 61 (1979) 161–168.
[18] Michael A. Lapré, Luk N. van Wassenhove, Managing learning curves in factories by creating and transferring knowledge, California Management Review 46 (2003) 53–71.
[19] Jeffrey T. Macher, David C. Mowery, "Managing" learning by doing: An empirical study in semiconductor manufacturing, Journal of Product Innovation Management 20 (2003) 391–410.
[20] John F. Muth, Search theory and the manufacturing progress function, Management Science 32 (1986) 948–962.
[21] Leonard Rapping, Learning and World War II production functions, The Review of Economics and Statistics 47 (1965) 81–86.
[22] Melissa A. Schilling, Patricia Vidal, Robert E. Ployhart, Alexandre Marangoni, Learning by doing something else: Variation, relatedness, and the learning curve, Management Science 49 (2003) 39–56.
[23] Jeff Shrager, Tad Hogg, Bernardo A. Huberman, A graph-dynamic model of the power law of practice and the problem-solving fan-effect, Science 242 (1988) 414–416.
[24] Sverker Sikström, Mohammad Y. Jaber, The power integration diffusion model for production breaks, Journal of Experimental Psychology: Applied 8 (2002) 118–126.
[25] Herbert A. Simon, Models of Bounded Rationality, The MIT Press, Cambridge, 1982.
[26] Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, Clarendon Press, Oxford, 1976 [1776].
[27] Karl E. Weick, The nontraditional quality of organizational learning, Organization Science 2 (1991) 116–124.
[28] Etienne Wenger, Communities of Practice, Cambridge University Press, Cambridge, 1998.
[29] T.P. Wright, Factors affecting the cost of airplanes, Journal of the Aeronautical Sciences 3 (1936) 122–128.
[30] Louis E. Yelle, The learning curve: Historical review and comprehensive survey, Decision Sciences 10 (1979) 302–328.