LEARNING AND MOTIVATION 3, 31-43 (1972)

Timeout Punishment: Rate of Reinforcement and Delay of Timeout¹

JOHN G. CARLSON

University of Hawaii
Responding of rhesus monkeys on either of two concurrently available response levers in the first link of a response chain caused the alternate lever to retract and produced a stimulus light. Responding in Link 2 on the remaining lever in the presence of the light produced food. Subsequently, timeout (retraction of both levers) was made contingent upon the last response in Link 1 on one lever. Timeout suppressed antecedent responding on the timeout lever and produced a relatively higher rate of responding on (preference for) the alternate response lever. The preference for the nontimeout lever was maintained when rates of reinforcement on the two levers were equated and appeared not to be related to overall rates of reinforcement within sessions. In a second experiment, the degree of suppression on the timeout lever was found to be directly related to the immediacy of the timeout with respect to the punished response.
Leitenberg (1965) has pointed out that apparent punishing effects, escape, or avoidance responding induced by timeout from positive reinforcement may in fact often be attributable to increases in positive reinforcement accompanying the change in responding. That is, since timeout (by definition) is a period of time in which reinforcement is not available, responding which prevents or eliminates timeout typically leads to a greater availability of reinforcement. Hence, positive reinforcement rather than the aversiveness of timeout could act to maintain such responding. A number of examples from the timeout literature are convincingly cited by Leitenberg in support of his argument. However, in recent research in this area, various attempts to control for availability of reinforcement have been made and, in general, the results have not supported Leitenberg's suggestion (e.g., Baron & Kaufman, 1969; Carlson, 1970). In a further investigation of the issue, the present experiments examined the effect of timeout punishment upon antecedent responding when availability of unconditioned reinforcement was controlled. In addition, delay of timeout punishment was manipulated to determine the role of this variable in suppressive effects of the timeout.

¹ This research was supported by grants from the University of Hawaii Research Council to the author. Computer services were provided by the University of Hawaii Statistical and Computing Center. Experiment 1 was presented in slightly different form at the meeting of the American Psychological Association, Miami, 1970. I wish to thank Jeanette Crouch and Richard Wielkiewicz for assistance in conducting portions of the research and Karl Minke for suggestions related to the statistics. Reprints may be obtained from John G. Carlson, Department of Psychology, University of Hawaii, Honolulu, Hawaii 96822.

© 1972 by Academic Press, Inc.

EXPERIMENT 1
In a study conducted by Holz, Azrin, and Ayllon (1963) with mental patients, timeout punishment of one operant was found to suppress responding when an alternative, unpunished operant was available. While the investigators concluded that timeout exerted punishing effects on the one operant, it could as well be observed that the absence of timeout periods during responding on the alternative schedule meant that a higher frequency of positive reinforcement was available for that response. The question arises, then: would the subjects have preferred the unpunished response if the availability of positive reinforcement for this response had been equal to that for the punished response? An experiment by Carlson and Aroksaar (1970) provided some information on the issue. Timeout punishment of lever pressing in rats suppressed responding on the punished lever and thereby maintained a preference for an alternative lever even when the schedule of reinforcement on the alternative lever provided equal or fewer reinforcements than the schedule on the timeout lever. However, since timeout punishment itself was contingent upon every tenth response, as in the study by Holz et al., a reduction of response rate on the punished lever also resulted in fewer timeouts within a session, increasing the overall rate of reinforcement per session. Thus, while momentary rate of reinforcement on the two levers was controlled, changes in long-term rate of reinforcement were not, possibly accounting for the effects of the timeout.

To deal with this problem, a schedule of timeout is required which is relatively insensitive to rate of responding; in particular, some form of interval schedule. A procedure developed by Autor (1969) was modified in the present experiment for this purpose. Two-link response chains were scheduled on each of two response levers.
The first links of the chains were available concurrently, but entrance into the second link of either chain rendered the alternative lever ineffective. Timeout punishment was scheduled on one of the two levers. Through manipulation of interval schedules of reinforcement and timeout punishment, the procedure provided a means by which the availability of reinforcement for the two operants could be both controlled and measured to determine the role of this variable in effects of the timeout.
METHOD
Three male rhesus monkeys (Macaca mulatta), approximately 20 to 30 months old, were maintained at 90% of free-feeding body weight. Feeding was on a 24-hr schedule immediately following each experimental session.
Apparatus

A fan-ventilated operant conditioning chamber for primates, measuring 609 × 609 × 609 mm, contained two automatically retractable response levers (Lehigh Valley, Model 1405M). The levers were located on one wall of the chamber 190 mm on each side of a food trough and 306 mm above the floor of the chamber. Banana-flavored Noyes food pellets (Formula G, 300 mg) were delivered by a Gerbrands feeder (Model A). Above each lever was a small, amber stimulus light, and the chamber was illuminated by a single houselight. Ambient white noise was continuously present in the room housing the chamber in order to mask the sound of programming and recording equipment located in an adjacent room.

Procedure

A summary of the experimental conditions indicating the schedules of reinforcement and timeout punishment is shown in Table 1. The animals had been subjects in previous experiments and were well lever-press trained at the start of the present study. Two levers were made available with their corresponding stimulus lights off and the houselight on. A response on either lever started two independent tape readers that
TABLE 1
Summary of Experimental Conditions

Variable-interval schedules (in sec)      1           2        3        4           5
Link 1 (each lever)                       30          30       30       30          30
Link 2, timeout lever                     15          15       15       15          15
Link 2, nontimeout lever                  15          15       30       30          30
Schedule of timeout punishment            No timeout  Timeout  Timeout  No timeout  Timeout
Number of sessions                        10          8        8        4           8ᵃ

ᵃ Except M2, 10 sessions.
timed intervals of variable duration ranging from 15 to 45 sec with a mean of 30 sec (a variable-interval 30-sec schedule) on each lever. When an interval had timed out on either lever, the next response on that lever caused its stimulus light to be illuminated, retracted the alternate lever, and turned off the tape reader for the inactive lever. This terminated the first link of the response chain. In the presence of the lever light, lever pressing on the active lever in the second link of the chain was reinforced with one pellet on a variable-interval 15-sec schedule, which contained intervals ranging from 6 to 24 sec. Immediately following reinforcement, the lever light was extinguished and the alternate lever was reinserted into the chamber. Again a response on either lever started the two tape readers and the sequence was repeated. In order to reduce the probability of the animals switching between levers during the concurrent links of the chains, a 5-sec changeover delay was in effect; a response on either lever when its interval had timed out had no effect if a response on the alternate lever had occurred within the preceding 5 sec. Throughout the experiment, a daily session was automatically terminated at 30 reinforcements on either lever, which limited the maximum reinforcements in a session to 59.

Following 10 sessions on the above schedule, timeout was introduced for each animal on the lever on which the mean number of responses in the first chain link was greater. Timeout consisted of retraction of both levers for 15 sec contingent upon the last response in the initial, variable-interval 30-sec link of the chain on the punished lever. The lever lights and tape readers were off during timeout. At the end of the timeout period, the timeout lever reentered the chamber with the lever light on, and the variable-interval 15-sec link of the chain was in effect as usual.
Beginning on the ninth punishment session (Session 19), the mean duration of the variable-interval schedule in the second chain link on the unpunished lever was increased from 15 to 30 sec. The minimum and maximum intervals were the same as those in the second link on the punishment schedule including the timeout period, 21 and 39 sec, respectively. This schedule on the nontimeout lever was retained throughout the remainder of the experiment. All other conditions remained the same throughout Sessions 19-26. In Sessions 27-30, the timeout was removed. In Sessions 31-38, the timeout was reinstated, rendering the conditions identical to those in Sessions 19-26. Monkey M2 received two additional sessions in this phase to establish clearly the direction of lever preference in the first links of the chains.
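The Link 1 contingency described above (a variable-interval timer gating entry into Link 2, plus the 5-sec changeover delay) can be sketched as follows. The uniform distribution of intervals is an assumption; the text specifies only the 15-45 sec range and 30-sec mean of the tape.

```python
import random

class Link1Lever:
    """One lever's Link 1: a VI timer plus a changeover delay (COD).

    A response terminates Link 1 only if (a) the lever's variable interval
    has elapsed and (b) no response on the alternate lever occurred within
    the preceding COD seconds. The interval distribution is assumed uniform
    over 15-45 sec (mean 30 sec), matching the VI 30-sec tape in the text.
    """

    def __init__(self, seed=0, cod=5.0):
        self.rng = random.Random(seed)
        self.cod = cod
        self.ready_at = self.rng.uniform(15.0, 45.0)  # when the interval times out

    def response(self, t, last_other_response_t):
        """Return True if a response at time t (sec) terminates Link 1."""
        if t < self.ready_at:
            return False  # the variable interval has not yet timed out
        if t - last_other_response_t < self.cod:
            return False  # changeover delay: recent response on other lever
        return True

lever = Link1Lever(seed=42)
# A response after the interval elapses and well after any switch succeeds:
print(lever.response(t=60.0, last_other_response_t=0.0))   # -> True
# A response 2 sec after switching from the other lever is ineffective:
print(lever.response(t=60.0, last_other_response_t=58.0))  # -> False
```

The sketch omits Link 2 and reinforcement delivery; it isolates only the two conditions that governed entry into the second link.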
RESULTS AND DISCUSSION
Figure 1 shows the total number of responses per session in the first links of the chain schedules on each lever. Since performance had typically stabilized by the final 4 sessions of each condition, means of relative rates of responding in both chain links are shown for these sessions in Table 2. Relative rate of responding in Link 1 was defined as the number of responses on the timeout lever divided by the sum of responses on both levers in this link. Relative rate of responding in Link 2 was defined as rate of responding on the timeout lever divided by the sum of rates on both levers in this link. Therefore, in either link a ratio greater than 0.50 reflects a higher rate of responding and a ratio less than 0.50 reflects a lower rate of responding on the timeout lever. In the initial, baseline sessions monkeys M1 and M4 emitted slightly more responses on the right lever than on the left lever (which defined a preference for the right lever) in the first link, and M2 showed a
FIG. 1. Total responses per session in the first link of the response chain on each lever. The variable-interval schedules of reinforcement (in sec) in the second chain link on each lever are indicated at the top of the figure in the order timeout lever/nontimeout lever. Timeout was introduced on the right lever for subjects M1 and M4 and on the left lever for M2 in the sessions designated "TO."
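The relative-rate measure defined in the text reduces to a simple ratio; a minimal sketch, with hypothetical response counts:

```python
def relative_rate(timeout_lever, nontimeout_lever):
    """Relative rate on the timeout lever: responses (or responses/min) on
    that lever divided by the sum over both levers. Values above 0.50
    indicate a higher rate on the timeout lever; values below 0.50 indicate
    a preference for the alternate lever."""
    return timeout_lever / (timeout_lever + nontimeout_lever)

# Hypothetical Link 1 totals for one session: 120 responses on the
# timeout lever vs. 480 on the nontimeout lever.
print(round(relative_rate(120, 480), 2))  # -> 0.2 (nontimeout lever preferred)
```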
TABLE 2
Relative Response Rates and Reinforcement Rates for the Final 4 Sessions of Each Condition, Experiment 1

                                       Relative response rate    Relative reinforce-   Obtained      Obtainable
Sessions   Timeout       Subject       on timeout lever          ment rate on timeout  reinforce-    reinforce-
           schedule                    Link 1       Link 2       lever, Link 2         ments/min     ments/min

7-10       No timeout    M1            .59          .52          .46                   1.08
                         M2            .64          .52          .53                   1.43
                         M4            .52          .35          .55                   1.39

15-18      Timeout       M1            .20          .53          .36                   0.88          0.94
                         M2            .28          .52          .33                   1.31          1.19
                         M4            .17          .48          .36                   1.01          1.18

23-26      Timeout       M1            .31          .49          .48                   0.85          0.86
                         M2            .34          .46          .50                   1.10          1.05
                         M4            .20          .53          .50                   0.94          1.03

27-30      No timeout    M1            .68          .52          .67                   0.98
                         M2            .78          .52          .69                   1.39
                         M4            .54          .47          .62                   1.26

35-38ᵃ     Timeout       M1            .30          .44          .49                   0.83          0.86
                         M2            .40          .53          .48                   1.12          1.05
                         M4            .21          .48          .49                   1.00          1.03

ᵃ Except M2, 37-40.
slight preference for the left lever. Beginning in Session 11, timeout was introduced contingent upon the last response in the first link on the preferred lever. In all of the animals the effect of the timeout was to suppress responding in the first link on the timeout lever and to produce a slight increase in responding on the nontimeout lever (an effect perhaps analogous to "behavioral contrast" obtained with other procedures, e.g., Reynolds, 1961). Since introduction of the timeout on one lever reduced the rate of reinforcement on this lever, the differential rates of responding in the first links could have been attributable to this factor alone (Autor, 1969). But when the duration of the second link of the chain schedule on the nontimeout lever was subsequently increased to equate rate of reinforcement on the two levers (Sessions 19-26), there was no change in direction of preference of any of the subjects (with a momentary exception, Session 20, monkey M1). There was some reduction in rate of responding on the nontimeout lever in all of the subjects in this condition. When the timeout was removed in Sessions 27-30, there was a reversal of lever preference in all of the monkeys.² Reinstatement of punishment beginning in Session 31 reinstated the previous preference for the nontimeout lever and approximate levels of responding.

These results clearly establish that timeout punishment may suppress responding when an alternate, unpunished response is available. Further, the effect appeared not to be attributable to the availability of reinforcement for the unpunished response since, when this was adjusted to equal that of the punished response, the direction of lever preference was not affected. Actual rates of reinforcement in the second links of the chains were computed and are expressed in Table 2 as relative rates of reinforcement on the timeout lever. Relative rate of reinforcement was defined as reinforcements per min in the second link on the timeout lever (including the timeout periods) divided by the sum of reinforcements per min in this link on both levers. Actual reinforcement rates closely paralleled rates programmed by the variable-interval timers. Removal and reinstatement of the timeout contingency and consequent shifts in lever preference revealed that the unaltered preference for the nontimeout lever in Sessions 19-26 was not due simply to a failure of the animals to detect the change in contingencies when the second-link schedule on this lever was altered. Apparently, the differences between variable-interval 15-sec and variable-interval 30-sec schedules in Link 2 were quite discriminable (Sessions 27-30) but made little difference when the option was responding on a schedule that included timeout punishment.

In this experiment, it is also unlikely that long-term reinforcement availability can be held accountable for the direction of lever preference. Owing to the form of scheduling, presentations of timeout were relatively independent of rate of responding on the timeout lever. Moreover, since entry into the second chain link as well as timeout was contingent upon responding in the first link, a successful reduction of timeouts by an animal actually resulted in a reduction of reinforcements on the timeout lever. In Table 2, obtained reinforcements per min across each session are shown for the final 4 sessions of each condition. The latter values were computed by dividing the total number of reinforcements by the total elapsed session time for the 4 sessions. If it is to be argued that availability of reinforcement accounts for effects of timeout, it must be

² Since responding had not stabilized in Sessions 27-30, the means of relative rates in Table 2 are not representative of actual performance in any single session of this condition. Nevertheless, the values do reflect the immediate shifts in lever preference in Link 1 which were obtained.
shown that the change in a pattern of responding produced by timeout results in a greater frequency of reinforcement than otherwise obtainable. One way of dealing with the effects of overall rates of reinforcement is to compute the rates each animal could have obtained if the timeout had not suppressed responding on the timeout lever. In other words, what would the overall rate of reinforcement have been if the baseline rates of responding had been maintained during timeout conditions? Since each animal averaged 30 reinforcements per session for the to-be-punished response in the initial nontimeout condition (Sessions 7-10), each animal would have received 30 timeouts per session if timeout had had no effects on response rate. Adding to the total elapsed session time the duration of 30 timeouts and dividing the total reinforcements per session in Sessions 7-10 by these values yields "obtainable" reinforcement rates per session for the first timeout condition (Sessions 15-18). Similarly, in Sessions 23-26 and 35-38, the duration of the second link on the nontimeout lever was increased. Assuming that baseline rates of responding had been maintained, each animal would have entered these links as often as in the initial nontimeout condition. Adding this additional time (as well as time spent in timeout) to the total session length yields denominators for "obtainable" reinforcement rates in these sessions. The reinforcement rates computed by these means are shown in the right-most column of Table 2.

Comparing actual reinforcement rates to obtainable reinforcement rates, monkeys M1 and M4 obtained fewer reinforcements per min than possible in the timeout conditions as a result of their lowered rates of responding on the timeout lever in Link 1. Only M2 showed very slight increases in reinforcement rates over those obtainable owing to the change in his pattern of responding produced by the timeout.
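The "obtainable"-rate correction just described amounts to inflating the elapsed session time by the time the forgone timeouts (and any lengthened Link 2 intervals) would have consumed. A sketch with hypothetical session totals; the 15-sec timeout duration and the 30 timeouts per session come from the text, while the 50-min session length is invented for illustration:

```python
def obtainable_rate(reinforcements, elapsed_min, n_timeouts,
                    timeout_sec=15.0, extra_link2_sec=0.0):
    """Reinforcements per min that would have been obtained had baseline
    response rates persisted: add to the elapsed session time the duration
    of the timeouts (and, where the nontimeout Link 2 was lengthened, the
    extra interval time) that unsuppressed responding would have produced.
    """
    added_min = n_timeouts * (timeout_sec + extra_link2_sec) / 60.0
    return reinforcements / (elapsed_min + added_min)

# Hypothetical session: 59 reinforcements in 50 min of baseline responding,
# with 30 timeouts of 15 sec each assumed (as in Sessions 15-18).
print(round(obtainable_rate(59, 50.0, 30), 2))  # -> 1.03
```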
It seems highly unlikely that the magnitude of these increases was detectable to this animal, let alone responsible for the dramatic and substantial effects of the timeout upon his responding.

EXPERIMENT 2
It has been demonstrated that suppressive effects of timeout are related to both frequency (Thomas, 1968) and duration (Kaufman & Baron, 1968) of the timeout period, analogous to frequency and intensity effects of electric shock. The fact that timeout has properties of other known punishers favors the view that timeout may function as an aversive stimulus in some contexts. In the present study, the effects of delay of timeout presentations relative to the punished response were investigated to determine whether a delay of punishment gradient exists for timeout analogous to that which has been found for electric shock (e.g., Camp, Raymond & Church, 1967).
METHOD
The monkeys were placed for eight sessions on the baseline conditions used in the earlier experiments.³ Two variable-interval 30-sec schedules were programmed in the initial, concurrent links of the chains and reinforcement was available on independent variable-interval 15-sec schedules in the terminal links. The lever light above the functional lever was on and the alternate lever was retracted in the terminal link. As before, a 5-sec changeover delay was in effect in the concurrent portion of the schedule, and the variable-interval 30-sec timers did not begin cycling until a response had been made on either lever following reinforcement. When an animal had obtained 30 reinforcements on either lever, a daily session was automatically terminated.

Following the reestablishment of baseline rates of responding, the second-link schedule on the nonpreferred response lever (left for M1, right for M2 and M4) was increased to variable-interval 30 sec for 10 sessions. All other conditions remained the same. The remainder of the experiment was conducted in eight successive conditions, alternating between presentations of timeout on the previously established preferred response lever (the timeout condition) and removal of the timeout contingency (the nontimeout condition). The nontimeout conditions were interpolated between timeout conditions in order to attenuate the effects of prior timeout presentations. Each condition was in effect until response rates were stable, eight sessions for all but the initial nontimeout condition (six sessions). In all of these sessions, the duration of the second link on the previously established nonpreferred lever was fixed at 30 sec (fixed-interval 30-sec schedule). The duration of the second link on the alternative (timeout) lever was fixed at 15 sec (fixed-interval 15-sec schedule). The initial, concurrent links were variable-interval 30-sec schedules, as before.
In the timeout conditions, retraction of the response lever(s) and turning off the lever light and tape readers for 15 sec were made contingent upon the last response in the initial, variable-interval 30-sec link of the response chain on the timeout lever. The actual presentation of the timeout was either immediate or delayed by 3, 9, or 15 sec. If timeout was delayed, it was presented independently of responding in the second (fixed-interval 15-sec) link of the chain schedule on the timeout lever. At the end of a timeout, the lever reentered the chamber, the lever light was turned on, and the remainder of the interval was in effect. Each animal received all four timeout-delay intervals in a randomly determined sequence, one duration of interval in each condition, with the constraint that no two animals received the same delay of timeout within a given condition.

³ Prior to this experiment, Experiment 1 was replicated with some minor procedural changes. The results were identical to those of Experiment 1 in every significant respect.
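A randomized assignment under this constraint can be sketched as a rejection-sampling loop. The subject labels are from the text; the sampling scheme itself is an assumption, since the paper does not say how the sequences were generated:

```python
import random

DELAYS = [0, 3, 9, 15]  # timeout-delay intervals (in sec)

def assign_delays(n_animals=3, seed=1):
    """Give each animal all four delays in random order (rows), rejecting
    any draw in which two animals share a delay within a condition
    (column), and return one acceptable assignment."""
    rng = random.Random(seed)
    while True:
        rows = [rng.sample(DELAYS, len(DELAYS)) for _ in range(n_animals)]
        if all(len({row[c] for row in rows}) == n_animals
               for c in range(len(DELAYS))):
            return rows

# Print one admissible sequence of delays per animal:
for animal, order in zip(("M1", "M2", "M4"), assign_delays()):
    print(animal, order)
```

Rejection sampling is wasteful for large designs, but with 3 animals and 4 delays an acceptable draw is found almost immediately.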
RESULTS
Repeated measures analyses of variance of Link 1 data were performed on sessions by timeout-delay intervals. Nonsignificant session effects and sessions by timeout-delay interactions over the final three sessions of each delay condition were obtained on absolute responding (F(2,4) = 1.21, p < 0.50, and F < 1.00, respectively) and on relative response rates (F < 1.00, and F(6,12) = 1.14, p < 0.50, respectively), showing that responding had stabilized. Therefore, these sessions provide the data for Figure 2, which shows mean numbers of responses and mean relative rates of responding in Link 1 on the timeout lever for each of the four timeout-delay intervals. Means of responses or relative rates for the four nontimeout conditions are shown at the right of each graph. Relative rate of responding was defined as number of responses on the timeout lever divided by the sum of responses on the two levers in the first link.

The effect of delaying the timeout upon absolute responding in the first link was consistent across the three animals. Relative to the nontimeout conditions, there was a trend toward greater suppression of responding when timeout was immediate and less suppression as the delay was increased to 15 sec. The analysis of variance showed the timeout-delay effect to be significant (F(3,6) = 8.98, p < 0.025). The functions relating relative rates of responding to timeout delay
0’21
1
1
9
15 yg
TIMEOUT
DELAY
1
0
INTERVAL
3
9
15
y$
O
UN SEC)
FIG. 2. Mean numbers of responses and mean relative rates of the first chain link on the timeout lever. The values were computed 3 sessions of each of the immediate and delayed timeout conditions timeout ( NO-TO ) conditions.
responding for the and the
in final non-
were not as consistent, owing largely to monkey M2. The analysis of variance on this measure yielded a nonsignificant timeout-delay effect (F(3,6) = 2.48, .10 < p < 0.25). Monkeys M1 and M4 showed a preference for the nontimeout lever (relative rates on the timeout lever < 0.5) at delays of 0 and 3 sec, and M4 also preferred the nontimeout lever at 9 and 15 sec delays. Both animals preferred the timeout lever when timeout was removed, which was consistent with the results of Experiment 1 and probably attributable to the lower rate of reinforcement (fixed-interval 30-sec schedule) in the second link of the chain on this lever (cf. Autor, 1969). Subject M2, on the other hand, showed a preference for the timeout lever whether or not responding in the first chain link produced timeout. At the 0-sec delay, this was a reversal of earlier performance. It seems likely that continuous exposure to the lower rate of reinforcement on the nontimeout lever during the nontimeout conditions produced this effect.

GENERAL DISCUSSION
The present experiments failed to support Leitenberg's (1965) suggestion that availability of unconditioned reinforcement may account for apparent aversive effects of timeout. Rather, the results of the manipulation of unconditioned reinforcement availability and the obtained delay of punishment gradient are consistent with the view that timeout functioned as an aversive event.

Other explanations for these data which would not appeal to the aversiveness of timeout may also be dealt with. For one, it has been suggested that response rates in the initial links of chains in procedures like the present ones may be affected by response rates in the terminal links "in anticipatory fashion" (Logan & Wagner, 1965). That is, in the present experiments, if timeout had caused a reduction in Link 2 response rate on the timeout lever, a form of generalization of this rate to the first link of the chain on this lever could account for the reduction in responding and obtained lever preference. But Table 2 shows that relative rates of responding in the second chain link on the timeout lever in Experiment 1 were affected very little by timeout presentations, remaining close to the 0.50 level throughout the study. This result is also consistent with Staddon's (1970) recent observation that timeouts appear not to affect post-timeout responding on variable-interval schedules, which tend to produce constant response rates. Therefore, some factors other than differential rates of responding in Link 2 must account for effects of the timeout upon responding in Link 1.

Interpretations of these data might also be in terms of effectiveness of
the second-link stimulus lights as conditioned reinforcers. On the one hand, it might be observed that since timeout involves withholding of a potential conditioned reinforcer (the stimulus correlated with food), delay of conditioned reinforcement on the timeout lever in these studies would explain the lowered response rate in the first link of the chain. Under many scheduling conditions, including those in Experiment 1, this would provide a viable explanation for response suppression due to timeout. However, although in Experiment 2 the greatest suppression was obtained when the timeout followed Link 1 responding immediately, there was also ample evidence that delayed timeouts could suppress responding (and some limited evidence that delayed timeouts could produce preference for the nontimeout lever). Since conditioned reinforcement for Link 1 responding was immediate when timeout was delayed, it would be difficult to argue that delay of conditioned reinforcement was the primary cause for the suppression. A more plausible account in terms of the role of conditioned reinforcement might be that effectiveness of the stimulus light as a reinforcer was reduced by its correlation with timeout periods irrespective of their placement within the terminal link. There is evidence that stimuli paired with periods of "frustrative nonreward" acquire conditioned aversive properties (e.g., Wagner, 1963). In these terms, if the timeout in the present experiments functioned as an aversive event, the reinforcing effectiveness of the Link 2 stimulus on the timeout lever could have been reduced by the temporal relationship between the stimulus and the timeout. However, there seems to be little to favor this account over the more parsimonious view that timeout simply functioned as an aversive event to directly punish responding in the first link of the chain.

REFERENCES

AUTOR, S. M. The strength of conditioned reinforcers as a function of frequency and probability of reinforcement. In D. P. Hendry (Ed.), Conditioned reinforcement. Homewood, IL: The Dorsey Press, 1969, pp. 127-162.
BARON, A., & KAUFMAN, A. Time-out punishment: Preexposure to time-out and opportunity to respond during time-out. Journal of Comparative and Physiological Psychology, 1969, 67, 479-485.
CAMP, D. S., RAYMOND, G. A., & CHURCH, R. M. Temporal relationship between response and punishment. Journal of Experimental Psychology, 1967, 74, 114-123.
CARLSON, J. G. Delay of primary reinforcement in effects of two forms of response-contingent time-out. Journal of Comparative and Physiological Psychology, 1970, 70, 148-153.
CARLSON, J. G., & AROKSAAR, R. E. Effects of time-out upon concurrent operant responding. Psychological Record, 1970, 20, 365-371.
HOLZ, W. C., AZRIN, N. H., & AYLLON, T. Elimination of behavior of mental patients by response-produced extinction. Journal of the Experimental Analysis of Behavior, 1963, 6, 407-412.
KAUFMAN, A., & BARON, A. Suppression of behavior by timeout punishment when suppression results in loss of positive reinforcement. Journal of the Experimental Analysis of Behavior, 1968, 11, 595-607.
LEITENBERG, H. Is time-out from positive reinforcement an aversive event? A review of the experimental evidence. Psychological Bulletin, 1965, 64, 428-441.
LOGAN, F. A., & WAGNER, A. R. Reward and punishment. Boston: Allyn and Bacon, 1965.
REYNOLDS, G. S. Behavioral contrast. Journal of the Experimental Analysis of Behavior, 1961, 4, 57-71.
STADDON, J. E. R. Temporal effects of reinforcement: A negative "frustration" effect. Learning and Motivation, 1970, 1, 227-247.
THOMAS, J. R. Fixed ratio punishment by time-out of concurrent variable-interval behavior. Journal of the Experimental Analysis of Behavior, 1968, 11, 609-616.
WAGNER, A. R. Conditioned frustration as a learned drive. Journal of Experimental Psychology, 1963, 66, 142-148.

(Received November 13, 1970)