JOURNAL OF MATHEMATICAL PSYCHOLOGY 11, 79-106 (1974)

A Theoretical Comparison of List Scanning Models¹

STEVEN K. SHEVELL² AND RICHARD C. ATKINSON

Stanford University, Stanford, California 94305
Eight specific list scanning models are developed and relationships between predictions of contrasting models (serial versus parallel search, self-terminating versus exhaustive processing) are investigated. All models assume that items contained in a list are uniquely defined by specific features, and that the only process by which two items may be tested for equality is to compare corresponding features. All models predict RT to be linearly related to list length when no subject errors occur. With errors, some models predict theoretical nonlinearity, but sample calculations indicate that deviation from linearity is slight in many cases. The importance of coding strategy is also discussed.
Information processing models that involve matching or comparison operations have long been of interest (Hick, 1952; Christie and Luce, 1956). One important context for such processing is scanning of a list held in short-term memory (Sternberg, 1969). In this paper we investigate the theoretical implications of a special class of scanning models. Our focus will be on the relationships between predictions of contrasting models (serial versus parallel search, self-terminating versus exhaustive processing) and on the underlying cognitive processes which may be inferred from certain types of experimental data. Finally, we will comment on a type of experimental design that might be used to distinguish between the models.

The special class of models we shall consider assumes that each item in a list is characterized by distinct features. An item may be a single digit, a letter, a word, a nonsense syllable, or any other symbol. Each item is assumed to be uniquely defined by exactly K features. All items are contained within the same K-dimensional feature space, and the K dimensions are assumed to be independent. For any item, a given feature may take on any value in that feature's character set. An item's value along a given feature dimension is randomly selected from that feature's character set and, within the constraint that K features must uniquely specify each item, each feature

¹ This research was supported by grants from the National Institute of Mental Health (MH 21747) and the National Science Foundation (NSF GJ-443X3). The first author was supported by a National Science Foundation Graduate Fellowship during final preparation of this paper.

² Now at the Department of Psychology, University of Michigan.

Copyright © 1974 by Academic Press, Inc. All rights of reproduction in any form reserved.
value is equiprobable. The size of the character set for feature i is denoted by w_i. For example, let the ith feature be color (1 ≤ i ≤ K). If items may have the value red, blue, yellow, green, or orange for feature i, the size of the character set for this feature dimension is 5 (i.e., w_i = 5) and the probability of randomly drawing a blue item from the item pool is 0.2 = 1/w_i. Although the above example used the feature color, feature dimensions need not be explicit. Many (or all) features may be implicit and, in general, the number of feature dimensions is a parameter that must be estimated (although experimental attempts can be made to manipulate the value of K).

In the task we shall consider, a subject is presented with a search list composed of N items. After the search list is presented there is a brief interval during which the subject must retain the search list in memory. Then a test item is presented. A positive test item is one that is identical to an item in the search list; a negative test item does not have a matching item in the search list. The subject responds yes if he perceives the test item to be identical to any item in the search list; otherwise he responds no. The items composing the search list and the test item are randomly drawn (without replacement) from the item pool. By definition, any item in the item pool may be in the search list and/or be the test item; items outside the item pool can never be presented to the subject. When the test item is positive, the search list is formed by randomly selecting N items from the pool. The test item is then randomly selected from the search list. In the case of a negative test item, N + 1 items are randomly drawn from the item pool. One of the N + 1 items is randomly chosen to serve as the test item, and the remaining N items form the search list. All items in the item pool have equal probability of being selected. The item pool is composed of all items in the K-dimensional feature space.
Since the feature dimensions are assumed independent, there will be a total of

∏_{i=1}^{K} w_i

items in the item pool.³ For the models we consider, a very specific response criterion is assumed. A subject will respond yes if and only if an item in the search list is perceived to have all K features matching the corresponding features of the test item. A no response will be made when a subject determines that no item in the search list is perceived to have all K features matching the features of the test item.

³ The models we will develop apply equally well to a task where the search list (and therefore the information held in memory) is a single item and the test item is replaced by a test list (a list drawn from the item pool). The subject responds yes if any item in the test list is perceived to match the item retained in memory; otherwise he responds no.
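As a concrete illustration, the item pool and the trial-sampling procedure described above can be sketched in a few lines of Python (the character-set sizes and list length below are arbitrary illustrative choices, not values from the text):

```python
import itertools
import random

def build_pool(w):
    """All items in the K-dimensional feature space; pool size is prod(w_i)."""
    return list(itertools.product(*[range(wi) for wi in w]))

def draw_trial(pool, N, positive, rng):
    """Draw a search list of N items and a test item, without replacement."""
    if positive:
        search = rng.sample(pool, N)
        test = rng.choice(search)          # test item matches a list item
    else:
        items = rng.sample(pool, N + 1)    # N + 1 draws; one becomes the test
        test, search = items[0], items[1:]
    return search, test

w = [5, 3, 2]                  # K = 3 features with character-set sizes 5, 3, 2
pool = build_pool(w)
assert len(pool) == 5 * 3 * 2  # the product of the w_i

rng = random.Random(0)
search, test = draw_trial(pool, N=4, positive=False, rng=rng)
assert test not in search and len(set(search)) == 4
```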
SPECIFICATION OF MODELS
We shall consider search models that compare the features of the test item with the features of items in the search list. List searches may be specified by two characteristics: (1) self-terminating versus exhaustive search; (2) serial versus parallel processing. With the addition of the feature notion, the self-terminating versus exhaustive distinction may have neither, either, or both of the following definitions:

(1) Self-terminating on items. Terminate processing of an item in the search list as soon as any feature of that item is perceived not to match the corresponding feature of the test item.

(2) Self-terminating on the list. Terminate all processing of the search list (i.e., initiate response) as soon as all K features of any item in the search list are perceived to match the corresponding K features of the test item.

The first definition implies that there is no further processing for a given item after earlier processing indicates that the item does not match the test item. The second definition implies that all processing stops as soon as the criterion for a yes response is reached. In both cases, self-termination does not affect accuracy of response. The processing which is not done because of self-termination is useless processing, since (for the models being considered) that processing cannot change the response. A model which is both self-terminating on items and self-terminating on the list is the most efficient search in terms of the number of features processed; the reason is that no processing occurs that cannot affect the response. Any less processing would not be sufficient to satisfy the response criterion.

For the models to be considered here, the self-terminating versus exhaustive distinction has four possible meanings: (1) totally exhaustive search; (2) self-terminating on the list only; (3) self-terminating on items only; (4) self-terminating on items and on the list. These four cases may be viewed as the elements of a 2 x 2 matrix, as shown in Fig. 1.

For notational convenience, the four cases are labeled with roman numerals I through IV. In defining a serial search, we have followed Townsend (1971). A serial search is one in which the items in the search list are processed one at a time and processing on a single item, once begun, is completed before processing on any other item can begin.

The parallel process that we shall consider does not involve a simultaneous processor. In the parallel search, all items are compared along a single feature dimension before
FIG. 1. Four self-terminating cases. [A 2 x 2 matrix crossing complete processing of all features of an item versus self-terminating on items with complete processing of the search list versus self-terminating on the list; the cells are labeled I through IV.]
any item is compared along any other feature dimension. That is, the features are processed one at a time and processing along a single feature dimension, once begun, is completed for all items in the search list (except those for which processing has stopped due to a self-terminating aspect of a model) before any comparisons along other feature dimensions can occur. This definition of parallel search is the natural counterpart to the serial search. The search is parallel in the sense that all search list items are partially processed before processing is completed for any item (except in special self-terminating cases). This search approximates a simultaneous search in which the processing rate is inversely proportional to the load. However, strictly speaking our parallel search model does not permit simultaneous processing and therefore Townsend would likely refer to this type of parallel search as a hybrid. Any disagreement is semantic and not substantive.⁴ When the serial versus parallel distinction is added to the matrix of Fig. 1 we have a 2 x 2 x 2 matrix (Fig. 2). Each element of this matrix is a separate model, yielding
FIG. 2. Matrix representation of the eight models. [The 2 x 2 matrix of Fig. 1 crossed with the serial versus parallel distinction.]
eight models in all. The models will be denoted by a roman numeral (using the notation of Fig. 1) followed by the letter S or P (indicating serial search or parallel search, respectively). For example, Model IIS is a serial model that is self-terminating on the list but where all items checked are completely processed; Model IVP is a parallel search model that is self-terminating on items and on the list.

⁴ Although we will maintain the serial versus parallel distinction as described above, the serial search may be alternatively described as a depth-first search (search each item to its end before beginning another item) and the parallel search as a breadth-first (across item) search.
A sample search list containing six items (N = 6) with three features per item (K = 3) is shown in Fig. 3. Two sample test items (one positive, one negative) are also given. The three letters within each item are symbolic representations of feature values; it is not meant, for example, that the top item is "ABC." The order in which the features of the search list items are processed under each of the eight models is given in Fig. 3 for both the positive and the negative test items. The processing order shown assumes that the items are processed from top to bottom and the features from left to right (neither of these assumptions is a constraint on any of the models). Under these assumptions, a serial search implies scanning row by row and a parallel search implies scanning down columns.

FIG. 3. Sample search list and test items, and scanning orders. [Search list items include (ABC), (DEF), (JKL), and (MNO); positive test item: (GHI); negative test item: (GKI).]

Subject Errors
Subjects cannot be expected to complete hundreds of trials of the task without error. We assume in these models that the only source of subject error is inaccurate feature comparisons. Thus we introduce the distinction between a true match and a perceived match. When two identical features are compared, there is a true feature match. A perceived feature match is where the subject perceives that two features are identical, whether or not they are actually the same. Similarly, in comparing two items, a true item match implies a true feature match for all K corresponding features. A perceived item match indicates that the subject perceives that a true item match exists. Error parameters determine the accuracy of a subject's comparisons. This requires two parameters which are assumed to be independent:

μ = Pr(perceived feature match | true feature match),
η = Pr(feature match not perceived | not a true feature match).
When μ = η = 1 there will be no errors. Note that some conditional error probabilities are independent of one of the error parameters. For example, if μ = 1 then no value of η can cause an incorrect response to a positive test item. However, for some models there will be a change in response time with changes in the value of η, since decreasing η implies a greater probability of a perceived feature match and also increases the probability that a correct yes response is due to a search list item processed prior to the item which is identical to the test item. Expressions for the probability of a false alarm and a miss will be developed later in this paper.

General Reaction Time Expressions

For all eight models, a subject's total reaction time will be given by

RT = α_y + βφ for a yes response;
RT = α_n + βφ for a no response.

The constants α_y and α_n include encoding and response time, β is the time required for each feature comparison, and φ is a function specifying the number of feature comparisons made. Since α_y, α_n, and β are constants, in the discussion that follows we will be concerned with the function φ. Of course, RT is not equal to φ, so one would not expect RT data to conform directly to the predicted values of φ. However, φ is the variable of interest since α_y, α_n, and β are not functions of N. Specifically, dRT/dN, the change in total reaction time with respect to N, is equal to β(dφ/dN), the change in φ with respect to N, times a constant. If β = 1, then dRT/dN = dφ/dN.
Expressions for φ will be developed for each of the eight models. The number of feature comparisons on a given trial may itself be a random variable, and thus in general φ is defined as the expected number of feature comparisons under the stated conditions. Errorless performance is of course a special case where the error parameters μ and η are equal to 1. However, because the general expressions for φ are somewhat complex, expressions first will be developed under the assumption that there are no errors. The derivation of these expressions will be done in some detail, and will provide insight into the approach used in developing expressions where error is considered.

Notation

For compactness and convenience, the following notation will be used:

tfm: true feature match;
pfm: perceived feature match;
tim: true item match;
pim: perceived item match;
C: number of feature comparisons for a single item in models that are self-terminating on items;
+: positive test item;
-: negative test item;
y: subject response is yes;
n: subject response is no.

[A bar over any of the above indicates not. For example, p̄im means an item match is not perceived.]

φ((model), (test item type), (subject response)): expected number of feature comparisons given the stated model, test item type (+ or -), and subject response (y or n). For example, φ(IVS, +, y) is the expected number of feature comparisons under Model IVS when a positive test item is presented and the subject responds yes. When an expression for φ is not conditional on the response (or in errorless cases where the test item type determines the subject response) the third argument will be dropped [i.e., φ(IVS, +)].

ERRORLESS PERFORMANCE
The development of some models requires a theorem. Rather than interrupt development of the models, the theorem is presented here.

THEOREM 1. Consider any item randomly selected from the item pool that is not identical to the test item. If processing on that item is halted as soon as a single feature is perceived not to match the corresponding feature of the test item, then the expected number of feature comparisons for that item is

γ = E(C | t̄im) = [Σ_{l=1}^{K} l (∏_{j=1}^{l-1} 1/w_j)(1 - 1/w_l)] / [1 - ∏_{j=1}^{K} 1/w_j].

Proof. By definition,

E(C | t̄im) = Σ_{l=1}^{K} [l x Pr(exactly l feature comparisons | t̄im)]
= Σ_{l=1}^{K} [l x Pr(pfm on the first l - 1 features and p̄fm on feature l | t̄im)].

Expanding the conditional probability expression,

E(C | t̄im) = Σ_{l=1}^{K} [l x Pr(pfm on the first l - 1 features) x Pr(p̄fm on feature l)] / Pr(t̄im).

Evaluation of the probabilities on the right completes the proof. Q.E.D.
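Theorem 1 can be checked numerically: the closed form should agree with a direct enumeration over the possible patterns of true feature matches for a non-matching item. The sketch below does this for arbitrary illustrative character-set sizes:

```python
from itertools import product
from math import prod

def gamma_closed(w):
    """Theorem 1: E(C | not a true item match), errorless, item-terminating."""
    K = len(w)
    p = [1 / wi for wi in w]               # Pr(true feature match) per feature
    num = sum(l * prod(p[: l - 1]) * (1 - p[l - 1]) for l in range(1, K + 1))
    return num / (1 - prod(p))

def gamma_enum(w):
    """Direct enumeration over match/mismatch patterns, excluding a full match."""
    K = len(w)
    total = norm = 0.0
    for patt in product([True, False], repeat=K):
        if all(patt):
            continue                        # that would be a true item match
        pr = prod(1 / wi if m else (wi - 1) / wi for m, wi in zip(patt, w))
        comps = patt.index(False) + 1       # stop at the first mismatch
        total += pr * comps
        norm += pr
    return total / norm

w = [2, 3, 4]                               # illustrative character-set sizes
assert abs(gamma_closed(w) - gamma_enum(w)) < 1e-12
```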
The ratio

ρ((model)) = [dφ((model), +, y)/dN] / [dφ((model), -, n)/dN]

of the slope of φ((model), +, y) to the slope of φ((model), -, n) is of major importance in investigating each model. In general, slopes may be a function of N (i.e., the value of φ may not be linear with N). In cases where the slope is not constant, we shall consider the ratio of the average slopes, where the average slope of a model from N = 1 to N = N_max is defined by

[φ_{N_max}((model), ., .) - φ_1((model), ., .)] / (N_max - 1).

The ratio of the average slopes is

ρ((model), N_max) = [φ_{N_max}((model), +, y) - φ_1((model), +, y)] / [φ_{N_max}((model), -, n) - φ_1((model), -, n)],

where the subscript on φ denotes the list length at which φ is evaluated.
Development of Models

MODEL IS. This is the totally exhaustive, serial search case. For any test item, positive or negative, all features of all items will be processed. Thus,

φ(IS, +) = φ(IS, -) = NK,
ρ(IS) = 1.

MODEL IIS. All features of all items prior to and including the item matching the test item will be checked. Since the matching search list item is randomly positioned in the search list, it is equally likely to be the first through Nth item processed. Thus,

φ(IIS, +) = K Σ_{j=1}^{N} (j/N) = K(N + 1)/2.
There is no matching item in the search list when the test item is negative, so all items in the search list are processed. Therefore,

φ(IIS, -) = NK,
ρ(IIS) = 1/2.
MODEL IIIS. All items in the search list will be processed, but processing is self-terminating on items. If the test item is positive, then the matching item in the search list will have K features processed. All search list items not identical to the test item will have an average of γ features compared. Therefore,

φ(IIIS, +) = K + (N - 1)γ,
φ(IIIS, -) = Nγ,
ρ(IIIS) = 1.
MODEL IVS. If the test item is positive, the probability of terminating the processing of the list on any specific item is 1/N. There will always be at least K feature comparisons if a search list item matches the test item, and an average of γ features compared for every other search list item that is processed. Thus,

φ(IVS, +) = K + (1/N) Σ_{l=1}^{N} (l - 1)γ = K + ((N - 1)/2)γ.

For a negative test item, processing cannot terminate on the list, so

φ(IVS, -) = Nγ,
ρ(IVS) = 1/2.

MODEL IP. All features of all items will be processed. Thus

φ(IP, +) = φ(IP, -) = NK,
ρ(IP) = 1.
p(IP) = 1. MODEL IIP. A parallel search which is self-terminating only when an item match is found will always complete at least N(K - 1) feature comparisons. If the test item is positive, the matching search list item is randomly placed in the list. Therefore,
SHEVELL AND ATKINSON
88
For a negative test item, all features of all items will be checked (no termination of list processing)so that f$(IIP, -)
= NK
p(IIP) = (K - $)/K. MODEL IIIP. Although the search is parallel, the expected number of feature comparisonsfor each item is the sameas in case111ssince processingfor a given item continues until a feature mismatchfor that item is discovered. Thus
φ(IIIP, +) = K + (N - 1)γ,
φ(IIIP, -) = Nγ,
ρ(IIIP) = 1.

MODEL IVP. This case is similar to Model IIIP. The difference occurs when the test item is positive. Then it is necessary to subtract from φ(IIIP, +) the comparisons made for Kth features processed after the item match is discovered. For any item not equal to the test item, the probability of processing the Kth feature (i.e., the probability of not terminating on a given item before feature K) is

(∏_{j=1}^{K-1} 1/w_j)(1 - 1/w_K) / [1 - ∏_{j=1}^{K} 1/w_j].

Since the matching search list item is randomly ordered in the search list,

φ(IVP, +) = K + (N - 1)γ - ((N - 1)/2)(∏_{j=1}^{K-1} 1/w_j)(1 - 1/w_K)/[1 - ∏_{j=1}^{K} 1/w_j].

For a negative test item, there can be no self-termination on the list and thus

φ(IVP, -) = Nγ,
ρ(IVP) = 1 - (1/2){(∏_{j=1}^{K-1} 1/w_j)(1 - 1/w_K)/[1 - ∏_{j=1}^{K} 1/w_j]}/γ.
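The errorless expressions above can be collected into a short script that recovers each model's slope ratio from first differences of φ in N (a sketch; the model labels and formulas follow the development above, and the character-set sizes are illustrative):

```python
from math import isclose, prod

def gamma(w):
    """Theorem 1: expected comparisons for a non-matching item (errorless)."""
    p = [1 / wi for wi in w]
    K = len(w)
    return sum(l * prod(p[: l - 1]) * (1 - p[l - 1])
               for l in range(1, K + 1)) / (1 - prod(p))

def q_reach_K(w):
    """Pr(a non-matching item survives to its Kth feature), errorless."""
    p = [1 / wi for wi in w]
    return prod(p[:-1]) * (1 - p[-1]) / (1 - prod(p))

def phi(model, w, N):
    """(phi(+), phi(-)) for list length N under errorless performance."""
    K, g = len(w), gamma(w)
    return {
        "IS":   (N * K, N * K),
        "IIS":  (K * (N + 1) / 2, N * K),
        "IIIS": (K + (N - 1) * g, N * g),
        "IVS":  (K + (N - 1) / 2 * g, N * g),
        "IP":   (N * K, N * K),
        "IIP":  (N * (K - 1) + (N + 1) / 2, N * K),
        "IIIP": (K + (N - 1) * g, N * g),
        "IVP":  (K + (N - 1) * g - (N - 1) / 2 * q_reach_K(w), N * g),
    }[model]

def rho(model, w):
    """Slope ratio from first differences in N (phi is linear in N here)."""
    (p3, n3), (p2, n2) = phi(model, w, 3), phi(model, w, 2)
    return (p3 - p2) / (n3 - n2)

w = [2, 3, 4]                      # illustrative character-set sizes, K = 3
assert isclose(rho("IIS", w), 0.5) and isclose(rho("IVS", w), 0.5)
assert isclose(rho("IIP", w), (len(w) - 0.5) / len(w))
assert all(isclose(rho(m, w), 1.0) for m in ("IS", "IP", "IIIS", "IIIP"))
assert 0.5 < rho("IVP", w) < 1.0
```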
Remarks Concerning Errorless Performance
When the error probability is zero, every model yields values of φ that are linear functions of N; neither type of self-termination can distort this linearity. Upon reflection this result is not surprising. Self-termination on items affects only the expected number of feature comparisons per item. Self-termination on the list can result in later items not being processed if an item match is found for an earlier item, but since the matching item is randomly placed in the search list, φ is still a linear function of N.

The slope ratios for the serial models are of two types. For models that are self-terminating on the list, the slope of φ for a positive test item is one-half the slope of φ for a negative test item since, on the average, (N - 1)/2 items in the search list will not be processed when the test item is positive. Models which are not self-terminating on the list process all N items regardless of test item type, yielding a slope ratio of 1.

For parallel search models that do not self-terminate on the list, the slope ratio is 1 because all search list items are always processed for either type of test item. Model IIP is more complex because it terminates on the list. In general, 1/2 < ρ(IIP) < 1 and is a function of K. The ratio ρ(IVP) is also restricted to values between 1/2 and 1, but is dependent on the character set sizes for the feature dimensions as well as the value of K. ρ(IVP) will tend to 1 more quickly than ρ(IIP) since

(∏_{j=1}^{K-1} 1/w_j)(1 - 1/w_K) / [1 - ∏_{j=1}^{K} 1/w_j]

decreases rapidly as K and/or the character set sizes increase. For large values of K, the slope ratios for all four parallel search models will approach 1.

It is important to note that the information contained in the parameters K, w_1, w_2,..., w_K is far in excess of just the item pool size

∏_{i=1}^{K} w_i.

For example, consider an item pool consisting of eight digits (0, 1, 2, 3, 4, 5, 6, 7). Binary coding requires K = 3 and w_1 = w_2 = w_3 = 2, implying ρ(IIP) = 0.83 and ρ(IVP) = 0.95. The same information processed in an octal fashion would have K = 1 and w_1 = 8, and therefore ρ(IIP) = ρ(IVP) = 1/2. Thus coding strategies may result in individual differences even if all subjects are using the same scanning model.
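The binary-versus-octal example can be verified directly (a sketch; `rho_IIP` and `rho_IVP` are hypothetical helper names for the closed-form slope ratios derived above):

```python
from math import prod

def gamma(w):
    """Theorem 1: expected comparisons for a non-matching item (errorless)."""
    p = [1 / wi for wi in w]
    K = len(w)
    return sum(l * prod(p[: l - 1]) * (1 - p[l - 1])
               for l in range(1, K + 1)) / (1 - prod(p))

def q_reach_K(w):
    """Pr(a non-matching item is processed on its Kth feature), errorless."""
    p = [1 / wi for wi in w]
    return prod(p[:-1]) * (1 - p[-1]) / (1 - prod(p))

def rho_IIP(w):
    K = len(w)
    return (K - 0.5) / K

def rho_IVP(w):
    return 1 - q_reach_K(w) / (2 * gamma(w))

binary = [2, 2, 2]   # eight items coded as three binary features
octal = [8]          # the same eight items coded as one octal feature
assert round(rho_IIP(binary), 2) == 0.83
assert round(rho_IVP(binary), 2) == 0.95
assert rho_IIP(octal) == rho_IVP(octal) == 0.5
```

With these inputs the script reproduces the values quoted in the text: ρ(IIP) = 0.83 and ρ(IVP) = 0.95 under binary coding, and ρ = 1/2 for both models under octal coding.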
ERROR PROBABILITIES

Before investigating the expected number of feature comparisons under conditions where subject errors may occur, it is first necessary to determine the probability that a subject will make an incorrect response, since self-termination on the list is dependent on perceived item matches. Values for the probability of a false alarm and the probability of a miss also can be useful in estimating the error parameters μ and η from experimental data.
Probability of a False Alarm

By definition, Pr(n | -) is the probability that no search list item is incorrectly perceived to match the test item. Pr(n | -) is a decreasing function of search list length since, as N increases, there are more items which can be incorrectly perceived to match the test item. Let:

R(N) = Pr(n | -) for a search list of size N;

i_l = the exact number of features of the lth search list item which are not identical to the corresponding features of the test item.

The probability of a perceived item match for a search list item with i_l feature mismatches is μ^{K-i_l}(1 - η)^{i_l}, and therefore

Pr(p̄im | i_l) = 1 - μ^{K-i_l}(1 - η)^{i_l}.

For a negative test item and N = 1, R(1) is just the expected probability of not finding a perceived item match, averaged over all items in the pool which are not identical to the test item. Thus,

R(1) = Σ_{i_1=1}^{K} {Pr(i_1 | t̄im)[1 - μ^{K-i_1}(1 - η)^{i_1}]}.
Expressions for R are more complex for N ≥ 2 since sampling from the item pool is without replacement (i.e., all search list items are distinct). For a search list with two items (and with a negative test item),

R(2) = Σ_{i_1=1}^{K} Σ_{i_2=1}^{K} {Pr(i_1 | t̄im) Pr(i_2 | i_1, t̄im)[1 - μ^{K-i_1}(1 - η)^{i_1}][1 - μ^{K-i_2}(1 - η)^{i_2}]}.

In general, for a search list with N items not identical to the test item,

R(N) = Σ_{i_1=1}^{K} Σ_{i_2=1}^{K} ... Σ_{i_N=1}^{K} {Pr(i_1 | t̄im) Pr(i_2 | i_1, t̄im) ... Pr(i_N | i_1, i_2,..., i_{N-1}, t̄im) x [1 - μ^{K-i_1}(1 - η)^{i_1}][1 - μ^{K-i_2}(1 - η)^{i_2}] ... [1 - μ^{K-i_N}(1 - η)^{i_N}]}.
Difficulty in evaluating R(N) is due to the calculation of the conditional probabilities. We will not show the cumbersome expansion for R(N) in terms of the parameters of the models. A relatively simple computer algorithm can provide values for R(N).⁵

⁵ For large item pools, sampling from the pool without replacement may be approximated by sampling from a constant size pool (i.e., sampling with replacement). In this case the list items are independent, so R(N) = [R(1)]^N. This approximation is not used in any calculations discussed in this paper.
Since Pr(n | -) = R(N), the probability of a false alarm for a search list of size N is

Pr(y | -) = 1 - R(N).

Probability of a Miss

When a positive test item is presented, N - 1 search list items are not identical to the test item. Thus the probability that no search list item results in a perceived item match is

Pr(n | +) = R(N - 1)(1 - μ^K).
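For a small item pool, R(N), the false-alarm probability, and the miss probability can be computed by brute-force enumeration of distinct search lists, which respects the without-replacement sampling (a sketch with arbitrary illustrative parameter values):

```python
from itertools import combinations, product
from math import prod

def pim_prob(item, test, mu, eta):
    """Pr(perceived item match): mu per true feature match, 1-eta per mismatch."""
    return prod(mu if a == b else (1 - eta) for a, b in zip(item, test))

def R(N, w, test, mu, eta):
    """R(N) = Pr(n | -): no search list item perceived to match the test item,
    averaging over all distinct N-subsets of non-matching pool items."""
    pool = [it for it in product(*[range(wi) for wi in w]) if it != test]
    lists = list(combinations(pool, N))
    return sum(prod(1 - pim_prob(it, test, mu, eta) for it in lst)
               for lst in lists) / len(lists)

w, mu, eta = [2, 2, 2], 0.95, 0.9
test = (0, 0, 0)
K, N = len(w), 2
false_alarm = 1 - R(N, w, test, mu, eta)            # Pr(y | -)
miss = R(N - 1, w, test, mu, eta) * (1 - mu ** K)   # Pr(n | +)
assert 0 < false_alarm < 1 and 0 < miss < 1
assert R(2, w, test, mu, eta) < R(1, w, test, mu, eta)  # R decreases with N
```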
COMPARISON TIMES WITH SUBJECT ERRORS
The method of development will be demonstrated by presenting the derivation for φ(IVS, +, y). All other results will be shown without derivation. First, however, two theorems are necessary. To avoid ambiguity, C′ is used in place of C when subject errors may occur.

THEOREM 2. Let feature dimension i have character set size w_i. If each of the w_i values is equally likely, then the probability of a perceived feature match between feature i of the test item and feature i of any item randomly drawn from the complete item pool is

Pr(pfm) = μ/w_i + (w_i - 1)(1 - η)/w_i = a_i.

Proof. By assumption, for any item in the item pool the feature value along a given feature dimension is randomly chosen from the character set for that dimension. Since each character set value is equally likely, the probability of a true feature match for feature i between the test item and an item randomly drawn from the complete item pool is 1/w_i. Substituting this value into the identity

Pr(pfm) = Pr(tfm) Pr(pfm | tfm) + Pr(t̄fm) Pr(pfm | t̄fm),

we obtain the required expression. Q.E.D.
The next theorem is a generalization of Theorem 1.

THEOREM 3. Consider any item randomly selected from the item pool which is not identical to the test item. If processing on that item is halted as soon as a single feature is perceived not to match the corresponding feature of the test item, and such perceived feature matches and mismatches may be incorrect, then the expected number of feature comparisons for that item is

γ₁ = E(C′ | t̄im) = R(1)γ₂ + [1 - R(1)]K,

where

γ₂ = [1 - (∏_{j=1}^{K} a_j) - b(1 - μ^K)]⁻¹ {[Σ_{l=1}^{K} l(∏_{j=0}^{l-1} a_j)(1 - a_l)] - bd},

a_0 = 1, a_j is as in Theorem 2 for j ≠ 0,

b = ∏_{j=1}^{K} (1/w_j), and d = Σ_{l=1}^{K} [l μ^{l-1}(1 - μ)].

Proof.

γ₁ = E(C′ | t̄im) = E(C′ | p̄im, t̄im) Pr(p̄im | t̄im) + E(C′ | pim, t̄im) Pr(pim | t̄im)
= E(C′ | p̄im, t̄im) R(1) + K[1 - R(1)].

Let γ₂ = E(C′ | p̄im, t̄im). We now need derive only γ₂:

γ₂ = {E(C′ | p̄im) - E(C′ | p̄im, tim) Pr(tim | p̄im)} / Pr(t̄im | p̄im).   (1)

E(C′ | p̄im) = [Σ_{l=1}^{K} l(∏_{j=0}^{l-1} a_j)(1 - a_l)] / [1 - ∏_{j=1}^{K} a_j].   (2)

E(C′ | p̄im, tim) = d/(1 - μ^K).   (3)

Pr(tim | p̄im) = Pr(tim & p̄im)/Pr(p̄im) = b(1 - μ^K)/[1 - ∏_{j=1}^{K} a_j].   (4)

Pr(t̄im | p̄im) = 1 - Pr(tim | p̄im).   (5)

Substituting (2)-(5) into (1) and cancelling the term [1 - ∏_{j=1}^{K} a_j] in the denominator of (2) yields the required expression for γ₂. Q.E.D.
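Theorem 3 can be cross-checked by exact enumeration over the patterns of true feature matches for a non-matching item (a sketch; the parameter values are illustrative):

```python
from itertools import product
from math import prod

def gamma1_closed(w, mu, eta):
    """Theorem 3: gamma_1 = E(C' | not a true item match)."""
    K = len(w)
    a = [mu / wi + (wi - 1) * (1 - eta) / wi for wi in w]   # Theorem 2
    b = prod(1 / wi for wi in w)                            # Pr(tim)
    d = sum(l * mu ** (l - 1) * (1 - mu) for l in range(1, K + 1))
    num = sum(l * prod(a[: l - 1]) * (1 - a[l - 1])
              for l in range(1, K + 1)) - b * d
    den = 1 - prod(a) - b * (1 - mu ** K)
    g2 = num / den                                          # E(C' | no pim, no tim)
    R1 = den / (1 - b)                                      # Pr(no pim | no tim)
    return R1 * g2 + (1 - R1) * K

def gamma1_enum(w, mu, eta):
    """Direct enumeration over true match/mismatch patterns (tim excluded)."""
    K = len(w)
    tot = norm = 0.0
    for patt in product([True, False], repeat=K):
        if all(patt):
            continue
        pr = prod(1 / wi if m else (wi - 1) / wi for m, wi in zip(patt, w))
        p = [mu if m else (1 - eta) for m in patt]   # perceived-match probs
        ec = sum(prod(p[:l]) for l in range(K))      # E[C'] = sum Pr(C' > l)
        tot += pr * ec
        norm += pr
    return tot / norm

w, mu, eta = [2, 3, 4], 0.9, 0.8
assert abs(gamma1_closed(w, mu, eta) - gamma1_enum(w, mu, eta)) < 1e-12
```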
Using the notation of Theorem 3, the expressions shown below are stated without proof:

γ₃ = E(C′ | tim) = Σ_{l=0}^{K-1} μ^l,   (6)

γ₄ = E(C′ | p̄im, tim) = d/(1 - μ^K),   (7)

γ₅ = E(Ĉ′ | t̄im) = R(1)[1 - (∏_{j=1}^{K} a_j) - b(1 - μ^K)]⁻¹ {[Σ_{l=1}^{K} l(∏_{j=0}^{l-1} a_j)(1 - a_l)] - (∏_{i=0}^{K-1} a_i)(1 - a_K)} + [1 - R(1)](K - 1),

γ₆ = E(Ĉ′ | tim) = Σ_{l=1}^{K-1} [l μ^{l-1}(1 - μ)] + μ^{K-1}(K - 1),

where Ĉ′ is the number of feature comparisons for a single item on the first K - 1 features when the model is self-terminating on items.

Derivation of φ(IVS, +, y)

Suppose that the search list item that is identical to the test item is the tth item in the search list. The probability that processing will stop on item l, given that item l is processed prior to item t, is [Pr(stopping by item l) - Pr(stopping by item l - 1)]/Pr(y | +). The probability of stopping processing on any given item in the search list is

Pr(stopping on item l | +, y, t) =
[R(l - 1) - R(l)]/[1 - R(N - 1)(1 - μ^K)],   l < t;
R(t - 1) μ^K/[1 - R(N - 1)(1 - μ^K)],   l = t;
[R(l - 2) - R(l - 1)](1 - μ^K)/[1 - R(N - 1)(1 - μ^K)],   l > t.
Thus,

φ(IVS, +, y, t) = [1 - R(N - 1)(1 - μ^K)]⁻¹
x (Σ_{l=1}^{t-1} {[R(l - 1) - R(l)][K + (l - 1) E(C′ | p̄im, t̄im)]}
+ R(t - 1) μ^K [K + (t - 1) E(C′ | p̄im, t̄im)]
+ Σ_{l=t+1}^{N} {[R(l - 2) - R(l - 1)](1 - μ^K)[K + E(C′ | p̄im, tim) + (l - 2) E(C′ | p̄im, t̄im)]}).   (8)

Since the item matching the test item is randomly placed in the search list,

φ(IVS, +, y) = Σ_{t=1}^{N} [φ(IVS, +, y, t)/N].   (9)

Substituting (1) and (7) into (8), and completing the summation in (9) with the assistance of the identities

Σ_{t=1}^{N} Σ_{l=1}^{t-1} a_l = Σ_{l=1}^{N} (N - l) a_l

and

Σ_{t=1}^{N} Σ_{l=t+1}^{N} a_l = Σ_{l=1}^{N} (l - 1) a_l,

yields

φ(IVS, +, y) = {N[1 - R(N - 1)(1 - μ^K)]}⁻¹ (Σ_{l=1}^{N} {(N - l)[R(l - 1) - R(l)][K + (l - 1)γ₂] + R(l - 1) μ^K [K + (l - 1)γ₂] + (l - 1)[R(l - 2) - R(l - 1)](1 - μ^K)[K + γ₄ + (l - 2)γ₂]}).   (10)

This completes the derivation. Expressions for φ and ρ (when applicable) for correct responses are given below for all eight models. Sample values of ρ (for the self-terminating models) are deferred to the discussion section. For computational convenience, we define R(-1) = 0 and R(0) = 1.
MODEL IS.

φ(IS, +, y) = NK,
φ(IS, -, n) = NK,
ρ(IS) = 1.

MODEL IIS.

φ(IIS, +, y) = K{N[1 - R(N - 1)(1 - μ^K)]}⁻¹ (Σ_{l=1}^{N} l{(N - l)[R(l - 1) - R(l)] + R(l - 1) μ^K + (l - 1)[R(l - 2) - R(l - 1)](1 - μ^K)}),
φ(IIS, -, n) = NK.

MODEL IIIS.

φ(IIIS, +, y) = [1 - R(N - 1)(1 - μ^K)]⁻¹ {(N - 1)γ₁ + γ₃ - (1 - μ^K) R(N - 1)[(N - 1)γ₂ + γ₄]},
φ(IIIS, -, n) = Nγ₂.

MODEL IVS. φ(IVS, +, y) is given in Eq. (10).

φ(IVS, -, n) = Nγ₂.

MODEL IP.

φ(IP, +, y) = NK,
φ(IP, -, n) = NK,
ρ(IP) = 1.

MODEL IIP.

φ(IIP, +, y) = N(K - 1) + {N[1 - R(N - 1)(1 - μ^K)]}⁻¹ (Σ_{l=1}^{N} l{(N - l)[R(l - 1) - R(l)] + R(l - 1) μ^K + (l - 1)(1 - μ^K)[R(l - 2) - R(l - 1)]}),
φ(IIP, -, n) = NK.
MODEL IIIP.

φ(IIIP, +, y) = [1 - R(N - 1)(1 - μ^K)]⁻¹ {(N - 1)γ₁ + γ₃ - (1 - μ^K) R(N - 1)[(N - 1)γ₂ + γ₄]},
φ(IIIP, -, n) = Nγ₂.

MODEL IVP.

φ(IVP, +, y) = {N[1 - R(N - 1)(1 - μ^K)]}⁻¹ (Σ_{l=1}^{N} {(N - l)[R(l - 1) - R(l)][K + (l - 1)γ₂ + γ₆ + (N - l - 1)γ₅] + R(l - 1) μ^K [K + (l - 1)γ₂ + (N - l)γ₅] + (l - 1)[R(l - 2) - R(l - 1)](1 - μ^K)[K + γ₄ + (l - 2)γ₂ + (N - l)γ₅]}),
φ(IVP, -, n) = Nγ₂.
DISCUSSION
When subject errors are introduced, all eight models continue to yield values of φ that are linear functions of N for correct responses to negative test items. For correct responses to positive test items, only a totally exhaustive search will give values of φ that are linear with N. Self-termination on the list and/or self-termination on items will cause nonlinearity in the models we have discussed. Further, the reason for nonlinearity is directly attributable to subject errors, since all models give linear results in the errorless case. In the discussion of nonlinear cases that follows, it is important to remember that we are considering only correct yes responses for self-terminating models.

Nonlinearity is due to the fact that two features can be incorrectly perceived to match. All models will produce values of φ that are linear in N if the probability of an incorrect perceived feature match is zero; this condition is satisfied if and only if η = 1. With η = 1, no value of μ can cause φ to be nonlinear with N. Note that if the false alarm rate is zero, then φ is linear. If η deviates from 1, then η and μ interact to produce the false alarm rate.

As mentioned above, self-termination on the list or self-termination on items can cause nonlinearity. Qualitatively, each type of self-termination contributes to nonlinearity in a different manner. We first consider self-termination on the list. For models which are self-terminating on the list, permitting η to deviate from 1 has the effect of making every search list item a potential item on which list processing can terminate. In calculating the expected number of feature comparisons for a given item in a search list, one must consider both the expected number of feature comparisons for that item when the item is considered in isolation, and the probability that the item will ever be processed. As more and more items are to be completely processed, the probability of reaching an item further down the search list decreases. Thus, the expected number of feature comparisons for an item decreases with the number of items to be processed before it. This is the cause of nonlinearity due to self-termination on the list.

For models that are self-terminating on items, the source of nonlinearity is less obvious. The cause of nonlinearity may be seen most easily by considering the errorless case (which is linear) and then examining the changes which occur when subject errors are permitted. By definition, φ((model), +, y) requires at least one perceived item match. When errors cannot occur, the search list item that is identical to the test item will always provide the perceived item match required for the yes response.⁶ Therefore, in the errorless case we need not be concerned with whether any perceived item matches occur for any of the other search list items. However, when errors can occur we may no longer assume that a perceived item match will result from the search list item that is identical to the test item. When a subject responds yes to a positive test item, we know that at least one of the N search list items was perceived to match the test item, but not which of the N items resulted in a perceived item match. We must consider the possibility that the perceived item match (required for the yes response) is due to an item that is not identical to the test item. Consider Model IIIS (which is self-terminating on items only) and a search list of size N.
For each of the N - 1 items that is not identical to the test item, let f_i be the number of features which are identical to the corresponding features of the test item (0 ≤ f_i ≤ K - 1 for i = 1,..., N - 1). Now suppose that a positive test item is presented, a yes response is made, and that we somehow know that a perceived item match did not occur for the search list item which is identical to the test item. If every item in the item pool that is not identical to the test item had an equal value for f_i, then

    φ(IIIS, +, y, p̄im for the tim item) = K + E(C' | p̄im, tim) + (N - 2) E(C' | t̄im).

The general expression would be

    φ(IIIS, +, y) = Pr(pim | tim)[K + (N - 1) E(C' | t̄im)]
                  + Pr(p̄im | tim)[K + E(C' | p̄im, tim) + (N - 2) E(C' | t̄im)],

an expression for φ which is linear in N, since Pr(pim | tim) is a constant equal to μ^K. However, it is not the case that the N - 1 search list items have equal values of f_i, and therefore this expression for φ(IIIS, +, y) is incorrect. Although each of the N - 1 items has the same expected value for f_i, a group of N - 1 items forms a distribution of f_i values. If the item identical to the test item does not provide a perceived item match, we must be concerned with the expected number of feature comparisons for the other N - 1 items given that one of them results in a perceived item match. Since a perceived item match implies exactly K feature comparisons, we are specifically interested in the expected number of feature comparisons for the remaining N - 2 (that is, [N - (the tim item) - (the pim item)]) items. These items are a group of N - 2 items from a randomly drawn sample of N - 1 items (from the item pool specified in Theorem 3) from which one item has been removed. The nonlinearity is due to the fact that the N - 1 items do not have equal probability of being removed, since each of the items may have a different probability of a perceived item match [Pr(pim) = μ^(f_i) (1 - η)^(K - f_i)]. The distribution of f_i values for all of the N - 1 items is dependent on list length; therefore, the expected value of f_i for the N - 2 items is also a function of list length N. This results in nonlinearity of φ with N for models which are self-terminating on items.

The above discussion implies that the problem of nonlinearity should not develop in cases where we need not be concerned with whether any of the N - 1 items results in a perceived item match. This is exactly the case. If the item identical to the test item is certain to result in a perceived item match, then Models IIIS and IIIP will yield linear results. That is, with μ = 1, no value of η can cause these models to be nonlinear.

A number of sample cases were computed in order to investigate the actual deviation from linearity.

6 In the errorless case, the abbreviated notation φ((model), +) is used; a yes response is of course implicit in this shorter notation when no errors can occur.
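The false alarm and miss rates quoted below for the sample cases can be checked directly. The following sketch is not the authors' derivation; it assumes, as an approximation to the item pool of Theorem 3, that each feature of a list item distinct from the test item independently takes the test item's value with probability 1/w, and it uses the fact that the yes/no response depends only on whether any perceived item match occurs at all, so the error rates are the same for all eight models.

```python
def error_rates(K, w, mu, eta, N):
    """Pr(false alarm) and Pr(miss) for list length N, any of the eight models."""
    q = 1.0 / w                                   # chance a feature of a random item truly matches
    # Average mu**f * (1 - eta)**(K - f) over f ~ Binomial(K, q), the number of
    # truly matching features of a randomly drawn item:
    pooled = (q * mu + (1 - q) * (1 - eta)) ** K
    identical = (q * mu) ** K                     # the f = K term (item identical to the test item)
    p = (pooled - identical) / (1 - q ** K)       # Pr(pim) for an item distinct from the test item
    false_alarm = 1 - (1 - p) ** N                # some non-identical item is perceived to match
    miss = (1 - mu ** K) * (1 - p) ** (N - 1)     # tim item not matched, nor any other item
    return false_alarm, miss
```

With N = 7, error_rates(4, 5, 0.96, 0.90, 7) gives roughly (0.03, 0.15), error_rates(4, 5, 0.90, 0.80, 7) roughly (0.08, 0.32), and error_rates(4, 2, 0.90, 0.80, 7) roughly (0.32, 0.25), in agreement with the rates quoted for Tables 1-3.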
Values of φ versus N for correct responses are shown in Table 1 for all eight models. The parameters are as follows: K = 4, w1 = w2 = w3 = w4 = w = 5, μ = 0.96, η = 0.90. Models IS and IP yield identical values and thus are shown together; similarly, Models IIIS and IIIP have identical predictions. In order to detect nonlinearity, the change in φ with unit increase in N is also shown. The result is very clear: all models predict values of φ which are essentially linear with N. These values of μ and η were chosen in order to give miss and false alarm rates similar to experimental data (Atkinson, Herrmann, and Wescourt, 1974). The probability of a false alarm is 0.03 and the probability of a miss is 0.15 (for N = 7). Note that error rates are strongly dependent on list length under the models being considered here, since μ and η are assumed to be independent of N.

Table 2 shows values of φ and changes in φ with unit increases in N when the error parameters are reduced (μ = 0.90, η = 0.80). With these values, the probability of a false alarm rises to 0.08 and the probability of a miss is 0.32 (for N = 7). Even with these larger error rates any nonlinearity is hardly noticeable.

In order to increase the rate of false alarms and the (unobservable) number of correct yes responses based on incorrect perceived item matches, the character set size w for all feature dimensions was reduced to 2. This increased the probability of a false alarm to 0.32. The prob-
TABLE 1

Values of φ, Changes in φ with Unit Changes in N, and ρ̄ for
K = 4, w1 = w2 = w3 = w4 = w = 5, μ = 0.96, η = 0.90

                    Correct yes responses^a    Correct no responses^a
Model         N     φ_N    φ_N - φ_{N-1}       φ_N    φ_N - φ_{N-1}      ρ̄

IS, IP        1     4.000       -              4.000       -              -
              2     8.000     4.000            8.000     4.000         1.000^b
              3    12.000     4.000           12.000     4.000         1.000
              4    16.000     4.000           16.000     4.000         1.000
              5    20.000     4.000           20.000     4.000         1.000
              6    24.000     4.000           24.000     4.000         1.000
              7    28.000     4.000           28.000     4.000         1.000

IIS           1     4.000       -              4.000       -              -
              2     5.992     1.992            8.000     4.000         0.498
              3     7.978     1.986           12.000     4.000         0.497
              4     9.959     1.981           16.000     4.000         0.497
              5    11.934     1.975           20.000     4.000         0.496
              6    13.904     1.970           24.000     4.000         0.495
              7    15.869     1.965           28.000     4.000         0.495

IIIS, IIIP    1     4.000       -              1.351       -              -
              2     5.363     1.363            2.703     1.351         1.009
              3     6.726     1.363            4.054     1.351         1.009
              4     8.089     1.363            5.405     1.351         1.009
              5     9.452     1.363            6.757     1.351         1.009
              6    10.815     1.363            8.108     1.351         1.009
              7    12.178     1.363            9.459     1.351         1.009

IVS           1     4.000       -              1.351       -              -
              2     4.673     0.673            2.703     1.351         0.498
              3     5.345     0.671            4.054     1.351         0.498
              4     6.014     0.670            5.405     1.351         0.497
              5     6.682     0.668            6.757     1.351         0.496
              6     7.348     0.666            8.108     1.351         0.496
              7     8.012     0.664            9.459     1.351         0.495

IIP           1     4.000       -              4.000       -              -
              2     7.498     3.498            8.000     4.000         0.874
              3    10.995     3.497           12.000     4.000         0.874
              4    14.490     3.495           16.000     4.000         0.874
              5    17.984     3.494           20.000     4.000         0.874
              6    21.476     3.493           24.000     4.000         0.874
              7    24.967     3.491           28.000     4.000         0.874

IVP           1     4.000       -              1.351       -              -
              2     5.353     1.353            2.703     1.351         1.001
              3     6.706     1.353            4.054     1.351         1.001
              4     8.058     1.353            5.405     1.351         1.001
              5     9.411     1.353            6.757     1.351         1.001
              6    10.764     1.353            8.108     1.351         1.001
              7    12.116     1.353            9.459     1.351         1.001

^a Occasional discrepancies of 0.001 are due to rounding errors.
^b ρ for Models IS and IP.
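The Table 1 entries for Model IIS (correct yes responses) can be recomputed exactly under the same independence approximation to the item pool used above. The sketch below assumes the IIS processing order described in the text: items are processed serially and in full (K feature comparisons each), and list processing stops at the first perceived item match; φ is the expected number of comparisons conditioned on a correct yes response.

```python
def phi_IIS_yes(K, w, mu, eta, N):
    """Expected feature comparisons for a correct yes under Model IIS."""
    q = 1.0 / w
    a = mu ** K                                      # Pr(pim) for the tim item
    pooled = (q * mu + (1 - q) * (1 - eta)) ** K
    b = (pooled - (q * mu) ** K) / (1 - q ** K)      # Pr(pim) for a non-tim item
    num = den = 0.0
    for t in range(1, N + 1):                        # tim position, uniform over 1..N
        for j in range(1, N + 1):                    # position of the first perceived item match
            if j < t:
                pr = (1 - b) ** (j - 1) * b          # j - 1 non-tim items fail, item j matches
            elif j == t:
                pr = (1 - b) ** (t - 1) * a          # the tim item itself is the first match
            else:
                pr = (1 - b) ** (j - 2) * (1 - a) * b  # tim and j - 2 non-tim items fail first
            num += K * j * pr / N                    # K comparisons per processed item
            den += pr / N                            # accumulates Pr(correct yes)
    return num / den
```

For example, phi_IIS_yes(4, 5, 0.96, 0.90, 7) is about 15.869, matching the Table 1 entry, and phi_IIS_yes(4, 2, 0.90, 0.80, 2) is about 5.895, matching Table 3.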
TABLE 2

Values of φ, Changes in φ with Unit Changes in N, and ρ̄ for
K = 4, w1 = w2 = w3 = w4 = w = 5, μ = 0.90, η = 0.80

                    Correct yes responses^a    Correct no responses^a
Model         N     φ_N    φ_N - φ_{N-1}       φ_N    φ_N - φ_{N-1}      ρ̄

IS, IP        1     4.000       -              4.000       -              -
              2     8.000     4.000            8.000     4.000         1.000^b
              3    12.000     4.000           12.000     4.000         1.000
              4    16.000     4.000           16.000     4.000         1.000
              5    20.000     4.000           20.000     4.000         1.000
              6    24.000     4.000           24.000     4.000         1.000
              7    28.000     4.000           28.000     4.000         1.000

IIS           1     4.000       -              4.000       -              -
              2     5.975     1.975            8.000     4.000         0.494
              3     7.935     1.960           12.000     4.000         0.492
              4     9.879     1.944           16.000     4.000         0.490
              5    11.807     1.929           20.000     4.000         0.488
              6    13.721     1.913           24.000     4.000         0.486
              7    15.620     1.899           28.000     4.000         0.484

IIIS, IIIP    1     4.000       -              1.460       -              -
              2     5.497     1.497            2.921     1.460         1.025
              3     6.994     1.497            4.381     1.460         1.025
              4     8.491     1.497            5.842     1.460         1.025
              5     9.988     1.496            7.302     1.460         1.025
              6    11.484     1.496            8.763     1.460         1.025
              7    12.980     1.496           10.223     1.460         1.025

IVS           1     4.000       -              1.460       -              -
              2     4.724     0.724            2.921     1.460         0.496
              3     5.443     0.718            4.381     1.460         0.494
              4     6.155     0.712            5.842     1.460         0.492
              5     6.862     0.707            7.302     1.460         0.490
              6     7.563     0.701            8.763     1.460         0.488
              7     8.259     0.696           10.223     1.460         0.486

IIP           1     4.000       -              4.000       -              -
              2     7.494     3.494            8.000     4.000         0.873
              3    10.984     3.490           12.000     4.000         0.873
              4    14.470     3.486           16.000     4.000         0.872
              5    17.952     3.482           20.000     4.000         0.872
              6    21.430     3.478           24.000     4.000         0.872
              7    24.905     3.475           28.000     4.000         0.871

IVP           1     4.000       -              1.460       -              -
              2     5.473     1.473            2.921     1.460         1.008
              3     6.945     1.472            4.381     1.460         1.008
              4     8.417     1.472            5.842     1.460         1.008
              5     9.888     1.472            7.302     1.460         1.008
              6    11.360     1.471            8.763     1.460         1.008
              7    12.831     1.471           10.223     1.460         1.008

^a Occasional discrepancies of 0.001 are due to rounding errors.
^b ρ for Models IS and IP.
TABLE 3

Values of φ, Changes in φ with Unit Changes in N, and ρ̄ for
K = 4, w1 = w2 = w3 = w4 = w = 2, μ = 0.90, η = 0.80

                    Correct yes responses^a    Correct no responses^a
Model         N     φ_N    φ_N - φ_{N-1}       φ_N    φ_N - φ_{N-1}      ρ̄

IS, IP        1     4.000       -              4.000       -              -
              2     8.000     4.000            8.000     4.000         1.000^b
              3    12.000     4.000           12.000     4.000         1.000
              4    16.000     4.000           16.000     4.000         1.000
              5    20.000     4.000           20.000     4.000         1.000
              6    24.000     4.000           24.000     4.000         1.000
              7    28.000     4.000           28.000     4.000         1.000

IIS           1     4.000       -              4.000       -              -
              2     5.895     1.895            8.000     4.000         0.474
              3     7.728     1.832           12.000     4.000         0.466
              4     9.501     1.774           16.000     4.000         0.458
              5    11.219     1.718           20.000     4.000         0.451
              6    12.885     1.666           24.000     4.000         0.444
              7    14.502     1.616           28.000     4.000         0.438

IIIS, IIIP    1     4.000       -              1.806       -              -
              2     5.936     1.936            3.612     1.806         1.072
              3     7.868     1.932            5.418     1.806         1.071
              4     9.797     1.928            7.224     1.806         1.070
              5    11.722     1.925            9.030     1.806         1.069
              6    13.644     1.922           10.836     1.806         1.068
              7    15.564     1.920           12.642     1.806         1.067

IVS           1     4.000       -              1.806       -              -
              2     4.863     0.863            3.612     1.806         0.478
              3     5.698     0.834            5.418     1.806         0.470
              4     6.504     0.807            7.224     1.806         0.462
              5     7.286     0.781            9.030     1.806         0.455
              6     8.043     0.757           10.836     1.806         0.448
              7     8.777     0.734           12.642     1.806         0.441

IIP           1     4.000       -              4.000       -              -
              2     7.474     3.474            8.000     4.000         0.868
              3    10.932     3.458           12.000     4.000         0.866
              4    14.375     3.443           16.000     4.000         0.865
              5    17.805     3.430           20.000     4.000         0.863
              6    21.221     3.416           24.000     4.000         0.861
              7    24.625     3.404           28.000     4.000         0.859

IVP           1     4.000       -              1.806       -              -
              2     5.872     1.872            3.612     1.806         1.037
              3     7.742     1.869            5.418     1.806         1.036
              4     9.609     1.867            7.224     1.806         1.035
              5    11.473     1.865            9.030     1.806         1.035
              6    13.336     1.863           10.836     1.806         1.034
              7    15.197     1.861           12.642     1.806         1.033

^a Occasional discrepancies of 0.001 are due to rounding errors.
^b ρ for Models IS and IP.
ability of a miss fell to 0.25, since it is now more likely that an incorrect perceived item match will contribute to a correct response for a positive test item. Values of φ and changes in φ with respect to N under these conditions are shown in Table 3. The values of φ shown in Table 3 for correct yes responses are plotted in Fig. 4. Here the nonlinearities are perceptible, but still are not strong. It is important to remember that we are considering the expected number of feature comparisons, not reaction times. If a single feature comparison requires, say, 50 msec, then for the strongest nonlinear case considered [φ(IIS, +, y); K = 4, w = 2, μ = 0.90, η = 0.80] the average slope at N = 2 would be 95 msec and the average slope at N = 7 would be 88 msec, a difference of 7 msec. Notice that by changing the value of w from 5 to 2 the size of the item pool has fallen from 625 to 16.
FIG. 4. Values of φ for correct yes responses (K = 4, w = 2, μ = 0.90, η = 0.80).
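The 50-msec arithmetic above can be checked from the Table 3 entries for φ(IIS, +, y). The unit cost per feature comparison is the paper's illustrative value, not an estimated parameter:

```python
MS_PER_COMPARISON = 50                           # the paper's illustrative value

phi_1, phi_2, phi_7 = 4.000, 5.895, 14.502       # Table 3, Model IIS, correct yes responses

slope_at_2 = MS_PER_COMPARISON * (phi_2 - phi_1)            # average slope up to N = 2
avg_slope_at_7 = MS_PER_COMPARISON * (phi_7 - phi_1) / 6    # average slope up to N = 7
print(round(slope_at_2), round(avg_slope_at_7))             # 95 88
```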
The values of φ represent predictions for the special class of models considered in this paper. Surely other models can predict nonlinearity based on relatively simple assumptions. An example of a decidedly nonlinear model is an unlimited capacity, last-to-finish simultaneous search. Our point is to indicate the types of models that can produce linear results. We have considered searches which are parallel and serial in combination with self-terminating and exhaustive processing, and have found predictions for every model in which any deviation from linearity (if present at all) is extremely difficult to detect. For a large item pool and reasonable error rates, linearity becomes an even weaker test for rejecting any of the eight models. With no subject errors, all models are theoretically linear.

Before leaving the issue of linearity, we may briefly consider another class of models in which the error parameters μ and η are not independent of N but rather vary so as to maintain constant false alarm and miss rates for any list length. This class of models is outside the theoretical development of this paper; it implies that a subject internally adjusts the error parameters in order to maintain a constant performance criterion. Sample calculations indicate that such a model may produce nearly linear results. Values of φ under this model are shown in Fig. 5 (K = 4; w = 5; Pr(false alarm) = 0.03; Pr(miss) = 0.15). The values of μ and η for each list length are shown in Table 4.

FIG. 5. Values of φ when the error parameters μ and η vary with N in order to maintain constant false alarm and miss rates [K = 4, w = 5, Pr(false alarm) = 0.03, Pr(miss) = 0.15]. Triangles are correct yes responses; circles are correct no responses. (Separate panels show Models IS, IP; IIS; IIIS, IIIP; IVS; IIP; and IVP.)
With the introduction of subject error it becomes more difficult to generalize about slope ratios. For totally exhaustive search models, the slope ratio is 1. For other models, average slope ratios must be considered. Average slope ratios for these models are shown in Tables 1, 2, and 3 for the conditions discussed above. We will not consider average slope ratios further, except to note that ρ̄ may be less than 1/2 or greater than 1, unlike all other slope ratios considered in this paper.
TABLE 4

Values of μ and η for Constant False Alarm and Miss Rates
[K = 4, w = 5, Pr(false alarm) = .03, Pr(miss) = .15]

  N        μ         η

  1      0.961     0.718
  2      0.960     0.796
  3      0.960     0.834
  4      0.960     0.858
  5      0.960     0.876
  6      0.960     0.889
  7      0.960     0.900
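The Table 4 values can be approximately recovered by inverting the false-alarm and miss expressions at each N. The sketch below again assumes the independence approximation to the item pool described earlier; it reproduces the tabulated μ to within about 0.002 and η to within about 0.005, the residual differences presumably reflecting the exact item-pool combinatorics of Theorem 3.

```python
def constant_criterion(K, w, fa, miss, N):
    """mu and eta that give the target false-alarm and miss rates at list length N."""
    q = 1.0 / w
    p = 1 - (1 - fa) ** (1.0 / N)                 # per-item Pr(pim) fixed by the false alarm rate
    mu = (1 - miss / (1 - p) ** (N - 1)) ** (1.0 / K)
    # Invert p = [(q*mu + (1 - q)*(1 - eta))**K - (q*mu)**K] / (1 - q**K) for eta:
    m = (p * (1 - q ** K) + (q * mu) ** K) ** (1.0 / K)
    eta = 1 - (m - q * mu) / (1 - q)
    return mu, eta
```

For example, constant_criterion(4, 5, 0.03, 0.15, 7) gives approximately (0.959, 0.900), close to the N = 7 row of Table 4.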
It was mentioned briefly that Models IS and IP predict identical results. The same is true of Models IIIS and IIIP. None of these four models is self-terminating on the list, so that processing for any given item is not affected by processing done on other items. Models IS and IP are totally exhaustive searches: precisely the same features are processed, and only the order of processing is different. In Models IIIS and IIIP the order in which the features are processed is also different, but the expected amount of total processing done on each search list item is the same in each case.

In the discussion of errorless performance the importance of coding strategy was considered. A given coding strategy can have a large effect on φ and, in some cases, on the slope ratios. Coding strategy has additional importance when subject errors can occur, since nonlinearity is closely related to false recognition. We will note one factor which might contribute to the selection of a strategy. Familiarity with the items in the item pool can permit special coding of items. For example, if items are trigrams composed of randomly selected letters and the item pool is reasonably small, subjects might find sufficiently unique images for many (or all) of the items. At a deeper level, items that are easily recognized, such as single digits, single letters, or common words, may be coded much differently than unfamiliar items. For example, a digit may be stored in short-term memory in terms of the concept of the presented number, not the actual features of the stimulus (i.e., "8" is stored as the number concept eight, not as two tangent circles vertically aligned). This implies, of course, a greater role for long-term memory in the encoding of familiar items. A possible interpretation of familiarity is that familiar items are coded with fewer features but along feature dimensions with larger character sets (made possible by the increased use of LTM). Just as the capacity of short-term memory is constrained by chunks rather than bits, so may a feature contain a chunk of information. The size
of the chunk is measured by the size of the feature dimension's character set. Thus familiar items may be processed more quickly because each feature contains a large amount of information, and therefore the number of feature comparisons required to identify or compare the item is smaller.

The familiarity issue indicates just one problem in inferring characteristics of a basic comparison process from STM scanning data. Stimuli are somehow encoded, and comparisons are made using these internal representations. Without considering the encoding process (which may involve LTM to some unknown degree), one is attempting to discover information about the comparison mechanism without knowing what is actually being compared. By selecting stimuli that are very similar and simple (for example, single letters and digits), it may be argued that each stimulus has the same basic type of internal representation. If this assumption is true then variation due to coding is eliminated, but we still lack information concerning the internal representations that are being compared.

In the context of this paper, the coding strategy may be influenced by special design of the item pool. In most memory scanning experiments the feature dimensions and character set sizes for the stimuli are unknown. In fact, items used in a single search list may have different numbers of features. An example of a suitable item pool is one in which each item is a single, straight line. The items could be varied along, say, four feature dimensions with each character set of size two. For example, a line might be one of two colors, horizontal or vertical, solid or broken, wide or thin. A number of experiments using this type of item pool are immediately suggested. A basic scanning task to determine the effect of list length on RT would be of interest.
A second experiment might use an item pool of equal size but with a different number of feature dimensions and different character set sizes (for example, straight lines which are all vertical and solid, but which are one of four colors and one of four thicknesses). To explore the effect of coding strategy, a third experiment might require each subject to associate a unique single letter with each item in the item pool (e.g., the letter Q would be associated with a red, broken, horizontal, thin line). The stimuli for the actual scanning task would be the straight lines from the item pool, as before. Whatever the experimental procedure, the important point here is that stimulus coding cannot be neglected. In order to infer the basic mechanism by which two stimuli are compared, it is necessary to know what representations of the stimuli are used in the comparison process.

REFERENCES

ATKINSON, R. C., HERRMANN, D. J., AND WESCOURT, K. T. Search processes in recognition memory. In R. L. Solso (Ed.), Theories in cognitive psychology: The Loyola symposium. Potomac, Maryland: Lawrence Erlbaum Associates, 1974.
CHRISTIE, L. S., AND LUCE, R. D. Decision structure and time relations in simple choice behavior. Bulletin of Mathematical Biophysics, 1956, 18, 89-112.
HICK, W. E. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 1952, 4, 11-26.
STERNBERG, S. Memory scanning: mental processes revealed by reaction-time experiments. American Scientist, 1969, 57, 421-457.
TOWNSEND, J. T. A note on the identifiability of parallel and serial processes. Perception & Psychophysics, 1971, 10, 161-163.

RECEIVED: August 9, 1973