The geometry of knowledge acquisition through motion and sensing

Robotics and Autonomous Systems 9 (1992) 75-104, Elsevier

Leo Dorst * and Hsiang-Lung Wu
Philips Laboratories, North American Philips Corporation, 345 Scarborough Road, Briarcliff Manor, NY 10510, USA

Abstract

Dorst, L. and Wu, H.-L., The geometry of knowledge acquisition through motion and sensing, Robotics and Autonomous Systems, 9 (1992) 75-104.

In order to understand knowledge acquisition in autonomous systems, we introduce a representational framework in which knowledge about the world is represented as a point in a knowledge space. This is a homogeneous linear space, with an unusual vector product (the componentwise multiplication of vectors). We demonstrate how motions in the world lead to an induced linear transformation on the knowledge space, and how sensing leads to an induced reduction to a linear subspace. We also investigate how abstraction and reasoning, which are internal transformations of the knowledge, may be represented in the knowledge space. We treat applications of the framework which demonstrate how it can be combined with Bayesian decision theory to compute optimal goal-directed behavior for both discrete and continuous cases.

Keywords: Knowledge acquisition; Representation; Vector space geometry; Knowledge updating; Reasoning; Abstraction; Bayesian decision theory.

Leo Dorst received his B.Sc., M.Sc. (in '82) and Ph.D. (in '86) from Delft University of Technology in The Netherlands. His research was in the field of image analysis theory. From '86 to '92 he was a Senior Member of Research Staff at Philips Laboratories, where he worked on general path planning algorithms for autonomous systems. His method of wave-propagation based path planning has led to several patents. Since April '92 he has been an associate professor at the University of Amsterdam, The Netherlands. His current research interests are: the translation and discretization of the continuous structures of the world into discrete (algebraic) mathematical structures, and the (geometrical) unification of sensory planning and motion planning in the context of the task to be performed.

Hsiang-Lung Wu received his B.S. and M.S. degrees in Electrical Engineering from the National Taiwan University, and his Ph.D. degree in Computer and Information Science from Case Western Reserve University, Cleveland, OH, USA, in 1987. His Ph.D. thesis addressed problems of range image processing and object recognition. He joined Philips Laboratories in December 1987 and continued his research on passive and active sensing, and on goal-directed action and sensing for accomplishing tasks in uncertain dynamic environments. His current research interests are in theories for goal-directed behavior of autonomous systems and in applications of multi-media networking.

* L. Dorst is currently with the Computer Systems Group, Mathematics and Computer Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands. 0921-8890/92/$05.00 © 1992

Elsevier Science Publishers B.V. All rights reserved


L. Dorst, H.-L. Wu

1. Introduction: Knowledge acquisition in autonomous systems

An autonomous system is a system that reaches a goal by itself. In robotic autonomous systems, this goal is defined in the physical world, and reaching the goal therefore involves interaction with, and observation of, that physical world. However, it is not sufficient for an autonomous system to be capable of manipulating the world only. The system can only terminate its action once it knows that it has reached its goal; therefore, the manipulation of knowledge about the world (including the system) is as essential as the manipulation of the actual world.

There are various ways in which the system can obtain knowledge of its state and the state of the world. Some of the knowledge will derive from externally given information, notably a priori notions such as continuity of the world, rigidity of bodies, etc. We will deal with systems in which some measure of information is always given in this manner, though the extent to which it is may vary. The system may also acquire information by reasoning about available knowledge; this is effectively a re-representation of existing knowledge using particular a priori laws. But the most active way in which an autonomous system can obtain information is by interacting with the world: it may execute certain motions that will partially fix its state (move until you encounter a wall; now you know your position relative to the wall, at least partially); and it may query available sensors. Both these motions and these sensory actions need not be elementary actions from the set of a priori defined actions, but may be involved compositions of elementary actions. We need to develop techniques that enable the system to determine which compositions of actions will make it reach, efficiently and robustly, the state in which it knows that it has reached the goal (which implies that it has indeed reached the goal).
The formal understanding of the manipulation of knowledge currently lags far behind the formal understanding of the manipulation of the world, in which physics and control theory have reached impressive results. The different processes of knowledge acquisition and manipulation have traditionally been studied in different fields: the nature of a priori knowledge has been investigated in philosophy; knowledge acquisition (or re-representation) by reasoning in logic and artificial intelligence; and knowledge acquisition by sensing and, to a lesser extent, motion, in control theory and robotics. Each of these, in combination with some common sense and Bayesian decision theory, is often sufficient to solve particular problems in its field.

An example may illustrate this (we will come back to this example in Section 5). We have a circuit with two components, a and b, each of which may be broken or functional. We have three sensors: Sa senses the status of a, Sb of b, and Sab of the circuit as a whole (giving 'functional' iff the whole circuit works). We also have two repair actions: Ma, which replaces a with a fully functional component, and Mb correspondingly for b. We are interested in optimal repair strategies, given some initial probability pa and pb that a and b are functional, and given costs T for applying any sensor, and R for executing any repair action.

Some standard descriptions of aspects of this problem are depicted in Fig. 1. In Fig. 1a, we have indicated the 4 states of the world, depending on the functionality of a and b. The repair actions Ma and Mb create transitions between these states, as indicated. This is a state diagram representation of the problem. Note that this diagram is symbolic; the points and edges are not really embedded in any space, and points outside the points indicated have no meaning. Note also that the labelling of the world states implicitly uses knowledge obtained by the sensors Sa and Sb.
Though this diagram clearly indicates what the motive actions Ma and Mb do, it is not very helpful in determining the optimal strategy. A decision tree as indicated in Fig. 1b is better for that. The branches are labeled by F and B for a 'fixed' or 'broken' outcome of the corresponding sensor. In such a tree, all possible outcomes of actions, both sensory and motive, can be indicated; we have only shown the top of the tree. The various branches have different probabilities of actually occurring, and one can make Bayesian decisions on the best way to navigate to the desired leaf nodes. Note again that the tree is not considered to be embedded in any space. Also, the tree as such does not take into account that certain branches may lead to the same state (for instance, if Sab gives 'broken' and Sa gives 'functional', then this must lead to the same knowledge as if Sb gives 'broken', since these statements are logically equivalent).
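As a small illustration of the Bayesian bookkeeping behind such a decision tree, the following sketch enumerates the four world states and updates their probabilities after one observation of Sab. The prior values and helper names are hypothetical illustration choices, not taken from the paper.

```python
from itertools import product

pa, pb = 0.9, 0.8  # hypothetical priors that components a and b are functional

# The four world states (a_ok, b_ok) of Fig. 1a, with their prior probabilities.
prior = {(a, b): (pa if a else 1 - pa) * (pb if b else 1 - pb)
         for a, b in product([True, False], repeat=2)}

def sense_ab(state):
    """The sensor Sab: 'functional' iff the whole circuit works."""
    a_ok, b_ok = state
    return 'functional' if (a_ok and b_ok) else 'broken'

def update(knowledge, sensor, observation):
    """Bayes update for a noiseless sensor: keep the compatible states
    and renormalize their probabilities."""
    post = {s: p for s, p in knowledge.items() if sensor(s) == observation}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

post = update(prior, sense_ab, 'broken')
print(post)  # the fully functional state (True, True) has been ruled out
```

Observing Sab = 'broken' eliminates the state in which both components work, exactly the pruning of branches that the decision tree performs implicitly.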


This leads us to yet another representation of aspects of the problem, by means of predicate logic, illustrated in Fig. 1c. This representation contains statements like ((Sab = broken ∧ Sa = functional) ⇒ Sb = broken), and the like. In this problem, it is also natural to include statements to reflect the likelihood information on pa and pb, as in Dempster-Shafer theory.

These three different representations are examples of points of view on the problem prevalent in control theory, Bayesian decision theory, and artificial intelligence, respectively. All emphasize certain aspects, and all are capable of dealing with the complete problem eventually. But when reading such solutions one always sees that there are some extraneous elements introduced, based more on common sense than on a rigorous formalism. (These elements are easily recognizable, since they require the inclusion of natural language, rather than mathematics, in a solution.) In this paper, we show how each of these representations can be seen as a special case, or a 'projection', of a more general representation that contains them all (and possibly more), and we start to formalize that more general framework.

In Section 2, we state the assumptions on the type of systems we treat. We then sketch, in Section 3, the conventional way to represent knowledge by sets with probabilities. We define a new vector space framework for knowledge representation in Section 4. We give the knowledge updating equations in that

[Figure 1 appears here. Fig. 1. A circuit repair problem and different representations to treat it (see text): (a) a state diagram of the four world states connected by the repair actions Ma and Mb; (b) a decision tree, in which some branches lead to identical knowledge states; (c) predicate logic statements, e.g. Sab = broken ∧ Sa = functional ⇒ Sb = broken, and Sa = functional ∧ Sb = functional ⇔ Sab = functional.]


framework, followed by results on the effect of the order in which actions are performed, and on the geometrical representation of abstraction and reasoning. In Section 5 we illustrate the representation by three examples. The first example, of three cups and a ball, is meant to show visually what the vector space representation of knowledge is, and suggests how other ways of considering knowledge (such as reasoning) are represented in that space. The second example is the circuit diagnosis and repair problem also treated in [7]. We now treat it in the geometrical framework to allow comparison with that more classical computation. The third example, accurate position measurement, demonstrates the ease of computations in an infinite-dimensional vector space.

2. Scope and assumptions

The concepts of knowledge acquisition that need to be elements of any model of knowledge manipulation for autonomous systems are:
- the set of world states;
- the set of knowledge states, which should include a representation of inconsistent knowledge;
- the conditional probability p(i|k) that the world resides in state i given the knowledge k;
- the set of non-deterministic motions that change the world, and equations to compute the corresponding updated knowledge;
- the set of available non-deterministic sensors, and equations to compute the knowledge updating on the basis of observations with these sensors.

In this paper, we work under the simplifying assumption that the world is fully controlled, in the sense that all actions that take place in the world are caused by the system. The actions may be non-deterministic. We thus exclude, for the time being, multiple independent systems, and even a world in which the laws of physics operate (so as to make unsupported objects fall). We believe we can lift the latter restriction fairly easily.

We demand two properties of knowledge: that it be consistent and complete. These two properties are based on a notion of compatibility defined as follows: a state i is compatible with a state j by an action sequence A if i could possibly be reached by applying the action sequence A to j, and could lead to the observations actually obtained while applying A. The properties are then:
(1) The knowledge should be consistent: every world state contained in the knowledge state should be compatible with at least one of the states in the initial knowledge.
(2) The knowledge should be complete: it contains all possible world states that are compatible with at least one state in the initial knowledge.

3. Knowledge updating in sets with associated probabilities

The most common way to study knowledge updating is by considering knowledge about the world as a set of possible world states with associated probabilities. In this section, we use this standard representation to formulate and present the unique knowledge updating equation that keeps knowledge complete and consistent. A more detailed treatment may be found in [7].

3.1. Representation of states, motions, and measurements

States of world and knowledge

Let F be the set of feasible states of the world. Let the realizable world state i be represented by the element q_i of F. The set of subsets of F is denoted 2^F. The knowledge k is represented by a set of world states K. Thus we have K ∈ 2^F.


Associated with any set A is a membership function p_A(q_i), such that p_A(q_i) ≠ 0 iff q_i ∈ A. For the knowledge set K, we identify the membership function p_K(q_i) with the conditional probability p(i|k) that the world is i, given that the knowledge equals k:

p_K(q_i) = p(i|k).    (1)

In what follows, we prefer to use the p(i|k) notation, and consequently denote the knowledge set by k (which is slightly sloppy but will not lead to confusion). Note that if the knowledge is incompatible with any world state, then ∀i: p(i|k) = 0; since this also holds for the empty set ∅, we will denote incompatible knowledge by the empty set.

Motions

A motive action M transforms a world i to another world. For a non-deterministic motion, that resulting world is one of a set of possibilities j, each with an a priori known likelihood of occurrence. We denote the probability that j results from i by M by the number p(j = Mi). For a complete motion model, we have:

∀i ∈ F:  Σ_j p(j = Mi) = 1 if M can be applied in state i, and 0 otherwise.    (2)

In the sets with probabilities framework, a non-deterministic motion M is represented as a probabilistic map from the state set F to 2^F, with certain properties (see [7], Chapter 2, item Motions). The result of a motion on a world state set {q_i} is a singleton set {q_j}, where q_j is a member of the set M{q_i} defined as:

M{q_i} ≡ {q_j | p(j = Mi) ≠ 0},    (3)

with probability function p_M{q_i}(q_j) given by:

p_M{q_i}(q_j) = p(j = Mi).    (4)

This is to be interpreted as: the likelihood that {q_j} occurs equals p_M{q_i}(q_j). We thus have:

[ {q_i} | p_{q_i}(q_j) = δ_ij ]  --M-->  [ M{q_i} | p_M{q_i}(q_j) = p(j = Mi) ].    (5)

We assume that each motion can be described in terms of M and p_M, and that this can be done validly independently of the actual knowledge. Formally, the assumptions are (see the footnote below):

∀i, j ∈ F:  prob(j = Mi and i ∈ k) = p(j = Mi) p(i|k),    (6)

∀i1, i2, j1, j2 ∈ F:  prob(j1 = M1 i1 and j2 = M2 i2) = p(j1 = M1 i1) p(j2 = M2 i2).    (7)

For a deterministic motion, M{q_i} is a singleton set for all states i; hence, p_M{q_i}(q_j) is 0 or 1; as a consequence, we could ignore the membership functions, and could represent a deterministic motion by a (partial) function M : F → F. However, in this paper we consider a deterministic motion as a special case of non-deterministic motion; we use the non-deterministic notation throughout.

Sensory actions

A non-deterministic sensory action S, applied to a world state, results in an observation j from some set of observations O_S. We assume that the likelihood that Si equals observation j is given for all i and j, and denote these numbers by p(j = Si). For a complete sensory model, we have:

∀i ∈ F:  Σ_j p(j = Si) = 1.    (8)

Footnote: Note that the second equation implies that prob(j = Mi and j = Mi) = p(j = Mi)²; this may be counter-intuitive, but it actually defines the type of non-determinism we are considering. The entity 'prob(j = Mi and j = Mi)' means: take a world i, execute the motion M and obtain the result j; simultaneously execute the motion M on i and again obtain j. In that interpretation, p(j = Mi)² is indeed the correct value for the simultaneous probability.
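The motion and sensor models of Eq. (2) and Eq. (8) can be represented as conditional probability tables. A minimal sketch, with hypothetical toy states and numbers, checking the completeness conditions:

```python
import math

states = ['near_wall', 'far_from_wall']

# p(j = Mi): a 'move toward wall' motion that succeeds with probability 0.9.
M = {'far_from_wall': {'near_wall': 0.9, 'far_from_wall': 0.1},
     'near_wall':     {'near_wall': 1.0}}

# p(j = Si): a noisy proximity sensor with two observations.
S = {'near_wall':     {'close': 0.95, 'far': 0.05},
     'far_from_wall': {'close': 0.10, 'far': 0.90}}

def complete(model, domain):
    """Eq. (2)/(8): outgoing probabilities sum to 1 (or to 0 where the
    action is inapplicable, which Eq. (2) allows for motions)."""
    return all(math.isclose(s, 1.0) or s == 0.0
               for s in (sum(model.get(i, {}).values()) for i in domain))

print(complete(M, states), complete(S, states))  # True True
```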


In the sets-with-probabilities representation, we represent the members of O_S by elements s_j from a set Σ_S with associated probabilities p_S{q_i}(s_j); let the set {s_j} correspond to the observation j. Then S maps from F to 2^(Σ_S) by:

[ {q_i} | p_{q_i}(q_j) = δ_ij ]  --S-->  [ {s_j} ∈ S{q_i} ≡ {s_m | p(m = Si) ≠ 0} | p_S{q_i}(s_j) = p(j = Si) ].    (9)

We assume that each sensory action can be described in terms of S and p_S, and that the model is valid independently of knowledge. Such assumptions can be stated formally as:

∀i ∈ F, ∀s ∈ Σ_S:  prob(s = Si and i ∈ k) = p(s = Si) p(i|k),    (10)

∀i1, i2 ∈ F, ∀s1, s2 ∈ Σ_S:  prob(s1 = S1 i1 and s2 = S2 i2) = p(s1 = S1 i1) p(s2 = S2 i2).    (11)

For a deterministic measurement, there is only one s_j for which p_S{q_i}(s_j) is non-zero; as a consequence, the relation S is also a function F → Σ_S which maps a state to a single observation (rather than to a set of observations). Here we prefer to treat deterministic actions as a special case of non-deterministic actions.

3.2. Knowledge updating equations for complete and consistent knowledge

Recall that we define completeness and consistency in terms of compatibility. There are three types of compatibility: motive compatibility, sensory compatibility and compatibility with history.
- Motive compatibility. Given a motion M, state j is motive compatible with state i by M if and only if p(j = Mi) ≠ 0.
- Sensory compatibility. Given a measurement s, a state i is sensory compatible with observation s by S if and only if p(s = Si) ≠ 0.
- History compatibility. A state i is compatible with history H_t if all actions performed up to time t could lead to the same set of observations as those actually made. Thus history compatibility requires sensory compatibility for all times less than t.

In [7], we build up the notions of completeness and consistency of knowledge from these concepts, and then prove the knowledge updating equations. These knowledge updating equations turn out to be fully and uniquely determined by the demands. Here we prefer to define complete and consistent knowledge axiomatically through the same knowledge updating equation. This will appear somewhat arbitrary, but it avoids subtle details of proof that are not relevant to the present paper; the interested reader is referred to [7] for the more rigorous converse treatment.

Initially, when no sensory or motive actions have yet been performed, we have an initial knowledge set k_0 with associated probabilities p(i|k_0), both given a priori. This initial knowledge is taken to be complete and consistent by definition. Now consider the updating of knowledge at time t. At time t-1, let the knowledge be k_{t-1}, with p(i|k_{t-1}) as associated probabilities. Assume that this knowledge is complete and consistent. An action is performed, either a motive action M or a sensory action S leading to an observation s. In the case of motion, complete and consistent (c&c) knowledge k_t is given by:

[ k_{t-1} | p(i|k_{t-1}) ]  --M--> (c&c)  [ k_t = M k_{t-1} ≡ ∪_{q_j ∈ k_{t-1}} M{q_j} | p(i|k_t) = Σ_{q_j ∈ k_{t-1}} p(i = Mj) p(j|k_{t-1}) ].    (12)

The two equations for updating of the set and its probabilistic membership function are not independent: we may derive the latter from the former using Bayes' rule and property Eq. (6), as follows:

p(i|k_t)
= p(q_i = M q_w | q_w ∈ k_{t-1} and q_w' ∈ M{q_w})  (by notational convention)
= Σ_{q_j} p(q_i = q_Mj and q_j = q_w | q_w ∈ k_{t-1} and q_w' ∈ M{q_w})
= Σ_{q_j} p(q_i = q_Mj and q_w = q_j and q_w ∈ k_{t-1} and q_w' ∈ M{q_w}) / p(q_w ∈ k_{t-1} and q_w' ∈ M{q_w})  (Bayes)
= Σ_{q_j} p(q_i = q_Mj and q_w = q_j and q_w ∈ k_{t-1})  (q_w ∈ k_{t-1}, and also q_w' ∈ M{q_w}, are true by induction hypothesis)
= Σ_{q_j} p(q_i = q_Mj | q_w = q_j and q_w ∈ k_{t-1}) p(q_w = q_j and q_w ∈ k_{t-1})  (Bayes)
= Σ_{q_j} p(q_i = q_Mj) p(q_w = q_j | q_w ∈ k_{t-1}) p(q_w ∈ k_{t-1})  (Bayes, and independence, Eq. (6))
= Σ_{q_j} p(q_i = q_Mj) p(q_w = q_j | q_w ∈ k_{t-1})  (q_w ∈ k_{t-1} by induction hypothesis)
= Σ_j p(i = Mj) p(j|k_{t-1})  (by notational convention). □

If the action is sensory, complete and consistent knowledge k_t is given by:

[ k_{t-1} | p(i|k_{t-1}) ]  --S--> (c&c)  [ k_t = {q_i | p(s = Si) ≠ 0} ∩ k_{t-1} | p(i|k_t) = p(s = Si) p(i|k_{t-1}) / Σ_{q_j ∈ k_{t-1}} p(s = Sj) p(j|k_{t-1}) ].    (13)

Again, the equation for probability is a direct consequence of the set equation, by Bayes' rule and the assumptions for S (Eq. (10)):

p(i|k_t)
= p(q_w = q_i | q_w ∈ k_{t-1} and S q_w = s)  (by notational convention)
= p(q_w = q_i and S q_w = s and q_w ∈ k_{t-1}) / p(q_w ∈ k_{t-1} and S q_w = s)  (Bayes' rule)
= c · p(q_w = q_i and S q_w = s and q_w ∈ k_{t-1})
= c · p(S q_w = s | q_w = q_i and q_w ∈ k_{t-1}) p(q_w = q_i | q_w ∈ k_{t-1}) p(q_w ∈ k_{t-1})  (Bayes' rule)
= c · p(S q_i = s) p(q_w = q_i | q_w ∈ k_{t-1})  (by Eq. (10); final term equals 1, tautology)
= c · p(S q_i = s) p(i|k_{t-1})  (by notational convention),

with c given by:

c⁻¹ = p(q_w ∈ k_{t-1} and S q_w = s)
= Σ_i p(q_w = q_i and q_w ∈ k_{t-1} and S q_w = s)
= Σ_i p(S q_i = s | q_w = q_i and q_w ∈ k_{t-1}) p(q_w = q_i and q_w ∈ k_{t-1})  (Bayes' rule)
= Σ_i p(S q_i = s) p(q_w = q_i | q_w ∈ k_{t-1}) p(q_w ∈ k_{t-1})  (by Eq. (10))
= Σ_i p(S q_i = s) p(q_w = q_i | q_w ∈ k_{t-1})  (final term equals 1, tautology)
= Σ_i p(S q_i = s) p(i|k_{t-1})  (by notational convention). □

By these equations, complete and consistent knowledge is defined recursively on the basis of k_0, p(i|k_0), and the history of actions and observations.
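The recursive updating of Eqs. (12) and (13) can be sketched directly in the sets-with-probabilities representation. The state names, motion model and sensor model below are hypothetical illustration values:

```python
# Knowledge is a dict state -> p(i|k); models are dicts of conditional
# probability tables, as in Eqs. (2) and (8).

def motion_update(k, p_motion):
    """Eq. (12): p(i|k_t) = sum_j p(i = Mj) p(j|k_{t-1}) (no renormalization
    is needed when the motion is applicable in every state of k)."""
    new = {}
    for j, pj in k.items():
        for i, pij in p_motion.get(j, {}).items():
            new[i] = new.get(i, 0.0) + pij * pj
    return new

def sense_update(k, p_sensor, s):
    """Eq. (13): restrict to sensory-compatible states, then renormalize;
    the empty dict represents inconsistent knowledge (the empty set)."""
    new = {i: p_sensor[i].get(s, 0.0) * pi for i, pi in k.items()
           if p_sensor[i].get(s, 0.0) != 0.0}
    z = sum(new.values())
    return {i: p / z for i, p in new.items()} if z > 0 else {}

k = {'near': 0.5, 'far': 0.5}
M = {'far': {'near': 0.9, 'far': 0.1}, 'near': {'near': 1.0}}
S = {'near': {'close': 0.95, 'far': 0.05}, 'far': {'close': 0.1, 'far': 0.9}}

k = motion_update(k, M)          # {'near': 0.95, 'far': 0.05}
k = sense_update(k, S, 'close')  # sensing sharpens the knowledge further
print(k)
```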


4. Knowledge space

We have found that it is possible to represent the knowledge and actions geometrically, in a properly defined knowledge space. In this section we define that space, and specify the functorial mapping T from the sets-with-associated-probabilities framework into this vector space framework. The mapping T maps sets and probabilities, as well as operations on them such as intersection and union, to appropriate corresponding entities in a linear algebraic vector space framework. For a formal definition of a functor, see [4]. In Section 5.1 we treat an application to a simple example of three cups and a ball; throughout the present section we point to that section to provide the illustration of the concepts we introduce. The reader is recommended to read the corresponding items by way of illustration.

4.1. Mapping the basic concepts

World states are basis vectors

A world state i is represented by a basis vector e_i of a vector space V, and vice versa. (We will see below that the basis {e_i} is orthonormal.) Thus under T, the set {q_i} maps to the vector e_i. Let us call the set of basis vectors E; thus E is the T-image of F.

Knowledge states as vectors

A knowledge state k is represented by a vector k of V (not necessarily a basis vector). The 'conditional probability p(i|k) that the world state is i given that the knowledge is k' is represented using the inner product in V by:

e_i^T k = p(i|k).    (14)

Combining the two, knowledge k is represented in the vector space framework by:

k = Σ_i e_i e_i^T k = Σ_i p(i|k) e_i.    (15)

This must be the image of the set k with associated probability function p(i|k) under T. Therefore, T merges the two concepts 'set' and 'probability' into one, namely 'vector'. This transformation will simplify the basic operations of knowledge updating, as we will see.

Compatibility of world states leads to orthonormality

World states are mutually incompatible, and fully compatible with themselves. Therefore we have, for the special case that the knowledge consists of a single world state j, p(i|j) = p_{q_j}(q_i) = δ_ij. As a consequence:

δ_ij = p(i|j) = e_i^T e_j.    (16)

It follows that:

the vectors e_i form an orthonormal basis for the state space V.    (17)

We thus have

e_i^T e_j = δ_ij,    (18)

and the useful identity:

Σ_{e_i ∈ E} e_i e_i^T = 1,    (19)

where 1 is the identity matrix (the diagonal of V). (Proof of Eq. (19): right-multiply the left hand side by e_m; then Σ_{e_i ∈ E} e_i e_i^T e_m = Σ_{e_i ∈ E} e_i δ_im = e_m; since this holds for arbitrary e_m, we have Eq. (19).)
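These basis-vector properties are easy to check numerically; a minimal numpy sketch for a hypothetical 3-state world:

```python
import numpy as np

n = 3
E = np.eye(n)  # column i is the basis vector e_i of one world state

# Eq. (18): e_i^T e_j = delta_ij (orthonormality).
assert np.allclose(E.T @ E, np.eye(n))

# Eq. (19): sum_i e_i e_i^T = 1 (the identity matrix).
outer_sum = sum(np.outer(E[:, i], E[:, i]) for i in range(n))
assert np.allclose(outer_sum, np.eye(n))

# A knowledge vector stores p(i|k) as its coefficients, cf. Eq. (15).
k = np.array([0.5, 0.3, 0.2])
assert np.isclose(k.sum(), 1.0)  # city-block norm 1, anticipating Eq. (21)
print(E[:, 0] @ k)  # e_0^T k = p(0|k) = 0.5, cf. Eq. (14)
```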


Knowledge space

Since conditional probabilities satisfy:

0 ≤ p(i|k) ≤ 1,    (20)

the vector k must reside in the positive hyperoctant. Since p(i|k) is a conditional probability distribution, we must have Σ_i p(i|k) = 1. In the vector space this means that the city-block norm of a knowledge vector equals 1:

|k| ≡ Σ_i p(i|k) = 1.    (21)

Combining the two demands, k lies in a 'diagonal slice' of the positive hyperoctant of the space spanned by the basis vectors e_i for which e_i^T k = p(i|k) ≠ 0. We call this subspace the knowledge space of a system that can reside in the states i, and denote it by K. Note that the vectors e_i do not form an orthogonal basis for the knowledge space K, which is a hyperplane of codimension 1 in the state space. Also note that knowledge which is incompatible with any possible world state has ∀i: p(i|k) = 0; it is therefore represented by the null vector. For the simple example, the knowledge space is discussed in Section 5.1, item 'Knowledge space'.

Motions

A motion is a map M from a world state to a set of world states; the probability that state j results from state i is given for all i, j, and indicated by p(j = Mi). Eq. (5) states how a motion is represented in the sets with probabilities framework. In the vector space framework, we represent the motion by a linear map, also indicated by M, from E (which represents the set F) to K (which represents 2^F); thus M e_i represents the state Mi with corresponding conditional probabilities. This map can be represented on the orthonormal basis E of V as a matrix with elements M_ji = e_j^T M e_i:

M e_i = Σ_{e_j ∈ E} e_j e_j^T M e_i = Σ_{e_j | e_j^T M e_i ≠ 0} e_j e_j^T M e_i.    (22)

This must be the representation of the set Mi; and by the transcription rules and Eq. (5), we must necessarily set the number e_j^T M e_i equal to the probability p(j = Mi):

p(j = Mi) = e_j^T M e_i = M_ji.    (23)

Since the M_ji are thus conditional probabilities, they have to satisfy:

0 ≤ M_ji ≤ 1,    (24)

Σ_j M_ji = 0 if M e_i = 0 (i.e. M cannot be applied in state i), and 1 otherwise.    (25)

We demanded the properties of p(j = Mi) stated in Eq. (6)-Eq. (7). These properties make the representation of M as a k-independent linear operator with normal operator multiplication consistent and permissible. Therefore they are implicit in the linear representation, and need not be stated explicitly. For the simple example, the motive actions are discussed in Section 5.1, item 'Motions'.

Sensory actions

A sensory action S, applied to a world state, results in an observation from a set of possible observations O_S. We assume that the likelihood that Si equals observation j is given for all i and j, and denote this by p(j = Si). All world states lead to some sensory outcome; if necessary, we define a sensory outcome 'undefined'. Eq. (9) specifies the action of a sensor S in the sets with probabilities representation.


In the vector space framework, let a basis observation s_j be represented by a vector s_j in a vector space V_S. The set of all such vectors forms a basis E_S for V_S. The sensory action S can now be represented by a rectangular Dim(E_S) × Dim(E) matrix S:

S e_i = Σ_{s_j ∈ E_S} s_j s_j^T S e_i.    (26)

By the conversion rules and Eq. (9), we must put:

s_j^T S e_i = p(j = Si) = S_ji.    (27)

Therefore the elements S_ji of the S matrix are probabilities, so that they must satisfy:

0 ≤ S_ji ≤ 1,    (28)

∀i: Σ_j S_ji = 1.    (29)

Note that since a sensory outcome must be defined for all states, we do not invoke conditions similar to Eq. (25). We demanded in Eq. (10)-Eq. (11) that the probabilistic sensory model be valid independently of the actual knowledge and that it be probabilistically independent. It is because of these two properties that we may represent the S's as k-independent operators that obey matrix multiplication laws, and therefore we do not need to specify these properties explicitly. For the simple example, the sensory actions are discussed in Section 5.1, item 'Sensing'.

4.2. The knowledge updating equations

In Section 3, we presented the knowledge updating equations, for the knowledge set and the associated probabilities, that preserve completeness and consistency of knowledge. Both for sensing and motion, the updating consisted of two equations, one for the set and one for the associated probabilities. In the vector space framework, these two equations each combine into one knowledge vector updating equation. We present and discuss that equation in this section, for both sensing and motion. Since we are interested in the accumulation of knowledge by successive actions, we tag the quantities with a positive time parameter t, with t = 0 indicating the initial instant. Let the initial knowledge be k_0, given a priori. We will find it useful to define a normalization function normalize[·] by:

normalize[v] = 0 if |v| = 0, and v/|v| otherwise,    (30)

(thus normalize[v] has city-block norm 1, or equals 0). Now let the knowledge be k_{t-1}. If we perform the motive action M_t, the new knowledge is given by:

k_t = normalize[M_t k_{t-1}].    (31)
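As a numerical sketch of Eq. (31), with a hypothetical two-state world and a hypothetical stochastic matrix M:

```python
import numpy as np

def normalize(v):
    """Eq. (30): rescale to city-block norm 1, or return 0 for the null vector."""
    s = v.sum()
    return v / s if s > 0 else np.zeros_like(v)

# Columns sum to 1: M_ji = p(j = Mi), cf. Eqs. (23)-(25).
M = np.array([[1.0, 0.9],
              [0.0, 0.1]])
k = np.array([0.5, 0.5])   # p(i|k_{t-1})

k_next = normalize(M @ k)  # Eq. (31)
print(k_next)              # prints [0.95 0.05]
```

Because M is column-stochastic and applicable in every state, M k already has city-block norm 1 here; the normalization only matters when M is a partial motion.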

Proof. First, we prove that this updating equation can be mapped to the updating of the knowledge set described in Section 3, by considering the updating of states with non-zero probabilities:

{e_i | e_i^T k_t ≠ 0} = {e_i | e_i^T M_t k_{t-1} ≠ 0}.

This is equivalent, by our transcription rules, to the representation of the knowledge set updating equation Eq. (12), which reads:

k_t = M_t k_{t-1}.    (32)


Second, we prove that the updating of probability agrees with the probabilistic knowledge updating equation:

p(i|k_t) = e_i^T k_t = e_i^T normalize[M_t k_{t-1}] = Σ_j p(i = M_t j) p(j|k_{t-1}) / Σ_{i'} Σ_j p(i' = M_t j) p(j|k_{t-1}),

which is equivalent to the formula in the sets with probabilities framework if M_t is universally applicable. □

For the simple example, knowledge updating for motive actions is discussed in Section 5.1, item 'Motions'. If the applied action is the sensory action S_t, leading to the observation s_t, we obtain:

k_t = normalize[ Σ_{e_i ∈ E} (s_t^T S_t e_i) e_i e_i^T k_{t-1} ].    (33)

Proof. First, considering the updating of states with non-zero probabilities:

{e_i | e_i^T k_t ≠ 0} = {e_i | e_i^T k_{t-1} ≠ 0 and s_t^T S_t e_i ≠ 0} = {e_i | e_i^T k_{t-1} ≠ 0} ∩ {e_i | s_t^T S_t e_i ≠ 0}.

This is equivalent, by our transcription rules, to the representation of the knowledge set updating equation Eq. (13), which reads:

k_t = k_{t-1} ∩ {q_i | p(s_t = S_t i) ≠ 0} = k_{t-1} ∩ {q_i | s_t ∈ S{q_i}}.    (34)

Second, we prove that the updating of probability agrees with the probabilistic knowledge updating equation:

p(i|k_t) = e_i^T k_t = e_i^T normalize[ Σ_j (s_t^T S_t e_j) e_j e_j^T k_{t-1} ] = p(s_t = S_t i) p(i|k_{t-1}) / Σ_j p(s_t = S_t j) p(j|k_{t-1}),

which is equivalent to the formula in the sets with probabilities framework. □

For the simple example, knowledge updating for sensory actions is discussed in Section 5.1, item 'Sensing'. Induced structures on the basis of this updating are treated in item 'Reasoning'.
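The sensory updating of Eq. (33) amounts to a componentwise product of the likelihood vector S_t^T s_t with the old knowledge vector, followed by normalization. A numpy sketch with a hypothetical sensor matrix (rows are observations, columns are states):

```python
import numpy as np

def normalize(v):
    """Eq. (30): rescale to city-block norm 1, or return 0 for the null vector."""
    s = v.sum()
    return v / s if s > 0 else np.zeros_like(v)

S = np.array([[0.95, 0.10],    # p('close' | state i), cf. S_ji of Eq. (27)
              [0.05, 0.90]])   # p('far'   | state i)
k = np.array([0.95, 0.05])     # p(i|k_{t-1})

s_obs = np.array([1.0, 0.0])   # basis vector of the observation 'close'
k_next = normalize((S.T @ s_obs) * k)  # Eq. (33): componentwise product
print(k_next)
```

Note that `(S.T @ s_obs) * k` is exactly the Bayes numerator p(s|i) p(i|k), so the normalized result agrees with the sets-with-probabilities update of Eq. (13).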

4.3. Remarks on the linearity of knowledge space

Knowledge space, as defined above, is reminiscent of a linear vector space. However, there are some operations that extend the admissible operations of such a space; these operations are essential for a correct representation of knowledge acquisition. We treat these operations in turn.

Weighted addition The possibility of multiple world states compatible with the history (and the initial knowledge) is a union operation on sets, which translates into a weighted addition of basis vectors in the knowledge space. It should be noted that there is no meaning attached to the addition of non-basis vectors: knowledge vectors cannot be combined in that way.


Component-wise multiplication leading to a vector
Instead, combination of knowledge vectors as required in the knowledge updating equations corresponds to an intersection operation on sets. It is represented by a component-wise multiplication of vectors leading to a vector in the knowledge space. This is demonstrated by Eq. (33). Since basis vectors are admissible knowledge vectors, this is also an admissible operation on basis vectors; however, because of orthonormality the result is always 0.

Inner product
The inner product represents conditional probabilities. Conditional probabilities are only defined in terms of world states: the probability that the world is in a state i given that the knowledge equals k. Consequently, we only have meaning for the inner product of a basis vector with a vector; no meaning has been assigned as yet to the inner product of arbitrary vectors.

Normalization
Since consistent knowledge has conditional probabilities that add up to 1, the knowledge vectors are normalized to have city-block norm 1 (as well as positive coefficients). We have done so consistently in this paper. It should be noted here that there is an alternative formulation of the theory in which the normalization is not done at every step of knowledge updating. In that case, one may interpret the ratio of the norm of the resulting knowledge vector to that of the previous knowledge vector as the conditional probability that this particular knowledge updating would occur. Correspondingly, one may then associate a meaning with arbitrary inner products.

Null vector
The null vector has a natural place in the theory as a representation of inconsistent knowledge. It is exceptional in the normalized theory in that it is the only knowledge vector that does not have city-block norm unity. (In the non-normalized version, it can be seen as a limit vector of increasingly unlikely events.)
Thus inconsistent knowledge, though representable naturally as a vector in the full vector space, assumes an exceptional quality when one reduces the considerations to the knowledge space (vectors of city-block norm 1 only). We will see an example of this in Section 5.1, item 'Reasoning'. Knowledge space is thus not a linear vector space; rather, it is a homogeneous linear space (since it is normalized), with a corresponding lack of a null element, but with an extension by a component-wise product. The linear structure remains recognizable in many operations (particularly when limiting the actions to purely motive actions), and we hope to take advantage of that in the analysis of goal-directed knowledge acquisition. If we find that the positivity of the coefficients and the use of the city-block norm present problems, then we may change the representation to a unitary representation as used in quantum mechanics, in which the coefficients of vectors are complex numbers whose squared modulus is a conditional probability. We have preferred to develop the representation first in real vector spaces, for the clarity this gives in the depiction of the applications in the next sections. The reader should realize, though, that the only relevant issue is the structure of knowledge acquisition, as given by its algebraic properties; the particular representation of this structure, be it in sets-with-probabilities or in knowledge space (real or complex), should be chosen according to convenience.
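The two updating operations discussed above can be sketched in a few lines of code; the following is a minimal illustration (assuming numpy), not part of the original formulation:

```python
import numpy as np

def normalize(k):
    """Scale a knowledge vector to city-block norm 1; the null vector
    (inconsistent knowledge) is left unchanged."""
    n = np.abs(k).sum()
    return k if n == 0 else k / n

def motive_update(M, k):
    """Knowledge updating for a motive action M, as in Eq. (31)."""
    return normalize(M @ k)

def sensory_update(S, s, k):
    """Knowledge updating for a sensory action S with outcome s, as in
    Eq. (33): component-wise product of s^T S with k, then normalization."""
    return normalize((s @ S) * k)

# A 2-state toy system: M swaps the states, S observes the state exactly.
M = np.array([[0.0, 1.0], [1.0, 0.0]])
S = np.eye(2)
k = np.array([0.5, 0.5])
k = sensory_update(S, np.array([1.0, 0.0]), k)   # observe state 1
k = motive_update(M, k)                          # swap the states
```

Note how the motive update is an ordinary linear map, while the sensory update involves the component-wise product that extends the linear-space structure.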

5. Applications: the geometrical interpretation of the vector space framework

The knowledge updating equations can be interpreted geometrically, by taking as the vector space on which they act the usual model R^n, for suitable n. This geometrical interpretation is especially useful to develop the intuition behind knowledge acquisition, and the interaction of the various models for dealing with knowledge that were discussed in the introduction. In this section, we show the geometrical representation of motive actions and sensory actions, and of their interactions. We do this by illustration of three examples: a simple problem of three cups and a ball to illustrate the basics, the circuit analysis example from the introduction, and a position determination example. The latter also demonstrates the use of the framework in continuous problems, which lead to infinite-dimensional vector spaces.

5.1. Three cups and a ball

Take a system consisting of 3 cups, upside down, under one of which there is a ball. We have motive actions available that swap the cups, and sensory actions that lift a cup and observe absence or presence of a ball under it. We'd like to describe the properties of this system.

Knowledge space
This system has 3 states, and is therefore represented by a 3-dimensional vector space, see Fig. 2a. Number the cups 1, 2, 3 by their location, and represent the state with the ball under the cup at location i by the vector e_i. In general, knowledge will lie in the shaded surface of Fig. 2a, which indicates the set of all knowledge vectors with city-block norm 1. If there is no initial information on under which cup the ball resides, the initial likelihoods p(i|k_0) of the three world states given the knowledge k_0 all equal 1/3. Therefore the initial knowledge is represented by the vector k_0 = (1/3 1/3 1/3)^T, indicated in Fig. 2a.

Motion
Now assume that we can execute a motion, for instance, a cyclic permutation of the cups over their locations. This corresponds to the matrix:

$$M = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}. \tag{35}$$

By Eq. (31), k_0' = normalize[M (1/3 1/3 1/3)^T] = (1/3 1/3 1/3)^T, thus the motion has no effect on k_0, as expected. If we somehow had obtained a knowledge k_1 = (0.8 0.1 0.1)^T (indicating that the ball is likely to be under the cup at location 1), then after M we obtain k_1' = M k_1 = (0.1 0.8 0.1)^T, so the ball is now likely to be under the cup at location 2, as expected. Therefore knowledge moves along with the world it represents. In Fig. 2a, the admissible motions of the knowledge vector as a consequence of swapping of the cups are the 6 elements of the symmetry group of the shaded triangle: 3 rotations and 3 reflections.

Sensing
If we lift the cup at location 1 and look, we obtain information on the presence of the ball under it. Thus, this sensor has two outcomes, 'present' and 'absent', which we may code by the basis vectors (1 0)^T and (0 1)^T of E_S, respectively (this defines E_S). The sensor then is represented by:

$$S_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \tag{36}$$

since we obtain the 'present' observation only in state 1. If we start from k_0 = (1/3 1/3 1/3)^T and apply S_1 with observation 'present', then we have s_1^T S_1 = (1 0 0) and the updating equation yields: k' = normalize[(1·1/3, 0·1/3, 0·1/3)^T] = (1 0 0)^T, as expected; the observation 'absent' has s_1^T S_1 = (0 1 1), which leads to the vector k* = normalize[(0·1/3, 1·1/3, 1·1/3)^T] = (0 1/2 1/2)^T, as expected. This is depicted in Fig. 2a; it is seen that the sensor projects k_0 along the line (λ, (1−λ)/2, (1−λ)/2)^T to the two 1-dimensional subspaces spanned by {e_1} and {e_2 + e_3}. Information that can not be obtained is in the direction (0 1 −1)^T, which is perpendicular to these spaces, and equal to the null space of S_1.
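The numbers in the two items above can be verified mechanically; a small sketch (assuming numpy), with M transcribed from Eq. (35) and S_1 from Eq. (36):

```python
import numpy as np

def normalize(k):
    n = k.sum()
    return k if n == 0 else k / n

M  = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], float)   # cyclic permutation, Eq. (35)
S1 = np.array([[1, 0, 0], [0, 1, 1]], float)              # lift cup 1, Eq. (36)
k0 = np.full(3, 1/3)

# Motion, Eq. (31): uniform knowledge is invariant, peaked knowledge moves along
k_moved = normalize(M @ np.array([0.8, 0.1, 0.1]))

# Sensing, Eq. (33): outcomes 'present' = (1,0) and 'absent' = (0,1)
k_present = normalize((np.array([1.0, 0.0]) @ S1) * k0)
k_absent  = normalize((np.array([0.0, 1.0]) @ S1) * k0)
```

The results reproduce the text: k_moved = (0.1 0.8 0.1)^T, k_present = (1 0 0)^T, and k_absent = (0 1/2 1/2)^T.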

Reasoning
Let S_i denote the sensor that determines whether the ball is under the cup at location i. We can illustrate the irrelevance of the order of the sensors S_1 and S_3 by Fig. 2b. The commutative diagrams


between successive sensory outcomes for two sensors are squares, and these are neatly embedded in the knowledge space as quadrilaterals. In this case, two of the quadrilaterals degenerate to triangles, since the outcome of S_3 after S_1 gives 'present' is necessarily 'absent' and does not lead to a change of the knowledge vector. Note that if we treat only vectors with norm 1, then the full commutative diagram is not representable; we need to include the null vector for a fully symmetrical treatment. Fig. 2c illustrates that the composite result 'S_1 gives absent and S_3 gives absent' is equivalent to the single result 'S_2 gives present'. In the figure, this statement is represented as the boundary of the dashed triangle. This is generally true: closed polyhedra in the knowledge space correspond to logical statements about knowledge acquisition. (The edges of the polyhedra should be composed of vectors from the (discrete) vector fields of motive and sensory action, see [3].) Thus polyhedra form an embedding of commutative diagrams representing such statements into the knowledge space. Another example of reasoning about knowledge is sensor equivalence. It should be clear from elementary reasoning that after S_1 has been done, the sensors S_2 and S_3 are equivalent. Fig. 2d indicates the situation in the knowledge space: S_i is a projection to the edge of the knowledge space through a line passing through the knowledge point k and the point e_i. After S_1 is done, one sees that S_2 and S_3 project along the same linear subspace: the line (0 1 0)^T + λ(0 1 −1)^T if S_1 gave 'absent', and the line λ(1 0 0)^T if S_1 gave 'present'. This spanning of the same subspaces is the sign of their equivalence. (In terms of

Fig. 2. The knowledge space for the 3-cup problem (see text).


category theory, S_1 acts as an equalizer of S_2 and S_3; in the linear spaces, equality is based on the notion of linear dependence.) It should be noted that all these properties of the figures are true by virtue of the knowledge updating equations. There may not be a need to define a separate logic to reason about knowledge updating: all logical conclusions are implicitly drawn when the knowledge updating equations are evaluated.
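The order-irrelevance of S_1 and S_3, and the equivalence of 'S_1 absent, S_3 absent' with 'S_2 present', can be checked directly from the updating equations; a sketch assuming numpy:

```python
import numpy as np

def normalize(k):
    n = k.sum()
    return k if n == 0 else k / n

def sense(S, outcome, k):
    # Eq. (33): component-wise product of outcome^T S with k, then normalize
    return normalize((outcome @ S) * k)

def S(i):
    # sensor checking cup i (0-based): row 0 = 'present', row 1 = 'absent'
    m = np.zeros((2, 3))
    m[0, i] = 1.0
    m[1] = 1.0 - m[0]
    return m

absent = np.array([0.0, 1.0])
k0 = np.full(3, 1/3)

# S_1 'absent' then S_3 'absent', and in the opposite order
k_13 = sense(S(2), absent, sense(S(0), absent, k0))
k_31 = sense(S(0), absent, sense(S(2), absent, k0))
assert np.allclose(k_13, k_31)        # the order of sensing is irrelevant
assert np.allclose(k_13, [0, 1, 0])   # both absent: the ball is under cup 2
```

The two assertions correspond exactly to the closed commutative diagrams of Figs. 2b and 2c.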

Abstraction
For the purpose of clear explanation, we have chosen to represent the system as a 3-state system. This was achieved by focusing on the position of the cup under which the ball resided. However, the most natural representation for a set of 3 cups in 3 positions is one in which all 3! = 6 states are distinguished. Let us study this representation in which the cups are distinguishable, for instance by assuming that the cups have the colors red, green and blue, indicated as r, g, b, respectively. Indicate a state by the colors of the cups in positions 1, 2, 3. We take an arbitrary assignment of basis vectors to states, for instance: e_1 = (rgb), e_2 = (gbr), e_3 = (brg), e_4 = (grb), e_5 = (rbg), e_6 = (bgr). Assume that the 3 positions, labelled 1, 2, 3, are adjacent to each other, and that there are two elementary motions available: M_R, which interchanges the two cups in positions 2 and 3 (R for Right), and M_L, which interchanges the two cups in positions 1 and 2 (L for Left). In the basis given above, M_R and M_L are represented by the matrices:

$$M_R = \begin{pmatrix} 0&0&0&0&1&0 \\ 0&0&0&1&0&0 \\ 0&0&0&0&0&1 \\ 0&1&0&0&0&0 \\ 1&0&0&0&0&0 \\ 0&0&1&0&0&0 \end{pmatrix}, \qquad M_L = \begin{pmatrix} 0&0&0&1&0&0 \\ 0&0&0&0&0&1 \\ 0&0&0&0&1&0 \\ 1&0&0&0&0&0 \\ 0&0&1&0&0&0 \\ 0&1&0&0&0&0 \end{pmatrix}. \tag{37}$$

Let us assume that the ball is under the red cup. Since we are only interested in a ball/no-ball dichotomy of the cups, only the feature of 'where the ball is' is relevant in the abstracted state description. We thus desire to describe states 1 and 5 as the same state, and similarly for 3 and 4, and for 2 and 6. We can define a sensor α that has the same output for these states:

$$\alpha = \begin{pmatrix} 1&0&0&0&1&0 \\ 0&0&1&1&0&0 \\ 0&1&0&0&0&1 \end{pmatrix}. \tag{38}$$

We would like to decompose the operators M_R and M_L and their composites into 'components' that transform these compound states naturally, relative to the abstraction α. Algebraically, we need to find an operator M that has a homomorphic image μ under α, which means that it needs to satisfy the commutative relation:

$$\alpha \cdot M = \mu \cdot \alpha. \tag{39}$$

If such a pair of M and μ can be found, we call α a proper abstraction. In this case, a solution of Eq. (39) is the set of operators {(M_L M_R)^i}, since:

$$\alpha \cdot (M_L M_R) = \begin{pmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{pmatrix} \cdot \alpha. \tag{40}$$
Therefore α is indeed a proper abstraction. Note that the corresponding abstracted motions μ are identical to the powers of the motion in Eq. (35), as they should be, since we made the same abstraction. This is the result for the red cup; but since the set of actions spanned by M_L and M_R forms a group, we achieve the same associated motions μ for abstractions based on the blue and the green cup. In the general case of a semi-group of actions it may not be possible to find a global solution to the abstraction condition (i.e. a solution valid for all states). In that case, one may be forced to perform local decompositions and abstractions. Benjamin [2] is currently investigating algebraic techniques to form composite actions; we plan to translate these into linear algebra so as to make them work in our representation as automatic abstraction operators.

5.2. Circuit diagnosis and repair

Problem definition
Suppose we have a circuit consisting of two components a and b in series. These components can be functional (F) or broken (B); this can be established by measurements S_a and S_b, with outcomes F_a, B_a, and F_b, B_b, respectively. Let the probabilities that these components are functional be p_a and p_b, respectively. Assume we also have a composite measurement S_ab that measures the functionality of the whole circuit, with outcomes F_ab and B_ab, according to the rules: if a or b is broken, B_ab results, otherwise F_ab. We have repair actions for the components a and b, called M_a and M_b, respectively. These are pure 'motions', since they change the world, and the knowledge only through the change in the world. We assume that both motive and sensory actions are deterministic: the repair actions always repair the component, and the sensory actions always identify the functionality correctly. We assume that any measurement costs T, and that the cost to repair any component equals R. We are interested in finding the minimal cost strategy that repairs the circuit.

Vector space representation
The system has 4 states, which in our representation form the basis of a 4-dimensional vector space. Take the state labelling in which state 1 has a 'broken' and b 'broken'; state 2 has a 'broken' and b 'fixed'; state 3 has a 'fixed' and b 'broken'; state 4 has a 'fixed' and b 'fixed'. The initial a priori knowledge k_0 depends on p_a and p_b as follows:

$$k_0 = \begin{pmatrix} q_a q_b \\ q_a p_b \\ p_a q_b \\ p_a p_b \end{pmatrix}, \tag{41}$$

where q_a = 1 − p_a and q_b = 1 − p_b.

Sensing
On the corresponding basis, the deterministic sensors S_a, S_b and S_ab are therefore represented by:

$$S_a = \begin{pmatrix} 1&1&0&0 \\ 0&0&1&1 \end{pmatrix}, \quad S_b = \begin{pmatrix} 1&0&1&0 \\ 0&1&0&1 \end{pmatrix}, \quad S_{ab} = \begin{pmatrix} 1&1&1&0 \\ 0&0&0&1 \end{pmatrix}, \tag{42}$$

where in E_{S_a}, (1 0)^T indicates 'broken' and (0 1)^T indicates 'fixed', and similarly for E_{S_b} and E_{S_ab}.

where in Eso, (10) T indicates 'broken' and (01) T indicates 'fixed', and similarly for g,& and Es, ~. Motion The two deterministic motions Ma, Mb can be described as matrices: Ma =

tl °°!/ /i °°i/ O0

0 1

Mb~_

1 0

'

(43)

10

0 0

0 1

"

Visualization
The state space is 4-dimensional, which is somewhat hard to visualize. However, we know by Eq. (21) that all knowledge vectors except 0 have city-block norm 1, and therefore reside in a 3-dimensional subspace perpendicular to (1 1 1 1)^T. We perform an orthonormal basis transformation to a basis containing this vector, and project onto the hyperplane (1 1 1 1)·x = 1; this gives us a full geometrical representation of the knowledge vectors (except the null vector (0 0 0 0)^T, which we have to treat separately in such a visualization; in the present problem, this vector does not play a role). The knowledge space (with positive coefficients for all components in the 4-space) is now the inside of a regular tetrahedron of which the vertices are the basis states e_i. It is depicted in Fig. 3a. The initial knowledge vector k_0 of Eq. (41) is a vector determined by the two parameters p_a and p_b, and therefore resides on a two-dimensional subspace in the knowledge space. This surface is indicated in Fig. 3a. Note that only the initial knowledge vector needs to reside on this surface; intermediate knowledge can reside anywhere within the tetrahedron. Note also that the realizable world states, which are characterized by p_a = 0, 1 and/or p_b = 0, 1, are on the initial knowledge surface, since they are legitimate initial knowledge. The sensors S_a, S_b, and S_ab may be visualized as projections, as follows (see Fig. 3b). Consider S_a. If a is observed to be 'broken', we have (1 0)^T S_a = (1 1 0 0)^T, and by the knowledge updating equation Eq. (33)

Fig. 3a. The knowledge space, with the k_0-surface. A typical initial knowledge is indicated. The four labeled corners of the tetrahedron indicate the endpoints of the vectors representing the basis states (these are the basis vectors of the four-dimensional vector space). The dotted cube is drawn for convenience of reference to the Cartesian coordinates in this 3-dimensional representation of knowledge space.


knowledge k = (k_1 k_2 k_3 k_4)^T with |k| = 1 becomes k' = normalize[(k_1 k_2 0 0)^T] = (1/(k_1 + k_2)) (k_1 k_2 0 0)^T. Similarly, if a is observed to be 'fixed', then the knowledge becomes k'' = (1/(k_3 + k_4)) (0 0 k_3 k_4)^T. Note that k is on the line connecting k' and k'', and that k' is on the line l_12 connecting e_1 and e_2, while k'' is on the line l_34 connecting e_3 and e_4. Thus we may graphically find the knowledge resulting from the sensor by constructing the unique line through k, l_12 and l_34, and projecting k along this line in the appropriate direction (if such a line cannot be constructed, S_a does not change the knowledge). This construction is indicated in Fig. 3b. The result for the sensor S_b is similar, now with the lines l_13 and l_24. For the sensor S_ab, the projection is along the line through k and e_4, onto the plane spanned by the states 1, 2 and 3. This interpretation of sensing as projection is indicated in Fig. 3c, which gives the motion of a point on the k_0-surface under arbitrary sequences of sensory actions. In that figure, the transitions are labeled by the outcome of the corresponding sensors. The motive actions M_a and M_b are also projections. Applying M_a to k = (k_1 k_2 k_3 k_4)^T, the knowledge updating equation Eq. (31) yields k' = (0 0 (k_1 + k_3) (k_2 + k_4))^T, which is on l_34. Obviously,

Fig. 3b. The commutation of S_a and S_b.


this is the subspace on which S_a always reports 'fixed'. Similarly, the motive action M_b projects to l_24, the subspace on which S_b reports 'fixed'. We omit the actual geometrical construction of the exact point, since it is rather involved and does not lead to additional insight. Note a special case: a point p on l_12 gets transformed by M_a into a point on l_34 with the same ratio of distances to e_3 and e_4 as p has to e_1 and e_2. Thus M_a 'runs along the dotted lines' in Fig. 3a, the ruling lines of the k_0-surface. M_b runs along the other set of ruling lines.

Equivalences
We denote knowledge transition sequences from left to right by the actions that induce them; so S_a M_b is the knowledge obtained from the initial knowledge by first doing S_a, updating the knowledge, then M_b, and updating the knowledge. We use the notation S[X, Y] for sensory-conditioned actions: 'if S yields "broken", do X, else do Y'.

Fig. 3c. Fully sensory strategies compared in their equivalence.


Fig. 3b shows the two sensors S_a and S_b in the possible orders S_a S_b and S_b S_a, for all possible outcomes. The figure demonstrates that all 4 world states can be reached by doing S_a, then S_b; or vice versa. Thus the outcomes of those two sensors determine the state fully. It also demonstrates that the two sensors commute. In fact, commutation for all deterministic sensors was proved in [7]. Fig. 3c gives all sensory relations, and is therefore a 'folded' version of the Bayesian decision tree of Fig. 1b. Equivalent points in the tree are actually equivalent in this representation, so there is no need to prune the tree by predicate logic (as there was in Fig. 1b in the introduction). Also, the initial point can be chosen arbitrarily, and therefore Fig. 3c actually represents a family of trees, parametrized by p_a and p_b. Fig. 3c is somewhat more easily viewed by 'looking at the knowledge space from above'. The tetrahedron of knowledge states projects to a square, with the possible knowledge states at the vertices. We have drawn this in Fig. 3d. The diagram gives a clear image of all equivalence relations for the sensory actions, which were also given in [7]. For instance, S_a S_a = S_a, and S_a S_b = S_b S_a, and S_a[*, S_ab] = S_a[*, S_b], where * denotes arbitrary actions. In the figure, these equivalences are represented as commutative diagrams. Since these equivalences can also be expressed as statements in predicate logic, we find: predicate logic statements of equivalences correspond to closed contours in knowledge space, just as in the cups-and-ball example.
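The commutation of S_a and S_b, and the fact that their combined outcomes determine the state fully, can be checked from Eqs. (41)-(42); a sketch assuming numpy:

```python
import numpy as np

def normalize(k):
    s = k.sum()
    return k if s == 0 else k / s

def sense(S, outcome, k):
    # Eq. (33): component-wise product of outcome^T S with k, then normalize
    return normalize((outcome @ S) * k)

# Deterministic sensors of Eq. (42); rows are 'broken' and 'fixed'
Sa = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], float)
Sb = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], float)
broken, fixed = np.eye(2)

pa, pb = 0.7, 0.6
k0 = np.array([(1-pa)*(1-pb), (1-pa)*pb, pa*(1-pb), pa*pb])   # Eq. (41)

# S_a S_b = S_b S_a: for every pair of outcomes the order is irrelevant,
# and the resulting knowledge is a single world state (basis vector)
for oa in (broken, fixed):
    for ob in (broken, fixed):
        ab = sense(Sb, ob, sense(Sa, oa, k0))
        ba = sense(Sa, oa, sense(Sb, ob, k0))
        assert np.allclose(ab, ba)
        assert np.isclose(ab.sum(), 1.0) and np.count_nonzero(ab) == 1
```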

Fig. 3d. The equivalence of sensory strategies, viewed from 'above'. This shows the local topology of the knowledge space with respect to sensing (sensory transformation). One point (open circle) has 6 direct sensory neighbors (only one of which is a world state), and can reach 9 points (4 of which are world states, the vertices of the square).


Fig. 3e gives a similar diagram for the motive actions M_a and M_b, which are seen to commute always. Note that if we take the initial state equal to (1 0 0 0)^T, then this figure becomes the state transition diagram for world states which was sketched in Fig. 1a. We now see that the state diagram of the system is a special case of the knowledge transition diagram. Fig. 3f combines the two diagrams for sensing and motion into a complete diagram for all possible combinations of sensory and motive actions. This figure gives a representation of the complexity of the topology of the knowledge space for these 5 actions S_a, S_b, S_ab, M_a and M_b: typically there are 8 neighbors for each point, and in total 15 points that can be reached from the given point by any sequence of actions. If the initial knowledge point was on the k_0-surface, this figure simplifies a little, to Fig. 3g. We see that in that case we have, for instance, the equivalence relation S_a M_a = M_a (which is not true in the general case of Fig. 3f). The complete set of equivalence relations under this k_0-assumption may be found in [7].

Goal-directed knowledge updating and optimal strategy
The goal point is the knowledge state (0 0 0 1)^T (implying that the world state is (0 0 0 1)^T with probability 1). There are many ways to reach it, as Figs. 3f, g show. Some of these are equivalent by the equivalence relations. We assume that the initial knowledge is on the k_0-surface, so that we may use Fig. 3g. Eliminating the equivalent strategies and writing the remainder in the canonical form in which possible actions on a are done first yields 11 different strategies, which are indicated in Fig. 3h. We

Fig. 3e. The equivalence of motive strategies, viewed from 'above'. This shows the local topology of the knowledge space with respect to motion (motive transformations). One point (open circle) has 2 direct motive neighbors (none of which is generally a world state), and can reach 4 points (only one of which is generally a world state). The other world states are indicated for reference. In the special case that the original state was (1 0 0 0)^T, the diagram degenerates to the state transition diagram for world states under motive actions.


emphasize again that these diagrams are the 2-dimensional projection of a 3-dimensional subspace in the 4-dimensional knowledge space; a better representation of, for instance, the S_ab[S_a[M_a S_b[M_b], M_b], ] strategy is Fig. 3i (for k_0 in the k_0-surface). The proof that Fig. 3h is indeed exhaustive is given in [7]. The computation of the costs of strategies is now straightforward, and a minimization over all strategies yields the optimal strategy. The choice of the optimal strategy depends on T, R (the costs of sensing and repair, respectively), and on the estimated initial probabilities p_a and p_b. Detailed calculations may be found in [7]; Fig. 3j sketches the result. From these figures, we see that the general tendency is to test the complete circuit first (the strategies starting with S_ab) when the likelihood that both components are functional is large. As the sensory cost increases, it is seen that sensing surprisingly soon ceases to pay off: already if it is twice as expensive as repair, sensing is not done at all; the components are just replaced, broken or not.

5.3. Accurate position measurement

Let us now study a continuous example, requiring calculations in infinite-dimensional vector spaces. Suppose that we have an object of size L somewhere at a position x along a line (see Fig. 4a). Our available sensor consists of a probe S at a position y; our available motions M(z) consist of moving the sensor over a distance z. The goal is to determine the position of the object as accurately as possible, with the fewest number of sensory and/or motive actions, after we have probed it once during a scan.

Fig. 3f. The equivalence of sensory/motive strategies, viewed from 'above'. This shows the local topology of the knowledge space with respect to sensing and motion. One point (open circle) has 8 direct sensory/motive neighbors (only one of which is a world state), and can reach 15 points (4 of which are world states, the vertices of the square).


Vector space
Let us take the reference point for the object position to be the middle point of the object. The world states are characterized by the position of the object, x, and of the sensor, y. Since there is a double infinity of such states, we have to describe the problem in a doubly-infinite-dimensional vector space, spanned by basis vectors e_{x,y} (where this notation indicates that a basis vector is characterized by x and y).

Sensing
The matrix corresponding to the sensor S has the columns:

$$S e_{x,y} = \begin{cases} (1\ 0)^T & \text{if } -L/2 \le y - x \le L/2 \\ (0\ 1)^T & \text{elsewhere,} \end{cases} \tag{44}$$

where (1 0)^T is the sensory vector corresponding to the outcome 'present', and (0 1)^T indicates 'absent'.

Motion
We have, by definition, for the columns of the matrix representing M(z):

$$M(z)\, e_{x,y} = e_{x,y+z}. \tag{45}$$

Fig. 3g. The local topology of the knowledge space with respect to sensing and motion, for a point on the k_0-surface. One point (open circle) has 8 direct sensory/motive neighbors (only one of which is a world state), and can reach 11 points (4 of which are world states, the vertices of the square).


Abstraction
It is clear from the sensory equation Eq. (44) that only differences of coordinates are relevant to our problem. Therefore, we abstract by means of an abstraction α with a sensory 'introspection' space spanned by u_d, defined as the matrix α given by:

$$\alpha\, e_{x,y} = u_{y-x}. \tag{46}$$

Fig. 3h. The equivalence of sensory/motive strategies.


Thus u_d is the state in which the sensor is a distance d to the right of the middle of the object. We seek a motion μ(z) working on the abstracted representation, satisfying the abstraction condition of Eq. (39), which gives:

$$\mu(z)\, u_d = \mu(z)\, \alpha\, e_{x,x+d} = \alpha\, M(z)\, e_{x,x+d} = \alpha\, e_{x,x+d+z} = u_{d+z}. \tag{47}$$

This defines the matrix of μ(z) in the abstraction space; the existence of μ(z) shows that α is a proper abstraction. We desire a sensor σ in the abstract representation with a sensory outcome space isomorphic to that of S, so:

$$\sigma u_d = \sigma \alpha\, e_{x,x+d} = S e_{x,x+d} = \begin{cases} (1\ 0)^T & \text{if } -L/2 \le d \le L/2 \\ (0\ 1)^T & \text{elsewhere} \end{cases} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} U_L(d) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} (1 - U_L(d)). \tag{48}$$

Fig. 3i. The decision tree for the S_ab[S_a[M_a S_b[M_b], M_b], ] strategy.


Fig. 3j. The optimal strategies (the panels correspond to sensing costs T = 0.01R, T = 0.25R, T = 0.49R, T = 0.60R, and T = 2R, as functions of p_a and p_b).


Fig. 4. Position measurement example: (a) setup; (b) the function U_L(d); (c) and (d) the multiplication of U-functions, (c) z > 0, (d) z < 0.

Here U_l(δ) is defined as (see Fig. 4b):

$$U_l(\delta) = \begin{cases} 1 & \text{if } -l/2 \le \delta \le l/2 \\ 0 & \text{elsewhere.} \end{cases} \tag{49}$$
Calculation of the optimal strategy
We assume that we have probed the object once already. In the α-representation, we thus have as initial knowledge k_0 all states for which σ u_d = 'present' = (1 0)^T; using Eq. (33):

$$k_0 = \operatorname{normalize}\Big[\sum_{d=-\infty}^{\infty} u_d\, (1\ 0)\, \sigma u_d\Big] = \operatorname{normalize}\Big[\sum_{d=-\infty}^{\infty} u_d\, U_L(d)\Big] = \sum_{d=-\infty}^{\infty} u_d\, \tfrac{1}{L}\, U_L(d). \tag{50}$$

As a measure of the inaccuracy in position we take the maximum difference in coordinates of states compatible with the knowledge (the 'slack'). Thus at $k_0$ we have:

$$\text{inaccuracy}(k_0) = L. \qquad (51)$$

Now move the sensor by some distance $z$ and sense again; that is, execute the operator $\sigma' = \sigma \cdot \mu(z)$. If $|z| > L$, the sensor gives with certainty the result 'absent' and is therefore equivalent to the trivial sensor; hence we may assume $|z| < L$. We are interested in choosing the distance $z$ optimally, so that the highest accuracy results. We may consider $\sigma'$ a 'displaced sensor' because of the relation:

$$\sigma' u_d = \sigma\mu(z)\,u_d = \sigma u_{d+z}. \qquad (52)$$

Therefore, if we detect the object 'present' (observation $(1\ 0)^{\mathrm{T}}$) or 'absent' (observation $(0\ 1)^{\mathrm{T}}$), we have, in complete analogy with the calculation of $k_0$:

$$P_{\text{present}} = \mu(z)^{\mathrm{T}}\sigma^{\mathrm{T}}(1\ 0)^{\mathrm{T}} = \sum_{d=-\infty}^{\infty} u_d \cdot U_L(d-z) \quad\text{and}\quad P_{\text{absent}} = \mu(z)^{\mathrm{T}}\sigma^{\mathrm{T}}(0\ 1)^{\mathrm{T}} = \sum_{d=-\infty}^{\infty} u_d \cdot \bigl(1 - U_L(d-z)\bigr). \qquad (53)$$

If we sense presence of the object, we find $k_1$ using the updating equation Eq. (33) as the componentwise product of $P_{\text{present}}$ with $k_0$, normalized:

$$k_{1,\text{present}} = \mathrm{normalize}\left[\sum_{d=-\infty}^{\infty} u_d \cdot U_L(d-z)\,U_L(d)\right] = \sum_{d=-\infty}^{\infty} u_d \cdot \frac{1}{L-|z|}\,U_{L-|z|}\!\left(d - \frac{z}{2}\right) \qquad (54)$$

(by the straightforward multiplication of $U_l$-functions, see Fig. 4c, d), and

$$\text{inaccuracy}(k_{1,\text{present}}) = L - |z|. \qquad (55)$$

The expectation of the outcome 'present' equals the inner product of $P_{\text{present}}$ and $k_0$, which yields:

$$(P_{\text{present}}, k_0) = \sum_{d=-\infty}^{\infty} U_L(d-z)\cdot\frac{1}{L}\,U_L(d)\cdot(u_d, u_d) = 1 - \frac{|z|}{L}. \qquad (56)$$
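Both the product of $U_l$-functions in Eq. (54) and the inner product in Eq. (56) can be verified numerically. A sketch, assuming a fine grid of $d$-values (offset by half a step so interval endpoints never coincide with grid points; all names are ours):

```python
import numpy as np

# Checks of Eqs. (54)-(56) on a discrete grid: the componentwise product
# of coefficients U_L(d-z) * U_L(d) is the boxcar U_{L-|z|}(d - z/2);
# the grid step h stands in for the continuum measure.
h = 0.001
d = np.arange(-2.0, 2.0, h) + h / 2   # midpoint grid avoids edge ambiguity
L, z = 1.0, 0.4

def U(l, delta):
    """Indicator U_l(delta): 1 on [-l/2, l/2), 0 elsewhere (Eq. 49)."""
    return ((delta >= -l / 2) & (delta < l / 2)).astype(float)

lhs = U(L, d - z) * U(L, d)           # product in Eq. (54)
rhs = U(L - abs(z), d - z / 2)        # boxcar of width L-|z| centred at z/2
assert np.array_equal(lhs, rhs)

# Support width = remaining inaccuracy after 'present' (Eq. 55):
print(round(h * lhs.sum(), 3))        # L - |z| = 0.6

# Probability of the outcome 'present' (Eq. 56):
p_present = h * np.sum(U(L, d - z) * (1 / L) * U(L, d))
print(round(p_present, 3))            # 1 - |z|/L = 0.6
```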

After the detection of the absence of the object, we find $k_1$ as the componentwise product of $k_0$ and $P_{\text{absent}}$. The straightforward multiplication of the $U_l$-functions gives the following results, depending on whether we moved to the right ($z > 0$) or to the left ($z < 0$):

$$k_{1,\text{right,absent}} = \sum_{d=-\infty}^{\infty} u_d \cdot \frac{1}{|z|}\,U_{|z|}\!\left(d - \frac{z-L}{2}\right), \quad\text{and}\quad k_{1,\text{left,absent}} = \sum_{d=-\infty}^{\infty} u_d \cdot \frac{1}{|z|}\,U_{|z|}\!\left(d - \frac{z+L}{2}\right). \qquad (57)$$

For the two possibilities, we have:

$$\text{inaccuracy}(k_{1,\text{absent}}) = |z|, \quad\text{and}\quad (P_{\text{absent}}, k_0) = \frac{|z|}{L}. \qquad (58)$$

Therefore the total expected inaccuracy after the action $\sigma\cdot\mu(z)$ equals:

$$\text{inaccuracy}(k_1) = (P_{\text{present}}, k_0)\cdot\text{inaccuracy}(k_{1,\text{present}}) + (P_{\text{absent}}, k_0)\cdot\text{inaccuracy}(k_{1,\text{absent}}). \qquad (59)$$

By the Bayesian decision rule, we choose $z$ to minimize this inaccuracy. This yields $|z| = L/2$, and therefore the optimal choice is to shift the sensor over a distance $L/2$ from its previous position. We obtain an optimal inaccuracy equal to:

$$\text{inaccuracy}(k_{1,\text{optimal}}) = L/2. \qquad (60)$$


The actual $k_1$ depends on whether we moved to the right ($z > 0$) or to the left ($z < 0$), and whether we obtain the answer 'present' or 'absent':

$$k_{1,\text{optimal,right,present}} = k_{1,\text{optimal,left,absent}} = \sum_{d=-\infty}^{\infty} u_d \cdot \frac{2}{L}\,U_{L/2}\!\left(d - \frac{L}{4}\right), \qquad (61)$$

$$k_{1,\text{optimal,left,present}} = k_{1,\text{optimal,right,absent}} = \sum_{d=-\infty}^{\infty} u_d \cdot \frac{2}{L}\,U_{L/2}\!\left(d + \frac{L}{4}\right). \qquad (62)$$

Eq. (61) and Eq. (62) show that the sensory actions $\sigma\cdot\mu(L/2)$ and $\sigma\cdot\mu(-L/2)$ (corresponding to motion to the right and left, respectively) are equivalent: they map to the same linear subspaces. One can therefore choose to always move to the right, relative to the new median object position. The similar form of $k_1$ and $k_0$ implies that the situation now recurses, at the updated position $L/4$ or $-L/4$, with the initial inaccuracy reduced to $L/2$. The solution we found, taking a motion step of half the previous inaccuracy, is therefore the proper procedure at each sensing step. The direction of the motion depends on the previous outcome: if it was 'present' then move to the right, else move to the left. This is the continuous version of a binary search. We thus find: after $n$ actions, the position of an object of size $L$ can be known to an accuracy $2^{-n}L$.

Note that this example shows that calculation with infinite-dimensional vector spaces, which may at first seem daunting, is actually straightforward, since the coefficients of the basis vectors are naturally grouped into functions. In particular, the knowledge updating equation in Eq. (54), a componentwise product of vectors, is seen to be identical to the correlation of two $U_l$-functions.

6. Conclusion

We have given a new representation of knowledge in a geometrical framework. We presented the equations for knowledge updating on the basis of sensory and motive actions, and demonstrated their use with some examples. The integration of sensing and motion at this level is now reasonably well understood. The reader may have noticed that our representation is in many ways closely related to representations in control theory and physics (notably quantum mechanics); this is advantageous for the connection to practical implementation and for the identification of powerful mathematical techniques for analysis. We indicated, by example, some preliminary insights on the integration of other manipulations of knowledge, such as abstraction and reasoning. We believe that these transformations will eventually be obtained by certain well-defined operators applied to the geometrical representation. Finding these operators will ground the concepts of reasoning and abstraction in the physical representation, which is necessary in order to unify, in a consistent manner, the different ways to manipulate knowledge. This work is a precursor to intended work in which the understanding of knowledge manipulation is used to calculate and analyze goal-directed behavior of autonomous systems. We claim that the achievement of a goal in the world by an autonomous system is more correctly viewed as the consequence of achieving the internal goal of knowing that the world is in that goal state. The study of goal-directed knowledge acquisition is then important, and will permit optimization of behavior not previously possible. For instance, it is not always optimal for a system to first identify its state and then solve a planning problem to get to its goal; often these two problems can be interleaved with advantages in efficiency and robustness [1,2].
The solution of Rubik's Cube by humans is an example of such behavior: it is quite natural to decompose the problem into subproblems, each with their own associated sensing and motion. The total problem becomes a cascade of smaller, more manageable problems, which reduces the complexity. We believe that the linear decomposition techniques from representation theory [6] and the theory of semigroups [5] can be combined with our geometrical knowledge representation to perform such goal decompositions automatically and efficiently.


References

[1] D.P. Benjamin, A.J. Cameron, L. Dorst, M. Rosar and H.-L. Wu, Integrating perception with problem solving, SIGART 2 (4) (1991) 41-45.
[2] D.P. Benjamin, Reformulating path planning problems by task-preserving abstraction, Robotics and Autonomous Systems 9 (1992) 105-113 (this issue).
[3] L. Dorst, I. Mandhyan and K.I. Trovato, The geometrical representation of path planning problems, Robotics and Autonomous Systems 7 (1991) 181-195.
[4] R. Goldblatt, The Categorical Analysis of Logic (North-Holland, Amsterdam, 1984).
[5] G. Lallement, Semigroups and Combinatorial Applications (Wiley, New York, 1979).
[6] J.-P. Serre, Linear Representations of Finite Groups (Springer, Berlin, 1977).
[7] H.-L. Wu and L. Dorst, Knowledge acquisition through measurement and motion: Part 1: A model in sets and probabilities, Philips Internal Report, Briarcliff Manor, NY, 1991.