
Photogrammetria (PRS), 42 (1988) 245-270


Elsevier Science Publishers B.V., Amsterdam. Printed in The Netherlands

Artificial Intelligence in Photogrammetry

TAPANI SARJAKOSKI

Technical Research Centre of Finland, Vuorimiehentie 5, SF-02150 Espoo, Finland

(Received June 20, 1987; revised and accepted December 12, 1987)

ABSTRACT

Sarjakoski, T., 1988. Artificial intelligence in photogrammetry. Photogrammetria, 42: 245-270.

The scope of the discipline of artificial intelligence is reviewed and its relevance for the discipline of photogrammetry is analyzed. The most appropriate artificial-intelligence techniques are dealt with in more detail, especially the use of heuristics in algorithms, rule-based knowledge representation and programming, and object-oriented programming. An urgent need for expert-system technology is seen in the replacement of semiautomatic man-machine systems with fully automatic systems. In building expert systems, especially in the gathering of knowledge as rules, a system-analytical approach is considered a necessity. It is concluded that the key issue in the building of complex systems for photogrammetric analysis will be the structuring task, also when gathering the knowledge base for rule-based programming. In the future, it will be increasingly difficult to distinguish between the techniques and methods used in artificial intelligence and those used in more conventional system design and software engineering.

1. INTRODUCTION

Artificial intelligence has received a lot of publicity during recent years, most visibly since the early eighties, around 1983. Especially its application in expert systems has been advocated as causing a revolution in the use of computers. Research on artificial intelligence is mostly work on building intelligent computer-based systems for a variety of applications. The relationship to automation is obvious. In photogrammetry the need for more complete automation is a well-recognized desire. We want to have automatic instruments for data acquisition, automatic systems for data reduction and analysis, and automatic decision-making procedures on different levels. Thus, it is obvious that photogrammetrists have to be interested in what artificial intelligence as a discipline can offer to photogrammetry.

The knowledge photogrammetrists possess of artificial-intelligence techniques must be seen with respect to the other disciplines from which they pick tools into their kit. As an example, applied mathematics, especially numerical analysis and statistical analysis, is used daily by photogrammetrists. Photogrammetrists' interest in applying and further developing techniques related to applied mathematics is well known and appears even in the name of the third commission of ISPRS, "mathematical analysis of data". It may be concluded that photogrammetrists are methodologically interested in applied mathematics. Computer science and the information sciences in general have received considerably less space in the scientific literature of photogrammetry, compared to applied mathematics. This is somewhat surprising, considering that most of the tasks of photogrammetry are characterized by extensive information-processing phases. The relative youth of these disciplines is an obvious reason for the situation. A related reason can surely be found in the educational background of photogrammetrists: most of the active researchers in the field have a considerable amount of studies in applied mathematics, but far fewer have a comparable amount of studies in the information sciences. Similar observations apply to the discipline of artificial intelligence.

The primary purpose of this paper is to study artificial-intelligence techniques, to find out which of them are the most suitable for photogrammetric applications. The interest is in the methodology and in the basic principles. The goal is not only to get photogrammetrists interested in the techniques of artificial intelligence but also to make them aware of its close connection to computer science, software engineering and system analysis and design.

2. WHAT IS ARTIFICIAL INTELLIGENCE?

In the Handbook of Artificial Intelligence (Barr and Feigenbaum, 1981) artificial intelligence is described as "the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior - understanding language, learning, reasoning, solving problems, and so on". According to Winston (1984), artificial intelligence can be defined as "the study of ideas that enable computers to be intelligent". This definition remains vague, because the meaning of intelligence is not well understood or defined. Referring to Winston (1984) again, the goals of artificial intelligence can be defined as making computers more useful and also understanding the principles that make intelligence possible. In Rich (1983) artificial intelligence is defined as "a study of how to make computers do things at which, at the moment, people are better". At first glance, this definition sounds rather obscure from the point of view of intelligence. Comparison to the evolution of human skills reveals its relevance, however: a few hundred years ago skills like reading and writing were considered to require good intelligence, while today's modern societies presume everybody to be able to acquire these skills. More dramatically, in some tasks, such as arithmetic computations, computers, even hand calculators, clearly outperform humans. However, we do not call a calculator intelligent. On the other hand, we might have an "intelligent" computer terminal on our desk that has some additional processing capabilities, compared to some earlier models of computer terminals, but still is not really intelligent. Thus, we can see that intelligence is a moving target and we might prefer the definition given above: artificial intelligence is a study of how to make computers do things at which, at the moment, people are better.

What kind of subjects does the field of artificial intelligence deal with today? The list of the topics of the sessions in the Ninth International Joint Conference on Artificial Intelligence (IJCAI, 1985) gives a general view:
- artificial intelligence and education
- artificial-intelligence architectures
- automated reasoning
- automatic programming
- cognitive modelling
- expert systems
- knowledge representation
- learning and acquisition
- logic programming
- natural language
- perception
- philosophical foundations
- planning and search
- robotics
- theorem proving

The list above can only be considered to describe the most current research topics; it is in no way a general taxonomy of the subfields of artificial intelligence. It is given to demonstrate the broad spectrum of the problems artificial intelligence is dealing with. Some of the session topics mostly relate to the methodology of artificial intelligence. These include automated reasoning, expert systems, knowledge representation, logic programming, planning and search, theorem proving, and learning and acquisition. In some others the application area is stressed, as the titles natural language processing, perception and robotics indicate.
3. ARTIFICIAL INTELLIGENCE IN PHOTOGRAMMETRY

In this paper we are primarily interested in how artificial-intelligence techniques can be utilized in photogrammetry. What then is photogrammetry? In the scope of the International Society for Photogrammetry and Remote Sensing (ISPRS), the titles of the commissions give an overview of the meaning of the word photogrammetry:
1. primary data acquisition
2. instrumentation for data reduction and analysis
3. mathematical analysis of data
4. cartographic and data bank applications of photogrammetry and remote sensing
5. other non-cartographic applications of photogrammetry and remote sensing
6. economic, professional and educational aspects of photogrammetry and remote sensing
7. interpretation of photographic and remote sensing data.

Cross-comparison between the list above and the topics of interest in artificial intelligence reveals the impossibility of analyzing the role of the artificial-intelligence discipline, as a whole, for the total field of photogrammetry: because of the broad spectrum of artificial-intelligence topics, artificial intelligence may be imagined to be applied everywhere. In the approach taken below, the emphasis is on studying the applicability of some artificial-intelligence techniques in some photogrammetric tasks. Most of the examples relate to the analysis of data by means of analytical photogrammetry. By analogy, the relevance of the techniques to other tasks can be recognized easily.

Some relevant topics will be ignored. For example, in the design of man-machine systems for photogrammetric purposes, input of commands or data in natural language, spoken or otherwise given, is an important aspect in making a system more ergonomic and efficient. However, the research and development work concerning these kinds of problems does not belong to the specialities a photogrammetrist is assumed to possess. It is rather considered here that whenever such subsystems are incorporated into photogrammetric man-machine systems, they must be designed by specialists in these particular areas. Topics like artificial intelligence in education will also have importance in the future, for example in the training of people to carry out certain photogrammetric tasks. They cannot be treated here, due to the lack of space.

To summarize, in this paper the emphasis is on analyzing the most suitable techniques that can be adapted from artificial intelligence to support the analysis and design of systems for photogrammetric applications. The approach mostly stresses evolution in system design.

4. ARTIFICIAL-INTELLIGENCE TECHNIQUES

To a great extent, artificial intelligence is work on building intelligent systems for a variety of applications. In this work many techniques lack generality and can only be used for the particular problem they were originally planned for. But techniques having general applicability have also emerged. These include:
- state-space representation of a problem
- heuristic search techniques
- techniques of representing knowledge
- representation and treatment of uncertainty
- general manipulation of symbols
- artificial-intelligence programming paradigms
- special hardware for artificial-intelligence programming
- expert-system technology.
4.1 Knowledge representation

In the study of information systems the term knowledge is used to describe one's understanding of reality (Burch et al., 1979). In artificial-intelligence literature knowledge representation usually refers to the task of modelling real-world knowledge in terms of computer data structures. In expert-system terminology we often say that knowledge consists of facts, beliefs and (heuristic) rules. It is essential that the system has a formalism for representing the knowledge and also means for manipulating and utilizing the knowledge in the process of solving a specified task. For this purpose, several schemata have evolved in the artificial-intelligence discipline:
- state-space representation and search
- logic, predicate calculus
- procedural representation
- semantic networks
- IF-THEN rules, production systems
- frames, knowledge objects.

These are described in detail for example in (Barr and Feigenbaum, 1981). Comparison between the methods would also reveal the existing relationships between them. IF-THEN rules are clearly based on logic; a more detailed analysis can be found in (Nilsson, 1980), where it is shown in particular how different types of rule-based reasoning can be interpreted as special cases of logic-based reasoning. In (Kowalski, 1979) and (Clocksin and Mellish, 1981) it is shown how logic is used as a foundation for a programming language, Prolog. Also, procedural knowledge is used in frames or in knowledge objects, as indicated in (Barr and Feigenbaum, 1981).

It is considered here that state-space representation and search deserves a detailed study as a fundamental tool for analyzing the use of heuristics in algorithms. Similarly, IF-THEN rules are at the core of the techniques used for building expert systems. Their usage will be described under the title rule-based programming 1. The use of frames or knowledge objects is reviewed under the title object-oriented programming.

1 The term rule-based knowledge representation is used more often. The wording chosen here is meant to stress that the usage of rules is also programming.


4.2 State-space representation and search

4.2.1 General description
State-space representation is one of the artificial-intelligence techniques used in tasks requiring problem solving. It assumes an initial state for the problem to be solved, a definition of the properties or characteristics of a state or states in which the problem can be considered solved, and a set of operators to move from one state to another. The solution is found, i.e., the goal state is reached, by applying the operators repeatedly. We can also say that the goal is reached by searching through the state space.

One of the important characteristics of the state-space representation is that it forces us to state explicitly the purpose of the task, i.e. the definition of the goal state. Sometimes this illuminates the heuristic nature of the task: there can be fundamental difficulties in defining the goal state. For example, in (Sarjakoski, 1986a) the detection of gross errors is expressed as a search problem, with a crisp definition of a goal state. In a more recent paper (Sarjakoski, 1986b), however, alternative approaches for defining the goal state have been investigated, showing clearly the possibilities of using extra, heuristic knowledge in the definition of the goal state.

Search is another important concept in problem solving based on state-space representation. A naive algorithm would exhaustively search through the complete search space. This space may, however, have such a large number of states that it is not possible in practice to carry out an exhaustive search. In computer-science literature this problem is known as combinatorial explosion. It would be ideal if an efficient algorithmic solution for the search problem could be found. The analysis of computer algorithms has shown, however, that many search problems belong to the class of so-called NP-complete problems (nondeterministic polynomial time complete problems). Characteristic of this class of problems is that it is extremely unlikely that algorithms could be developed for their solution that work better than O(e^n) in the worst case. In artificial intelligence, and also in the more general discipline of analyzing computer algorithms, it has been found practical to use heuristic algorithms to solve such problems. Heuristic algorithms will be analyzed in more detail later in the text.
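As an illustration of these concepts, the following is a minimal sketch in Python of a generic best-first search driven by a user-supplied heuristic. The problem interface (the hypothetical functions successors, is_goal and h) is an assumption introduced here only for illustration, not part of any particular photogrammetric system:

    import heapq

    def best_first_search(start, successors, is_goal, h):
        # Generic heuristic (best-first) search over a state space.
        # start:      initial state
        # successors: state -> iterable of successor states (the operators)
        # is_goal:    state -> True if the goal-state criteria are satisfied
        # h:          state -> heuristic value guiding the order of expansion
        counter = 0                      # tie-breaker; states need not be comparable
        frontier = [(h(start), counter, start)]
        seen = {start}
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state             # a goal state has been reached
            for nxt in successors(state):
                if nxt not in seen:      # avoid revisiting states
                    seen.add(nxt)
                    counter += 1
                    heapq.heappush(frontier, (h(nxt), counter, nxt))
        return None                      # state space exhausted without a goal

    # toy usage: reach 11 from 0 with operators +3 and +5, guided by distance
    goal = 11
    print(best_first_search(0,
                            lambda s: (s + 3, s + 5),
                            lambda s: s == goal,
                            lambda s: abs(goal - s)))   # -> 11

Replacing the heuristic changes which part of the state space is explored first; with a poor heuristic the search degenerates towards the exhaustive case discussed above.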

4.2.2 Application in photogrammetry
Using the state-space representation in the solution of the problem of detecting gross errors has already been mentioned above. First of all, it separates, in a natural way, the question of the goal (i.e., what the gross errors are) from the question of how the goal can be reached (i.e., how the gross errors can be found). The analysis of the characteristics of the goal state reveals that very often some extra knowledge is required in the definition of the goal state. Even if we are able to reach a consensus on the definition of the goal state, the analysis of the search may reveal that it is hopeless to look for an algorithm guaranteed to reach the goal state in a reasonable time, due to the combinatorial explosion in the size of the state space. Thus, we are forced to use some heuristic algorithm. As a consequence, the optimality of the solution may not be guaranteed, i.e., it may not be guaranteed that the algorithm reaches the goal state.

The two sources of uncertainty (the uncertainty due to the use of heuristic knowledge in the definition of the goal state and the uncertainty due to the use of heuristic search algorithms) have to be considered together, too. Because the definition of the goal state may be based on heuristics, it is obvious that the optimality of the goal state is more or less questionable. In other words, some other states could also be interpreted to satisfy the requirements of a goal state, although perhaps with a slightly lesser likelihood. In this respect, the property of a search algorithm of being guaranteed to find the goal state loses its importance. We may take a pragmatic standpoint when evaluating a heuristic algorithm: an algorithm is useful if it finds a state that satisfies the formal requirements of a goal state with reasonable accuracy and uses no more computer resources than we can tolerate. By analogy, in adjustment computations, and in numerical analysis in general, it is commonly accepted that there is usually no need to compute a solution to a problem with considerably higher accuracy than the inherent randomness of that solution, caused by the randomness of the input data, would justify.

For detecting gross errors, several heuristic search methods have been introduced. A detailed study is given in (Sarjakoski, 1988). Here we can only summarize that the task is surprisingly complex and tedious. One reason for this is the often large amount of data to be treated. As a consequence, the size of the search space is not the only problem: often the amount of computation required for one operator (i.e., to move from one state to another) is also large. Thus, some additional approximative methods and simplifications are required.

The problem of detecting gross errors is a good example of how artificial-intelligence techniques can be applied in analytical photogrammetry. The introduction of some methods from the artificial-intelligence discipline clearly helps in the analysis and in the structuring of the problem. There are also other problems where the state-space representation and the associated heuristic search techniques are appropriate. These include the selection of additional parameters in a self-calibrating block adjustment, which can be treated with techniques earlier used in stepwise regression analysis (Sarjakoski, 1984c) but which can also be approached from the direction of artificial-intelligence techniques. Many others can surely be found as well.

4.3 Heuristics in algorithms

4.3.1 What is an algorithm?
In computer science, an algorithm is known as a precise method usable by a computer for the solution of a problem.

In addition, it is required that an algorithm solves the problem in a finite amount of time or indicates that no solution exists (Horowitz and Sahni, 1978). Algorithms are frequently encountered in photogrammetry. Sometimes algorithms for the problems to be solved are not known, by photogrammetrists or in general, but the problems still have to be solved somehow. It is studied in the following how heuristics has been used, and can be used, in certain tasks relevant to photogrammetry.

4.3.2 Minimization of bandwidth as an example
Minimization of the bandwidth of the coefficient matrix in a reduced normal equation system has been an important subissue in bundle block adjustment. The width of the band of non-zero elements largely determines the storage and time requirements when the normal equation system is solved. A review of the history of the development of methods for this minimization demonstrates effectively how different kinds of knowledge and heuristics can be used in the development of algorithms.

The minimization of the bandwidth can be treated as an ordering problem: the photographs have to be ordered, the order being indicated with a number from 1 to n (n is the total number of photographs in the block), such that when the photographs are entered into the normal equation system in the corresponding order, the bandwidth of the coefficient matrix is minimal among all the possible orderings.

Manual ordering of photographs. In the early block adjustment programs, users were responsible for minimizing the bandwidth. Usually the desired ordering was indicated with a corresponding order in the input data. The ordering was based on the simple principle that sequential numbering in the direction perpendicular to the principal direction of the block usually produces the smallest bandwidth. In regular blocks this was straightforward (Brown, 1976); irregular blocks required some experience from the user to assure the optimal ordering. The following matters are essential in the manual procedure: First of all, human expertise is relied on in the solution of the problem. Secondly, the human needs some knowledge of the physical layout or configuration of the block to be able to solve the problem. Thirdly, the approach offers full generality, because the problem-solving task is done by a human. This feature will be further illustrated in the context of the other approaches studied in the following.

Semiautomatic ordering. By semiautomatic ordering is understood a procedure where some control data are used to guide an ordering algorithm. These kinds of approaches have been used in the block adjustment programs PAT-M (Schwidefsky and Ackermann, 1976) and SPACE-M (Blais, 1977) 2. In PAT-M additional input data is given to the block adjustment system, to inform it about subsequent gross-block groups. SPACE-M, on the other hand, relies on the use of a specific initial ordering of the models. In the reports on each of these two programs, special consideration is given to irregular blocks or blocks with double coverage. General treatment of such cases is considered to be difficult and to require complex programming, to avoid loss of generality. The semiautomatic ordering methods are essentially based on the same level of abstraction as the manual methods: the problem is expressed and solved in terms of the original problem, i.e. such concepts as gross blocks, adjacent photographs, etc. are used.
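The ordering problem defined above is easy to state in code. The following minimal Python sketch assumes a hypothetical adjacency structure recording which photographs share tie points; it merely evaluates the bandwidth produced by a given ordering and illustrates why perpendicular numbering pays off:

    def bandwidth(order, adjacency):
        # Half-bandwidth of the coefficient matrix induced by an ordering.
        # order:     list of photograph identifiers in processing order
        # adjacency: dict mapping a photograph to the set of photographs
        #            it is connected with (shares points with)
        pos = {photo: i for i, photo in enumerate(order)}
        return max((abs(pos[a] - pos[b])
                    for a in adjacency for b in adjacency[a]), default=0)

    # a small 2 x 3 block: two strips (1-2-3 and 4-5-6) with cross-strip ties
    adj = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6},
           4: {1, 5}, 5: {2, 4, 6}, 6: {3, 5}}
    print(bandwidth([1, 2, 3, 4, 5, 6], adj))  # strip-wise numbering    -> 3
    print(bandwidth([1, 4, 2, 5, 3, 6], adj))  # perpendicular numbering -> 2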

Automatic ordering. Automatic ordering methods represent the latest and most mature stage of development. It is essential to note that all the block adjustment programs known to the author that use fully automatic ordering methods utilize algorithms commonly known in numerical analysis under the title of sparse matrix techniques (George and Liu, 1981). For a block adjustment program these algorithms for minimizing a bandwidth may be considered black-box routines: a description of the sparsity structure of the coefficient matrix is fed in and the ordering information is obtained as output. Once the operationality of these algorithms has been verified in the context of block adjustment programs, a more detailed analysis should be of relatively small interest for a photogrammetrist. They supply, however, excellent material for studying the use of heuristics in algorithms.

Change in the level of abstraction. One of the most distinct features of the general algorithms for bandwidth minimization is that a symbolic representation of the sparse matrix is given as input data for the algorithm. Usually, adjacency matrices or adjacency lists are used, which are among the techniques used in graph theory to describe the structure of a graph. The algorithms for minimizing a bandwidth are often claimed to be based on graph theory, but, as stated in (George and Liu, 1981): "Although rather few results from graph theory have found direct application to the analysis of sparse matrix computations, the notation and concepts are convenient and helpful in describing algorithms and identifying or characterizing matrix structure. Nevertheless, it is easy to become over-committed to the use of graph theory in such analyses, and the result is often to obscure some basically simple ideas in exchange for notational elegance".

2 These two programs are for independent model adjustment. Thus we are concerned with the numbering of models, not photographs.


Evolution of the bandwidth minimizing algorithms. The evolution reviewed in the following starts from a bandwidth minimization algorithm known as the Cuthill-McKee algorithm, originally published in (Cuthill and McKee, 1969). Its purpose is to reduce the bandwidth of the matrix and also the profile. Explained in the terminology of graph theory, the Cuthill-McKee algorithm proceeds by first selecting a starting node, which gets 1 as its number. Then, repeatedly processing the numbered nodes in the just-determined order, all the unnumbered neighbors are numbered sequentially, in increasing order of degree. In addition, instead of using a single starting node, the original Cuthill-McKee algorithm selects a set of starting nodes, generates a numbering starting from each of them, and finally uses the numbering producing the smallest bandwidth or profile. Later, it was found (George, 1971) and proven (Liu and Sherman, 1976) that reversing the order produced by the Cuthill-McKee algorithm often reduces the profile significantly and never increases it. The algorithm is then called the 'reverse Cuthill-McKee' algorithm.

Work on the Cuthill-McKee algorithm demonstrates how heuristics can be used in the design of algorithms. The following points deserve special mention:
(1) The motivation for developing a bandwidth minimization algorithm is pragmatic: by ordering the unknowns properly in a large system of linear equations with a sparse coefficient matrix, the computational work can be reduced significantly.
(2) The problem can be specified precisely, i.e. there is a precise mathematical definition of an ordering producing a minimal bandwidth or profile.
(3) The problem has later been proved to belong to the class of NP-complete problems. As a consequence, based on the current belief in computer science, it is extremely unlikely that an algorithm solving the problem precisely in polynomial time, with respect to the size of the problem, could be developed.
(4) Cuthill-McKee is a kind of semi-algorithm. It is not guaranteed to reach the goal precisely or, strictly speaking, to solve the problem or produce an optimal ordering. However, it is guaranteed to stop in a finite time and to produce some ordering. The ordering it produces has turned out to be usually so good that it reduces computational costs significantly. In addition, the algorithm has been proved to produce an optimal numbering for problems having certain properties (Liu and Sherman, 1976). In some other cases, reasonable upper bounds can be guaranteed for the bandwidth it produces. Also, it has been proven that there exists a linear-time implementation of the reverse Cuthill-McKee algorithm (Chan and George, 1980).
(5) The bandwidth minimization problem is only an auxiliary problem, to minimize the computational cost in the numerical solution phase. In terms of computation time 3, the minimum bandwidth is of no value if the total computation time is increased due to the time used for the minimization of the bandwidth. Thus, a good suboptimal solution is totally sufficient if it can be found in a short time, as is the case with the Cuthill-McKee algorithm.

Several modifications and improvements to the Cuthill-McKee algorithm have been suggested. In George (1971) the reversing of the ordering is suggested, to produce a smaller profile. Gibbs et al. (1976) introduce an algorithm (later called the GPS algorithm) with several modifications. The main contribution is the development of an efficient method for finding a starting node. Also, a more general numbering schema is introduced, which is able to produce a smaller bandwidth than the original algorithm. A modification of the GPS algorithm is introduced in Benciolini and Mussio (1982) to facilitate the production of a smaller bandwidth for certain very irregular graphs. The algorithm has been applied to photogrammetric and geodetic adjustment problems. In Sarjakoski (1984a) another algorithm is developed, based on the GPS algorithm. There the main objective is to simplify the algorithm, for easier understanding and implementation, without sacrificing generality. The algorithm has been tested with coefficient matrices of the reduced normal equations of bundle block adjustment (Sarjakoski, 1984b). In Snay (1976) yet another algorithm has been developed. Its origin is in the Cuthill-McKee algorithm and its purpose is to reduce the profile, especially for coefficient matrices encountered in geodetic network adjustments. Interestingly enough, the problem is formalized as a queuing problem.

The review above exemplifies that there seems to be no end to the process of improving the capabilities of a heuristic algorithm. Also, researchers representing different application areas can make improvements to an algorithm, perhaps because of their different points of view, and still maintain generality. The heuristics they use is related to the high-level representational formalism, not directly to the application area.

3 Considerations about the usage of storage are not discussed here.
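A compact sketch of the reverse Cuthill-McKee idea in Python, assuming a connected graph given as an adjacency dict and a caller-supplied starting node (the refined starting-node selection of the GPS algorithm is omitted):

    from collections import deque

    def reverse_cuthill_mckee(adjacency, start):
        # adjacency: dict mapping a node to the set of its neighbors
        # start:     starting node; its choice strongly affects the result
        order = [start]
        numbered = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            # number the unnumbered neighbors in increasing order of degree
            for nxt in sorted(adjacency[node] - numbered,
                              key=lambda n: len(adjacency[n])):
                numbered.add(nxt)
                order.append(nxt)
                queue.append(nxt)
        order.reverse()          # the reversal never increases the profile
        return order

    # the 2 x 3 block used earlier, starting from a corner node
    adj = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6},
           4: {1, 5}, 5: {2, 4, 6}, 6: {3, 5}}
    print(reverse_cuthill_mckee(adj, 1))   # -> [6, 3, 5, 2, 4, 1]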

About the generality. The power of general-purpose algorithms lies in their generality. They may be applied in any area where the appropriate transformation in abstraction can be made. In generality also lies the weakness of these algorithms: the transformation is not always possible. The treatment of systems with groups of unknowns as the basic units already induces some problems. In bundle block adjustment, as an example, each object point with its three unknown coordinates and each photograph with its six unknown coordinates constitute such basic units. In the ordering of the unknowns, it is essential to keep the unknowns within each of the basic units next to each other. Application of general bandwidth minimization algorithms offers no direct means for this if each unknown is treated individually. On the other hand, if each basic unit (i.e. an object point or a photograph) is treated as a node in the corresponding connection graph, problems emerge because of the different number of unknowns associated with each node. When the algorithm is applied to the reduced normal equation system with photographs only, this problem is circumvented because of the uniformity (with respect to the number of unknowns in each basic unit) of the nodes involved. This approach is fully applicable in conventional cases like bundle block adjustment without auxiliary data, where it is natural to carry out the reduction of the normal equations and where the remaining system is known to have a small bandwidth when ordered properly. Some other applications, like combined adjustment, resist this kind of approach, due to the more general structure of the matrix. In Kruck (1982) a method is proposed for ordering object points and photographs in combined adjustment. There the objective is to minimize the profile of the full coefficient matrix of the normal equations. The problem has been solved on the application level, without resorting to the general algorithms. This demonstrates how methods working on a low abstraction level, i.e. close to the level of the application, may be extended to deal with more complex cases. To achieve the same with the general algorithms, the formalism would have to be extended to represent the required additional information and the algorithm would have to be modified to take this information into consideration.

4.3.3 Heuristics in algorithms, conclusions
The study above has exemplified the use of heuristics in algorithms. It brings out that heuristics can be used efficiently in the solution of intractable problems. Heuristics can be used either on a high level of abstraction or on the application level. The use of a high level of abstraction promotes the generality of an algorithm, while a low level of abstraction allows full utilization of application-dependent knowledge.

Algorithms where heuristics has been used to facilitate the solution of exactly defined tasks often have the property that they cannot be guaranteed to find the optimum solution. In the example above this hardly reduced the utility of the algorithm, because of the supporting nature of the task. Heuristic knowledge can also be used in the definition of the goal of an algorithm. In algorithms for detecting gross errors this is common, because we have not reached a consensus about the definition of gross errors. It is obvious that the drawbacks of using a suboptimal solution in these circumstances are small.

What should photogrammetry as a discipline learn about the usage of heuristics in algorithms? First, it is useful to notice that there is a class of NP-complete problems for which even the professionals in that speciality, computer scientists, are unable to find efficient solution algorithms. A problem for which a photogrammetrist attempts to find a general algorithmic solution may belong to that class, and ignorance of this fact may result in months or years of wasted research effort. Secondly, the use of heuristics often produces algorithms that are fully adequate for practical purposes. Thirdly, the study of heuristic algorithms is related not only to the discipline of artificial intelligence but also to computer science in general. This can easily be discovered by reviewing fundamental textbooks on computer algorithms, like (Horowitz and Sahni, 1978).

4.4 Rule-based programming

Rule-based programming has its origin in the so-called production system model. It has the following components and structure (Fig. 1):
(1) knowledge base: IF-condition-THEN-action rules
(2) working memory: contains the description of the current state of the problem
(3) inference engine: controls the application of the rules.

The inference engine searches the contents of the working memory and finds the rules for which the condition parts are fulfilled. If there exists a matching rule, the working memory is updated according to the action part of the rule. If a conflict occurs in the form of simultaneous matching of more than one rule, the inference engine has to decide which rule to apply. Several conflict resolution strategies have been developed for this purpose (Winston, 1984). The application of the rules is continued repeatedly until no rules match.

It is useful to distinguish between two types of rules:
(1) IF condition THEN action
(2) IF condition THEN conclusion.

Obviously, the second type of rule can be rewritten in the form of the first: IF condition THEN conclude. The characteristics of the task we are dealing with determine which kind of interpretation of the rules is more natural. The task also determines the most suitable direction of inference. The original production-system approach is situation-driven (data-driven, event-driven). The direction of inference is there said to be based on forward chaining.

Fig. 1. Structure of a production system.

The essential point is that, following that approach, a rule can be applied when a match occurs between the contents of the working memory and the condition part of the rule. An alternative approach is called backward chaining. It is best suited for reasoning tasks where the rules are of the second type. There the reasoning is started from a desired conclusion (hypothesis), whereafter rules are applied such that the conclusion is on the right-hand side of the rule. The reasoning is then continued until a fact is found that terminates the chain of reasoning. Backward-chaining reasoning is especially suitable for diagnosis tasks, as the development of the so-called MYCIN system has demonstrated (Buchanan and Shortliffe, 1985). A good presentation of different reasoning techniques can be found in Nilsson (1980).
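A minimal forward-chaining production system in the spirit of Fig. 1 can be sketched in a few lines of Python. The facts and rules below are hypothetical examples invented for illustration, and the conflict resolution strategy is simply 'the first matching rule fires':

    # knowledge base: IF-condition-THEN-action rules over a working memory of facts
    rules = [
        (lambda wm: "residual large" in wm and "point near block edge" not in wm,
         "suspect gross error"),
        (lambda wm: "suspect gross error" in wm,
         "remeasure point"),
    ]

    def forward_chain(facts, rules):
        # apply rules repeatedly until no rule changes the working memory
        wm = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, action in rules:    # conflict resolution: first match
                if condition(wm) and action not in wm:
                    wm.add(action)             # the action updates working memory
                    changed = True
                    break
        return wm

    print(forward_chain({"residual large"}, rules))
    # -> {'residual large', 'suspect gross error', 'remeasure point'} (in some order)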

4.5 Object-oriented programming

Object-oriented 4 programming is one of the recent programming paradigms that have been proposed for making the development and maintenance of software systems more efficient and for increasing the quality of software. Object-oriented programming is often discussed in artificial-intelligence literature, where it is usually related to the concepts of frames and procedural attachment in frames. One of the early systems for object-oriented programming, FLAVORS, is implemented as an extension to the LISP language, thus strengthening the association with the artificial-intelligence discipline. On the other hand, many of the concepts essential in object-oriented programming were introduced earlier in such languages as SIMULA, which was designed primarily for computer-based simulation purposes. Later, SMALLTALK-80 was introduced as a language solely devoted to object-oriented programming. The ADA language contains some features supporting object-oriented programming. To summarize, object-oriented programming is treated here not only because of its connections to the artificial-intelligence discipline but also because of its usefulness in systems design in general.

4 Sometimes also called object-centered programming, to emphasize the central role of objects in the approach.

4.5.1 Fundamental concepts
In object-oriented programming, classes of objects are established. Each object class has a number of attributes, to allow storage of pertinent information, and a number of operations, to allow the manipulation of an object and its attributes. In 'truly' object-oriented programming systems such as SMALLTALK-80 and FLAVORS the classes are organized in a hierarchical manner, such that a subclass can inherit attributes and operations from a class on a higher level in the hierarchy.

In truly object-oriented programming systems the instantiations of individual objects are created dynamically at run time. They also use dynamic binding of operations (or functions, procedures), contrary to the static binding used in conventional procedural languages and also in ADA. In the following, the usage of 'truly' object-oriented systems is assumed, because of the importance of the concepts of hierarchy and inheritance.

One of the main advantages of object-oriented programming is that the encapsulation of the data structures of each object and the associated operations is natural and straightforward. The details of implementation are hidden from the outside world. These characteristics facilitate the maintenance of the system when new types of objects have to be added. A good review of object-oriented programming is given in Cox (1986). It also introduces Objective C, an extension of the C language for object-oriented programming. Another presentation is given in Booch (1986), giving an ADA perspective on object-oriented programming. There the close connection between object-oriented programming and some well-known system analysis and design methods, like Jackson structured development (Jackson, 1983), is pointed out. Each of these two studies stresses object-oriented programming as a tool for building reusable software components.
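The following minimal Python sketch illustrates these concepts of hierarchy, inheritance and dynamic binding; the class and method names are invented here only for illustration:

    class Instrument:                      # class on a higher level of the hierarchy
        def __init__(self, name):
            self.name = name               # attribute inherited by all subclasses
        def describe(self):                # operation inherited by all subclasses
            return f"{self.name}: {self.measure()}"

    class AnalyticalPlotter(Instrument):   # subclass inheriting from Instrument
        def measure(self):
            return "stereoscopic mensuration by an operator"

    class DigitalCorrelator(Instrument):
        def measure(self):
            return "automatic image matching"

    # dynamic binding: the same call site invokes a different operation,
    # decided at run time by the class of each object
    for instrument in (AnalyticalPlotter("AP"), DigitalCorrelator("DC")):
        print(instrument.describe())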

4.5.2 An example from photogrammetry
Combined adjustment of observations of different types offers a good example of how object-oriented programming can be applied in photogrammetric applications. In the adjustment, there are two main classes of objects: the class of observations has as its subclasses different kinds of observations, e.g. photogrammetric image observations, control point observations and distance observations; the class of groups of unknowns has as its subclasses photographs and points. For each subclass of observations, operations have to be defined, for instance for computing the 'estimated' values of the observations, the partial derivatives with respect to the unknowns, displaying the data, etc. Once this has been done, the manipulation of the objects on the main level is streamlined, because the special characteristics of the objects are hidden.
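A sketch of how such a class hierarchy might look in Python; the method names (estimated_value, residual) and the distance-observation example are assumptions introduced here for illustration, not a prescription of actual adjustment software:

    import math

    class Observation:                         # main class of all observation types
        def __init__(self, value):
            self.value = value                 # the measured value
        def residual(self, unknowns):
            # common operation, identical for every subclass
            return self.estimated_value(unknowns) - self.value

    class DistanceObservation(Observation):    # one subclass of observations
        def __init__(self, value, i, j):
            super().__init__(value)
            self.i, self.j = i, j              # identifiers of the two points
        def estimated_value(self, unknowns):
            # distance computed from the current point coordinates
            (xi, yi), (xj, yj) = unknowns[self.i], unknowns[self.j]
            return math.hypot(xj - xi, yj - yi)

    # on the main level the special characteristics of the objects are hidden
    observations = [DistanceObservation(5.01, "P1", "P2")]
    coordinates = {"P1": (0.0, 0.0), "P2": (3.0, 4.0)}
    print([obs.residual(coordinates) for obs in observations])  # approx. [-0.01]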

5. EXPERT SYSTEMS

Expert systems have emerged as a part of applied research on artificial intelligence. In this section the applicability of expert-system technology to photogrammetry is studied.

5.1 What is an expert system?

There are several similar, but not identical, definitions or statements characterizing the meaning of the words "expert system". Referring to some of them, an expert system is:

- A computer system that achieves high levels of performance in task areas that, for human beings, require years of education and training (Hayes-Roth et al., 1983).
- A program that handles real-world, complex problems requiring an expert's interpretation, and solves these problems using a computer model of expert reasoning, reaching the same conclusions that the human expert would reach if faced with a comparable problem (Weiss and Kulikowski, 1984).
- A program that solves problems that are ordinarily solved by human experts (Forgy, 1985).
- A computer program that performs intelligent tasks currently performed by highly skilled people (Fenves, 1984).
- A knowledge-intensive system solving problems which usually require human expertise (Coe, 1985).
- A computer system that uses a knowledge base and an inference engine to find solutions in particular areas of expertise (Pau, 1986).

It is assumed that an expert system is based on the use of computers. The definitions above do not mention, however, how an expert system has to be implemented, but rather only how it has to perform. In this presentation, the emphasis is on studying methods and techniques necessary for implementing such systems to be applied in photogrammetry. In that respect, the definition from Coe (1985) is the most appropriate: an expert system is a knowledge-intensive system solving problems which usually require human expertise.

What then is expert-system technology? An answer can be found by investigating the working practices used in the implementation of an expert system. Usually the goal is to transfer human expertise into the form of a computer program. The task consists of team work, where experts and a knowledge engineer work together. The responsibility of the knowledge engineer is to reach such an understanding of the domain that he is able to generate a computer program that imitates an expert in the decision-making process as well as possible. The implementation phase of an expert system is highly incremental and interactive: a system is built gradually and the functionality of the system is tested immediately. This is the so-called prototype approach. The prototype approach requires suitable tools, because modifications must be easy to carry out.

In Hayes-Roth (1985) it is argued that systems in which the expert knowledge is represented as rules (rule-based systems, RBS) "constitute the best currently available means for codifying the problem solving know-how of human experts". He also says that "experts tend to express most of their problem solving techniques in terms of a set of situation-action rules, and this suggests that rule-based systems should be the method of choice for building knowledge-intensive expert systems". In (Buchanan and Shortliffe, 1985) it is pointed out that rule-based knowledge representation was used in Babylonia as early as about 650 B.C. for governing everyday affairs.


Hayes-Roth says further that "today's RBS technology provides the first practical methodology and notation for developing systems capable of knowledge-intensive performance. Although artificial intelligence researchers have developed several alternatives, only the RBS approach consistently produces expert problem solvers".

We can conclude that expert-system technology is a collection of methods, software tools and special computer equipment used to manage and utilize expertise knowledge. Some of the methods can be loosely coupled with the actual use of computers, being rather system-design methods or "ways of thinking". It is useful to recognize the close connections between the terms used in the context of expert systems and those used in conventional computer science (Hart, 1986):

Expert-systems terminology     Computer-science terminology
Knowledge engineer             Programmer-analyst
Knowledge base                 Program
Expert-system shell            Programming language
Knowledge-acquisition tool     Programming environment
Inference engine               Interpreter

The comparison shows that the difference between conventional programming and building expert systems is not as dramatic as sometimes stated. Hart (1986) says: "I believe that expert systems can be approached from the standpoint that they are another form of programming". According to Hayes-Roth (1985), "rule writing can in fact be described as a special kind of programming". This suggests that the primary concern in building expert systems is the analysis of the knowledge.

5.2 Evolution from man-machine systems to automatic systems

5.2.1 From interactive to automatic systems
Interactive systems represent an intermediate stage in the process toward full automation. Automatic decision making is not always mandatory, because all the most important decisions can be left to be made by a user. In fully automatic systems the situation is different: all the decisions are to be made by the system. Progress in digital image storage and processing technologies suggests that fully digital and automatic systems for aerial triangulation could already be built. A system architecture of such a system is drafted in Fig. 2.

5.2.2 Role of an operator in an interactive system
We could imagine that a human operator using an interactive system based on an analytical plotter is now replaced by the inference computer. Obviously, some of the capabilities of the operator have to be included in the inference engine and the associated knowledge base.


Fig. 2. System architecture of a fully digital and fully automatic system for aerial triangulation.

What then are the capabilities required to carry out an aerial triangulation task from a to z? Considering the mensuration process alone, there are decisions to be made about certain matters:
- What is the order in which the photographs are processed?
- What are the points to be measured, and where are they located?
- What is the order in which the points are measured?

We can start by looking at the different sources of data an operator uses:
- project specifications
- flight plans
- old maps
- signalizing charts
- coordinate lists of control points.

In the usage of an analytical stereoplotter in an interactive mode, the following tasks can be identified among the activities of an operator:

Overall control. The operator has to take care of the overall control, like operating different switches, installing photographs on the carriages, etc.

Visual interpretation. The actual stereoscopic mensuration is done by the operator. It comprises visual interpretation on different levels, including the final stereoscopic matching.

Information input. Sometimes the system requires information or data from the operator. The operator might have this information directly available from memory. Or, the operator may use external sources as described above. In each of these cases, it may be necessary to produce this information as a result of an inference process.

Decision making. There also occur cases when the operator is required to respond with a binary decision only. This decision may again be based on an inference process where information from different sources is combined.
We can conclude, based on the analysis given above, that in a man-machine system based on an analytical plotter, the responsibilities of the computer and the operator are very much intertwined: at times the computer makes the decisions, based on information supplied by the operator, and vice versa.

5.2.3 System design
We can take different approaches to system design and programming when various sources of data are utilized in a fully automatic system. Following a conventional approach, systems are implemented by writing procedures in which the actions are defined. These procedures also define the order in which the actions will take place. On the other hand, if a rule-based programming paradigm is used, we only define the actions and the conditions (states) in which these actions can be applied. In that case we must have an inference engine which decides, by using some strategy, the actual order of application of the actions. As stated earlier, each of the approaches above can be considered to be programming. The essential question is then which is the most natural and cost-efficient way to do the programming. We can argue that the way an operator uses different sources of data is rule-based: at each stage of the mensuration process he decides what is the most useful source and takes his actions based on that decision. Of course, there are some standard (default) procedures he tries to follow whenever possible. The contrast between the two approaches is sketched below.
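A minimal Python sketch contrasting the two formulations; the conditions, actions and the trivial conflict resolution are hypothetical examples, not drawn from any actual triangulation system:

    # conventional, procedural formulation: the order of the actions is fixed
    def next_action_procedural(state):
        if state.get("point visible"):
            return "measure point"
        if state.get("old map available"):
            return "consult old map"
        return "consult project specifications"

    # rule-based formulation: only conditions and actions are defined;
    # the inference engine decides which applicable rule to fire
    rules = [
        ({"point visible"},     "measure point"),
        ({"old map available"}, "consult old map"),
        (set(),                 "consult project specifications"),  # default
    ]

    def next_action_rule_based(state, rules):
        facts = {name for name, value in state.items() if value}
        applicable = [action for condition, action in rules
                      if condition <= facts]
        return applicable[0]     # trivial conflict resolution: first rule wins

    state = {"point visible": False, "old map available": True}
    print(next_action_procedural(state))          # -> consult old map
    print(next_action_rule_based(state, rules))   # -> consult old map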

5.2.4 Departure from old conventions
Some of the problems with analytical plotters are irrelevant when hardware as sketched in Fig. 2 is available. For example, we could abandon the photograph-wise mensuration process and use some other criterion for controlling the order of actions, like a point-wise approach. As a matter of fact, this kind of approach has already been used with analytical plotters by measuring several small-format photographs simultaneously. Analytical plotters with digitizing cameras represent in many respects an intermediate level in the progress towards fully digital and fully automatic systems: many operations can be done digitally and automatically, but we still use analog photographs as memory devices, which have to be changed manually.

5.3 About expert-system tools

5.3.1 System analysis and design
Often, when speaking of expert systems, it is assumed that they are interactive systems, conversing with a user to solve some problem. They usually require the user to supply, in an interactive manner, the data facilitating the solution of the problem. Although such approaches might sometimes be appropriate in photogrammetry, for example in designing easy-to-use systems for non-professionals, it is considered here that the most feasible application area for expert systems is in tasks in which large amounts of data have to be analyzed, automatically if possible. To perform well in such a task, the system has to "understand" the problem well; large amounts of domain-specific knowledge have to be incorporated into the system. Because the concept 'expert system' is strongly associated with interactive consultation systems, it might be more appropriate to use the concept 'knowledge-based system' here.

Knowledge-based systems for photogrammetric analysis are hybrid systems in the sense that they definitely have to use methods well known from statistical and numerical analysis. Sometimes these kinds of systems are also called embedded expert systems, to emphasize that the knowledge-based part is incorporated into a larger system. Special consideration has to be given to the decomposition of the system, to keep it modular and maintainable.

5.3.2 Software
Embedded knowledge-based systems set special requirements for the tools used in their implementation. Conventional procedural languages such as Fortran, Pascal, C or Ada are natural and efficient for implementing numeric algorithms. Thus, if special tools are used in the programming of the knowledge-based part of the system, interfacing to the procedural languages must be guaranteed.

Wider interest in expert systems only began around 1983. Despite this, there is a huge number of expert-system shells or tools available. This can be seen by reviewing appropriate periodicals, for example The AI Magazine, where many commercially available systems are advertised. An evaluation of most of them is given in Gilmore et al. (1986). In the development of most of the expert-system tools, special effort has been put into supplying ready-made tools to help in the design of interactive consultation expert systems. In the construction of embedded systems these tools are of little value, however. There are also tools that are in a way extensions of procedural languages, like OPS83 and Objective C. OPS83 offers tools for rule-based programming, while Objective C supports object-oriented programming. They are promising because the underlying procedural language is directly accessible. On the other hand, they lack the interpretative programming environments that are typical for Lisp or Prolog.

The existing tools for expert-system development are sometimes unnecessarily massive and complex. It is also often argued that, for example, rules should be expressed in a form resembling natural language (English). This often results in a verbose representation, which is not desirable when large rule bases have to be maintained. In Seppälä and Holopainen (1986) a method was introduced in which the rules were expressed in the form of a decision table.
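Following that idea only in spirit, a decision table can be held as a compact data structure and interpreted by a few lines of code. The sketch below is in Python, with hypothetical conditions and actions; an entry of None means that a condition is irrelevant for that rule:

    # each column of the decision table is one rule (R1, R2, R3)
    condition_rows = {
        #                     R1     R2     R3
        "residual large":   (True,  True,  False),
        "redundancy low":   (False, True,  None),
        "control point":    (None,  True,  None),
    }
    action_rows = {
        "flag as gross error": (True,  False, False),
        "remeasure":           (False, True,  False),
        "accept":              (False, False, True),
    }

    def decide(facts):
        # return the actions of the first rule (column) whose conditions match
        n_rules = len(next(iter(condition_rows.values())))
        for r in range(n_rules):
            if all(entries[r] is None or entries[r] == facts[name]
                   for name, entries in condition_rows.items()):
                return [action for action, entries in action_rows.items()
                        if entries[r]]
        return []

    print(decide({"residual large": True, "redundancy low": False,
                  "control point": True}))    # -> ['flag as gross error']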

Such a table clearly exemplifies how a large set of rules can be expressed in a very compact and readable form, which, in addition, is well suited for use in embedded systems. The use of decision tables, instead of individual rules, has some obvious advantages. First, as stated above, it is a very compact form of representation. Secondly, by using several decision tables, the rule base becomes structured. The formalism is simple enough to serve as a communication medium when the knowledge base is gathered by a team of experts and system designers. This helps in the maintenance of large rule bases, which is often found to be problematic. Thirdly, the implementation is often straightforward, also when conventional programming languages are used. The representation based on the usage of decision tables is fully general in the sense that rules which are expressed in predicate logic can always be expressed using decision tables. The decision-table method requires some further development to have the same expressive power as well-formed formulae in predicate logic or even Horn clauses, which are the basis for Prolog (Kowalski, 1979).

5.3.3 Hardware
Most of the earlier expert-system tools were developed by using Lisp. This usually means a decrease in the efficiency of the program, compared to the use of some conventional languages, which have often been used in the more recent products, see e.g. Gilmore et al. (1986). To make the development and execution of Lisp-based programs more efficient, computers with dedicated hardware have been designed, called Lisp machines. Originally, they differed from general-purpose computers principally in the following aspects:
(1) The central processing unit has in its instruction set such instructions as support the fast execution of Lisp programs.
(2) Interactive development of Lisp programs is well supported.
(3) They have large graphics screens and use menu techniques extensively, to support software development.
(4) They are usually single-user systems, often called workstations. This further improves the productivity of the work, because run times can be predicted better than in time-sharing environments.

Recently, most of the large computer manufacturers have released so-called engineering workstations. They deviate from the Lisp machines in that the instruction set is not tailored especially for Lisp but is compatible with the one used in the computer family of the company. Looking to the future, it is unlikely that Lisp machines will become popular in photogrammetric applications. The relatively high price serves as one basis for this argument. Secondly, the engineering workstations offer better compatibility with existing software, which is important in the design of embedded expert systems.


It is also likely that progress in the design of algorithms and software tools will decrease the need to use Lisp-based programs; OPS83 and Objective C are examples of this trend.

5.4 Some expert-system approaches

For using expert-system technology in combination with conventional system design principles, some 'standard' approaches can be identified (Figs. 3 to 5). Figure 3 represents a front-end approach, in which the usage of a complex system is made more manageable with the aid of a knowledge-based system. The work done on a knowledge-based consultant for structural analysis (Bennett and Engelmore, 1979) and on guiding the usage of a complex computer operating system (Schor, 1986) are examples of this kind of approach.

A similar example is given in Fig. 4. There the emphasis is on decomposing the system such that the user-dependent parts are clearly separated from the rest of the system, which is assumed to be rather static. Knowledge-based expert systems are then used to facilitate the tailoring of a widely distributed system to the particular needs of any user or environment where the system is used. Also, within a single organization different users might have different 'views' of the system, depending on their individual needs.

In Fig. 5 an expert system is proposed to be used to interface an application system, which might be an expert system itself, to multiple information sources or data bases. This kind of approach has been discussed in Rehak and Howard (1985) in the light of integrating data base systems with CAD systems. The use of knowledge-based interfaces to data base systems has also received wider interest (Kellogg, 1986).


Fig. 3. Expert system as a front end of a complex system.


Fig. 4. Use of expert systems in tailoring of a system for different environments.

Fig. 5. Expert system for interfacing an application system with multiple information sources.

The problem of combining existing heterogeneous software components by using a knowledge-based approach is discussed by Chalfan (1986). The primary motivation for that work has been the desire to avoid manual phases that cause errors and delays.
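As a concrete illustration of the front-end approach of Fig. 3, the following minimal sketch shows how a small rule base might translate a user-level request into a command sequence of the underlying application system. The goals and command names are hypothetical, invented here for illustration, and do not correspond to any existing photogrammetric system.

/* A minimal sketch of the expert-system front end of Fig. 3.
 * Goals and command names are hypothetical illustrations. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *goal;      /* user-level request */
    const char *commands;  /* command sequence for the host system */
} FrontEndRule;

/* Knowledge base of the front end: one rule per user-level goal. */
static const FrontEndRule rules[] = {
    { "adjust block",   "load_images; measure_points; bundle_adjust" },
    { "check blunders", "bundle_adjust; analyse_residuals; report" },
};

/* Consult the rule base and return the recommended command
 * sequence, or a fall-back message when no rule applies. */
static const char *advise(const char *goal) {
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strcmp(rules[i].goal, goal) == 0)
            return rules[i].commands;
    return "(no advice; please refine the request)";
}

int main(void) {
    printf("%s\n", advise("adjust block"));
    return 0;
}

A real front end would, of course, carry on a dialogue with the user and reason over many more conditions; the point of the sketch is only that the knowledge of how the complex system is driven is isolated in a declarative rule base instead of being spread through procedural code.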

5.5 Photogrammetrists as end-users of AI products

Above, the discussion has concentrated on how and where artificial-intelligence techniques can be utilized in systems built for photogrammetric purposes. Sometimes photogrammetrists also act as actual end-users of artificial-intelligence products. The MACSYMA system (Symbolics Inc., 1983) for symbolic mathematics, or computer algebra, is an example of an end-product that can be used by researchers and engineers in photogrammetric applications. Mechanical and tedious mathematical derivations necessary in research and development work can be eased significantly with the aid of such tools; a typical example of the kind of derivation meant here is sketched below.
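As an illustration - - added here and written in one common sign convention, not taken from the systems cited - - linearizing the collinearity equations for a bundle adjustment requires partial derivatives of rational expressions such as

$$x = -c\,\frac{U}{W},\qquad
U = r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0),\qquad
W = r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0),$$

where $c$ is the camera constant, $(X_0, Y_0, Z_0)$ the projection centre and $r_{ij}$ the elements of the rotation matrix. Differentiation then gives, for example,

$$\frac{\partial x}{\partial X_0}
= -c\,\frac{(\partial U/\partial X_0)\,W - U\,(\partial W/\partial X_0)}{W^{2}}
= \frac{c\,(r_{11}\,W - r_{31}\,U)}{W^{2}}.$$

Repeating such differentiations by hand for every unknown - - and again for each extension of the camera model - - is exactly the kind of mechanical work that a computer-algebra system automates.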

6. FINAL REMARKS

The review given above is somewhat conservative and introduces only the most mature techniques of artificial intelligence. It is considered that they can be readily applied when building systems for photogrammetric data analysis.

The discipline of artificial intelligence also has other things to offer. One of them - - perhaps the most important one - - is a new attitude with respect to heuristic or common-sense knowledge and domain-specific knowledge. Until now there has been an attempt in photogrammetric research to develop general-purpose solutions for a variety of problems. This works well up to a certain point. However, for high-level accomplishments the context of a problem also has to be considered, i.e., domain-specific knowledge has to be utilized. It remains a real challenge to combine the two desires: to build flexible general-purpose systems that can be customized to utilize all the domain-specific knowledge available. Expert-system shells are already examples of this trend. In them, general-purpose knowledge-representation techniques and an inference engine are used in combination with domain-specific knowledge.

When building automatic systems, the need for integration is important: a system must be able to communicate with other systems that hold information pertinent to solving a given problem. Consideration of this dependency is more urgent than it at first appears. In interactive systems, integration has been seen to be beneficial because of its ability to reduce the need for manual data management. In automatic systems, integration is mandatory.

Building complex systems remains difficult. The choice of suitable methods and tools makes the task somewhat easier. One of the questions is how to structure the problem. Different programming paradigms, such as rule-based programming and object-oriented programming, sometimes seem suitable. Their real usefulness can only be verified in large, long-living projects; small-scale demonstrations are inadequate for that purpose.

One important application area of artificial intelligence has been fully neglected in this study: vision. Vision, or general understanding of scenes of projected images, is of exceptional interest in photogrammetry. As is already known, computer-based vision is a huge challenge in which all possible means, including artificial intelligence and knowledge-based methods, have to be utilized.

The concept 'artificial intelligence' has a tendency to cause misconceptions because 'intelligence' is an emotionally loaded word; if somebody is said to

be 'intelligent', it is actually meant that he has some mental capabilities above the average. In the discipline of applied artificial intelligence we are interested in how some simple features characteristic of humans could be incorporated into computer programs. Concepts like 'heuristic programming' or 'advanced computer systems' would often be more appropriate than 'artificial intelligence'. At the current state of the art, it is often beneficial when artificial intelligence is not considered to be much more than the use of different types of heuristics and logic-based reasoning methods, as well as tireless search for finding a solution from a set of alternatives.

REFERENCES

Barr, A. and Feigenbaum, E.A. (Editors), 1981. The Handbook of Artificial Intelligence, Volumes 1-2. William Kaufmann, Inc., Los Altos, CA, USA.
Benciolini, B. and Mussio, L., 1982. Test on a reordering algorithm for geodetic and photogrammetric block adjustment. Rep., Politecnico di Milano, Istituto di Topografia, Fotogrammetria e Geofisica, Milano, Italy, 9 pp. (unpubl.).
Bennet, J. and Engelmore, R., 1979. Sacon: a knowledge-based consultant for structural analysis. Proc. Int. Joint Conf. Artif. Intell., 6: 47-49.
Blais, J., 1977. Program SPACE-M, Theory and Development. Technical Report, Topographical Surveys, Ottawa, Canada.
Booch, G., 1986. Object-oriented development. IEEE Trans. Software Eng., 12(2): 211-221.
Brown, D.C., 1976. The bundle adjustment - - progress and prospects. Int. Arch. Photogramm., Vol. 21, Invited Papers, Part 3, Helsinki, Finland.
Buchanan, B.G. and Shortliffe, E.H. (Editors), 1985. Rule-Based Expert Systems. Addison-Wesley Publishing Company, Reading, MA, USA.
Burch, Jr., J.G., Strater, F.R. and Grudnitski, G., 1979. Information Systems: Theory and Practice. John Wiley and Sons, New York, NY, USA.
Chalfan, K., 1986. A knowledge system that integrates heterogeneous software for a design application. The AI Magazine, 7(2): 80-84.
Chan, W. and George, A., 1980. A linear time implementation of the reverse Cuthill-McKee algorithm. BIT, 20: 8-14.
Clocksin, W. and Mellish, C., 1981. Programming in Prolog. Springer-Verlag, Berlin, Heidelberg, FRG.
Coe, M., 1985. What is/what are expert systems? Seminar presentation, Purdue University, West Lafayette, IN, USA.
Cox, B.J., 1986. Object-Oriented Programming: An Evolutionary Approach. Addison-Wesley Publishing Company, Reading, MA, USA.
Cuthill, E. and McKee, J., 1969. Reducing the bandwidth of sparse symmetric matrices. Proc. 24th Nat. Conf. Assoc. Comput. Mach., ACM Publ., pp. 157-172.
Fenves, S., 1984. Knowledge-based expert systems. Seminar presentation, Purdue University, West Lafayette, IN, USA.
Forgy, C.A., 1985. OPS83 User's Manual and Report. Technical Report, Production Systems Technologies, Inc., Pittsburgh, PA, USA.
George, A., 1971. Computer Implementation of the Finite Element Method. Technical Report STAN-CS-208, Stanford University, Stanford, CA, USA.
George, A. and Liu, J., 1981. Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Englewood Cliffs, NJ, USA.
Gibbs, N., Poole, W. and Stockmeyer, P., 1976. An algorithm for reducing the bandwidth and profile of a sparse matrix. SIAM J. Numer. Anal., 13(2): 236-250.

Gilmore, J.F., Howard, C. and Pulanski, K., 1986. A comprehensive evaluation of expert system tools. SPIE Vol. 657, Applications of Artificial Intelligence IV, pp. 194-208.
Hart, P., 1986. Interview: Peter Hart talks about expert systems. IEEE Expert, 1(1): 96-99.
Hayes-Roth, F., 1985. Rule-based systems. Comm. ACM, 28(9): 921-932.
Hayes-Roth, F., Waterman, D.A. and Lenat, D.B., 1983. Building Expert Systems. Addison-Wesley Publishing Company, Reading, MA, USA.
Horowitz, E. and Sahni, S., 1978. Fundamentals of Computer Algorithms. Computer Science Press, Inc., Rockville, MD, USA.
IJCAI, 1985. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Volumes 1-2. Los Angeles, CA, USA.
Jackson, M., 1983. System Development. Prentice-Hall, Englewood Cliffs, NJ, USA.
Kellogg, C., 1986. From data management to knowledge management. Computer, 19(1): 75-84.
Kowalski, R., 1979. Logic for Problem Solving. North-Holland, New York, NY, USA.
Kruck, E., 1982. Optimierte Numerierung der Unbekannten in photogrammetrischen Blöcken. Bildmess. Luftbildwes., 50(6): 218-223.
Liu, J.W. and Sherman, A.H., 1976. Comparative analysis of the Cuthill-McKee and the reverse Cuthill-McKee ordering algorithms for sparse matrices. SIAM J. Numer. Anal., 13(2): 198-213.
Nilsson, N.J., 1980. Principles of Artificial Intelligence. Tioga Publishing Company, Palo Alto, CA, USA.
Pau, L., 1986. Survey of expert systems for fault detection, test generation and maintenance. Expert Systems, 3(2): 100-111.
Rehak, D. and Howard, H., 1985. Interfacing expert systems with design databases in integrated CAD systems. Computer-Aided Design, 17(7): 443-454.
Rich, E., 1983. Artificial Intelligence. McGraw-Hill Book Company, New York, NY, USA.
Sarjakoski, T., 1984a. On Numerical Methods in Photogrammetric Block Adjustment (in Finnish). Licentiate in technology thesis, Helsinki University of Technology, Helsinki, Finland.
Sarjakoski, T., 1984b. On Minimization of the Bandwidth of a Positive Definite Matrix (in Finnish). Unpublished report, Helsinki University of Technology, Helsinki, Finland.
Sarjakoski, T., 1984c. Efficient methods for selecting additional parameters of block adjustment. Int. Arch. Photogramm. Remote Sensing, Vol. 25, Part A3b, Rio de Janeiro, Brazil, pp. 932-944.
Sarjakoski, T., 1986a. Use of multivariate statistics and artificial intelligence techniques for blunder detection. 1986 ACSM-ASPRS Annual Convention, Technical Papers, Vol. 4, Washington, D.C., USA, pp. 265-274.
Sarjakoski, T., 1986b. Knowledge-based blunder treatment in bundle block adjustment. Int. Arch. Photogramm. Remote Sensing, Vol. 26, Part 3/3, Rovaniemi, Finland, pp. 199-211.
Sarjakoski, T., 1988. Automation in Block Adjustment Systems - - Role of Heuristic Data and Methods. Report under preparation, Helsinki University of Technology, Helsinki, Finland.
Schor, M.I., 1986. Declarative knowledge programming: better than procedural? IEEE Expert, 1(1): 36-43.
Schwidefsky, K. and Ackermann, F., 1976. Photogrammetrie. B.G. Teubner, Stuttgart, FRG.
Seppälä, Y. and Holopainen, A., 1986. A knowledge-based microsimulation model for analysing migration dynamics in the Helsinki region. Proc. 8th European Conference on Operations Research, Lisbon, Portugal.
Snay, R.A., 1976. Reducing the Profile of Sparse Symmetric Matrices. NOAA Technical Memorandum NOS NGS-4, National Oceanic and Atmospheric Administration, Rockville, MD, USA, 24 pp.
Symbolics Inc., 1983. MACSYMA Reference Manual. User's manual, Symbolics Inc., Boston, MA, USA.
Weiss, S.M. and Kulikowski, C.A., 1984. A Practical Guide to Designing Expert Systems. Rowman & Allanheld Publishers, Totowa, NJ, USA.
Winston, P.H., 1984. Artificial Intelligence. Addison-Wesley Publishing Company, Reading, MA, USA.