A highly parallel architecture for real time collision detection in flight simulation

Comput. & Graphics Vol. 15, No. 3, pp. 355-363, 1991
0097-8493/91 $3.00 + .00 © 1991 Pergamon Press plc
Printed in Great Britain.

Computer Graphics in Australia

A HIGHLY PARALLEL ARCHITECTURE FOR REAL TIME COLLISION DETECTION IN FLIGHT SIMULATION

M. A. BICKERSTAFF and G. R. HELLESTRAND
The VLSI and Systems Technology (VAST) Laboratory, The School of Electrical Engineering and Computer Science, The University of New South Wales, P.O. Box 1, Kensington NSW 2033, Australia

Abstract--This article describes the first implementation of a hardware architecture that solves the problem of collision detection for an arbitrarily complex collection of arbitrarily complex objects in a visual simulation system in real time. Resolving all collisions between N moving objects in real time requires that O(N²) calculations be performed every frame time. One possibility is to have one central processor performing all the calculations; this quickly reaches a performance limit. Alternatively, N processors can each do O(N) calculations, since moving objects only require calculations to be performed relative to themselves. Moreover, only active objects need to do calculations. The hardware architecture described in this article uses this calculation partitioning to solve the collision detection problem. There are a number of collision detection chips per active object, which compare that active object's polygons against all other polygons within the pilot's 360° viewing range. The number of collision detection chips per active object is proportional to the polygon complexity of an object--not the number of objects in the scenario. Further, we describe the model governing the collision detection hardware design and how the hardware is to be incorporated in the visual simulation system.

1. INTRODUCTION

In a computer-generated world, such as in flight simulation, many objects potentially interact. Visual simulation in three-dimensional (3D) space requires that objects are not only visible when they are supposed to be, but also physically realistic (e.g., if a plane flies through a mountainside you would expect the mountain to survive the impact and a fiery wreckage to be seen where the impact occurred). Object interactions require the detection of object collision, amongst an arbitrary number of interactive objects, and the determination of a suitable response to the collision, which demands that a massive amount of computation be performed. Flight simulation provided an early motivation for the development of computer animation and modelling, and it is in this area that this article proposes a hardware solution to the problem of real-time object and polygon interaction. Applications for collision detection hardware include determining the collision-free path of a robot through an obstacle space, determining ray/object intersections for the ray-tracing of objects, and interactive computer-aided design where objects are "fitted" together to determine possible functional faults. In fact, any scheme that requires intermodel component interaction has a use for collision detection. Objects are typically represented by primitives including collections of planar polygons, edges, or bicubic or quadric polynomial patches.

1.1. Exact calculation of polygon intersection
Various exact calculation methods exist for determining precisely where two polygons intersect. These include standard vector calculus [2], quaternions [4], iterative time slicing [3], and using velocity and distance bounds [8]. In an environment where the state of the scenario is seen as a series of "snapshots," exact polygon intersection calculations become error prone.
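To illustrate the kind of exact vector calculation these methods involve, here is a small sketch of our own (not code from the paper): the point common to two planes p·x = k1 and q·x = k2, and to the plane through the origin with normal p × q, recovered with Cramer's rule.

```python
# Illustrative sketch (not from the paper) of an exact plane-pair
# calculation: find a point b with p.b = k1, q.b = k2, (p x q).b = k3.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def intersection_point(p, q, k1, k2, k3=0.0):
    rows = [p, q, cross(p, q)]
    k = (k1, k2, k3)
    d = det3(*rows)
    # Cramer's rule: one 3x3 determinant per component plus the divisor,
    # i.e., dozens of multiply/adds for a single polygon pair.
    return tuple(det3(*[tuple(k[r] if j == i else rows[r][j]
                              for j in range(3))
                        for r in range(3)]) / d
                 for i in range(3))

b = intersection_point((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 2.0, 3.0)
print(b)  # (2.0, 3.0, 0.0): lies on both planes
```

The operation count here is the point: each polygon pair needs many multiplications and additions, which is exactly the cost the bounding-box model described later avoids.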

Two objects may be travelling at sufficient speed that they appear not to make contact with one another, yet they may have passed through each other in the time interval between the two frames. An exact calculation of the line of intersection of two polygons requires a large amount of hardware per polygon to be able to detect this "move-through" of one object past another.
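A minimal numeric sketch of the "move-through" problem (illustrative values of our own, not from the paper): two objects sampled once per frame cross between snapshots, so no sampled frame shows contact.

```python
# Two 1D objects sampled once per frame can swap sides between snapshots,
# so a per-frame intersection test never observes the contact.
def positions(start, velocity, frames, dt=1.0):
    return [start + velocity * f * dt for f in range(frames)]

a = positions(0.0, +30.0, 4)   # object A moving right
b = positions(50.0, -30.0, 4)  # object B moving left

# Snapshot test: are they ever within 5 units of each other in a frame?
snapshot_hit = any(abs(x - y) <= 5.0 for x, y in zip(a, b))

# Continuous check: did their relative order flip between frames?
crossed = any((x0 - y0) * (x1 - y1) < 0
              for (x0, y0), (x1, y1) in zip(zip(a, b), zip(a[1:], b[1:])))

print(snapshot_hit, crossed)  # the snapshots miss a contact that did occur
```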

1.2. Spatial quantisation
The quantisation of object space allows obstacles to be represented for collision-free path determination. Swept volumes [3], octrees [13, 17], and polytrees [5] allow object spaces to be modelled for 3D space occupancy testing using geometric operations (e.g., intersection). Spatial partitioning is described in ref. [7], in which an object is encoded in binary form and tested for point inclusion. An adaptive grid for spatial partitioning is described in ref. [10]. Spatial quantisation requires large amounts of storage to implement, and is not suited to hardware environments due to the complexity of accessing the data structures.

1.3. Bounding volumes
Instead of performing calculations against the exact representations of objects, or fragmenting space with large data structures, an alternative is to enclose each object or its primitive components in a geometrically simple volume (e.g., a bounding box or sphere). Boxes are used hierarchically [11], or in a single level to enclose polygons [24], edges [12], and patches and subpatches [26]. Sectors are a variation on bounding boxes [25] using six planar quadrilaterals to enclose surface patches. Also, boxes are used as a basic modelling primitive against which simple distance calculations are performed [23]. In robotics, artificial potential fields of repulsion and attraction serve to determine collision-free paths [19]. Forbidden regions of motion for robot manipulators are created in ref. [22]. Spheres of protection used together with test points are popular in flight simulation systems [9, 18]. Axis-aligned bounding boxes placed around object polygons are used in the system described in this study due to their inherent computational simplicity when used to calculate polygon-polygon distances.

1.4. Computational geometry and existing hardware designs
The field of computational geometry is concerned with the computational complexity of geometric problems within an analytical framework. Refs. [20, 27] provide a complete review of the existing algorithms. The polygon/d-polytope intersection detection and reporting problem has been studied widely, with the current sequential algorithm performance of O(N log N), for N polygons/d-polytopes, using O(N) storage. Both of these bounds are optimal. Hardware designs for collision detection are few. Ref. [6] describes a generalised systolic array for geometric operations. A dynamic processor database is described in refs. [26, 28] and explains iterative hardware for ray-tracing.

The hardware architecture described in this study is based around a computationally simple model which minimises the per-polygon comparison hardware requirement. Parallelism is utilised to perform a single-to-many polygon comparison, thereby comparing each polygon, belonging to objects in the pilot's possible 360° view, against all the polygons representing the pilot's vehicle in the scenario. The following sections describe the model governing the collision detector, the visual simulation system design and how the collision detection hardware is incorporated, the collision hardware design, and simulation results of major components in the hardware design.

2. DESIGN CRITERIA AND PRINCIPLES
The collision detection problem, at an object-to-object level, has a time complexity of O(N²), where N is the number of active (i.e., either manually or automatically controlled) objects/vehicles in the scenario. In a centralised visual system, an increase in the number of objects in the scenario causes a quadratic increase in object-to-object comparisons. Either the computation time increases quadratically, or the amount of hardware dedicated to doing the comparisons increases quadratically. A multiple object simulation becomes increasingly difficult to realise as the number of objects in the scenario grows. For this reason, many CAD/CAM and animation systems do not support even minimal collision detection [24].

The main concept, upon which the collision hardware described in this study is based, is that the visual simulation system breaks the O(N²) problem into N O(N) problems. All the previously mentioned models and systems treat collision detection from a global point of view. The solution presented treats moving objects in a scenario as the only objects which need to perform proximity calculations. Further, each moving (or active) object only needs to calculate the proximities of other objects relative to itself. Therefore, the need to perform object-to-object comparisons is localised to each active object in the system. A distributed, multiuser flight simulation system embodies this concept by allowing individual user control for each active object in the scenario. Object modelling hardware is localised on a per-user basis and object subsystems are connected by a network, so each local subsystem needs only to calculate object proximities relative to itself. A consequence of this localisation is that as active objects are added to the simulation (by adding another subsystem to the network), any hardware expansion that is required is linear relative to each object subsystem--not quadratic.

Of the various models used in proximity calculation, the bounding box is the simplest. The bounding boxes are placed around the polygons that comprise the object being modelled and a polygon-to-polygon comparison is performed. In computational geometry, the optimal polygon-polygon comparison performance is O(log M), where there are M polygons representing the "query database" against which the query polygon is compared. The query database is in the form of a binary tree which facilitates the query performance of O(log M) [20]. When considering a hardware implementation, the tree structure becomes cumbersome. A bus topology is simpler than a tree topology in a 2D layout. A parallel implementation performs calculations in O(1) time.

Another consideration is whether a hierarchical form of collision detection (e.g., boxes within boxes) is better than a single-level structure. When considering the design of a hardware pipeline stage, the worst-case throughput criteria must be met (i.e., the pipeline stage must be able to perform its function within one pipeline timestep). This means that the collision hardware, performing the finest grain calculation, must be able to perform a worst-case, single-to-many polygon comparison in one time step. Performing coarser levels of collision detection is redundant since the finest level of hardware can cater adequately for the worst-case throughput.

Bounding boxes can cater for frame-to-frame coherence and interframe object move-through by being stretched in the direction of object motion by an amount equal to the average distance covered by the object between frames. The amount of stretching would be set at system initialisation so that the "query database" does not need to be modified during normal operation. The collision calculation is performed relative to the viewpoint coordinate system, and so stretching the bounding boxes involves a multiplication of bounding ranges along one Cartesian axis only. Only the resident object's bounding boxes are stretched, since this calculation is trivial (i.e., a logical shift left). The resultant elongated bounding boxes are loaded into the collision detection hardware at system initialisation and normal operation begins. The network object's bounding boxes are not stretched.
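The stretching idea can be sketched in one axis as follows (a simplified illustration of our own; the paper's hardware operates on fixed-point bounding ranges, and "logical shift left" is its cheap multiplication):

```python
# One-axis sketch of domain stretching: the resident domain is elongated
# along the motion axis so fast interframe motion is still caught.
def overlaps_1d(amin, amax, bmin, bmax):
    return amin <= bmax and bmin <= amax

def stretch(dmin, dmax, shift=1):
    # "logical shift left": double the extent, a trivial hardware operation
    # (direction of motion is assumed to be toward +axis here)
    return dmin, dmin + ((dmax - dmin) << shift)

res = (0, 4)   # resident domain on the motion axis (integer units)
net = (7, 9)   # network domain after this frame's motion

print(overlaps_1d(*res, *net))            # unstretched: the contact is missed
print(overlaps_1d(*stretch(*res), *net))  # stretched: the contact is caught
```

This also shows why stretching can be asymmetric, as the text notes: only the resident domain is elongated, so node A may report a collision with B while B reports none with A.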

Figs. 5(a-d) show objects and their associated bounding boxes. The relative bounding box orientations and box stretching can be seen where two objects are in collision (i.e., the F15 and paper aeroplanes in Figs. 5(e) and 5(f)). Although stretching offers a solution to interframe collision detection, it can cause inconsistent collision results to occur between two active objects (e.g., A can detect a collision with B but B may not detect a collision with A). The inconsistency is proportional to the amount of stretching (i.e., the larger the stretch factor, the larger the possibility of inconsistent results). A tradeoff must be arrived at between a satisfactory stretching factor and the degree of false collision detections that occur.

Consider the line of intersection between two planes defined by their unit normals p and q, respectively. The line of intersection is common to both planes, so p·x = k1 and q·x = k2, where x is a point on the line of intersection. The equation of the line of intersection can take the form b + μ(p × q), where μ is a scalar factor. To define the line of intersection, b (which can be any point on the line) must be found. By letting b be the point of intersection of the three planes, b can be determined, since b satisfies (p × q)·b = k3, where k3 can be equal to 0 if the plane defined by the cross product goes through the origin. The point of intersection of the three planes p, q, and (p × q) is found by solving the following equation:

    [ b1 ]   [ p1             p2             p3          ]^(-1) [ k1 ]
    [ b2 ] = [ q1             q2             q3          ]      [ k2 ]
    [ b3 ]   [ p2q3 - p3q2    p3q1 - p1q3    p1q2 - p2q1 ]      [ k3 ]

where b_i are the vector components of b; p_i are the vector components of the unit normal to one of the two intersecting planes; q_i are the vector components of the unit normal to the other intersecting plane; and the k_i values are the resultant dot product values of x with p, q, and (p × q), respectively. The final solution requires many multiplications and additions, and would be costly and complex to implement. Also, the above does not include bounds checking, in that polygons are bounded half-space intersections. A bounding box requires three comparators to estimate polygon-polygon intersection, and it is simpler and cheaper to implement. Exact vector calculations require that numerical results persist for several stages of the calculation to produce a binary result. The bounding box model minimises the amount of numerical computation and uses Boolean logic to perform a majority of the calculations. Consequently, from the point of view of functional complexity and layout,


the bounding box calculation model is simpler and smaller. The three comparators work in parallel since each axis bound comparison is independent of the other axes. Each comparator determines which bounding box is to the left or right, and determines whether the appropriate box bounds intersect. The term collision domain defines one of these rectilinear, axis-aligned bounding boxes. Domains are formed around polygons within objects, and completely enclose the particular polygon. The collision hardware is located within a pipeline which processes a list of objects sequentially in one frame time, but internally deals exclusively with polygons. Consequently, the pipeline produces a stream of polygons which may or may not intersect with other polygons. The resulting collision detection hardware has to do a "many-to-one" polygon comparison for every polygon passing through the pipeline to determine a collision event. Increasing the maximum number of polygons per object requires a linear collision hardware increase. There is an associated decrease in object-per-frame performance of the pipeline. This decrease is countered by decreasing the pipeline time step.

3. THE DISTRIBUTED CGI SYSTEM ARCHITECTURE
The visual simulation system, for which the collision detection subsystems are designed, consists of many simulation nodes connected together by a broadcast, multiple-ring, token network. A simulation node (Fig. 1) is a single-user subsystem which represents the user's object (vehicle) in the current scenario. The simulation node includes the input control, object modelling, and display subsystems, which produce the image on the display device. A simulation node can operate independently of the network. Object descriptions are the major data entities exchanged between simulation nodes.
Each simulation node embodies the following subsystems: The network interfaces collect object descriptions from the multiple token rings and transmit the resident object controller's object description at the appropriate time. The object description is a collection of an object's classification code, and its 3D positional and orientational state. A resident object is the object that the user's simulation node creates in the scenario--it is the object about which the user's node provides object descriptions to the rest of the system. For instance, if pilot A is flying an aircraft of type X, and pilot B is flying an aircraft of type Y, then pilot A's resident object is the type X aircraft, and pilot B's resident object is the type Y aircraft. The resident object controller interprets the user's control signals to generate apparent horizon limits for the terrain storage system, and an object description to be transmitted on the network. These actions are performed once every frame. Collision data is supplied to the object pipelines by the resident object controller once at system reset. Objects that the resident object


[Figure: block diagram of a simulation node: pilot's controls, network interfaces to the multiple token rings, terrain system, object pipelines, and imaging engine driving a CRT.]
Fig. 1. Simulation node architecture.

interacts with (e.g., other aircraft or terrain objects) are termed network objects, since these object descriptions typically come from other simulation nodes in the network or from the simulation node's local terrain storage/extraction subsystem. The controller interprets the result produced by the collision hardware and changes the resident object classification to a list of polygons representing an explosion when a positive collision result is indicated. This is a precursor to a controller removing the node from the simulation scenario. The terrain storage and extraction subsystem accepts the apparent horizon limits and produces a list of object descriptions which can be seen, under perfect visibility, from a 360 ° view from the user's position and orientation. A collection of object pipelines accepts object descriptions and produces perspective transformed Y-sorted polygons representing all the objects that are viewable (excluding polygon occlusion) from the user's position and orientation. The Y-sorting is relative to the projection plane of the user's display device. Each pipeline contains an instance of the collision detection hardware. The imaging engine accepts the Y-sorted polygons and produces pixel data which has been subpixel antialiased, textured, and depth occluded. It should be noted that this subsystem is a dynamic bus/pipeline architecture as opposed to the popular frame buffer concept.
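For illustration, an object description exchanged between simulation nodes might be modelled as follows (the field names are our assumptions; the paper specifies only a classification code plus the 3D positional and orientational state):

```python
# Hypothetical shape of an "object description" broadcast between
# simulation nodes; field names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class ObjectDescription:
    classification: int   # object type code (e.g., which aircraft model)
    position: tuple       # (x, y, z) in world coordinates
    orientation: tuple    # (roll, pitch, yaw)

desc = ObjectDescription(7, (100.0, 250.0, 3000.0), (0.0, 0.1, 1.57))
print(desc.classification)  # 7
```

On a positive collision result, the controller would replace the classification with one denoting an explosion object, as the text describes.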

The reader should refer to refs. [1, 14-16] for further details and the development history of the above subsystems. Given that the collision hardware is concerned with polygon-to-polygon interaction, it is logical at this point to describe the object pipeline in more detail. Fig. 2 shows the functional segmentation within each object pipeline. The segments implement the normal steps that are applied to generate a geometrically and perspective-transformed picture. The polygon generator accepts the object description and produces a list of polygons representing the object. These polygons are defined in object space. Consequently, they are transformed to world coordinates and then to viewpoint coordinates. After geometric transformation, the polygons are clipped to the user's viewing pyramid, and perspective transformed. The perspective-transformed, screen-projected polygons are sorted relative to their minimum Y-coordinate vertex. Collision hardware is located after the geometric transformation stage to perform polygon-to-polygon comparisons. Data feeds both the clipping and the collision hardware.

[Figure: pipeline stages from object description through polygon generator, geometric transform, clip, perspective transform, and Y-sort to output polygons, with the collision detector tapping the transformed data and producing a collision result.]
Fig. 2. Object pipeline functional layout.

4. THE COLLISION DETECTOR ARCHITECTURE
The fundamental data unit in the pipeline is the polygon, although vertices are transformed individually. All polygons are triangles, and are comprised of three vertices and texture (colour) data for controlling colour modulation across each polygon during rendering. Each polygon requires three pipeline time steps for the vertices. Therefore, the collision hardware must be able to perform a many-resident-polygon to one-polygon comparison in three time steps. The collision hardware consists of a collection of chips, each of which carries part of the burden of doing the collision detection calculation. Each chip includes a domain generator and a bank of axial modules catering for the three Cartesian axes. The domain generator collects the domain information from the network polygon (from a network object) that is currently passing through the object pipeline. The domain generator output is double buffered so that the next polygon can be processed immediately while the axial

modules do their processing. The axial modules perform the network-domain to resident-domain comparison. A row of three axial modules (i.e., one for each geometric axis) is needed to perform the domain comparison. Fig. 3 shows the functional configuration of the modules. There is a column of axial modules and a domain generator for each geometric axis of reference (i.e., X, Y, Z). Although the axial modules are organised in columns, the axial modules across each row produce a collision result for a polygon-to-polygon comparison, since a domain requires comparison on the X, Y, and Z axes simultaneously.

[Figure: columns of axial modules, one column and one domain generator per axis, with each row of three modules combining into a collision result.]
Fig. 3. Parallel collision detector.

The results from each triplet of axial modules (e.g., X, Y, Z) are logically AND-ed together to form the collision result for one polygon comparison. These individual results are logically OR-ed together to produce the collision product for that collision detector chip. Since more than one collision detector chip is used at once, the result from each chip is OR-ed together to produce the final collective result. It takes only one polygon-to-polygon comparison producing a positive result (i.e., a complete row of X, Y, Z axial modules all producing a positive result) to indicate that there has been a collision. These calculations are summarised in the following equations (subscript R indicates resident object data; subscript N indicates network object data). Axis borrow values are:

    (X, Y, Z)_B = (X, Y, Z)_Rmin - (X, Y, Z)_Nmin

and the collision product is:

    Collision Product = Collision Product |
        ( ((('X_B) & (X_Rmin - X_Nmax)) | (X_B & (X_Nmin - X_Rmax)))
        & ((('Y_B) & (Y_Rmin - Y_Nmax)) | (Y_B & (Y_Nmin - Y_Rmax)))
        & ((('Z_B) & (Z_Rmin - Z_Nmax)) | (Z_B & (Z_Nmin - Z_Rmax))) )

where each subtraction contributes its borrow (sign) bit. Note that the above notation is in infix form with "|" being the OR operator, "&" being the AND operator, and "'" being the NOT operator, given the usual bracketing convention. Each axial module consists of a comparator, some switch logic for data flow control, and a pair of registers which hold the domain range values for that axis for a resident object polygon. The domain generator is a pair of comparator/latch pairs, which store the current maximum or minimum values of the coordinates of polygon vertices (see Fig. 4). During system initialisation, the axial modules are loaded with the resident object's polygon domains as defined in the user-view coordinate system. During the three pipeline time steps of each network polygon, the respective vertices are used to collect the network polygon domain. While the domain is being collected

[Figure: data flow through an axial module and the domain generator; comparator/latch pairs track the min/max of incoming vertex coordinates and feed the module comparison.]
Fig. 4. Collision detector data flow.

from the current polygon passing through the pipeline, the previous domain is being fed to the axial modules. In the first time step that the domain is available to the axial modules via the data buses, a borrow result is produced by the axial modules dictating the order of the operands to be used for the collision comparison. For example, in the X axis, if the resident domain is to the right of the passing network polygon, then the resident polygon is of higher order, and thus the resident domain's minimum X value is compared with the passing network domain's maximum X value. If the order is inverted, the passing network domain's minimum X value is compared against the resident domain's maximum X value. During the second time step, the axial modules each produce their particular collision result for the domain values for their particular axes. Each row of axial modules, in turn, produces a result for one polygon-to-polygon comparison, and the collection of collision detector chips produces a collective result for the complete resident object.

The steps of operation of the collision detector are as follows:
1. At system reset, load the resident object's domain(s) into the modules, that is (X, Y, Z)_Rmin,max.
2. In parallel, for the next three timesteps, present the current domain to the axial modules and collect the next domain from the polygon vertices currently passing through the transformation pipeline.
3. On the first vertex time step, the domain values for the polygon are presented to the min/max bus lines and the calculated borrow determines which combination of operands is used to determine the collision result for that particular polygon pair.
4. On the second vertex timestep, the domain values are used again to determine the collision result as per the equations above. Go to step 2. Note that the third vertex timestep is a null period, with respect to the axial modules.
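The borrow-driven comparison described above can be modelled in software as follows (our interpretation of the equations and steps, not the authors' implementation):

```python
# Software model (our interpretation, not from the paper) of one axial
# module: a borrow from Rmin - Nmin says which domain is leftmost, and
# that selects which min/max pair is subtracted to test axis overlap.
def borrow(a, b):
    return a - b < 0          # models the subtractor's borrow-out bit

def axis_hit(rmin, rmax, nmin, nmax):
    if borrow(rmin, nmin):    # resident domain is to the left
        return borrow(nmin, rmax)
    return borrow(rmin, nmax) # resident domain is to the right

def domain(vertices, axis):
    vals = [v[axis] for v in vertices]
    return min(vals), max(vals)    # the domain generator's min/max latches

def collide(resident_tri, network_tri):
    # AND the three axis results, as a row of X, Y, Z modules does
    return all(axis_hit(*domain(resident_tri, a), *domain(network_tri, a))
               for a in range(3))

r = [(0, 0, 0), (2, 0, 0), (0, 2, 2)]   # resident polygon (a triangle)
n = [(1, 1, 1), (3, 1, 1), (1, 3, 3)]   # passing network polygon
print(collide(r, n))  # True: the two collision domains overlap on all axes
```

In the hardware, these per-polygon results would then be OR-ed across the rows of one chip, and across chips on the bus, to give the collective result for the resident object.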
5. IMPLEMENTATION DETAILS AND RESULTS
Functional and analogue simulation has shown that the comparator used in the axial modules and the domain generators operates well within the required time step of the geometric transformation pipeline. The comparator has been laid out in double-metal 2-μm CMOS technology, and is scheduled for fabrication in a 14-bit test module. The layout size for the comparator indicates that each chip will include a double-buffered domain generator and 30 axial modules for the three Cartesian axes (X, Y, Z). This means that a 1000-polygon object requires 34 chips, all connected on a single data bus. The total number of pins per chip is approximately 125.

6. CONCLUSIONS
The novel collision hardware design provides real-time object-to-object calculation of proximity at a polygon level. The hardware is believed to be the first case of a hardware implementation providing real-time collision detection in a visual simulation system.


The hardware provides a more accurate estimation of convex hull-convex hull contact than previously provided by schemes using a totally enclosing sphere/box of "protection." The encompassing visual simulation system allocates a simulation node per active object in the scenario, thus partitioning the O(N²) object interaction problem into N O(N) problems. The collision hardware does not impose any kind of restriction on the number of polygons per object in the simulation, since expansion of the collision hardware is linear (i.e., another chip is simply added to the data bus). The collision hardware has been designed, simulated, and laid out as an integrated circuit. Critical components of the collision detection chip are currently being fabricated.

Acknowledgements--We thank the following organisations for their generous support of this work: The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Radiophysics; the Australian Department of Defence, Aeronautical Research Laboratories (ARL); the Joint Microelectronics Research Centre, University of New South Wales; and the Faculty of Engineering, University of New South Wales. We would like to thank Mr. R. L. Forster for kindly taking the photographs used in this study.

REFERENCES

1. M. A. Bickerstaff, A. D. Skea, C. B. Choo, and G. R. Hellestrand, Microelectronics in imagery generation, University of New South Wales Department of Computer Science Technical Report 8910 (August, 1989).
2. J. W. Boyse, Interference detection among solids and surfaces, Comm. ACM 22(1), 3-9 (1979).
3. S. Cameron, A study of the clash detection problem in robotics, Proceedings of IEEE International Conference on Robotics & Automation 1, 488-493 (1985).
4. J. Canny, Collision detection for moving polyhedra, IEEE Trans. Patt. Anal. and Machine Intell. 8(2), 200-209 (1986).
5. I. Carlbom, An algorithm for geometric set operations using cellular subdivision techniques, IEEE Comp. Graphics and Appl., 44-55 (1987).
6. B. Chazelle, Computational geometry on a systolic chip, IEEE Transactions on Computers 33(9), 774-785 (1984).
7. M. Chen and P. Townsend, Efficient and consistent algorithms for determining the containment of points in polygons and polyhedra, Eurographics '87, 423-437 (1987).
8. R. K. Culley and K. G. Kempf, A collision detection algorithm based on velocity and distance bounds, Proceedings of IEEE Int. Conf. on Robotics & Automation 2, 1064-1069 (1986).
9. R. L. Ferguson, AVTS: A high fidelity visual simulator, IMAGE III Conf. Proc., 475-486, NTIS (1984).
10. W. R. Franklin, Efficient polyhedron intersection and union, Proc. of Graphics Interface '82, 73-80 (1982).
11. J. Goldsmith and J. Salmon, Automatic creation of object hierarchies for ray tracing, IEEE Comp. Graphics and Appl., 14-20 (1987).
12. J. K. Hahn, Realistic animation of rigid bodies, Proc. of SIGGRAPH Conf. on Computer Graphics 22, 299-308 (August, 1988).
13. V. Hayward, Fast collision detection scheme by recursive decomposition of a manipulator workspace, Proc. of IEEE Int. Conf. on Robotics & Automation 2, 1044-1049 (1986).
14. G. R. Hellestrand and D. M. Gedye, Computer Generated Imaging for Real-Time Flight Simulation, JMRC Report for Aeronautical Research Laboratories (1984).



15. G. R. Hellestrand, GOLD: An architecture for real-time computer generated imaging, suited to VLSI implementation, IREE Proceedings of the 5th Australian and Pacific Region Microelectronics Conference, Adelaide, Australia (8609), 109-121 (May, 1986).
16. G. R. Hellestrand, C. B. Choo, M. A. Bickerstaff, and A. Skea, Microelectronics for Imagery Generation, JMRC Contract Report for the Department of Defence Aeronautical Research Laboratories (December, 1987).
17. M. Herman, Fast, three-dimensional, collision-free motion planning, Proc. of IEEE Int. Conf. on Robotics & Automation 2, 1056-1063 (1986).
18. J. B. Howie and M. A. Cosman, CIG goes to war: The tactical illusion, IMAGE III Conf. Proc., 439-454, NTIS (1984).
19. O. Khatib, Real-time obstacle avoidance for manipulators and mobile robots, Proc. of IEEE Int. Conf. on Robotics & Automation 1, 500-505 (1985).
20. D. T. Lee and C. K. Wong, Finding intersection of rectangles by range search, J. Algorithms 2, 337-347 (1981).
21. D. T. Lee and F. P. Preparata, Computational geometry--a survey, IEEE Trans. on Comp. 33(12), 1072-1101 (1984).
22. T. Lozano-Perez and M. A. Wesley, An algorithm for planning collision free paths among polyhedral obstacles, Comm. ACM 22(10), 560-570 (1979).
23. W. Meyer, Distances between boxes: Applications to collision detection and clipping, Proc. of IEEE Int. Conf. on Robotics & Automation 1, 597-602 (1986).
24. M. Moore and J. Wilhelms, Collision detection and response for computer animation, Proc. of SIGGRAPH Conf. on Comp. Graphics 22, 289-298 (August, 1988).
25. J. Ponce and D. Chelberg, Localized intersections computation for solid modelling with straight homogeneous generalized cylinders, Proc. of IEEE Int. Conf. on Robotics & Automation 2, 1481-1486 (1987).
26. R. W. Pulleyblank and J. Kapenga, A VLSI chip for ray tracing bicubic patches. In Advances in Computer Graphics Hardware I--Eurographics Seminars, W. Strasser (Ed.), Springer-Verlag, Berlin, 125-140 (1987).
27. M. I. Shamos and F. P. Preparata, Computational Geometry--An Introduction, Springer-Verlag, New York (1985).
28. J. Skytta and T. Takala, Utilisation of VLSI for creating an active data base of 3-D geometric models. In Advances in Computer Graphics Hardware I--Eurographics Seminars, W. Strasser (Ed.), Springer-Verlag, Berlin, 83-93 (1987).

APPENDIX
The following images were generated using the functional simulator of the Imaging Engine. The objects (the F15 fighter and the paper aeroplane) are part of the Commodore Amiga software package "AEGIS Videoscape 3D" by Aegis. These images merely show the model being implemented. They are not renderings by the prototype hardware--they are functional proofs. The first four figures, 5(a-d), show the fighter in two orientations and the associated collision domains around each polygon comprising the object. Fig. 5(e) shows two fighters about to collide (including the domains around each polygon). The fighter facing skyward is the resident object, and the other fighter is the network object. The figure shows the collision model that is implemented in one simulation node. The frame of reference for collision calculations is the resident object's reference frame. Notice that:
• The resident object's domains are extended in the direction of motion. This extension is to counter frame-to-frame object move-through.
• The domains of the network object are oriented relative to the resident object's reference frame--not its own reference frame. This means that the domains can be compared on all axes with a simple subtraction--not with 3D vector geometry.
Fig. 5(f) shows two paper aeroplanes in collision. This figure illustrates the above points for two simpler objects. In this case, the resident object is the aeroplane pointing downward, and the network object is pointing right to left. This figure illustrates the way the collision domains' representation/approximation of the convex hull varies with object orientation.
