Computer Networks and ISDN Systems 29 (1997) 1625-1633
Piece-wise linear morphing and rendering with 3D textures

Kartik Venkataraman a,*, Tim Poston b

a Microprocessor Research Labs, Intel Corporation, Santa Clara, CA, USA
b Centre for Information-enhanced Medicine, Institute of Systems Science, National University of Singapore, Singapore 119597
Abstract

Piecewise linear approximation to deformation of a volume dataset whose non-empty region is divided into tetrahedra allows 3DTexture volume rendering in about 3/2 sec, modulo texture memory limitations. Reduced versions drawn with fewer slices, which provide considerable feedback, can be rendered at interactive speed. © 1997 Elsevier Science B.V.

Keywords: Tetrahedral decomposition; 3D texture mapping; Volumetric morphing
1. Volume displays

Volume data, from computerized tomography, confocal microscopy, emission or MR scans, seismic reconstruction, etc., are now common across the sciences and engineering. One must often render an array D = {u_ijk | (i, j, k) ∈ [0, L] × [0, M] × [0, N]} as volumes, rather than displaying the surfaces of segments found by a thresholding test. Surface extraction can introduce serious artifacts, and loses (for instance) data about bone homogeneity around a planned implant. Where the u_ijk are non-scalar, such as by giving resonance at more than one frequency, there need be no good choice of scalar threshold. They may be better displayed by mapping their dimensions to colour components, and leaving segmentation to the powerful firmware of the human visual cortex. Human perception, in turn, is greatly enhanced if the display can be re-rendered at "interactive speed", with change that tracks user input, for instance by rotating the view. Ideally, render once per screen refresh: in practice, maximum latency between input
signal and display, if the user is not to suffer annoying overshoot, is on the order of 0.1 sec. We describe here a rendering approach that can now achieve such display speeds for reduced detail or for subvolumes, sufficient for some forms of user interaction, and should allow them for fuller images with hardware improvement to be expected soon.
A volume is usually rendered by answering the question "From viewpoint V, what should I see in direction v," typically chosen as the direction corresponding to a screen pixel, "when some analogue of light travels through D that way?" One can compute direction-by-direction, finding along each ray through the eye the absorption of "light" from behind or the reflection of "light" from a source. This ray-casting¹ is normally done in the CPU. Alternatively, one can use slices across the line of sight: for each slice S from far to near, and pixel by pixel p across the screen region S occupies, determine the point p_S on S "seen from" p,

¹ In ray-tracing, rays reflect, and typically interact only with surfaces. Cast rays are straight, but interact with the volume all along their length.
find a rendering contribution² at p_S from the values u_ijk at the surrounding grid points of D, and blend with the result at p of previously used slices. This 3DTexture technique [1,6,8] is accelerated by highly parallelized graphics hardware in such machines as the SGI Indigo Impact. Moreover, since it is compatible with the hardware Z-buffer scheme for hidden surface elimination, this method makes it much easier to embed surface-rendered objects in the volume, so that a mineshaft or scalpel defined by polygons can be included in the same display.
Often, much of D represents empty space. The corresponding CPU time is wasted. Papers such as [3,9-11] describe schemes where (parallel) rays are cast only through non-empty cubes in an octree structure. The corresponding technique using 3DTextures would replace each slice S by the set of triangles, quadrilaterals, pentagons and hexagons in which it meets non-empty-space cubes. We introduce the use of tetrahedra, which meet a plane more simply than do cubes. A 2D analogue of our algorithm (Fig. 1) yields a 1D "view" from the left. The tetrahedra may be regularly laid out, like the triangles in Fig. 1, or adapted to the data, with smaller ones where a morph is less linear [7]. The display algorithm is unaffected. Their use in approximating a volume morph has major speed advantages. An algorithm that similarly separates spatial and 3DTexture coordinates to shift much of the burden from sequential CPU to parallel raster managers, but computes the morph non-linearly on each texture polygon vertex (S. Meiyappan, private communication), runs six times slower on the same equipment.
In 3D, each S in Fig. 1 becomes a plane, the triangles become tetrahedra, and the segments where S meets them become triangles and quadrilaterals (Fig. 2). Finding their corners is linear interpolation, in the CPU; the interpolation and blending across them is highly parallelized in the geometry engine. In Fig. 1(a) are the unmorphed data, with triangles covering the points (shown grey) that have non-zero values. A map F morphs the data to a new shape as in Fig. 1(b).
Fig. 1. (a) Unmorphed data and (b) morphed data.
This F has a piecewise linear (triangle-by-triangle) approximation F̂. At a vertex v, F̂(v) = F(v); within a triangle, F̂ is linearly interpolated. Its inverse F̂⁻¹ returns F̂-morphed points back to their unmorphed positions: within any triangle not crushed to a line, F̂⁻¹ exists, and F̂ and F̂⁻¹ are computable by linear interpolation between vertices. The F̂-morphed version of the data (b) is rendered a slice S at a time, advancing to the left. A typical flat slice S meets morphed triangles in segments, with ends p found by linear interpolation between vertices. The corresponding points on the edges of the original triangles give the corners F̂⁻¹(p) of the broken line F̂⁻¹(S); on each segment of S, find the points p in line of sight with viewer pixels, and interpolate between ends to find the points F̂⁻¹(p) in F̂⁻¹(S).

2. Volume morphing
² For convenience, let D(x, y, z) stand for the value trilinearly interpolated from the u_ijk at the eight (i, j, k) grid points surrounding a non-grid point (x, y, z): the standard "value at" (x, y, z) rendered in both ray-casting and 3DTexture schemes.
Tetrahedral subdivision ties piece-wise linear volume morphing to the linearity inherent in 3DTexture hardware. There are many uses for transforming
points (x, y, z), where data are first found, to points (X, Y, Z) = F(x, y, z): one may be matching a labelled reference brain to a scan of a patient, to transport the label information; one may have a computed elastic response; etc. User input often fixes N landmark points p_i = (x_i, y_i, z_i) in the reference data and P_i = (X_i, Y_i, Z_i) in the target, which must correspond:

P_i = F(p_i),   i = 1, ..., N.    (1)
With a variety of fitting methods, one fits functions X(x_i, y_i, z_i), Y(x_i, y_i, z_i) and Z(x_i, y_i, z_i) to achieve this. In almost all such methods, computing F(x_i, y_i, z_i) for each grid point in D is laborious. More seriously, when F has a formula in some neatly computable class (other than a group, such as the linear or projective transformations), its inverse F⁻¹ does not. Using the same interpolation scheme to find a function G with

p_i = G(P_i),   i = 1, ..., N    (2)
does not yield a G such that G(F(x, y, z)) = (x, y, z), for most (x, y, z) ≠ (x_i, y_i, z_i). The inverse of one thin-plate spline can be hard even to approximate with another, and similarly for most tractable smooth mappings. Finding a true F⁻¹(X, Y, Z) requires a CPU-costly search, for each P_i. However, since
• to transform a polygonal object with vertices x_1, ..., x_M one usually renders the corresponding polygons with vertices F(x_1), ..., F(x_M), while
• to transform a volume dataset in (x, y, z)-space to a gap-free one in (X, Y, Z)-space, one selects a grid P_lmn = {(X_0 + lδX, Y_0 + mδY, Z_0 + nδZ)} in (X, Y, Z)-space and finds the values of D at the points F⁻¹(P_lmn),
a mixed display thus needs matched F and F⁻¹ for agreement. If internal surfaces have been constructed as in [5], for volume data D_r from a reference brain, one would like to morph them consistently with D_r to the space of a patient's dataset. An F found numerically, such as the deformation when retracting the brain, has no fast formula to be embedded in a search routine. Solving for F⁻¹(X, Y, Z) becomes worse yet.
These problems shrink with F replaced by a piecewise linear (PL) approximation F̂. A PL approximation
to a curve c(x) fixes it on corners and linearly interpolates between them; a linear³ interpolation of c(x) for x two-dimensional is fixed by three points, so one divides x-space into triangles and fixes c(x_k) on their vertices, replacing a smooth surface by the triangulated kind drawn by surface renderers. In 3D, a linear map is fixed by values on four points, the corners of a tetrahedron. (A domain with more vertices, such as a cube, requires more complex interpolation such as the trilinear maps in [4], which required three minutes rendering per frame.) Much of the linear interpolation can be done in the graphics hardware: if a slice S meets a tetrahedron T in a triangle τ with corners c_1, c_2, c_3, the 2D linear interpolation on τ from values interpolated at c_1, c_2, c_3 along the edges of T coincides with 3D linear interpolation direct from the vertices of T. Passing to 3DTextures (a) the (X, Y, Z) coordinates of F̂c_1, F̂c_2, F̂c_3, to specify where the triangle is to be drawn, and (b) the (x, y, z) coordinates of c_1, c_2, c_3 themselves as texture coordinates, renders points (X, Y, Z) in the triangle between F̂c_1, F̂c_2, F̂c_3 with the texture values at points (x, y, z) linearly interpolated between c_1, c_2, c_3: exactly the points F̂⁻¹(X, Y, Z). Moreover, suppose one has an array of "abstract" vertices v_1, ..., v_K without locations, and abstract tetrahedra T_1, ..., T_H with each T_h being a 4-tuple v_h1, ..., v_h4 of the v_i, and correspondences of the sets {v_i} with point sets {x_i} (usually including the landmark sets {p_i}, so that F̂ will share F's exact fit there) and {F(x_i)}. If the resulting geometric tetrahedra T_(x,y,z) in (x, y, z)-space do not overlap, and nor do the T_(X,Y,Z), the PL map Ĝ interpolated between the values Ĝ(F(x_i)), required to equal x_i, is exactly F̂⁻¹. A PL map is thus rapidly computable with available hardware support, and its inverse is equally available, once the F(x_i) (typically a few hundred, against the L × M × N ≈ 10⁶ or so grid points used in a direct volume transformation by F) have been determined. It is of course less accurate, just as a polygonal surface is less precise than a spline patch computed polynomially at every pixel: but polygonal graphics is the standard, for performance reasons.
³ Strictly an affine interpolation, unless 0 is mapped to 0; but we follow the common abuse of language.
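For concreteness, here is a minimal sketch (ours, not the original implementation) of evaluating such a PL map on one tetrahedron by barycentric interpolation; swapping the roles of the two corner sets evaluates the inverse, exactly as argued above. All names are illustrative.

```cpp
#include <array>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Determinant of the 3x3 matrix whose rows are u, v, w.
static double det3(const Vec3& u, const Vec3& v, const Vec3& w) {
    return u.x * (v.y * w.z - v.z * w.y)
         - u.y * (v.x * w.z - v.z * w.x)
         + u.z * (v.x * w.y - v.y * w.x);
}

// PL map on one tetrahedron: express p in barycentric coordinates with respect to
// the corners src[0..3], then recombine those weights with the corners dst[0..3].
// With src = unmorphed corners x_i and dst = morphed corners F(x_i) this evaluates
// F-hat; called with the two arrays swapped it evaluates F-hat^{-1}.
Vec3 plMap(const std::array<Vec3, 4>& src, const std::array<Vec3, 4>& dst, const Vec3& p) {
    Vec3 e0 = sub(src[0], src[3]), e1 = sub(src[1], src[3]), e2 = sub(src[2], src[3]);
    Vec3 ep = sub(p, src[3]);
    double D  = det3(e0, e1, e2);          // non-zero for a tetrahedron not crushed flat
    double w0 = det3(ep, e1, e2) / D;      // Cramer's rule for the barycentric weights
    double w1 = det3(e0, ep, e2) / D;
    double w2 = det3(e0, e1, ep) / D;
    double w3 = 1.0 - w0 - w1 - w2;
    return { w0 * dst[0].x + w1 * dst[1].x + w2 * dst[2].x + w3 * dst[3].x,
             w0 * dst[0].y + w1 * dst[1].y + w2 * dst[2].y + w3 * dst[3].y,
             w0 * dst[0].z + w1 * dst[1].z + w2 * dst[2].z + w3 * dst[3].z };
}
```

The raster hardware performs the same interpolation implicitly when the slice polygons are drawn with paired texture and display coordinates, as described in Section 3.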
3. The basic algorithm
We assume that the points (X, Y, Z) = F(x, y, z) with (x, y, z) in the domain of D are roughly centred on display-world coordinates (X, Y, Z) = (0, 0, 0), and that the viewpoint is located on the positive Z-axis at a higher Z-value than any of these. (If not, replacing F by E ∘ F for a suitable Euclidean motion E, composing F with the 4 × 4 matrix for E, reduces the problem to this case.) The morphed vertex coordinates, as above, are {(X_i, Y_i, Z_i)}, i = 1, ..., K. The slices to be drawn are thus given by Z = ζ_j, over a range from ζ_MIN = min{Z_i}, via ζ_j = ζ_MIN + jδζ, to ζ_MAX.
The data are:
• The volume data set or 3D texture D, with scalar or vector values u_lmn at each grid point (l, m, n). We can also refer to texture coordinates (x, y, z) between the grid points, since the 3DTexture functions linearly interpolate values u(x, y, z) there.
• A transfer function transforming u values to (r, g, b, a) values for rendering.
• A blending rule for combining the results with the current contents of a pixel.
• An array {x_i}, i = 1, ..., K, of texture coordinates x_i = (x_i, y_i, z_i) giving the vertex positions in the unmorphed, unrotated frame in which the texture D is defined.
• A corresponding array {X_i}, i = 1, ..., K, of coordinates X_i = (X_i, Y_i, Z_i), typically produced by transformation (linear or otherwise) of {x_i}.
• An array {T_h}, h = 1, ..., H, of 4-tuples T_h = (v_h1, v_h2, v_h3, v_h4) of indices v_hi ∈ {1, ..., K}, with v_hi ≠ v_hj for i ≠ j, indicating that tetrahedra having display-space vertices X_{v_hi} with texture coordinates x_{v_hi} are to be rendered.
For brevity we let y stand for a general combination (x, X) = (x, y, z, X, Y, Z) of texture and world coordinates, and y_hj = (x_{v_hj}, X_{v_hj}). "Render(y_1, ..., y_P)" will mean "pass the geometric vertices X_1, ..., X_P as a polygon with texture coordinates x_1, ..., x_P to the 3DTexture rendering function", having set the transfer function and blending rule. In the absence of certain hardware limits (see Section 3.1), the procedure is as follows:
(1) Quicksort [12] each tetrahedron's v_hj so that Z_{v_hi} ≤ Z_{v_hj} whenever 1 ≤ i < j ≤ 4.
[Figure 2 shows the eight cases, classified by where ζ falls among the sorted vertex heights z_1 ≤ z_2 ≤ z_3 ≤ z_4, each with the linear-interpolation formulas y = v_i + ((ζ − z_i)/(z_j − z_i))(v_j − v_i) giving the corners of the resulting triangle or quadrilateral.]
Fig. 2. The ways a Z = constant plane can meet a non-degenerate tetrahedron in non-zero area, and the combinations y_i of its vertices' texture/spatial coordinates v_i = (x_i, X_i), i = 1, ..., 4.
(2) Quicksort the tetrahedra themselves by lowest point, so that Z_{v_g1} ≤ Z_{v_h1} whenever 1 ≤ g < h ≤ H.
(3) For slice levels ζ_MIN ≤ ζ_i ≤ ζ_MAX:
• Update h_min = 1 + max{h | Z_{v_h4} < ζ_i}, the index giving the Z-lowest tetrahedron that can meet the slice {Z = ζ_i} in a set of non-zero area, and h_max = min{h | Z_{v_h1} ≥ ζ_i} − 1, giving the highest. Step 2 makes h_max trivial to update; h_min involves a slightly longer search upward from the previous h_min − 1.
• For indices h_min ≤ h ≤ h_max, test whether Z_{v_h1} < ζ_i < Z_{v_h4} with strict inequalities. For each such h, render the (y_1, ..., y_P) found in one of the eight cases in Fig. 2.
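As a concrete illustration, here is a compact sketch of Steps 1-3 in C++ with legacy OpenGL calls. It is ours, not the paper's code: the paper's implementation used SGI's OpenGL texture extensions, the 3D texture is assumed already loaded and enabled here, and the h_min/h_max bookkeeping is replaced by a simpler per-tetrahedron skip. The generic triangle and quadrilateral cases of Fig. 2 come out of one edge-interpolation routine; the equality cases degenerate to repeated corners.

```cpp
#include <GL/gl.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Combined coordinates y = (x, X): texture (unmorphed) and display (morphed).
struct TexturedVertex { double x, y, z, X, Y, Z; };
struct Tetrahedron    { TexturedVertex v[4]; };

// Point on edge a-b whose display Z equals zeta, interpolating both coordinate sets.
static TexturedVertex lerpAtZ(const TexturedVertex& a, const TexturedVertex& b, double zeta) {
    double t = (zeta - a.Z) / (b.Z - a.Z);
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z),
             a.X + t * (b.X - a.X), a.Y + t * (b.Y - a.Y), zeta };
}

// Intersection polygon of the plane Z = zeta with a tetrahedron whose vertices are
// already sorted so that v[0].Z <= v[1].Z <= v[2].Z <= v[3].Z (Step 1).
static std::vector<TexturedVertex> slicePolygon(const TexturedVertex v[4], double zeta) {
    if (zeta <= v[0].Z || zeta >= v[3].Z) return {};                    // empty or zero area
    if (zeta < v[1].Z)                                                  // triangle near lowest vertex
        return { lerpAtZ(v[0], v[1], zeta), lerpAtZ(v[0], v[2], zeta), lerpAtZ(v[0], v[3], zeta) };
    if (zeta < v[2].Z)                                                  // quadrilateral in middle band
        return { lerpAtZ(v[0], v[2], zeta), lerpAtZ(v[0], v[3], zeta),
                 lerpAtZ(v[1], v[3], zeta), lerpAtZ(v[1], v[2], zeta) };
    return { lerpAtZ(v[0], v[3], zeta), lerpAtZ(v[1], v[3], zeta),      // triangle near highest vertex
             lerpAtZ(v[2], v[3], zeta) };
}

// Steps 1-3, ignoring the texture-memory limits of Section 3.1.
// Texture coordinates are assumed already normalized to the [0,1] range of D.
void renderMorphedVolume(std::vector<Tetrahedron>& tets, double zMin, double zMax, double dz) {
    for (Tetrahedron& t : tets)                                         // Step 1: sort vertices by Z
        std::sort(t.v, t.v + 4,
                  [](const TexturedVertex& a, const TexturedVertex& b) { return a.Z < b.Z; });
    std::sort(tets.begin(), tets.end(),                                 // Step 2: sort by lowest vertex
              [](const Tetrahedron& a, const Tetrahedron& b) { return a.v[0].Z < b.v[0].Z; });

    glEnable(GL_BLEND);                                                 // back-to-front compositing
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    std::size_t hMax = 0;            // first tetrahedron whose lowest vertex is at or above the slice
    for (double zeta = zMin; zeta <= zMax; zeta += dz) {                // Step 3: far-to-near slices
        while (hMax < tets.size() && tets[hMax].v[0].Z < zeta) ++hMax;
        for (std::size_t h = 0; h < hMax; ++h) {
            if (tets[h].v[3].Z <= zeta) continue;                       // entirely below this slice
            std::vector<TexturedVertex> poly = slicePolygon(tets[h].v, zeta);
            if (poly.size() < 3) continue;
            glBegin(GL_POLYGON);                                        // "Render(y_1, ..., y_P)"
            for (const TexturedVertex& p : poly) {
                glTexCoord3f((GLfloat)p.x, (GLfloat)p.y, (GLfloat)p.z); // unmorphed data position
                glVertex3f((GLfloat)p.X, (GLfloat)p.Y, (GLfloat)p.Z);   // morphed display position
            }
            glEnd();
        }
    }
}
```

Passing each polygon with paired display and texture coordinates is what lets the raster hardware evaluate the F̂⁻¹ lookup of Section 2 in parallel.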
3.1. Hardware considerations
Hardware support for 3D’IWures is currently unique to Silicon Graphics machines, and some factors specific to this support are relevant to performance. A set of triangles renders faster if it is given in a connected way, as a strip or fan, so that successive triangles share vertices (and hence the computations on them). However, we have found no way to present the triangles within a layer in such connected subsets that does not Izost more in CPU overhead than it saves in geometry engine costs. A more global consideration is that the “texture memory” subsystem usually cannot hold the entire data set at one time. Loading a new subset is timeconsuming, so the data must be rendered in blocks: ) typically, sets B, Of data phtl~S Dk = { (uijk) (i,j) E [O,L] x [O,M]} form < k < n. If these planes coincide with {Z = constant} planes in display coordinates - that is, if F( x, y, z ) has a Z-component which is a function of z alone - the basic algorithm above can be used unchanged (Fig. 3 (a) ) . Points on a rendering slice S with constant Z correspond to texture values interpolated between the same two data planes Dk and &+I, so by loading successive blocks that overlap ‘by one plane we can arrange that each slice uses data from one block only. However, if F is a more general transformation (Figs. 3( b,c) ), a tetrahedron-slice cr in the slice S may cross the: boundary between loadable data blocks. If the blocks are rendered in order B,,,,,,,,,, B,,,,, . . .,
Fig. 3. For data blocks B_1, ..., B_N, the boundary between F(B_i) and F(B_{i+1}) may coincide with Z = constant planes (a); for linear F, with general planes aX + bY + cZ = d, as in (b); or, for general deformations, with curved surfaces (c).
and σ has a corner p with F⁻¹(p) ∈ B_bottom, and also a corner q with F⁻¹(q) ∈ B_next, part of it must be rendered while B_bottom is loaded, part later when B_next is in texture memory.
Fig. 4. Data volume of a human head rendered with four different transfer functions, using only rotation.

(The use of tetrahedra big enough to cross more than one block boundary in (x, y, z)-space would allow σ to need more than two rendering steps.) If F is merely a rotation, or a more general linear map as in Fig. 3(b), the conditions m_0 ≤ z and z < m_1 correspond to linear inequalities m_0 ≤ z(X, Y, Z) = aX + bY + cZ + d (say) and aX + bY + cZ + d < m_1, which can be imposed as display-coordinate clipping conditions excellently supported in hardware. With these clipping planes passed to the hardware, perform Step 3 of the basic algorithm for each data block in turn. If Z(x, y, z) = αx + βy + γz + δ (say) has γ > 0, load the blocks in order of increasing z; otherwise, in decreasing order. Each block is still loaded exactly once.
Existing 3DTexture implementations support no clipping expressed in texture coordinates, not even the natural operation of clipping to the currently loaded texture memory. Thus for a general non-linear F, as in Fig. 3(c), one must clip each tetrahedron-slice σ in
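A hedged sketch of how such display-coordinate clipping might be set up with standard OpenGL clip planes follows; the coefficient names a, b, c, d follow the text, and computing them from the inverse of the linear F is assumed done elsewhere.

```cpp
#include <GL/gl.h>

// Restrict rendering to the block whose texture-space z lies in [m0, m1].
// For linear F, texture z is an affine function of display coordinates,
//     z(X, Y, Z) = a*X + b*Y + c*Z + d,
// so the block boundaries become two half-spaces enforced by hardware clip
// planes. Call this with the same modelview matrix in effect as when the
// slice polygons are drawn, since glClipPlane interprets the equation in the
// current object coordinates.
void clipToBlock(double a, double b, double c, double d, double m0, double m1) {
    GLdouble lower[4] = {  a,  b,  c,  d - m0 };   // keeps points with z(X,Y,Z) >= m0
    GLdouble upper[4] = { -a, -b, -c,  m1 - d };   // keeps points with z(X,Y,Z) <= m1
    glClipPlane(GL_CLIP_PLANE0, lower);
    glClipPlane(GL_CLIP_PLANE1, upper);
    glEnable(GL_CLIP_PLANE0);
    glEnable(GL_CLIP_PLANE1);
}
```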
software. Any suitable scheme may be used; our implementation used the Sutherland-Hodgman clipping algorithm.
Moreover, in this case there can be conflict between z-ordering and Z-ordering. Along the Z-axis in Fig. 3(c), corresponding to a particular pixel in the final display, contributions from the data blocks come in order B_2, B_1, B_2. These contributions must be blended in that order, so B_2 must be loaded both before and after B_1. The only general way to handle this is to render each {Z = constant} slice S completely before going on to the next; in a worst case like Fig. 3(c), most slices require all four data blocks shown. This means that each requires multiple swaps of data in and out of texture memory. In rendering slice S we create for each block a list Λ of polygons that will require it; parcel each primitive from Step 3 as a whole into one list, or as clipped pieces into more than one; and use one swap for each non-empty list.
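A sketch of that per-slice bookkeeping; the types and callbacks are placeholders (the paper does not give this interface), but the structure, one polygon list Λ per block and one texture swap per non-empty list, follows the text.

```cpp
#include <cstddef>
#include <vector>

// Placeholder for one clipped tetrahedron-slice polygon; the real type would
// also hold the texture and display coordinates produced in Step 3.
struct SlicePiece { std::size_t block; /* plus vertex data */ };

// Render one Z = constant slice with at most one texture-memory swap per block.
void renderSliceByBlock(const std::vector<SlicePiece>& pieces,
                        std::size_t numBlocks,
                        void (*bindBlock)(std::size_t),        // swap a block into texture memory
                        void (*drawPiece)(const SlicePiece&))  // issue the textured polygon
{
    // The per-block lists Lambda: a primitive goes in whole into one list,
    // or as clipped pieces into more than one.
    std::vector<std::vector<const SlicePiece*>> lambda(numBlocks);
    for (const SlicePiece& p : pieces)
        lambda[p.block].push_back(&p);

    for (std::size_t b = 0; b < numBlocks; ++b) {
        if (lambda[b].empty()) continue;   // unused blocks cost nothing on this slice
        bindBlock(b);                      // one swap for each non-empty list
        for (const SlicePiece* p : lambda[b])
            drawPiece(*p);
    }
}
```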
Fig. 5. Data volume of the human head morphed to a macaque, rendered with the four different transfer functions.
4. Image quality
We tested this rendering scheme on a 128 × 128 × 137 data volume, one of those created in a study [2] of morphing between different anthropoid heads, using biologically homologous landmark points. Photographic data were embedded within the volume at surface points, coded within part of the intensity value bits, so that the transfer function could return
• the photographic colour at a surface point, transparency elsewhere,
• opacity for all non-air voxels,
• opacity for voxels with "bone" density values, transparency elsewhere,
• translucency.
Figs. 4 and 5 show the results, rendered in the first
set with F simply a rotation from the human scan, in the second set morphed to map its landmarks to those of a macaque.
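A hedged sketch of two of the transfer functions listed above. The 8-bit voxel layout (a surface flag bit plus a colour index, with the remaining bits otherwise read as density) and the threshold value are purely illustrative; the paper says only that photographic colour is coded within part of the intensity value bits.

```cpp
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// Hypothetical layout: top bit flags a photographed surface voxel, the low
// 7 bits are either a palette index or a density value. Not the paper's coding.
constexpr std::uint8_t kSurfaceFlag = 0x80;

// "Photographic colour at a surface point, transparency elsewhere."
RGBA photoTransfer(std::uint8_t v, const RGBA palette[128]) {
    if (v & kSurfaceFlag) { RGBA c = palette[v & 0x7F]; c.a = 255; return c; }
    return {0, 0, 0, 0};
}

// "Opacity for voxels with 'bone' density values, transparency elsewhere."
// The threshold and colour here are assumptions, not values from the paper.
RGBA boneTransfer(std::uint8_t v) {
    std::uint8_t density = v & 0x7F;
    if (!(v & kSurfaceFlag) && density > 100) return {230, 225, 210, 255};
    return {0, 0, 0, 0};
}
```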
5. Performance and optimization
As remarked in Section 3.1, in rendering a non-linearly morphed volume each slice may require each texture block. This creates a performance penalty proportional to the number of times one has to swap texture blocks. An efficient C++ implementation, using OpenGL™ [13] texture extensions, morphs and renders a 128 × 128 × 137 unmorphed volume divided into 891 tetrahedra at the rate of approximately 2 frames in 3 seconds on an SGI Maximum Impact
workstation. These rendering rates will of course be faster by a factor of 2 or more on high-end SGI systems with more texture memory and faster graphics subsystems. In the case of morphed volumes, the rendering rate depends on the number of texture memory swaps as well as the number of primitives straddling texture boundaries. In bad cases like Fig. 3(c), where many cutting planes require every texture block, the rendering rate is about 1 frame every 4 to 5 seconds. When the block boundaries are near to Z = constant planes, the rendering rate is equivalent to that obtained in the unmorphed case.
SGI provides extensions to OpenGL that enable efficient texture memory operations. For systems with a limited amount of texture memory, OpenGL establishes a "working set" of textures resident in texture memory, which may be bound to a texture target much more efficiently than non-resident textures. We implemented careful texture memory management to achieve high rendering rates in most instances. In the case of nonlinear morphing, the lists Λ mentioned in Section 3.1 avoid referring to each tetrahedron more than once per slice.
These times do not give interactive speed: but the display of every 15th slice (Fig. 6) gives a caricature image adequate in (for instance) interactively choosing an angle of view or a point on the man-macaque continuum, absent texture memory limitations. Since 3DTexture rendering is an extremely parallelizable task, its hardware acceleration is proceeding rapidly, and can be expected soon to allow the scheme described above to provide real-time interaction with full-fidelity displays.
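A minimal sketch of the kind of working-set bookkeeping referred to here, using the standard OpenGL 1.1 texture-object queries rather than SGI's own extensions; the texture objects holding the data blocks are assumed created elsewhere.

```cpp
#include <GL/gl.h>
#include <vector>

// Check whether the texture object holding one data block is currently resident.
bool blockResident(GLuint blockTexture) {
    GLboolean res = GL_FALSE;
    return glAreTexturesResident(1, &blockTexture, &res) == GL_TRUE;
}

// Ask the driver to keep the blocks needed for the next few slices resident.
void preferBlocks(const std::vector<GLuint>& blockTextures) {
    std::vector<GLclampf> prio(blockTextures.size(), 1.0f);   // highest priority
    glPrioritizeTextures((GLsizei)blockTextures.size(), blockTextures.data(), prio.data());
}
```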
Fig. 6. The skin and skull versions of the "human" shape in Fig. 4, with only one slice in 15 rendered so that it can rotate at interactive speed. As hardware improves, such images reduced for speed will converge on full-detail rendering.
6. Conclusions

With sufficient texture memory, volume rendering speeds for a PL-morphed display can be within an order of magnitude of real-time interaction for data sets of common medical size, and can be expected soon to achieve it. The range of applications for this should encourage the acceleration and price/performance improvement of 3DTexture display hardware.
An immediate step that would largely remove the performance penalty for texture memory limits, at least for linear transformations such as rotations, is the implementation in OpenGL of clipping defined in texture coordinates as well as in geometric coordinates. For the case important here, clipping by the boundary of the currently loaded block, this would be a simple, inexpensive step.

Acknowledgements

We are grateful to our colleagues: H.T. Nguyen for his help in providing a stable software environment, Heng Pheng Ann for his registration of the photo images to the volume datasets, and Joan Richtsmeier of Johns Hopkins University for the landmarked volume datasets. We recall numerous useful discussions with Meiyappan Solaiyappan, and Raghu Raghavan's support throughout, and we thank the National Science and Technology Board of Singapore for funding.
References

[1] T. Cullip, U. Neumann, Accelerating volume reconstruction with 3D texture hardware, UNC Tech. Rept. TR93-0027, 1993.
[2] S. Fang, R. Raghavan, J. Richtsmeier, Volume morphing methods for landmark based 3D image deformation, in: Proc. 1996 SPIE Medical Imaging, SPIE 2710, Newport Beach, CA, 1996.
[3] S. Fang, R. Srinivasan, S. Huang, An efficient volume rendering algorithm by octree projection, Manuscript.
[4] A. Lerios, C.D. Garfinkle, M. Levoy, Feature based volume metamorphosis, in: Proc. SIGGRAPH 1995, pp. 449-456.
[5] W.L. Nowinski, A. Fang, B.T. Nguyen, R. Raghavan, R.N. Bryan, J. Miller, Talairach-Tournoux/Schaltenbrand-Wahren based electronic brain atlas system, in: Proc. CVRMed '95, Lecture Notes in Computer Science, vol. 905, Springer, Berlin, 1995, pp. 257-261.
[6] K. Perlin, E.M. Hoffert, Hypertexture, Computer Graphics (SIGGRAPH '89 Proc.) 23 (1989) 253-262.
[7] T. Poston, D.T. Ng, W.C. Ng, S.J. Ng, Even covering of irregular volumes by tetrahedra, in preparation.
[8] O. Wilson, A. van Gelder, J. Wilhelms, Direct volume rendering via 3D textures, UC Santa Cruz Tech. Rept. UCSC-CRL-94-19.
[9] J. Wilhelms, A. van Gelder, A coherent projection approach for direct volume rendering, Computer Graphics (SIGGRAPH '91) 25 (4) (1991) 275-284.
[10] R. Yagel, A. Kaufman, Template-based volume viewing, in: Proc. Eurographics '92, 1992.
[11] K. Yamaguchi, T.L. Kunii, K. Fujimura, H. Toriya, Octree-related data structures and algorithms, IEEE Comput. Graphics Appl. 4 (1) (1984) 53-59.
[12] D.E. Knuth, The Art of Computer Programming, Vol. 3: Sorting and Searching, Addison-Wesley, Reading, MA.
[13] J. Neider, T. Davis, M. Woo, The OpenGL Reference Manual, ISBN 0-201-63274-8.
Kartik Venkataraman received his Bachelor's degree in Electrical Engineering from the Indian Institute of Technology, Kharagpur, India in 1986; he received an M.S. in Electrical and Computer Engineering from the University of Massachusetts, Amherst, MA in 1988. He was with the Intel Corporation, Santa Clara, CA, as a design engineer and computer architect in the i860 processor design group. Later he worked on developing 3D graphics software for i860 and x86 based systems. From 1993 to 1995 he was with CieMed, Institute of Systems Science, Singapore, where he worked on medical imaging and visualization applications. Since 1996 he has been working with the Microcomputer Graphics Lab (Microcomputer Research Labs) in Intel Corporation, Santa Clara, CA, on physically-based modeling of 3D and volumetric objects. His current research interests are in algorithms and architectures for computer graphics and visualization, and the use of numerical analysis and mathematical modeling techniques in modeling and simulating real-world phenomena.

Tim Poston was born in England, and received his Ph.D. in Mathematics from the University of Warwick in 1972, under Chris Zeeman. He has since worked in Rio de Janeiro, Rochester, NY, Oporto, Geneva, Stuttgart, Charleston, SC, UCSC, UCLA, POSTECH (South Korea), and since 1992 in the Centre for Information-enhanced Medicine at the National University of Singapore. He has worked on geometric and dynamical problems from physics, economics, medicine, archaeology, chemistry, computer vision and graphics, virtual reality, psychology, geography, elasticity, drug delivery, and mathematics. He is a co-author of "Tensor Geometry" (Springer) and "Catastrophe Theory and its Applications" (Dover).