Systems control for a micro-stereolithography prototype

Microprocessors and Microsystems 22 (1998) 67–77

Systems control for a micro-stereolithography prototype S. Huang*, M.I. Heywood, R.C.D. Young, M. Farsari, C.R. Chatwin School of Engineering, University of Sussex, Falmer, Brighton BN1 9QT, UK Received 31 March 1997; received in revised form 17 March 1998; accepted 6 April 1998

Abstract

The system control of a micro-stereolithographic fabrication system is detailed, with specific attention given to the opto-electronic and electro-mechanical interfaces. The National Instruments LabVIEW™ environment is demonstrated to provide a good basis for rapid application development of the necessary control structures. In particular, it provides the basis for a highly flexible control system, typical activities of which include: low-level interface formats; icon-based systems design; and support for good human–computer interfacing. The resulting system has the capability to provide a two orders of magnitude improvement over that currently available. © 1998 Elsevier Science B.V.

Keywords: Rapid prototyping; Micro-stereolithography; Software systems control; Rapid applications development

1. Introduction

Component prototyping using stereolithographic techniques was first demonstrated 12 years ago, with the first commercial example of a fast free-form fabrication technology appearing in 1987; this system was developed by 3D Systems Inc. Since then the market has expanded to approximately 10 major suppliers using systems based on several different technologies to achieve component build: stereolithography, laminated object manufacture, fused deposition modelling, selective laser sintering and solid ground curing. With the exception of the latter process, all of these methods rely on scanning either a laser beam or a material deposition head to facilitate the curing of slices representing planar cross-sections of the target part. The part as a whole is formed from the repeated application of layers of virgin material and the layer curing process. The principal aim of the system described in this paper is to provide control of a new rapid prototyping process capable of producing components with small to micro dimensions, with applications in industrial, scientific and medical products [1]. Specifically, a new stereo-photolithography technique is employed in which a dynamic opto-electronic mask, principally a spatial light modulator, is used to create 3D components using a completely planar (layer-by-layer) process of exposure [1,2]. This represents a major advance over the aforementioned methods because layers are now

* Corresponding author

0141-9331/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved. PII: S0141-9331(98)00070-2

concurrently cured over the entire surface, as opposed to incrementally building the layer itself in a manner equivalent to the line-scanning procedure by which images on a TV screen are constructed. It is thus possible to build components with gross dimensions in the range of 50 µm to 50 mm (micro to small form factors), and feature sizes as small as 5 µm with a resolution of 1 µm; an order of magnitude smaller than the technologies currently available. The overall opto-electronic system employed for this purpose is shown in Fig. 1. The optical system operates in the ultraviolet using the 351/363 nm wavelengths of an argon ion laser. Such a source reliably produces a Gaussian beam and provides flexible operation at these wavelengths. The principal physical components of the system consist of: the multi-lens system; an ultraviolet laser light source; a spatial light modulator; a high resolution translation stage; and an optical shutter. The general mode of operation may be summarised in six steps:

1. describe the part using a suitable computer aided design environment supporting solid or surface modelling;
2. orientate and create planar slices of the target part and initialise the photolithographic sub-systems;
3. retrieve a slice of the target part as a bitmap image and display it on the opto-electronic mask;
4. open the shutter and initiate the timer count-down for the photo-polymer cure time;
5. close the shutter and initiate the resin re-layering process;
6. loop to step 3 if further slices exist.

Fig. 1. Overview of control system architecture.

Table 1
Interface and performance specifications

Component                     Interface                           Requirements                          Supplier
Opto-electronic mask (SLM)    Binary/grey level video interface   480 × 640 (VGA), 600 × 800 (SVGA)     Coreco
Translation stage             PCI card                            20 nm resolution, 5 µm step size,     Coherent-Ealing Electro Optics
                                                                  0.1 µm repeatability
Optical shutter               RS 232                              2 ms min exposure time, delay         Optilas/UniBlitz
                                                                  time 1 ms, rep. rate 0–400 Hz
Digital dial test indicator   Custom serial interface             1 µm resolution                       Mitutoyo

The spatial light modulator is the critical interface between the electronic and physical embodiments of the part. Three-dimensional computer aided design images from a solid or surface modeller¹ are orientated and sliced at uniform increments along the chosen plane. Each slice is then converted to a bitmap format and used to drive the spatial light modulator. This is an optical mask placed within the path of a coherent light source, in this case an argon ion laser. The subsequent imaging optics are responsible for transferring this image to the site of the component build. Formation of the component takes place in a resin bath. Specifically, the resin polymer and incident light are selected such that the incident light generates free radicals, hence initiating the curing of selective regions of the resin. The period of exposure is controlled by a shutter immediately following the initial laser source. Once curing of a layer is complete, the component is 'dipped' further into the bath such that a new layer of liquid resin is formed over the cured surface. The process then repeats the same sequence of operations for the next layer until the component is completed.

¹ The DUCT™ surface modeller with TRIFIX™ and additional component slicing software, also supplied by DELCAM Ltd., are used in this case.

Further details of the general procedure associated with the rapid prototyping process may be found in standard texts such as [3]. Patents describe the specific details of the optical [2] and resin recoating [4] systems. The remainder of the paper details the application of the National Instruments LabVIEW™ environment to satisfy the system control and user interfacing requirements of the above system. Specific detail is provided of the techniques necessary to drive the opto-electronic system and control the translation operation with sufficient accuracy to satisfy the above performance specification.

2. System requirements

The principal components of the system and the application sequencing were identified above and in Fig. 1. The relationship between these elements and the associated input/output requirements is summarised in Table 1, along with a sample requirements specification. In short, it is apparent that the control system must have the ability to manipulate data in real time using a variety of different formats. For example, although limited to binary or grey level formats, the image

Fig. 2. Basic bitmap manipulation process.

data controlling the characteristics of the opto-electronic mask may be encoded using 1-, 8- or 24-bit schemes. The translation stage responsible for dipping the target part into the resin for recoating, by as little as 5 µm at a time, is interfaced via dynamic link library files. Furthermore, a requirement also exists for image capture, where this is to provide a path for visual closed loop control of the curing process. That is, the mask supplied to the spatial light modulator may be dynamically altered to correct partial curing of the present layer before applying the mask for the next. The chief requirements for any development system are therefore summarised as support for: multiple video channels; good I/O handling under the Windows NT environment; and extensive support for icon-based development of the various control modules. The constraint on the operating system results in the additional requirement that widespread support must exist for development based on '.DLL' files; without this, any hardware interfacing becomes very difficult. Finally, a solution based on program code design was not desirable, the engineering group having experienced extensive software maintenance ('software rot') problems with changing research staff and the inherently low-level nature of some aspects of the application.

The specific components used for each interface are also summarised in Table 1. With respect to the spatial light modulator interface, the following additional comments are made. Both spatial light modulators are driven by standard VGA or SVGA interfaces; hence this could just as easily be satisfied by using the standard video output of the host computer. However, an extra video card is

necessary in order to support concurrent access to both the computer monitor and the spatial light modulator. Furthermore, it is envisaged that support for the above image-based closed loop control will also be necessary. Experience with the Windows NT operating system (versions 3.4 and 4.0) indicated that attempting to integrate two standard PCI video cards within the same host was not the fastest development path. The Coreco package therefore represented the most integrated solution. In the following, the development of LabVIEW™ applications to support the major I/O interfaces and system control/sequencing is detailed.

2.1. Spatial light modulator application

The objective of this interface is to send a specific sequence of level-slice data (bitmap format), as defined by the CAD representation of the component, to the spatial light modulator. The spatial light modulator displays a mask, the image of which appears on the resin bath surface to build a layer of the micro-component. As multiple bitmap file formats exist, the program should be capable of universally reading and decoding several bitmap file formats. This is achieved by reading information from the corresponding bitmap file header before the transfer is initiated; briefly summarised in Fig. 2. Control processes requiring access to the frame-grabber board may only do so via its driver, the Oculus Tci-ODX (from here on referred to as ODX) driver [5]. That is to say, the ODX driver represents the interface between the hardware and the application software. The driver is supplied as a Windows NT dynamic link library,

Fig. 3. LabVIEW video interface control.

Odtci.dll. This dynamic link library provides support for all ODX functions and parameter registers as required by the Ultra-II frame-grabber. Specifically, all control activities which require access to the frame-grabber have to be expressed in terms of ODX function calls, where these are defined by the dynamic link library; Fig. 2. Furthermore, all of the functions defined in the third-party library (Odtci.dll in this case) are called using the <Call Library Function> node in LabVIEW.

In order to 'load' the frame-grabber with a bitmap image, it is necessary to work at the pixel level. Specifically, this application requires support for both 8-bit grey scale and binary image formats. This implies that any incoming file format must be matched to the 8-bit format of the hardware. The hardware–software interface is therefore associated with three function calls to the Odtci.dll dynamic link library; Fig. 3. The first function, odxbind, locates all ODX drivers already installed and extracts the entry point of each ODX driver; these entries are stored in an internal table. In this application there is only one ODX driver (one Ultra-II card) in the system; however, it is possible to manage up to eight drivers [5]. The second function, setdmajor, defines the current ODX driver from the list of available drivers. This function must be called each time the driver addressed changes; however, as only one card exists in this application, it is only necessary to use it once at the beginning of the program (or even to dispense with it entirely, as the single driver is selected by default). The third function, select_minor, selects a minor device, i.e. a particular frame-buffer. As described above with regard to the spatial light modulator, this is performed by specifying a number from 0 to 3 to select the primary, host, secondary or overlay frame-buffer, respectively.

2.2. Configuration and initialisation

Configuration of the host memory is performed by setting the memory format and buffer type. The monitor display type is defined by setting the memory buffer type, size and mode. This process is carried out using an existing program, WconFig.exe, supplied as part of the Ultra-II package [5]. In addition, depending on the frame-buffer co-ordinate system, it is necessary to set the processing window using pwin. Furthermore, before a bitmap file is processed and sent, the status of some of the registers in the frame-grabber card is polled. Such register parameters are read using opr_inq and set to a specified value using opr_set (both provided by the manufacturer); Fig. 4. By way of an example, if opr_inq(PIXSIZ) returns '8', then a resolution of 8 bits per pixel is set for the memory size of the monochrome buffer type attribute.

The sequencing associated with the entire procedure is implemented by the LabVIEW™ sequence structure, which is depicted as a frame of film (Fig. 4). This may consist of one or more sub-diagrams, or frames, for sequential execution. Determining the execution order of a program by arranging its elements in sequence is called control flow, as indicated by the numbers 0, 1, 2 on the top bar of the three block diagrams in Fig. 4. The fourth diagram represents the user interface served by the third sequence diagram.

2.3. Conversion of different format bitmap images to 8-bit grey level

As introduced in the previous section, this application requires the conversion of bitmap image data to a format compatible with the Coreco frame-grabber. This section explains the detailed conversion procedure for two modes, converting 1-bit monochrome and 24-bit RGB colour

Fig. 4. Control of temporal sequences: bitmap register initialisation.

Fig. 5. Auto-configuration using bitmap header file.

images to a grey-level format. Both the basic conversion concept and the LabVIEW™ block diagram are discussed in the following sections.

2.3.1. 1-Bit monochrome image
From the function <Read from I16 File.vi>, shown in Fig. 5, the image data are retrieved in a 16-bit format. This implies that once a word is read from the bitmap file, the individual bits are written to the frame-grabber card on a bit-by-bit basis. This preserves the direct correspondence between each bit in the initial monochrome bitmap image file and the frame-buffer pixels. The basic conversion concept may also be expressed in a C program such as the following:

    Image_size = bfSize - bfOffBits;   /* total 1-bit image size            */
    N = Image_size / 2;                /* number of words actually read:    */
                                       /* LabVIEW reads the data as 2-byte  */
                                       /* integers                          */
    int NewData[8*N];                  /* space for the new data: 8*N words */
    for (i = 0; i < N; i++) {
        for (j = 0; j < 8; j++) {
            Odd  = Data[i] & (1 << (15 - 2*j));  /* mask odd bits (1,3,...,15)  */
            Even = Data[i] & (1 << (14 - 2*j));  /* mask even bits (0,2,...,14) */
            ShiftOdd  = 7 - 2*j;
            ShiftEven = 14 - 2*j;
            if (ShiftOdd > 0)
                Odd = Odd >> ShiftOdd;           /* shift data to bit 8 of the  */
            else                                 /* 16-bit memory word          */
                Odd = Odd << abs(ShiftOdd);
            Even = Even >> ShiftEven;            /* shift data to bit 0 of the  */
                                                 /* 16-bit memory word          */
            IndexNumber = 8*i + j;               /* new data index number       */
            NewData[IndexNumber] = Odd + Even;   /* new data created and output */
        }
    }

2.3.2. 24-Bit colour image
This conversion procedure is easier than the 1-bit case. The image data have three bytes per pixel, one for each channel of the RGB format, whereas the required grey-level format needs only a single byte. A simple way to extract the luminance from 24-bit RGB is given by

    Lum = (R + G + B) / 3

2.4. Sending a bitmap image to the spatial light modulator (secondary monitor)

Before sending bitmap data to the frame-buffer, the frame-buffer must be cleared whilst leaving the overlay display untouched. This can be implemented by a simple loop over the 800 × 600 pixel area, which is the maximum display resolution in dual monitor mode; Fig. 6. Because the bitmap image data are still 16-bit values after the conversion to grey levels, some work was required to match and load them into a display memory of 8 bits per pixel. This implies the use of masking and shifting to obtain the individual byte values.

The display procedure uses the amov() and wtline() dynamic link library functions to load bitmap data into the

Fig. 6. Initialising the frame buffer.

frame-grabber card by scanning the window from the bottom left, line by line. Three inputs of the wtline() function can be set for the different display modes; Fig. 2. Input (1), shown in the second sequence diagram of Fig. 6, is the line of input data; input (2) represents the display direction (DIR) for each line of data, set to unity so that x(DIR) = 0 and y(DIR) = 1; and input (3) represents the length of the image data line. In this application, the number of pixels in each row should equal the image width, and the number of scan lines should match the image height. The control process can be described by the following brief C program:

    xmin = ymin = 0;
    xlen = biWidth;                /* pixels per row = image width */
    ylen = biHeight;               /* scan lines = image height    */
    DIR = 1;                       /* data line direction: x = 0 and y = 1 */
    for (line = 0; line < ylen; line++) {
        amov(xmin, ymin + ylen - line - 1);  /* move current position to the  */
                                             /* next row of data              */
        wtline(NewData, DIR, xlen);          /* dump the line to the          */
                                             /* frame-grabber (ODX)           */
    }

The control program successfully displays a bitmap image of any supported data format on the secondary monitor, while the overlay mode remains functional. In addition, the current image may be moved to an arbitrary position on the screen by changing the x-axis and y-axis co-ordinates. At this stage the program controls the display of a single monochrome (black and white) bitmap image, at both 640 × 480 and 800 × 600 resolutions, on the spatial light modulator.

3. Image capture The image capture process is also implemented using the Ultra-II frame-grabber board. An incoming image is displayed in a Window of the VDU or secondary monitor by selecting the associated frame buffer of the minor device. An overlay mode may also be used for loading any image or

text to an image capture window for modification or annotation. Before use, the configuration of the frame-grabber is accessed to verify the memory and display modes. For acquisition of image data from the camera, suitable configurations exist for a CCD monochrome camera and the PAL (European) video format, set via the camera type and format parameters. A brief flow chart is given in Fig. 7 for the control procedure of grabbing and freezing an image from a CCD monochrome camera connected to input #1 (detailed in accordance with the particular acquisition configuration [5]).

Fig. 7. Control flow of the frame-grabber.

4. Translation stage

The translation stage plays a central role during the fabrication of 3-D micro-components, ensuring high accuracy and repeatability over step sizes measured in micrometres. A translation stage using DC servo and encoder drives was selected from Ealing Electro-Optics Ltd in accordance with the specification of Table 1. By encoder drive it is implied that DC-driven micrometers fitted with magnetic encoders are used to verify position. Computer control of the encoder-driven actuators is performed by the Encoder Driver card, a PCI bus card, a key feature of which is its ability to determine automatically which modules are present at system configuration. The encoder driver card communicates with the PC through a 4K (4096 byte) memory buffer, normally located in high memory from D000:0000 to D000:0FFF. The speed, acceleration and distance of travel are programmed independently for the actuator.

The control system for the translation stage is presented in Fig. 8. It consists of two sections, an encoder closed loop controller and a position-measurement feedback controller, the latter using the PCI card. Together the pair implement a trapezoidal profile generator, which computes the desired position of the actuator versus time, and a digital compensation filter using a PID control strategy [6]. In addition, in order to overcome the backlash of the geared device and enable the definition of an arbitrary datum, a measurement system using a digimatic gauge indicator (DGI) is applied. The measurement data associated with the DGI are captured through a National Instruments counter-timer I/O card (PC-TIO-10).

Fig. 8. Translation stage control flow diagram.

4.1. Encoder driver module

As indicated in the above section, an incremental encoder provides feedback for closing the position servo loop. During trajectory generation, the servo controller subtracts the actual (feedback) position from the desired (profile generator) position, and the resulting position error is processed by a digital filter to drive the actuator to the desired position [6]. In the position mode of operation, the information on the acceleration, maximum velocity and position is used by the servo controller to effect an acceleration until the maximum velocity is reached, or a deceleration, thus ensuring no overshoot in the final position. (The deceleration rate should be equal to the acceleration rate.)

The encoder driver module uses a digital proportional integral derivative (PID) filter to compensate the control loop. The actuator is held at the desired position by applying a restoring force computed by the standard PID control rule. The following discrete-time equation illustrates the control performed by the servo controller:

    u_n = K_p × E_n + K_i × Σ E_n + K_d × [E_n′ − E_(n−1)′]      (1)

where u_n is the motor control output at sample time n, E_n is the position error at sample time n, the prime (′) indicates sampling at the derivative rate, and K_p, K_i, K_d are filter parameters which may be loaded according to the application requirements. The first, proportional, term provides a restoring force proportional to the position error. The second, integral, term provides a restoring force which grows with time, and thus ensures that the static position error reaches zero. The last, derivative, term provides a force proportional to the rate of change of position error. In general, longer sampling intervals are useful for low-velocity operations.

The actual distance, velocity and acceleration values for the encoder driver are based on the encoder parameters: 60 counts for each rotation of the motor; the motor rotates 485 times for each rotation of the spindle; and the spindle moves 0.5 mm for each complete rotation. The corresponding encoder-count values for distance D_ec, velocity V_ec and acceleration A_ec can be calculated from the following equations:

    D_ec = D_mm × C_mm                      (2)

    V_ec = V_mm × C_mm × T × IBias          (3)

    A_ec = A_mm × C_mm × T² × IBias         (4)

where C_mm is the number of encoder counts per mm, i.e. 60 × 485 × 2 = 58 200, D_mm is the desired distance in mm, V_mm is the desired velocity in mm s⁻¹, and A_mm is the desired acceleration in mm s⁻². In order to obtain more accuracy in the calculation of V_ec and A_ec, the calculated value is multiplied by 65 536 to yield a 32-bit number with a 16-bit integer portion and a 16-bit fractional portion. To preserve accuracy, the calculations of V_ec and A_ec are performed using floating point maths, with the result converted to an integer value.

4.2. Feedback control design with DGI device

The DGI device from Mitutoyo is a digimatic indicator whose positional data are output digitally [7]; the main features are summarised in Table 1. The backlash of the translation stage, caused by the DC motor and high-ratio gearbox, is eliminated using the DGI to measure the distance moved. The component build procedure contains two sections: the first moves the translation stage to a point in space above the required start position; the platform of the translation stage is then moved down to the target datum position. The reversal of motion associated with this procedure introduces the classic backlash characteristic. In the next step, an additional control loop is introduced by way of the DGI device. This provides measurement of the actual position of the stage, thus providing the basis for a compensated encoder count value to drive the translation stage; Fig. 8.

The output data format from the DGI and the corresponding timing chart are shown in Fig. 9. The request signal (REQ) is used as the enable signal for the data and clock output. From Fig. 9, the clock becomes available after REQ is asserted, within a wide time window (10 to 30 ms). The REQ signal should be held in a low state (t > T1 + T2) until locking is achieved, and is returned high before the final (52nd) clock is produced. The output data are read at the positive edge of the clock signal, and the read procedure completes on the 52nd clock cycle.

Fig. 9. DGI data format and timing constraints: (a) output timing; (b) timing sequence chart.

4.3. Control software

The control program for the translation stage using LabVIEW™ is decomposed into several parts: DC servo motor PID control; acquisition of DGI data; instigation of the backlash elimination procedure; and provision of suitable user feedback. The flow diagram is illustrated in Fig. 10. Initially a motion setup program is executed to set the memory for the Encoder Driver card before the main motor control program is used. A controller ID of zero indicates that the hardware initialisation is complete. The base address in PC memory is set from D000H up to DFFFH for the encoder controller.

Fig. 10. Translation stage control flow diagram.

Fig. 11 represents the control algorithm for the translation stage using the DC servo motor. The 'Enab' sub-diagram is used to enable control of the specified moving axis. System configuration, sub-program 'Servo', was summarised above. The moving relative position sub-diagram, 'Rel', is applied to move the translation stage through a target distance by setting a specified number according to Eqs. (2)–(4). The direction of the translation stage is controlled by the sign of this number.

Fig. 11. LabVIEW control of translation stage DC servo.

For the position measurement and user feedback of the translation stage status, the relevant LabVIEW™ structure takes the form of several sequentially nested windows, omitted here for brevity. First, the I/O card is selected as device 1, and the 8-bit Port A is used for DGI data transfer to the PC, while Port B is used for control signals from the PC to the measurement device. After I/O Port B is set, the procedure for resetting the shift registers and enabling REQ is carried out; that is, the DGI data are captured. The next step is to read these data through Port A using the second acquisition sequence. This begins by selecting all of the channels for data transfer to the I/O interface by sending a decode signal to the 74HCT138 device. Next, data are held on the output lines of each 8-bit shift register and placed on the data bus of I/O Port A by a corresponding 3-state gate. All data associated with the feedback control and the DGI readings are displayed in the Front Control Panel of the main LabVIEW™ program. The completed system is illustrated in Fig. 12, the systems interface, and Fig. 13, the hardware system components.

Fig. 12. Overall human–computer interface for the prototype micro-stereolithographic system. A, displays the same slice image as that shown on the SLM LCD; B, displays the relative distance moved by the translation stage, measured using the DGI; C, controls for all translation stage data; D, control procedure for sending a slice image to the SLM; E, control procedure for the optical shutter.

Fig. 13. Hardware sub-components (non-optical) of the micro-stereolithographic rapid prototyping system: (a) translation stage; (b) spatial light modulator (SLM); (c) digimatic gauge indicator (DGI); (d) optical shutter.

5. Conclusion

The system requirements and associated development of a LabVIEW™-based systems controller for a new rapid prototyping process have been detailed. Other than the controller itself, the system is composed of five elements: the multi-lens system; an ultraviolet laser light source; a spatial light modulator; a high resolution translation stage; and an optical shutter. The associated interfacing requirements take the form of: a PCI application interface for the translation stage; a dual mode frame-grabber card for image capture and support of a video channel to drive the spatial light modulator in either VGA or SVGA modes; an RS-232 link for control of the optical shutter; and a combined parallel port and PCI-hosted counter-timer card in support of backlash minimisation and arbitrary definition of the zero datum. The advantages of the LabVIEW™ framework over standard programming languages are attributed to: a faster development cycle (5 months); a high degree of interpretability for first-time developers (icon-based functionality); and comparatively easy integration with third-party products.

Acknowledgements This work was conducted as part of an EPSRC project [1], under the Design and Integrated Production Programme, grant number GR/L31814, with industrial support from: CRL Smectic Technology Ltd., Daewoo Motor Company Ltd., Datum Dynamics Ltd., Delcam International plc., MBM Technology Ltd., MCP Equipment Ltd., Ricardo Consulting Engineers Ltd., The Parker Pen Company and Zeneca Resins. In particular the authors wish to highlight contributions made by: Dr J. Brocklehurst (CRL Smectic Technology Ltd.) who supplied the SLM technologies; Coherent-Ealing Electro-Optics Ltd for support during the


development of the translation stage; and Delcam International plc. who provided the environment for surface modelling and level slicing of the CAD model.

References

[1] C.R. Chatwin, R.C.D. Young, M.I. Heywood, Manufacture of fully three dimensional micro-components, EPSRC proposal — Design and Integrated Production Programme, Grant GR/L31814, 1996.
[2] R.C.D. Young, C.R. Chatwin, M.I. Heywood, Use of dynamic masks for object manufacture, Patent Application No. GB 9615840.7, 1998.
[3] P.F. Jacobs, Rapid Prototyping and Manufacturing: Fundamentals of Stereolithography, SME, 1992.
[4] M.I. Heywood, R.C.D. Young, C.R. Chatwin, Reapplication of materials for object fabrication, Patent Application No. GB 9615839.9, 1998.
[5] Ultra-II user manual v.2.0, 1997; ODX toolkit v.2.02, 1996; Oculus drivers toolkit, 1990.
[6] Motion control API for Windows, v.1.2, 1996; Encoder drivers, v.1.10, 1992; Motor control card instruction manual, 1990.
[7] ID-C Digimatic indicator manual, Mitutoyo, Application note 3055543, 1995.

Shiping Huang graduated with BSc and MSc degrees from the Department of Electrical and Electronic Engineering, Zhejiang University, China, in 1982 and 1988, respectively. During this period he worked as a researcher at Zhejiang University with a specialisation in real-time signal processing and control for power systems. Between 1987 and 1990 he was a lecturer at the same institution. In 1990 he began what was to become a productive association with the University of Sussex. He graduated with a PhD degree from the School of Engineering in 1996; his research focused on the design and implementation of real-time adaptive filters. This period also included sponsorship of his research from the Allen-Bradley Company, Milwaukee, USA. In 1996 he joined the Industrial Informatics and Manufacturing Systems Research Centre as a research fellow with a specific remit for real-time control and systems integration. He has published several journal papers and patents in the fields of real-time control, signal processing, power systems and magnetics.

Malcolm Heywood graduated with a degree in Electrical and Electronic Engineering at Polytechnic South West, Plymouth, UK, in 1990. Throughout the first degree and a year following graduation he worked at Racal Radar Defence Systems Ltd with the Electronic Systems Measurement (ESM) Division for Surface Ships. During this period experience was gained in the systems engineering requirements of ESM projects and the development of related integration software. In 1991 he joined the Neural and VLSI Research Group at the University of Essex, graduating in 1994 with a PhD in electronic systems engineering. From September 1994 to October 1995 he held a research fellow post at Brunel University, investigating the application of soft-computing techniques for signal processing applications in non-destructive testing. In October 1995 he joined the School of Engineering at the University of Sussex as a lecturer. His research interests centre on the application of soft-computing to industrial problems with evolving or uncertain properties and software development practices. He is a member of the IEE and the IEEE.


Rupert Young graduated from Glasgow University in 1984 with a degree in engineering. Until 1993 he was employed within the Laser and Optical Systems Engineering Research Centre at Glasgow, during which time he gained wide experience in optical systems engineering and image/signal processing techniques. He was awarded a PhD degree in 1994 for research into optical pattern recognition. In 1993 he took up a post as a Senior Scientific Officer at the Defence Research Agency, Malvern, where he conducted research into optical processing hardware configurations, spatial light modulator technology and algorithm development for orientation independent object recognition. In April 1995 he was appointed a lecturer in the School of Engineering, University of Sussex. Here, he is continuing research into various aspects of optical pattern recognition, digital image processing and electro-optics system design, and applying this to a wide range of problems of industrial relevance.

Maria Farsari received her first degree in physics from the University of Crete in 1992 and her PhD in non-linear optics from the University of Durham in 1996, where she was a Marie Curie fellow from 1992 to 1994. She is currently a research fellow at the University of Sussex, where her interests include micro-stereolithography and holography.

Professor Christopher Chatwin holds the Chair of Industrial Informatics and Manufacturing Systems (iims) at the University of Sussex, UK, where, inter alia, he is Director of the iims Research Centre and the Laser and Photonics Systems Research Group. Before moving to Sussex, he spent 15 years at the Engineering Faculty of the University of Glasgow, Scotland, where as a Reader he was head of the Laser and Optical Systems Engineering Centre and the Industrial Informatics Research Group; during this period he ran a succession of major national and international interdisciplinary research programmes. Prior to this he worked in the automotive industry. He has published two research-level books: one on numerical methods, the other on hybrid optical/digital computing — and more than 100 international papers which focus on: optics, optical computing, signal processing, optical filtering, holography, laser materials processing, laser systems and power supply design, laser physics beam/target interactions, heat transfer, knowledge-based control systems, expert systems, computer integrated manufacture, CIM scheduling, manufacturing communication systems, computational numerical methods, genetic algorithms, maximum entropy algorithms, chaos, robotics, instrumentation, digital image processing, intelligent digital control systems and digital electronics. Professor Chatwin is on the editorial board of the international journal Lasers in Engineering and is editor of the RSP/Wiley book series on ‘Industrial Informatics and Integrated Manufacturing Business Systems’. He is a member of: the Institute of Electrical and Electronic Engineers; the Society of Photo-Optical Instrumentation Engineers; the European Optical Society; the Association of Industrial Laser Users; the Laser Institute of America; and the New York Academy of Sciences; and a senior member of the Society of Manufacturing Engineers.
He is a Chartered Engineer, Euro-Engineer, Chartered Physicist and a Fellow of the Institution of Electrical Engineers, the Institution of Mechanical Engineers and the Institute of Physics.