
Adv. Space Res. Vol. 7, No. 11, pp. (11)273-(11)279, 1987

0273-1177/87 $0.00 + .50 Copyright © 1987 COSPAR

Printed in Great Britain. All rights reserved.

DFVLR’S INTELLIGENT SAR-PROCESSOR ISAR



W. Noack and H. Runge

Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt e.V., DFVLR, 8031 Oberpfaffenhofen, F.R.G.

ABSTRACT

The fact that future SAR (Synthetic Aperture Radar) sensors like ERS-1 and X-SAR will be operational systems requires a processor system design which differs significantly from existing SAR correlators. Future systems require the highest throughput and reliability. In addition, more attention must be paid to the needs of the user community in terms of various product levels and adequate production and organization schemes. This paper presents the design of the ISAR system, which is characterized by a distributed processor architecture using a high speed array processor, enhanced by a two-dimensionally accessible memory, a front-end processor and a knowledge engineering workstation. An expert system will support a human system operator in the mass production of SAR images and the detection and correction of system malfunctions. As a result the system will be accessible and comprehensible for both experts and operators.

Keywords: Synthetic Aperture Radar (SAR), ERS-1, X-SAR, Knowledge Engineering, Expert System, Array Processor

1. Introduction

Since 1979 the German Aerospace Research Establishment DFVLR has been processing data from various Synthetic Aperture Radar (SAR) sensors. Several hundred scenes of the American SEASAT L-Band SAR acquired over Europe, the Canadian airborne X/C/L-Band SAR on the Convair 580 and images of the Shuttle Imaging Radar SIR-B experiment were processed and distributed by the German Remote Sensing Data Center (Deutsches Fernerkundungsdatenzentrum DFD). In the case of SEASAT and CV580 the data were processed under contract of the European Space Agency ESA. The software for the processor was developed by the Canadian firm MDA under a joint contract of DFVLR and the 'Canadian Center for Remote Sensing' CCRS and designated 'Generalized SAR Processor' or GSAR /1/. The processor was called 'generalized' because it is adaptable to most space and airborne SAR sensors. The processing time for a 100km x 100km SEASAT image is 9 hours, but including the operational overhead the throughput is limited to one image per day. With the emergence of new operational SAR sensors like ERS-1 and X-SAR at the end of the decade, a processing system with much higher throughput is required. The European Space Agency is going to launch the first European remote sensing satellite ERS-1 in 1989. In contrast to all previous SAR sensors ERS-1 will be an operational system with an expected lifetime of two to three years. The Active Microwave Instrument (AMI) of ERS-1 will operate in C-Band and support primarily ocean investigations and, to a limited extent, land applications. It will cover a 100 km wide swath with a 23° incidence angle which can be switched to 35° by operating the so called 'roll-tilt-mode'. The satellite orbit will be sun-synchronous, with a nominal altitude of 777 km. The ground repeat pattern will be 3 days during the initial stages, but will be changed up to twice a year to give variable repeat patterns of up to 43 days. ERS-1 will perform 5256 orbital revolutions per year.
Assuming an average of 8 minutes SAR acquisition time per orbit - corresponding to 32 scenes of 100km x 100km - a total of 168,000 scenes per year will be acquired. The raw data will be transmitted in real time at 105 Mbps via an X-Band link to a worldwide network of acquisition stations where all data will be recorded on High Density Digital Tapes (HDDT). A significant amount of this data volume will be received by European ground stations and the planned German transportable ground station, which may be deployed in developing countries or even in Antarctica /2/. Another demanding project is the German X-SAR, an X-Band SAR on the American Space Shuttle scheduled for 1990. In contrast to the ERS-1 satellite, X-SAR is a shuttle based sensor and will be flown together with SIR-C. The anticipated number of scenes to be processed with high precision by DFVLR is 4,000 per year. A second replica of the processing system described in this paper will be used to generate X-SAR images.
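The scene count quoted above follows directly from the orbit and acquisition figures; a quick arithmetic check, using only the numbers stated in the text:

```python
# Check of the ERS-1 acquisition volume stated above: 5256 orbits per year,
# with 8 minutes of SAR data per orbit corresponding to 32 scenes of
# 100km x 100km each.
ORBITS_PER_YEAR = 5256
SCENES_PER_ORBIT = 32

scenes_per_year = ORBITS_PER_YEAR * SCENES_PER_ORBIT
print(scenes_per_year)  # 168192, i.e. the ~168,000 scenes per year quoted above
```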

(11)273


To cope with the enormous amount of data from the sensors to come, a new processing system must be developed which achieves much higher throughput rates than the currently used - one product a day - GSAR processor. A DFVLR study team showed in /3/ that the SAR correlation process can be sped up to about 15 minutes for a 100km x 100km ERS-1 frame using the latest computer hardware and architectures. First results of a phase-B study, currently under preparation by DFVLR, confirm this forecast. However, it turned out that the new bottleneck in throughput will be the human operator who has to configure the processor with numerous parameters, different for every particular image. To enable an operator to do this job within a time frame of 15 minutes he shall be supported by an expert system. Thus the idea of an intelligent SAR processor, called ISAR, was born. This paper describes the so called Preprocessor of the German Processing and Archiving Facility (PAP). It is a high performance signal processor to convert SAR raw data into images and is designated ISAR. The anticipated production rate is 6000 ERS-1 frames per year. As a further step of refinement a Postprocessor will perform a 'geocoding' of the images with an average of 8 geocoded products per day or 2000 products per year. The term geocoding describes the task of accepting digital SAR image data sets from the Preprocessing System and generating geometrically corrected and precisely located Geocoded Products in different map projections. The high accuracy shall be achieved by using precise orbit/attitude data, ground control points and digital elevation models. The achieved accuracy shall be within the range of the pixel size. To handle the digital products stored on magnetic tape or optical disks and the film transparencies, DFVLR will set up a 'Data Management Facility' for the German PAP. This will be the interface to ESA and the national user community.
It will help to retrieve data out of the archives and catalogs and be responsible for product distribution and dissemination.

2. ISAR - A High Precision Processor

The term high precision shall be applied in a multi-layered way. First of all it is a requirement which is closely related to product quality. In this context pixel location accuracy in cross-track and along-track direction is a key issue. These parameters are influenced by the accuracy of the earth model and the position and attitude measurements of the satellite. It is required to achieve a location accuracy of 20m in cross-track and 150m in along-track for all ISAR products (acquired over slowly varying terrain slope). Following the common baseline for the prediction of the ERS-1 orbit and for the derivation of related geodetic altitude and ground trace quantities, the Goddard Earth Model 6 (GEM 6) was adopted for the earth's surface as that reference ellipsoid which approximates the local geoid over oceans to better than ± 100 m /4/. The so called 'state vector' provides all information necessary to determine the position of the satellite in all three axes as well as the velocity vector. The Mission Management and Control Center of ESA delivers the predicted orbit with an accuracy of 36m, 51m and 1300m (radial, cross-track, along-track). The restituted orbit values are 25m, 25m and 100m. However, a product offered by the German PAP will be processed with a refined satellite state vector. These orbit values are compiled out of laser tracking data and have an accuracy better than 1 meter for the position and 10 cm/s for the velocity. These data will be available 24 hours after acquisition and transferred to the Oberpfaffenhofen facility by a computer link. Therefore, one of the main advantages of the German PAP products is that they use high precision orbit data which are not available shortly after data acquisition. Another topic of accuracy will be the calibration of the range chirp and the compensation of the antenna gain. A double precision number representation will be used for the computation of all relevant parameters.
The signal processor will ensure a linear dynamic range of 96dB. In order to provide the user with a full choice of product variety, a whole family of intermediate products (level 1.5) will be available which comprise the Fast Delivery Product, Bulk Products in slant- or ground-range and even products with complex pixel representation preserving phase information. SAR processing is a two-dimensional process whose coordinate axes are designated across track or 'range' and along track or 'azimuth'. The ISAR processor will use the so called 'Range-Doppler-Algorithm' because of its proven ability to produce high quality images. It is also a very well known and documented algorithm which applies two separate matched filter operations in range and azimuth direction. The filtering is performed in the frequency domain in order to utilize the timing advantage of the Fast Fourier Transform (FFT). The term Range-Doppler designates the chronological sequence of processing steps, of which the most important are:

• Range Compression,
• Azimuth Compression and
• Interpolation.

The penalty for breaking up the two-dimensional nature of the imaging process into two one-dimensional procedures consists of two additional processing steps called linear and quadratic range migration correction. These tasks take care of earth rotation and of effects due to the varying across track distance to a target during the integration time.
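The matched filtering step of the Range-Doppler algorithm can be illustrated in one dimension. The sketch below compresses a simulated linear FM chirp by multiplying with the conjugate replica spectrum in the frequency domain; all pulse parameters are illustrative values, not the ERS-1 chirp, and NumPy is assumed:

```python
import numpy as np

# One-dimensional matched filtering in the frequency domain, as applied in
# both the range and azimuth compression steps. Chirp parameters are
# illustrative only.
n = 1024                # samples per echo line
fs = 2.0e6              # sampling rate [Hz]
bandwidth = 1.0e6       # chirp bandwidth [Hz]
duration = 64e-6        # chirp duration [s]

t = np.arange(int(duration * fs)) / fs
k = bandwidth / duration                  # chirp rate [Hz/s]
replica = np.exp(1j * np.pi * k * t**2)   # linear FM reference function

# Noise-free echo line containing one point target starting at sample 200
echo = np.zeros(n, dtype=complex)
echo[200:200 + replica.size] = replica

# Matched filter: multiply by the conjugate replica spectrum, inverse FFT
compressed = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(replica, n)))

# The energy spread over the chirp duration collapses into a sharp peak
# at the target position
print(int(np.argmax(np.abs(compressed))))  # 200
```

The same conjugate-multiply-and-inverse-transform pattern is applied once per range line and once per azimuth line in the full two-dimensional process.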

3. ISAR - A Distributed Architecture Signal Processor

The generation of thousands of images per year requires a processing system with an extremely high computational rate. In the past, real-time or near-real-time processing problems of this kind were solved by specially designed hardware. These hardware solutions require programming on a very low level: the algorithms are microcoded, nanocoded or even hardwired. The problem is that this software or firmware is not portable to other hardware or, more importantly, to the latest and generally cheaper and more powerful hardware. In addition these hardware solutions are specially designed and often unique systems and therefore hard to maintain. To get around these problems, powerful off-the-shelf computers shall be used for the ISAR hardware system. On the other hand, supercomputers from CRAY or the Cyber 205 from Control Data are extremely expensive and hard to integrate into a real-time environment with special interfaces to an HDDT (High Density Digital Tape, the medium on which the raw data are stored) recorder or a satellite antenna. Therefore, for the number crunching task a powerful array processor, the ST-100 from Star Technologies, was selected. For signal processing applications this machine operates in the class of a CRAY-1/S supercomputer, but for a much lower price. The performance of 100 MFLOPS (Million Floating Point Operations per Second) of the ST-100 is still unsurpassed by other array processors. The particular subsystems are the Front/End computer, which is the host for the ST-100, the Corner Turn Memory (CTM), the Real Time Input Facility (RTI) and a line of different workstations for special purposes: Software Engineering Workstations (SEW), an Image Quality Analysis Workstation (IQW) and a Knowledge Engineering Workstation (KEW). The hardware architectural design of ISAR is depicted in Figure 1.

[Figure 1. The ISAR Hardware Configuration. The diagram shows the Front/End computer (8 MByte memory, 2 MFLOPS, 3 MByte/s DMA) hosting the ST-100 array processor (4 MByte memory, 100 MFLOPS, 50 MByte/s DMA), the dual-ported Corner Turn Memory CTM (512 MByte, 40 MByte/s DMA), the Real Time Input RTI fed from HDDT or antenna, and the Knowledge Engineering, Image Quality Analysis and Software Engineering Workstations (KEW, IQW, SEW). A Local Area Network carries commands between all subsystems; commands and data flow over the DMA links.]

The Local Area Network connects the workstations for software development, process configuration, image quality analysis etc. with the high throughput computers: the Front/End computer, the ST-100 and the RTI. The typical data transfer rate of the LAN is 500 kBit/s for file transfer. On all computer systems connected to the LAN the Transmission Control Protocol/Internet Protocol (TCP/IP) will be installed. This protocol is compliant with the U.S. MIL standard 'ARPANET' and is today's most popular protocol for Ethernet LANs. It supports file transfer, electronic mail and virtual terminal sessions between computers of different manufacturers and different operating systems. In addition a 'Network File System' NFS will be implemented on the workstations and the Front/End processor to allow transparent access to all files on all disks in the network. Like TCP/IP, NFS also runs on hardware from different vendors and under different operating systems. However, the LAN is not suited for high speed data transfers; therefore, DMA connections will be installed between the high performance computers. The most demanding I/O data rate is to feed the ST-100 in order to keep the array processor running under full load. In the case of SAR processing, which is mostly FFTs and vector multiplications, a 100 MFLOPS array processor requires at least a sustained data rate of 10 MByte/s. However, the ST-100 DMA port operates at 50 MByte/s. The data rate between the Front/End computer and the ST-100 will be approximately 3 MByte/s. The Real Time Input Facility, which is a frame synchronizer and a formatter, has to deal with a serial bit stream from the satellite antenna or HDDT of 105 MBit/s.
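The 10 MByte/s figure can be made plausible from the arithmetic intensity of an FFT-dominated workload; the operation count and FFT length below are textbook assumptions for illustration, not measured ST-100 values:

```python
import math

# Plausibility check for the sustained input rate quoted above. A radix-2
# FFT of length N costs about 5*N*log2(N) floating point operations on
# N complex samples of 8 bytes each (assumed values, for illustration).
N = 4096
flops_per_fft = 5 * N * math.log2(N)
bytes_per_fft = 8 * N
intensity = flops_per_fft / bytes_per_fft    # 7.5 flops per input byte

# At 100 MFLOPS the machine therefore consumes input at roughly:
sustained_mbyte_per_s = 100e6 / intensity / 1e6
print(round(sustained_mbyte_per_s, 1))  # 13.3 -- consistent with "at least
                                        # 10 MByte/s" and well below the
                                        # 50 MByte/s DMA port of the ST-100
```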


The Front/End processor must be a powerful computer because it has to perform a lot of time critical jobs. First of all it is the host for the ST-100 and controls and downloads tasks into the array processor. The ST-100 is only good for vector processing; therefore, all scalar operations have to be performed in the Front/End computer. It must be particularly powerful in double precision floating point arithmetic to fulfill the high precision requirement for the Preprocessor. In addition the Front/End computer will be the data sink for the Preprocessor. The 'Data Management Facility' computer will access the data via a shared disk and the 'Postprocessor' via the Ethernet LAN. The Corner Turn Memory (CTM) will be used as an input device for raw data coming from the HDDT recorder or the satellite antenna. A raw data set of about 300 MByte, which is equivalent to a 100km x 100km scene, will be buffered in the CTM. In addition the CTM will enlarge the working memory space of the ST-100 array processor by an additional 160 MByte. In the CTM several matrices can be stored at the same time and accessed in row and column direction with a sustained data transfer rate of nearly 40 MByte/s. The Software Engineering Workstations (SEW) will be delivered by SUN Microsystems and are MC68020 based machines with 4 MByte main memory and a high resolution bit map screen. The Image Quality Analysis Workstation will be a SUN-3/160C VME-bus machine hosting an extensive image processing system. The expert system will be implemented on the Knowledge Engineering Workstation, which is a LISP machine from Symbolics. For application programs written in LISP this machine is much faster than other computers, because the LISP code gets directly translated to system microcode. In addition the hardware architecture supports LISP program execution by a special instruction fetch unit and a so called 'garbage collector'.
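The need for row and column access in the CTM can be sketched in a few lines: the same data block is filtered first along range lines (rows) and then along azimuth lines (columns). The toy buffer below stands in for the 512 MByte memory; in the CTM hardware the column access requires no explicit transpose copy:

```python
# Sketch of the corner turn: a raw data block is processed row-wise during
# range compression and column-wise during azimuth compression. Toy sizes;
# a real scene block is on the order of thousands of samples per side.
ROWS, COLS = 4, 6
block = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def range_lines(matrix):
    """Row-wise view, as read during range compression."""
    return matrix

def azimuth_lines(matrix):
    """Column-wise view (the 'corner turn'), as read during azimuth
    compression; the CTM hardware provides this access direction directly."""
    return [list(col) for col in zip(*matrix)]

print(range_lines(block)[0])    # [0, 1, 2, 3, 4, 5]
print(azimuth_lines(block)[0])  # [0, 6, 12, 18]
```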

4. ISAR - An Intelligent SAR Processor

4.1 The Problem

During the last years the user requirements for SAR products have been evolving rapidly. Among them are the requirements for multifrequency, multipolarization, high radiometric and geometric precision, varying incidence angles and related parameters. Furthermore a lot of experiments require multiseasonal data acquisition or the inclusion of calibration experiments. This will result in an increased number of different products which have to be configured for every processor run. In addition the machine operator (the person who runs the SAR processor) has to do a lot of manual jobs like tape or disk handling, administrative tasks like the generation of throughput statistics, or taking care of hardware and software maintenance. On the other hand the signal processor described in the previous chapter is able to generate a SAR image every 15 minutes. As a matter of fact a human being can't operate such a fast SAR processor in this timeframe in a reasonable fashion. Another problem is system malfunction, which may be hardware or software related and time consuming to detect. The operator may also face the problem that he knows about erroneous sensor parameters and has to predict their influence on the resulting image quality. It would take a human operator much longer than 15 minutes to solve only one of the above mentioned problems. To ensure an economic utilization of the processor the operator must be supported by a modern man-machine-interface based on an expert system. As a result the signal processor will no longer be the bottleneck in the production chain, and therefore the time behaviour of the overall man-machine-interface has itself to be analysed very carefully. Organizing a SAR processor environment is a highly sophisticated task.
The operator is required to have a fundamental knowledge of the SAR processing algorithm, its implementation, the data flow and hardware specific features of the computer peripherals.

4.2 The ISAR Man-Machine-Interface

The following section will give an overview of the most relevant actions at the man-machine-interface. It is proposed to group the PAP into several functional units, among them the Data Management Facility and the Preprocessor. The Data Management Facility will receive its processing orders by satellite and has subsequently to work out all items of concern like processing level, required product projection (slant/ground/map), and availability of resources like HDDTs, CCTs or Optical Disks as carriers of the raw and processed data. It has to build up a data structure to be interpreted within the production base. In the next step the Data Management Facility will send the elaborated processing request over the Local Area Network (LAN) to the Preprocessor system, which puts it, after acknowledgement, on top of its request queue. According to the requirements the Preprocessor shall be able to process different types of sensors, platforms and orbits. So in the first step the required parameters will be grouped into semantically meaningful sections. The following procedures are devoted to the configuration of the product request. As a result of this decision making the processor modules which are needed will be identified. Moreover, the parameters and the manner in which they are to be calculated have been weighted. The semantically important information will be structured into a set of product information which represents the required data flow of this single job. A lookup into the resource allocation table shows the possible schedule of the job.
Once the processing queue is cleared of the last finished job, the new one will be started. Prior to that the human operator receives a request to mount the appropriate HDDT on the input device. Then the system will automatically preposition the tape and wait until it gets the start signal. Figure 2 shows the data and control flow in the ISAR processor. The signal processing algorithm used is described in /5/. It can be seen that the control mechanism and error recovery are implemented separately from the high-rate data flow. This is a result of the functional subdivision of the processor by mapping these functions onto dedicated compute devices.

[Figure 2. The ISAR Data and Control Flow. The signal processor chain (Range Compression, Doppler Estimation, Azimuth Compression, Interpolation and Image Quality analysis) is shown side by side with the expert system; control flow between the expert system and the processing stages is routed separately from the high-rate data flow.]

A new product can be configured and operationally prepared for the next processing run while the previous image is processed. This means that the operator has to supervise at least two jobs running in parallel and has to do tape handling in addition. However, as mentioned earlier, only if the production chain runs non-stop will it run really economically. After the product has been finished, an image quality analysis procedure has to relate the actual parameter measurements to the required figures within the product data base. If quality measurements deviate from the nominal figures the operator has to cancel the job in the queue. Furthermore he has to decide whether he should stop the whole production chain in order to preserve all necessary data for the detection of possible errors or continue with the production. In traditionally built processing environments the emergence of an error is not transparent to the operator, because the evaluation of messages is done on ordinary numeric computers. But a message has a certain semantic context which has to be evaluated by deriving the consequences from the given situation symbolically. This property also allows the forecast of possible situations the system could run into if it were not stopped. Usually the operator and the expert have to trace the error syntactically by running through logfiles, reading console logs and trying to reconstruct the context of the event. In order to derive the causal connection a lot of semantic knowledge has to be put into this reasoning process. In general a machine operator doesn't possess this knowledge and has to consult specialists in the areas of SAR, hardware and software. It is therefore very advantageous if the reasoning can be supported by a computer which has a kind of memory for the complete computing history and the connections between the single events.
Additionally this intelligent memory has to be able to test hypotheses by attempting to find those rules of the knowledge base in whose conclusions they are contained. This procedure is called backward chaining and is explained in /6/. The mentioned knowledge base is the main subject of the next chapter. A sensor simulator will be an important part of the processing system. If a problem occurs which is believed to be sensor, orbit or attitude related, the actual data set will be dumped from the production data base into the simulator. Then the sensor system behaviour can be simulated and it can be determined whether it caused the problem or not.
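Backward chaining as described above can be illustrated with a toy rule interpreter; the facts and rules below are invented examples, not ISAR's actual knowledge base:

```python
# Toy backward-chaining rule interpreter: to test a hypothesis, find rules
# whose conclusion matches it and recursively try to establish their
# premises from the known facts. Facts and rules are invented examples.
RULES = [
    (["doppler_estimate_unstable", "orbit_data_stale"], "bad_azimuth_focus"),
    (["low_snr_raw_data"], "doppler_estimate_unstable"),
]
FACTS = {"low_snr_raw_data", "orbit_data_stale"}

def prove(goal, facts, rules):
    """Return True if `goal` follows from `facts` via backward chaining."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("bad_azimuth_focus", FACTS, RULES))  # True
```

Here the hypothesis "bad_azimuth_focus" is confirmed because one rule concludes it, and both of that rule's premises can in turn be established, one directly from the facts and one through a second rule.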

4.3 Structuring the Requirements

Collecting the ideas given in the introduction and the description of the man-machine interface, a lot of items have been presented which are subject to the automation approach. The goal is to transfer these tasks onto an expert system. By definition /7/ an expert system is a practical program system or rule-based system whose programs perform tasks which require expertise but no great insight. In our case the expert system will be responsible for:

• monitoring,
• diagnosis,
• debugging,
• instruction and
• control.

As a matter of fact the expert system shall also be capable of giving instructions to the untrained operator. That means that it will act as a simulation and training device for the user.


[Figure 3. Structure of the ISAR Operator Interface. The expert system's knowledge base (facts and rules) and inference machine generate structured processing requests, which are passed to the dispatcher on the Front/End computer; the dispatcher controls the real time input and the array processor tasks within the production base.]

Figure 3 gives a scheme of the intelligent operator interface which is planned for the SAR processor. Basically three levels of control can be distinguished: the highest level is the expert system itself, which is the supervisor of the entire system embedded in the local area network. The computation of the control structures is done symbolically. The local area network serves as an interface to the Front/End machine, which represents a lower level of control realized using high level language ADA constructs. At the lowest level the array processor control structures are realized by a FORTRAN like language and assembler if necessary. If we now split up the tasks in more detail we find that monitoring applies to several aspects like:

• network control,
• process control and
• product quality control.

Diagnosis means the procedure of observing the system by looking through machine state vectors or messages and trying to infer malfunctions. In order to achieve this, a system of relationships between the malfunctions and their causes has to be constructed. One of the main advantages of the use of an expert system in diagnosis is that newly appearing errors can be analysed by finding the corresponding causes and adding the results to the knowledge base. According to /6/ the ideal expert system consists of several components. In that definition the knowledge base comprises the facts and the rules. These rules have to be seen as the active part of the knowledge base because they will be subject to the interpretations performed by the so called rule interpreter. Debugging means the application of proven recipes to analyse malfunction situations. It incorporates the facts about the system design and behaviour, and assumes the capability of forecasting. Debugging procedures shall be applied to image quality related procedures, machine malfunctions or software problems. Whenever problems with a product are observed, either by the operator or automatically, the rule interpreter looks through its rule system using backward chaining methods in order to find the causes of the problems. A representative application of this method can be found in /8/. As a result of the debugging procedure a set of instructions will be generated which can be performed automatically or by operator selection. It is therefore an application of the diagnosis and debugging subsystems of the expert system.

5. Software Engineering for the ISAR Processor Development

Because of the complexity of the software system it has been decided to apply a phase model to the software development. It will involve a lot of engineers and programmers working on different modules for various hardware architectures. In order to meet the high requirements, ADA was selected for the implementation. ADA is supplemented by a set of program development tools called APSE (ADA Programming Support Environment). APSE helps to improve the programmer's productivity by strictly applying the top-down design method within the package concept, using ADA as a design and implementation language. Furthermore ADA improves program portability and reduces programming errors.


During the past years a lot of problems have been encountered running operational and development jobs in parallel. Therefore, the software will be developed in a dedicated software development environment using SUN-3 workstations. Very large parts of the system can be developed, tested and debugged solely within the workstation environment, where the user takes full advantage of the window technique, bit map display and powerful CPU. The completed program can then be transferred to the Front/End machine over the network. Among other expert system tools, KEE (Knowledge Engineering Environment) has turned out to be a promising instrument for the realization of the intelligent SAR processor ISAR. It can be looked at as a specific programming language dedicated to the machine realization of knowledge and inference. The representation of that knowledge is done using a frame based language.
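A frame based representation of the kind KEE offers can be approximated in a few lines; the frame names and slots below are invented for illustration (only the 100 km swath and C-Band figures are taken from the ERS-1 description earlier in this paper):

```python
# Minimal frame system in the spirit of KEE: frames have named slots and a
# parent link, and slot lookup falls back to the parent ("inheritance").
# Frame names and slots are invented, not taken from the actual ISAR base.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)      # inherit from the parent frame
        raise KeyError(slot)

# A generic sensor frame and a specific instance refining one slot
sensor = Frame("SAR-Sensor", wavelength_band=None, swath_km=100)
ers1 = Frame("ERS-1-AMI", parent=sensor, wavelength_band="C")

print(ers1.get("wavelength_band"))  # C    (own slot)
print(ers1.get("swath_km"))         # 100  (inherited from SAR-Sensor)
```

The inference machine of Figure 3 would operate on such frames, with the rules reading and writing slot values.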

6. Conclusion

The hard- and software concept of DFVLR's ISAR - the Intelligent SAR Processor - will guarantee both high throughput and high precision. Economic processing in operational use will be enabled by an intelligent man-machine-interface.

7. References

1. Bennett, J.R., Cumming, I.G. et al., "Features of a Generalized Digital Synthetic Aperture Radar Processor", Proc. of the 15th Int. Symp. on Remote Sensing of the Environment, Ann Arbor, May 1981

2. Gredel, J., Markwitz, W., Noack, W., Schreier, G., "Off-line Processing of ERS-1 Synthetic Aperture Radar Data with High Precision and High Throughput", submitted for publication at ISPRS, Baltimore, 1986

3. Gredel, J., Noack, W., Runge, H., Schröter, H., "Performance Prediction of an Enhanced SAR Processor in View of ERS-1", ESA Study Contract Report No. 5127/83/GP-I(SC), Oberpfaffenhofen, May 1984

4. Klinkrad, H., "Algorithms for Orbit Prediction and for the Determination of Related Static and Dynamic Altitude and Ground Trace Quantities", ESA ER-RP-ESA-SY-0001

5. Bennett, J.R., Cumming, I.G., "A Digital Processor for the Production of SEASAT Synthetic Aperture Radar Imagery", ESA-SP-154, December 1979

6. Hayes-Roth, F., Waterman, D.A., Lenat, D.B., "Building Expert Systems", Teknowledge Series in Knowledge Engineering, Volume 1, Addison-Wesley, 1983

7. Charniak, E., McDermott, D., "Introduction to Artificial Intelligence", Addison-Wesley, 1985

8. Fikes, R., Kehler, T., "The Role of Frame-Based Representation in Reasoning", Comm. ACM, (28)9, 1985, pp. 904-920
