ABSTRACTS OF PAPERS ACCEPTED FOR PUBLICATION
representation generated by structural analysis to the one required for statistical classification. It is shown that if a certain continuity property holds for the parameterizations of the structural shape types, then it is possible to infer the mapping automatically. Inference is slow and heuristic, but is highly automated, controlled by only a few statistical parameters, and is applicable uniformly to all shape types. In addition, if the shape types are sufficiently elementary, the resulting mapping can be computed quickly using kD-trees. Large-scale, statistically significant trials, in the context of a mixed-font, variable-size optical character recognition (OCR) system, have shown that the technique is superior to simpler, fixed mappings and is effective in generalizing common characteristics in mixtures of fonts.

Gray Level Requantization.
MICHAEL WERMAN AND SHMUEL PELEG. Department of Computer Science, The Hebrew University of Jerusalem, 91904 Jerusalem, Israel. Received November 16, 1986; accepted November 6, 1987.
A distance measure between pictures that enables comparison of images with different gray-level ranges is described. With such a measure, halftoning can be treated as an optimization problem. Based on this measure, an algorithm for reducing the number of gray levels of a picture with minimal visual degradation is developed. The algorithm can even produce a binary halftone of the picture, which is needed for many hard-copy devices. The complexity of the algorithm is high, and it is not recommended as an everyday halftoning method. However, the method provides a well-defined mathematical approach for comparing images.

Reed-Muller Transform Image Coding. B. R. K. REDDY AND A. L. PAI. Department of Computer Science, Arizona State University, Tempe, Arizona 85287. Received October 10, 1986; accepted September 14, 1987.

A new technique using the Reed-Muller transform has been applied to image data bandwidth compression. The basic concept is derived from the Reed-Muller canonical expansion of Boolean functions. The transform over the Galois field GF(2) was investigated in this work. A fast algorithm has been developed for the computation of the Reed-Muller transform. Simulation results indicate that the Reed-Muller transform provides good-quality reconstructed images at approximately 2.5 bits per pixel for monochrome images. The computational efficiency and simple hardware realization of this transform might make it a viable candidate for certain real-time image data compression applications.

Analyzing Orthographic Projection of Multiple 3D Velocity Vector Fields in Optical Flow. H. TSUKUNE. Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba Science City, Ibaraki, Japan. J. K. AGGARWAL. Laboratory for Image and Signal Analysis, Department of Electrical Engineering, The University of Texas, Austin, Texas 78712. Received December 11, 1985; accepted November 17, 1987.

This paper describes a method for analyzing multiple motion fields present in optical flow and reconstructing the corresponding 3D velocity vector fields.
First, the discussion concentrates on purely rotational motion. The method estimates a set of descriptive parameters of a purely rotational optical flow field, such as its projected rotational axis and projected angular velocity, and extracts the rotational optical flow fields specified by these parameters from the input optical flow. This process, based on the Hough transform, utilizes constraints holding between parallel optical flow vectors to construct the parameter space. Establishing correspondence between two rotational optical flow fields in two different views makes it possible to decompose the descriptive parameters and reconstruct the 3D rotational velocity vector fields. The proposed method is applied to computer-generated optical flow fields. Second, it is argued that the method is applicable to analyzing general motion, since the optical flow field of general motion is essentially the same as that of purely rotational motion, although a test for the general case is not presented here.
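As a point of reference for the purely rotational fields discussed above: under orthographic projection, a 3D point X rotating rigidly with angular velocity ω has velocity ω × X, and the observed flow vector is just the image-plane (x, y) part of that cross product. The sketch below generates such a computer-generated flow field in this spirit; it is our own minimal illustration (the helper name `rotational_flow` is hypothetical), not the authors' code.

```python
import numpy as np

def rotational_flow(points, omega):
    """Orthographic projection of the rigid-rotation velocity field v = omega x X.

    points: (N, 3) array of 3D points X = (x, y, z)
    omega:  (3,) angular-velocity vector of a purely rotational motion
    Returns an (N, 2) array of image-plane flow vectors: orthographic
    projection keeps only the x and y components of omega x X.
    """
    v = np.cross(np.broadcast_to(omega, points.shape), points)
    return v[:, :2]

# Example: rotation about the z (viewing) axis with unit angular velocity.
# Then omega x X = (-y, x, 0), so depth z does not affect the observed flow.
pts = np.array([[1.0, 0.0, 5.0],
                [0.0, 1.0, 5.0]])
flow = rotational_flow(pts, np.array([0.0, 0.0, 1.0]))
# point (1, 0) -> flow (0, 1); point (0, 1) -> flow (-1, 0)
```

Fields of this kind, generated for known ω, are the natural test input for a parameter-estimation method such as the Hough-based one described in the abstract, since the recovered projected axis and angular velocity can be checked against the generating rotation.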