Signal Processing 88 (2008) 2747–2753
On the realization of 2-D orthogonal state-space systems
Robert T. Wirski
Faculty of Electronics and Computer Science, Koszalin University of Technology, Śniadeckich 2, 75-453 Koszalin, Poland
Article history: received 18 September 2007; received in revised form 9 March 2008; accepted 27 May 2008; available online 20 June 2008.

Abstract
The problem of Roesser model realization is considered in this paper. Sample-by-sample scheduling techniques are discussed which utilize index mapping functions, resulting in column-by-column, row-by-row, and diagonal-by-diagonal implementations. It is shown that the diagonal-by-diagonal implementation requires a specific approach, and can be realized using a novel switched delay structure. An adapted technique is presented for the realization of the memoryless block of orthogonal 2-D state-space equations, which leads to a structure consisting of Givens rotations, possibly a −1 multiplier, and unit delays. Processing time and concurrency issues of such an approach are briefly discussed. © 2008 Elsevier B.V. All rights reserved.
Keywords: Roesser model; 2-D system; State-space equations; Orthogonal digital filter; Givens rotations; Sample-by-sample scheduling
1. Introduction

Two-dimensional (2-D) signal processing has received considerable attention since the 1960s. For a review of problems and progress in 2-D and multidimensional signal processing, the reader is referred to [1]. There are several linear models which can be used to describe 2-D systems. However, the Roesser model is usually considered the most satisfactory [2]. This follows from the fact that it can be treated as an extension of one-dimensional (1-D) state-space equations to the 2-D case. The model has been used successfully to obtain structure realizations for a given transfer function, for example in [3,4]. Definitions of the models, as well as the relationships between them, can be found in [5]. There is little information about Roesser model computation, although one can adopt techniques used for 2-D transfer function implementations [6, Chapter 4.1]. Kung et al. [2] have used index mapping functions to obtain a hardware design of 2-D digital filters. Following this approach, several hardware realizations based on the Roesser model were obtained which utilize row and column ordering.
Although several synthesis techniques for 2-D systems are known to exist, a design technique for lossless structures remains unknown. In the 1-D domain, orthogonal systems are used successfully. These systems possess very good properties, such as low sensitivity to finite-precision arithmetic, no limit cycle or overflow oscillations, and natural stability. They are usually constructed as a connection of delay elements and Givens rotations, which can be mapped onto CORDIC-based processors. There are mainly two ways to design them: a transfer function approach [7–9] and a state-space approach [10,11]. Although orthogonal systems are good candidates to be extended to the 2-D case, a systematic approach to that task remains unknown. Their synthesis process would split into three parts, namely a lossless transfer function synthesis, a state-space model synthesis, and a structure realization. The first step of the technique was discussed in [12–16]. In [17], a synthesis technique for the orthogonal Roesser model is developed. This paper concerns the problem of Roesser model realization [18]:
Supported by the Polish Ministry of Science and Higher Education under Grant Singapore/106/2007.
E-mail address: [email protected]
doi:10.1016/j.sigpro.2008.05.018
$$\begin{bmatrix} x^h(n+1,m) \\ x^v(n,m+1) \\ y(n,m) \end{bmatrix} = H_2 \begin{bmatrix} x^h(n,m) \\ x^v(n,m) \\ u(n,m) \end{bmatrix}, \qquad H_2 = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \tag{1}$$
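Eq. (1) is a single matrix–vector product per sample point. A minimal sketch of one evaluation step follows (the function name and array conventions are illustrative, not from the paper):

```python
import numpy as np

def roesser_step(H2, xh, xv, u, h, v):
    """One evaluation of (1): stack the horizontal state, vertical state, and
    input, multiply by the system matrix H2, then split the result into the
    updated states and the output."""
    w = H2 @ np.concatenate([xh, xv, u])
    return w[:h], w[h:h + v], w[h + v:]   # x^h(n+1,m), x^v(n,m+1), y(n,m)
```

For an orthogonal H2, the Euclidean norm of the stacked vector is preserved at every step, which is the source of the favourable numerical properties discussed in the introduction.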
where H_2 is called the system matrix. We will assume that it is orthogonal, i.e. H_2^t H_2 = I. Vectors u(n, m) and y(n, m) are, respectively, the l × 1 input and the k × 1 output. Vectors x^h(n, m) and x^v(n, m) are, respectively, the h × 1 horizontal and v × 1 vertical states. A, B, C, and D are constant matrices of appropriate dimensions. There are also boundary conditions x^h(0, k) and x^v(k, 0), defined for k = 0, 1, .... A block diagram for (1) is given in Fig. 1 [17].

Realization of the Roesser model reveals the problem of implementing its three parts, namely the two 2-D delay blocks and the memoryless computational block H_2 (Fig. 1). In practical 2-D filter realizations, 1-D techniques are usually used [2]. We will follow that approach through the use of index mapping functions of the form t = I(n, m). They assign the ordering index t to the (n, m) pairs, so that the state-space equations (1) can be computed recursively [6, Section 4.1.4]. The most popular are row-by-row and column-by-column implementations, in which the 2-D delay blocks are replaced by cascades of ordinary 1-D delays.

The paper is organized as follows. In Section 2, it is shown that for diagonal-by-diagonal implementations one cannot construct a regular 1-D realization, for which novel switched delay structures are proposed. In Section 3, an extension of the 1-D orthogonal filter approach to obtain the realization of a 2-D orthogonal Roesser model is discussed. A new efficient algorithm for the extension of a rectangular system matrix to a square one is presented in Section 4. Section 5 shows some design examples. Finally, processing time and concurrency issues of such an approach are discussed in Section 6.

2. The diagonal-by-diagonal implementation of the 2-D state-space equations
To analyze the diagonal-by-diagonal implementation, consider an input u(n, m) defined over the range n = 0, 1, 2 and m = 0, 1, 2, 3. The ordering generated by the mapping function applied to (1) is given in Table 1. One can verify that diagonal-by-diagonal processing cannot be performed in a regular 1-D structure. Firstly, boundary conditions must be inserted between values obtained from the outputs x^h(n+1, m) and x^v(n, m+1). For example, from the output x^h(n+1, m) for t = 0, 1 we obtain x^h(1, 0) and x^h(1, 1), but the input x^h(n, m) for t = 2, 3, 4 requires x^h(1, 0), x^h(0, 2), and x^h(1, 1). Moreover, the delay rates of the outputs x^h(n+1, m) and x^v(n, m+1) change during processing. For example, the output x^v(n, m+1) obtained for t = 0 must be delivered to the input x^v(n, m) for t = 1, but the same output obtained for t = 3 must be delivered to the input for t = 6.

A block diagram for the diagonal-by-diagonal implementation, which bypasses the mentioned problems, is presented in Fig. 2. It consists of unit delay block containers for x^h and x^v, and two multiplexer–demultiplexer pairs to put a desired unit delay block on line. The x^h container must be able to store four values of the output x^h, and each of its blocks consists of h unit delays. For the x^v container, we get three blocks for the output x^v, each consisting of v unit delays. The number of blocks in each container follows from the range of the input variables. To implement diagonal-by-diagonal processing, we need to define a switching ordering for the multiplexers–demultiplexers. It is given in Table 2, where the numbers in the b_L^h(i), b_R^h(i) and b_L^v(j), b_R^v(j) columns correspond to switch positions of the appropriate container. The switching ordering can be obtained by inspection of Table 1. By analyzing the diagonal-by-diagonal ordering, one can create a list of switch positions which ensures computability of the structure.
For details on ordering techniques in the 2-D case, the reader is referred to [6, Chapter 4.1]. It follows from Table 2 that all boundary conditions of the states are loaded into the unit delay blocks at t = 0.

3. Realization of the memoryless computational block of 2-D orthogonal state-space equations

Without loss of generality, we may assume that the number of inputs and outputs in (1) is the same. If this is not true, one can extend the number of system inputs to
Fig. 1. Realization of a 2-D state-space system.
Table 1
Ordering of the computation of (1) for the diagonal-by-diagonal implementation

t  | n, m | x^h(n,m), x^v(n,m), u(n,m), y(n,m) | x^h(n+1,m) | x^v(n,m+1)
---|------|------------------------------------|------------|-----------
0  | 0,0  | 0,0                                | 1,0        | 0,1
1  | 0,1  | 0,1                                | 1,1        | 0,2
2  | 1,0  | 1,0                                | 2,0        | 1,1
3  | 0,2  | 0,2                                | 1,2        | 0,3
4  | 1,1  | 1,1                                | 2,1        | 1,2
5  | 2,0  | 2,0                                | 3,0        | 2,1
6  | 0,3  | 0,3                                | 1,3        | 0,4
7  | 1,2  | 1,2                                | 2,2        | 1,3
8  | 2,1  | 2,1                                | 3,1        | 2,2
9  | 1,3  | 1,3                                | 2,3        | 1,4
10 | 2,2  | 2,2                                | 3,2        | 2,3
11 | 2,3  | 2,3                                | 3,3        | 2,4
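The ordering of Table 1 can be generated by a simple index mapping: sort the (n, m) pairs by anti-diagonal n + m, and by n within each diagonal. A sketch (the function name is illustrative):

```python
def diagonal_order(N, M):
    """Diagonal-by-diagonal index mapping t = I(n, m): enumerate all (n, m)
    with 0 <= n < N and 0 <= m < M along the anti-diagonals n + m = const,
    taking smaller n first within each diagonal."""
    pairs = [(n, m) for n in range(N) for m in range(M)]
    return sorted(pairs, key=lambda nm: (nm[0] + nm[1], nm[0]))
```

Here `diagonal_order(3, 4)` reproduces the n, m column of Table 1: (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), and so on.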
match the number of its outputs by the technique presented in Section 4.

The realization of 2-D orthogonal systems can be obtained by extending techniques known for the 1-D case. The first step is an application of similarity transformations to the Roesser model. Then we can proceed in the same manner as in the 1-D case, using the Givens QR factorization [19, p. 216]. It is known that the system described by (1) with l = k can be transformed by similarity transformations into

$$\begin{bmatrix} x^h(n+1,m) \\ x^v(n,m+1) \\ y(n,m) \end{bmatrix} = \hat H_2 \begin{bmatrix} x^h(n,m) \\ x^v(n,m) \\ u(n,m) \end{bmatrix}, \tag{2}$$

where

$$\hat H_2 = \begin{bmatrix} \hat A & \hat B \\ \hat C & \hat D \end{bmatrix} = T_2^t H_2 T_2, \tag{3}$$

$$T_2 = \begin{bmatrix} p_h & 0_{h\times v} & 0_{h\times k} \\ 0_{v\times h} & p_v & 0_{v\times k} \\ 0_{k\times h} & 0_{k\times v} & I_k \end{bmatrix}. \tag{4}$$

The transformation also preserves the orthogonality of the system if T_2 is an orthogonal matrix, so it can be used to minimize the nonzero elements of H_2 as the first step of the realization algorithm. Consider the block matrices of (1):

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \qquad C = \begin{bmatrix} C_1 & C_2 \end{bmatrix}, \tag{5}$$

where A_11, A_12, A_21, and A_22 are, respectively, h × h, h × v, v × h, and v × v matrices obtained by partitioning the matrix A. Similarly, B_1, B_2, C_1, and C_2 are, respectively, h × k, v × k, k × h, and k × v matrices. Then, from (3) we obtain

$$\hat H_2 = \begin{bmatrix} p_h^t A_{11} p_h & \begin{bmatrix} p_h^t A_{12} p_v & p_h^t B_1 \end{bmatrix} \\ \begin{bmatrix} p_v^t A_{21} p_h \\ C_1 p_h \end{bmatrix} & t_2^t h_2 t_2 \end{bmatrix}, \tag{6}$$

where

$$t_2 = \begin{bmatrix} p_v & 0_{v\times k} \\ 0_{k\times v} & I_k \end{bmatrix} \quad\text{and}\quad h_2 = \begin{bmatrix} A_{22} & B_2 \\ C_2 & D \end{bmatrix}. \tag{7}$$

Introduce

$$\hat h_2 = t_2^t h_2 t_2 = \begin{bmatrix} p_v^t A_{22} p_v & p_v^t B_2 \\ C_2 p_v & D \end{bmatrix}. \tag{8}$$

To minimize the number of Givens rotations in the final structure, we should obtain as many zeros as possible below or above the diagonal of Ĥ_2, so that they will not be destroyed by the flow of the QR algorithm. There are two possible approaches: we can operate on the lower or on the upper triangular part of the system matrix. The techniques presented in this section focus on the former. The latter is essentially the same and can easily be derived from the former by applying it, for example, to the transposition of the system matrix. The form of the lower triangular part of (6) suggests two approaches to minimizing the nonzero elements of Ĥ_2. Both methods are described below.
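The final factorizations used in Sections 3.1 and 3.2 reduce an orthogonal matrix to a product of Givens rotations and a diagonal ±1 matrix. A sketch using plain Givens QR (the paper's reductions additionally exploit the zero structure of (6); here rotations act on adjacent coordinates only, and all names are illustrative):

```python
import numpy as np

def givens_factor(H, tol=1e-12):
    """Factor an orthogonal H as H = R(1) R(2) ... R(M) U, where each R(k)
    is a plane rotation by angle phi in coordinates (i, j) and U is diagonal
    with +/-1 entries. Returns the rotation parameters and U."""
    R = np.array(H, dtype=float)
    n = R.shape[0]
    rotations = []                       # list of (phi, i, j)
    for j in range(n):                   # zero the subdiagonal, column by column
        for i in range(n - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            if abs(b) < tol:
                continue
            phi = np.arctan2(b, a)
            c, s = np.cos(phi), np.sin(phi)
            G = np.eye(n)
            G[[i - 1, i - 1, i, i], [i - 1, i, i - 1, i]] = [c, s, -s, c]
            R = G @ R                    # annihilates R[i, j]
            rotations.append((phi, i - 1, i))
    # For orthogonal H the triangular remainder is diagonal with +/-1 entries.
    return rotations, np.round(np.diag(np.diag(R)))
```

Reconstructing H amounts to multiplying the standard rotation matrices built from the stored angles, in order, and then U.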
Fig. 2. Block diagram for the switched delay implementation.
Table 2
Multiplexer switching ordering for the diagonal-by-diagonal implementation

t  | b_L^h(i), b_R^h(i) | u(n,m) | x^h(n,m) | x^h(n+1,m) | b_L^v(j), b_R^v(j) | x^v(n,m) | x^v(n,m+1)
---|---|--------|----------|------------|---|----------|-----------
0  | 1 | u(0,0) | x^h(0,0) | x^h(1,0)   | 1 | x^v(0,0) | x^v(0,1)
1  | 2 | u(0,1) | x^h(0,1) | x^h(1,1)   | 1 | x^v(0,1) | x^v(0,2)
2  | 1 | u(1,0) | x^h(1,0) | x^h(2,0)   | 2 | x^v(1,0) | x^v(1,1)
3  | 3 | u(0,2) | x^h(0,2) | x^h(1,2)   | 1 | x^v(0,2) | x^v(0,3)
4  | 2 | u(1,1) | x^h(1,1) | x^h(2,1)   | 2 | x^v(1,1) | x^v(1,2)
5  | 1 | u(2,0) | x^h(2,0) | x^h(3,0)   | 3 | x^v(2,0) | x^v(2,1)
6  | 4 | u(0,3) | x^h(0,3) | x^h(1,3)   | 1 | x^v(0,3) | x^v(0,4)
7  | 3 | u(1,2) | x^h(1,2) | x^h(2,2)   | 2 | x^v(1,2) | x^v(1,3)
8  | 2 | u(2,1) | x^h(2,1) | x^h(3,1)   | 3 | x^v(2,1) | x^v(2,2)
9  | 4 | u(1,3) | x^h(1,3) | x^h(2,3)   | 2 | x^v(1,3) | x^v(1,4)
10 | 3 | u(2,2) | x^h(2,2) | x^h(3,2)   | 3 | x^v(2,2) | x^v(2,3)
11 | 4 | u(2,3) | x^h(2,3) | x^h(3,3)   | 3 | x^v(2,3) | x^v(2,4)
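The container behaviour described above can be captured in software: one bank of delay blocks per column index m for x^h, one per row index n for x^v, each selected before an evaluation of (1). A behavioural sketch, not a hardware model (names are illustrative):

```python
import numpy as np

def run_diagonal(H2, h, v, N, M, u, xh0, xv0):
    """Diagonal-by-diagonal evaluation of (1). The x^h container holds one
    delay block per column index m, the x^v container one block per row
    index n; xh0 and xv0 supply the boundary conditions."""
    xh = {m: np.array(xh0(m), dtype=float) for m in range(M)}  # x^h(0, m)
    xv = {n: np.array(xv0(n), dtype=float) for n in range(N)}  # x^v(n, 0)
    y = {}
    order = sorted(((n, m) for n in range(N) for m in range(M)),
                   key=lambda nm: (nm[0] + nm[1], nm[0]))
    for n, m in order:
        w = H2 @ np.concatenate([xh[m], xv[n], u[(n, m)]])
        xh[m], xv[n] = w[:h], w[h:h + v]   # overwrite the selected blocks
        y[(n, m)] = w[h + v:]
    return y
```

Because each block is read and rewritten exactly once per anti-diagonal sweep, the outputs agree with a direct raster-order evaluation of (1).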
3.1. Zeroing of the last block row of the system matrix (6)

The first technique amounts to minimizing the nonzero elements of Ĥ_2 using C_1 p_h and t_2^t h_2 t_2 in (6). From (8), it follows that to minimize the nonzero elements of ĥ_2 we can use 1-D techniques; in particular, it can be reduced to an a-extended upper Hessenberg matrix [10]. To obtain more zeros in the lower triangular part of Ĥ_2, the product C_1 p_h in (6) can be used to upper triangularize C_1. This reduction is quite similar to the QR factorization, which can be applied directly to the transposition of C_1 with reversed rows.

Suppose we have applied the above technique to the 2-D system matrix H. The final step of the implementation is to represent H in factored form. To obtain it, we can use the technique presented in [20]. As a result, we have an orthogonal diagonal matrix U and Givens rotations R(·) such that H = U R(1) R(2) · · · R(M).

3.2. Zeroing of the first block column of the system matrix (6)

The second technique amounts to minimizing the nonzero elements of Ĥ_2 using the first block column of (6). By use of p_h, we can follow two ways of reduction: reduce A_11 by means of similarity transformations, or upper triangularize C_1. It is known that there exists a p_h such that A_11 can be reduced to upper quasi-triangular form via the real Schur decomposition [19, p. 341]. The decision which way to choose can be made by analyzing the number of zeros introduced by both methods and/or the properties of the obtained orthogonal realizations. Lastly, p_v can be found to upper triangularize the product A_21 p_h. Then, the system matrix can be factored into the product H = R(1) R(2) · · · R(M) U.

4. Rectangular system matrices

If H_2 is rectangular, which occurs when a system possesses fewer inputs than outputs, i.e. l < k in (1), we need to expand it to obtain a square matrix.
It is known from algebra that this is always possible by extending H_2 to a basis: add any linearly independent vectors and apply the Gram–Schmidt orthogonalization process, with normalization, to the columns of H_2 [21, p. 395]. In this section, a new efficient technique to solve this task is presented.

Suppose we are given an orthonormal set of vectors {v_1, v_2, ..., v_k} in an n-dimensional vector space U over the real field R, with k < n. Define

$$V_k = [\, v_1 \; v_2 \; \cdots \; v_k \,]. \tag{9}$$

It is known that every orthonormal vector set is part of an orthonormal basis [21, p. 394]. Thus, we can find a vector v̂_{k+1}, orthogonal to V_k, such that

$$V_k^t \hat v_{k+1} = 0_{k\times 1}. \tag{10}$$

Eq. (10) yields a set of k homogeneous linear equations in n unknowns, so it always has a nonzero solution [21, Chapter VII, Section 6]. Since the v_i are linearly independent, (10) can be replaced by a set of k linearly independent nonhomogeneous linear equations in k unknowns, which has exactly one solution. To recast (10), consider partitions of v̂_{k+1} and V_k:

$$\hat v_{k+1} = \begin{bmatrix} \hat v_{k+1}(1,\ldots,k) \\ \hat v_{k+1}(k+1,\ldots,n) \end{bmatrix}, \tag{11}$$

$$V_k = \begin{bmatrix} V_k(1,\ldots,k) \\ V_k(k+1,\ldots,n) \end{bmatrix}, \tag{12}$$

where V_k(1,...,k) and V_k(k+1,...,n) are block matrices obtained from V_k by extracting rows 1,...,k and k+1,...,n, respectively. Similarly, v̂_{k+1} is partitioned into v̂_{k+1}(1,...,k) and v̂_{k+1}(k+1,...,n). From (10)–(12), we obtain

$$V_k^t(1,\ldots,k)\, \hat v_{k+1}(1,\ldots,k) = -V_k^t(k+1,\ldots,n)\, \hat v_{k+1}(k+1,\ldots,n). \tag{13}$$

To find a solution to (13), we assign constants, not all equal to zero, to v̂_{k+1}(k+1,...,n). Let

$$\hat v_{k+1}(k+1,\ldots,n) = \begin{bmatrix} 1 \\ 0_{(n-k-1)\times 1} \end{bmatrix}. \tag{14}$$

Hence, (13) becomes

$$V_k^t(1,\ldots,k)\, \hat v_{k+1}(1,\ldots,k) = -V_k^t(k+1). \tag{15}$$

The system of linear equations (15) has a unique solution, which can be combined with (14) to get the orthogonal vector v̂_{k+1}. Applying normalization to v̂_{k+1}, we get

$$v_{k+1} = \frac{1}{\sqrt{\hat v_{k+1}^t(1,\ldots,k)\, \hat v_{k+1}(1,\ldots,k) + 1}} \begin{bmatrix} \hat v_{k+1}(1,\ldots,k) \\ 1 \\ 0_{(n-k-1)\times 1} \end{bmatrix}. \tag{16}$$

Thus, we have an orthonormal set of vectors {v_1, v_2, ..., v_k, v_{k+1}}. If k + 1 < n, we repeat this process recursively to get an orthonormal basis of U. As a result, we obtain a square system matrix H_2.

5. Design examples

Suppose we are given an orthogonal 2-D Roesser model (1) with the partitioning (5), displayed in 3-digit decimal arithmetic:

$$A_{11} = \begin{bmatrix} 0.395 & 0.388 & 0.695 \\ 0.447 & 0.668 & 0.160 \\ 0.352 & 0.345 & 0.068 \end{bmatrix}, \tag{17a}$$

$$A_{21} = \begin{bmatrix} 0.254 & 0.618 & 0.444 \\ 0.023 & 0.450 & 0.291 \end{bmatrix}, \tag{17b}$$

$$A_{12} = \begin{bmatrix} 0.032 & 0.364 \\ 0.067 & 0.174 \\ 0.379 & 0.068 \end{bmatrix}, \qquad A_{22} = \begin{bmatrix} 0.198 & 0.630 \\ 0.058 & 0.646 \end{bmatrix}, \tag{17c}$$

$$B_1 = \begin{bmatrix} 0.043 \\ 0.539 \\ 0.745 \end{bmatrix}, \qquad B_2 = \begin{bmatrix} 0.205 \\ 0.036 \end{bmatrix}, \tag{17d}$$

$$C_1 = \begin{bmatrix} 0.431 & 0.006 & 0.291 \\ 0.032 & 0.259 & 0.144 \end{bmatrix}, \qquad C_2 = \begin{bmatrix} 0.584 & 0.013 \\ 0.684 & 0.135 \end{bmatrix}, \tag{17e}$$

$$D = \begin{bmatrix} 0.267 \\ 0.197 \end{bmatrix}. \tag{17f}$$
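Under the paper's assumption that (15) has exactly one solution, the whole Section 4 procedure, Eqs. (10)–(16), can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def extend_to_square(Vk):
    """Extend an n x k matrix with orthonormal columns to an orthogonal n x n
    matrix, one column per pass: solve the k x k system (15) for the head of
    the new vector, append the [1; 0] tail of (14), and normalize as in (16)."""
    V = np.array(Vk, dtype=float)
    n, k = V.shape
    while k < n:
        top = V[:k, :]                           # V_k(1,...,k); assumed nonsingular
        head = np.linalg.solve(top.T, -V[k, :])  # (15)
        v = np.concatenate([head, [1.0], np.zeros(n - k - 1)])
        V = np.column_stack([V, v / np.sqrt(head @ head + 1.0)])   # (16)
        k += 1
    return V
```

Compared with re-running Gram–Schmidt on trial vectors, each new column costs one k × k solve, and orthogonality to the existing columns holds by construction.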
The dimensions of the state vectors x^h(n, m) and x^v(n, m) are h = 3 and v = 2, respectively. The system has one input and two outputs, so its system matrix is not square. Applying the algorithm presented in Section 4, we obtain a square system matrix; this changes B and D due to the addition of an extra column. Applying the technique presented in Section 3.1, we obtain
$$\hat h_2 = \begin{bmatrix} 0.738 & 0.164 & 0.106 & 0.246 \\ 0.523 & 0.004 & 0.208 & 0.329 \\ 0.126 & 0.570 & 0.267 & 0.633 \\ 0 & 0.697 & 0.197 & 0 \end{bmatrix}. \tag{18}$$

Then, by the upper triangularization process, we have

$$\hat C_1 = \begin{bmatrix} 0.357 & 0.244 & 0 \\ 0 & 0.415 & 0.550 \end{bmatrix}. \tag{19}$$

Finally, we have the decomposition into a 7 × 7 matrix of the form U = diag[−1, 1, ..., 1] and rotations R(k), whose parameters are given in Table 3 (Structure A). The obtained realization can be implemented using the switched delay structure given in Fig. 3. Other structures follow from the technique described in Section 3.2. By the Schur decomposition or the upper triangularization of C_1, we have a 7 × 7 matrix of the form U = diag[1, ..., 1, −1] and the Givens rotators presented in Table 3 (Structures B1 and B2, respectively). Block diagrams for these structures are omitted here for the sake of brevity, but they can easily be derived from the one given in Fig. 3.

Table 3
Givens rotations R(k) for the systems described in Section 5

   | Structure A       | Structure B1      | Structure B2
k  | φ_k     i_k  j_k  | φ_k     i_k  j_k  | φ_k     i_k  j_k
1  | 1.7123   1    2   | 0.0501   1    7   | 1.3187   1    4
2  | 0.8705   2    3   | 0.5583   1    6   | 0.5361   1    3
3  | 0.6156   1    3   | 0.9507   1    4   | 0.8548   1    2
4  | 0.3741   3    4   | 0.9596   1    2   | 2.5484   2    6
5  | 0.0357   2    4   | 0.6458   2    7   | 0.0331   2    5
6  | 2.6366   1    4   | 0.5384   2    6   | 0.0999   2    4
7  | 0.6298   4    5   | 0.6787   2    5   | 0.8737   2    3
8  | 0.7046   3    5   | 0.3967   2    4   | 0.8280   3    7
9  | 0.3104   2    5   | 2.7195   3    7   | 0.7309   3    6
10 | 1.9497   1    5   | 0.3188   3    6   | 0.7121   3    5
11 | 1.0446   5    6   | 0.4208   3    5   | 0.0109   3    4
12 | 0.2534   4    6   | 0.2281   3    4   | 2.4087   4    7
13 | 0.3945   3    6   | 2.6850   4    7   | 1.0475   4    6
14 | 0.9194   2    6   | 0.6151   4    6   | 0.1433   4    5
15 | 0.1983   6    7   | 0.4834   4    5   | 2.8283   5    7
16 | 0.7909   5    7   | 3.0548   5    7   | 0.2477   5    6
17 | 0.6463   3    7   | 0.4564   5    6   | 2.5175   6    7
18 | –        –    –   | 0.6912   6    7   | –        –    –

6. Analysis of scheduling for switched delay structures

Let us analyze the scheduling for the structures obtained in Section 5. Following [8], we denote by t_g the number of clock cycles required for a single Givens rotation operation. We also assume that the time required to manage the unit delay blocks is significantly lower than t_g and can be omitted. By constructing schedules for the structures discussed in Section 5, one can obtain the processing times for the column-by-column, row-by-row, and diagonal-by-diagonal implementations, which are given in Table 4. These results are based on the assumption that the number of Givens rotators is unrestricted and that it is possible to slide each contact of the structure switches independently. One can verify that the maximum number of Givens rotators working simultaneously is limited, as shown in Table 5.

7. Conclusions
In this paper, computational techniques for the Roesser model are presented. Such systems are usually realized as 1-D ones utilizing row-by-row or column-by-column orderings, which are essentially the same. In such a case, the 2-D delay elements can be implemented as a cascade connection of ordinary unit delays. As shown in Section 2, the diagonal-by-diagonal implementation of a 2-D state-space system cannot be obtained in the same manner. In this paper, a novel switched delay structure is presented which can be used to obtain the diagonal-by-diagonal implementation of the Roesser model. Moreover, the structure can also be used with other orderings, for example in row-by-row or column-by-column implementations. The switching schema for the structure can always be found if the ordering in question is computable by the Roesser model. In fact, the switched delay structure is simply a graphical representation of a computational algorithm for 2-D state-space equations. Its switches and multiplexers–demultiplexers can be implemented with the use of memory; the positions of the switches then refer to assigned memory addresses. So, to implement a given switching ordering, one needs to provide an algorithm which guarantees correct memory address management for storing and retrieving the state-space data. In the case of simple implementations, like row-by-row or column-by-column ones, the movement of the switches can be described by a simple algorithm, as they go in one direction to the nearest slide or from the last position to the first one. Another approach is to use a table of ordered memory addresses. It is especially useful in the case of more complicated switch movements, as for example in the diagonal-by-diagonal implementation.

The orthogonality condition does not necessarily need to be satisfied to be able to construct the memoryless computational block. In fact, any implementation of matrix–vector multiplication can accomplish the task.
So, it is possible to realize the switched delay structure for any system described by a Roesser model, like those given in [3,4], for example. But in the case of an orthogonal Roesser model, one can proceed as in the 1-D case, using similarity transformations and a decomposition into Givens
Fig. 3. Block diagram for the structure obtained in Section 5.
Table 4
Minimum time required to process the input data for the systems described in Section 5

Structure | Col.-by-col. | Row-by-row | Diag.-by-diag.
A         | 105 t_g      | 149 t_g    | 77 t_g
B1        | 116 t_g      | 80 t_g     | 88 t_g
B2        | 148 t_g      | 108 t_g    | 96 t_g
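The reduction quoted in the conclusions can be checked directly from Table 4; for Structure A, moving from column-by-column to diagonal-by-diagonal ordering:

```python
# Processing times for Structure A from Table 4, in multiples of t_g
col_by_col, diag_by_diag = 105, 77
reduction = 100.0 * (col_by_col - diag_by_diag) / col_by_col
print(f"{reduction:.1f}% shorter")   # about 26.7%, i.e. "over 26%"
```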
Table 5
Maximum number of Givens rotators working simultaneously for the systems described in Section 5

Structure | Col.-by-col. | Row-by-row | Diag.-by-diag.
A         | 3            | 2          | 3
B1        | 3            | 4          | 4
B2        | 2            | 4          | 4

rotations, possibly with a −1 multiplier. Practically, such an approach is expected to possess better parameters (e.g., a more accurate frequency response and lower noise) using the same computational power, or to achieve the same performance using fewer hardware resources. However, one can show that there are limitations imposed by the orthogonality assumption. For example, the system must be a multi-input multi-output one to realize practically usable frequency characteristics, whose magnitude cannot be greater than 1 [16].

It is shown that the processing time of the orthogonal switched delay structure depends on the 1-D implementation type, and the column-by-column or row-by-row orderings are not always optimal. For the structure given in Fig. 3, for example, changing the ordering from column-by-column to diagonal-by-diagonal reduces the processing time by over 26% using the same number of Givens rotators. As the structure can work with orderings not mentioned here, it opens the possibility of further optimizing the processing time or other important parameters, such as signal bandwidth.
References

[1] N.K. Bose, Multidimensional digital signal processing: problems, progress and future scopes, Proc. IEEE 78 (4) (1990) 590–597.
[2] S.-Y. Kung, B.C. Lévy, M. Morf, T. Kailath, New results in 2-D systems theory, part II: 2-D state-space models—realization and the notions of controllability, observability, and minimality, Proc. IEEE 65 (6) (1977) 945–961.
[3] G.E. Antoniu, Generalized one-multiplier lattice discrete 2-D filters: minimal circuit and state-space realization, IEEE Trans. Circuits Systems II 48 (2) (2001) 215–218.
[4] D. Wang, A. Zilouchian, J. Zhao, Z. Huang, Modular structure realizations of 2-D separable-in-denominator recursive digital filters, Signal Processing 87 (2007) 2686–2694.
[5] T. Kaczorek, Two-Dimensional Systems, Springer, Berlin, Germany, 1985.
[6] D.E. Dudgeon, R.M. Mersereau, Multidimensional Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[7] E. Deprettere, P. Dewilde, Orthogonal cascade realization of real multiport digital filters, Internat. J. Circuit Theory Appl. 8 (1980) 245–272.
[8] S.K. Rao, T. Kailath, Orthogonal digital filters for VLSI implementation, IEEE Trans. Circuits Systems 31 (11) (1984) 933–945.
[9] J. Ma, K.K. Parhi, E.F. Deprettere, Pipelined CORDIC-based cascade orthogonal IIR digital filters, IEEE Trans. Circuits Systems II 47 (11) (2000) 1238–1253.
[10] U.B. Desai, A state-space approach to orthogonal digital filters, IEEE Trans. Circuits Systems 38 (2) (1991) 160–169.
[11] J. Ma, K.K. Parhi, Pipelined CORDIC-based state-space orthogonal recursive digital filters using matrix look-ahead, IEEE Trans. Signal Process. 52 (7) (2004) 2102–2119.
[12] D.C. Youla, The synthesis of networks containing lumped and distributed elements, in: Proceedings of the Symposium on Generalized Networks, Polytechnic Institute of Brooklyn Press, New York, 1966, pp. 289–343.
[13] A. Fettweis, On the scattering matrix and the scattering transfer matrix of multidimensional lossless two-ports, AEÜ 36 (1982) 374–381.
[14] S. Basu, A. Fettweis, On the factorization of scattering transfer matrices of multidimensional lossless two-ports, IEEE Trans. Circuits Systems 32 (9) (1985) 925–934.
[15] A. Kummert, Synthesis of two-dimensional lossless m-ports with prescribed scattering matrix, Circuits Systems Signal Process. 8 (1) (1989) 97–119.
[16] M.S. Piekarski, R. Wirski, On the transfer matrix synthesis of two-dimensional orthogonal systems, in: ECCTD 2005—European Conference on Circuit Theory and Design, Cork, Ireland, 2005.
[17] M.S. Piekarski, Synthesis algorithm of two-dimensional orthogonal digital system—a state space approach, in: Proceedings of the XXVIIIth URSI General Assembly, New Delhi, India, 2005. URL: www.ursi.org/Proceedings/ProcGA05/pdf/CP4.8(01080).pdf.
[18] R.P. Roesser, A discrete state-space model for linear image processing, IEEE Trans. Automat. Control 20 (1) (1975) 1–10.
[19] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, Baltimore, MD, 1996.
[20] F.D. Murnaghan, The Unitary and Rotation Groups, Spartan Books, Washington, DC, 1962.
[21] S. MacLane, G. Birkhoff, Algebra, Macmillan, New York, 1968.