Applied Mathematics and Computation 162 (2005) 1207–1218
www.elsevier.com/locate/amc
Inexact block Newton methods for solving nonlinear equations

Fenghong Yang (a), Miao He (a), Yun Tang (a,*), Ming Rao (b)

(a) Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
(b) CCUC, University of Alberta, Edmonton, Canada T6G 2G6
Abstract

In this paper two parallelizable inexact block Newton methods are presented for solving large and sparse nonlinear equations. The basic idea is simple and direct: combining the simplified Newton method with the component averaging (CAV) method [Parallel Comput. 27 (2001) 777] yields an inexact Newton method, called the simplified Newton-CAV method. Parallel tests of the algorithm are implemented on a message-passing distributed-memory multiprocessor architecture, namely a cluster of workstations. The results show that the new algorithm can achieve good performance. Moreover, as a development of the simplified Newton-CAV method, the overlapped block Newton-CAV method is further proposed and discussed.

© 2004 Elsevier Inc. All rights reserved.

Keywords: Nonlinear equations; Block-iterative solutions; Parallel computation; Inexact Newton methods; Load flows
1. Introduction

Let Ω be an open and convex subset of R^n and F : Ω → R^n a differentiable mapping. We are concerned with numerical solutions of the system of n nonlinear equations
This work is supported by the National Key Basic Research Special Fund (no. G1998020309) and the NNSF of China (no. 10272059).
* Corresponding author. E-mail address: [email protected] (Y. Tang).
0096-3003/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2004.03.002
F(x) = 0,    (1)

where the Jacobian matrix J(x) := F'(x) is sparse and nonsingular. Assume there exists a solution x* of (1) in Ω. Such systems arise in many important practical fields, such as the calculation of load flows for power systems and the solution of initial or boundary value problems in ordinary or partial differential equations.

It is well known that the Newton method [1,2], coupled with a direct solution technique such as Gaussian elimination, provides a powerful solver for these nonlinear systems. The Newton method has the following general form: given an initial guess x^0, compute a sequence of steps {s^k} and iterates {x^k} by

    J(x^k) s^k = -F(x^k),    (2)
    x^{k+1} = x^k + s^k.

When n is not too large, the Newton method is attractive because it converges rapidly from any sufficiently good initial guess; it is in fact the standard against which rapidly convergent methods for solving (1) are compared. One drawback of the Newton method is that the Newton equations (2) must be solved at each stage, and computing the exact solution may be expensive if the Jacobian matrix is large and sparse.

There are several strategies to overcome this shortcoming. The first is to set J(x^k) ≡ J(x^0) for all k > 0; this is the simplified Newton method [1], which we consider in the following sections. The second is to solve the Newton equations (2) by an iterative method. In other words, we may use inexact Newton methods [3-5], which are variations of the classical Newton method based on approximate solutions of the linearized systems by an iterative method (CAV in our case) stopped by an appropriate forcing sequence. Namely, instead of solving (2) exactly, we take a correction s^k satisfying

    ||r^k|| := ||J(x^k) s^k + F(x^k)|| <= η_k ||F(x^k)||,

where {η_k} is a forcing sequence. Taking η_k ≡ 0 gives the Newton method. In [3] it is proved that the method is locally linearly convergent in an appropriate norm, and that it may converge superlinearly or even quadratically under suitable assumptions. Moreover, it should be pointed out that the choice of the forcing terms is significant for achieving desirably fast local convergence and avoiding oversolving in the inexact method; some choices are presented in [5]. In addition to the strategies above, nonlinear block-iterative methods [6-9] may also be used.

In recent years there has been growing interest in row-projection methods of Cimmino type [10] for solving large sparse nonsymmetric linear
systems [11-15]. Several numerical experiments have shown that these methods can be competitive with other iterative solvers. Based on the Cimmino method, Censor et al. [16] proposed the component averaging (CAV) algorithm as a new iterative parallel technique suitable for large and sparse unstructured systems of linear equations. The CAV algorithm simultaneously projects the current iterate onto all the system's hyperplanes and is therefore inherently parallel. Its convergence is proved completely in [17].

In this paper, we present two iterative procedures. The first combines the simplified Newton method directly with the component averaging algorithm, and is called the simplified Newton-CAV method. The second is an overlapped block Newton-CAV method, which is a generalization of the former. Parallel tests of the first algorithm are implemented on a message-passing distributed-memory multiprocessor architecture, namely a cluster of workstations.

The organization of the paper is as follows. In Section 2, the CAV method is introduced. Section 3 describes the simplified Newton-CAV algorithm and the overlapped block Newton-CAV algorithm. Convergence of the algorithms is discussed in Section 4. In Section 5, parallel numerical results for solving the load flow equations used as a test problem in [18] are given. In the last section, we draw conclusions and discuss the subject.
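Before turning to the CAV method itself, the inexact Newton framework with a forcing sequence can be sketched as follows. This is an illustrative toy (the 2×2 test system, the Landweber-type inner solver and the value of η are assumptions for demonstration, not the paper's power-flow setting): each Newton step is solved only until the forcing condition ‖J(x^k)s^k + F(x^k)‖ ≤ η‖F(x^k)‖ holds.

```python
import numpy as np

def F(x):
    # Hypothetical 2x2 test system with solution x* = (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def inexact_newton(x, eta=0.1, tol=1e-10, max_outer=50):
    """Inexact Newton: solve J(x_k) s = -F(x_k) only approximately,
    stopping the inner iteration once ||J s + F|| <= eta * ||F||."""
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        tau = 1.0 / np.linalg.norm(Jx.T @ Jx, 2)   # safe Landweber step size
        s = np.zeros_like(x)
        # inner iteration on the Newton equations: s <- s - tau * J^T (J s + F)
        while np.linalg.norm(Jx @ s + Fx) > eta * np.linalg.norm(Fx):
            s = s - tau * Jx.T @ (Jx @ s + Fx)
        x = x + s
    return x

x = inexact_newton(np.array([1.2, 1.8]))
```

The forcing parameter η controls the trade-off between inner work per step and outer convergence speed; η ≡ 0 recovers the exact Newton method.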
2. CAV algorithm

In this section we introduce the CAV (component averaging) algorithm [16], a parallel iterative method suitable for large and sparse unstructured linear equations. Consider the linear system

    A x = b,    (3)

where A is an m × n sparse matrix, x ∈ R^n and b ∈ R^m. In the following discussion, let a^i = (a^i_1, ..., a^i_n) denote the ith row of A, let s_j be the number of nonzero elements in the jth column of A, and let ||x||_G = sqrt(⟨x, G x⟩) denote the ellipsoidal norm associated with an n × n symmetric positive definite matrix G.

CAV algorithm [16]:
Initialization: x^0 ∈ R^n is arbitrary.
Iterative step: given x^k, compute x^{k+1} componentwise, for j = 1, 2, ..., n, by the formula

    x^{k+1}_j = x^k_j + λ_k Σ_{i=1}^m [ (b_i − ⟨a^i, x^k⟩) / Σ_{l=1}^n s_l (a^i_l)^2 ] a^i_j,    (4)
where {λ_k}_{k≥0} are relaxation parameters, commonly confined to the interval ε ≤ λ_k ≤ 2 − ε for some fixed but arbitrarily small ε > 0, and {s_l}_{l=1}^n is defined as above. Assume s_l ≠ 0 for l = 1, 2, ..., n. Let S = Diag(s_1, s_2, ..., s_n) and

    D_s = Diag(1/||a^1||_S^2, 1/||a^2||_S^2, ..., 1/||a^m||_S^2);

then (4) can be rewritten as

    x^{k+1} = x^k + λ_k A^T D_s (b − A x^k).    (5)

This form is obviously similar to those of the Jacobi and Gauss-Seidel methods. The proof of its convergence can be found in [16,17].
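The matrix form (5) can be sketched in a few lines of NumPy (a serial sketch under the notation above, not the authors' parallel implementation; the test system and λ_k ≡ 1 are illustrative choices):

```python
import numpy as np

def cav_solve(A, b, lam=1.0, iters=300):
    """CAV iteration x^{k+1} = x^k + lam * A^T D_s (b - A x^k), eq. (5).
    D_s = Diag(1/||a^i||_S^2) with ||a^i||_S^2 = sum_l s_l (a^i_l)^2,
    where s_l is the number of nonzeros in column l of A."""
    s = np.count_nonzero(A, axis=0)        # s_1, ..., s_n: column nonzero counts
    row_norms = (A**2 * s).sum(axis=1)     # ||a^i||_S^2 for each row i
    Ds = np.diag(1.0 / row_norms)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lam * A.T @ Ds @ (b - A @ x)
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
x = cav_solve(A, A @ x_true)
```

In a parallel setting the residual b − A x^k and the weighted back-projection A^T D_s (·) are computed blockwise across processors, which is what makes CAV attractive for the cluster experiments in Section 5.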
3. New algorithms

Now let us return to Eq. (1). Combining the simplified Newton method with the CAV method described above, we obtain the simplified Newton-CAV method, in which at each major (outer) iteration the linear system J(x^0) s^k = −F(x^k) is solved in parallel by the CAV method.

Algorithm 1 (Simplified Newton-CAV method).
1. Let x^0 be an initial vector, ε_1 > 0 small enough, and k = 0.
2. Calculate A = J(x^0).
3. While ||F(x^k)|| ≥ ε_1 do
   3.1 Compute b = −F(x^k).
   3.2 Solve A s^k = b by the CAV method (noting that it may be implemented in parallel).
   3.3 x^{k+1} = x^k + s^k.
   3.4 k = k + 1.
   End while
The exit test for the CAV iteration is that ||b − A s^k|| is small enough.

Based on the algorithm above, we present Algorithm 2, called the overlapped block Newton-CAV method. The basic idea is simple and direct: partition the Newton equation into several overlapped blocks, and then solve each of them by the CAV method. We now describe it in detail. Consider system (3), and suppose the matrix A and the vector b are conformably partitioned as

    {A} = {A_1, ..., A_M},  {b} = {b^1, ..., b^M},

where A_i = ((a^{i_1})^T, ..., (a^{i_{n_i}})^T)^T and b^i = (b_{i_1}, ..., b_{i_{n_i}})^T. Let S_i = {i_1, ..., i_{n_i}}, where the partition is chosen such that ∪_{i=1}^M S_i = {1, 2, ..., m} and S_i ∩ S_{i+1} ≠ ∅ for i = 1, ..., M − 1. This partition may be obtained by any strategy, and it may be significant for the convergence of the
algorithm. We thus obtain M linear systems, each of which can be solved by the CAV method.

Algorithm 2 (Overlapped block Newton-CAV method).
1. Let x^0 be an initial guess.
2. Calculate J^0 = J(x^0) and r^0 = −F(x^0); partition J^0, r^0 into M partially overlapped blocks J_i^0, r_i^0, i = 1, 2, ..., M.
3. Select α_i, i = 1, ..., M, such that 0 < α_i < 1 and Σ_{i=1}^M α_i = 1.
4. For k = 0, 1, ..., until convergence:
   4.1 Let Δx_i^0 = Δx̃ (i = 1, 2, ..., M), p = 0.
   4.2 If p ≤ n_k, compute
       (a) p = p + 1, j = 0;
       (b) if j ≤ l_k, calculate
           (i) Δx_i^j = Δx_i^{j−1} + λ_{i,k}^j (J_i^0)^T D_{s,i} (r_i^k − J_i^0 Δx_i^{j−1}), i = 1, ..., M,
           (ii) j = j + 1;
       (c) Δx^p = Σ_{i=1}^M α_i Δx_i^{l_k};
       (d) Δx_i^0 = Δx^p, i = 1, ..., M.
   4.3 x^{k+1} = x^k + Δx^{n_k}.
   4.4 Compute r^{k+1} = −F(x^{k+1}) and r_i^{k+1} = −F_i(x^{k+1}), i = 1, ..., M. If ||r^{k+1}|| is small enough, stop.

Remark 1. n_k and l_k are integers that either depend on the iteration number k or are invariant over the iteration.

Remark 2. For M = 1, Algorithm 2 reduces to Algorithm 1.

Algorithm 2 is proposed here for solving large nonlinear systems in parallel, since steps 4.2 and 4.4 are parallelizable. The algorithm comprises three iterative stages corresponding to k, n_k and l_k, called the outer iteration, sub-outer iteration and inner iteration, respectively.
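A minimal serial sketch of Algorithm 1 follows (the 2×2 test system, tolerances and inner stopping rule are illustrative assumptions; the paper's version distributes the CAV solve across processors). The Jacobian is frozen at x^0 and each outer step solves A s^k = −F(x^k) by CAV:

```python
import numpy as np

def F(x):
    # Hypothetical small nonlinear system with solution x* = (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def simplified_newton_cav(x0, eps1=1e-9, eta=0.01, max_outer=200):
    A = J(x0)                                  # step 2: Jacobian frozen at x^0
    s_cols = np.count_nonzero(A, axis=0)
    Ds = np.diag(1.0 / (A**2 * s_cols).sum(axis=1))
    x = x0
    while np.linalg.norm(F(x)) >= eps1:        # step 3
        b = -F(x)                              # step 3.1
        sk = np.zeros_like(x)
        # step 3.2: CAV until the inner residual ||b - A s^k|| is small enough
        while np.linalg.norm(b - A @ sk) > eta * np.linalg.norm(b):
            sk = sk + A.T @ Ds @ (b - A @ sk)  # CAV step (5) with lambda = 1
        x = x + sk                             # step 3.3
        max_outer -= 1                         # step 3.4 (with a safety cap)
        if max_outer == 0:
            break
    return x

x = simplified_newton_cav(np.array([1.2, 1.8]))
```

Freezing the Jacobian trades the quadratic rate of Newton's method for a fixed linear operator A, so D_s and any block partition of A need to be set up only once.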
4. Convergence analysis

In this section we discuss conditions under which the new methods presented in the previous section are locally convergent. The convergence proof of Algorithm 1 is straightforward; the key to the convergence proof for Algorithm 2 is to establish the convergence of the solution of the Newton equation by step 4.2. For simplicity, we consider only the linear system

    A x = b,    (6)
where A is an n × n large and sparse matrix and x, b ∈ R^n. First, we give the steps of solving (6) using step 4.2 of Algorithm 2.

1. Let x^0 be an initial guess, and let α_i, i = 1, ..., M, satisfy 0 < α_i < 1 and Σ_{i=1}^M α_i = 1.
2. For k = 0, 1, ..., until convergence:
   2.1 x_{i,k}^0 = x^k, i = 1, 2, ..., M, j = 0. If j ≤ l_k, compute
       (i) x_{i,k}^j = (I − λ_{i,k}^j A_i^T D_{s,i} A_i) x_{i,k}^{j−1} + λ_{i,k}^j A_i^T D_{s,i} b^i,
       (ii) j = j + 1.
   2.2 x^{k+1} = Σ_{i=1}^M α_i x_{i,k}^{l_k}.
Let A_{i,k}^j = I − λ_{i,k}^j A_i^T D_{s,i} A_i and b_{i,k}^j = λ_{i,k}^j A_i^T D_{s,i} b^i; then

    x^{k+1} = Σ_{i=1}^M α_i x_{i,k}^{l_k}
            = ( Σ_{i=1}^M α_i ∏_{j=1}^{l_k} A_{i,k}^j ) x^k
              + Σ_{i=1}^M α_i ( Σ_{p=1}^{l_k−1} ( ∏_{j=p+1}^{l_k} A_{i,k}^j ) b_{i,k}^p + b_{i,k}^{l_k} ).    (7)

Let G̃^{k+1} = Σ_{i=1}^M α_i ∏_{j=1}^{l_k} A_{i,k}^j and R̃^{k+1} = Σ_{i=1}^M α_i ( Σ_{p=1}^{l_k−1} ( ∏_{j=p+1}^{l_k} A_{i,k}^j ) b_{i,k}^p + b_{i,k}^{l_k} ). Then (7) can be rewritten as

    x^{k+1} = G̃^{k+1} x^k + R̃^{k+1},  k ≥ 0.    (8)
This is a nonstationary linear iterative method for solving linear systems. In the following we list some related results (for more details refer to [23]). Consider the nonstationary iterative formula

    u^{n+1} = G_{n+1} u^n + k_{n+1},  n ≥ 0.    (9)

Obviously, (9) can be rewritten as

    u^n = 𝒢_n u^0 + k̂_n,  n ≥ 1,    (10)

where 𝒢_n = G_n G_{n−1} ⋯ G_1 and k̂_n = k_n + G_n k_{n−1} + ⋯ + G_n G_{n−1} ⋯ G_2 k_1. Let φ(A, b) = {x ∈ R^n | A x = b}, and let I be the n × n identity matrix.

Definition 1. The method (10) is consistent with (6) if the following condition holds: if u^n ∈ φ(A, b), then u^{n'} ∈ φ(A, b) for all n' ≥ n.

Lemma 1. If the nonstationary method (10) is obtained from (9) and if for each j = 1, 2, ..., the stationary iterative method defined by
    u^{n+1} = G_j u^n + k_j,  n ≥ 0,    (11)

is consistent, that is, φ(A, b) ⊆ φ(I − G_j, k_j), then the method (10) is consistent. Conversely, if the method (10) is consistent, then for each j the method (11) is consistent.

Lemma 2. If A is nonsingular and the method (10) is consistent, then for each n we have k̂_n = (I − 𝒢_n) A^{−1} b.

Lemma 3. A necessary and sufficient condition for the convergence of the method (10) is that {k̂_n} converges and lim_{n→∞} 𝒢_n = 0.

The detailed proofs of the three lemmas above can be found in [23]. In the following we consider the convergence of (8). Herein we suppose the matrix A of system (6) is nonsingular. By Lemmas 2 and 3, if

    lim_{k→∞} 𝒢̃_k = 0,    (12)

where 𝒢̃_k = G̃^k G̃^{k−1} ⋯ G̃^1 with G̃^k defined as above, then the sequence {x^k} generated by (8) converges to the solution of system (6). We now prove (12). Let
    B_{i,k}^{l_k} = ∏_{j=1}^{l_k} A_{i,k}^j = ∏_{j=1}^{l_k} (I − λ_{i,k}^j A_i^T D_{s,i} A_i),  i = 1, ..., M;

then G̃^{k+1} = Σ_{i=1}^M α_i B_{i,k}^{l_k} in (8). Because A is nonsingular, the blocks A_i, i = 1, ..., M, have full row rank. By Theorem 6.1 in [17], if

    0 < ε ≤ λ_{i,k}^j ≤ 2/ρ(A_i^T D_{s,i} A_i) − ε,

then

    lim_{l_k→∞} B_{i,k}^{l_k} = 0,

where ε > 0 is an arbitrarily small but fixed constant and ρ denotes the spectral radius of a matrix. So, choosing l_k appropriately, we have ||G̃^k|| < 1, which leads to (12). Summing up, we obtain the following theorem.

Theorem 1. Let Ω be an open and convex subset of R^n and F : Ω → R^n a continuously differentiable mapping. Assume that a solution x* of the equation F(x) = 0 exists in Ω and that the Jacobian matrix J(x*) is invertible. If 0 < ε ≤ λ_{i,k}^j ≤ 2/ρ(A_i^T D_{s,i} A_i) − ε, where ε > 0 is an arbitrarily small but fixed constant and ρ denotes the spectral radius, then the sequence {x^k}_{k≥0} generated by Algorithm 2 converges to x*.
Suppose λ_{i,k}^j ≡ λ_i and l_k ≡ l; then A_{i,k}^j ≡ I − λ_i A_i^T D_{s,i} A_i =: Ã_i and b_{i,k}^j ≡ λ_i A_i^T D_{s,i} b^i =: b̃_i. So (7) can be rewritten as

    x^{k+1} = ( Σ_{i=1}^M α_i Ã_i^l ) x^k + Σ_{i=1}^M α_i ( Σ_{p=1}^l Ã_i^{l−p} ) b̃_i,    (13)

which is a stationary iterative method for (6). Based on the analysis above, the convergence of (13) is immediate. In the next section we suppose that n_k, l_k and λ_{i,k}^j are constant in k, i and j, so that we obtain a stationary iteration.
5. Computational aspects and numerical tests

In this section, test results for Algorithm 1 are presented.

5.1. The parallel hardware and software

The test bed used in this research is a cluster of 36 workstations with Intel Xeon PIII 700 MHz processors and 1 GB SDRAM per processor. Each station runs Red Hat 7.1 and MPICH 1.2.1.7. The workstations are connected to each other via a 2.56 Gbit/s Ethernet switch. Inter-processor communication occurs through MPI (the Message Passing Interface), a standard specification for a library of functions implementing the message-passing model of parallel computation.

5.2. Parallel simulation of the sparse nonlinear solver

We compute the load flows of the IEEE 662-bus system. First, we briefly introduce the model of the load flow equations for power systems. The load flow equations are denoted by

    F(x, P, Q, V) = 0,    (14)

where x = (x_1, x_2, ..., x_n) is the node voltage vector, with x_i = e_i + j f_i in complex form (j the imaginary unit) for i = 1, 2, ..., n; P and Q are the active and reactive injected power vectors, respectively; and V is the voltage magnitude vector of the PV buses. In general, (14) is a system of equations with complex coefficients. Separating its real and imaginary parts, we obtain a system with real coefficients, denoted by

    F(x) = 0,  x = (e_1, f_1, ..., e_n, f_n)^T.    (15)
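The real/imaginary separation can be illustrated on a generic linear complex system (a sketch, not the actual load flow model; the paper interleaves the unknowns as (e_1, f_1, ..., e_n, f_n), whereas the block ordering below is equivalent up to a permutation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = A @ z_true

# With z = e + j f:  A z = c  <=>  [[Re A, -Im A], [Im A, Re A]] [e; f] = [Re c; Im c]
A_real = np.block([[A.real, -A.imag],
                   [A.imag,  A.real]])
rhs = np.concatenate([c.real, c.imag])
ef = np.linalg.solve(A_real, rhs)
e, f = ef[:n], ef[n:]
z = e + 1j * f                     # recover the complex solution
```

The doubled real system has twice the dimension but only real arithmetic, which is what allows the real-valued Newton-CAV machinery of the previous sections to be applied to (15).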
In power systems, the load flow equations play a basic and important role in real-time control and operational support. Solving them rapidly is a key step.
The traditional methods for solving (15) are the Newton method and fast decoupled methods [18]. But with the development of power systems (larger network scales, more complex network structures, heavier network loads, and increasing real-time control requirements), these traditional methods face new challenges. The availability of parallel processing hardware and software presents an opportunity to apply this computing technology to such problems, and many parallel methods [19-22] have been investigated for calculating load flows. In this paper, we apply the component averaging method to the computation of load flows and obtain two iterative procedures. Simulations of the new methods are given in the following.

In the calculation of load flows, the tolerance is generally in (10^{-3}, 10^{-2}); here the stopping tolerance is set to ε_1 = 10^{-3}. In general, the initial vector is an approximate start, i.e., a good approximate solution, which is significant and readily available in engineering practice. Fig. 1 shows that the parameter λ strongly influences the convergence speed of Algorithm 1, which reaches its best performance at λ = 3.8. The CPU time and speedup of the implemented methods for the IEEE 662-bus system are shown in Table 1, where λ = 3.8. From Table 1 and Fig. 2, the following conclusions can be drawn:
Fig. 1. Influence of the parameter λ: CPU time (s, log scale) versus λ for one, two, four, eight and sixteen processors.
Table 1
Numerical results for the IEEE 662-bus power system

              1 Processor   2 Processors   4 Processors   8 Processors   16 Processors
CPU-t (s)     9.145         4.615          3.243          2.322          2.021
Spd           --            1.982          2.819          3.983          4.525

CPU-t: total CPU time for the Newton iteration and the CAV procedure, in seconds.
Spd: speedup for n processors (1-processor time / n-processor time).
Fig. 2. Parallel load flow speedup: relative speedup versus number of processors.
1. The parallel load flow algorithm achieves good performance on a small cluster of workstations.
2. The results should be even better for larger-scale power systems.

Algorithm 1 is implemented in parallel on the cluster of workstations as an example, while Algorithm 2, as a generalization of Algorithm 1, has two levels of parallelism and can also exploit multiple CPUs within each workstation. Algorithm 2 is therefore more competitive for larger-scale power systems.

6. Conclusion and discussion

In this paper, two algorithms are presented for solving large and sparse nonlinear systems. The first one is the simplified Newton-CAV method, which
is obtained by combining the simplified Newton method with the CAV algorithm. The second is a block Newton method, called the overlapped block Newton-CAV algorithm. Algorithm 1, a special case of the second, is implemented on a message-passing distributed-memory multiprocessor architecture, namely a cluster of workstations. The results show that the new algorithms can achieve good performance for large nonlinear systems.

Note that Algorithms 1 and 2 depend on several parameters, including the choice of λ, the partition scheme and the number of blocks. It is worth studying how these parameters influence the convergence of the algorithms, and comparing them with other methods.

References

[1] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[2] W.C. Rheinboldt, Methods of Solving Systems of Nonlinear Equations, second ed., SIAM, Philadelphia, PA, 1998.
[3] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982) 400-408.
[4] P.N. Brown, A local convergence theory for combined inexact-Newton/finite-difference projection methods, SIAM J. Numer. Anal. 24 (1987) 407-434.
[5] S.C. Eisenstat, H.F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput. 17 (1996) 16-32.
[6] A.I. Zecevic, D.D. Siljak, A block-parallel Newton method via overlapping epsilon decompositions, SIAM J. Matrix Anal. Appl. 15 (1994) 824-844.
[7] Y. Chen, D. Cai, Inexact overlapped block Broyden methods for solving nonlinear equations, Appl. Math. Comput. 136 (2003) 215-218.
[8] E. Galligani, The Newton-arithmetic mean method for the solution of systems of nonlinear equations, Appl. Math. Comput. 134 (2003) 9-34.
[9] J.J. Xu, Convergence of partially asynchronous block quasi-Newton methods for nonlinear systems of equations, J. Appl. Math. Comput. 103 (1999) 307-321.
[10] G. Cimmino, Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari, La Ricerca Scientifica XVI, Series II, Anno IX 1 (1938) 326-333.
[11] L. Bergamaschi, I. Moret, G. Zilli, Inexact quasi-Newton methods for sparse systems of nonlinear equations, Future Gener. Comput. Syst. 18 (2001) 41-53.
[12] M. Arioli, I. Duff, J. Noailles, D. Ruiz, A block projection method for sparse matrices, SIAM J. Sci. Stat. Comput. 13 (1) (1992) 47-70.
[13] R. Bramley, A. Sameh, Row-projection methods for large nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 13 (1) (1992) 168-193.
[14] G. Zilli, Parallel implementation of a row-projection method for solving sparse linear systems, Supercomputer 53 (X-1) (1993) 33-43.
[15] G. Zilli, Parallel method for sparse non-symmetric linear and non-linear systems of equations on a computer network, Supercomputer 6 (XII-4) (1996) 4-15.
[16] Y. Censor, D. Gordon, R. Gordon, Component averaging: an efficient iterative parallel algorithm for large and sparse unstructured problems, Parallel Comput. 27 (2001) 777-808.
[17] Y. Censor, T. Elfving, Block-iterative algorithms with diagonally scaled oblique projections for the linear feasibility problem, SIAM J. Matrix Anal. Appl. 24 (2002) 40-58.
[18] B.M. Zhang, S.S. Chen, Analysis of Power Network, Tsinghua University Press, 1996.
[19] A.J. Flueck, H.D. Chiang, Solving the nonlinear power flow equations with an inexact Newton method using GMRES, IEEE Trans. Power Syst. 13 (1998) 267-273.
[20] M. Amano, A.I. Zecevic, D.D. Siljak, An improved block parallel Newton method via epsilon decompositions for load-flow calculations, IEEE Trans. Power Syst. 11 (1996) 1519-1524.
[21] F. Tu, A.J. Flueck, A message-passing distributed-memory parallel power flow algorithm, in: IEEE PES Winter Meeting, January 2002, pp. 211-216.
[22] F. Tu, A.J. Flueck, A message-passing distributed-memory Newton-GMRES parallel power flow algorithm, in: IEEE PES Summer Meeting, 2002, pp. 1477-1482.
[23] D. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.