D

Defective matrix

An n-by-n matrix is defective if it does not have a set of n linearly independent eigenvectors. Defective matrices are not diagonalizable.

Illustration

◼ A 4-by-4 defective matrix

MatrixForm[A = Normal[SparseArray[{{2, 3} → 1, {3, 2} → 0}, {4, 4}]]]

0 0 0 0
0 0 1 0
0 0 0 0
0 0 0 0

Eigenvectors[A]
{{0, 0, 0, 1}, {0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, 0}}

The calculation shows that the 4-by-4 matrix A has a maximum of three linearly independent eigenvectors. It is therefore a defective matrix. An n-by-n real matrix may not have n linearly independent real eigenvectors and may therefore be considered to be defective as a real matrix.

◼ A real 2-by-2 matrix (defective as a real matrix)

MatrixForm[A = {{Cos[π/3], Sin[π/3]}, {-Sin[π/3], Cos[π/3]}}]

1/2    √3/2
-√3/2  1/2

N[Eigenvectors[A]]
{{-1.4803 × 10^-16 - 1. ⅈ, 1.}, {2.22045 × 10^-16 + 1. ⅈ, 1.}}

The calculations show that the real matrix A has no real eigenvectors. Hence it is defective (as a real matrix).
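A compact way to test defectiveness is to compare the number of linearly independent eigenvectors, given by the rank of the eigenvector matrix, with the size of the matrix. A minimal sketch, re-creating the 4-by-4 matrix from the first illustration:

(* Eigenvectors pads its result with zero vectors, so MatrixRank counts the independent eigenvectors *)
A = Normal[SparseArray[{{2, 3} → 1}, {4, 4}]];
MatrixRank[Eigenvectors[A]] < Length[A]
(* True: A is defective *)

In recent versions of Mathematica, the built-in predicate DiagonalizableMatrixQ[A] gives the same information directly, returning False for this matrix.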


Determinant

The determinant of a square matrix is a scalar associated with the matrix. It is defined by induction on the size of the matrix.

Illustration

◼ The determinant of a 1-by-1 matrix

MatrixForm[A = {{a}}]
(a)

Det[A]
a

◼ The determinant of a 2-by-2 matrix

MatrixForm[A = {{a, b}, {c, d}}]
a b
c d

TraditionalForm[Det[A]]
a d - b c

◼ The determinant of a 3-by-3 matrix

MatrixForm[A = {{a, b, c}, {d, e, f}, {g, h, i}}]
a b c
d e f
g h i

Det[A]
-c e g + b f g + c d h - a f h - b d i + a e i

Various formulas for calculating determinants exist. Here is an example of the Laplace expansion of the determinant of a 3-by-3 matrix.

◼ The determinant of a matrix calculated by an expansion along the first row of the matrix

MatrixForm[A = {{a, b, c}, {d, e, f}, {g, h, i}}]
a b c
d e f
g h i

The determinant of the 3-by-3 matrix A can be calculated as a linear combination of the first row of A and the determinants of three 2-by-2 submatrices of A.

A11 = {{e, f}, {h, i}};
A12 = {{d, f}, {g, i}};


A13 = {{d, e}, {g, h}};

Expand[Det[A] ⩵ a Det[A11] - b Det[A12] + c Det[A13]]
True
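The same expansion works along the first row of a square matrix of any size. Here is a minimal recursive sketch; laplaceDet is our own name, not a built-in function:

laplaceDet[{{x_}}] := x
laplaceDet[m_] :=
 Sum[(-1)^(1 + j) m[[1, j]] laplaceDet[Drop[m, {1}, {j}]], {j, Length[m]}]
 (* Drop[m, {1}, {j}] deletes row 1 and column j, leaving the minor of m[[1, j]] *)

Expand[laplaceDet[{{a, b, c}, {d, e, f}, {g, h, i}}]] == Det[{{a, b, c}, {d, e, f}, {g, h, i}}]
(* True *)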

Manipulation

◼ Determinants of 3-by-3 matrices

Manipulate[Det[{{a, b, c}, {1, 2, 3}, {4, 5, 6}}], {a, -2, 2, 1}, {b, -2, 2, 1}, {c, -2, 2, 1}]

[Manipulate output: sliders for a, b, c; the displayed determinant is 0]

We can combine Manipulate and Det to explore the determinants of matrices. If we assign the values a = b = c = -2, for example, the manipulation shows that the determinant of the resulting matrix is zero. Other assignments to a, b, and c, such as a = -2, b = -1, and c = -2, produce a matrix with a nonzero determinant:

[Manipulate output: the displayed determinant is 6]

Diagonal

See Diagonal of a matrix, Jordan block, subdiagonal, superdiagonal

Diagonal decomposition

Eigenvalues and eigenvectors are needed for the diagonal decomposition of a matrix A into a product of the form P dM P⁻¹, consisting of an invertible matrix P whose columns are eigenvectors of A and a diagonal matrix dM whose diagonal entries are eigenvalues of A. The decomposition of an n-by-n real matrix requires n linearly independent eigenvectors.


Illustration

Eigenvalues and eigenvectors are the building blocks of diagonal decompositions of real matrices. Suppose we would like to rewrite a matrix A as a product

P.DiagonalMatrix[Eigenvalues[A]].Inverse[P] (1)

Then the diagonal matrix DiagonalMatrix[Eigenvalues[A]] must consist of the eigenvalues of A and the columns of P must be associated eigenvectors.

◼ A diagonal decomposition of a 3-by-3 real matrix

MatrixForm[A = {{2, 1, 1}, {4, 1, 7}, {5, 3, 0}}]
2 1 1
4 1 7
5 3 0

evalues = Eigenvalues[A]
{7, -4, 0}

MatrixForm[dM = DiagonalMatrix[evalues]]
7 0 0
0 -4 0
0 0 0

MatrixForm[evectors = Eigenvectors[A]]
1 3 2
1 -19 13
-3 5 1

Mathematica outputs the eigenvectors of A as row vectors. In order to form a matrix whose columns are eigenvectors, we must transpose them.

A == Transpose[evectors].dM.Inverse[Transpose[evectors]]
True

The matrices P = Transpose[evectors] and dM = DiagonalMatrix[evalues]


yield a diagonal decomposition of the matrix A. The matrix P is not unique. Different choices of eigenvectors produce different decompositions.
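An equivalent check that avoids computing an inverse uses the columnwise identity A.v = λ v: the decomposition is valid exactly when A.P equals P.dM. A small sketch with the matrices computed above:

P = Transpose[evectors];
A.P == P.dM
(* True *)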

Manipulation

All real 3-by-3 matrices have at least one real eigenvalue since their characteristic polynomials are real polynomials of odd degree and real polynomials of odd degree cut the x-axis at least once. The point of intersection of the graph of the polynomial and of the x-axis corresponds to a real eigenvalue of the matrix.

◼ Using Manipulate to explore eigenvalues

The matrix

MatrixForm[A = {{1, 0, 5}, {6, 2, 0}, {1, 0, 3}}]
1 0 5
6 2 0
1 0 3

has three real eigenvalues.

Eigenvalues[A]
{2 + √6, 2, 2 - √6}

We can use the Plot function to visualize these eigenvalues as the roots of the characteristic polynomial of A.

cpA = CharacteristicPolynomial[A, t]
-4 - 6 t + 6 t² - t³

Plot[cpA, {t, -5, 5}]

[Plot of cpA for -5 ≤ t ≤ 5; the graph crosses the t-axis at the three real eigenvalues of A]

What happens if we replace the third row of A by {a, 0, 3}, with a ranging over a wider interval of scalars?

A = {{1, 0, 5}, {6, 2, 0}, {a, 0, 3}}
{{1, 0, 5}, {6, 2, 0}, {a, 0, 3}}

Manipulate[Eigenvalues[{{1, 0, 5}, {6, 2, 0}, {a, 0, 3}}], {a, -5, 5}]


[Manipulate output: for a = -5, the eigenvalues are {2 (1 + ⅈ √6), 2 (1 - ⅈ √6), 2}]

We can combine Manipulate and Eigenvalues to explore the nature of the eigenvalues of matrices. The manipulation shows that for values of a less than -1/5, some of the eigenvalues of the real matrix A are not real. Therefore the resulting matrix is not diagonalizable as a real matrix, although it has three distinct (complex) eigenvalues.
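The threshold can be found symbolically. A sketch, assuming a is left undefined: the eigenvalues of the modified matrix contain the radical Sqrt[1 + 5 a], so all of them are real exactly when the radicand is nonnegative.

Eigenvalues[{{1, 0, 5}, {6, 2, 0}, {a, 0, 3}}]
(* {2, 2 - Sqrt[1 + 5 a], 2 + Sqrt[1 + 5 a]}, up to ordering *)

Reduce[1 + 5 a ≥ 0, a]
(* a ≥ -1/5 *)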

Diagonal matrix

A diagonal matrix A is a square array whose elements A[[i, j]] in the ith row and jth column are zero if i ≠ j. For some applications it is convenient to extend this definition to rectangular matrices. In that case, the matrices are padded with either zero rows and/or zero columns and are sometimes called generalized diagonal matrices.

Illustration

◼ A 5-by-5 diagonal matrix

MatrixForm[DiagonalMatrix[{1, 2, 3, 4, 5}]]

1 0 0 0 0

0 2 0 0 0

0 0 3 0 0

0 0 0 4 0

0 0 0 0 5

◼ A generalized diagonal matrix obtained by appending a row of zeros

MatrixForm[Append[DiagonalMatrix[{1, 2, 3, 4, 5}], {0, 0, 0, 0, 0}]]

1 0 0 0 0
0 2 0 0 0
0 0 3 0 0
0 0 0 4 0
0 0 0 0 5
0 0 0 0 0

◼ A generalized diagonal matrix obtained by appending a column of zeros


MatrixForm[Transpose[Append[DiagonalMatrix[{1, 2, 3, 4, 5}], {0, 0, 0, 0, 0}]]]

1 0 0 0 0 0
0 2 0 0 0 0
0 0 3 0 0 0
0 0 0 4 0 0
0 0 0 0 5 0

Diagonal matrices can be created using the SparseArray function by specifying the nonzero elements.

◼ A 4-by-4 diagonal matrix

MatrixForm[Normal[SparseArray[{{1, 1} → 5, {2, 2} → 2, {3, 3} → 5}, {4, 4}]]]

5 0 0 0

0 2 0 0

0 0 5 0

0 0 0 0

◼ A 4-by-5 diagonal matrix

MatrixForm[Normal[SparseArray[{{1, 1} → 5, {2, 2} → 2, {3, 3} → 5, {4, 4} → 6}, {4, 5}]]]

5 0 0 0 0
0 2 0 0 0
0 0 5 0 0
0 0 0 6 0
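An alternative way to build generalized diagonal matrices is to pad a square diagonal matrix with zeros using PadRight; a short sketch:

(* pad the 5-by-5 diagonal matrix to the stated shape; the new entries are filled with zeros *)
MatrixForm[PadRight[DiagonalMatrix[{1, 2, 3, 4, 5}], {6, 5}]]  (* appends a zero row *)
MatrixForm[PadRight[DiagonalMatrix[{1, 2, 3, 4, 5}], {5, 6}]]  (* appends a zero column *)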

Diagonal of a matrix

The diagonal of an m-by-n matrix A is the list of all elements A[[i, i]] of A for i from 1 to Min[m, n].

Illustration

◼ Diagonal of a 4-by-6 matrix

A = RandomInteger[{0, 9}, {4, 6}];

A =
2 8 0 0 8 7
0 3 2 4 4 1
1 0 7 8 2 0
5 8 7 0 9 1
;

Diagonal[A]
{2, 3, 7, 0}

◼ Diagonal of a 4-by-4 matrix


A =
8 4 8 8
8 0 3 6
9 6 8 4
3 7 5 5
;

diagonalA = {A[[1, 1]], A[[2, 2]], A[[3, 3]], A[[4, 4]]}
{8, 0, 8, 5}

diagonalA ⩵ Diagonal[A]
True

◼ Diagonal of a 4-by-5 matrix

A = RandomInteger[{0, 9}, {4, 5}];

A =
1 4 1 1 8
9 1 9 9 3
2 2 7 2 6
0 5 6 1 9
;

diagonalA = {A[[1, 1]], A[[2, 2]], A[[3, 3]], A[[4, 4]]}
{1, 1, 7, 1}

The superdiagonal of an m-by-n matrix A is the list of all elements A[[i, i + 1]] for i from 1 to Min[m, n - 1].

◼ The superdiagonal of a 4-by-6 matrix

A =
2 8 0 0 8 7
0 3 2 4 4 1
1 0 7 8 2 0
5 8 7 0 9 1
;

Diagonal[A, 1]
{8, 2, 8, 9}

The subdiagonal of an m-by-n matrix A is the list of all elements A[[i + 1, i]] for i from 1 to Min[m - 1, n].

◼ The subdiagonal of a 4-by-6 matrix

A =
2 8 0 0 8 7
0 3 2 4 4 1
1 0 7 8 2 0
5 8 7 0 9 1
;

Diagonal[A, -1]
{0, 0, 7}
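At the index level, Diagonal[A, k] simply collects the entries A[[i, i + k]]. A sketch of the equivalence for the subdiagonal of the 4-by-6 matrix A above:

{m, n} = Dimensions[A];
Diagonal[A, -1] == Table[A[[i + 1, i]], {i, Min[m - 1, n]}]
(* True *)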


Difference equation

If the vectors in a list {v₀, v₁, v₂, ..., vₙ, ...} are connected by a matrix A for which vₙ₊₁ = A vₙ for n = 0, 1, 2, ..., then the equation vₙ₊₁ = A vₙ is called a linear difference equation.

Illustration

◼ A difference equation based on a 2-by-2 matrix

MatrixForm[A = {{0.75, 0.5}, {0.25, 0.5}}]
0.75 0.5
0.25 0.5

v0 = {100 000, 200 000}
{100 000, 200 000}

The list consisting of the first three elements of the list {v₀, v₁, v₂, ..., vₙ, ...} is

{v0, v1 = A.v0, v2 = A.v1}
{{100 000, 200 000}, {175 000., 125 000.}, {193 750., 106 250.}}
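Longer stretches of the list can be generated with NestList, which applies the matrix repeatedly. A sketch using A and v0 from above:

(* NestList[A.# &, v0, 4] returns {v0, A.v0, A.A.v0, A.A.A.v0, A.A.A.A.v0} *)
NestList[A.# &, v0, 4]
(* {{100000, 200000}, {175000., 125000.}, {193750., 106250.}, {198437.5, 101562.5}, {199609.375, 100390.625}} *)

Since A has eigenvalues 1 and 0.25, the iterates converge to the steady state {200 000., 100 000.}, a multiple of the eigenvector of A for the eigenvalue 1.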

Dimension of a vector space

A vector space is finite-dimensional if it has a basis consisting of a finite number of basis vectors. Since all bases of a finite-dimensional vector space have the same number of elements, this number is defined to be the dimension of the space.

Illustration

◼ A two-dimensional vector space

The space ℝ² of all pairs of real numbers {a, b} is a two-dimensional vector space. The sets

B1 = {e1 = {1, 0}, e2 = {0, 1}}
B2 = {b1 = {3, -4}, b2 = {1, 1}}

are two bases for the same space.

◼ A one-dimensional vector space

The space ℂ of all complex numbers is a one-dimensional complex vector space. The set B = {1} is a basis for ℂ since every complex number z is a multiple of 1.


◼ A four-dimensional vector space

The space ℝ[t,3] of real polynomials of degree 3 or less is a four-dimensional vector space since the set

B = {1, t, t², t³}

is a basis for the space.

◼ A four-dimensional vector space

A = RandomInteger[{0, 9}, {4, 5}];

A =
7 1 9 7 5
2 9 8 2 9
0 6 4 0 6
4 7 7 7 5
;

B = RowReduce[A];

B =
1 0 0 0 -4
0 1 0 0 -1
0 0 1 0 3
0 0 0 1 1
;

The matrix B shows that the first four columns of A are its pivot columns. They therefore form a basis for the column space of A. We can use the Length function to calculate its dimension.

Length[B]
4
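The dimension of the column space is the rank of the matrix, so MatrixRank offers a direct alternative to counting pivot columns. A sketch with the matrix A above:

MatrixRank[A]
(* 4 *)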

Dimensions of a matrix

The numbers of rows and columns of a matrix, in that order, are called the dimensions of the matrix.

Illustration

◼ A matrix of dimensions {3, 4}

A = RandomInteger[{0, 9}, {3, 4}];

A =
0 7 2 2
5 7 8 5
9 4 7 1
;

Dimensions[A]
{3, 4}

◼ A matrix of dimensions {4, 3}

A = RandomInteger[{0, 9}, {4, 3}];


A =
4 3 0
3 4 7
6 1 5
3 5 2
;

Dimensions[A]
{4, 3}

◼ Dimensions of a square matrix

A = RandomInteger[{0, 9}, {4, 4}];

A =
3 0 3 9
4 4 0 4
5 3 5 2
6 1 8 9
;

Dimensions[A]
{4, 4}

Dirac matrix

The Dirac matrices are 4-by-4 matrices arising in quantum electrodynamics. They are Hermitian and unitary.

Illustration

◼ The 4-by-4 Dirac matrices

MatrixForm[I4 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}]

1 0 0 0

0 1 0 0

0 0 1 0

0 0 0 1

MatrixForm[σ1 = {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, 1}, {0, 0, 1, 0}}]

0 1 0 0

1 0 0 0

0 0 0 1

0 0 1 0

MatrixForm[σ2 = {{0, -ⅈ, 0, 0}, {ⅈ, 0, 0, 0}, {0, 0, 0, -ⅈ}, {0, 0, ⅈ, 0}}]

0 -ⅈ 0 0
ⅈ 0 0 0
0 0 0 -ⅈ
0 0 ⅈ 0


MatrixForm[σ3 = {{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, -1}}]

1 0 0 0
0 -1 0 0
0 0 1 0
0 0 0 -1

MatrixForm[ρ1 = {{0, 0, 1, 0}, {0, 0, 0, 1}, {1, 0, 0, 0}, {0, 1, 0, 0}}]

0 0 1 0

0 0 0 1

1 0 0 0

0 1 0 0

MatrixForm[ρ2 = {{0, 0, -ⅈ, 0}, {0, 0, 0, -ⅈ}, {ⅈ, 0, 0, 0}, {0, ⅈ, 0, 0}}]

0 0 -ⅈ 0
0 0 0 -ⅈ
ⅈ 0 0 0
0 ⅈ 0 0

MatrixForm[ρ3 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}}]

1 0 0 0
0 1 0 0
0 0 -1 0
0 0 0 -1

{HermitianMatrixQ[σ1], HermitianMatrixQ[ρ3]}
{True, True}

{UnitaryMatrixQ[σ1], UnitaryMatrixQ[ρ3]}
{True, True}
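The same two tests can be mapped over the whole family at once. A brief sketch, assuming the seven matrices defined above:

diracs = {I4, σ1, σ2, σ3, ρ1, ρ2, ρ3};
{And @@ (HermitianMatrixQ /@ diracs), And @@ (UnitaryMatrixQ /@ diracs)}
(* {True, True} *)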

Direct sum of vector spaces

The zero subspaces are useful for the definition of direct sums of subspaces. If two subspaces U and V of a vector space W are disjoint, in other words, if they share only the zero vector of the space, then every vector w in the sum U + V can be written as a unique sum u + v, with u in U and v in V. Moreover, if BU is a basis for U and BV is a basis for V, then the union of BU and BV is a basis for U + V. The sum of U and V is in this case called the direct sum of U and V and is written as U ⊕ V. The direct sum symbol ⊕ is produced by typing Esc c+ Esc.

Illustration

◼ A direct sum of two subspaces of ℝ⁴

If B1 and B2 are the two bases


B1 = {{1, 0, 0, 0}, {0, 1, 0, 0}};
B2 = {{0, 0, 1, 0}};

of subspaces of ℝ⁴ and V is the subspace of all vectors of the form {a, b, c, 0}, then V = span[B1] ⊕ span[B2]:

w = {a, b, c, 0} == a {1, 0, 0, 0} + b {0, 1, 0, 0} + c {0, 0, 1, 0}
True

◼ The direct sums of the four vector spaces generated by the 3-by-5 matrix

A = {{3, 1, 0, 2, 4}, {1, 1, 0, 0, 2}, {5, 2, 0, 3, 7}};

Dimensions[A]
{3, 5}

◼ The coordinate space ℝ⁵ as a direct sum of the null space and the row space of a matrix A

nsA = NullSpace[A]
{{-1, -1, 0, 0, 1}, {-1, 1, 0, 1, 0}, {0, 0, 1, 0, 0}}

rsA = RowReduce[A]
{{1, 0, 0, 1, 1}, {0, 1, 0, -1, 1}, {0, 0, 0, 0, 0}}

The null space and the row space are subspaces of ℝ⁵ with dimensions 3 and 2. The spaces are disjoint and the sum of their dimensions is 5.

Solve[nsA[[1]] == a rsA[[1]] + b rsA[[2]], {a, b}]
{}

Solve[nsA[[2]] == a rsA[[1]] + b rsA[[2]], {a, b}]
{}

Solve[nsA[[3]] == a rsA[[1]] + b rsA[[2]], {a, b}]
{}

Solve[rsA[[1]] == a nsA[[1]] + b nsA[[2]] + c nsA[[3]], {a, b, c}]
{}

Solve[rsA[[2]] == a nsA[[1]] + b nsA[[2]] + c nsA[[3]], {a, b, c}]
{}


The union of nsA and the nonzero rows of rsA forms a basis for ℝ⁵. This is expressed by saying that ℝ⁵ is a direct sum of the null and row spaces. The notation

NullSpace[A] ⊕ RowSpace[A] ⩵ ℝ⁵

expresses the fact that the direct sum of the two disjoint subspaces is all of ℝ⁵.

◼ The coordinate space ℝ³ as a direct sum of the left null space and the column space of A

lnsA = NullSpace[Transpose[A]]
{{-3, -1, 2}}

csA = RowReduce[Transpose[A]]
{{1, 0, 3/2}, {0, 1, 1/2}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

The left null space and the column space are subspaces of ℝ³ with dimensions 1 and 2. The spaces are disjoint and the sum of their dimensions is 3.

Solve[lnsA[[1]] == a csA[[1]] + b csA[[2]], {a, b}]
{}

Solve[csA[[1]] == a lnsA[[1]], a]
{}

Solve[csA[[2]] == a lnsA[[1]], a]
{}

Hence we can form the direct sum leftnullSpace[A] ⊕ columnSpace[A] of the left null space and the column space to build ℝ³.

It may happen that some of the four subspaces of a matrix are zero spaces. In that case, their bases are empty and their dimensions therefore are zero.

◼ Direct sums involving zero subspaces

MatrixForm[A = {{1, 2}, {3, 4}}]
1 2
3 4

NullSpace[A]
{}

MatrixForm[RowReduce[A]]
1 0
0 1


This shows that the null space of A is the zero subspace Z = {{0, 0}} of ℝ² and therefore

Z ⊕ RowSpace[A] = RowSpace[A] = ℝ²

Similarly,

NullSpace[Transpose[A]]
{}

MatrixForm[RowReduce[Transpose[A]]]
1 0
0 1

The left null space of A is therefore also the zero subspace Z = {{0, 0}} of ℝ². Hence

Z ⊕ ColumnSpace[A] = ColumnSpace[A] = ℝ²
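A rank computation confirms a direct sum at a glance: if two disjoint subspaces have dimensions adding up to the dimension of the ambient space, the union of their bases has full rank. A sketch using nsA, rsA, lnsA, and csA from the 3-by-5 example above (the zero rows of the reduced matrices are dropped):

MatrixRank[Join[nsA, Most[rsA]]]
(* 5: nsA together with the two nonzero rows of rsA spans ℝ⁵ *)

MatrixRank[Join[lnsA, csA[[1 ;; 2]]]]
(* 3: lnsA together with the two nonzero rows of csA spans ℝ³ *)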

Discrete Fourier transform

The discrete Fourier transform converts a list of data into a list of Fourier series coefficients. The Mathematica Fourier function and its inverse, the InverseFourier function, are the built-in tools for the conversion. The Fourier function can also be defined explicitly in terms of matrix multiplication using Fourier matrices.

Illustration

◼ A Fourier transform and its matrix equivalent

data = {-1, -1, -1, -1, 1, 1, 1, 1};

dftdata = Fourier[data]
{0. + 0. ⅈ, -0.707107 - 1.70711 ⅈ, 0. + 0. ⅈ, -0.707107 - 0.292893 ⅈ, 0. + 0. ⅈ, -0.707107 + 0.292893 ⅈ, 0. + 0. ⅈ, -0.707107 + 1.70711 ⅈ}

sfmdata = N[Simplify[FourierMatrix[8].data]]
{0., -0.707107 - 1.70711 ⅈ, 0., -0.707107 - 0.292893 ⅈ, 0., -0.707107 + 0.292893 ⅈ, 0., -0.707107 + 1.70711 ⅈ}

dftdata ⩵ sfmdata
True

◼ An inverse Fourier transform and its matrix equivalent

fcdata = {0. + 0. I, -0.707107 - 1.70711 I, 0. + 0. I, -0.707107 - 0.292893 I, 0. + 0. I, -0.707107 + 0.292893 I, 0. + 0. I, -0.707107 + 1.70711 I};


ifdata = InverseFourier[fcdata]
{-1., -1., -1., -1., 1., 1., 1., 1.}

ifF = Simplify[Inverse[FourierMatrix[8]]];

sfmdata = Chop[ifF.fcdata]
{-1., -1., -1., -1., 1., 1., 1., 1.}
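The Fourier matrix itself can be written down directly from the roots of unity: its (r, s) entry is ω^((r-1)(s-1))/√n with ω = e^(2πⅈ/n). A sketch, assuming Mathematica's default Fourier conventions:

n = 8;
fm = Table[Exp[2 Pi I (r - 1) (s - 1)/n], {r, n}, {s, n}]/Sqrt[n];
Chop[N[fm - FourierMatrix[n]]] == ConstantArray[0, {n, n}]
(* True *)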

Discriminant of a Hessian matrix

See Hessian matrix

Disjoint subspaces

Two subspaces U and V of a vector space W are disjoint if they only have the zero vector in common.

Illustration

◼ Two disjoint proper subspaces

Quit[]

W = ℝ⁵;
S = {{1, 0, 0, 0, 0}, {0, 1, 0, 0, 0}};
T = {{0, 0, 1, 0, 0}, {0, 0, 0, 1, 0}};
U = {a S[[1]] + b S[[2]]}
V = {c T[[1]] + d T[[2]]}

By construction, the subspaces U and V are disjoint:

Solve[a S[[1]] + b S[[2]] ⩵ c T[[1]] + d T[[2]], {a, b, c, d}]
{{a → 0, b → 0, c → 0, d → 0}}

Distance between a point and a plane

The Euclidean distance d[point, plane] between a point {p, q, r} and a plane a x + b y + c z + d = 0 in the space 𝔼³ is

Abs[a x + b y + c z + d /. {x → p, y → q, z → r}] / Sqrt[a² + b² + c²] (1)

Illustration

◼ The Euclidean distance between a point and a plane

{p, q, r} = {1, 2, 3};


plane = 3 x - y + 4 z - 9 ⩵ 0;

numerator = Abs[3 x - y + 4 z - 9 /. {x → 1, y → 2, z → 3}]
4

denominator = Sqrt[3² + (-1)² + 4²]
√26

distance = numerator/denominator
2 √(2/13)
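Formula (1) can be packaged as a reusable function; planeDistance is our own name, not a built-in:

(* the plane a x + b y + c z + d = 0 is passed as the coefficient list {a, b, c, d} *)
planeDistance[{p_, q_, r_}, {a_, b_, c_, d_}] :=
 Abs[a p + b q + c r + d]/Sqrt[a^2 + b^2 + c^2]

planeDistance[{1, 2, 3}, {3, -1, 4, -9}]
(* 2 Sqrt[2/13], the distance computed above *)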

◼ Using projections and normals to compute the Euclidean distance between a point and a plane

The Euclidean distance between an external point P = {p, q, r} and the point Q = {x0, y0, z0} in the plane a x + b y + c z + d = 0 is also equal to the Euclidean norm of the orthogonal projection of the vector (Q - P) = {x0 - p, y0 - q, z0 - r} onto the normal {a, b, c} of the given plane.

Clear[x, y, z, p, q, r]

plane = 3 x - y + 4 z - 9 ⩵ 0;
externalpoint = {p, q, r} = {1, 2, 3};
normal = {3, -1, 4};

Reduce[plane, {x, y, z}]
z ⩵ 9/4 - (3 x)/4 + y/4

planarpoint = {x0, y0, z0} = {0, 0, 9/4};

Projection[externalpoint - planarpoint, normal]
{6/13, -2/13, 8/13}

Norm[%]
2 √(2/13)


Distance function

A distance function on a vector space V is a function that assigns a nonnegative real number d(u, v) to every pair of vectors {u, v} in V and has the following properties:

Properties of distance functions

d[u, v] ≥ 0 (1)

d[u, v] = d[v, u] (2)

d[u, v] ≤ d[u, w] + d[w, v] (3)

d[u, v] = 0 if and only if u = v (4)

Illustration

◼ The distance between two vectors determined by a norm

Like all other norms, the Euclidean norm and the p-norms define distance functions. But there are others. The function

d[u, v] = 1 if u ≠ v, 0 otherwise (5)

is a distance function. In topology and other fields, distance functions are called metrics and spaces equipped with metrics are called metric spaces.

◼ A distance function for ℝ²

d[u_, v_] := If[u ≠ v, 1, 0]

{d[{1, 2}, {2, 3}], d[{a, b}, {a, b}]}
{1, 0}

◼ The Euclidean distance function for ℝ²

d[{u_, v_}, {r_, s_}] := Sqrt[(u - r)² + (v - s)²]

d[{1, 2}, {4, -5}]
√58

d[{1, 2}, {4, -5}] ⩵ Norm[{1 - 4, 2 + 5}, 2]
True
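Every p-norm yields a distance function via d[u, v] = Norm[u - v, p]. A short sketch comparing three of these metrics on the same pair of vectors; dp is our own name:

dp[u_, v_, p_] := Norm[u - v, p]

{dp[{1, 2}, {4, -5}, 1], dp[{1, 2}, {4, -5}, 2], dp[{1, 2}, {4, -5}, Infinity]}
(* {10, Sqrt[58], 7} *)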


Domain of a linear transformation

The domain of a linear transformation T is the vector space on which T acts. The notation T : A ⟶ B identifies the vector space A as the domain of T and the vector space B as its codomain.

Illustration

◼ The domain, codomain, and range of a linear transformation T from ℝ² to ℝ³

T[{x_, y_}] := {x, y, 0}

T[{1, 1}]
{1, 1, 0}

The domain of T is ℝ², the codomain of T is ℝ³, and the range of T is the subspace of ℝ³ consisting of all vectors of the form {x, y, 0}.

Dot product

The dot product of two real vectors is the sum of the componentwise products of the vectors. In Mathematica it is computed by the Dot function; the period (.) notation u.v is equivalent, and the same operator also designates matrix multiplication.

Properties of dot products

Dot[u, v] = Dot[v, u] (1)

Dot[u, v + w] = Dot[u, v] + Dot[u, w] (2)

Dot[u, r v + w] = r Dot[u, v] + Dot[u, w] (3)

Dot[r u, s v] = (r s) Dot[u, v] (4)
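The properties can be verified symbolically for vectors in ℝ². A minimal sketch checking property (3):

Clear[r, u1, u2, v1, v2, w1, w2];
u = {u1, u2}; v = {v1, v2}; w = {w1, w2};

Expand[Dot[u, r v + w] - (r Dot[u, v] + Dot[u, w])]
(* 0 *)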

Illustration

◼ The dot product of two vectors in ℝ²

Clear[a, b, v, w]

v = {1, 2};
w = {a, b};

Dot[v, w]
a + 2 b

◼ The dot product of two vectors in ℝ³

Clear[x, a, b, c]


x = {1, 2, 3};
y = {a, b, c};

Dot[x, y]
a + 2 b + 3 c

◼ The dot product of two vectors in ℝ⁵

Clear[a, b, c, d, e, r, s]

r = {1, 2, 3, 4, 5};
s = {a, b, c, d, e};

Dot[r, s]
a + 2 b + 3 c + 4 d + 5 e

◼ The dot product and the standard deviation

data = Range[10];
average = Mean[data];
dot = Dot[(data - average), (data - average)];
sample = Length[data] - 1;

stdevdata = Sqrt[(1/sample) dot]
√(55/6)

stdevdata ⩵ StandardDeviation[data]
True

Manipulation

◼ Exploring the dot product

Manipulate[Dot[{a, b}, {-5, 2}], {a, -3, 3, 1}, {b, -4, 4, 1}]

[Manipulate output: sliders for a and b; the displayed dot product is 7]

We use Manipulate and Dot to explore the dot product. If we let a = -3 and b = -4, then the manipulation shows, for example, that the dot product of the generated vectors is 7.

◼ Sample standard deviations

Manipulate[data = Range[n]; average = Mean[data];
 dot = Dot[(data - average), (data - average)];
 sample = Length[data] - 1;
 stdevdata = Sqrt[(1/sample) dot], {n, 2, 100, 1}]

[Manipulate output: slider for n; for n = 100 the displayed value is 5 √(101/3)]

We use Manipulate, Range, Mean, Dot, Length, and Sqrt to explore sample standard deviations.

StandardDeviation[Range[100]]
5 √(101/3)

For n = 100, the manipulation shows that the sample standard deviation of the list {1, 2, ..., 100} is 5 √(101/3).

Dual space

A linear functional on a real vector space V is a function f : V ⟶ ℝ that preserves linear combinations; for the two-dimensional coordinate space V = ℝ², for example, a linear functional is a function ℝ² ⟶ ℝ. The set of all linear functionals on V forms a vector space V*, the dual space of V. A basis for V* consisting of linear functionals is called a dual basis.


Illustration

◼ A dual space of ℝ³

Let V be the real vector space ℝ³ and consider the following linear functionals on V:

f1[{x_, y_, z_}] := 3 x + y;
f2[{x_, y_, z_}] := y - z;
f3[{x_, y_, z_}] := x + y + 2 z;

We show that the set {f1, f2, f3} is a basis for V*:

Clear[x, y, z, a, b, c];

Expand[a (3 x + y) + b (y - z) + c (x + y + 2 z)]
3 a x + c x + a y + b y + c y - b z + 2 c z

◼ The set of linear functionals {f1, f2, f3} spans V*.

Solve[{d, e, f} == {3 a + c, a + b + c, -b + 2 c}, {a, b, c}]
{{a → (3 d - e - f)/8, b → (-d + 3 e - f)/4, c → (-d + 3 e + 3 f)/8}}

This shows that every linear combination of linear functionals on V can be written uniquely as a linear combination of the linear functionals f1, f2, and f3.

◼ The set of linear functionals {f1, f2, f3} is also linearly independent.

Solve[{3 a + c, a + b + c, -b + 2 c} ⩵ {0, 0, 0}, {a, b, c}]
{{a → 0, b → 0, c → 0}}

This shows that the zero linear functional can only be written as the trivial linear combination of the linear functionals f1, f2, and f3.

◼ Construction of a basis for V for which {f1, f2, f3} is a dual basis.

To show that {f1, f2, f3} is a dual basis, there must exist a basis {e1, e2, e3} for V for which fi(ej) = 1 if i = j and 0 if i ≠ j. Let

B = {e1 = {x1, y1, z1}, e2 = {x2, y2, z2}, e3 = {x3, y3, z3}};

be the required basis. Then {f1, f2, f3} is a dual basis, provided that e1, e2, and e3 are the following vectors:

solution1 = Flatten[Solve[{3 x1 + y1 ⩵ 1, y1 - z1 ⩵ 0, x1 + y1 + 2 z1 ⩵ 0}, {x1, y1, z1}]]
{x1 → 3/8, y1 → -1/8, z1 → -1/8}


solution2 = Flatten[Solve[{3 x2 + y2 ⩵ 0, y2 - z2 ⩵ 1, x2 + y2 + 2 z2 ⩵ 0}, {x2, y2, z2}]]
{x2 → -1/4, y2 → 3/4, z2 → -1/4}

solution3 = Flatten[Solve[{3 x3 + y3 ⩵ 0, y3 - z3 ⩵ 0, x3 + y3 + 2 z3 ⩵ 1}, {x3, y3, z3}]]
{x3 → -1/8, y3 → 3/8, z3 → 3/8}

e1 = {x1, y1, z1} /. solution1
{3/8, -1/8, -1/8}

e2 = {x2, y2, z2} /. solution2
{-1/4, 3/4, -1/4}

e3 = {x3, y3, z3} /. solution3
{-1/8, 3/8, 3/8}

To show that B = {e1, e2, e3} is a basis for V, it suffices to show that the matrix B is invertible.

B = {e1, e2, e3}
{{3/8, -1/8, -1/8}, {-1/4, 3/4, -1/4}, {-1/8, 3/8, 3/8}}

Det[B]
1/8

The following calculations show that {f1, f2, f3} is a dual basis for V with respect to the basis {e1, e2, e3}:

{f1[e1], f1[e2], f1[e3]}
{1, 0, 0}

{f2[e1], f2[e2], f2[e3]}
{0, 1, 0}

{f3[e1], f3[e2], f3[e3]}
{0, 0, 1}
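The nine evaluations can be condensed into a single identity: the table of values fi[ej] must be the 3-by-3 identity matrix. A one-line sketch using the functionals and basis vectors defined above:

Outer[#1[#2] &, {f1, f2, f3}, {e1, e2, e3}, 1] == IdentityMatrix[3]
(* True *)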