
Advanced Calculus of Several Variables (1973)

Part I. Euclidean Space and Linear Mappings

Chapter 2. SUBSPACES OF Rn

In this section we will define the dimension of a vector space, and then show that Rn has precisely n − 1 types of proper subspaces (that is, subspaces other than 0 and Rn itself), namely one of each dimension 1 through n − 1.

In order to define dimension, we need the concept of linear independence. The vectors v1, v2, . . . , vk are said to be linearly independent provided that no one of them is a linear combination of the others; otherwise they are linearly dependent. The following proposition asserts that the vectors v1, . . . , vk are linearly independent if and only if x1v1 + x2v2 + · · · + xkvk = 0 implies that x1 = x2 = · · · = xk = 0. For example, since x1e1 + x2e2 + · · · + xnen = (x1, x2, . . . , xn), it follows immediately that the standard basis vectors e1, e2, . . . , en in Rn are linearly independent.

Proposition 2.1 The vectors v1, v2, . . . , vk are linearly dependent if and only if there exist numbers x1, x2, . . . , xk, not all zero, such that x1 v1 + x2 v2 + · · · + xk vk = 0.

PROOF If there exist such numbers, suppose, for example, that x1 ≠ 0. Then

v1 = −(x2/x1)v2 − (x3/x1)v3 − · · · − (xk/x1)vk,

so v1, v2, . . . , vk are linearly dependent. If, conversely, v1 = a2v2 + · · · + akvk, then we have x1v1 + x2v2 + · · · + xkvk = 0 with x1 = −1 ≠ 0 and xi = ai for i > 1.

∎

Example 1 To show that the vectors x = (1, 1, 0), y = (1, 1, 1), z = (0, 1, 1) are linearly independent, suppose that ax + by + cz = 0. By taking components of this vector equation we obtain the three scalar equations

a + b = 0
a + b + c = 0
b + c = 0.

Subtracting the first from the second, we obtain c = 0. The last equation then gives b = 0, and finally the first one gives a = 0.

Example 2 The vectors x = (1, 1, 0), y = (1, 2, 1), z = (0, 1, 1) are linearly dependent, because x − y + z = 0.
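
As a quick computational cross-check of Examples 1 and 2 (a sketch only, assuming NumPy is available; it relies on the standard fact that the rank of a matrix equals the largest number of linearly independent columns), one may compute:

import numpy as np

# The columns of each matrix are the three vectors of the example.
ex1 = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]]).T   # Example 1
ex2 = np.array([[1, 1, 0], [1, 2, 1], [0, 1, 1]]).T   # Example 2

print(np.linalg.matrix_rank(ex1))   # 3: the vectors are independent
print(np.linalg.matrix_rank(ex2))   # 2: the vectors are dependent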

It is easily verified (Exercise 2.7) that any two collinear vectors, and any three coplanar vectors, are linearly dependent. This motivates the following definition of the dimension of a vector space. The vector space V has dimension n, dim V = n, provided that V contains a set of n linearly independent vectors, while any n + 1 vectors in V are linearly dependent; if there is no integer n for which this is true, then V is said to be infinite-dimensional. Thus the dimension of a finite-dimensional vector space is the largest number of linearly independent vectors which it contains; an infinite-dimensional vector space is one that contains n linearly independent vectors for every positive integer n.

Example 3 Consider the vector space F of all real-valued functions on R. For every n, the functions 1, x, x2, . . . , xn are linearly independent, because a polynomial a0 + a1x + · · · + anxn can vanish identically only if all of its coefficients are zero (a nonzero polynomial of degree at most n has at most n roots). Therefore F is infinite-dimensional.
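
The same idea gives a concrete check of this example (again a sketch assuming NumPy): the values of 1, x, . . . , x5 at six distinct points form a Vandermonde matrix of full rank, so no nontrivial linear combination of these functions vanishes even at those six points, much less identically.

import numpy as np

# Values of 1, x, ..., x^5 at six distinct points form a
# Vandermonde matrix; full rank means the six functions are
# linearly independent. The same works for every n.
pts = np.arange(6.0)
V = np.vander(pts, 6, increasing=True)
print(np.linalg.matrix_rank(V))   # 6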

One certainly expects the above definition of dimension to imply that Euclidean n-space Rn does indeed have dimension n. We see immediately that its dimension is at least n, since it contains the n linearly independent vectors e1, . . . , en. To show that the dimension of Rn is precisely n, we must prove that any n + 1 vectors in Rn are linearly dependent.

Suppose that v1, . . . , vk are k > n vectors in Rn, and write

vj = (a1j, a2j, . . . , anj),    j = 1, . . . , k.

We want to find real numbers x1, . . . , xk, not all zero, such that

x1v1 + x2v2 + · · · + xkvk = 0.

This will be the case if ai1x1 + ai2x2 + · · · + aikxk = 0 for each i = 1, . . . , n. Thus we need to find a nontrivial solution of the homogeneous linear equations

a11x1 + a12x2 + · · · + a1kxk = 0
a21x1 + a22x2 + · · · + a2kxk = 0        (1)
an1x1 + an2x2 + · · · + ankxk = 0

By a nontrivial solution (x1, x2, . . . , xk) of the system (1) is meant one for which not all of the xi are zero. But k > n, and (1) is a system of n homogeneous linear equations in the k unknowns x1, . . . , xk (homogeneous meaning that the constants on the right-hand sides of the equations are all zero).

It is a basic fact of linear algebra that any system of homogeneous linear equations, with more unknowns than equations, has a nontrivial solution. The proof of this fact is an application of the elementary algebraic technique of elimination of variables. Before stating and proving the general theorem, we consider a special case.

Example 4 Consider the following three equations in four unknowns:

x1 + x2 + x3 + x4 = 0
x1 + 3x2 + 2x3 + 2x4 = 0        (2)
2x1 + 4x2 + 4x3 + 5x4 = 0

We can eliminate x1 from the last two equations of (2) by subtracting the first equation from the second one, and twice the first equation from the third one. This gives two equations in three unknowns:

2x2 + x3 + x4 = 0        (3)
2x2 + 2x3 + 3x4 = 0

Subtraction of the first equation of (3) from the second one gives the single equation

x3 + 2x4 = 0

in two unknowns. We can now choose x4 arbitrarily. For instance, if x4 = 1, then x3 = −2. The first equation of (3) then gives x2 = 1/2, and finally the first equation of (2) gives x1 = 1/2. So we have found the nontrivial solution (1/2, 1/2, −2, 1) of the system (2).

The procedure illustrated in this example can be applied to the general case of n equations in the unknowns x1, . . . , xk, k > n. First we order the n equations so that the first equation contains x1, and then eliminate x1 from the remaining equations by subtracting the appropriate multiple of the first equation from each of them. This gives a system of n − 1 homogeneous linear equations in the k − 1 variables x2, . . . , xk. Similarly we eliminate x2 from the last n − 2 of these n − 1 equations by subtracting multiples of the first one, obtaining n − 2 equations in the k − 2 variables x3, x4, . . . , xk. After n − 2 steps of this sort, we end up with a single homogeneous linear equation in the k − n + 1 unknowns xn, xn+1, . . . , xk. We can then choose arbitrary nontrivial values for the “extra” variables xn+1, xn+2, . . . , xk (such as xn+1 = 1, xn+2 = · · · = xk = 0), solve the final equation for xn, and finally proceed backward to solve successively for each of the eliminated variables xn−1, xn−2, . . . , x1. The reader may (if he likes) formalize this procedure to give a proof, by induction on the number n of equations, of the following result.
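
For readers who wish to experiment, the procedure just described is easy to mechanize. The following Python sketch (ours, not part of the text; it uses the standard fractions module for exact arithmetic, and the name nontrivial_solution is illustrative) carries out the elimination and then proceeds backward exactly as above:

from fractions import Fraction

def nontrivial_solution(A):
    # A lists the rows of n homogeneous equations in k > n unknowns.
    n, k = len(A), len(A[0])
    A = [[Fraction(a) for a in row] for row in A]
    pivots = []                          # pivot column of each used row
    row = 0
    for col in range(k):
        if row == n:
            break
        # order the equations so that one containing this variable comes first
        piv = next((r for r in range(row, n) if A[r][col] != 0), None)
        if piv is None:
            continue                     # variable absent: it stays free
        A[row], A[piv] = A[piv], A[row]
        # eliminate the variable from the equations below
        for r in range(row + 1, n):
            m = A[r][col] / A[row][col]
            A[r] = [a - m * b for a, b in zip(A[r], A[row])]
        pivots.append(col)
        row += 1
    # give the first "extra" (free) variable the value 1, the rest 0
    free = [c for c in range(k) if c not in pivots]
    x = [Fraction(0)] * k
    x[free[0]] = Fraction(1)
    # solve successively for each of the eliminated variables
    for r in reversed(range(len(pivots))):
        c = pivots[r]
        s = sum(A[r][j] * x[j] for j in range(c + 1, k))
        x[c] = -s / A[r][c]
    return x

# Applied to the system (2) of Example 4:
print(nontrivial_solution([[1, 1, 1, 1], [1, 3, 2, 2], [2, 4, 4, 5]]))
# [Fraction(1, 2), Fraction(1, 2), Fraction(-2, 1), Fraction(1, 1)]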

Theorem 2.2 If k > n, then any system of n homogeneous linear equations in k unknowns has a nontrivial solution.

By the discussion preceding Eqs. (1) we now have the desired result that dim Rn = n.

Corollary 2.3 Any n + 1 vectors in Rn are linearly dependent.

We have seen that the linearly independent vectors e1, e2, . . . , en generate Rn. A set of linearly independent vectors that generates the vector space V is called a basis for V. Since x = (x1, x2, . . . , xn) = x1e1 + x2e2 + · · · + xnen, it is clear that the basis vectors e1, . . . , en generate Rn uniquely; that is, if x = y1e1 + y2e2 + · · · + ynen also, then xi = yi for each i. Thus each vector in Rn can be expressed in one and only one way as a linear combination of e1, . . . , en. Any set of n linearly independent vectors in an n-dimensional vector space has this property.

Theorem 2.4 If the vectors v1, . . . , vn in the n-dimensional vector space V are linearly independent, then they constitute a basis for V, and furthermore generate V uniquely.

PROOF Given v ∈ V, the n + 1 vectors v, v1, . . . , vn are linearly dependent (since dim V = n), so by Proposition 2.1 there exist numbers x, x1, . . . , xn, not all zero, such that

xv + x1v1 + x2v2 + · · · + xnvn = 0.

If x = 0, then the fact that v1, . . . , vn are linearly independent would imply that x1 = · · · = xn = 0, contradicting the choice of these numbers. Therefore x ≠ 0, so we can solve for v:

v = −(x1/x)v1 − (x2/x)v2 − · · · − (xn/x)vn.

Thus the vectors v1, . . . , vn generate V, and therefore constitute a basis for V. To show that they generate V uniquely, suppose that

v = a1v1 + a2v2 + · · · + anvn = a1′v1 + a2′v2 + · · · + an′vn.

Then

(a1 − a1′)v1 + (a2 − a2′)v2 + · · · + (an − an′)vn = 0.

So, since v1, . . . , vn are linearly independent, it follows that ai − ai′ = 0, or ai = ai′, for each i.

∎
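
Computationally, expressing a vector in terms of a basis is a matter of solving a square linear system. As an illustration (a sketch assuming NumPy; the sample vector v is ours), take the independent vectors of Example 1 as a basis for R3:

import numpy as np

# The basis vectors of Example 1 are the columns of B.
B = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]]).T
v = np.array([2.0, 3.0, 1.0])

coords = np.linalg.solve(B, v)   # unique, by Theorem 2.4
print(coords)                    # [2. 0. 1.], i.e. v = 2x + 0y + 1z
print(B @ coords)                # recovers v = (2, 3, 1)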

There remains the possibility that Rn has a basis which contains fewer than n elements. But the following theorem shows that this cannot happen.

Theorem 2.5 If dim V = n, then each basis for V consists of exactly n vectors.

PROOF Let w1, w2, . . . , wn be n linearly independent vectors in V. If there were a basis v1, v2, . . . , vm for V with m < n, then there would exist numbers {aij} such that

wj = a1jv1 + a2jv2 + · · · + amjvm,    j = 1, . . . , n.

Since m < n, Theorem 2.2 supplies numbers x1, . . . , xn, not all zero, such that

a11x1 + a12x2 + · · · + a1nxn = 0
am1x1 + am2x2 + · · · + amnxn = 0.

But this implies that

x1w1 + x2w2 + · · · + xnwn = (a11x1 + · · · + a1nxn)v1 + · · · + (am1x1 + · · · + amnxn)vm = 0,

which contradicts the fact that w1, . . . , wn are linearly independent. Consequently no basis for V can have m < n elements.

∎

We can now completely describe the general situation as regards subspaces of Rn. If V is a subspace of Rn, then V has some finite dimension k with 0 ≤ k ≤ n (by Corollary 2.3), and if k = n, then V = Rn (by Theorem 2.4). If k > 0, then any k linearly independent vectors in V generate V (Theorem 2.4), and no basis for V contains fewer than k vectors (Theorem 2.5). For example, the proper subspaces of R3 are its 1-dimensional subspaces, the lines through the origin, and its 2-dimensional subspaces, the planes through the origin.
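
In the same spirit, the dimension of the subspace generated by given vectors can be computed as a matrix rank (a sketch assuming NumPy; the three sample vectors are ours, the third being the first minus the second):

import numpy as np

# Rows are three vectors in R^4; the third is the first minus the
# second, so they generate a subspace of dimension 2.
W = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, -1, 0]])
print(np.linalg.matrix_rank(W))   # 2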

Exercises

2.1 Why is it true that the vectors v1, . . . , vk are linearly dependent if any one of them is zero? If any subset of them is linearly dependent?

2.2 Which of the following sets of vectors are bases for the appropriate space Rn?

(a) (1, 0) and (1, 1).

(b) (1, 0, 0), (1, 1, 0), and (0, 0, 1).

(c) (1, 1, 1), (1, 1, 0), and (1, 0, 0).

(d) (1, 1, 1, 0), (1, 0, 0, 0), (0, 1, 0, 0), and (0, 0, 1, 0).

(e) (1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 0, 0), and (1, 0, 0, 0).

2.3 Find the dimension of the subspace V of R4 that is generated by the vectors (0, 1, 0, 1), (1, 0, 1, 0), and (1, 1, 1, 1).

2.4 Show that the vectors (1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1) form a basis for the subspace V of R4 which is defined by the equation x1 + x2 + x3 − x4 = 0.

2.5 Show that any set v1, . . . , vk of linearly independent vectors in a vector space V can be extended to a basis for V. That is, if k < n = dim V, then there exist vectors vk+1, . . . , vn in V such that v1, . . . , vn is a basis for V.

2.6 Show that Theorem 2.5 is equivalent to the following theorem: Suppose that the equations

a11x1 + a12x2 + · · · + a1nxn = 0
an1x1 + an2x2 + · · · + annxn = 0

have only the trivial solution x1 = · · · = xn = 0. Then, for each b = (b1, . . . , bn), the equations

a11x1 + a12x2 + · · · + a1nxn = b1
an1x1 + an2x2 + · · · + annxn = bn

have a unique solution. Hint: Consider the vectors aj = (a1j, a2j, . . . , anj), j = 1, . . . , n.

2.7 Verify that any two collinear vectors, and any three coplanar vectors, are linearly dependent.

2.8 Show that the vector space of all real-valued functions on the set {1, 2, . . . , n} may be identified with Rn, since each such function φ may be regarded as the n-tuple (φ(1), φ(2), . . . , φ(n)).