
How to Find Dimension of a Vector Space

Basis and Dimension

Basis

In our previous discussion, we introduced the concepts of span and linear independence.  Loosely speaking, a set of vectors S  =  {v1, ... , vk} spans a vector space V if S contains enough of the right vectors, while S is linearly independent if it contains no redundancies.  We now combine the two concepts.

Definition of Basis

Let V be a vector space and S  =  {v1, v2, ... , vk} be a subset of V.  Then S is a basis for V if the following two statements are true.

  1. S spans V .
  2. S is a linearly independent set of vectors in V .

Example

Let V  =  Rn and let S  =  {e1, e2, ... , en}, where ei has i-th component equal to 1 and all other components equal to 0.  For example

e2  =  (0,1,0,0,...,0)

Then S is a basis for V, called the standard basis.
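For readers who want to experiment, here is a small NumPy sketch (not part of the original text) of the standard basis; the choice n = 5 is arbitrary.

```python
import numpy as np

# The standard basis of R^n, illustrated for n = 5: the i-th basis
# vector e_i is row i of the n x n identity matrix.
n = 5
E = np.eye(n)

e2 = E[1]            # e_2 = (0, 1, 0, 0, 0); NumPy rows are 0-indexed
print(e2)

# Every x in R^n is the combination x_1*e_1 + ... + x_n*e_n:
x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
assert np.allclose(sum(x[i] * E[i] for i in range(n)), x)
```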


Example

Let V  =  P3, the space of polynomials of degree at most 3, and let S  =  {1, t, t^2, t^3}.  Show that S is a basis for V.

Solution

We must show both linear independence and span.

Linear Independence:

Let

        c1(1) + c2(t) + c3(t^2) + c4(t^3)  =  0

Then, since a polynomial is zero if and only if all of its coefficients are zero, we have

    c1  =  c2  =  c3  =  c4  =  0

Hence S is a linearly independent set of vectors in V.

Span

A general vector in P3 is given by

        a + bt + ct^2 + dt^3

We need to find constants c1, c2, c3, c4 such that

        c1(1) + c2(t) + c3(t^2) + c4(t^3)  =  a + bt + ct^2 + dt^3

We just let

    c1  =  a,   c2  =  b,   c3  =  c,   c4  =  d

Hence S spans V.

We can conclude that S is a basis for V.
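As an illustrative sketch, the argument above can be checked numerically by identifying each polynomial in P3 with its coefficient vector in R^4 (an assumed encoding, not from the original text):

```python
import numpy as np

# Identify a polynomial a + b t + c t^2 + d t^3 in P3 with its coefficient
# vector (a, b, c, d).  The basis {1, t, t^2, t^3} then corresponds to the
# columns of the 4 x 4 identity matrix, which has rank 4, so the four
# polynomials are linearly independent and span P3.
B = np.column_stack([[1, 0, 0, 0],   # 1
                     [0, 1, 0, 0],   # t
                     [0, 0, 1, 0],   # t^2
                     [0, 0, 0, 1]])  # t^3
assert np.linalg.matrix_rank(B) == 4

# Coordinates of p(t) = 2 - 3t + 5t^3 relative to this basis:
p = np.array([2.0, -3.0, 0.0, 5.0])
coords = np.linalg.solve(B, p)
print(coords)   # equals p itself, since B is the identity
```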

In general, the basis {1, t, t^2, ... , t^n} is called the standard basis for Pn.


Example

Show that S  =  {v1, v2, v3, v4}, where

     v1  =  | 1  2 |    v2  =  | 1  0 |    v3  =  | 1  0 |    v4  =  | 2  1 |
            | 0  0 |           | 2  0 |           | 0  2 |           | 0  0 |

is a basis for V  =  M2x2.

Solution

We need to show that S spans V and is linearly independent.

Linear Independence

We write

     c1 v1 + c2 v2 + c3 v3 + c4 v4  =  0

where 0 is the 2x2 zero matrix.  Comparing entries gives the four equations

     c1 + c2 + c3 + 2c4  =  0
    2c1            +  c4  =  0
         2c2             =  0
              2c3        =  0

These equations correspond to the homogeneous matrix equation

     Ac  =  0

with

     A  =  | 1  1  1  2 |
           | 2  0  0  1 |
           | 0  2  0  0 |
           | 0  0  2  0 |

and c  =  (c1, c2, c3, c4)^T.  We have

     det A  =  -12

Since the determinant is nonzero, we can conclude that only the trivial solution exists.  That is

    c1  =  c2  =  c3  =  c4  =  0

Span

We set

     c1 v1 + c2 v2 + c3 v3 + c4 v4  =  | x1  x2 |
                                       | x3  x4 |

which gives the equations

     c1 + c2 + c3 + 2c4  =  x1
    2c1            +  c4  =  x2
         2c2             =  x3
              2c3        =  x4

The corresponding matrix equation is

     Ac  = x

Since A is nonsingular, this has a unique solution, namely

c  =  A^(-1) x

Hence S spans V.

We conclude that S is a basis for V.
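The whole computation can be verified with NumPy, stacking each matrix vi as a column of A; the test matrix x below is an arbitrary choice:

```python
import numpy as np

# Each 2x2 matrix v_i, read row by row, becomes a column of A.
v1 = np.array([[1, 2], [0, 0]])
v2 = np.array([[1, 0], [2, 0]])
v3 = np.array([[1, 0], [0, 2]])
v4 = np.array([[2, 1], [0, 0]])

A = np.column_stack([v.flatten() for v in (v1, v2, v3, v4)])
print(np.linalg.det(A))          # approximately -12, nonzero, so S is a basis

# Span: write an arbitrary matrix x as a combination c1 v1 + ... + c4 v4.
x = np.array([[5, 7], [2, 4]])
c = np.linalg.solve(A, x.flatten())
assert np.allclose(sum(ci * vi for ci, vi in zip(c, (v1, v2, v3, v4))), x)
```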


If S spans V then we know that every vector in V can be written as a linear combination of vectors in S.  If S is a basis, even more is true.

Theorem

Let S  =  {v1, v2, ... , vn} be a basis for V.  Then every vector in V can be written uniquely as a linear combination of vectors in S.

Remark: What is new here is the word uniquely.

Proof

Suppose that

v  =  a1 v1 + ... + an vn  =  b1 v1 + ... + bn vn

then

        (a1 - b1)v1 + ... + (an - bn)vn  =  0

Since S is a basis for V, it is linearly independent and the above equation implies that all the coefficients are zero.  That is

        a1 - b1  =  ... =  an - bn  =  0

We can conclude that

        a1  =  b1, ... , an  =  bn
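A small numerical illustration of this uniqueness (the basis matrix B and vector v below are arbitrary choices, not from the original text):

```python
import numpy as np

# The columns of B form a basis of R^3 (det B = 2, nonzero), so the
# coordinate vector of v is the unique solution of B c = v.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 3.0])

c = np.linalg.solve(B, v)        # unique since B is invertible
assert np.allclose(B @ c, v)
print(c)                         # the coordinates (1, 1, 2)
```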


Since linear independence means having no redundant vectors, it should be no surprise that if S  =  {v1, v2, ... , vk} spans V but is not linearly independent, then we can remove the redundant vectors until we arrive at a basis.  Indeed, if S is not linearly independent, then one of its vectors is a linear combination of the rest.  Without loss of generality, we can assume that this vector is vk, so that

vk  =  a1 v1 + ... + ak-1 vk-1

If v is any vector in V, we can write

v  =  c1 v1 + ... + ck vk  =  c1 v1 + ... + ck-1 vk-1 + ck(a1 v1 + ... + ak-1 vk-1)

which is a linear combination of the vectors in the smaller set

S'  =  {v1, v2, ... , vk-1}

If S' is not linearly independent, we can remove another vector in the same way.  We can continue this process until the remaining vectors are linearly independent, at which point they form a basis.

We have proved the following theorem.

Theorem

If S spans a vector space V, then there is a subset of S that is a basis for V.
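The removal process in this proof can be sketched for column vectors in R^m, using matrix rank as the independence test (a NumPy-based illustration, not part of the original text):

```python
import numpy as np

# Walk through a spanning set and keep a vector only if it increases the
# rank, i.e. it is not a linear combination of the vectors kept so far.
def prune_to_basis(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            kept = candidate
    return kept

# (1,1,0) = (1,0,0) + (0,1,0) is redundant and gets dropped.
S = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
basis = prune_to_basis(S)
print(len(basis))   # 3
```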


Dimension

We have seen that any vector space containing at least two vectors contains infinitely many, so it is uninteresting to ask how many vectors a vector space contains.  However, there is still a way to measure the size of a vector space; for example, R3 should be larger than R2.  We call this size the dimension of the vector space and define it as the number of vectors needed to form a basis.  To show that the dimension is well defined, we need the following theorem.

Theorem

If S  =  {v1, v2, ... , vn} is a basis for a vector space V and T  =  {w1, w2, ... , wk} is a linearly independent set of vectors in V, then k ≤ n.

Remark: If S and T are both bases for V, then k ≤ n and n ≤ k, so k = n.  This says that every basis has the same number of vectors; hence the dimension is well defined.

The dimension of a vector space V is the number of vectors in a basis for V.  If V has a finite basis, we call V a finite-dimensional vector space; otherwise, we call V an infinite-dimensional vector space.

Proof

If k > n, then we consider the set

R1  =  {w1, v1, v2, ... , vn}

Since S spans V, w1 can be written as a linear combination of the vi's:

w1  =  c1 v1 + ... + cn vn

Since T is linearly independent, w1 is nonzero, so at least one of the coefficients ci is nonzero.  Without loss of generality, assume it is c1.  We can then solve for v1 and write v1 as a linear combination of w1, v2, ... , vn.  Hence

     T1  =  {w1, v2, ... , vn}

is a basis for V.  Now let

R2  =  {w1, w2, v2, ... , vn}

Similarly, w2 can be written as a linear combination of the vectors in T1, with at least one coefficient nonzero.  Since w1 and w2 are linearly independent, at least one of the vi coefficients must be nonzero.  We can assume that the coefficient of v2 is nonzero, and as before we see that

T2  =  {w1, w2, v3, ... , vn}

is a basis for V.  Continuing this process we see that

     Tn  =  {w1, w2, ... , wn}

is a basis for V.  But then Tn spans V, and since k > n there is a vector wn+1 in T, which must be a linear combination of the vectors in Tn.  This contradicts the linear independence of the w's.  Hence k ≤ n.

Example

Since

     E  =  {e1, e2, ... , en}

is a basis for Rn, we have dim Rn  =  n.


Example

        dim Pn  =  n + 1

since

      E  =  {1, t, t^2, ... , t^n}

is a basis for Pn.


Example

dim Mmxn  =  mn

We will leave it to you to find a basis containing mn vectors.
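One natural choice consists of the mn "matrix units", each with a single 1 entry; here is a NumPy sketch for m = 2, n = 3 (offered as a hint rather than the unique answer):

```python
import numpy as np

# Build the mn matrix units E_ij, each with a 1 in position (i, j)
# and zeros elsewhere.
m, n = 2, 3
units = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0
        units.append(E)

# Stacking each flattened E_ij as a column gives an mn x mn matrix of
# full rank, so the units are linearly independent; since any m x n
# matrix is the combination sum_ij x_ij E_ij, they also span M_{m x n}.
A = np.column_stack([E.flatten() for E in units])
assert np.linalg.matrix_rank(A) == m * n
print(len(units))   # 6
```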

If we have a set of linearly independent vectors

S  =  {v1, v2, ... , vk}

with k < n, where n = dim V, then S is not a basis, since every basis of V contains exactly n vectors.  Because S is linearly independent, it must fail to span V; hence there is a vector vk+1 in V that is not in the span of S.  Let

      S1  =  {v1, v2, ... , vk, vk+1}

Then S1 is again linearly independent, since vk+1 is not a linear combination of the other vectors.  We can continue this process until we obtain a basis.

Theorem

Let

        S  =  {v1, v2, ... , vk}

be a linearly independent set of vectors in a vector space V .  Then there are vectors

vk+1, ... , vn

such that

     {v1, v2, ... , vk, vk+1, ... , vn}

is a basis for V.
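For V = R^n, the extension process can be sketched by trying the standard basis vectors one at a time (an illustrative NumPy implementation under that assumption, not part of the original text):

```python
import numpy as np

# Starting from a linearly independent set in R^n, append a standard
# basis vector whenever it enlarges the rank, stopping at n vectors.
def extend_to_basis(vectors, n):
    basis = list(vectors)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        if np.linalg.matrix_rank(np.column_stack(basis + [e])) > len(basis):
            basis.append(e)
    return basis

S = [np.array([1.0, 1.0, 0.0])]          # independent, but not a basis of R^3
B = extend_to_basis(S, 3)
assert np.linalg.matrix_rank(np.column_stack(B)) == 3
print(len(B))   # 3
```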


We finish this discussion with some very good news.  We have seen that to find out whether a set is a basis for a vector space, we need to check both linear independence and span.  We also know that if a set does not contain the right number of vectors, it cannot be a basis.  If it does contain the right number, we have the following theorem.

Theorem

Let V be an n-dimensional vector space and let S be a set of n vectors in V.  Then the following are equivalent.

  1. S is a basis for V .
  2. S is linearly independent.
  3. S spans V .
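In R^n, this theorem reduces the basis test for n vectors to a single determinant (or rank) check; a minimal NumPy sketch:

```python
import numpy as np

# With exactly n vectors in n-dimensional R^n, one check suffices:
# put the vectors as columns of a square matrix and test invertibility.
def is_basis(vectors):
    A = np.column_stack(vectors)
    if A.shape[0] != A.shape[1]:
        return False                      # wrong number of vectors
    return not np.isclose(np.linalg.det(A), 0.0)

print(is_basis([np.array([1.0, 2.0]), np.array([3.0, 4.0])]))   # True
print(is_basis([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))   # False
```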




Source: https://ltcconline.net/greenl/courses/203/Vectors/basisDimension.htm