Question:

Should the zero vector be excluded from the definition of linearly dependent/independent vectors?


Mathworld.wolfram.com defines

"vectors x_i are linearly dependent iff there exist scalars c_i, not all zero, such that the sum of the products c_i x_i = 0.

If no such scalars exist, then the vectors are said to be linearly independent".

Now if one of the x_i, say x_k, is the zero vector, then the sum is zero for any value of c_k provided all the other c_i are zero. However, x_k can only be expressed as a linear combination of the other vectors by using 0 for every coefficient; and indeed, if the remaining n - 1 vectors are independent of each other, then none of them can be expressed as a linear combination of the others, which seems to contradict the concept of dependence. Should the zero vector be excluded from the definition of linear dependence/independence to avoid this paradox?


2 ANSWERS


  1. This isn't a paradox; you're just slightly confused on the definitions.

    Definition of linear dependence:

    "x_1, ..., x_n are linearly dependent" means "There exist c_1, ..., c_n, not all zero, such that c_1x_1 + ... + c_nx_n = 0."

    Definition of linear combination:

    "y is a linear combination of x_1, ..., x_n" means "There exist c_1, ..., c_n such that y = c_1x_1 + ... + c_nx_n."

    Notice that, in the definition of linear combination, nothing is said about the c_1, ..., c_n being nonzero. So, in particular, the zero vector is a linear combination of *any* set of vectors (just take all the c_1, ..., c_n to be zero).

    You refer to a theorem, which says:

    THEOREM. The vectors x_1, ..., x_n are linearly dependent if and only if some x_i can be written as a linear combination of the other vectors in the set {x_1, ..., x_n}.

    In the event that some x_k = 0, as you suggested, the theorem is still true:

    * x_1, ..., x_n are linearly dependent, because 0x_1 + 0x_2 + ... + 0x_{k - 1} + 1x_k + 0x_{k + 1} + ... + 0x_n = 0.

    * x_k is a linear combination of the other vectors, because x_k = 0x_1 + 0x_2 + ... + 0x_{k - 1} + 0x_{k + 1} + ... + 0x_n.

    This isn't a paradox: the definition of linear combination allows all the weights to be zero.

    The upshot of this is that any set of vectors containing the zero vector is linearly dependent.


  2. Any set of vectors which contains the zero vector cannot be linearly independent. Also, linear dependence does not mean that you can express each vector as a linear combination of the remaining n - 1 vectors. It just tells you that you don't need all n vectors to span the space: some k < n of the vectors are enough. (This does not mean that you can randomly choose any k among the n vectors.) This is just the notion of minimality of a basis. Check the source for more information.
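Both answers conclude that any set containing the zero vector is linearly dependent. As a quick numerical sketch of that claim (not part of either answer, and assuming NumPy), one can use the fact that a set of vectors is linearly dependent iff the rank of the matrix whose columns are those vectors is smaller than the number of vectors:

```python
import numpy as np

# Illustrative vectors: two standard basis vectors plus the zero vector.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
zero = np.zeros(3)

A = np.column_stack([e1, e2, zero])   # columns are the vectors
n_vectors = A.shape[1]                # 3 vectors
rank = np.linalg.matrix_rank(A)       # rank is 2: the zero column adds nothing

# rank < n_vectors, so {e1, e2, 0} is linearly dependent,
# matching the conclusion of both answers.
print(rank < n_vectors)  # prints True
```

The rank test works because rank counts the maximum number of linearly independent columns; a zero column can never contribute, so the rank is always strictly less than the column count.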
