Linear Independence and Linear Dependence
Goals
Understand linear independence and linear dependence from three perspectives: definition, geometric meaning, and determination methods. Common stumbling blocks for beginners are addressed with particular care.
1. Intuitive Understanding
1.1 Are there "redundant" vectors?
Given a collection of vectors, if any one of them can be built from combinations of the others, that vector is "redundant".
Example: Consider $\boldsymbol{v}_1 = (1, 0)$, $\boldsymbol{v}_2 = (0, 1)$, $\boldsymbol{v}_3 = (2, 3)$.
Since $\boldsymbol{v}_3 = 2\boldsymbol{v}_1 + 3\boldsymbol{v}_2$, the vector $\boldsymbol{v}_3$ is "redundant".
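This redundancy can be checked numerically. A minimal sketch using NumPy, with the three vectors from the example above:

```python
import numpy as np

v1 = np.array([1, 0])
v2 = np.array([0, 1])
v3 = np.array([2, 3])

# v3 is "redundant": it equals a combination of v1 and v2
print(np.array_equal(2 * v1 + 3 * v2, v3))  # True
```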
1.2 Meaning of Linear Independence
Linearly independent: No vector can be expressed as a combination of the others.
Linearly dependent: At least one vector can be expressed as a combination of the others.
2. Formal Definition
2.1 Linear Combination
A linear combination of vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_k$ is:
$$c_1\boldsymbol{v}_1 + c_2\boldsymbol{v}_2 + \cdots + c_k\boldsymbol{v}_k$$

The scalars $c_1, \ldots, c_k$ are called coefficients.
2.2 Definition of Linear Independence
Definition: Vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_k$ are linearly independent if
$$c_1\boldsymbol{v}_1 + c_2\boldsymbol{v}_2 + \cdots + c_k\boldsymbol{v}_k = \boldsymbol{0}$$
holds only when $c_1 = c_2 = \cdots = c_k = 0$.
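The definition can be tested directly when the vectors are stacked as the columns of a square matrix $A$: solving $A\boldsymbol{c} = \boldsymbol{0}$ should return only $\boldsymbol{c} = \boldsymbol{0}$. A small sketch (assuming the column matrix is invertible, so `np.linalg.solve` applies):

```python
import numpy as np

# Columns are v1 = (1, 0) and v2 = (1, 1): linearly independent
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Solve A c = 0; for independent columns the only solution is c = 0
c = np.linalg.solve(A, np.zeros(2))
print(c)  # [0. 0.]
```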
2.3 Definition of Linear Dependence
Definition: Vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_k$ are linearly dependent if there exist coefficients $c_1, \ldots, c_k$, not all zero (i.e., non-trivial), such that
\begin{equation}c_1\boldsymbol{v}_1 + c_2\boldsymbol{v}_2 + \cdots + c_k\boldsymbol{v}_k = \boldsymbol{0} \label{eq:linearly-dependent}\end{equation}
2.4 Equivalent Formulation
Linearly dependent ⇔ at least one vector can be written as a linear combination of the others.
This can be shown by rearranging the defining equation $\eqref{eq:linearly-dependent}$. Assume $c_1 \neq 0$ among the non-trivial coefficients (by relabelling, we can always put a non-zero one first):
- Move $c_2\boldsymbol{v}_2 + \cdots + c_k\boldsymbol{v}_k$ in $\eqref{eq:linearly-dependent}$ to the right-hand side: $$c_1\boldsymbol{v}_1 = -c_2\boldsymbol{v}_2 - \cdots - c_k\boldsymbol{v}_k$$
- Divide both sides by $c_1 \neq 0$: $$\boldsymbol{v}_1 = -\dfrac{c_2}{c_1}\boldsymbol{v}_2 - \cdots - \dfrac{c_k}{c_1}\boldsymbol{v}_k$$
Thus $\boldsymbol{v}_1$ is expressed as a linear combination of the others. The same argument works for any non-zero $c_i$, so "a non-trivial linear combination equals zero" and "at least one vector is a linear combination of the others" are equivalent.
3. Geometric Meaning
3.1 The 2-Dimensional Case
Two vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$ are:
- Linearly independent: not on the same line (they span a plane)
- Linearly dependent: on the same line (parallel, or one of them is the zero vector)
3.2 The 3-Dimensional Case
Three vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\boldsymbol{v}_3$ are:
- Linearly independent: not on the same plane (they span the whole space)
- Linearly dependent: on the same plane
3.3 Generalization
In $n$-dimensional space:
- Any set of $n+1$ or more vectors is necessarily linearly dependent
- $n$ linearly independent vectors span the whole space
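The first claim can be illustrated numerically: an $n \times (n+1)$ column matrix has rank at most $n$, so its columns can never all be independent. A sketch with three vectors in $\mathbb{R}^2$ (`matrix_rank` is previewed here; it is introduced properly in §4):

```python
import numpy as np

# Three vectors in 2-dimensional space, stacked as columns
A = np.column_stack([(1, 0), (0, 1), (2, 3)])  # shape (2, 3)

# rank <= 2 < 3 columns, so the set must be linearly dependent
print(np.linalg.matrix_rank(A))  # 2
```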
Recap: "basis" (full treatment in Chapter 1 §5)
A basis of a space $V$ is a set of vectors $\{\boldsymbol{e}_1, \ldots, \boldsymbol{e}_n\}$ satisfying both:
- Linearly independent — no $\boldsymbol{e}_i$ can be built from the others (no waste)
- Spans $V$ — every vector in $V$ is a linear combination of $\boldsymbol{e}_1, \ldots, \boldsymbol{e}_n$ (reaches everywhere)
Intuitively, "a minimal toolkit that covers the whole space". Too few and it fails to cover; too many and there is redundancy.
Examples:
- Standard basis of $\mathbb{R}^2$: $\{(1,0), (0,1)\}$ — 2 vectors span the plane
- Standard basis of $\mathbb{R}^3$: $\{(1,0,0), (0,1,0), (0,0,1)\}$ — 3 vectors span 3-space
- In $\mathbb{R}^2$, $\{(1,0), (1,1)\}$ is also a basis (non-standard bases are possible)
- In $\mathbb{R}^2$, $\{(1,0), (2,0)\}$ is not a basis (linearly dependent; cannot reach the $y$ direction)
The number of vectors in a basis is called the dimension of the space; $\mathbb{R}^n$ has dimension $n$. Non-uniqueness of a basis, plus bases for polynomial / matrix / function spaces are discussed in Chapter 1 §5.
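The two $\mathbb{R}^2$ examples above can be checked with a determinant: $n$ vectors in $\mathbb{R}^n$ form a basis exactly when the column matrix is invertible (this uses the determinant test of §4.2). A minimal sketch; `is_basis` is a hypothetical helper, not a library function:

```python
import numpy as np

def is_basis(*vectors):
    """n vectors in R^n form a basis iff det of the column matrix is non-zero."""
    A = np.column_stack(vectors).astype(float)
    return A.shape[0] == A.shape[1] and not np.isclose(np.linalg.det(A), 0.0)

print(is_basis((1, 0), (1, 1)))  # True  (a non-standard basis)
print(is_basis((1, 0), (2, 0)))  # False (linearly dependent)
```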
4. Determination Methods
Prerequisites for this section — We use the rank of a matrix and the determinant (det) below. Detailed definitions:
- Rank: the maximum number of linearly independent column vectors (equivalently, row vectors). Details in Chapter 11.
- Determinant: a scalar defined for square matrices, with $\det(A) \neq 0$ ⇔ columns are linearly independent. Derivation and computation in Chapters 3–5; a gentler introduction in the Intro §7.
In this section we introduce them as mechanical tools for deciding linear independence.
4.1 Using the Rank of a Matrix
Arrange the vectors as columns of a matrix $A = (\boldsymbol{v}_1 | \boldsymbol{v}_2 | \cdots | \boldsymbol{v}_k)$.
Test: $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_k$ are linearly independent ⇔ $\mathrm{rank}(A) = k$.
Intuitively, $\mathrm{rank}(A)$ is "the dimension of the space spanned by the columns of $A$". If all $k$ vectors are linearly independent, the rank is $k$; if there is any linear dependence, the rank is less than $k$.
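This test works for any number of vectors in any dimension; the matrix need not be square. A sketch using NumPy's `matrix_rank` (the helper `are_independent` is a hypothetical name for illustration):

```python
import numpy as np

def are_independent(*vectors):
    """Columns v1..vk are linearly independent iff rank(A) equals k."""
    A = np.column_stack(vectors).astype(float)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(are_independent((1, 0, 0), (0, 1, 0)))             # True:  rank 2 == 2
print(are_independent((1, 0, 0), (0, 1, 0), (1, 1, 0)))  # False: rank 2 < 3
```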
4.2 The Square Matrix Case
When the number of vectors equals the dimension ($n$ vectors in $n$-dimensional space):
Test: $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_n$ are linearly independent ⇔ $\det(A) \neq 0$.
The determinant vanishes exactly when "the space spanned by the columns has dimension less than $n$", which is precisely the linearly dependent case. Chapter 5 revisits this fact from the geometric perspective "$\det(A)$ = signed volume of the parallelepiped".
4.3 Worked Example
Are $\boldsymbol{v}_1 = (1, 2, 3)$, $\boldsymbol{v}_2 = (4, 5, 6)$, $\boldsymbol{v}_3 = (7, 8, 9)$ linearly independent?
$$A = \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{pmatrix}$$

$$\det(A) = 1(45-48) - 4(18-24) + 7(12-15) = -3 + 24 - 21 = 0$$

Since $\det(A) = 0$, the vectors are linearly dependent. Indeed, $\boldsymbol{v}_3 = 2\boldsymbol{v}_2 - \boldsymbol{v}_1$.
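The computation can be confirmed numerically: the determinant vanishes (up to floating-point error), the rank is only 2, and the stated dependence relation checks out.

```python
import numpy as np

A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])
v1, v2, v3 = A[:, 0], A[:, 1], A[:, 2]

print(np.isclose(np.linalg.det(A), 0.0))       # True: determinant vanishes
print(np.linalg.matrix_rank(A))                # 2: only two independent columns
print(np.array_equal(2 * v2 - v1, v3))         # True: v3 = 2 v2 - v1
```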
4.4 Linear Independence of Functions: The Wronskian ★Advanced (optional)
This subsection is advanced material — it goes beyond the "vector = number tuple" framework and treats functions as "infinite-dimensional vectors". It is an application combining differentiation with determinants; feel free to skip it on first reading (it also serves as a preview of the intermediate topic Function Spaces).
For differentiable functions $f_1, \ldots, f_n$, the Wronskian is defined as
$$W(f_1, \ldots, f_n) = \det\begin{pmatrix} f_1 & \cdots & f_n \\ f_1' & \cdots & f_n' \\ \vdots & & \vdots \\ f_1^{(n-1)} & \cdots & f_n^{(n-1)} \end{pmatrix}$$

If $W \neq 0$ at some point, the functions are linearly independent.
Caveat: the converse is not true in general. There exist examples where $W \equiv 0$ (identically zero) yet the functions are linearly independent (see Peano's counterexample). However, for analytic functions, $W \equiv 0$ ⇔ linearly dependent does hold.
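The positive direction of the test is easy to run symbolically. A sketch with SymPy (assumed available), building the Wronskian matrix directly from the definition above for $\sin x$ and $\cos x$:

```python
import sympy as sp

x = sp.symbols('x')
f = [sp.sin(x), sp.cos(x)]

# Wronskian matrix: row i holds the i-th derivatives of the functions
W = sp.Matrix([[sp.diff(fj, x, i) for fj in f] for i in range(len(f))])
w = sp.simplify(W.det())
print(w)  # -1: non-zero everywhere, so sin and cos are linearly independent
```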
Example: To see that $1, x, x^2$ are linearly independent in $C(\mathbb{R})$, note that $c_1 + c_2 x + c_3 x^2 = 0$ for all $x$ forces $c_1 = c_2 = c_3 = 0$. The Wronskian confirms this:
$$W(1, x, x^2) = \det\begin{pmatrix} 1 & x & x^2 \\ 0 & 1 & 2x \\ 0 & 0 & 2 \end{pmatrix} = 2 \neq 0$$

5. Common Misconceptions
5.1 "Orthogonal" is Different from "Linearly Independent"
Orthogonal (inner product = 0) ⇒ linearly independent (true)
Linearly independent ⇒ orthogonal (false)
Counterexample: $(1, 0)$ and $(1, 1)$ are linearly independent but not orthogonal.
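The counterexample takes a few lines to verify: the dot product is non-zero (not orthogonal), while the determinant of the column matrix is non-zero (linearly independent).

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

print(np.dot(u, v))                            # 1.0: non-zero, not orthogonal
print(np.linalg.det(np.column_stack([u, v])))  # 1.0: non-zero, independent
```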
5.2 A Set Containing the Zero Vector is Linearly Dependent
Any set that contains $\boldsymbol{0}$ is automatically linearly dependent.
This is because $1 \cdot \boldsymbol{0} + 0 \cdot \boldsymbol{v}_1 + \cdots = \boldsymbol{0}$ is a non-trivial combination giving $\boldsymbol{0}$.
5.3 Linear Independence Even Applies to a Single Vector
If $\boldsymbol{v} \neq \boldsymbol{0}$, then $\{\boldsymbol{v}\}$ is linearly independent.
If $\boldsymbol{v} = \boldsymbol{0}$, then $\{\boldsymbol{v}\}$ is linearly dependent.
6. Summary
Key Takeaways
- Linearly independent: $\sum c_i\boldsymbol{v}_i = \boldsymbol{0}$ forces all $c_i = 0$
- Linearly dependent: a non-trivial linear combination gives the zero vector
- Geometric meaning: not on the same line in 2D; not on the same plane in 3D
- Test: matrix rank, or determinant
- Any set of $n+1$ or more vectors in $n$-dimensional space is linearly dependent