Chapter 3
Basis & Dimension
Key ideas: Introduction

Introduction#

Basis and dimension are the language for measuring and reasoning about vector spaces. A basis is a minimal spanning set—a collection of linearly independent vectors that can be scaled and added to represent every vector in the space. The dimension is simply the size of a basis (number of basis vectors). These concepts are ubiquitous in ML:

  • PCA: Principal components form a basis for a lower-dimensional subspace capturing most data variance.

  • Autoencoders: Encoder learns a basis for latent representations (bottleneck layer); decoder reconstructs using this basis.

  • Neural networks: Each layer’s hidden activations form a basis for representations learned by that layer.

  • Whitening/normalization: Change of basis to decorrelate and rescale features (covariance becomes identity).

  • Sparse coding: Find a basis (dictionary) that sparsely represents data.

Understanding basis and dimension clarifies model capacity (how many independent directions can the model control?), data complexity (how many dimensions does the data actually occupy?), and numerical stability (are basis vectors nearly parallel, i.e., ill-conditioned?).

Important Ideas#

1. Basis = minimal spanning set. A basis $\{v_1, \ldots, v_d\}$ for subspace $S$ satisfies:

  • Spans $S$: Every vector in $S$ is a linear combination $s = \sum_{i=1}^d \alpha_i v_i$.

  • Linearly independent: No $v_j$ is a linear combination of the others. Equivalently, $\sum_{i=1}^d \alpha_i v_i = 0 \Rightarrow \alpha_1 = \cdots = \alpha_d = 0$.

Why minimal? If any vector is removed, the set no longer spans $S$. If any vector is linearly dependent on others, it’s redundant.

Uniqueness of representation: For basis $\{v_1, \ldots, v_d\}$, each vector $s \in S$ has unique coefficients: if $s = \sum_i \alpha_i v_i = \sum_i \beta_i v_i$, then $\alpha_i = \beta_i$ for all $i$ (follows from linear independence).

2. Dimension = basis size. All bases for a subspace have the same number of vectors. This number is the dimension: $\dim(S) = $ number of vectors in any basis for $S$. For matrix $A$: $$ \dim(\text{col}(A)) = \text{rank}(A), \quad \dim(\text{null}(A)) = \text{nullity}(A) = n - \text{rank}(A) $$

Why constant? If one basis had fewer vectors than another, the larger set could not remain linearly independent (the Steinitz exchange lemma), so any two bases have the same cardinality. Dimension is therefore a property of the subspace itself, not of the particular basis chosen.

3. Change of basis. The same vector has different coordinates in different bases. For bases $\mathcal{B} = \{v_1, \ldots, v_d\}$ and $\mathcal{B}' = \{v'_1, \ldots, v'_d\}$, the change of basis matrix $P = [v'_1 | \cdots | v'_d]$ (columns are new basis vectors in old basis) satisfies: $$ v_\text{old basis} = P v_\text{new basis}, \quad v_\text{new basis} = P^{-1} v_\text{old basis} $$

In ML: Changing basis = change of representation. PCA rotates to principal component basis. Whitening rotates to decorrelated basis (covariance = identity).
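A minimal NumPy sketch of this coordinate change, using an illustrative 2D rotation as the new basis (the variable names are ours):

import numpy as np

theta = np.pi / 4
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns: new basis vectors in old coordinates
v_old = np.array([1.0, 2.0])                      # coordinates in the old (standard) basis
v_new = np.linalg.solve(P, v_old)                 # P^{-1} v_old: coordinates in the new basis
print(np.allclose(P @ v_new, v_old))              # True: old = P @ new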

4. Standard basis and explicit coordinates. The standard basis in $\mathbb{R}^d$ is $\{e_1, \ldots, e_d\}$ where $e_i = [0, \ldots, 1, \ldots, 0]^\top$ (1 in position $i$). For this basis: $$ v = [v_1, \ldots, v_d]^\top = \sum_{i=1}^d v_i e_i $$

Coordinates in the standard basis are just the vector’s components. Embedding lookups use one-hot basis: to get embedding of token $i$, multiply by $e_i$ (selection vector).

Relevance to Machine Learning#

Model capacity and expressiveness. A model operating in a $d$-dimensional space can control at most $d$ independent directions. Linear regression with $d$ features can fit $d$ linearly independent targets. Deep networks learn hierarchical bases: layer $\ell$ learns a basis for its activation space (dimension = number of hidden units).

Data intrinsic dimensionality. Real data often lies in a lower-dimensional subspace (manifold hypothesis). PCA finds a basis for the dominant subspace; if top $k$ eigenvalues capture 95% of variance, data is approximately $k$-dimensional. This justifies dimensionality reduction without information loss.

Numerical stability and conditioning. Basis properties affect computation: orthonormal bases (columns have norm 1, mutually perpendicular) are numerically stable (condition number = 1). Nearly parallel basis vectors (ill-conditioned) cause numerical errors in linear algebra operations.

Feature engineering and representation learning. Hand-crafted features (e.g., polynomial features, Fourier basis) are explicit bases. Neural networks learn implicit bases (hidden layers) that are optimized for the task. Autoencoders learn an optimal basis for data reconstruction (variational autoencoders add structure to this basis).

Algorithmic Development History#

1. Coordinate geometry (Descartes & Fermat, 1630s-1640s). Cartesian coordinates introduced the standard basis for Euclidean space. Descartes’ La Géométrie (1637) showed geometry could be expressed algebraically using coordinate axes.

2. Change of basis and coordinate transformations (Euler 1770s, Cauchy 1820s-1830s). Euler rotated coordinate systems to simplify equations. Cauchy formalized change of basis through matrix transformations.

3. Linear independence and dimension (Grassmann 1844, Peano 1888). Grassmann introduced independence axiomatically. Peano formalized dimension as the size of maximal independent sets.

4. Orthonormal bases and orthogonalization (Schmidt 1907, Gram-Schmidt process). Every finite-dimensional inner product space admits an orthonormal basis; Gram-Schmidt orthogonalization constructs one from any basis.

5. Eigendecomposition and spectral bases (Schur 1909, 1920s-1930s). Schur showed every square matrix is unitarily similar to an upper-triangular matrix (Schur decomposition); symmetric (more generally, normal) matrices admit an orthonormal basis of eigenvectors (spectral decomposition).

6. PCA and dimensionality reduction (Pearson 1901, Hotelling 1933, SVD 1960s). PCA finds the basis vectors (principal components) that best capture data variance. SVD algorithm (Golub & Kahan 1965) computes this stably.

7. Neural basis learning (1980s-2010s). Neural networks learn basis implicitly: hidden layers learn representations that act as bases for downstream computations. Deeper networks learn hierarchical bases (abstract high-level concepts from concrete low-level features).

8. Dictionary learning and sparse coding (2000s). Learn overcomplete bases (more basis vectors than dimension) that sparsely represent data. Applications: image denoising (K-SVD, Aharon et al. 2006), signal processing.

Definitions#

Basis. A set $\mathcal{B} = \{v_1, \ldots, v_d\}$ is a basis for subspace $S$ if:

  1. $\text{span}(\mathcal{B}) = S$ (spans the subspace).

  2. Vectors in $\mathcal{B}$ are linearly independent.

Every vector in $S$ has a unique representation: $s = \sum_{i=1}^d \alpha_i v_i$ (coordinates $\alpha_i$ are unique).

Dimension. The dimension of subspace $S$ is: $$ \dim(S) = \text{size of any basis for } S $$ This is well-defined: all bases for $S$ have the same size.

Rank and nullity. For matrix $A \in \mathbb{R}^{m \times n}$: $$ \text{rank}(A) = \dim(\text{col}(A)) = \dim(\text{row}(A)) $$ $$ \text{nullity}(A) = \dim(\text{null}(A)) = n - \text{rank}(A) $$

Rank-nullity theorem: $\text{rank}(A) + \text{nullity}(A) = n$.
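A quick numerical check of rank, nullity, and the rank-nullity identity (the small matrix is illustrative):

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])               # 2 x 3, rank 1 (row 2 = 2 * row 1)
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank
print(rank, nullity)                       # -> 1 2, and 1 + 2 = n = 3
# A basis for null(A): right singular vectors beyond the rank
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                     # rows span null(A)
print(np.allclose(A @ null_basis.T, 0))   # True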

Change of basis matrix. For bases $\mathcal{B} = \{v_1, \ldots, v_d\}$ and $\mathcal{B}' = \{v'_1, \ldots, v'_d\}$, the matrix $P = [v'_1 | \cdots | v'_d]$ (new basis vectors in old coordinates) satisfies: $$ P = \text{matrix whose columns are basis } \mathcal{B}' \text{ in coordinates of basis } \mathcal{B} $$ For coordinates $[v]_\mathcal{B}$ (in basis $\mathcal{B}$) and $[v]_{\mathcal{B}'}$ (in basis $\mathcal{B}'$): $$ [v]_\mathcal{B} = P [v]_{\mathcal{B}'}, \quad [v]_{\mathcal{B}'} = P^{-1} [v]_\mathcal{B} $$

Coordinates. For vector $v \in S$ and basis $\mathcal{B} = \{v_1, \ldots, v_d\}$, the coordinates are scalars $[\alpha_1, \ldots, \alpha_d]^\top$ satisfying: $$ v = \sum_{i=1}^d \alpha_i v_i $$ Coordinates are unique (follows from linear independence).
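Coordinates in a given basis can be computed by solving a linear system; a small sketch with illustrative vectors:

import numpy as np

V = np.array([[1., 1.],
              [0., 1.]])                 # columns are basis vectors of R^2
v = np.array([3., 2.])
alpha = np.linalg.solve(V, v)            # unique coordinates since the columns are independent
print(alpha, np.allclose(V @ alpha, v))  # [1. 2.] True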

Essential vs Optional: Theoretical ML

Theoretical Machine Learning — Essential Foundations#

Theorems and formal guarantees:

  1. Rank-nullity theorem. For $A \in \mathbb{R}^{m \times n}$: $$ \text{rank}(A) + \dim(\text{null}(A)) = n $$ Consequences: If $\text{rank}(A) = d < n$, then $\dim(\text{null}(A)) = n - d$ (dimension of solution set). If $\text{rank}(A) = n$ (full rank), solutions are unique (if they exist).

  2. Basis existence and dimension invariance (Steinitz exchange lemma, 1913). Every finite-dimensional vector space has a basis, and all bases have the same size (the dimension). For infinite-dimensional spaces, existence of a (Hamel) basis requires the axiom of choice.

  3. Dimension and approximation (approximation theory). Best $k$-dimensional approximation to $x$ is $\text{proj}_{S_k} x$ (projection onto $k$-dimensional subspace). The $k$-dimensional subspace with minimum approximation error is $\text{span}\{u_1, \ldots, u_k\}$ (top $k$ singular vectors of data matrix).

  4. VC dimension and capacity (Vapnik & Chervonenkis 1971). For linear classifiers in $\mathbb{R}^d$: VC dimension = $d + 1$. Capacity grows with dimension (more basis dimensions = more expressiveness).

  5. Matrix rank and complexity. Rank determines complexity: a rank-$r$ matrix in $\mathbb{R}^{m \times n}$ is described by $O(r(m+n))$ parameters (vs. $mn$ for a general matrix). Low-rank structure is easier to learn (fewer degrees of freedom).

Why essential: These theorems quantify relationships between dimension, expressiveness, and solution complexity.

Applied Machine Learning — Essential for Implementation#

Achievements and landmark systems:

  1. PCA for dimensionality reduction (Turk & Pentland 1991). Eigenfaces for face recognition: project face images onto top 50-100 eigenvectors (basis), achieve real-time recognition. Dimension reduction from 10,000 pixels to ~100 PCA coordinates.

  2. Whitening for preprocessing (LeCun et al. 1998). Decorrelate and rescale features: change of basis to make the covariance matrix the identity. Improves gradient-based optimization (condition number = 1) and allows larger stable learning rates.

  3. Batch normalization (Ioffe & Szegedy 2015). Normalize layer activations: a per-feature change of basis (center and rescale). Speeds training markedly (the original paper reports matching baseline accuracy in roughly 14× fewer steps), enables higher learning rates, and reduces internal covariate shift.

  4. Autoencoders for representation learning (Hinton & Salakhutdinov 2006, 2010s-present). Learn non-linear basis via encoder bottleneck. Applications: image compression, anomaly detection, generative models (VAE, 2013).

  5. Transformer attention with multiple bases (Vaswani et al. 2017, Devlin et al. 2018). 8-64 attention heads = 8-64 different basis projections. Achieved state-of-the-art on GLUE (NLP benchmark), ImageNet-scale vision (ViT), multimodal (CLIP).

  6. Dictionary learning and sparse coding (Aharon et al. 2006, Mairal et al. 2009). Learn overcomplete basis that sparsely represents data. Applications: image denoising with K-SVD (reported PSNR improvements of roughly 2-5 dB), face recognition via sparse representation.

Why essential: These systems achieve state-of-the-art by exploiting basis structure (dimensionality reduction, whitening, learned bases, attention heads). Understanding dimension and basis is necessary to design architectures and interpret representations.

Key ideas: Where it shows up

1. Principal Component Analysis (PCA) — Optimal basis for dimensionality reduction#

Major achievements:

  • Pearson (1901), Hotelling (1933): Formalized finding axes (basis vectors) of maximum variance. Principal components are orthonormal basis vectors.

  • Eigendecomposition solution: Eigenvectors of covariance matrix $C = \frac{1}{n} X_c^\top X_c$ are principal components. Top $k$ eigenvectors form a $k$-dimensional basis capturing maximum variance.

  • SVD connection (Eckart-Young 1936): Truncated SVD $X_c \approx U_k \Sigma_k V_k^\top$ of the centered data gives the same basis as PCA: the columns of $V_k$ (rows of $V_k^\top$) are the principal directions, and the coordinates (scores) are $X_c V_k = U_k \Sigma_k$.

  • Modern applications: Preprocessing (feature whitening), visualization (2D/3D plots from high-D data), compression (keep top $k$ components).

Connection to basis: PCA finds an orthonormal basis $\{u_1, \ldots, u_k\}$ (eigenvectors) for the principal subspace. Each centered data point $x_i - \mu$ has coordinates $(u_1^\top (x_i - \mu), \ldots, u_k^\top (x_i - \mu))$ in this basis (dimension reduction from $d$ to $k$).
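A short sketch of projecting onto the PCA basis and measuring retained variance (synthetic data; the variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ np.diag([3., 2., 1., 0.1, 0.1])  # anisotropic cloud
Xc = X - X.mean(0)                                               # center first
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                   # coordinates in the top-k principal basis
retained = (S[:k]**2).sum() / (S**2).sum()
print(Z.shape, round(retained, 3))  # (100, 2) and the fraction of variance retained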

2. Autoencoders and Latent Representations — Learned bases#

Major achievements:

  • Autoencoders (1980s-1990s): Neural networks learn a bottleneck (low-dimensional basis). Encoder compresses to basis coordinates; decoder reconstructs from coordinates.

  • Variational Autoencoders (Kingma & Welling 2013): Adds probabilistic structure: basis coordinates are drawn from Gaussian prior. Enables generative modeling (sampling new data).

  • Disentangled representations (2010s): Learn bases where each coordinate captures an interpretable factor (e.g., pose, lighting in faces). Beta-VAE encourages disentanglement.

  • Applications: Image generation (basis for visual features), anomaly detection (reconstruction error when input outside learned basis), data compression.

Connection to basis: Encoder learns a basis $\mathcal{B} = \{\text{hidden activations}\}$ for a low-dimensional latent space. Decoder reconstructs using this basis.

3. Deep Neural Networks — Hierarchical basis learning#

Major achievements:

  • Universal approximation (Cybenko 1989): Outputs are linear combinations of hidden-unit activations, and such combinations are dense in the continuous functions on compact sets: a single hidden layer suffices for approximation to arbitrary accuracy.

  • Hierarchy of bases (Bengio 2013): Deep networks learn multiple bases: low layers learn simple bases (edges, textures), high layers learn complex bases (objects, concepts).

  • Representation learning (LeCun et al. 2015): Deep networks optimize the learned basis (representation) for the task at hand: a task-specific basis rather than a generic one such as the PCA basis.

  • Scale (AlexNet 2012, ResNet 2015, Vision Transformers 2020): Depth enables learning of rich hierarchical bases, enabling state-of-the-art performance on complex tasks.

Connection to basis: Layer $\ell$ operates in a $d_\ell$-dimensional activation space (dimension = number of hidden units). The weight matrix $W_\ell$ maps layer $\ell-1$ coordinates to layer $\ell$ coordinates, effectively re-expressing the representation in a new, learned basis.

4. Feature Engineering and Feature Spaces — Hand-crafted vs. learned bases#

Major achievements:

  • Polynomial basis (classical ML): Augment features with powers: $[x_1, x_2, x_1^2, x_1 x_2, x_2^2, \ldots]$ (explicit basis for nonlinear decision boundaries).

  • Fourier basis (signal processing): Decompose signals using Fourier basis $\{\cos(k\omega t), \sin(k\omega t)\}_{k=0}^\infty$ (frequency domain basis).

  • Radial Basis Functions (RBF networks, 1980s): Basis functions centered at data points (e.g., Gaussian bumps). Kernel methods implicitly use infinite RBF basis.

  • Learned bases (deep learning, 2010s-present): End-to-end training optimizes basis (feature space) jointly with downstream task, outperforming hand-crafted bases.

Connection to basis: Features are coordinates in an explicit basis space. Deep learning learns the basis implicitly through weight optimization.

5. Transformer Attention and Multi-Head Projections — Multiple basis subspaces#

Major achievements:

  • Multi-head attention (Vaswani et al. 2017): Project inputs to $h$ different subspaces (heads), each with its own basis. Enables learning multiple relationships in parallel.

  • Scaled dot-product (Vaswani 2017): Each head computes $\text{softmax}(QK^\top / \sqrt{d_k}) V$; each output row is a weighted combination of the rows of $V$ (the value vectors), which span the head's value subspace.

  • BERT (Devlin et al. 2018): Bidirectional Transformers with 12-24 heads. Different heads learn different linguistic bases (syntax, semantics, discourse).

  • Applications: Machine translation (parallel bases for source/target), question answering (basis for understanding), language generation.

Connection to basis: Each attention head projects to a low-dimensional subspace (dimension $d_{\text{model}} / h$ in the original Transformer), each with its own basis. Multi-head concatenation combines these basis representations.

Notation

Standard Conventions#

1. Basis notation.

  • Basis: $\mathcal{B} = \{v_1, \ldots, v_d\}$ (set of basis vectors).

  • Basis matrix: $V = [v_1 | \cdots | v_d] \in \mathbb{R}^{n \times d}$ (columns are basis vectors).

  • Standard basis: $\{e_1, \ldots, e_d\}$ where $e_i$ has 1 in position $i$, 0 elsewhere.

  • Coordinates: $[v]_\mathcal{B} = [\alpha_1, \ldots, \alpha_d]^\top$ satisfying $v = \sum_i \alpha_i v_i$.

Examples:

  • Standard basis for $\mathbb{R}^3$: $e_1 = [1, 0, 0]^\top, e_2 = [0, 1, 0]^\top, e_3 = [0, 0, 1]^\top$.

  • Vector $v = [3, -1, 2]^\top$ has coordinates $[3, -1, 2]^\top$ in standard basis.

  • PCA basis for 2D data: $\mathcal{B} = \{u_1, u_2\}$ (top 2 eigenvectors). Coordinates $[v]_\mathcal{B} = [u_1^\top v, u_2^\top v]^\top$.

2. Dimension notation.

  • Dimension: $\dim(V)$ or $\dim(S)$ for subspace $S$.

  • Rank: $\text{rank}(A)$ = dimension of column space (equivalently, row space).

  • Nullity: $\text{nullity}(A) = \dim(\text{null}(A)) = n - \text{rank}(A)$.

Examples:

  • For $X \in \mathbb{R}^{100 \times 50}$ (100 examples, 50 features), if $\text{rank}(X) = 30$, then $\dim(\text{col}(X)) = 30$ (data approximately 30-dimensional).

  • If $\text{rank}(X) = 30 < 50$, then $\text{nullity}(X) = 50 - 30 = 20$; for a consistent system $Xw = y$, the solution set is a 20-dimensional affine subspace.

3. Change of basis notation.

  • Basis transition: $\mathcal{B} \to \mathcal{B}'$ (change from basis $\mathcal{B}$ to $\mathcal{B}'$).

  • Transition matrix: $P = [v'_1 | \cdots | v'_d]$ (new basis vectors in old coordinates).

  • Coordinates transform: $[v]_\mathcal{B} = P [v]_{\mathcal{B}'}$ (old basis = transition matrix times new basis coordinates).

Examples:

  • Rotate from standard basis to basis aligned with principal components: $P = [u_1 | \cdots | u_d]$ (eigenvectors of covariance matrix).

  • Whitening: the map $\Lambda^{-1/2} U^\top$ (inverse square roots of the covariance eigenvalues, times the transposed eigenvector matrix) sends centered data to the decorrelated basis; in the convention above it plays the role of $P^{-1}$ (old coordinates → new coordinates).

4. Orthonormal basis notation.

  • Orthonormal: Basis vectors $\{q_1, \ldots, q_d\}$ satisfy $\|q_i\|_2 = 1$ and $q_i^\top q_j = 0$ for $i \neq j$.

  • Orthonormal matrix: $Q = [q_1 | \cdots | q_d]$ satisfies $Q^\top Q = I$ (columns orthonormal).

  • Orthogonal matrix: $Q \in \mathbb{R}^{d \times d}$ satisfies $Q^\top Q = QQ^\top = I$ (square, invertible, $Q^{-1} = Q^\top$).

Examples:

  • Eigenvectors of symmetric matrices form orthonormal basis.

  • QR decomposition: $A = QR$ where $Q$ has orthonormal columns (basis for $\text{col}(A)$).

  • SVD: $A = U \Sigma V^\top$ where $U, V$ are orthogonal matrices (orthonormal bases for column and row spaces).
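A small check, using QR on an illustrative matrix, that orthonormal columns satisfy $Q^\top Q = I$ and that coordinates are plain inner products:

import numpy as np

A = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
Q, R = np.linalg.qr(A)                  # columns of Q: orthonormal basis for col(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
v = A @ np.array([2., -1.])             # a vector that lies in col(A)
coords = Q.T @ v                        # coordinates in the orthonormal basis
print(np.allclose(Q @ coords, v))       # True: exact reconstruction inside col(A)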

5. Span and basis rank notation.

  • Full rank: $\text{rank}(A) = \min(m, n)$ (maximum possible rank). Columns (or rows) are linearly independent.

  • Rank deficient: $\text{rank}(A) < \min(m, n)$ (some columns/rows linearly dependent).

  • Column space dimension: $\dim(\text{col}(A)) = \text{rank}(A)$ (basis for column space has $\text{rank}(A)$ vectors).

Examples:

  • $X = \begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{bmatrix}$ has rank 1 (second column = 2× first column). Basis for column space: $\{[1, 2, 3]^\top\}$ (1 vector).

  • $X = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$ has rank 2 (columns linearly independent). Basis: $\{[1, 0, 0]^\top, [0, 1, 0]^\top\}$ (2 vectors).

6. Projection onto basis notation.

  • Projection matrix: $P_V = V(V^\top V)^{-1} V^\top$ projects onto column space of $V$ (assuming full column rank).

  • Orthogonal projection: If $V$ has orthonormal columns, $P_V = VV^\top$ (simpler form).

  • Coordinates via projection: For orthonormal basis $V$, coordinates are $[v]_\mathcal{B} = V^\top v$.

Examples:

  • PCA projection: $X_\text{proj} = XVV^\top$ (project onto basis of top eigenvectors $V$).

  • Least squares: $\hat{y} = X(X^\top X)^{-1} X^\top y$ (projection of $y$ onto column space of $X$).
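A sketch of the orthonormal-columns case, where the projector is $VV^\top$ and coordinates are $V^\top v$ (illustrative numbers):

import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
V, _ = np.linalg.qr(A)              # orthonormal basis for col(A)
v = np.array([1., 2., 3.])
coords = V.T @ v                    # coordinates of the projection in the basis V
v_proj = V @ coords                 # same as (V @ V.T) @ v
print(np.allclose(V @ V.T @ v, v_proj))    # True
print(np.allclose(V.T @ (v - v_proj), 0))  # residual has no component in col(V)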

Pitfalls & sanity checks

When working with bases and dimension (a few of these checks are sketched in code after the list):

  1. Always center before PCA. Uncentered data gives wrong principal components. Check: mean of centered data should be ~0.

  2. Rank-deficient systems. If rank($X$) < $d$, solution is not unique. Use minimum norm solution or add regularization (Ridge).

  3. Numerical instability from nearly-dependent features. Check condition number: if $\kappa(X) > 10^8$, expect numerical errors. Use SVD/QR instead of explicit inverse.

  4. One-hot (dummy-variable) trap: with an intercept, the $k$ one-hot indicator columns sum to the all-ones column, so the design matrix becomes rank deficient (singular $X^\top X$). Drop one category (or the intercept).

  5. Coordinate consistency: When changing basis, verify: (1) old = $P$ × new, (2) $P$ is invertible (full column rank), (3) reconstruction error ~0.
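A few of these checks sketched in NumPy (random data for illustration; the names are ours):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))

# 1. Center before PCA, then confirm the mean is ~0
Xc = X - X.mean(0)
assert np.allclose(Xc.mean(0), 0)

# 2. Rank deficiency: compare rank to the number of columns
assert np.linalg.matrix_rank(X) == X.shape[1], "rank deficient: regularize or drop features"

# 3. Conditioning: large values signal nearly parallel (ill-conditioned) columns
print(np.linalg.cond(X))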

References

Foundational texts:

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press.

  2. Axler, S. (2015). Linear Algebra Done Right (3rd ed.). Springer.

  3. Horn, R. A., & Johnson, C. R. (2012). Matrix Analysis (2nd ed.). Cambridge University Press.

  4. Trefethen, L. N., & Bau, D. (1997). Numerical Linear Algebra. SIAM.

PCA and dimensionality reduction:

  1. Pearson, K. (1901). “On lines and planes of closest fit to systems of points.” Philosophical Magazine, 2(11), 559–572.

  2. Hotelling, H. (1933). “Analysis of a complex of statistical variables.” Journal of Educational Psychology, 24(6), 417–441.

  3. Eckart, C., & Young, G. (1936). “The approximation of one matrix by another of lower rank.” Psychometrika, 1(3), 211–218.

  4. Turk, M., & Pentland, A. (1991). “Eigenfaces for recognition.” Journal of Cognitive Neuroscience, 3(1), 71–86.

Neural networks and representation learning:

  1. Cybenko, G. (1989). “Approximation by superpositions of a sigmoidal function.” Mathematics of Control, Signals and Systems, 2(4), 303–314.

  2. Bengio, Y. (2013). “Deep learning of representations: Looking forward.” In Statistical Language and Speech Processing (pp. 1–37). Springer.

  3. Hinton, G. E., & Salakhutdinov, R. R. (2006). “Reducing the dimensionality of data with neural networks.” Science, 313(5786), 504–507.

  4. Kingma, D. P., & Welling, M. (2013). “Auto-encoding variational Bayes.” arXiv:1312.6114.

Optimization and normalization:

  1. Robbins, H., & Monro, S. (1951). “A stochastic approximation method.” Annals of Mathematical Statistics, 22(3), 400–407.

  2. Ioffe, S., & Szegedy, C. (2015). “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” In ICML (pp. 448–456).

  3. Ba, J., Kiros, J. R., & Hinton, G. E. (2016). “Layer normalization.” arXiv:1607.06450.

Transformers and attention:

  1. Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). “Attention is all you need.” In NeurIPS (pp. 5998–6008).

  2. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). “BERT: Pre-training of deep bidirectional transformers for language understanding.” arXiv:1810.04805.

Dictionary learning and sparse coding:

  1. Aharon, M., Elad, M., & Bruckstein, A. (2006). “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation.” IEEE Transactions on Signal Processing, 54(11), 4311–4322.

Five worked examples

Worked Example 1: One-hot basis and embedding lookup#

Problem. Show embedding lookup is matrix multiplication by a one-hot vector.

Solution (math). If $x = e_i$ (the $i$-th standard basis vector), then $Ex$ selects the $i$-th column of $E$.

Solution (Python).

import numpy as np

E = np.array([[1., 2., 3.],
              [0., -1., 1.]])   # embedding matrix: column i is the embedding of token i
x = np.array([0., 1., 0.])      # one-hot vector e_2 (selects token index 1)
print(E @ x)                    # -> [ 2. -1.], the second column of E

Worked Example 2: Coordinates in a PCA basis#

Problem. Compute 1D PCA coordinate z of a point.

Solution (math). If $u$ is the top principal component, then $z = u^\top (x - \mu)$.

Solution (Python).

import numpy as np
from scripts.toy_data import toy_pca_points  # book's toy-data helper

X = toy_pca_points(n=10, seed=0)
mu = X.mean(0)                     # center before PCA
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
u = Vt[0]                          # top principal direction (unit vector)
z = u @ (X[0] - mu)                # 1D PCA coordinate of the first point
print(z)

Worked Example 3: Redundant engineered features via rank#

Problem. Detect redundancy by checking rank.

Solution (math). If $\text{rank}(X) < d$, the columns are linearly dependent.

Solution (Python).

import numpy as np

X = np.array([[1., 2., 3.], [2., 4., 6.], [3., 6., 9.]])  # every row is a multiple of [1, 2, 3]
print(np.linalg.matrix_rank(X))  # -> 1: the columns span only a line

Worked Example 4: Whitening as change of basis#

Problem. Whiten data using covariance eigendecomposition.

Solution (math). If $\Sigma = U \Lambda U^\top$, then $x_\text{white} = \Lambda^{-1/2} U^\top (x - \mu)$.

Solution (Python).

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))        # toy data: 200 points in R^2
mu = X.mean(0)
Xc = X - mu                          # center
Sigma = np.cov(Xc, rowvar=False)     # 2x2 sample covariance
lam, U = np.linalg.eigh(Sigma)       # Sigma = U diag(lam) U^T
W = np.diag(1 / np.sqrt(lam)) @ U.T  # whitening matrix Lambda^{-1/2} U^T
Xw = (W @ Xc.T).T                    # whitened data
print(np.cov(Xw, rowvar=False))      # ~ identity matrix

Worked Example 5: SGD gradients live in feature span#

Problem. Show each per-example gradient is a scalar times the feature vector.

Solution (math). For the squared loss $\tfrac{1}{2}(x^\top w - y)^2$, the gradient is $\nabla_w = (x^\top w - y)\, x$, a scalar multiple of the feature vector $x$.

Solution (Python).

import numpy as np

x = np.array([1., -2., 0.5])    # feature vector
w = np.array([0.2, 0.1, -0.3])  # current weights
y = 1.0                         # target
g = (x @ w - y) * x             # gradient of (1/2)(x^T w - y)^2: a scalar times x
print(g)

Chapter 2
Span & Linear Combination
Key ideas: Introduction

Introduction#

Span and linear combinations are the fundamental building blocks of linear algebra and machine learning. Every prediction $\hat{y} = Xw$, every gradient descent update $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}$, every attention output $\sum_i \alpha_i v_i$, and every representation learned by a neural network is ultimately a linear combination of basis vectors. Understanding span—the set of all possible linear combinations—reveals model expressiveness, training dynamics, and the geometry of learned representations.

The span of a set of vectors $\{v_1, \ldots, v_k\}$ is the smallest subspace containing all of them. Geometrically, it’s all points reachable by scaling and adding the vectors. Algebraically, it’s $\{\sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R}\}$. In ML, span determines:

  • Model capacity: What functions can a model represent?
  • Feature redundancy: Are some features linear combinations of others?
  • Solution uniqueness: When are there multiple parameter vectors giving identical predictions?
  • Expressiveness vs. efficiency: Can we reduce dimensionality without losing information?

This chapter adopts an ML-first perspective: we introduce span through concrete algorithms (kernel methods, attention, overparameterization) rather than abstract axioms. The goal is to build geometric intuition (span as reachable points) and computational skill (checking linear independence, computing basis) simultaneously.

 

Important Ideas#

1. Linear combinations are everywhere in ML. A linear combination of vectors $\{v_1, \ldots, v_k\}$ with coefficients $\{\alpha_1, \ldots, \alpha_k\}$ is: $$ v = \sum_{i=1}^k \alpha_i v_i = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k $$

Examples in ML:

  • Linear regression predictions: $\hat{y} = Xw = \sum_{j=1}^d w_j x_j$ (linear combination of feature columns).
  • Gradient descent updates: $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ (linear combination of current parameters and gradient).
  • Attention outputs: $z = \sum_{i=1}^n \alpha_i v_i$ (weighted sum of value vectors with attention weights $\alpha_i$).
  • Kernel predictions: $f(x) = \sum_{i=1}^n \alpha_i k(x_i, x)$ (representer theorem: optimal solution is a linear combination of training kernels).
  • Word embeddings: Analogies $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} \approx e_{\text{queen}}$ (linear combinations capture semantic relationships).

2. Span determines expressiveness. The span of $\{v_1, \ldots, v_k\}$ is: $$ \text{span}\{v_1, \ldots, v_k\} = \left\{ \sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R} \right\} $$

This is the set of all possible linear combinations—the “reachable subspace” if we’re allowed to scale and add the vectors. Key properties:

  • It’s a subspace: Closed under addition and scalar multiplication (adding/scaling linear combinations gives another linear combination).
  • It’s the smallest subspace containing $\{v_1, \ldots, v_k\}$: Any subspace containing all $v_i$ must contain their span.
  • Dimension = number of linearly independent vectors: If $v_k = \sum_{i=1}^{k-1} c_i v_i$ (linear dependence), adding $v_k$ doesn’t increase the span.

In ML context:

  • Column space of $X$: All predictions $\hat{y} = Xw$ lie in $\text{span}(\text{columns of } X) = \text{col}(X)$. If $y \notin \text{col}(X)$, perfect fit is impossible (residual is nonzero).
  • Feature redundancy: If feature $x_j$ is a linear combination of other features, adding it doesn’t increase $\text{span}(\text{columns of } X)$ or model capacity.
  • Kernel methods: Predictions lie in $\text{span}\{k(x_1, \cdot), \ldots, k(x_n, \cdot)\}$ (representer theorem). This is typically a finite-dimensional subspace of the (infinite-dimensional) RKHS.
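A quick test of the first point above, whether a target $y$ lies in $\text{col}(X)$, sketched with toy numbers:

import numpy as np

X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
y_in = np.array([2., 3., 5.])     # equals 2*col1 + 3*col2, so it lies in col(X)
y_out = np.array([1., 0., 0.])    # not reachable by any combination of the columns
for y in (y_in, y_out):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.allclose(X @ w, y))  # True for y_in, False for y_out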

3. Linear independence vs. dependence. Vectors $\{v_1, \ldots, v_k\}$ are linearly independent if the only solution to $\sum_{i=1}^k \alpha_i v_i = 0$ is $\alpha_1 = \cdots = \alpha_k = 0$. Otherwise, they’re linearly dependent (one is a linear combination of others).

Why it matters:

  • Basis: A linearly independent set spanning $V$ is a basis for $V$. Every vector in $V$ has a unique representation as a linear combination of basis vectors.
  • Rank: $\text{rank}(X) = $ number of linearly independent columns = $\dim(\text{col}(X))$.
  • Multicollinearity: In regression, linearly dependent features ($\text{rank}(X) < d$) make $X^\top X$ singular (non-invertible), requiring regularization.

4. Representer theorem: solutions lie in span of training data. For many ML problems (kernel ridge regression, SVMs, Gaussian processes), the optimal solution has the form: $$ f^*(x) = \sum_{i=1}^n \alpha_i k(x_i, x) $$

This is a linear combination of kernel functions evaluated at training points. Despite working in an infinite-dimensional space (e.g., RBF kernel), the solution lies in an $n$-dimensional subspace (span of $\{k(x_i, \cdot)\}_{i=1}^n$).

Implications:

  • Computational tractability: Optimization over infinite dimensions reduces to solving an $n \times n$ system.
  • Overfitting vs. underfitting: More training points ($n$ large) increases capacity but also computational cost ($O(n^3)$ for exact methods).
  • Sparse solutions: $\ell_1$ regularization (Lasso, SVM) produces solutions with many $\alpha_i = 0$ (sparse linear combinations).

 

Relevance to Machine Learning#

Model expressiveness and capacity. The span of a feature matrix $X \in \mathbb{R}^{n \times d}$ determines all possible predictions. For linear regression $\hat{y} = Xw$:

  • If $\text{rank}(X) = d$ (full column rank), the model can fit $d$ linearly independent targets.
  • If $\text{rank}(X) < d$, features are redundant. Adding more linearly dependent features doesn’t help.
  • If $\text{rank}(X) < n$ (overdetermined), exact fit is impossible unless $y \in \text{col}(X)$ (rare).

Attention mechanisms. Transformer attention computes $\text{softmax}(QK^\top / \sqrt{d_k}) V$, where the output is a convex combination (weighted average with non-negative weights summing to 1) of value vectors. Each output lies in $\text{span}(\text{rows of } V)$. Multi-head attention projects to multiple subspaces (heads), increasing expressiveness.
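A tiny sketch of single-query attention as a convex combination of value vectors (dimensions and names are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, d_k, d_v = 5, 4, 3
q = rng.normal(size=d_k)
K = rng.normal(size=(n, d_k))                  # rows are key vectors
V = rng.normal(size=(n, d_v))                  # rows are value vectors
scores = K @ q / np.sqrt(d_k)
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax: non-negative, sums to 1
z = alpha @ V                                  # weighted average of the rows of V
print(alpha.sum(), z.shape)                    # 1.0 (3,)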

Kernel methods and representer theorem. For kernel ridge regression, the optimal solution is: $$ \alpha^* = (K + \lambda I)^{-1} y $$ where $K_{ij} = k(x_i, x_j)$ is the Gram matrix. Predictions are $f(x) = \sum_{i=1}^n \alpha_i^* k(x_i, x)$ (linear combination of training kernels). This holds for any kernel (linear, polynomial, RBF, neural network), enabling implicit infinite-dimensional feature spaces.
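A compact kernel ridge regression sketch with an RBF kernel (toy data; the hyperparameters gamma and lam are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)

def rbf(A, B, gamma=0.5):
    # pairwise squared distances, then the Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf(X, X)                            # Gram matrix K_ij = k(x_i, x_j)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
x_test = np.array([[0.5]])
f_test = rbf(x_test, X) @ alpha          # prediction: sum_i alpha_i k(x_i, x_test)
print(f_test, np.sin(0.5))               # roughly matches the noiseless target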

Word embeddings and analogies. Word2Vec (Mikolov et al., 2013) famously demonstrated that semantic relationships correspond to linear offsets in embedding space: $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} \approx e_{\text{queen}}$. This shows embeddings capture compositional structure (adding/subtracting vectors blends meanings).

 

Algorithmic Development History#

1. Linear combinations in classical mechanics (Newton, 1687). Newton’s second law $F = ma$ expresses force as a linear combination of acceleration components. Decomposing vectors into basis components (Cartesian coordinates) enabled solving physical systems.

2. Linear algebra formalization (Grassmann 1844, Peano 1888). Grassmann introduced “extensive magnitudes” (vectors) and exterior algebra (wedge products, spans). Peano axiomatized vector spaces with addition and scalar multiplication, formalizing linear combinations.

3. Least squares and column space (Gauss 1809, Legendre 1805). Gauss used least squares for orbit determination. The key insight: predictions $\hat{y} = Xw$ lie in $\text{col}(X)$, and the best fit minimizes $\|y - \hat{y}\|_2$ by projecting $y$ onto $\text{col}(X)$.

4. Kernel trick and representer theorem (Kimeldorf & Wahba 1970, Schölkopf 1990s). Kimeldorf & Wahba proved the representer theorem for splines: optimal smoothing spline is a linear combination of kernel basis functions. Schölkopf, Smola, and Vapnik extended this to SVMs and kernel ridge regression, enabling nonlinear learning in RKHS.

5. Word embeddings and linear structure (Mikolov et al. 2013). Word2Vec revealed that embeddings exhibit linear compositionality: analogies like “king - man + woman ≈ queen” work because semantic relationships correspond to parallel vectors (linear offsets). This was surprising—neural networks learned a structured linear space without explicit supervision.

6. Attention and weighted sums (Bahdanau 2015, Vaswani 2017). Attention mechanisms compute outputs as convex combinations (weighted averages) of value vectors. The Transformer (Vaswani et al., 2017) replaced recurrence with attention, showing that linear combinations of context (with learned weights) suffice for sequence modeling.

7. Overparameterization and implicit bias (Bartlett 2020, Arora 2019). Modern deep networks are vastly overparameterized ($d \gg n$), so solutions lie in $w_{\min} + \text{null}(X)$ (affine subspace). Gradient descent exhibits implicit regularization, preferring solutions in specific subspaces (e.g., low-rank, sparse). Understanding span and null space clarifies why overparameterized models generalize.

 

Definitions#

Linear combination. Given vectors $\{v_1, \ldots, v_k\} \subset V$ and scalars $\{\alpha_1, \ldots, \alpha_k\} \subset \mathbb{R}$, the linear combination is: $$ v = \sum_{i=1}^k \alpha_i v_i = \alpha_1 v_1 + \cdots + \alpha_k v_k \in V $$

Span. The span of $\{v_1, \ldots, v_k\}$ is the set of all linear combinations: $$ \text{span}\{v_1, \ldots, v_k\} = \left\{ \sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R} \right\} $$ This is the **smallest subspace** containing $\{v_1, \ldots, v_k\}$.

Linear independence. Vectors $\{v_1, \ldots, v_k\}$ are linearly independent if: $$ \sum_{i=1}^k \alpha_i v_i = 0 \quad \Longrightarrow \quad \alpha_1 = \cdots = \alpha_k = 0 $$ Otherwise, they are **linearly dependent** (at least one $v_j$ is a linear combination of the others).

Basis. A set $\{v_1, \ldots, v_k\}$ is a basis for subspace $S$ if:

  1. It spans $S$: $\text{span}\{v_1, \ldots, v_k\} = S$.
  2. It is linearly independent.

Every vector in $S$ has a unique representation as a linear combination of basis vectors.

Column space (range). For $A \in \mathbb{R}^{m \times n}$, the column space is: $$ \text{col}(A) = \{Ax : x \in \mathbb{R}^n\} = \text{span}\{\text{columns of } A\} $$

Dimension. $\dim(S) = $ number of vectors in any basis for $S$. For $\text{col}(A)$, $\dim(\text{col}(A)) = \text{rank}(A)$ (number of linearly independent columns).

Essential vs Optional: Theoretical ML

Theoretical Machine Learning — Essential Foundations#

Theorems and formal guarantees:

  1. Representer theorem (Kimeldorf & Wahba 1970, Schölkopf et al. 2001). For kernel ridge regression and SVMs, the optimal solution has the form: $$ f^*(x) = \sum_{i=1}^n \alpha_i k(x_i, x) $$ This holds for **any** reproducing kernel $k$ on RKHS $\mathcal{H}$. The solution lies in the $n$-dimensional subspace $\text{span}\{k(x_1, \cdot), \ldots, k(x_n, \cdot)\}$, even though $\mathcal{H}$ may be infinite-dimensional (e.g., RBF kernel).
  2. VC dimension and span (Vapnik & Chervonenkis 1971). For linear classifiers in $\mathbb{R}^d$, the VC dimension is $d+1$. This measures expressiveness: the classifier can shatter (correctly classify all $2^{d+1}$ labelings) any set of $d+1$ points in general position. The decision boundaries are hyperplanes (linear combinations of features).
  3. Rank-nullity theorem (fundamental theorem of linear algebra). For $A \in \mathbb{R}^{m \times n}$: $$ \text{rank}(A) + \dim(\text{null}(A)) = n $$ In ML: If $X \in \mathbb{R}^{n \times d}$ has $\text{rank}(X) = r < d$, there are $d - r$ linearly dependent features (null space dimension). Solutions to $Xw = y$ form an affine subspace $w_{\text{particular}} + \text{null}(X)$.
  4. Eckart-Young theorem (1936). The truncated SVD $X_k = U_k \Sigma_k V_k^\top$ (keeping the top $k$ singular values) is the best rank-$k$ approximation in Frobenius norm: $$ \|X - X_k\|_F = \min_{\text{rank}(B) \leq k} \|X - B\|_F $$ Geometrically: Projecting the columns of $X$ onto $\text{span}\{u_1, \ldots, u_k\}$ (top left singular vectors) minimizes reconstruction error. This justifies PCA, low-rank matrix completion, and recommender systems.
  5. Johnson-Lindenstrauss lemma (1984). Random projection from $\mathbb{R}^d$ to $\mathbb{R}^k$ (with $k = O(\log n / \epsilon^2)$) approximately preserves pairwise distances with high probability. This enables dimensionality reduction: data approximately lies in a low-dimensional subspace, discoverable via random projections.

Why essential: These theorems quantify when learning is tractable (representer theorem → finite-dimensional optimization), how much data suffices (VC dimension → sample complexity), and when low-dimensional structure exists (Eckart-Young → lossy compression bounds).

 

Applied Machine Learning — Essential for Implementation#

Achievements and landmark systems:

  1. Word2Vec (Mikolov et al., 2013). Learned 300-dimensional embeddings for millions of words via skip-gram/CBOW. Demonstrated linear structure: $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} \approx e_{\text{queen}}$ achieved 40% accuracy on analogy tasks. Showed that linear combinations capture semantic relationships (gender, tense, capitals).
  2. ResNet (He et al., 2015). Introduced skip connections $y = F(x) + x$, enabling training of 152-layer networks (vs. ~20 layers for VGG). Won ImageNet 2015 with 3.57% top-5 error. The key: $F(x) + x$ is a linear combination (residual + identity), preserving gradients during backpropagation.
  3. Transformer (Vaswani et al., 2017). Replaced RNNs with attention: $\text{softmax}(QK^\top / \sqrt{d_k}) V$ (linear combination of value vectors). Enabled GPT-3 (175B params, Brown et al. 2020), BERT (340M params, Devlin et al. 2018), and state-of-the-art results across NLP (translation, summarization, QA).
  4. Kernel SVMs (Boser et al. 1992, Cortes & Vapnik 1995). Applied kernel trick to large-margin classifiers. Won NIPS 2003 feature selection challenge, achieved 99.3% accuracy on MNIST (Decoste & Schölkopf 2002). Decision function $f(x) = \sum_{i \in SV} \alpha_i y_i k(x_i, x)$ is a sparse linear combination (only support vectors have $\alpha_i \neq 0$).
  5. PCA for face recognition (Eigenfaces, Turk & Pentland 1991). Projected face images onto span of top eigenvectors (principal components). Each face is approximated as $x \approx \sum_{i=1}^k c_i u_i$ (linear combination of eigenfaces). Achieved real-time recognition with $k = 50$-$100$ components (vs. $d = 10,000$ pixels).
  6. GPT-3 (Brown et al., 2020). 175B parameter Transformer trained on 300B tokens. Demonstrated few-shot learning (2-3 examples) across diverse tasks without fine-tuning. Attention layers compute $\sum_{i=1}^n \alpha_i v_i$ (linear combinations of context), with $n = 2048$ tokens.

Why essential: These systems achieved state-of-the-art by exploiting linear combination structure (attention, skip connections, kernel methods). Understanding span is necessary to interpret embeddings (Word2Vec analogies), debug failures (rank deficiency in features), and design architectures (multi-head attention = multiple subspaces).

Key ideas: Where it shows up

1. Principal Component Analysis (PCA) — Data spans low-dimensional subspace#

Major achievements:

  • Hotelling (1933): Formalized PCA as finding orthogonal directions of maximum variance. Principal components are eigenvectors of the covariance matrix $C = \frac{1}{n} X_c^\top X_c$ (centered data).
  • Eckart-Young theorem (1936): Proved that truncated SVD $X \approx U_k \Sigma_k V_k^\top$ (keeping top $k$ singular vectors) minimizes reconstruction error $\|X - \hat{X}\|_F$. This justifies PCA: projecting onto $\text{span}\{u_1, \ldots, u_k\}$ (top eigenvectors) is optimal.
  • Modern applications: Face recognition (eigenfaces, Turk & Pentland 1991), data compression (JPEG2000), preprocessing for neural networks (whitening), exploratory data analysis (visualizing high-dimensional datasets in 2D/3D).

Connection to span: PCA finds the $k$-dimensional subspace (span of top eigenvectors) that best approximates the data cloud. Projecting data $X$ onto $\text{span}\{u_1, \ldots, u_k\}$ gives $X_{\text{proj}} = X V_k V_k^\top$, where each row is a linear combination of top eigenvectors. The retained variance is $\sum_{i=1}^k \lambda_i / \sum_{i=1}^d \lambda_i$.

 

2. Stochastic Gradient Descent (SGD) — Updates are linear combinations#

Major achievements:

  • Robbins & Monro (1951): Proved convergence of stochastic approximation $\theta_{t+1} = \theta_t - \eta_t g_t$ (where $g_t$ is a noisy gradient) under diminishing step sizes $\sum_t \eta_t = \infty$, $\sum_t \eta_t^2 < \infty$.
  • Momentum methods (Polyak 1964, Nesterov 1983): Introduced momentum $m_{t+1} = \beta m_t + \nabla \mathcal{L}(\theta_t)$, $\theta_{t+1} = \theta_t - \eta m_{t+1}$ (exponentially weighted average of gradients). This is a linear combination of past gradients with decaying weights.
  • Adam optimizer (Kingma & Ba 2014): Adaptive learning rates using first and second moment estimates. Became the dominant optimizer for deep learning (BERT, GPT, Stable Diffusion).

Connection to span: Every gradient descent update $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ is a linear combination of the current parameters and the negative gradient. The optimization trajectory $\{\theta_0, \theta_1, \ldots\}$ lies in the affine subspace $\theta_0 + \text{span}\{\nabla \mathcal{L}(\theta_0), \nabla \mathcal{L}(\theta_1), \ldots\}$. For linear models, gradients are linear combinations of data columns.
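A minimal sketch of momentum as an exponentially weighted combination of past gradients, on an illustrative quadratic loss:

import numpy as np

A = np.array([[3., 0.], [0., 1.]])  # loss L(w) = 0.5 * w^T A w
w = np.array([2., 2.])
m = np.zeros(2)
beta, eta = 0.9, 0.1
for _ in range(200):
    g = A @ w                       # gradient of the quadratic loss
    m = beta * m + g                # linear combination of past gradients
    w = w - eta * m
print(w)                            # close to the minimizer [0, 0]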

 

3. Deep Neural Networks—Compositional Linear Combinations#

Major achievements:

  • Universal approximation (Cybenko 1989, Hornik 1991): Single hidden layer networks can approximate continuous functions arbitrarily well. The output is $f(x) = \sum_{i=1}^h w_i \sigma(v_i^\top x + b_i)$ (linear combination of activations).
  • Deep learning revolution (2012-present): AlexNet (2012), VGG (2014), ResNet (2015), Transformers (2017) demonstrated that depth (composing linear maps + nonlinearities) is more powerful than width (more neurons per layer).
  • Neural Tangent Kernels (Jacot et al. 2018): Showed that infinite-width networks behave like kernel methods, with predictions in $\text{span}\{\text{training features}\}$.

Connection to span: Each layer computes $h_{l+1} = \sigma(W_l h_l + b_l)$, where $W_l h_l$ is a linear combination of hidden activations (columns of $W_l$ with coefficients from $h_l$). The pre-activation $W_l h_l$ lies in $\text{col}(W_l)$. Deep networks compose these linear combinations across layers, creating hierarchical representations.

 

4. Kernel Methods—Predictions as linear combinations of kernels#

Major achievements:

  • Representer theorem (Kimeldorf & Wahba 1970): For regularized risk minimization $\min_{f \in \mathcal{H}} \sum_{i=1}^n \ell(y_i, f(x_i)) + \lambda \|f\|_{\mathcal{H}}^2$, the optimal solution is $f^*(x) = \sum_{i=1}^n \alpha_i k(x_i, x)$ (linear combination of kernel basis functions).
  • Support Vector Machines (Boser et al. 1992, Cortes & Vapnik 1995): Introduced large-margin classifiers with kernel trick. Won NIPS feature selection challenge (2003), dominated ML competitions (early 2000s).
  • Gaussian Processes (Rasmussen & Williams 2006): Bayesian kernel methods for regression/classification. Predictions are linear combinations $f(x) = \sum_{i=1}^n \alpha_i k(x_i, x)$ with $\alpha = (K + \sigma^2 I)^{-1} y$.

Connection to span: Despite working in (potentially infinite-dimensional) RKHS, kernel predictions always lie in $\text{span}\{k(x_1, \cdot), \ldots, k(x_n, \cdot)\}$ (finite-dimensional subspace spanned by training kernels). The Gram matrix $K_{ij} = k(x_i, x_j)$ encodes inner products in this subspace.

 

5. Transformer Attention—Weighted sums of value vectors#

Major achievements:

  • Vaswani et al. (2017): “Attention is All You Need” replaced RNNs with self-attention. Enabled parallelization and scaling to billions of parameters (GPT-3: 175B params; later models are larger still).
  • BERT (Devlin et al. 2018): Bidirectional Transformers for masked language modeling. Achieved state-of-the-art on 11 NLP tasks (GLUE benchmark).
  • Vision Transformers (Dosovitskiy et al. 2020): Applied attention to image patches, surpassing CNNs on ImageNet (ViT-H/14: 88.5% top-1 accuracy).
  • Multimodal models (CLIP, Flamingo, GPT-4): Unified vision and language via attention over heterogeneous inputs.

Connection to span: Attention output $z = \text{softmax}(QK^\top / \sqrt{d_k}) V$ is a convex combination (weighted average with non-negative weights summing to 1) of value vectors (rows of $V$). Each output lies in $\text{span}(\text{rows of } V)$. Multi-head attention projects to $h$ different subspaces, computing $h$ independent linear combinations in parallel.

Notation

Standard Conventions#

1. Linear combination syntax.

  • Summation notation: $\sum_{i=1}^k \alpha_i v_i = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k$.
  • Matrix-vector product: $Xw = \sum_{j=1}^d w_j x_j$ (linear combination of columns of $X$ with weights from $w$).
  • Convex combination: $\sum_{i=1}^k \alpha_i v_i$ with $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$ (weighted average).

Examples:

  • Linear regression prediction: $\hat{y} = X w = \sum_{j=1}^d w_j X_{:,j}$ (each prediction is a linear combination of feature columns).
  • Attention output: $z = \sum_{i=1}^n \alpha_i v_i$ where $\alpha = \text{softmax}(Kq / \sqrt{d_k})$ for a query $q$ and a key matrix $K$ whose rows are key vectors (convex combination of value vectors).
  • Word analogy: $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} = 1 \cdot e_{\text{king}} + (-1) \cdot e_{\text{man}} + 1 \cdot e_{\text{woman}}$ (coefficients can be negative).

2. Span notation.

  • Set notation: $\text{span}\{v_1, \ldots, v_k\} = \{\sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R}\}$.
  • Equivalent: $\text{span}(S)$ where $S = \{v_1, \ldots, v_k\}$ (span of a set).
  • Column space: $\text{col}(A) = \text{span}\{\text{columns of } A\}$.
  • Row space: $\text{row}(A) = \text{span}\{\text{rows of } A\} = \text{col}(A^\top)$.

Examples:

  • For $X \in \mathbb{R}^{3 \times 2}$ with columns $x_1 = [1, 0, 1]^\top$, $x_2 = [0, 1, 1]^\top$: $$ \text{col}(X) = \text{span}\{x_1, x_2\} = \left\{ w_1 \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + w_2 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} : w_1, w_2 \in \mathbb{R} \right\} $$ This is a 2D plane in $\mathbb{R}^3$ passing through the origin.

3. Linear independence notation.

  • Independence: Vectors $\{v_1, \ldots, v_k\}$ are linearly independent if $\sum_{i=1}^k \alpha_i v_i = 0 \Rightarrow \alpha_1 = \cdots = \alpha_k = 0$.
  • Dependence: If there exist $\alpha_i$ (not all zero) such that $\sum_{i=1}^k \alpha_i v_i = 0$, vectors are linearly dependent.
  • Rank: $\text{rank}(A) = \max\{\text{number of linearly independent columns of } A\} = \max\{\text{number of linearly independent rows of } A\}$.

Examples:

  • Vectors $v_1 = [1, 0]^\top$, $v_2 = [0, 1]^\top$ are linearly independent (standard basis for $\mathbb{R}^2$).
  • Vectors $v_1 = [1, 2]^\top$, $v_2 = [2, 4]^\top$ are linearly dependent ($v_2 = 2 v_1$).
  • For $X \in \mathbb{R}^{100 \times 50}$, $\text{rank}(X) \leq 50$ (at most 50 linearly independent columns).

4. Basis notation.

  • Basis: A linearly independent spanning set. Denoted $\mathcal{B} = \{v_1, \ldots, v_d\}$ for a $d$-dimensional space.
  • Standard basis: $\{e_1, \ldots, e_d\}$ where $e_i$ has 1 in position $i$, 0 elsewhere.
  • Coordinates: For vector $v = \sum_{i=1}^d \alpha_i v_i$ (linear combination of basis vectors), the coordinates are $[\alpha_1, \ldots, \alpha_d]^\top$.

Examples:

  • Standard basis for $\mathbb{R}^3$: $e_1 = [1, 0, 0]^\top$, $e_2 = [0, 1, 0]^\top$, $e_3 = [0, 0, 1]^\top$.
  • Any vector $v = [v_1, v_2, v_3]^\top = v_1 e_1 + v_2 e_2 + v_3 e_3$ (linear combination of standard basis).
  • For PCA, top $k$ eigenvectors $\{u_1, \ldots, u_k\}$ form a basis for the principal subspace.

5. Kernel and null space notation.

  • Null space: $\text{null}(A) = \{x : Ax = 0\}$ (vectors mapped to zero).
  • Kernel: $\ker(A) = \text{null}(A)$ (alternative notation).
  • Range (column space): $\text{range}(A) = \text{col}(A) = \{Ax : x \in \mathbb{R}^n\}$.

Examples:

  • For $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ (rank 1), $\text{null}(A) = \text{span}\{[2, -1]^\top\}$ (1D subspace).
  • Overparameterized regression: If $\text{rank}(X) < d$, solutions to $Xw = y$ form $w_0 + \text{null}(X)$ (affine subspace).
  • Kernel ridge regression: Solution $\alpha = (K + \lambda I)^{-1} y$ lies in $\mathbb{R}^n$ (span of training examples).

6. Projection notation.

  • Orthogonal projection: $P_S v$ projects $v$ onto subspace $S$.
  • Projection matrix: $P = A(A^\top A)^{-1} A^\top$ projects onto $\text{col}(A)$.
  • Complement: $v = P_S v + P_{S^\perp} v$ (decomposition into parallel and perpendicular components).

Examples:

  • PCA projection onto top $k$ eigenvectors: $X_{\text{proj}} = X V_k V_k^\top$ where $V_k = [u_1 | \cdots | u_k]$.
  • Least squares: $\hat{y} = X(X^\top X)^{-1} X^\top y$ (projection of $y$ onto $\text{col}(X)$).
  • Residual: $r = y - \hat{y} = (I - X(X^\top X)^{-1} X^\top) y$ (projection onto $\text{col}(X)^\perp$).
Pitfalls & sanity checks

Common Mistakes#

  1. Confusing span with basis: Span dimension = number of linearly independent vectors, not total count.
  2. Assuming full rank: Always check np.linalg.matrix_rank(X) before inverting $X^\top X$.
  3. Ignoring numerical stability: Use lstsq instead of normal equations.
  4. Misunderstanding convex combinations: Not all linear combinations are convex (need $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$).
  5. Overparameterization misconceptions: $d > n$ doesn’t always cause overfitting (implicit regularization).

     

Essential Checks#

import numpy as np  # X, v1, v2, v3, v, z, and attention weights alpha are assumed defined upstream

# Check linear independence of the columns of X
rank = np.linalg.matrix_rank(X)
assert rank == X.shape[1], "Columns linearly dependent"

# Verify span membership: is v a linear combination of v1, v2, v3?
V = np.column_stack([v1, v2, v3])
coeffs = np.linalg.lstsq(V, v, rcond=None)[0]
assert np.allclose(V @ coeffs, v), "v not in span(V)"

# Test null space membership
assert np.allclose(X @ z, 0), "z not in null(X)"

# Attention weights must form a convex combination
assert np.allclose(alpha.sum(), 1) and (alpha >= 0).all()
References

Foundational Texts#

  1. Strang (2016): Linear Algebra - span, basis, column/null space
  2. Axler (2015): Linear Algebra Done Right - abstract vector spaces
  3. Horn & Johnson (2013): Matrix Analysis - rank, decompositions

     

Machine Learning#

  1. Hastie et al. (2009): Elements of Statistical Learning - regression, SVMs
  2. Goodfellow et al. (2016): Deep Learning - Chapter 2 (Linear Algebra)
  3. Murphy (2022): Probabilistic ML - linear regression, kernels

     

Key Papers#

  1. Kimeldorf & Wahba (1970): Representer theorem
  2. Vapnik & Chervonenkis (1971): VC dimension
  3. Eckart & Young (1936): Low-rank approximation
  4. Mikolov et al. (2013): Word2Vec analogies
  5. Vaswani et al. (2017): Transformer attention
  6. He et al. (2015): ResNet skip connections
  7. Bartlett et al. (2020): Benign overfitting
  8. Belkin et al. (2019): Double descent

     

Advanced Topics#

  1. Schölkopf & Smola (2002): Learning with Kernels
  2. Rasmussen & Williams (2006): Gaussian Processes
  3. Golub & Van Loan (2013): Matrix Computations
  4. Trefethen & Bau (1997): Numerical Linear Algebra
Five worked examples

Worked Example 1: Predictions lie in span(columns of X)#

Introduction#

Linear regression predictions $\hat{y} = Xw$ are linear combinations of the columns of the feature matrix $X$. This fundamental observation reveals model expressiveness: all possible predictions lie in the column space $\text{col}(X)$, a subspace of $\mathbb{R}^n$. If the target $y$ lies outside this subspace ($y \notin \text{col}(X)$), perfect fit is impossible—the best we can do is project $y$ onto $\text{col}(X)$ (least squares solution).

This example explicitly computes $Xw$ as $\sum_{j=1}^d w_j X_{:,j}$ (sum of weighted columns), demonstrating that predictions span a subspace determined entirely by the features.

 

Purpose#

  • Visualize predictions as linear combinations: Show that $\hat{y} = Xw = w_1 X_{:,1} + w_2 X_{:,2} + \cdots + w_d X_{:,d}$.
  • Identify the constraint: Predictions lie in $\text{col}(X)$, limiting model capacity to $\dim(\text{col}(X)) = \text{rank}(X)$.
  • Connect to least squares: When $y \notin \text{col}(X)$, minimizing $\|Xw - y\|_2$ finds the closest point in $\text{col}(X)$ to $y$.
     

Importance#

Model expressiveness. The span of $X$’s columns determines all possible predictions. For $X \in \mathbb{R}^{n \times d}$:

  • If $\text{rank}(X) = d$ (full column rank), the model can fit any $d$ linearly independent targets.
  • If $\text{rank}(X) < d$, some features are redundant (linearly dependent). Adding more linearly dependent features doesn’t increase capacity.
  • If $\text{rank}(X) < n$ (typical when $d < n$), predictions lie in a proper subspace of $\mathbb{R}^n$. Perfect fit is impossible unless $y \in \text{col}(X)$.

Residuals and orthogonality. The least squares residual $r = y - \hat{y}$ is orthogonal to $\text{col}(X)$: $X^\top r = 0$. Geometrically, $\hat{y}$ is the orthogonal projection of $y$ onto $\text{col}(X)$, and $r$ lies in the orthogonal complement $\text{col}(X)^\perp$.

Feature selection. If feature $j$ is a linear combination of other features ($X_{:,j} = \sum_{i \neq j} c_i X_{:,i}$), including it doesn’t increase $\text{rank}(X)$ or expand $\text{col}(X)$. Feature selection algorithms (Lasso, forward selection) aim to find minimal feature sets spanning the target space.

 

What This Example Demonstrates#

  • Matrix-vector product as linear combination: $Xw = \sum_{j=1}^d w_j X_{:,j}$ (sum of weighted columns).
  • Predictions constrained to subspace: $\hat{y} \in \text{col}(X) = \text{span}\{X_{:,1}, \ldots, X_{:,d}\}$.
  • Numerical verification: Compute both $Xw$ (matrix product) and $\sum_j w_j X_{:,j}$ (explicit sum), verify they’re identical.

     

Background#

Least squares (Gauss 1809, Legendre 1805). Gauss used least squares to fit planetary orbits, minimizing sum of squared errors. The key insight: predictions $\hat{y} = Xw$ lie in $\text{col}(X)$, so minimizing $\|y - Xw\|_2^2$ finds the closest point in $\text{col}(X)$ to $y$.

Normal equations. Setting $\nabla_w \|Xw - y\|_2^2 = 0$ gives $X^\top X w = X^\top y$. If $X$ has full column rank, $w^* = (X^\top X)^{-1} X^\top y$. The prediction is $\hat{y} = X w^* = X(X^\top X)^{-1} X^\top y$ (projection matrix $P = X(X^\top X)^{-1} X^\top$ projects onto $\text{col}(X)$).

Geometric interpretation. $\text{col}(X)$ is a $d$-dimensional (or $\text{rank}(X)$-dimensional) hyperplane in $\mathbb{R}^n$. The prediction $\hat{y}$ is the foot of the perpendicular from $y$ to this hyperplane. The residual $r = y - \hat{y}$ is perpendicular to the hyperplane.
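A sketch checking both routes to the least squares solution and the orthogonality of the residual to $\text{col}(X)$ (random data for illustration):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
w_normal = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations (fine when well conditioned)
print(np.allclose(w_lstsq, w_normal))         # True
r = y - X @ w_lstsq                           # residual
print(np.allclose(X.T @ r, 0))                # True: r is perpendicular to every column of X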

 

Historical Context#

1. Least squares origins (Gauss 1809, Legendre 1805). Legendre published the method in 1805 for fitting orbits. Gauss claimed to have used it since 1795 (controversy over priority). Both recognized that predictions are linear combinations of features.

2. Matrix formulation (Cauchy 1829, Sylvester 1850). Matrix algebra enabled compact notation $\hat{y} = Xw$ instead of writing out sums. Sylvester introduced “matrix” terminology in 1850.

3. Projection interpretation (Schmidt 1907, Courant & Hilbert 1924). Erhard Schmidt formalized orthogonal projections in Hilbert spaces. The least squares solution became understood as projecting $y$ onto $\text{col}(X)$.

4. Modern ML (1990s-present). Regularization (ridge, Lasso) modifies $\text{col}(X)$ by adding penalty terms. Kernel methods (SVMs, kernel ridge regression) work in implicitly mapped feature spaces, where $\text{col}(\Phi(X))$ may be infinite-dimensional but solutions lie in $\text{span}\{k(x_i, \cdot)\}_{i=1}^n$ (finite-dimensional by the representer theorem).

 

History in Machine Learning#

  • 1805: Legendre publishes least squares (linear combinations of features).
  • 1809: Gauss derives normal equations $X^\top X w = X^\top y$.
  • 1907: Schmidt formalizes orthogonal projections (geometric interpretation).
  • 1970: Kimeldorf & Wahba prove the representer theorem (kernel solutions in the span of training points).
  • 1995: Vapnik’s Nature of Statistical Learning Theory connects VC dimension to span of hypothesis class.
  • 2006: Compressed sensing (Candès, Donoho) exploits sparse linear combinations for recovery.
  • 2018: Neural Tangent Kernels (Jacot et al.) show infinite-width networks have predictions in the span of features.

     

Prevalence in Machine Learning#

Universal in supervised learning: Every linear model (linear regression, logistic regression, linear SVM, perceptron) computes predictions as $\hat{y} = Xw$ or $\hat{y} = \sigma(Xw + b)$ (linear combination + nonlinearity).

Deep learning layers: Each fully connected layer computes $h_{l+1} = \sigma(W_l h_l + b_l)$, where $W_l h_l$ is a linear combination of hidden activations (columns of $W_l$ with coefficients from $h_l$).

Generalized linear models (GLMs): Exponential family models (Poisson regression, gamma regression) use $\mathbb{E}[y] = g^{-1}(X w)$ (linear combination inside link function).

Kernel methods: SVMs, kernel ridge regression, Gaussian processes all predict via $f(x) = \sum_{i=1}^n \alpha_i k(x_i, x)$ (linear combination of kernel evaluations).

 

Notes and Explanatory Details#

Shape discipline:

  • Feature matrix: $X \in \mathbb{R}^{n \times d}$ (rows = examples, columns = features).
  • Weights: $w \in \mathbb{R}^d$ (one weight per feature).
  • Prediction: $\hat{y} = Xw \in \mathbb{R}^n$ (one prediction per example).
  • Column $j$ of $X$: $X_{:,j} \in \mathbb{R}^n$ (feature $j$ across all examples).

Matrix-vector product identity: $$ Xw = \begin{bmatrix} | & | & & | \\ X_{:,1} & X_{:,2} & \cdots & X_{:,d} \\ | & | & & | \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_d \end{bmatrix} = \sum_{j=1}^d w_j X_{:,j} $$

Example: For $X = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}$, $w = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$: $$ Xw = 2 \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix} + (-1) \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ 4 \end{bmatrix} $$

Numerical considerations: For large $d$ (wide data), storing $X$ explicitly may be wasteful if $\text{rank}(X) \ll d$. Low-rank approximations (truncated SVD) reduce storage and computation.
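
For instance, a low-rank matrix can be stored and applied through a truncated SVD; a minimal sketch (the rank-2 matrix below is constructed purely for illustration):

import numpy as np

# Build a 100 x 50 matrix of rank 2 (hidden low-rank structure)
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))

# Truncated SVD keeps only the top-k singular triplets
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("numerical rank of A:", np.linalg.matrix_rank(A))
print("reconstruction error ||A - A_k||_F:", np.linalg.norm(A - A_k))
print("stored numbers: full =", A.size, " truncated =", U[:, :k].size + k + Vt[:k, :].size)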

 

Connection to Machine Learning#

Underfitting vs. overfitting: If $\text{rank}(X) \ll n$ (few effective features), the model underfits (predictions lie in low-dimensional subspace). If $\text{rank}(X) = n$ and $d \geq n$ (more features than examples), the model can perfectly fit noise (overfitting).

Regularization reshapes the fit within the span: Ridge regression solves $(X^\top X + \lambda I) w = X^\top y$, shrinking weights toward zero. Predictions still lie in $\text{col}(X)$, but components along directions with small singular values are strongly damped, reducing the effective degrees of freedom of the fit.

Basis functions and feature expansion: Nonlinear models (polynomial regression, RBF networks) expand features, e.g., $\phi(x) = [x, x^2, x^3, \ldots]$. Predictions $\hat{y} = \Phi(X) w$ lie in $\text{col}(\Phi(X))$: the model remains linear in the expanded features even though the fitted function is nonlinear in the original inputs.
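
A minimal sketch of this idea with a hand-rolled polynomial expansion (the synthetic 1-D data and degree are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=20)
y = np.sin(3 * x)                              # nonlinear target

# Expand scalar inputs into polynomial features phi(x) = [1, x, x^2, x^3]
Phi = np.column_stack([x**p for p in range(4)])

# Ordinary least squares in the expanded feature space
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w

# Predictions are a linear combination of the columns of Phi
explicit = sum(w[p] * Phi[:, p] for p in range(4))
print("y_hat in col(Phi):", np.allclose(y_hat, explicit))
print("fit error:", np.linalg.norm(y - y_hat))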

 

Connection to Linear Algebra Theory#

Fundamental theorem of linear algebra. For $X \in \mathbb{R}^{n \times d}$: $$ \mathbb{R}^n = \text{col}(X) \oplus \text{null}(X^\top) $$ (direct sum: every vector $y \in \mathbb{R}^n$ decomposes uniquely as $y = y_{\parallel} + y_{\perp}$ where $y_{\parallel} \in \text{col}(X)$ and $y_{\perp} \in \text{null}(X^\top)$).

In least squares, $\hat{y} = y_{\parallel}$ (projection onto $\text{col}(X)$) and $r = y_{\perp}$ (projection onto $\text{null}(X^\top)$). The normal equations $X^\top r = 0$ express orthogonality.

Rank and dimension: $\dim(\text{col}(X)) = \text{rank}(X) \leq \min(n, d)$. If $\text{rank}(X) = r < d$, there are $d - r$ redundant features (null space has dimension $d - r$).

Projection matrix: $P = X(X^\top X)^{-1} X^\top$ (assuming $X$ has full column rank) satisfies the following properties, checked numerically in the sketch after this list:

  • $P^2 = P$ (idempotent: projecting twice is the same as projecting once).
  • $P^\top = P$ (symmetric: orthogonal projection).
  • $\text{col}(P) = \text{col}(X)$ (projects onto column space of $X$).
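
A minimal sketch of these checks (the random $6 \times 3$ matrix has full column rank with probability one):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
P = X @ np.linalg.inv(X.T @ X) @ X.T            # projection onto col(X)

print("idempotent (P^2 = P):", np.allclose(P @ P, P))
print("symmetric  (P^T = P):", np.allclose(P.T, P))

# col(P) = col(X): P leaves any vector already in col(X) unchanged
v = X @ rng.normal(size=3)
print("P v = v for v in col(X):", np.allclose(P @ v, v))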

     

Pedagogical Significance#

Concrete visualization. Students can compute $Xw$ by hand for small $X$ (e.g., $3 \times 2$ matrix) and verify it’s a weighted sum of columns. This makes the abstract “linear combination” concept tangible.

Foundation for least squares. Understanding that predictions lie in $\text{col}(X)$ is essential before learning least squares. The geometric interpretation (projecting $y$ onto $\text{col}(X)$) clarifies why least squares works and when it fails.

Debugging linear models. If predictions are poor, check $\text{rank}(X)$: low rank indicates redundant/collinear features. Use np.linalg.matrix_rank(X) to diagnose.

 

References#

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press. Chapter 4: “Orthogonality” (projections, least squares).
  2. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer. Chapter 3: “Linear Methods for Regression.”
  3. Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press. Appendix C: “Numerical Linear Algebra Background” (least squares, QR decomposition).
  4. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. Chapter 5: “Orthogonalization and Least Squares.”
  5. Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press. Chapter 11: “Linear Regression.”

Problem. Show $\hat{y} = Xw$ lies in the span of columns of $X$.

Solution (math).

For $X \in \mathbb{R}^{n \times d}$ with columns $X_{:,1}, \ldots, X_{:,d} \in \mathbb{R}^n$ and weights $w = [w_1, \ldots, w_d]^\top \in \mathbb{R}^d$, the prediction is: $$ \hat{y} = Xw = \sum_{j=1}^d w_j X_{:,j} $$

This is a linear combination of the columns of $X$, so $\hat{y} \in \text{span}\{X_{:,1}, \ldots, X_{:,d}\} = \text{col}(X)$.

Solution (Python).

import numpy as np

# Define feature matrix X (3 examples, 2 features)
X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# Define weight vector w
w = np.array([2., -1.])

# Prediction via matrix-vector product
y_hat_1 = X @ w

# Prediction as explicit linear combination of columns
y_hat_2 = w[0] * X[:, 0] + w[1] * X[:, 1]

print(f"X =\n{X}\n")
print(f"w = {w}\n")
print(f"Method 1 (matrix product): y_hat = X @ w = {y_hat_1}")
print(f"Method 2 (linear combination): y_hat = {w[0]}*X[:,0] + {w[1]}*X[:,1] = {y_hat_2}")
print(f"\nAre they equal? {np.allclose(y_hat_1, y_hat_2)}")
print(f"y_hat lies in span(columns of X): True (by construction)")

Output:

X =
[[1. 2.]
 [3. 4.]
 [5. 6.]]

w = [ 2. -1.]

Method 1 (matrix product): y_hat = X @ w = [0. 2. 4.]
Method 2 (linear combination): y_hat = 2.0*X[:,0] + -1.0*X[:,1] = [0. 2. 4.]

Are they equal? True
y_hat lies in span(columns of X): True (by construction)

Worked Example 2: Kernel ridge solution lies in span(training features)#

Introduction#

The representer theorem states that despite optimizing over an infinite-dimensional RKHS, the optimal solution for kernel ridge regression always has the form $f^*(x) = \sum_{i=1}^n \alpha_i k(x_i, x)$—a linear combination of kernel functions evaluated at training points.

 

Purpose#

  • Demonstrate the representer theorem computationally
  • Show that optimization in infinite dimensions reduces to solving $(K + \lambda I)\alpha = y$
  • Verify predictions lie in span of training kernels

Importance#

Kernel methods enable nonlinear learning in implicitly mapped feature spaces while maintaining computational tractability ($O(n^3)$ instead of infinite-dimensional optimization).

 

What This Example Demonstrates#

Compute kernel Gram matrix $K_{ij} = k(x_i, x_j)$, solve for $\alpha$, interpret as linear combination of training kernels.

 

Background#

RKHS and representer theorem (Kimeldorf & Wahba 1970): For loss $\mathcal{L}(f) = \sum_i \ell(y_i, f(x_i)) + \lambda \|f\|_{\mathcal{H}}^2$, the minimizer is $f^*(x) = \sum_i \alpha_i k(x_i, x)$.

 

References#

  1. Kimeldorf & Wahba (1970), Schölkopf et al. (2001), Rasmussen & Williams (2006)

Problem: Compute $\alpha$ for kernel ridge regression and interpret span.

Solution (math): $\alpha = (K + \lambda I)^{-1} y$ where $K_{ij} = k(x_i, x_j)$. Predictions: $f(x) = \sum_i \alpha_i k(x_i, x)$.

Solution (Python):

import numpy as np
from scripts.toy_data import toy_pca_points, toy_kernel_rbf

# Toy training inputs (6 points) and arbitrary real-valued targets
X = toy_pca_points(n=6, seed=1)
y = np.arange(len(X), dtype=float)

# Gram matrix K_ij = k(x_i, x_j) for an RBF kernel
K = toy_kernel_rbf(X, gamma=0.5)

# Kernel ridge regression: solve (K + lambda*I) alpha = y
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

print(f"Coefficients alpha: {alpha}")
print(f"Predictions lie in span{{k(x_1, ·), ..., k(x_6, ·)}}")

Worked Example 3: Attention is a weighted sum#

Introduction#

Attention computes outputs as convex combinations of value vectors: $z = \sum_i \alpha_i v_i$, where $\alpha = \text{softmax}(q K^\top / \sqrt{d_k})$ for a query row vector $q$ and a key matrix $K$ whose rows are the keys.

 

Purpose#

Show attention output is a linear combination, verify weights sum to 1, demonstrate constraint to span of values.

 

Importance#

Attention is the core operation in Transformers (GPT, BERT), enabling contextual representations through weighted averaging.

 

References#

Vaswani et al. (2017), Bahdanau et al. (2015)

Problem: Compute attention output as $\sum_i \alpha_i v_i$.

Solution (math): $z = \text{softmax}(q K^\top / \sqrt{d_k})\, V$, where the rows of $K$ and $V$ hold the keys and values.

Solution (Python):

import numpy as np
from scripts.toy_data import scaled_dot_attention

# One query and three key/value pairs (each row is a vector in R^2)
Q = np.array([[1., 0.]])
K = np.array([[1., 0.], [0., 1.], [1., 1.]])
V = np.array([[1., 0.], [0., 2.], [1., 1.]])

# output = softmax(Q K^T / sqrt(d_k)) V  -- a weighted sum of the rows of V
output = scaled_dot_attention(Q, K, V)

print(f"Attention output: {output[0]}")
print(f"Output lies in span(rows of V)")

Worked Example 4: Overparameterization and null space#

Introduction#

When $d > n$ (more parameters than examples), solutions to $Xw = y$ are non-unique. The solution set forms an affine subspace $w_0 + \text{null}(X)$.

 

Purpose#

Show non-uniqueness, identify solution set structure, and discuss the minimum-norm solution returned by lstsq.

 

Importance#

Modern deep learning is vastly overparameterized. Understanding null space clarifies why multiple parameters give identical predictions yet generalize differently.

 

References#

Bartlett et al. (2020), Belkin et al. (2019)

Problem: Explain non-uniqueness when $d > n$.

Solution (math): If $Xw_0 = y$ and $z \in \text{null}(X)$, then $X(w_0 + z) = y$. Solutions form $w_0 + \text{null}(X)$.

Solution (Python):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))  # n=3, d=5
w0 = rng.normal(size=5)
y = X @ w0
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"Rank(X): {np.linalg.matrix_rank(X)}")
print(f"Null space dim: {X.shape[1] - np.linalg.matrix_rank(X)}")
print(f"||w_hat||_2 = {np.linalg.norm(w_hat):.4f} (minimum norm)")
print(f"w0 - w_hat in null(X): {np.allclose(X @ (w0 - w_hat), 0)}")

Worked Example 5: Word analogy vector arithmetic#

Introduction#

Word2Vec embeddings exhibit linear structure: $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} \approx e_{\text{queen}}$ (semantic relationships = vector offsets).

Purpose#

Compute analogy as linear combination, demonstrate compositional semantics, motivate embedding arithmetic.

Importance#

Analogies reveal that neural networks learn structured representations where linear algebra operations correspond to semantic operations.

References#

Mikolov et al. (2013), Pennington et al. (2014)

Problem: Compute “king - man + woman” analogy.

Solution (math): $e_{\text{target}} = 1 \cdot e_{\text{king}} + (-1) \cdot e_{\text{man}} + 1 \cdot e_{\text{woman}}$

Solution (Python):

import numpy as np

E = {
    'king': np.array([0.8, 0.2, 0.1]),
    'man': np.array([0.7, 0.1, 0.0]),
    'woman': np.array([0.6, 0.3, 0.0])
}

analogy = E['king'] - E['man'] + E['woman']
print(f"king - man + woman = {analogy}")
print(f"(Find nearest word to this vector → queen)")

Chapter 1
Vector Spaces & Subspaces
Key ideas: Introduction

Introduction#

Vector spaces and subspaces form the foundational algebraic structures underlying all of machine learning. Every dataset, parameter vector, gradient, embedding, and prediction lives in a vector space. Understanding vector space structure—closure under addition and scalar multiplication, the existence of subspaces, and the geometric interpretation of span—is essential for reasoning about model capacity, optimization trajectories, dimensionality reduction, and numerical stability.

This chapter adopts an ML-first approach: we introduce definitions only when they illuminate practical algorithms or enable rigorous reasoning about ML systems. Rather than axiomatizing vector spaces abstractly, we show how closure properties guarantee that gradient descent never “leaves” the parameter space, how subspaces capture low-dimensional structure in data (PCA, autoencoders), and how span determines the expressiveness of linear models.

Important Ideas#

1. Closure under linear combinations. A vector space $V$ over $\mathbb{R}$ is closed under addition and scalar multiplication: for any $u, v \in V$ and $\alpha, \beta \in \mathbb{R}$, we have $\alpha u + \beta v \in V$. This seemingly trivial property is foundational:

  • Optimization: Gradient descent updates $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ are linear combinations, so parameters remain in $\mathbb{R}^d$.

  • Convex combinations: Interpolations $v = \alpha a + (1-\alpha)b$ with $\alpha \in [0,1]$ stay in the space (used in mixup data augmentation, model averaging, momentum methods).

  • Span: The set of all linear combinations $\{\sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R}\}$ forms a subspace (the span of $\{v_1, \ldots, v_k\}$).

2. Subspaces capture structure. A subspace $S \subseteq V$ is itself a vector space (closed under addition/scaling and contains the zero vector). Key examples in ML:

  • Column space of $X$: All possible predictions $\hat{y} = Xw$ lie in $\text{col}(X)$, the span of feature columns. This determines model expressiveness.

  • Null space (kernel): Solutions to $Xw = 0$ form the null space, revealing parameter redundancy and identifiability issues.

  • Orthogonal complements: Residuals $r = y - Xw$ lie in $\text{col}(X)^\perp$, the subspace perpendicular to all predictions.

  • Eigenspaces: Eigenvectors with the same eigenvalue span an eigenspace (used in spectral clustering, PCA).

3. Geometric vs. algebraic perspectives. Vector spaces admit dual interpretations:

  • Algebraic: Vectors as tuples of numbers, operations as element-wise arithmetic, subspaces defined by equations.

  • Geometric: Vectors as arrows, subspaces as planes/lines, projections as “shadows,” orthogonality as perpendicularity.

  • ML benefit: Switching perspectives clarifies why algorithms work (geometry) and how to implement them (algebra).

Relevance to Machine Learning#

Model capacity. The span of a feature matrix $X \in \mathbb{R}^{n \times d}$ determines all possible linear predictions. If $\text{rank}(X) < d$, features are redundant (collinear). If $\text{rank}(X) < n$, the model cannot fit arbitrary targets (underdetermined system). Understanding span reveals when adding features helps vs. when it introduces multicollinearity.

Dimensionality reduction. PCA projects data onto the span of top eigenvectors, a low-dimensional subspace capturing most variance. Autoencoders learn nonlinear mappings to low-dimensional subspaces (latent spaces). Kernels implicitly map to high-dimensional (or infinite-dimensional) feature spaces where data becomes linearly separable.

Optimization and numerical stability. Gradient-based methods exploit closure: updates are linear combinations of parameters and gradients. Regularization (ridge, Lasso) modifies the effective subspace where solutions lie. Numerical conditioning depends on subspace geometry (angles between basis vectors, subspace dimension).

Algorithmic Development History#

1. Grassmann and the formal axiomatization (1844). Hermann Grassmann introduced the concept of an “extensive magnitude” (vector space) in Die lineale Ausdehnungslehre, defining addition and scalar multiplication axiomatically. His work was largely ignored until the 20th century but provided the first rigorous algebraic treatment of linear combinations and subspaces.

2. Peano’s axioms (1888). Giuseppe Peano formalized vector spaces with the modern axiomatic definition (closure, associativity, distributivity, identity, inverses). This abstraction enabled studying function spaces, polynomial spaces, and infinite-dimensional spaces under a unified framework.

3. Hilbert spaces and functional analysis (1900s-1920s). David Hilbert extended vector space theory to infinite dimensions with inner products, enabling rigorous foundations for quantum mechanics and integral equations. Banach, Fréchet, and Riesz developed norm theory, completing the modern framework.

4. Numerical linear algebra (1950s-1970s). With the advent of digital computers, numerical stability became critical. Householder (QR decomposition, 1958), Golub (SVD algorithm, 1965-1970), and Wilkinson (error analysis, 1960s-1980s) developed stable algorithms exploiting subspace orthogonality. These methods underpin modern least-squares solvers, eigensolvers, and PCA implementations.

5. Kernel methods and reproducing kernel Hilbert spaces (1990s-2000s). The kernel trick (Boser, Guyon, Vapnik, 1992; Schölkopf, Smola, 1998) showed that nonlinear problems become linear in high-dimensional (or infinite-dimensional) feature spaces. Support Vector Machines exploit subspace geometry (maximum margin hyperplanes) in these spaces.

6. Deep learning and representation learning (2010s-present). Neural networks learn hierarchical representations by composing linear maps (matrix multiplications) with nonlinearities. Each layer’s output spans a subspace; training adjusts these subspaces to separate classes or capture structure. Attention mechanisms (Vaswani et al., 2017) compute weighted sums (linear combinations) of value vectors, with outputs constrained to the span of the value subspace.

Definitions#

Vector space. A set $V$ over a field $\mathbb{F}$ (typically $\mathbb{R}$ or $\mathbb{C}$) with operations $+: V \times V \to V$ (addition) and $\cdot: \mathbb{F} \times V \to V$ (scalar multiplication) satisfying:

  1. Closure: $u + v \in V$ and $\alpha v \in V$ for all $u, v \in V$, $\alpha \in \mathbb{F}$.

  2. Associativity: $(u + v) + w = u + (v + w)$ and $\alpha(\beta v) = (\alpha\beta) v$.

  3. Commutativity: $u + v = v + u$.

  4. Identity: There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.

  5. Inverses: For each $v \in V$, there exists $-v \in V$ such that $v + (-v) = 0$.

  6. Distributivity: $\alpha(u + v) = \alpha u + \alpha v$ and $(\alpha + \beta)v = \alpha v + \beta v$.

  7. Scalar identity: $1 \cdot v = v$ for all $v \in V$.

Subspace. A subset $S \subseteq V$ is a subspace if:

  1. $0 \in S$ (contains the zero vector).

  2. $u + v \in S$ for all $u, v \in S$ (closed under addition).

  3. $\alpha u \in S$ for all $u \in S$, $\alpha \in \mathbb{F}$ (closed under scalar multiplication).

Equivalently, $S$ is a subspace if it is closed under linear combinations.

Span. The span of vectors $\{v_1, \ldots, v_k\} \subset V$ is: $$ \text{span}\{v_1, \ldots, v_k\} = \left\{ \sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{F} \right\} $$ This is the smallest subspace containing $\{v_1, \ldots, v_k\}$.

Column space and range. For a matrix $A \in \mathbb{R}^{m \times n}$, the column space is $\text{col}(A) = \{Ax : x \in \mathbb{R}^n\} = \text{span}\{a_1, \ldots, a_n\}$, where $a_i$ are the columns of $A$. This is also called the range or image of $A$.

Null space (kernel). The null space of $A \in \mathbb{R}^{m \times n}$ is $\text{null}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$, the set of vectors mapped to zero by $A$.
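
Both subspaces can be read off numerically from the SVD; a minimal sketch with a small rank-deficient matrix chosen for illustration:

import numpy as np

# A 3 x 3 matrix of rank 2 (third column = first column + second column)
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))               # numerical rank

col_basis = U[:, :r]                     # orthonormal basis for col(A)
null_basis = Vt[r:].T                    # orthonormal basis for null(A)

print("rank(A) =", r)
print("A @ null_basis ~ 0:", np.allclose(A @ null_basis, 0))
print("dim col(A) + dim null(A) =", col_basis.shape[1] + null_basis.shape[1], "(= n =", A.shape[1], ")")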

Essential vs Optional: Theoretical ML

Theoretical Machine Learning — Essential Foundations#

Theorems and formal guarantees:

  1. Rademacher complexity bounds. Generalization error depends on the complexity of the hypothesis class (function space). For linear models, the hypothesis space is finite-dimensional (span of features), enabling tight bounds. Key results:

    • Vapnik-Chervonenkis dimension for linear classifiers is $d+1$ (Vapnik & Chervonenkis, 1971).

    • Rademacher complexity of unit ball in $\mathbb{R}^d$ scales as $O(1/\sqrt{n})$ (Bartlett & Mendelson, 2002).

  2. Universal approximation. Existence of dense subspaces in function spaces:

    • Single hidden layer neural networks are dense in $C([0,1]^d)$ (Cybenko 1989).

    • Span of RBF kernels is dense in $L^2$ (Micchelli 1986).

    • Fourier series: span of $\{\sin(kx), \cos(kx)\}_{k=0}^\infty$ is dense in $L^2[0, 2\pi]$.

  3. Convex optimization. Gradient descent converges globally for convex functions over vector spaces (Nesterov 1983). Convergence rates depend on subspace properties (strong convexity, smoothness).

  4. Matrix concentration inequalities. Random matrix theory provides tail bounds for spectral norms, operator norms, and subspace angles (Tropp 2015). Used in randomized linear algebra (sketching, low-rank approximation).

Why essential: These theorems quantify when learning is possible, how many examples suffice, and when optimization succeeds. Vector space structure (dimension, subspaces, inner products) appears directly in the bounds.

Applied Machine Learning — Essential for Implementation#

Achievements and landmark systems:

  1. AlexNet (Krizhevsky et al., 2012). First deep convolutional network to win ImageNet (top-5 error of 15.3%, roughly 10.9 percentage points better than the runner-up). Demonstrated that compositional linear maps (convolutions as local weight-sharing matrices) with nonlinearities learn hierarchical representations.

    • Vector space insight: Each convolutional layer maps feature maps $X_l \in \mathbb{R}^{h \times w \times c_l}$ through linear filters $W_l$ to $X_{l+1}$. The number of output channels $c_{l+1}$ upper-bounds the rank of the effective weight matrix, and hence the dimension the layer's pre-activation outputs can span.

  2. Word2Vec (Mikolov et al., 2013). Learned dense word embeddings in $\mathbb{R}^{300}$ by predicting context words. Famous “king - man + woman = queen” demonstrated that semantic relationships are linear offsets in embedding space.

    • Subspace insight: Analogies correspond to parallel vectors in subspaces (gender direction, verb tense direction). Linear algebra operations (vector arithmetic) capture linguistic structure.

  3. ResNet (He et al., 2015). Introduced skip connections $y = F(x) + x$, enabling training of 152-layer networks (previous best: ~20 layers). Won ImageNet 2015 with 3.57% top-5 error.

    • Closure insight: Adding $x$ and $F(x)$ is a linear combination, guaranteed to stay in the same vector space. Residuals $F(x)$ span a learned subspace; identity shortcuts preserve gradients during backpropagation.

  4. Transformer (Vaswani et al., 2017). Replaced recurrence with attention, enabling parallelization and scaling to billions of parameters (GPT-3 has 175B).

    • Linear combination insight: Attention outputs are weighted sums $\sum_i \alpha_i V_i$, constrained to $\text{span}(V)$. Multi-head attention learns multiple subspaces in parallel.

  5. Diffusion Models (Ho et al., 2020; Rombach et al., 2022). DALL-E 2, Stable Diffusion generate images by iteratively denoising in latent space. Latent vectors $z \in \mathbb{R}^{d_{\text{latent}}}$ lie in an autoencoder’s learned subspace.

Why essential: These systems achieve state-of-the-art performance by exploiting vector space structure (linear combinations, subspaces, closure). Understanding span, null space, and projections is necessary to debug failures, interpret representations, and design architectures.

Key ideas: Where it shows up

1. Principal Component Analysis (PCA) — Subspace projections for dimensionality reduction#

Major achievements:

  • Hotelling (1933): Formalized PCA as finding orthogonal axes of maximum variance. Applied to psychology/economics data.

  • Pearson (1901): Introduced the concept of “lines of closest fit” (principal components) for reducing multidimensional data to low-dimensional representations.

  • Modern applications: Face recognition (eigenfaces, Turk & Pentland 1991), image compression (JPEG2000 uses SVD/PCA principles), preprocessing for neural networks (whitening, decorrelation), latent semantic analysis (LSA for text, Deerwester et al. 1990).

  • Computational impact: The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ of centered data is PSD, its eigenspaces are orthogonal subspaces, and projecting data onto the top-$k$ eigenvectors minimizes reconstruction error.

Connection to subspaces: PCA finds the $k$-dimensional subspace (span of top eigenvectors) that best approximates the data cloud. The residuals lie in the orthogonal complement (discarded eigenspaces).

2. Stochastic Gradient Descent (SGD) — Parameter updates as linear combinations#

Major achievements:

  • Robbins & Monro (1951): Proved convergence of stochastic approximation methods under diminishing step sizes.

  • Deep learning era (2012-present): SGD with minibatches is the dominant optimizer for neural networks. Variants (momentum, Adam, RMSprop) use weighted averages of gradients—linear combinations in parameter space.

  • Theoretical foundations: Gradient descent never leaves the parameter vector space $\mathbb{R}^d$ because updates $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ are linear combinations. Convergence analysis relies on inner products (gradient angles) and subspace projections (low-rank gradients, Hessian-free optimization).

Connection to vector spaces: The optimization trajectory $\{\theta_0, \theta_1, \theta_2, \ldots\}$ lies entirely within the parameter space by closure. Momentum methods average previous gradients (linear combinations with exponential decay weights). Coordinate descent restricts updates to axis-aligned subspaces.

3. Deep Neural Networks — Compositional linear maps between layer subspaces#

Major achievements:

  • Universal approximation (Cybenko 1989, Hornik 1991): Neural networks with one hidden layer can approximate continuous functions arbitrarily well. The span of hidden layer activations determines expressiveness.

  • ImageNet revolution (Krizhevsky, Sutskever, Hinton 2012): AlexNet demonstrated that deep networks learn hierarchical feature representations. Each layer maps inputs through a linear transformation (matrix multiplication) followed by nonlinearity.

  • Residual connections (He et al. 2015): ResNets add skip connections $y = f(x) + x$, keeping outputs in the span of inputs plus a learned residual subspace.

Connection to linear maps: Each layer $h_{l+1} = \sigma(W_l h_l + b_l)$ applies a linear map $W_l$ (matrix multiplication) followed by a nonlinearity $\sigma$. The intermediate representation $h_l$ lives in a vector space; the column space of $W_l$ determines which subspace $h_{l+1}$ (pre-activation) can span.

4. Kernel Methods — Implicit infinite-dimensional feature spaces#

Major achievements:

  • Support Vector Machines (Boser, Guyon, Vapnik 1992): Introduced the kernel trick for implicitly computing inner products in high-dimensional spaces without explicitly constructing features.

  • Reproducing Kernel Hilbert Spaces (Aronszajn 1950): Provided rigorous mathematical foundation. Kernels $k(x, x')$ correspond to inner products in a (possibly infinite-dimensional) feature space $\mathcal{H}$: $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$.

  • Modern applications: Gaussian processes (Rasmussen & Williams 2006), kernel PCA, kernel ridge regression, attention mechanisms (scaled dot-product is an inner product in value space).

Connection to vector spaces: The feature map $\phi: \mathcal{X} \to \mathcal{H}$ embeds inputs into a vector space (often infinite-dimensional). The kernel trick avoids explicit computation by working in the dual (span of training examples). Decision boundaries are hyperplanes in $\mathcal{H}$, corresponding to nonlinear boundaries in input space.

5. Transformer Attention — Weighted sums over value subspaces#

Major achievements:

  • Vaswani et al. (2017): “Attention is All You Need” introduced the Transformer architecture, replacing recurrence with self-attention. Enabled scaling to billion-parameter models (GPT-3, GPT-4, LLaMA).

  • Mechanism: Attention computes $\text{softmax}(QK^\top / \sqrt{d_k}) V$, where $Q, K, V$ are linear projections of inputs. The output is a linear combination of value vectors $V$, with weights from softmax-normalized inner products $QK^\top$.

  • Multi-head attention: Projects to multiple subspaces (heads), learns different span representations in parallel, concatenates results.

Connection to subspaces: Each head’s output lies in the span of its value matrix $V$. The attention weights $\alpha_i$ (softmax scores) determine the convex combination $\sum_{i=1}^n \alpha_i V_i$ (each row is a weighted sum of value vectors). The final representation is constrained to $\text{span}(\{V_1, \ldots, V_n\})$.

Notation

Standard Conventions#

1. Vectors and matrices.

  • Scalars: Lowercase Roman or Greek letters ($a, b, \alpha, \beta, \lambda$).

  • Vectors: Lowercase bold ($\mathbf{x}, \mathbf{w}$) or with explicit space annotation ($x \in \mathbb{R}^d$). Default: column vectors.

  • Matrices: Uppercase Roman letters ($A, X, W, \Sigma$). $A \in \mathbb{R}^{m \times n}$ has $m$ rows and $n$ columns.

  • Transpose: $A^\top$ (not $A^T$).

Examples:

  • MNIST images flattened to $x \in \mathbb{R}^{784}$ (28×28 pixels).

  • Dataset matrix $X \in \mathbb{R}^{n \times d}$ with $n$ examples (rows) and $d$ features (columns). Example: ImageNet batch $X \in \mathbb{R}^{256 \times 150528}$ (256 images, 224×224×3 pixels).

  • Weight matrix for a linear layer: $W \in \mathbb{R}^{d_{\text{out}} \times d_{\text{in}}}$ maps $\mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$ via $y = Wx$.

2. Norms and inner products.

  • Euclidean norm (L2 norm): $\|x\|_2 = \sqrt{x_1^2 + \cdots + x_d^2} = \sqrt{x^\top x}$.

  • L1 norm (sparsity-inducing): $\|x\|_1 = |x_1| + \cdots + |x_d|$ (used in Lasso regression).

  • Frobenius norm (matrix): $\|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2} = \sqrt{\text{trace}(A^\top A)}$.

  • Inner product (dot product): $\langle x, y \rangle = x^\top y = \sum_{i=1}^d x_i y_i$.

Examples:

  • Regularization: Ridge regression minimizes $\|Xw - y\|_2^2 + \lambda \|w\|_2^2$ (L2 penalty).

  • Lasso regression: $\|Xw - y\|_2^2 + \lambda \|w\|_1$ (L1 penalty encourages sparse $w$).

  • Gradient magnitude: $\|\nabla \mathcal{L}(\theta)\|_2$ measures steepness of loss surface.

3. Subspaces and projections.

  • Column space: $\text{col}(A)$ or $\text{range}(A)$ or $\mathcal{R}(A)$.

  • Null space (kernel): $\text{null}(A)$ or $\ker(A)$ or $\mathcal{N}(A)$.

  • Orthogonal complement: $S^\perp = \{v \in V : \langle v, s \rangle = 0 \text{ for all } s \in S\}$.

  • Span: $\text{span}\{v_1, \ldots, v_k\}$ = all linear combinations $\sum_{i=1}^k \alpha_i v_i$.

Examples:

  • Least squares: predictions $\hat{y} = Xw$ lie in $\text{col}(X) \subseteq \mathbb{R}^n$. Residuals $r = y - \hat{y}$ lie in $\text{col}(X)^\perp$.

  • PCA: data projected onto $\text{span}\{u_1, \ldots, u_k\}$ where $u_i$ are top eigenvectors of covariance matrix.

  • Underdetermined systems: $Xw = y$ has infinitely many solutions in $w_0 + \text{null}(X)$ (affine subspace).

4. Special matrices and decompositions.

  • Identity matrix: $I$ (or $I_n$ for $n \times n$). Satisfies $Ix = x$ for all $x$.

  • Zero vector: $0$ (or $\mathbf{0}$). Satisfies $v + 0 = v$ for all $v$.

  • Eigenvalues/eigenvectors: $Ax = \lambda x$ with $x \neq 0$. Eigenvalue $\lambda \in \mathbb{R}$ (or $\mathbb{C}$), eigenvector $x \in \mathbb{R}^d$.

  • Singular value decomposition: $X = U \Sigma V^\top$ with $U \in \mathbb{R}^{n \times n}$ (left singular vectors), $\Sigma \in \mathbb{R}^{n \times d}$ (diagonal singular values $\sigma_i \geq 0$), $V \in \mathbb{R}^{d \times d}$ (right singular vectors).

Examples:

  • Covariance matrix: $C = \frac{1}{n} X^\top X$ is PSD, has eigenpairs $(\lambda_i, u_i)$ with $\lambda_i \geq 0$.

  • SVD truncation: $X \approx U_k \Sigma_k V_k^\top$ (rank-$k$ approximation minimizing $\|X - \hat{X}\|_F$).

  • Condition number: $\kappa(X) = \sigma_{\max} / \sigma_{\min}$ measures numerical stability (large $\kappa$ → ill-conditioned).

5. Index conventions.

  • Matrix indexing: $A_{ij}$ = element in row $i$, column $j$. Python uses 0-indexing; math uses 1-indexing.

  • Vector indexing: $x_i$ = $i$-th element of $x$. In Python: x[i] (0-based).

  • Colon notation: $A_{:,j}$ = $j$-th column of $A$. $A_{i,:}$ = $i$-th row. Ranges: $A_{1:k, :}$ = first $k$ rows.

Examples:

  • Feature $j$ across all examples: $X_{:,j} \in \mathbb{R}^n$ (column vector).

  • Example $i$ features: $X_{i,:} \in \mathbb{R}^{1 \times d}$ (row vector).

  • Top-$k$ singular vectors: $U_{:, 1:k} \in \mathbb{R}^{n \times k}$ (first $k$ columns of $U$).
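
In NumPy these conventions map directly to slicing; a minimal sketch (the array values are arbitrary):

import numpy as np

X = np.arange(12.).reshape(4, 3)    # n = 4 examples, d = 3 features

col_j = X[:, 1]     # feature 1 across all examples, shape (4,)
row_i = X[2, :]     # features of example 2, shape (3,)
top_k = X[:, :2]    # first 2 columns, shape (4, 2)

print("X[:, 1] =", col_j)
print("X[2, :] =", row_i)
print("X[:, :2] shape:", top_k.shape)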

Pitfalls & sanity checks

Common Mistakes#

1. Confusing affine and linear maps.

  • Error: Calling $f(x) = Wx + b$ a “linear” function.

  • Correction: It’s affine (not linear) if $b \neq 0$. Linear maps satisfy $f(0) = 0$; affine maps don’t.

  • Why it matters: Composition of affine maps is affine (not linear unless biases cancel). Regularization treats $W$ and $b$ differently.

2. Forgetting to center data for PCA.

  • Error: Computing eigenvalues of $X^\top X$ without centering $X$.

  • Correction: First compute $X_c = X - \frac{1}{n} \mathbf{1}\mathbf{1}^\top X$ (subtract column means), then use $X_c^\top X_c$.

  • Why it matters: Without centering, the first principal component points toward the mean (captures location, not variance).

3. Assuming rank(X) = d by default.

  • Error: Solving $X^\top X w = X^\top y$ without checking if $X^\top X$ is invertible.

  • Correction: Check $\text{rank}(X)$ with np.linalg.matrix_rank(X). If $\text{rank}(X) < d$, use regularization (ridge regression) or pseudoinverse.

  • Why it matters: Singular $X^\top X$ causes LinAlgError or numerical instability (condition number $\kappa \to \infty$).

4. Confusing column space and row space.

  • Error: Saying “predictions $Xw$ lie in the span of rows of $X$.”

  • Correction: $Xw$ lies in the span of columns of $X$ (column space). Row space is the span of rows (equivalently, column space of $X^\top$).

  • Why it matters: For $X \in \mathbb{R}^{n \times d}$, column space is in $\mathbb{R}^n$ (prediction space), row space is in $\mathbb{R}^d$ (feature space).

5. Ignoring numerical stability.

  • Error: Computing $(X^\top X)^{-1} X^\top y$ explicitly (normal equations).

  • Correction: Use np.linalg.lstsq(X, y) (QR or SVD internally) or scipy.linalg.solve(X.T @ X, X.T @ y, assume_a='pos') (Cholesky).

  • Why it matters: Explicitly forming $X^\top X$ squares the condition number ($\kappa(X^\top X) = \kappa(X)^2$), amplifying errors.

Essential Sanity Checks#

Always verify shapes:

  • After matrix multiply $C = AB$, check C.shape == (A.shape[0], B.shape[1]).

  • For batch processing, ensure leading dimensions match (e.g., $X \in \mathbb{R}^{B \times d}$, $W \in \mathbb{R}^{d \times m}$ gives $XW \in \mathbb{R}^{B \times m}$).

Check rank before solving:

rank = np.linalg.matrix_rank(X)
if rank < X.shape[1]:
    print(f"Warning: X is rank-deficient ({rank} < {X.shape[1]}). Use regularization.")

Verify projections are idempotent: For projection matrix $P$, check $P^2 = P$ and $P^\top = P$ (orthogonal projection).

assert np.allclose(P @ P, P), "Projection not idempotent"
assert np.allclose(P.T, P), "Projection not symmetric"

Test centering explicitly: After centering $X_c = X - \text{mean}(X)$, verify column means are zero:

assert np.allclose(X_c.mean(axis=0), 0), "Data not centered"

Condition number monitoring: For ill-conditioned systems, check $\kappa(X) = \sigma_{\max}(X) / \sigma_{\min}(X)$:

cond = np.linalg.cond(X)
if cond > 1e10:
    print(f"Warning: X is ill-conditioned (κ = {cond:.2e}). Results may be numerically unstable.")

Debugging Checklist#

  • Shapes mismatch? Print X.shape, w.shape before every matrix operation.

  • Unexpected zeros? Check for rank deficiency (np.linalg.matrix_rank(X)).

  • Large errors? Compute residuals $\|Xw - y\|_2$, check if $y \in \text{col}(X)$.

  • Numerical issues? Switch to stable solvers (np.linalg.lstsq, QR, SVD instead of normal equations).

  • Non-converging optimization? Verify gradients $\nabla \mathcal{L}(\theta)$ stay in parameter space (closure), check learning rate.

References

Foundational Texts#

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press.

    • Chapters 1-4: Vector spaces, subspaces, orthogonality, least squares.

    • Emphasizes geometric intuition and computational methods.

    • Companion video lectures: MIT OpenCourseWare 18.06.

  2. Axler, S. (2015). Linear Algebra Done Right (3rd ed.). Springer.

    • Rigorous, abstract treatment (avoids determinants until late).

    • Focuses on vector spaces, linear maps, eigenvalues.

    • Best for theoretical foundations.

  3. Horn, R. A., & Johnson, C. R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press.

    • Comprehensive reference for matrix theory.

    • Covers norms, singular values, matrix decompositions, perturbation theory.

    • Graduate-level depth.

Machine Learning Perspectives#

  1. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

    • Chapter 2: Linear Algebra (vectors, matrices, norms, eigendecomposition, SVD).

    • Chapter 6: Feedforward Networks (linear layers, activation functions).

    • Free online: deeplearningbook.org

  2. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer.

    • Chapter 3: Linear Methods for Regression (least squares, ridge, lasso, PCA).

    • Chapter 4: Linear Methods for Classification (LDA, logistic regression).

    • Emphasizes statistical perspective (bias-variance, model selection).

  3. Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.

    • Chapter 7: Linear Algebra (subspaces, rank, matrix calculus).

    • Chapter 11: Linear Regression (Bayesian, regularization).

    • Modern treatment with probabilistic framing.

Historical Papers#

  1. Pearson, K. (1901). “On Lines and Planes of Closest Fit to Systems of Points in Space.” Philosophical Magazine, 2(11), 559–572.

    • Introduced principal components (PCA).

  2. Hotelling, H. (1933). “Analysis of a Complex of Statistical Variables into Principal Components.” Journal of Educational Psychology, 24(6), 417–441.

    • Formalized PCA with covariance matrices.

  3. Eckart, C., & Young, G. (1936). “The Approximation of One Matrix by Another of Lower Rank.” Psychometrika, 1(3), 211–218.

    • Proved SVD gives optimal low-rank approximation.

Modern Machine Learning#

  1. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). “Distributed Representations of Words and Phrases and their Compositionality.” NeurIPS 2013.

    • Word2Vec embeddings; demonstrated linear structure (analogies).

  2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). “Attention is All You Need.” NeurIPS 2017.

    • Transformer architecture; attention as weighted sums (linear combinations).

  3. He, K., Zhang, X., Ren, S., & Sun, J. (2015). “Deep Residual Learning for Image Recognition.” CVPR 2016.

    • ResNets with skip connections ($y = F(x) + x$, closure in vector space).

  4. Ioffe, S., & Szegedy, C. (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” ICML 2015.

    • Batch norm (centering + scaling activations).

Numerical Linear Algebra#

  1. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press.

    • Authoritative reference for numerical algorithms (QR, SVD, eigensolvers).

    • Emphasizes stability, conditioning, complexity.

  2. Trefethen, L. N., & Bau, D. (1997). Numerical Linear Algebra. SIAM.

    • Concise treatment of QR, SVD, least squares, eigenvalue algorithms.

    • Focus on geometric intuition and practical computation.

Online Resources#

  1. 3Blue1Brown (Grant Sanderson). Essence of Linear Algebra (video series).

  2. Gilbert Strang. MIT OpenCourseWare 18.06: Linear Algebra (video lectures).

  3. The Matrix Cookbook (Petersen & Pedersen, 2012).

Five worked examples

Worked Example 1: Embedding interpolation is still a vector#

Introduction#

Token embeddings in NLP models (Word2Vec, GloVe, BERT, GPT) map discrete tokens to continuous vectors in $\mathbb{R}^d$. A fundamental property: any linear combination of embeddings remains a valid embedding (closure under vector space operations). This enables semantic arithmetic (“king” - “man” + “woman” ≈ “queen”), interpolation between concepts, and averaging embeddings for sentences or documents.

This example demonstrates that embedding spaces are vector spaces by explicitly computing an interpolation (convex combination) of two token embeddings. The result stays in $\mathbb{R}^d$, illustrating closure under linear combinations.

Purpose#

  • Verify closure: Show that $\alpha e(a) + (1-\alpha) e(b) \in \mathbb{R}^d$ for any embeddings $e(a), e(b)$ and scalar $\alpha \in [0,1]$.

  • Introduce convex combinations: Interpolation with $\alpha \in [0,1]$ produces points on the line segment between $e(a)$ and $e(b)$.

  • Connect to ML: Embedding arithmetic is used in analogy tasks, compositional semantics, and prompt engineering (e.g., blending concepts for image generation).

Importance#

Semantic compositionality. The vector space structure of embeddings enables composing meanings via linear algebra. Famous examples:

  • Word2Vec analogies (Mikolov et al., 2013): $v_{\text{king}} - v_{\text{man}} + v_{\text{woman}} \approx v_{\text{queen}}$ achieves ~40% accuracy on analogy tasks.

  • Sentence embeddings: Average token embeddings $\bar{v} = \frac{1}{n} \sum_{i=1}^n e(t_i)$ (simple but effective baseline for sentence similarity).

  • Image-text embeddings (CLIP, 2021): Contrastive learning aligns image and text embeddings in a shared vector space. Interpolations blend visual/textual concepts.

Training stability. Gradient descent updates embeddings via $e(t) \leftarrow e(t) - \eta \nabla \mathcal{L}$. Closure ensures embeddings never “leave” $\mathbb{R}^d$ during training.

What This Example Demonstrates#

This example shows that embedding spaces are closed under linear combinations, a necessary condition for being a vector space. Interpolation $v = \alpha e(a) + (1-\alpha)e(b)$ produces a point between $e(a)$ and $e(b)$, illustrating that we can “blend” semantic meanings by taking weighted averages.

The geometric interpretation: $e(a)$ and $e(b)$ define a line in $\mathbb{R}^d$; all convex combinations lie on the line segment $[e(a), e(b)]$. This extends to arbitrary linear combinations (not just convex), forming the span $\{\alpha e(a) + \beta e(b) : \alpha, \beta \in \mathbb{R}\}$ (a 2D subspace if $e(a)$ and $e(b)$ are linearly independent).

Background#

Distributional semantics. The idea that “words are characterized by the company they keep” (Firth, 1957) led to vector space models in NLP. Early methods (latent semantic analysis, 1990; HAL, 1997) used co-occurrence matrices. Modern neural embeddings (Word2Vec, 2013; GloVe, 2014) learn dense representations by predicting context words.

Vector space models in NLP:

  • Bag-of-words: Represent documents as sparse vectors in $\mathbb{R}^{|\text{vocab}|}$ (counts or TF-IDF weights).

  • Word embeddings: Learn dense vectors $e(w) \in \mathbb{R}^d$ ($d \approx 50$-$1000$) capturing semantic similarity. Similar words have nearby vectors (measured by cosine similarity or Euclidean distance).

  • Contextual embeddings (BERT, GPT): Embeddings depend on context; $e(w | \text{context})$ varies across sentences. Still vectors in $\mathbb{R}^d$ at each layer.

Closure and linearity: The vector space axioms (closure, distributivity) are assumed in embedding models but rarely verified explicitly. This example makes closure concrete: interpolation $\alpha e(a) + (1-\alpha)e(b)$ stays in $\mathbb{R}^d$ because $\mathbb{R}^d$ is a vector space.

Historical Context#

1. Distributional hypothesis (1950s-1960s). Harris (1954) and Firth (1957) proposed that word meaning is determined by distribution (co-occurrence patterns). This motivated vector representations based on context counts.

2. Latent Semantic Analysis (Deerwester et al., 1990). Applied SVD to term-document matrices, projecting words/documents into low-dimensional subspaces. Demonstrated that dimensionality reduction (via truncated SVD) preserves semantic relationships.

3. Word2Vec (Mikolov et al., 2013). Introduced skip-gram and CBOW models, training shallow neural networks to predict context words. Showed that embeddings exhibit linear structure: analogies correspond to parallel vectors ($v_{\text{king}} - v_{\text{man}} + v_{\text{woman}} \approx v_{\text{queen}}$).

4. GloVe (Pennington et al., 2014). Combined global co-occurrence statistics with local context prediction, achieving state-of-the-art performance on analogy and similarity tasks.

5. Contextual embeddings (2018-present). BERT (Devlin et al., 2018) and GPT (Radford et al., 2018) compute embeddings that vary by context, using Transformer architectures. Embeddings at each layer are still vectors in $\mathbb{R}^d$, but $e(w)$ depends on the entire input sequence.

History in Machine Learning#

  • 1990: LSA applies SVD to term-document matrices (vector space models).

  • 2013: Word2Vec popularizes dense embeddings; analogy tasks demonstrate linear structure.

  • 2014: GloVe combines global statistics with neural methods.

  • 2017: Transformers (Vaswani et al.) enable contextualized embeddings via attention.

  • 2018: BERT and GPT revolutionize NLP by learning contextual representations at scale.

  • 2021: CLIP (Radford et al.) aligns image and text embeddings in a shared vector space, enabling zero-shot image classification and text-to-image generation.

Prevalence in Machine Learning#

Ubiquitous in NLP: Every modern NLP model (BERT, GPT, T5, LLaMA) uses token embeddings in $\mathbb{R}^d$ ($d = 768$ for BERT-base, $d = 12288$ for GPT-3 175B). Embeddings are the primary representation for text.

Vision and multimodal models:

  • Vision Transformers (ViT, 2020): Patch embeddings in $\mathbb{R}^d$ replace pixel representations.

  • CLIP (2021): Image and text embeddings in a shared $\mathbb{R}^{512}$ space enable cross-modal retrieval.

  • DALL-E, Stable Diffusion (2021-2022): Text embeddings condition diffusion models for image generation.

Recommendation systems: Item embeddings in $\mathbb{R}^d$ capture user preferences. Collaborative filtering factorizes user-item matrices into embeddings.

Notes and Explanatory Details#

Shape discipline: $e(a) \in \mathbb{R}^d$, $e(b) \in \mathbb{R}^d$, $\alpha \in \mathbb{R}$. The interpolation $v = \alpha e(a) + (1-\alpha)e(b)$ is a linear combination, so $v \in \mathbb{R}^d$ by closure.

Convex combinations: Restricting $\alpha \in [0,1]$ ensures $v$ lies on the line segment $[e(a), e(b)]$. Allowing $\alpha \in \mathbb{R}$ gives the entire line through $e(a)$ and $e(b)$ (the span).

Geometric interpretation: In 3D, if $e(a) = [1, 0, 2]$ and $e(b) = [-1, 3, 0]$, then $v = 0.3 e(a) + 0.7 e(b)$ lies 30% of the way from $e(b)$ to $e(a)$.

Numerical considerations: Embedding norms vary (typical $\|e(w)\|_2 \approx 1$-$10$ depending on initialization). Normalization (dividing by $\|e(w)\|_2$) is common for cosine similarity metrics.

Connection to Machine Learning#

Analogy tasks. Linear offsets capture semantic relationships: $v_{\text{France}} - v_{\text{Paris}} \approx v_{\text{Germany}} - v_{\text{Berlin}}$ (capital relationship). The vector $v_{\text{France}} - v_{\text{Paris}}$ represents the “capital-of” direction in embedding space.

Prompt interpolation. In text-to-image models, interpolating prompt embeddings generates images blending two concepts. Example: $\alpha e(\text{"dog"}) + (1-\alpha)e(\text{"cat"})$ with $\alpha = 0.5$ might generate a hybrid “doge” image.

Sentence embeddings. Averaging token embeddings $\bar{v} = \frac{1}{n} \sum_{i=1}^n e(t_i)$ is a simple but effective sentence representation (used in Skip-Thought, InferSent). More sophisticated: weighted averages (TF-IDF weights) or learned aggregations (attention).

Connection to Linear Algebra Theory#

Vector space axioms. $\mathbb{R}^d$ satisfies all vector space axioms:

  1. Closure: $e(a) + e(b) \in \mathbb{R}^d$ and $\alpha e(a) \in \mathbb{R}^d$.

  2. Associativity: $(e(a) + e(b)) + e(c) = e(a) + (e(b) + e(c))$.

  3. Commutativity: $e(a) + e(b) = e(b) + e(a)$.

  4. Identity: $e(a) + 0 = e(a)$ where $0 = [0, \ldots, 0] \in \mathbb{R}^d$.

  5. Inverses: $e(a) + (-e(a)) = 0$.

  6. Scalar distributivity: $\alpha(e(a) + e(b)) = \alpha e(a) + \alpha e(b)$.

Subspaces. The span $\text{span}\{e(t_1), \ldots, e(t_k)\}$ is a subspace of $\mathbb{R}^d$. If the embedding matrix has rank $k < d$, then every embedding lies in a $k$-dimensional subspace of $\mathbb{R}^d$ (a low-rank embedding matrix).

Convex combinations. Combinations $\sum_{i=1}^k \alpha_i e(t_i)$ with $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$ form the convex hull of the embeddings (a polytope in $\mathbb{R}^d$). Sentence embeddings obtained by averaging lie in this convex hull.

Pedagogical Significance#

Concrete verification of closure. Many students learn vector space axioms abstractly but rarely see explicit numerical verification. This example shows that $\alpha e(a) + (1-\alpha)e(b) \in \mathbb{R}^d$ by computing actual numbers.

Geometric intuition. Interpolation visualizes the line segment between two points in $\mathbb{R}^d$. Extending to $\alpha \notin [0,1]$ shows extrapolation (moving beyond $e(a)$ or $e(b)$ along the line).

Foundation for advanced topics. Understanding embedding spaces as vector spaces is prerequisite for:

  • Analogies: Vector arithmetic $e(a) - e(b) + e(c)$ requires closure.

  • Dimensionality reduction: Projecting embeddings to lower-dimensional subspaces (PCA, t-SNE).

  • Alignment: Mapping embeddings between languages (Procrustes alignment, learned transforms).

References#

  1. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). “Distributed Representations of Words and Phrases and their Compositionality.” NeurIPS 2013. Introduced Word2Vec (skip-gram, CBOW); demonstrated analogy tasks.

  2. Pennington, J., Socher, R., & Manning, C. D. (2014). “GloVe: Global Vectors for Word Representation.” EMNLP 2014. Combined global co-occurrence statistics with local context.

  3. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL 2019. Contextual embeddings via masked language modeling.

  4. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., … & Sutskever, I. (2021). “Learning Transferable Visual Models From Natural Language Supervision.” ICML 2021. CLIP aligns image/text embeddings in shared space.

  5. Firth, J. R. (1957). “A Synopsis of Linguistic Theory, 1930-1955.” Studies in Linguistic Analysis. Introduced distributional hypothesis: “You shall know a word by the company it keeps.”

Problem. Show token embeddings live in a vector space and compute an interpolation.

Solution (math).

Given embeddings $e(a), e(b) \in \mathbb{R}^d$ and $\alpha \in [0,1]$, the interpolation is: $$ v = \alpha e(a) + (1-\alpha)e(b) $$

By closure of $\mathbb{R}^d$ under linear combinations, $v \in \mathbb{R}^d$. For $\alpha = 0$, $v = e(b)$; for $\alpha = 1$, $v = e(a)$; for $\alpha = 0.5$, $v$ is the midpoint.

Solution (Python).

import numpy as np

# Define embeddings for tokens 'a' and 'b' in R^3
E = {
    'a': np.array([1., 0., 2.]),
    'b': np.array([-1., 3., 0.])
}

# Interpolation parameter (0 <= alpha <= 1)
alpha = 0.3

# Compute convex combination
v = alpha * E['a'] + (1 - alpha) * E['b']

print(f"e(a) = {E['a']}")
print(f"e(b) = {E['b']}")
print(f"v = {alpha} * e(a) + {1-alpha} * e(b) = {v}")
print(f"v is in R^3: {v.shape == (3,)}")

Output:

e(a) = [1. 0. 2.]
e(b) = [-1.  3.  0.]
v = 0.3 * e(a) + 0.7 * e(b) = [-0.4  2.1  0.6]
v is in R^3: True

Worked Example 2: Zero-mean subspace projection#

Introduction#

Centering data (subtracting the mean) is a ubiquitous preprocessing step in machine learning. PCA, covariance estimation, batch normalization, and many other algorithms assume zero-mean data. Mathematically, zero-mean vectors form a subspace: $S = \{x \in \mathbb{R}^n : \mathbf{1}^\top x = 0\}$ where $\mathbf{1} = [1, \ldots, 1]^\top$ is the all-ones vector.

This example shows that $S$ is the null space of the row vector $\mathbf{1}^\top$, demonstrates projection onto $S$ via the centering matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$, and verifies that the projected vector has zero mean.

Purpose#

  • Demonstrate subspaces defined by constraints: $S = \{x : \mathbf{1}^\top x = 0\}$ is a hyperplane through the origin ($(n-1)$-dimensional subspace).

  • Introduce projection matrices: $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ projects onto $S$ (removes the mean).

  • Connect to ML: Centering data is equivalent to projecting onto the zero-mean subspace.

Importance#

PCA and covariance estimation. PCA operates on the centered data matrix $X_c = X - \frac{1}{n} \mathbf{1}\mathbf{1}^\top X$ (each column has zero mean). The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ measures variance around the mean; if data is not centered, $C$ mixes mean and variance.

Batch normalization (Ioffe & Szegedy, 2015). Normalizes layer activations by subtracting batch mean and dividing by batch std. The mean-centering step is projection onto the zero-mean subspace.

Regularization and identifiability. In linear regression with an intercept $f(x) = w^\top x + b$, centering inputs $x \mapsto x - \bar{x}$ and targets $y \mapsto y - \bar{y}$ decouples the intercept from the weights, improving numerical stability and interpretability.

What This Example Demonstrates#

This example shows that:

  1. Zero-mean vectors form a subspace (closed under addition/scaling, contains zero).

  2. Projection onto $S$ is linear: $x_{\text{proj}} = Px$ with $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$.

  3. The projection removes the mean: $\mathbf{1}^\top (Px) = 0$ for all $x$.

  4. $P$ is idempotent: $P^2 = P$ (projecting twice is the same as projecting once).

  5. $P$ is symmetric: $P^\top = P$ (orthogonal projection).

Background#

Affine subspaces vs. linear subspaces. An affine subspace (hyperplane) has the form $\{x : a^\top x = c\}$ for $c \neq 0$. This is not a subspace unless $c = 0$ (does not contain the origin). Zero-mean vectors form a linear subspace because $\mathbf{1}^\top 0 = 0$.

Centering matrix. The matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ is called the centering matrix or projection onto zero-mean subspace. It satisfies:

  • $P\mathbf{1} = 0$ (projects all-ones vector to zero).

  • $Px = x - \bar{x} \mathbf{1}$ where $\bar{x} = \frac{1}{n} \mathbf{1}^\top x$ (subtracts the mean).

  • $P^2 = P$ (idempotent: projecting twice does nothing).

  • $P^\top = P$ (symmetric: orthogonal projection).

Null space and column space. $S = \text{null}(\mathbf{1}^\top)$ is the set of vectors perpendicular to $\mathbf{1}$. The orthogonal complement is $S^\perp = \text{span}\{\mathbf{1}\}$ (scalar multiples of $\mathbf{1}$). By the fundamental theorem of linear algebra, $\mathbb{R}^n = S \oplus S^\perp$ (direct sum).

Historical Context#

1. Least squares and centering (Gauss, 1809). Gauss used mean-centering in least squares for astronomical orbit fitting (method of least squares, published in Theoria Motus).

2. PCA (Pearson 1901, Hotelling 1933). Both Pearson’s “lines of closest fit” and Hotelling’s “principal components” assume centered data. The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ is undefined without centering (would conflate mean and variance).

3. Projection matrices (Penrose 1955, Rao 1955). The theory of orthogonal projections was formalized in the 1950s. The centering matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ is a rank-$(n-1)$ projection matrix.

4. Batch normalization (Ioffe & Szegedy, 2015). Revolutionized deep learning by normalizing layer activations. The first step is centering: $\hat{x}_i = x_i - \frac{1}{B} \sum_{j=1}^B x_j$ (subtract the batch mean).

History in Machine Learning#

  • 1809: Gauss applies least squares with mean-centering (astronomical data).

  • 1901: Pearson introduces PCA (assumes centered data).

  • 1933: Hotelling formalizes PCA (covariance matrix requires centering).

  • 1955: Penrose and Rao develop theory of projection matrices.

  • 2015: Batch normalization (Ioffe & Szegedy) makes centering a learned layer operation.

  • 2016: Layer normalization (Ba et al.) centers across features instead of batches.

  • 2018: Group normalization (Wu & He) centers within feature groups (used in computer vision).

Prevalence in Machine Learning#

Preprocessing: Nearly all classical ML algorithms (PCA, LDA, SVM, ridge regression) assume centered data. Scikit-learn’s StandardScaler first centers ($x \mapsto x - \bar{x}$) then scales ($x \mapsto x / \sigma$).

Deep learning normalization:

  • Batch norm: Centers and scales mini-batch statistics.

  • Layer norm: Centers and scales across feature dimension (used in Transformers).

  • Instance norm: Centers each example independently (style transfer, GANs).

Optimization: The Adam optimizer maintains exponential moving averages of gradients and squared gradients; the first moment $m_t$ is a smoothed (running-mean) estimate of the gradient rather than a strictly centered one.

Notes and Explanatory Details#

Shape discipline:

  • Input: $x \in \mathbb{R}^n$ (column vector).

  • All-ones vector: $\mathbf{1} \in \mathbb{R}^n$ (column vector).

  • Mean: $\bar{x} = \frac{1}{n} \mathbf{1}^\top x \in \mathbb{R}$ (scalar).

  • Centering matrix: $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \in \mathbb{R}^{n \times n}$.

  • Projected vector: $x_{\text{proj}} = Px \in \mathbb{R}^n$.

Verification of projection properties:

  1. Projects to zero-mean subspace: $\mathbf{1}^\top (Px) = \mathbf{1}^\top \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right) x = \mathbf{1}^\top x - \frac{1}{n} (\mathbf{1}^\top \mathbf{1})(\mathbf{1}^\top x) = \mathbf{1}^\top x - \mathbf{1}^\top x = 0$.

  2. Idempotent: $P^2 = \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right)^2 = I - \frac{2}{n} \mathbf{1}\mathbf{1}^\top + \frac{1}{n^2} \mathbf{1}(\mathbf{1}^\top \mathbf{1})\mathbf{1}^\top = I - \frac{2}{n} \mathbf{1}\mathbf{1}^\top + \frac{1}{n} \mathbf{1}\mathbf{1}^\top = P$.

  3. Symmetric: $P^\top = \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right)^\top = I - \frac{1}{n} (\mathbf{1}\mathbf{1}^\top)^\top = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top = P$.

Numerical considerations: Computing $P$ explicitly (storing $n \times n$ matrix) is wasteful for large $n$. Instead, compute $Px = x - \bar{x} \mathbf{1}$ directly (subtracting the mean).
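
As a sketch of this point (random data, arbitrary size), the explicit-matrix and mean-subtraction routes give the same result, but the latter needs no $n \times n$ matrix:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)

# O(n^2) memory and time: form the centering matrix explicitly
P = np.eye(n) - np.ones((n, n)) / n
x_proj_matrix = P @ x

# O(n): subtract the mean directly
x_proj_direct = x - x.mean()

print(np.allclose(x_proj_matrix, x_proj_direct))   # True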

Connection to Machine Learning#

PCA. The centered data matrix $X_c = PX$ (apply $P$ to each column) ensures principal components capture variance, not mean. Without centering, $\frac{1}{n} X^\top X = C + \bar{x}\bar{x}^\top$ (where $\bar{x}$ is the vector of feature means), so the uncentered second-moment matrix mixes the mean into the covariance $C = \frac{1}{n} X_c^\top X_c$.

Batch normalization. For a mini-batch $\{x_1, \ldots, x_B\}$, batch norm computes: $$ \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \quad \mu_B = \frac{1}{B} \sum_{i=1}^B x_i, \quad \sigma_B^2 = \frac{1}{B} \sum_{i=1}^B (x_i - \mu_B)^2 $$ The centering step $x_i - \mu_B$ is projection onto the zero-mean subspace.
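
A minimal sketch of the per-unit computation (batch values and eps are arbitrary) shows that the centering step indeed produces zero-mean activations:

import numpy as np

x = np.array([0.5, 2.0, -1.0, 1.5])   # one unit's activations over a batch of B = 4
eps = 1e-5

mu = x.mean()
var = x.var()                          # population variance, matching the formula above
x_hat = (x - mu) / np.sqrt(var + eps)

print(np.isclose(x_hat.mean(), 0.0, atol=1e-7))   # centered: batch mean is ~0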

Residuals in regression. In ordinary least squares, residuals $r = y - \hat{y} = (I - X(X^\top X)^{-1} X^\top)y$ are projections onto the orthogonal complement of $\text{col}(X)$. If $X$ includes a column of ones (intercept), residuals automatically have zero mean.

Connection to Linear Algebra Theory#

Projection theorem. Every vector $x \in \mathbb{R}^n$ can be uniquely decomposed as $x = x_\parallel + x_\perp$ where $x_\parallel \in S$ and $x_\perp \in S^\perp$. For $S = \{x : \mathbf{1}^\top x = 0\}$, we have: $$ x_\parallel = Px = x - \bar{x} \mathbf{1}, \quad x_\perp = (I-P)x = \bar{x} \mathbf{1} $$
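
A small numerical illustration of this decomposition (the vector is arbitrary):

import numpy as np

x = np.array([4., -1., 3., 2.])
n = x.size

x_par = x - x.mean()                  # Px: the zero-mean component (in S)
x_perp = x.mean() * np.ones(n)        # (I - P)x: the constant component (in S-perp)

print(np.allclose(x_par + x_perp, x))    # x = x_par + x_perp
print(np.isclose(x_par @ x_perp, 0.0))   # the two components are orthogonal
print(np.isclose(x_par.sum(), 0.0))      # x_par satisfies 1^T x_par = 0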

Rank of projection matrix. $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ has rank $n-1$ because:

  • $\text{null}(P) = \text{span}\{\mathbf{1}\}$ (1D subspace).

  • By rank-nullity theorem, $\text{rank}(P) + \dim(\text{null}(P)) = n$, so $\text{rank}(P) = n-1$.

Eigenvalues of $P$. $P$ has eigenvalues $\lambda = 1$ (multiplicity $n-1$) and $\lambda = 0$ (multiplicity 1):

  • Eigenvectors with $\lambda = 1$: any $v \perp \mathbf{1}$ (orthogonal to all-ones).

  • Eigenvector with $\lambda = 0$: $v = \mathbf{1}$.

This confirms $P$ is a projection: eigenvalues are 0 or 1, characteristic of projection matrices.
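
The spectrum can be checked directly (sketch; $n = 5$ is an arbitrary choice):

import numpy as np

n = 5
one = np.ones((n, 1))
P = np.eye(n) - (one @ one.T) / n

eigvals = np.linalg.eigvalsh(P)        # P is symmetric, so eigvalsh is appropriate
print(np.round(eigvals, 10))           # one eigenvalue 0, the remaining n-1 equal 1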

Relation to covariance. The sample covariance matrix is: $$ C = \frac{1}{n} X_c^\top X_c = \frac{1}{n} (PX)^\top (PX) = \frac{1}{n} X^\top P^\top P X = \frac{1}{n} X^\top P X $$ using $P^\top = P$ and $P^2 = P$.
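
A quick numerical check of this identity (random data matrix, arbitrary sizes; rows are examples):

import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3
X = rng.normal(size=(n, d))

one = np.ones((n, 1))
P = np.eye(n) - (one @ one.T) / n

X_c = X - X.mean(axis=0)                  # centered columns, i.e. PX
C_centered = X_c.T @ X_c / n
C_via_P = X.T @ P @ X / n

print(np.allclose(C_centered, C_via_P))   # True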

Pedagogical Significance#

Concrete example of a subspace. Students often learn subspaces abstractly (“closed under addition and scaling”). This example gives a geometric and algebraic definition: $S = \{x : \mathbf{1}^\top x = 0\}$ (algebraic) is an $(n-1)$-dimensional hyperplane through the origin (geometric).

Projection as matrix multiplication. Projecting $x$ onto $S$ is simply $x_{\text{proj}} = Px$. This demystifies projections (often introduced with complicated formulas) by showing they’re linear maps.

Foundation for PCA. Understanding centering is essential before learning PCA. Many textbooks jump to “compute eigenvalues of $X^\top X$” without explaining why $X$ must be centered.

Computational perspective. Explicitly forming $P$ (storing $n^2$ entries) is wasteful. Implementing $Px$ as $x - \bar{x} \mathbf{1}$ (computing mean, subtracting) is much faster ($O(n)$ vs. $O(n^2)$).

References#

  1. Gauss, C. F. (1809). Theoria Motus Corporum Coelestium. Introduced least squares with mean-centering for orbit determination.

  2. Hotelling, H. (1933). “Analysis of a Complex of Statistical Variables into Principal Components.” Journal of Educational Psychology, 24(6), 417–441. Formalized PCA (assumes centered data).

  3. Ioffe, S., & Szegedy, C. (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” ICML 2015. Introduced batch normalization (centering + scaling layer activations).

  4. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press. Chapter 4 covers projection matrices and least squares.

  5. Horn, R. A., & Johnson, C. R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. Section 2.5 covers idempotent matrices (projections).

Problem. Project $x$ onto $S = \{x \in \mathbb{R}^n : \mathbf{1}^\top x = 0\}$ (zero-mean subspace).

Solution (math).

$S$ is a subspace (null space of $\mathbf{1}^\top$). The projection matrix is: $$ P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top $$

Applying $P$ to $x$ gives: $$ x_{\text{proj}} = Px = x - \frac{1}{n} (\mathbf{1}^\top x) \mathbf{1} = x - \bar{x} \mathbf{1} $$ where $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$ is the mean of $x$. Verification: $\mathbf{1}^\top x_{\text{proj}} = \mathbf{1}^\top (x - \bar{x} \mathbf{1}) = \mathbf{1}^\top x - n \bar{x} = 0$.

Solution (Python).

import numpy as np

# Define vector x in R^5
n = 5
x = np.array([3., 1., 0., -2., 4.])

# Centering matrix P = I - (1/n) * 1 * 1^T
I = np.eye(n)
one = np.ones((n, 1))
P = I - (1 / n) * (one @ one.T)

# Project x onto zero-mean subspace
x_proj = P @ x

print(f"Original x: {x}")
print(f"Mean of x: {x.mean():.2f}")
print(f"Projected x_proj: {x_proj}")
print(f"Mean of x_proj: {x_proj.sum():.2e} (should be ~0)")
print(f"Verification: 1^T @ x_proj = {one.T @ x_proj.reshape(-1, 1)[0, 0]:.2e}")

Output:

Original x: [ 3.  1.  0. -2.  4.]
Mean of x: 1.20
Projected x_proj: [ 1.8 -0.2 -1.2 -3.2  2.8]
Mean of x_proj: 0.00e+00 (should be ~0)
Verification: 1^T @ x_proj = 0.00e+00

Worked Example 3: Model outputs form range(X)#

Introduction#

In linear regression $\hat{y} = Xw$, all possible predictions lie in the column space (range) of the feature matrix $X$. This fundamental constraint determines model expressiveness: if the target $y \notin \text{col}(X)$, the model cannot fit perfectly (residual is nonzero). Understanding $\text{col}(X)$ as a subspace clarifies when adding features helps, when features are redundant (linearly dependent), and how model capacity relates to matrix rank.

Purpose#

  • Identify $\text{col}(X)$ as a subspace: All vectors $Xw$ (for $w \in \mathbb{R}^d$) form a subspace of $\mathbb{R}^n$.

  • Relate expressiveness to rank: $\dim(\text{col}(X)) = \text{rank}(X) \leq \min(n, d)$.

  • Connect to ML: Model predictions span a $\text{rank}(X)$-dimensional subspace. If $\text{rank}(X) < n$, the model cannot fit arbitrary targets.

Importance#

Underdetermined vs. overdetermined systems.

  • Underdetermined ($n < d$): More features than examples. $\text{rank}(X) \leq n < d$, so infinitely many solutions exist (null space is nontrivial).

  • Overdetermined ($n > d$): More examples than features. $\text{rank}(X) \leq d < n$, so exact fit is impossible unless $y \in \text{col}(X)$ (rare). Least squares finds best approximation.

Multicollinearity. If $\text{rank}(X) < d$, features are linearly dependent (redundant). Example: with an intercept column of ones, including both “temperature in Celsius” and “temperature in Fahrenheit” ($F = 1.8C + 32$) makes $X$ rank-deficient, because the Fahrenheit column is a linear combination of the Celsius and intercept columns. Solutions are then non-unique ($w + v$ is also a solution for any $v \in \text{null}(X)$).
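
A quick rank check illustrating the point (a sketch; the temperature readings are made up):

import numpy as np

celsius = np.array([0., 10., 20., 30., 40.])
fahrenheit = 1.8 * celsius + 32.0
intercept = np.ones_like(celsius)

X = np.column_stack([intercept, celsius, fahrenheit])   # intercept, C, F

print(X.shape)                          # (5, 3)
print(np.linalg.matrix_rank(X))         # 2, not 3: the Fahrenheit column is redundant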

What This Example Demonstrates#

  • Column space = set of all predictions: $\{Xw : w \in \mathbb{R}^d\} = \text{col}(X) = \text{span}\{x_1, \ldots, x_d\}$ where $x_j$ are columns of $X$.

  • Rank determines dimension: $\dim(\text{col}(X)) = \text{rank}(X)$.

  • NumPy verification: np.linalg.matrix_rank(X) computes rank via SVD.

Background#

Fundamental Theorem of Linear Algebra (Strang). For $A \in \mathbb{R}^{m \times n}$:

  1. $\text{col}(A)$ and $\text{null}(A^\top)$ are orthogonal complements in $\mathbb{R}^m$: $\mathbb{R}^m = \text{col}(A) \oplus \text{null}(A^\top)$.

  2. $\text{col}(A^\top)$ and $\text{null}(A)$ are orthogonal complements in $\mathbb{R}^n$: $\mathbb{R}^n = \text{col}(A^\top) \oplus \text{null}(A)$.

  3. $\dim(\text{col}(A)) = \dim(\text{col}(A^\top)) = \text{rank}(A)$.

  4. Rank-nullity theorem: $\text{rank}(A) + \dim(\text{null}(A)) = n$.

Historical Context: The notion of rank grew out of nineteenth-century work on determinants and bilinear forms by Sylvester and Frobenius, but the geometric interpretation as “dimension of the column space” became standard only in the 20th century with abstract linear algebra.

Connection to Machine Learning#

Regularization and identifiability. If $\text{rank}(X) < d$, the normal equations $X^\top X w = X^\top y$ are singular ($X^\top X$ is not invertible). Ridge regression adds $\lambda I$ to make $(X^\top X + \lambda I)$ invertible, effectively restricting solutions to a preferred subspace.
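
As a sketch (random data; the redundant third feature and $\lambda = 0.1$ are arbitrary choices), rank deficiency makes the normal equations singular, while the ridge system is solvable:

import numpy as np

rng = np.random.default_rng(2)
n = 6
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([x1, x2, 2 * x1 + 3 * x2])   # third feature is a linear combination
y = rng.normal(size=n)

print(np.linalg.matrix_rank(X))                  # 2 < 3, so X^T X is singular

lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w_ridge)                                   # ridge yields a unique solution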

Feature selection. Adding a feature that’s a linear combination of existing features (e.g., $x_{\text{new}} = 2x_1 + 3x_2$) does not increase $\text{rank}(X)$ or model capacity. Feature selection algorithms (LASSO, forward selection) aim to maximize rank while minimizing redundancy.

Low-rank approximation. If $\text{rank}(X) \ll \min(n, d)$, truncated SVD $X \approx U_k \Sigma_k V_k^\top$ captures most information with $k \ll d$ features. This is the basis of PCA, autoencoders, and matrix factorization (recommender systems).
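
A sketch of truncated SVD on an exactly rank-2 matrix (sizes and values are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 5))   # rank-2 matrix, 8 x 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.round(s, 6))                         # only two singular values are (numerically) nonzero

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k truncation
print(np.allclose(A, A_k))                    # True: two components reconstruct A exactly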

References#

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press. Chapter 3: “The Four Fundamental Subspaces.”

  2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. Chapter 2: “Linear Algebra” (discusses rank and span).

  3. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer. Section 3.4: “Shrinkage Methods” (ridge regression, handling rank deficiency).

Problem. Interpret $\{Xw : w \in \mathbb{R}^d\}$ as a subspace and compute its dimension.

Solution (math).

The set $\{Xw : w \in \mathbb{R}^d\}$ is the column space (range) of $X$: $$ \text{col}(X) = \{Xw : w \in \mathbb{R}^d\} = \text{span}\{x_1, \ldots, x_d\} $$ where $x_j \in \mathbb{R}^n$ are the columns of $X \in \mathbb{R}^{n \times d}$. The dimension is: $$ \dim(\text{col}(X)) = \text{rank}(X) \leq \min(n, d) $$

Solution (Python).

import numpy as np

# Define feature matrix X (3 examples, 2 features)
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Compute rank (dimension of column space)
rank = np.linalg.matrix_rank(X)

print(f"X shape: {X.shape}")
print(f"X =\n{X}")
print(f"Rank(X) = {rank}")
print(f"Column space dimension = {rank}")
print(f"Predictions Xw span a {rank}-dimensional subspace of R^{X.shape[0]}")

Output:

X shape: (3, 2)
X =
[[1. 0.]
 [0. 1.]
 [1. 1.]]
Rank(X) = 2
Column space dimension = 2
Predictions Xw span a 2-dimensional subspace of R^3

Worked Example 4: Bias trick (affine → linear)#

Introduction#

Most machine learning models use affine transformations: $f(x) = Wx + b$ where $W$ is a weight matrix and $b$ is a bias vector. Affine maps are not linear (they don’t map zero to zero if $b \neq 0$), but there’s an elegant trick: augment the input with a constant 1, turning $f(x) = Wx + b$ into a purely linear map $f(x') = W' x'$ in a higher-dimensional space.

This “bias trick” (also called “homogeneous coordinates”) is ubiquitous in ML: neural network layers, logistic regression, SVMs, and computer graphics all use it. It simplifies implementation (one matrix multiply instead of multiply + add) and unifies the treatment of weights and biases.

Purpose#

  • Unify affine and linear maps: Convert $f(x) = Wx + b$ to $f(x') = W'x'$ where $x' = [x; 1]$ (augmented input) and $W' = [W \,|\, b]$ (concatenated weight matrix and bias).

  • Simplify backpropagation: Gradients w.r.t. $W'$ handle both weights and biases uniformly.

  • Connect to projective geometry: Homogeneous coordinates in computer graphics use the same augmentation.

Importance#

Neural network implementation. Every linear layer $h = Wx + b$ can be written as $h = W'x'$ where $x' = [x; 1]$ and $W' = [W \,|\, b]$. Many frameworks (PyTorch, TensorFlow) handle biases separately for efficiency, but conceptually this augmentation clarifies the math.

Logistic regression. The decision boundary $w^\top x + b = 0$ becomes $w'^\top x' = 0$ in augmented space, a linear classifier (hyperplane through the origin in $\mathbb{R}^{d+1}$).

Conditioning and regularization. Regularizing $\|w\|_2^2$ without penalizing $b$ (common practice) is harder to express if $w$ and $b$ are combined. Keeping them separate maintains flexibility, but the augmentation perspective clarifies that they live in different subspaces.

What This Example Demonstrates#

  • Affine maps become linear in augmented space: $f(x) = Wx + b$ (affine in $\mathbb{R}^d$) equals $f(x') = W'x'$ (linear in $\mathbb{R}^{d+1}$).

  • Augmentation preserves structure: Adding a constant 1 extends the input space without losing information.

  • Numerical verification: Compute both $Wx + b$ and $W'x'$, verify they’re identical.

Background#

Affine vs. linear. A map $f: \mathbb{R}^d \to \mathbb{R}^m$ is:

  • Linear if $f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$ for all $x, y, \alpha, \beta$. Equivalently, $f(x) = Ax$ for some matrix $A$.

  • Affine if $f(x) = Ax + b$ for some matrix $A$ and vector $b$. Affine maps preserve affine combinations (weighted averages with weights summing to 1) but not arbitrary linear combinations.

Homogeneous coordinates. In computer graphics, 3D points $(x, y, z)$ are represented as 4-vectors $(x, y, z, 1)$ to handle translations uniformly. The last coordinate acts as a “scaling factor” (1 for ordinary points, 0 for vectors). This is exactly the bias trick.

Historical Context: Homogeneous coordinates were introduced by August Ferdinand Möbius (1827) for projective geometry. Their use in ML is a modern application of this classical idea.

Connection to Machine Learning#

Deep neural networks. Each layer computes $h_{l+1} = \sigma(W_l h_l + b_l)$ where $\sigma$ is a nonlinearity (ReLU, sigmoid). The affine transformation $W_l h_l + b_l$ can be written as $W'_l h'_l$ with $h'_l = [h_l; 1]$.

Batch processing. For a mini-batch $X \in \mathbb{R}^{B \times d}$ (rows are examples), the transformation $Y = XW^\top + \mathbf{1} b^\top$ (broadcasting bias) becomes $Y = X' W'^\top$ where $X' = [X \,|\, \mathbf{1}]$ (augmented batch) and $W' = [W \,|\, b]$.
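
The batch form can be checked the same way as the single-example case (a sketch; shapes and values are random placeholders):

import numpy as np

rng = np.random.default_rng(4)
B, d, m = 4, 3, 2
X = rng.normal(size=(B, d))               # mini-batch, rows are examples
W = rng.normal(size=(m, d))
b = rng.normal(size=m)

Y_affine = X @ W.T + b                    # broadcasting adds b to every row

X_aug = np.hstack([X, np.ones((B, 1))])   # X' = [X | 1]
W_aug = np.hstack([W, b[:, None]])        # W' = [W | b]
Y_linear = X_aug @ W_aug.T

print(np.allclose(Y_affine, Y_linear))    # True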

Regularization subtlety. Ridge regression penalizes $\|w\|_2^2$ but not $b$ (bias is unregularized). If we augment and use $w' = [w; b]$, regularizing $\|w'\|_2^2$ would incorrectly penalize the bias. This is why most implementations keep $w$ and $b$ separate despite the conceptual elegance of augmentation.

References#

  1. Möbius, A. F. (1827). Der barycentrische Calcul. Introduced homogeneous coordinates for projective geometry.

  2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. Section 6.1: “Feedforward Networks” (linear layers with biases).

  3. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. Section 3.1: “Linear Regression” (discusses augmentation for intercept terms).

Problem. Rewrite $f(x) = Wx + b$ as a linear map in augmented space.

Solution (math).

Define the augmented input $x' \in \mathbb{R}^{d+1}$: $$ x' = \begin{bmatrix} x \\ 1 \end{bmatrix} $$

and the augmented weight matrix $W' \in \mathbb{R}^{m \times (d+1)}$: $$ W' = \begin{bmatrix} W & b \end{bmatrix} $$

Then: $$ f(x) = Wx + b = W' x' = \begin{bmatrix} W & b \end{bmatrix} \begin{bmatrix} x \\ 1 \end{bmatrix} = Wx + b \cdot 1 $$

This is a linear map in $\mathbb{R}^{d+1}$ (no bias term needed).

Solution (Python).

import numpy as np

# Define weight matrix W (2x2) and bias vector b (2,)
W = np.array([[2., 1.],
              [-1., 3.]])
b = np.array([0.5, -2.])

# Input vector x (2,)
x = np.array([1., 4.])

# Standard affine transformation: Wx + b
y_affine = W @ x + b

# Augmented transformation: W' @ x'
# W' = [W | b] (concatenate b as a column)
W_aug = np.c_[W, b]  # Shape: (2, 3)

# x' = [x; 1] (append 1)
x_aug = np.r_[x, 1.]  # Shape: (3,)

# Linear transformation in augmented space
y_linear = W_aug @ x_aug

print(f"W =\n{W}")
print(f"b = {b}")
print(f"x = {x}\n")

print(f"Affine: Wx + b = {y_affine}")
print(f"\nAugmented W' =\n{W_aug}")
print(f"Augmented x' = {x_aug}")
print(f"Linear: W'x' = {y_linear}\n")

print(f"Are they equal? {np.allclose(y_affine, y_linear)}")

Output:

W =
[[ 2.  1.]
 [-1.  3.]]
b = [ 0.5 -2. ]
x = [1. 4.]

Affine: Wx + b = [ 6.5 9. ]

Augmented W' =
[[ 2.   1.   0.5]
 [-1.   3.  -2. ]]
Augmented x' = [1. 4. 1.]
Linear: W'x' = [ 6.5 9. ]

Are they equal? True

Worked Example 5: Attention outputs lie in span(V)#

Introduction#

The attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) is the core operation in Transformers, powering modern LLMs (GPT, BERT, LLaMA), vision models (ViT), and multimodal systems (CLIP, Flamingo). Attention computes weighted sums of value vectors $V$, with weights determined by query-key similarities. Crucially, attention outputs are constrained to lie in $\text{span}(V)$ — they cannot “invent” information outside the value subspace.

This example demonstrates that attention is a linear combination operation: the output $z = \sum_{i=1}^n \alpha_i v_i$ (where $\alpha_i$ are softmax-normalized attention scores) always lies in $\text{span}\{v_1, \ldots, v_n\}$, a subspace of $\mathbb{R}^{d_v}$.

Purpose#

  • Understand attention as weighted averaging: Output = $\sum_{i=1}^n \alpha_i v_i$ with $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$ (convex combination).

  • Identify the constraint: Attention outputs lie in $\text{span}(V)$, limiting expressiveness to the value subspace.

  • Connect to ML: Multi-head attention learns multiple subspaces (heads) in parallel, increasing capacity.

Importance#

Transformer architecture. Attention is the primary operation in Transformers, replacing recurrence (RNNs) and convolution (CNNs). Each layer computes: $$ \text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right) V $$ where $Q$ (queries), $K$ (keys), and $V$ (values) are linear projections of inputs. The output is a weighted sum of value vectors, constrained to $\text{span}(V)$.
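
A minimal NumPy sketch of this formula (shapes and values are illustrative only) makes the span constraint concrete: each output row is a convex combination of the rows of $V$:

import numpy as np

def softmax(scores, axis=-1):
    scores = scores - scores.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
n, d_k, d_v = 4, 8, 3
Q = rng.normal(size=(n, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))

A = softmax(Q @ K.T / np.sqrt(d_k))       # attention weights, each row sums to 1
Z = A @ V                                 # each output row is a weighted sum of rows of V

print(np.allclose(A.sum(axis=1), 1.0))    # convex weights
print(Z.shape)                            # (4, 3): outputs live in the span of the values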

Multi-head attention. Splitting into $h$ heads projects to $h$ different subspaces: $$ \text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h) W^O $$ where $\text{head}_i = \text{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$. Each head operates in a different $(d_v / h)$-dimensional subspace.

Expressiveness vs. efficiency. Attention can only combine existing values (linear combinations), not generate new directions. This is a feature, not a bug: it provides inductive bias (outputs depend on inputs) and computational efficiency (matrix multiplications).

What This Example Demonstrates#

  • Attention is linear combination: For softmax weights $\alpha \in \mathbb{R}^{1 \times n}$ and value matrix $V \in \mathbb{R}^{n \times d_v}$, the output $z = \alpha V \in \mathbb{R}^{1 \times d_v}$ is a linear combination of rows of $V$.

  • Outputs lie in subspace: $z \in \text{span}\{v_1, \ldots, v_n\}$ where $v_i \in \mathbb{R}^{d_v}$ are rows of $V$.

  • Convex combination: Since $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$, $z$ lies in the convex hull of $\{v_1, \ldots, v_n\}$.

Background#

Attention mechanism history:

  1. Bahdanau attention (2015): Introduced for neural machine translation. Computes alignment scores between encoder hidden states and decoder state, uses weighted sum for context.

  2. Scaled dot-product attention (Vaswani 2017): Simplified to $\text{softmax}(QK^\top / \sqrt{d_k})V$, enabling parallelization and scaling.

  3. Multi-head attention (Vaswani 2017): Projects to multiple subspaces (heads), learns different relationships.

Mathematical interpretation: Attention is a content-based addressing mechanism: queries “look up” relevant keys, retrieve corresponding values. The softmax ensures smooth interpolation (differentiable, convex weights).

Relation to kernels: Attention can be viewed as a kernel method where $K(q, k) = \exp(q^\top k / \sqrt{d_k})$ (unnormalized softmax). The output is a weighted sum in the kernel space (span of values).

Connection to Machine Learning#

Self-attention in Transformers. For input sequence $X \in \mathbb{R}^{n \times d}$, self-attention computes $Q = XW^Q$, $K = XW^K$, $V = XW^V$. Each output token is a linear combination of all input tokens’ values, weighted by query-key similarities.

Cross-attention in encoder-decoder models. Decoder queries attend to encoder keys/values. Example: in machine translation, the decoder (target language) attends to encoder representations (source language). Outputs lie in the span of encoder values.

Positional embeddings. Since attention is permutation-invariant (outputs are linear combinations regardless of input order), position information must be injected via positional encodings $p_i \in \mathbb{R}^d$ added to embeddings. This augments the value subspace.

Computational complexity. Attention requires $O(n^2 d_v)$ operations ($n \times n$ attention matrix times $n \times d_v$ value matrix). For long sequences (e.g., books, genomic data), this becomes prohibitive, motivating sparse attention and low-rank approximations.

Connection to Linear Algebra Theory#

Convex combinations and convex hulls. If $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$, then $z = \sum_i \alpha_i v_i$ lies in the convex hull $\text{conv}\{v_1, \ldots, v_n\}$, the smallest convex set containing all $v_i$. Geometrically, this is a polytope (bounded region) in $\mathbb{R}^{d_v}$.

Rank of attention output. The attention output matrix $Z = \text{softmax}(QK^\top / \sqrt{d_k}) V \in \mathbb{R}^{n \times d_v}$ has $\text{rank}(Z) \leq \min(n, \text{rank}(V))$. If $V$ is low-rank, attention cannot increase rank (outputs lie in a low-dimensional subspace).
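
A quick check with a deliberately rank-1 value matrix (a sketch; the numbers are random):

import numpy as np

rng = np.random.default_rng(6)
n, d_v = 6, 4

V = rng.normal(size=(n, 1)) @ rng.normal(size=(1, d_v))   # rank-1 value matrix
A = rng.random((n, n))
A = A / A.sum(axis=1, keepdims=True)                      # row-stochastic attention weights

Z = A @ V
print(np.linalg.matrix_rank(V))   # 1
print(np.linalg.matrix_rank(Z))   # at most 1: attention cannot increase the rank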

Projection interpretation. If all attention weights concentrate on a single token ($\alpha_i = 1$, $\alpha_j = 0$ for $j \neq i$), the output equals $v_i$ (hard selection of one value vector). Uniform weights ($\alpha_i = 1/n$) give the average $\bar{v} = \frac{1}{n} \sum_i v_i$ (the centroid of the value vectors).

Orthogonality and attention scores. High query-key similarity $q^\top k$ indicates alignment (small angle between $q$ and $k$). Orthogonal query-key pairs ($q^\top k = 0$) receive low attention weight. This is the same geometric intuition as inner products measuring similarity.

Pedagogical Significance#

Concrete example of span. Attention outputs visibly demonstrate that linear combinations $\sum_i \alpha_i v_i$ lie in $\text{span}\{v_1, \ldots, v_n\}$. Students can compute actual numbers and verify the output is expressible as a weighted sum.

Geometric visualization. For 2D or 3D value vectors, plot $\{v_1, \ldots, v_n\}$ and the attention output $z$. $z$ lies inside the convex hull (polytope formed by connecting $v_i$).

Foundation for Transformers. Understanding attention as linear combination clarifies:

  • Why attention is permutation-invariant: Linear combinations don’t depend on order.

  • Why multi-head attention helps: Different heads explore different subspaces.

  • Limitations: Attention can only interpolate existing values, not generate new directions (nonlinearity comes from layer stacking and feedforward networks).

References#

  1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). “Attention is All You Need.” NeurIPS 2017. Introduced Transformer architecture with scaled dot-product attention and multi-head attention.

  2. Bahdanau, D., Cho, K., & Bengio, Y. (2015). “Neural Machine Translation by Jointly Learning to Align and Translate.” ICLR 2015. First attention mechanism for seq2seq models.

  3. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL 2019. Bidirectional self-attention for masked language modeling.

  4. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., … & Houlsby, N. (2020). “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” ICLR 2021. Vision Transformers (ViT) apply attention to image patches.

  5. Tay, Y., Dehghani, M., Bahri, D., & Metzler, D. (2020). “Efficient Transformers: A Survey.” arXiv:2009.06732. Reviews sparse attention, low-rank approximations, and other efficiency techniques.

Problem. Show attention output is in the span of value vectors.

Solution (math).

For value matrix $V \in \mathbb{R}^{n \times d_v}$ (rows $v_1, \ldots, v_n$) and attention weights $\alpha \in \mathbb{R}^{1 \times n}$ (from softmax, so $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$), the attention output is: $$ z = \alpha V = \sum_{i=1}^n \alpha_i v_i \in \mathbb{R}^{1 \times d_v} $$

This is a convex combination of rows of $V$, hence $z \in \text{span}\{v_1, \ldots, v_n\} \subseteq \mathbb{R}^{d_v}$.

Solution (Python).

import numpy as np

# Define value matrix V (3 tokens, 2-dimensional values)
V = np.array([[1., 0.],   # v_1
              [0., 1.],   # v_2
              [2., 2.]])  # v_3

# Define attention weights (from softmax, sum to 1)
A = np.array([[0.2, 0.5, 0.3]])  # Shape: (1, 3)

# Compute attention output z = A @ V
z = A @ V  # Shape: (1, 2)

print(f"Value matrix V (rows are values):\n{V}\n")
print(f"Attention weights A = {A[0]} (sum = {A.sum():.1f})\n")
print(f"Attention output z = A @ V = {z[0]}")
print(f"\nVerification as linear combination:")
print(f"z = {A[0,0]}*v_1 + {A[0,1]}*v_2 + {A[0,2]}*v_3")
print(f"  = {A[0,0]}*{V[0]} + {A[0,1]}*{V[1]} + {A[0,2]}*{V[2]}")
print(f"  = {A[0,0]*V[0] + A[0,1]*V[1] + A[0,2]*V[2]}")
print(f"\nz lies in span(V): True (z is a linear combination of rows of V)")

Output:

Value matrix V (rows are values):
[[1. 0.]
 [0. 1.]
 [2. 2.]]

Attention weights A = [0.2 0.5 0.3] (sum = 1.0)

Attention output z = A @ V = [0.8 1.1]

Verification as linear combination:
z = 0.2*v_1 + 0.5*v_2 + 0.3*v_3
  = 0.2*[1. 0.] + 0.5*[0. 1.] + 0.3*[2. 2.]
  = [0.8 1.1]

z lies in span(V): True (z is a linear combination of rows of V)
