Chapter 1
Vector Spaces & Subspaces
Key ideas: Introduction

Introduction#

Vector spaces and subspaces form the foundational algebraic structures underlying all of machine learning. Every dataset, parameter vector, gradient, embedding, and prediction lives in a vector space. Understanding vector space structure—closure under addition and scalar multiplication, the existence of subspaces, and the geometric interpretation of span—is essential for reasoning about model capacity, optimization trajectories, dimensionality reduction, and numerical stability.

This chapter adopts an ML-first approach: we introduce definitions only when they illuminate practical algorithms or enable rigorous reasoning about ML systems. Rather than axiomatizing vector spaces abstractly, we show how closure properties guarantee that gradient descent never “leaves” the parameter space, how subspaces capture low-dimensional structure in data (PCA, autoencoders), and how span determines the expressiveness of linear models.

Important Ideas#

1. Closure under linear combinations. A vector space $V$ over $\mathbb{R}$ is closed under addition and scalar multiplication: for any $u, v \in V$ and $\alpha, \beta \in \mathbb{R}$, we have $\alpha u + \beta v \in V$. This seemingly trivial property is foundational:

  • Optimization: Gradient descent updates $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ are linear combinations, so parameters remain in $\mathbb{R}^d$.

  • Convex combinations: Interpolations $v = \alpha a + (1-\alpha)b$ with $\alpha \in [0,1]$ stay in the space (used in mixup data augmentation, model averaging, momentum methods).

  • Span: The set of all linear combinations $\{\sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{R}\}$ forms a subspace (the span of $\{v_1, \ldots, v_k\}$).

2. Subspaces capture structure. A subspace $S \subseteq V$ is itself a vector space (closed under addition/scaling and contains the zero vector). Key examples in ML:

  • Column space of $X$: All possible predictions $\hat{y} = Xw$ lie in $\text{col}(X)$, the span of feature columns. This determines model expressiveness.

  • Null space (kernel): Solutions to $Xw = 0$ form the null space, revealing parameter redundancy and identifiability issues.

  • Orthogonal complements: Residuals $r = y - Xw$ lie in $\text{col}(X)^\perp$, the subspace perpendicular to all predictions.

  • Eigenspaces: Eigenvectors with the same eigenvalue span an eigenspace (used in spectral clustering, PCA).

3. Geometric vs. algebraic perspectives. Vector spaces admit dual interpretations:

  • Algebraic: Vectors as tuples of numbers, operations as element-wise arithmetic, subspaces defined by equations.

  • Geometric: Vectors as arrows, subspaces as planes/lines, projections as “shadows,” orthogonality as perpendicularity.

  • ML benefit: Switching perspectives clarifies why algorithms work (geometry) and how to implement them (algebra).

Relevance to Machine Learning#

Model capacity. The span of a feature matrix $X \in \mathbb{R}^{n \times d}$ determines all possible linear predictions. If $\text{rank}(X) < d$, features are redundant (collinear). If $\text{rank}(X) < n$, the model cannot fit arbitrary targets (underdetermined system). Understanding span reveals when adding features helps vs. when it introduces multicollinearity.
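
A minimal sketch of this rank check on a small synthetic feature matrix (the matrix below is illustrative; the third column is deliberately constructed as the sum of the first two):

import numpy as np

# Synthetic feature matrix: 5 examples, 3 features.
# Third column = first column + second column, so the features are collinear.
X = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 1., 3.],
              [1., 1., 2.],
              [0., 2., 2.]])

n, d = X.shape
rank = np.linalg.matrix_rank(X)
print(f"rank(X) = {rank}, d = {d}, n = {n}")
print(f"rank(X) < d (redundant / collinear features): {rank < d}")
print(f"rank(X) < n (cannot fit arbitrary targets):   {rank < n}")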

Dimensionality reduction. PCA projects data onto the span of top eigenvectors, a low-dimensional subspace capturing most variance. Autoencoders learn nonlinear mappings to low-dimensional subspaces (latent spaces). Kernels implicitly map to high-dimensional (or infinite-dimensional) feature spaces where data becomes linearly separable.
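
A minimal PCA sketch on synthetic data, projecting onto the span of the top two eigenvectors of the centered sample covariance (the dimensions and random seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # synthetic data: 100 examples, 5 features
Xc = X - X.mean(axis=0)                  # center each column (required for PCA)

C = Xc.T @ Xc / Xc.shape[0]              # sample covariance matrix (PSD)
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
U_k = eigvecs[:, -2:]                    # top-2 eigenvectors: basis of a 2D subspace

Z = Xc @ U_k                             # coordinates of each example in that subspace
X_hat = Z @ U_k.T                        # reconstruction; every row lies in span(U_k)

print(f"variance captured by top 2 components: {eigvals[-2:].sum() / eigvals.sum():.3f}")
print(f"reconstruction has rank {np.linalg.matrix_rank(X_hat)} (a 2D subspace of R^5)")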

Optimization and numerical stability. Gradient-based methods exploit closure: updates are linear combinations of parameters and gradients. Regularization (ridge, Lasso) modifies the effective subspace where solutions lie. Numerical conditioning depends on subspace geometry (angles between basis vectors, subspace dimension).
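
A minimal sketch of the closure argument for gradient descent on a least-squares loss (synthetic data, arbitrary step size): every update is a linear combination of vectors in $\mathbb{R}^3$, so the iterates never leave it.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))             # synthetic design matrix
y = rng.normal(size=50)                  # synthetic targets

theta = np.zeros(3)                      # parameters start in R^3
eta = 0.05
for _ in range(200):
    grad = 2 * X.T @ (X @ theta - y) / len(y)   # gradient of mean squared error
    theta = theta - eta * grad                  # linear combination: stays in R^3

print(f"theta = {theta}")
print(f"theta still lives in R^3: {theta.shape == (3,)}")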

Algorithmic Development History#

1. Grassmann and the formal axiomatization (1844). Hermann Grassmann introduced the concept of an “extensive magnitude” (vector space) in Die lineale Ausdehnungslehre, defining addition and scalar multiplication axiomatically. His work was largely ignored until the 20th century but provided the first rigorous algebraic treatment of linear combinations and subspaces.

2. Peano’s axioms (1888). Giuseppe Peano formalized vector spaces with the modern axiomatic definition (closure, associativity, distributivity, identity, inverses). This abstraction enabled studying function spaces, polynomial spaces, and infinite-dimensional spaces under a unified framework.

3. Hilbert spaces and functional analysis (1900s-1920s). David Hilbert extended vector space theory to infinite dimensions with inner products, enabling rigorous foundations for quantum mechanics and integral equations. Banach, Fréchet, and Riesz developed norm theory, completing the modern framework.

4. Numerical linear algebra (1950s-1970s). With the advent of digital computers, numerical stability became critical. Householder (QR decomposition, 1958), Golub (SVD algorithm, 1965-1970), and Wilkinson (error analysis, 1960s-1980s) developed stable algorithms exploiting subspace orthogonality. These methods underpin modern least-squares solvers, eigensolvers, and PCA implementations.

5. Kernel methods and reproducing kernel Hilbert spaces (1990s-2000s). The kernel trick (Boser, Guyon, Vapnik, 1992; Schölkopf, Smola, 1998) showed that nonlinear problems become linear in high-dimensional (or infinite-dimensional) feature spaces. Support Vector Machines exploit subspace geometry (maximum margin hyperplanes) in these spaces.

6. Deep learning and representation learning (2010s-present). Neural networks learn hierarchical representations by composing linear maps (matrix multiplications) with nonlinearities. Each layer’s output spans a subspace; training adjusts these subspaces to separate classes or capture structure. Attention mechanisms (Vaswani et al., 2017) compute weighted sums (linear combinations) of value vectors, with outputs constrained to the span of the value subspace.

Definitions#

Vector space. A set $V$ over a field $\mathbb{F}$ (typically $\mathbb{R}$ or $\mathbb{C}$) with operations $+: V \times V \to V$ (addition) and $\cdot: \mathbb{F} \times V \to V$ (scalar multiplication) satisfying:

  1. Closure: $u + v \in V$ and $\alpha v \in V$ for all $u, v \in V$, $\alpha \in \mathbb{F}$.

  2. Associativity: $(u + v) + w = u + (v + w)$ and $\alpha(\beta v) = (\alpha\beta) v$.

  3. Commutativity: $u + v = v + u$.

  4. Identity: There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.

  5. Inverses: For each $v \in V$, there exists $-v \in V$ such that $v + (-v) = 0$.

  6. Distributivity: $\alpha(u + v) = \alpha u + \alpha v$ and $(\alpha + \beta)v = \alpha v + \beta v$.

  7. Scalar identity: $1 \cdot v = v$ for all $v \in V$.

Subspace. A subset $S \subseteq V$ is a subspace if:

  1. $0 \in S$ (contains the zero vector).

  2. $u + v \in S$ for all $u, v \in S$ (closed under addition).

  3. $\alpha u \in S$ for all $u \in S$, $\alpha \in \mathbb{F}$ (closed under scalar multiplication).

Equivalently, $S$ is a subspace if it is closed under linear combinations.

Span. The span of vectors $\{v_1, \ldots, v_k\} \subset V$ is: $$ \text{span}\{v_1, \ldots, v_k\} = \left\{ \sum_{i=1}^k \alpha_i v_i : \alpha_i \in \mathbb{F} \right\} $$ This is the **smallest subspace** containing $\{v_1, \ldots, v_k\}$.

Column space and range. For a matrix $A \in \mathbb{R}^{m \times n}$, the column space is $\text{col}(A) = \{Ax : x \in \mathbb{R}^n\} = \text{span}\{a_1, \ldots, a_n\}$, where $a_i$ are the columns of $A$. This is also called the range or image of $A$.

Null space (kernel). The null space of $A \in \mathbb{R}^{m \times n}$ is $\text{null}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$, the set of vectors mapped to zero by $A$.
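
As a sketch, orthonormal bases for $\text{col}(A)$ and $\text{null}(A)$ can be read off the SVD; the small matrix and tolerance rule below are illustrative:

import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])             # third column = first + second, so rank 2

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
r = int(np.sum(s > tol))                 # numerical rank

col_basis = U[:, :r]                     # orthonormal basis for col(A) in R^3
null_basis = Vt[r:].T                    # orthonormal basis for null(A) in R^3

print(f"rank(A) = {r}")
print(f"A @ null_basis is (numerically) zero: {np.allclose(A @ null_basis, 0)}")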

Essential vs Optional: Theoretical ML

Theoretical Machine Learning — Essential Foundations#

Theorems and formal guarantees:

  1. Rademacher complexity bounds. Generalization error depends on the complexity of the hypothesis class (function space). For linear models, the hypothesis space is finite-dimensional (span of features), enabling tight bounds. Key results:

    • Vapnik-Chervonenkis dimension for linear classifiers is $d+1$ (Vapnik & Chervonenkis, 1971).

    • Rademacher complexity of unit ball in $\mathbb{R}^d$ scales as $O(1/\sqrt{n})$ (Bartlett & Mendelson, 2002).

  2. Universal approximation. Existence of dense subspaces in function spaces:

    • Single hidden layer neural networks are dense in $C([0,1]^d)$ (Cybenko 1989).

    • Span of RBF kernels is dense in $L^2$ (Micchelli 1986).

    • Fourier series: span of $\{\sin(kx), \cos(kx)\}_{k=0}^\infty$ is dense in $L^2[0, 2\pi]$.

  3. Convex optimization. Gradient descent converges globally for convex functions over vector spaces (Nesterov 1983). Convergence rates depend on subspace properties (strong convexity, smoothness).

  4. Matrix concentration inequalities. Random matrix theory provides tail bounds for spectral norms, operator norms, and subspace angles (Tropp 2015). Used in randomized linear algebra (sketching, low-rank approximation).

Why essential: These theorems quantify when learning is possible, how many examples suffice, and when optimization succeeds. Vector space structure (dimension, subspaces, inner products) appears directly in the bounds.

Applied Machine Learning — Essential for Implementation#

Achievements and landmark systems:

  1. AlexNet (Krizhevsky et al., 2012). First deep convolutional network to win ImageNet (top-5 error of 15.3%, a 10.9 percentage-point improvement over the runner-up). Demonstrated that compositional linear maps (convolutions as local weight-sharing matrices) with nonlinearities learn hierarchical representations.

    • Vector space insight: Each convolutional layer maps feature maps $X_l \in \mathbb{R}^{h \times w \times c_l}$ through linear filters $W_l$ to $X_{l+1}$. The number of output channels $c_{l+1}$ bounds the rank of the effective weight matrix and hence the dimension of the subspace the outputs can span.

  2. Word2Vec (Mikolov et al., 2013). Learned dense word embeddings in $\mathbb{R}^{300}$ by predicting context words. The famous “king - man + woman ≈ queen” example demonstrated that semantic relationships are linear offsets in embedding space.

    • Subspace insight: Analogies correspond to parallel vectors in subspaces (gender direction, verb tense direction). Linear algebra operations (vector arithmetic) capture linguistic structure.

  3. ResNet (He et al., 2015). Introduced skip connections $y = F(x) + x$, enabling training of 152-layer networks (previous best: ~20 layers). Won ImageNet 2015 with 3.57% top-5 error.

    • Closure insight: Adding $x$ and $F(x)$ is a linear combination, guaranteed to stay in the same vector space. Residuals $F(x)$ span a learned subspace; identity shortcuts preserve gradients during backpropagation.

  4. Transformer (Vaswani et al., 2017). Replaced recurrence with attention, enabling parallelization and scaling to billions of parameters (GPT-3 has 175B).

    • Linear combination insight: Attention outputs are weighted sums $\sum_i \alpha_i V_i$, constrained to $\text{span}(V)$. Multi-head attention learns multiple subspaces in parallel.

  5. Diffusion Models (Ho et al., 2020; Rombach et al., 2022). DALL-E 2, Stable Diffusion generate images by iteratively denoising in latent space. Latent vectors $z \in \mathbb{R}^{d_{\text{latent}}}$ lie in an autoencoder’s learned subspace.

Why essential: These systems achieve state-of-the-art performance by exploiting vector space structure (linear combinations, subspaces, closure). Understanding span, null space, and projections is necessary to debug failures, interpret representations, and design architectures.

Key ideas: Where it shows up

1. Principal Component Analysis (PCA) — Subspace projections for dimensionality reduction#

Major achievements:

  • Hotelling (1933): Formalized PCA as finding orthogonal axes of maximum variance. Applied to psychology/economics data.

  • Pearson (1901): Introduced the concept of “lines of closest fit” (principal components) for reducing multidimensional data to low-dimensional representations.

  • Modern applications: Face recognition (eigenfaces, Turk & Pentland 1991), low-rank image compression via truncated SVD (Eckart & Young 1936), preprocessing for neural networks (whitening, decorrelation), latent semantic analysis (LSA for text, Deerwester et al. 1990).

  • Computational impact: The covariance matrix $C = \frac{1}{n} X^\top X$ (for centered $X$) is PSD, its eigenspaces are orthogonal subspaces, and projecting data onto the top-$k$ eigenvectors minimizes reconstruction error.

Connection to subspaces: PCA finds the $k$-dimensional subspace (span of top eigenvectors) that best approximates the data cloud. The residuals lie in the orthogonal complement (discarded eigenspaces).

2. Stochastic Gradient Descent (SGD) — Parameter updates as linear combinations#

Major achievements:

  • Robbins & Monro (1951): Proved convergence of stochastic approximation methods under diminishing step sizes.

  • Deep learning era (2012-present): SGD with minibatches is the dominant optimizer for neural networks. Variants (momentum, Adam, RMSprop) use weighted averages of gradients—linear combinations in parameter space.

  • Theoretical foundations: Gradient descent never leaves the parameter vector space $\mathbb{R}^d$ because updates $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$ are linear combinations. Convergence analysis relies on inner products (gradient angles) and subspace projections (low-rank gradients, Hessian-free optimization).

Connection to vector spaces: The optimization trajectory $\{\theta_0, \theta_1, \theta_2, \ldots\}$ lies entirely within the parameter space by closure. Momentum methods average previous gradients (linear combinations with exponential decay weights). Coordinate descent restricts updates to axis-aligned subspaces.

3. Deep Neural Networks — Compositional linear maps between layer subspaces#

Major achievements:

  • Universal approximation (Cybenko 1989, Hornik 1991): Neural networks with one hidden layer can approximate continuous functions arbitrarily well. The span of hidden layer activations determines expressiveness.

  • ImageNet revolution (Krizhevsky, Sutskever, Hinton 2012): AlexNet demonstrated that deep networks learn hierarchical feature representations. Each layer maps inputs through a linear transformation (matrix multiplication) followed by nonlinearity.

  • Residual connections (He et al. 2015): ResNets add skip connections $y = f(x) + x$, keeping outputs in the span of inputs plus a learned residual subspace.

Connection to linear maps: Each layer $h_{l+1} = \sigma(W_l h_l + b_l)$ applies a linear map $W_l$ (matrix multiplication) followed by a nonlinearity $\sigma$. The intermediate representation $h_l$ lives in a vector space; the column space of $W_l$ determines which subspace $h_{l+1}$ (pre-activation) can span.
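
A toy forward pass (random weights, ReLU, arbitrary layer widths) making the “linear map, then nonlinearity” structure explicit:

import numpy as np

rng = np.random.default_rng(2)
h0 = rng.normal(size=4)                        # input in R^4

W1, b1 = rng.normal(size=(6, 4)), np.zeros(6)  # layer 1: R^4 -> R^6
W2, b2 = rng.normal(size=(3, 6)), np.zeros(3)  # layer 2: R^6 -> R^3
relu = lambda z: np.maximum(z, 0)

z1 = W1 @ h0 + b1          # pre-activation: lies in col(W1) shifted by b1 (zero here)
h1 = relu(z1)              # element-wise nonlinearity
h2 = relu(W2 @ h1 + b2)

print(f"h0 in R^{h0.shape[0]}, h1 in R^{h1.shape[0]}, h2 in R^{h2.shape[0]}")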

4. Kernel Methods — Implicit infinite-dimensional feature spaces#

Major achievements:

  • Support Vector Machines (Boser, Guyon, Vapnik 1992): Introduced the kernel trick for implicitly computing inner products in high-dimensional spaces without explicitly constructing features.

  • Reproducing Kernel Hilbert Spaces (Aronszajn 1950): Provided rigorous mathematical foundation. Kernels $k(x, x')$ correspond to inner products in a (possibly infinite-dimensional) feature space $\mathcal{H}$: $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$.

  • Modern applications: Gaussian processes (Rasmussen & Williams 2006), kernel PCA, kernel ridge regression, attention mechanisms (scaled dot-product is an inner product in value space).

Connection to vector spaces: The feature map $\phi: \mathcal{X} \to \mathcal{H}$ embeds inputs into a vector space (often infinite-dimensional). The kernel trick avoids explicit computation by working in the dual (span of training examples). Decision boundaries are hyperplanes in $\mathcal{H}$, corresponding to nonlinear boundaries in input space.
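
A small sketch with the degree-2 polynomial kernel $k(x, x') = (x^\top x')^2$ on $\mathbb{R}^2$, whose explicit feature map $\phi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)$ lives in $\mathbb{R}^3$; the test points are arbitrary:

import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel on R^2
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x = np.array([1., 2.])
xp = np.array([3., -1.])

k_implicit = (x @ xp) ** 2               # kernel trick: computed in input space
k_explicit = phi(x) @ phi(xp)            # same value: inner product in feature space R^3

print(f"(x^T x')^2        = {k_implicit}")
print(f"<phi(x), phi(x')> = {k_explicit}")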

5. Transformer Attention — Weighted sums over value subspaces#

Major achievements:

  • Vaswani et al. (2017): “Attention is All You Need” introduced the Transformer architecture, replacing recurrence with self-attention. Enabled scaling to billion-parameter models (GPT-3, GPT-4, LLaMA).

  • Mechanism: Attention computes $\text{softmax}(QK^\top / \sqrt{d_k}) V$, where $Q, K, V$ are linear projections of inputs. The output is a linear combination of value vectors $V$, with weights from softmax-normalized inner products $QK^\top$.

  • Multi-head attention: Projects to multiple subspaces (heads), learns different span representations in parallel, concatenates results.

Connection to subspaces: Each head’s output lies in the span of its value matrix $V$. The attention weights $\alpha_i$ (softmax scores) determine the convex combination $\sum_{i=1}^n \alpha_i V_i$ (each row is a weighted sum of value vectors). The final representation is constrained to $\text{span}(\{V_1, \ldots, V_n\})$.
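
A minimal single-head attention sketch (random $Q, K, V$ with toy dimensions) showing that each output row is a convex combination of the rows of $V$:

import numpy as np

rng = np.random.default_rng(3)
n, d_k, d_v = 4, 8, 5                    # sequence length 4, toy head dimensions
Q = rng.normal(size=(n, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))

scores = Q @ K.T / np.sqrt(d_k)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # row-wise softmax
out = weights @ V                        # each output row = sum_i alpha_i * V_i

print(f"attention weights sum to 1 per row: {np.allclose(weights.sum(axis=1), 1)}")
print(f"output shape {out.shape}: every row lies in the span of the rows of V")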

Notation

Standard Conventions#

1. Vectors and matrices.

  • Scalars: Lowercase Roman or Greek letters ($a, b, \alpha, \beta, \lambda$).

  • Vectors: Lowercase bold ($\mathbf{x}, \mathbf{w}$) or with explicit space annotation ($x \in \mathbb{R}^d$). Default: column vectors.

  • Matrices: Uppercase Roman letters ($A, X, W, \Sigma$). $A \in \mathbb{R}^{m \times n}$ has $m$ rows and $n$ columns.

  • Transpose: $A^\top$ (not $A^T$).

Examples:

  • MNIST images flattened to $x \in \mathbb{R}^{784}$ (28×28 pixels).

  • Dataset matrix $X \in \mathbb{R}^{n \times d}$ with $n$ examples (rows) and $d$ features (columns). Example: ImageNet batch $X \in \mathbb{R}^{256 \times 150528}$ (256 images, 224×224×3 pixels).

  • Weight matrix for a linear layer: $W \in \mathbb{R}^{d_{\text{out}} \times d_{\text{in}}}$ maps $\mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$ via $y = Wx$.

2. Norms and inner products.

  • Euclidean norm (L2 norm): $\|x\|_2 = \sqrt{x_1^2 + \cdots + x_d^2} = \sqrt{x^\top x}$.

  • L1 norm (sparsity-inducing): $\|x\|_1 = |x_1| + \cdots + |x_d|$ (used in Lasso regression).

  • Frobenius norm (matrix): $\|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2} = \sqrt{\text{trace}(A^\top A)}$.

  • Inner product (dot product): $\langle x, y \rangle = x^\top y = \sum_{i=1}^d x_i y_i$.

Examples:

  • Regularization: Ridge regression minimizes $\|Xw - y\|_2^2 + \lambda \|w\|_2^2$ (L2 penalty).

  • Lasso regression: $\|Xw - y\|_2^2 + \lambda \|w\|_1$ (L1 penalty encourages sparse $w$).

  • Gradient magnitude: $\|\nabla \mathcal{L}(\theta)\|_2$ measures steepness of loss surface.

3. Subspaces and projections.

  • Column space: $\text{col}(A)$ or $\text{range}(A)$ or $\mathcal{R}(A)$.

  • Null space (kernel): $\text{null}(A)$ or $\ker(A)$ or $\mathcal{N}(A)$.

  • Orthogonal complement: $S^\perp = \{v \in V : \langle v, s \rangle = 0 \text{ for all } s \in S\}$.

  • Span: $\text{span}\{v_1, \ldots, v_k\}$ = all linear combinations $\sum_{i=1}^k \alpha_i v_i$.

Examples:

  • Least squares: predictions $\hat{y} = Xw$ lie in $\text{col}(X) \subseteq \mathbb{R}^n$. Residuals $r = y - \hat{y}$ lie in $\text{col}(X)^\perp$.

  • PCA: data projected onto $\text{span}\{u_1, \ldots, u_k\}$ where $u_i$ are top eigenvectors of covariance matrix.

  • Underdetermined systems: $Xw = y$ has infinitely many solutions in $w_0 + \text{null}(X)$ (affine subspace).

4. Special matrices and decompositions.

  • Identity matrix: $I$ (or $I_n$ for $n \times n$). Satisfies $Ix = x$ for all $x$.

  • Zero vector: $0$ (or $\mathbf{0}$). Satisfies $v + 0 = v$ for all $v$.

  • Eigenvalues/eigenvectors: $Ax = \lambda x$ with $x \neq 0$. Eigenvalue $\lambda \in \mathbb{R}$ (or $\mathbb{C}$), eigenvector $x \in \mathbb{R}^d$.

  • Singular value decomposition: $X = U \Sigma V^\top$ with $U \in \mathbb{R}^{n \times n}$ (left singular vectors), $\Sigma \in \mathbb{R}^{n \times d}$ (diagonal singular values $\sigma_i \geq 0$), $V \in \mathbb{R}^{d \times d}$ (right singular vectors).

Examples:

  • Covariance matrix: $C = \frac{1}{n} X^\top X$ is PSD, has eigenpairs $(\lambda_i, u_i)$ with $\lambda_i \geq 0$.

  • SVD truncation: $X \approx U_k \Sigma_k V_k^\top$ (rank-$k$ approximation minimizing $\|X - \hat{X}\|_F$).

  • Condition number: $\kappa(X) = \sigma_{\max} / \sigma_{\min}$ measures numerical stability (large $\kappa$ → ill-conditioned).

5. Index conventions.

  • Matrix indexing: $A_{ij}$ = element in row $i$, column $j$. Python uses 0-indexing; math uses 1-indexing.

  • Vector indexing: $x_i$ = $i$-th element of $x$. In Python: x[i] (0-based).

  • Colon notation: $A_{:,j}$ = $j$-th column of $A$. $A_{i,:}$ = $i$-th row. Ranges: $A_{1:k, :}$ = first $k$ rows.

Examples:

  • Feature $j$ across all examples: $X_{:,j} \in \mathbb{R}^n$ (column vector).

  • Example $i$ features: $X_{i,:} \in \mathbb{R}^{1 \times d}$ (row vector).

  • Top-$k$ singular vectors: $U_{:, 1:k} \in \mathbb{R}^{n \times k}$ (first $k$ columns of $U$).

Pitfalls & sanity checks

Common Mistakes#

1. Confusing affine and linear maps.

  • Error: Calling $f(x) = Wx + b$ a “linear” function.

  • Correction: It’s affine (not linear) if $b \neq 0$. Linear maps satisfy $f(0) = 0$; affine maps with $b \neq 0$ don’t.

  • Why it matters: Composition of affine maps is affine (not linear unless biases cancel). Regularization treats $W$ and $b$ differently.

2. Forgetting to center data for PCA.

  • Error: Computing eigenvalues of $X^\top X$ without centering $X$.

  • Correction: First compute $X_c = X - \frac{1}{n} \mathbf{1}\mathbf{1}^\top X$ (subtract column means), then use $X_c^\top X_c$.

  • Why it matters: Without centering, the first principal component points toward the mean (captures location, not variance).

3. Assuming rank(X) = d by default.

  • Error: Solving $X^\top X w = X^\top y$ without checking if $X^\top X$ is invertible.

  • Correction: Check $\text{rank}(X)$ with np.linalg.matrix_rank(X). If $\text{rank}(X) < d$, use regularization (ridge regression) or pseudoinverse.

  • Why it matters: Singular $X^\top X$ causes LinAlgError or numerical instability (condition number $\kappa \to \infty$).

4. Confusing column space and row space.

  • Error: Saying “predictions $Xw$ lie in the span of rows of $X$.”

  • Correction: $Xw$ lies in the span of columns of $X$ (column space). Row space is the span of rows (equivalently, column space of $X^\top$).

  • Why it matters: For $X \in \mathbb{R}^{n \times d}$, column space is in $\mathbb{R}^n$ (prediction space), row space is in $\mathbb{R}^d$ (feature space).

5. Ignoring numerical stability.

  • Error: Computing $(X^\top X)^{-1} X^\top y$ explicitly (normal equations).

  • Correction: Use np.linalg.lstsq(X, y) (SVD internally) or a QR factorization; if you must form the normal equations, solve them with a Cholesky-based routine such as scipy.linalg.solve(X.T @ X, X.T @ y, assume_a='pos').

  • Why it matters: Explicitly forming $X^\top X$ squares the condition number ($\kappa(X^\top X) = \kappa(X)^2$), amplifying errors.
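
A sketch contrasting the explicit normal equations with np.linalg.lstsq on an ill-conditioned synthetic design (polynomial features; the exact condition numbers and error magnitudes will vary):

import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
X = np.vander(t, 8, increasing=True)     # monomial features: nearly collinear columns
w_true = rng.normal(size=8)
y = X @ w_true

print(f"kappa(X)     = {np.linalg.cond(X):.2e}")
print(f"kappa(X^T X) = {np.linalg.cond(X.T @ X):.2e}  (roughly kappa(X)^2)")

w_normal = np.linalg.solve(X.T @ X, X.T @ y)        # explicit normal equations
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)     # SVD-based least squares

print(f"normal-equation error: {np.linalg.norm(w_normal - w_true):.2e}")
print(f"lstsq error:           {np.linalg.norm(w_lstsq - w_true):.2e}")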

Essential Sanity Checks#

Always verify shapes:

  • After matrix multiply $C = AB$, check C.shape == (A.shape[0], B.shape[1]).

  • For batch processing, ensure leading dimensions match (e.g., $X \in \mathbb{R}^{B \times d}$, $W \in \mathbb{R}^{d \times m}$ gives $XW \in \mathbb{R}^{B \times m}$).

Check rank before solving:

rank = np.linalg.matrix_rank(X)
if rank < X.shape[1]:
    print(f"Warning: X is rank-deficient ({rank} < {X.shape[1]}). Use regularization.")

Verify projections are idempotent: For projection matrix $P$, check $P^2 = P$ and $P^\top = P$ (orthogonal projection).

assert np.allclose(P @ P, P), "Projection not idempotent"
assert np.allclose(P.T, P), "Projection not symmetric"

Test centering explicitly: After centering $X_c = X - \text{mean}(X)$, verify column means are zero:

assert np.allclose(X_c.mean(axis=0), 0), "Data not centered"

Condition number monitoring: For ill-conditioned systems, check $\kappa(X) = \sigma_{\max}(X) / \sigma_{\min}(X)$:

cond = np.linalg.cond(X)
if cond > 1e10:
    print(f"Warning: X is ill-conditioned (κ = {cond:.2e}). Results may be numerically unstable.")

Debugging Checklist#

  • Shapes mismatch? Print X.shape, w.shape before every matrix operation.

  • Unexpected zeros? Check for rank deficiency (np.linalg.matrix_rank(X)).

  • Large errors? Compute residuals $\|Xw - y\|_2$, check if $y \in \text{col}(X)$.

  • Numerical issues? Switch to stable solvers (np.linalg.lstsq, QR, SVD instead of normal equations).

  • Non-converging optimization? Verify gradients $\nabla \mathcal{L}(\theta)$ stay in parameter space (closure), check learning rate.

References

Foundational Texts#

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press.

    • Chapters 1-4: Vector spaces, subspaces, orthogonality, least squares.

    • Emphasizes geometric intuition and computational methods.

    • Companion video lectures: MIT OpenCourseWare 18.06.

  2. Axler, S. (2015). Linear Algebra Done Right (3rd ed.). Springer.

    • Rigorous, abstract treatment (avoids determinants until late).

    • Focuses on vector spaces, linear maps, eigenvalues.

    • Best for theoretical foundations.

  3. Horn, R. A., & Johnson, C. R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press.

    • Comprehensive reference for matrix theory.

    • Covers norms, singular values, matrix decompositions, perturbation theory.

    • Graduate-level depth.

Machine Learning Perspectives#

  1. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

    • Chapter 2: Linear Algebra (vectors, matrices, norms, eigendecomposition, SVD).

    • Chapter 6: Feedforward Networks (linear layers, activation functions).

    • Free online: deeplearningbook.org

  2. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer.

    • Chapter 3: Linear Methods for Regression (least squares, ridge, lasso, PCA).

    • Chapter 4: Linear Methods for Classification (LDA, logistic regression).

    • Emphasizes statistical perspective (bias-variance, model selection).

  3. Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.

    • Chapter 7: Linear Algebra (subspaces, rank, matrix calculus).

    • Chapter 11: Linear Regression (Bayesian, regularization).

    • Modern treatment with probabilistic framing.

Historical Papers#

  1. Pearson, K. (1901). “On Lines and Planes of Closest Fit to Systems of Points in Space.” Philosophical Magazine, 2(11), 559–572.

    • Introduced principal components (PCA).

  2. Hotelling, H. (1933). “Analysis of a Complex of Statistical Variables into Principal Components.” Journal of Educational Psychology, 24(6), 417–441.

    • Formalized PCA with covariance matrices.

  3. Eckart, C., & Young, G. (1936). “The Approximation of One Matrix by Another of Lower Rank.” Psychometrika, 1(3), 211–218.

    • Proved SVD gives optimal low-rank approximation.

Modern Machine Learning#

  1. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). “Distributed Representations of Words and Phrases and their Compositionality.” NeurIPS 2013.

    • Word2Vec embeddings; demonstrated linear structure (analogies).

  2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). “Attention is All You Need.” NeurIPS 2017.

    • Transformer architecture; attention as weighted sums (linear combinations).

  3. He, K., Zhang, X., Ren, S., & Sun, J. (2015). “Deep Residual Learning for Image Recognition.” CVPR 2016.

    • ResNets with skip connections ($y = F(x) + x$, closure in vector space).

  4. Ioffe, S., & Szegedy, C. (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” ICML 2015.

    • Batch norm (centering + scaling activations).

Numerical Linear Algebra#

  1. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press.

    • Authoritative reference for numerical algorithms (QR, SVD, eigensolvers).

    • Emphasizes stability, conditioning, complexity.

  2. Trefethen, L. N., & Bau, D. (1997). Numerical Linear Algebra. SIAM.

    • Concise treatment of QR, SVD, least squares, eigenvalue algorithms.

    • Focus on geometric intuition and practical computation.

Online Resources#

  1. 3Blue1Brown (Grant Sanderson). Essence of Linear Algebra (video series).

  2. Gilbert Strang. MIT OpenCourseWare 18.06: Linear Algebra (video lectures).

  3. The Matrix Cookbook (Petersen & Pedersen, 2012).

Five worked examples

Worked Example 1: Embedding interpolation is still a vector#

Introduction#

Token embeddings in NLP models (Word2Vec, GloVe, BERT, GPT) map discrete tokens to continuous vectors in $\mathbb{R}^d$. A fundamental property: any linear combination of embeddings remains a valid embedding (closure under vector space operations). This enables semantic arithmetic (“king” - “man” + “woman” ≈ “queen”), interpolation between concepts, and averaging embeddings for sentences or documents.

This example demonstrates that embedding spaces are vector spaces by explicitly computing an interpolation (convex combination) of two token embeddings. The result stays in $\mathbb{R}^d$, illustrating closure under linear combinations.

Purpose#

  • Verify closure: Show that $\alpha e(a) + (1-\alpha) e(b) \in \mathbb{R}^d$ for any embeddings $e(a), e(b)$ and scalar $\alpha \in [0,1]$.

  • Introduce convex combinations: Interpolation with $\alpha \in [0,1]$ produces points on the line segment between $e(a)$ and $e(b)$.

  • Connect to ML: Embedding arithmetic is used in analogy tasks, compositional semantics, and prompt engineering (e.g., blending concepts for image generation).

Importance#

Semantic compositionality. The vector space structure of embeddings enables composing meanings via linear algebra. Famous examples:

  • Word2Vec analogies (Mikolov et al., 2013): $v_{\text{king}} - v_{\text{man}} + v_{\text{woman}} \approx v_{\text{queen}}$; simple vector arithmetic answers a substantial fraction of word-analogy benchmark questions.

  • Sentence embeddings: Average token embeddings $\bar{v} = \frac{1}{n} \sum_{i=1}^n e(t_i)$ (simple but effective baseline for sentence similarity).

  • Image-text embeddings (CLIP, 2021): Contrastive learning aligns image and text embeddings in a shared vector space. Interpolations blend visual/textual concepts.

Training stability. Gradient descent updates embeddings via $e(t) \leftarrow e(t) - \eta \nabla \mathcal{L}$. Closure ensures embeddings never “leave” $\mathbb{R}^d$ during training.

What This Example Demonstrates#

This example shows that embedding spaces are closed under linear combinations, a necessary condition for being a vector space. Interpolation $v = \alpha e(a) + (1-\alpha)e(b)$ produces a point between $e(a)$ and $e(b)$, illustrating that we can “blend” semantic meanings by taking weighted averages.

The geometric interpretation: $e(a)$ and $e(b)$ define a line in $\mathbb{R}^d$; all convex combinations lie on the line segment $[e(a), e(b)]$. This extends to arbitrary linear combinations (not just convex), forming the span $\{\alpha e(a) + \beta e(b) : \alpha, \beta \in \mathbb{R}\}$ (a 2D subspace if $e(a)$ and $e(b)$ are linearly independent).

Background#

Distributional semantics. The idea that “words are characterized by the company they keep” (Firth, 1957) led to vector space models in NLP. Early methods (latent semantic analysis, 1990; HAL, 1997) used co-occurrence matrices. Modern neural embeddings (Word2Vec, 2013; GloVe, 2014) learn dense representations by predicting context words.

Vector space models in NLP:

  • Bag-of-words: Represent documents as sparse vectors in $\mathbb{R}^{|\text{vocab}|}$ (counts or TF-IDF weights).

  • Word embeddings: Learn dense vectors $e(w) \in \mathbb{R}^d$ ($d \approx 50$-$1000$) capturing semantic similarity. Similar words have nearby vectors (measured by cosine similarity or Euclidean distance).

  • Contextual embeddings (BERT, GPT): Embeddings depend on context; $e(w | \text{context})$ varies across sentences. Still vectors in $\mathbb{R}^d$ at each layer.

Closure and linearity: The vector space axioms (closure, distributivity) are assumed in embedding models but rarely verified explicitly. This example makes closure concrete: interpolation $\alpha e(a) + (1-\alpha)e(b)$ stays in $\mathbb{R}^d$ because $\mathbb{R}^d$ is a vector space.

Historical Context#

1. Distributional hypothesis (1950s-1960s). Harris (1954) and Firth (1957) proposed that word meaning is determined by distribution (co-occurrence patterns). This motivated vector representations based on context counts.

2. Latent Semantic Analysis (Deerwester et al., 1990). Applied SVD to term-document matrices, projecting words/documents into low-dimensional subspaces. Demonstrated that dimensionality reduction (via truncated SVD) preserves semantic relationships.

3. Word2Vec (Mikolov et al., 2013). Introduced skip-gram and CBOW models, training shallow neural networks to predict context words. Showed that embeddings exhibit linear structure: analogies correspond to parallel vectors ($v_{\text{king}} - v_{\text{man}} + v_{\text{woman}} \approx v_{\text{queen}}$).

4. GloVe (Pennington et al., 2014). Combined global co-occurrence statistics with local context prediction, achieving state-of-the-art performance on analogy and similarity tasks.

5. Contextual embeddings (2018-present). BERT (Devlin et al., 2018) and GPT (Radford et al., 2018) compute embeddings that vary by context, using Transformer architectures. Embeddings at each layer are still vectors in $\mathbb{R}^d$, but $e(w)$ depends on the entire input sequence.

History in Machine Learning#

  • 1990: LSA applies SVD to term-document matrices (vector space models).

  • 2013: Word2Vec popularizes dense embeddings; analogy tasks demonstrate linear structure.

  • 2014: GloVe combines global statistics with neural methods.

  • 2017: Transformers (Vaswani et al.) enable contextualized embeddings via attention.

  • 2018: BERT and GPT revolutionize NLP by learning contextual representations at scale.

  • 2021: CLIP (Radford et al.) aligns image and text embeddings in a shared vector space, enabling zero-shot image classification and text-to-image generation.

Prevalence in Machine Learning#

Ubiquitous in NLP: Every modern NLP model (BERT, GPT, T5, LLaMA) uses token embeddings in $\mathbb{R}^d$ ($d = 768$ for BERT-base, $d = 12288$ for the largest GPT-3 model). Embeddings are the primary representation for text.

Vision and multimodal models:

  • Vision Transformers (ViT, 2020): Patch embeddings in $\mathbb{R}^d$ replace pixel representations.

  • CLIP (2021): Image and text embeddings in a shared $\mathbb{R}^{512}$ space enable cross-modal retrieval.

  • DALL-E, Stable Diffusion (2021-2022): Text embeddings condition diffusion models for image generation.

Recommendation systems: Item embeddings in $\mathbb{R}^d$ capture user preferences. Collaborative filtering factorizes user-item matrices into embeddings.

Notes and Explanatory Details#

Shape discipline: $e(a) \in \mathbb{R}^d$, $e(b) \in \mathbb{R}^d$, $\alpha \in \mathbb{R}$. The interpolation $v = \alpha e(a) + (1-\alpha)e(b)$ is a linear combination, so $v \in \mathbb{R}^d$ by closure.

Convex combinations: Restricting $\alpha \in [0,1]$ ensures $v$ lies on the line segment $[e(a), e(b)]$. Allowing $\alpha \in \mathbb{R}$ gives the entire line through $e(a)$ and $e(b)$ (the span).

Geometric interpretation: In 3D, if $e(a) = [1, 0, 2]$ and $e(b) = [-1, 3, 0]$, then $v = 0.3 e(a) + 0.7 e(b)$ lies 30% of the way from $e(b)$ to $e(a)$.

Numerical considerations: Embedding norms vary (typical $\|e(w)\|_2 \approx 1$-$10$ depending on initialization). Normalization (dividing by $\|e(w)\|_2$) is common for cosine similarity metrics.

Connection to Machine Learning#

Analogy tasks. Linear offsets capture semantic relationships: $v_{\text{France}} - v_{\text{Paris}} \approx v_{\text{Germany}} - v_{\text{Berlin}}$ (capital relationship). The vector $v_{\text{France}} - v_{\text{Paris}}$ represents the “capital-of” direction in embedding space.

Prompt interpolation. In text-to-image models, interpolating prompt embeddings generates images blending two concepts. Example: $\alpha e(\text{"dog"}) + (1-\alpha)e(\text{"cat"})$ with $\alpha = 0.5$ might generate an image with a blend of dog-like and cat-like features.

Sentence embeddings. Averaging token embeddings $\bar{v} = \frac{1}{n} \sum_{i=1}^n e(t_i)$ is a simple but effective sentence representation (used in Skip-Thought, InferSent). More sophisticated: weighted averages (TF-IDF weights) or learned aggregations (attention).

Connection to Linear Algebra Theory#

Vector space axioms. $\mathbb{R}^d$ satisfies all vector space axioms:

  1. Closure: $e(a) + e(b) \in \mathbb{R}^d$ and $\alpha e(a) \in \mathbb{R}^d$.

  2. Associativity: $(e(a) + e(b)) + e(c) = e(a) + (e(b) + e(c))$.

  3. Commutativity: $e(a) + e(b) = e(b) + e(a)$.

  4. Identity: $e(a) + 0 = e(a)$ where $0 = [0, \ldots, 0] \in \mathbb{R}^d$.

  5. Inverses: $e(a) + (-e(a)) = 0$.

  6. Scalar distributivity: $\alpha(e(a) + e(b)) = \alpha e(a) + \alpha e(b)$.

Subspaces. The span of embeddings $\text{span}\{e(t_1), \ldots, e(t_k)\}$ is a subspace of $\mathbb{R}^d$. For a vocabulary of size $|V|$, all embeddings lie in a $k$-dimensional subspace whenever the embedding matrix has rank $k < d$ (a low-rank embedding matrix).

Convex combinations. Combinations $\sum_{i=1}^k \alpha_i e(t_i)$ with $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$ form the convex hull of the embeddings (a polytope in $\mathbb{R}^d$). Sentence embeddings obtained via averaging lie in this convex hull.

Pedagogical Significance#

Concrete verification of closure. Many students learn vector space axioms abstractly but rarely see explicit numerical verification. This example shows that $\alpha e(a) + (1-\alpha)e(b) \in \mathbb{R}^d$ by computing actual numbers.

Geometric intuition. Interpolation visualizes the line segment between two points in $\mathbb{R}^d$. Extending to $\alpha \notin [0,1]$ shows extrapolation (moving beyond $e(a)$ or $e(b)$ along the line).

Foundation for advanced topics. Understanding embedding spaces as vector spaces is prerequisite for:

  • Analogies: Vector arithmetic $e(a) - e(b) + e(c)$ requires closure.

  • Dimensionality reduction: Projecting embeddings to lower-dimensional subspaces (PCA, t-SNE).

  • Alignment: Mapping embeddings between languages (Procrustes alignment, learned transforms).

References#

  1. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). “Distributed Representations of Words and Phrases and their Compositionality.” NeurIPS 2013. Introduced Word2Vec (skip-gram, CBOW); demonstrated analogy tasks.

  2. Pennington, J., Socher, R., & Manning, C. D. (2014). “GloVe: Global Vectors for Word Representation.” EMNLP 2014. Combined global co-occurrence statistics with local context.

  3. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL 2019. Contextual embeddings via masked language modeling.

  4. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., … & Sutskever, I. (2021). “Learning Transferable Visual Models From Natural Language Supervision.” ICML 2021. CLIP aligns image/text embeddings in shared space.

  5. Firth, J. R. (1957). “A Synopsis of Linguistic Theory, 1930-1955.” Studies in Linguistic Analysis. Introduced distributional hypothesis: “You shall know a word by the company it keeps.”

Problem. Show token embeddings live in a vector space and compute an interpolation.

Solution (math).

Given embeddings $e(a), e(b) \in \mathbb{R}^d$ and $\alpha \in [0,1]$, the interpolation is: $$ v = \alpha e(a) + (1-\alpha)e(b) $$

By closure of $\mathbb{R}^d$ under linear combinations, $v \in \mathbb{R}^d$. For $\alpha = 0$, $v = e(b)$; for $\alpha = 1$, $v = e(a)$; for $\alpha = 0.5$, $v$ is the midpoint.

Solution (Python).

import numpy as np

# Define embeddings for tokens 'a' and 'b' in R^3
E = {
    'a': np.array([1., 0., 2.]),
    'b': np.array([-1., 3., 0.])
}

# Interpolation parameter (0 <= alpha <= 1)
alpha = 0.3

# Compute convex combination
v = alpha * E['a'] + (1 - alpha) * E['b']

print(f"e(a) = {E['a']}")
print(f"e(b) = {E['b']}")
print(f"v = {alpha} * e(a) + {1-alpha} * e(b) = {v}")
print(f"v is in R^3: {v.shape == (3,)}")

Output:

e(a) = [1. 0. 2.]
e(b) = [-1.  3.  0.]
v = 0.3 * e(a) + 0.7 * e(b) = [-0.4  2.1  0.6]
v is in R^3: True

Worked Example 2: Zero-mean subspace projection#

Introduction#

Centering data (subtracting the mean) is a ubiquitous preprocessing step in machine learning. PCA, covariance estimation, batch normalization, and many other algorithms assume zero-mean data. Mathematically, zero-mean vectors form a subspace: $S = \{x \in \mathbb{R}^n : \mathbf{1}^\top x = 0\}$ where $\mathbf{1} = [1, \ldots, 1]^\top$ is the all-ones vector.

This example shows that $S$ is the null space of the row vector $\mathbf{1}^\top$, demonstrates projection onto $S$ via the centering matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$, and verifies that the projected vector has zero mean.

Purpose#

  • Demonstrate subspaces defined by constraints: $S = \{x : \mathbf{1}^\top x = 0\}$ is a hyperplane through the origin ($(n-1)$-dimensional subspace).

  • Introduce projection matrices: $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ projects onto $S$ (removes the mean).

  • Connect to ML: Centering data is equivalent to projecting onto the zero-mean subspace.

Importance#

PCA and covariance estimation. PCA operates on the centered data matrix $X_c = X - \frac{1}{n} \mathbf{1}\mathbf{1}^\top X$ (each column has zero mean). The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ measures variance around the mean; if data is not centered, $C$ mixes mean and variance.

Batch normalization (Ioffe & Szegedy, 2015). Normalizes layer activations by subtracting batch mean and dividing by batch std. The mean-centering step is projection onto the zero-mean subspace.

Regularization and identifiability. In linear regression with an intercept $f(x) = w^\top x + b$, centering inputs $x \mapsto x - \bar{x}$ and targets $y \mapsto y - \bar{y}$ decouples the intercept from the weights, improving numerical stability and interpretability.

What This Example Demonstrates#

This example shows that:

  1. Zero-mean vectors form a subspace (closed under addition/scaling, contains zero).

  2. Projection onto $S$ is linear: $x_{\text{proj}} = Px$ with $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$.

  3. The projection removes the mean: $\mathbf{1}^\top (Px) = 0$ for all $x$.

  4. $P$ is idempotent: $P^2 = P$ (projecting twice is the same as projecting once).

  5. $P$ is symmetric: $P^\top = P$ (orthogonal projection).

Background#

Affine subspaces vs. linear subspaces. An affine subspace (hyperplane) has the form $\{x : a^\top x = c\}$ for $c \neq 0$. This is not a subspace unless $c = 0$ (does not contain the origin). Zero-mean vectors form a linear subspace because $\mathbf{1}^\top 0 = 0$.

Centering matrix. The matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ is called the centering matrix or projection onto zero-mean subspace. It satisfies:

  • $P\mathbf{1} = 0$ (projects all-ones vector to zero).

  • $Px = x - \bar{x} \mathbf{1}$ where $\bar{x} = \frac{1}{n} \mathbf{1}^\top x$ (subtracts the mean).

  • $P^2 = P$ (idempotent: projecting twice does nothing).

  • $P^\top = P$ (symmetric: orthogonal projection).

Null space and column space. $S = \text{null}(\mathbf{1}^\top)$ is the set of vectors perpendicular to $\mathbf{1}$. The orthogonal complement is $S^\perp = \text{span}\{\mathbf{1}\}$ (scalar multiples of $\mathbf{1}$). By the fundamental theorem of linear algebra, $\mathbb{R}^n = S \oplus S^\perp$ (direct sum).

Historical Context#

1. Gaussian elimination and centering (Gauss, 1809). Gauss used mean-centering in least squares for astronomical orbit fitting (method of least squares, published in Theoria Motus).

2. PCA (Pearson 1901, Hotelling 1933). Both Pearson’s “lines of closest fit” and Hotelling’s “principal components” assume centered data. The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ is undefined without centering (would conflate mean and variance).

3. Projection matrices (Penrose 1955, Rao 1955). The theory of orthogonal projections was formalized in the 1950s. The centering matrix $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ is a rank-$(n-1)$ projection matrix.

4. Batch normalization (Ioffe & Szegedy, 2015). Revolutionized deep learning by normalizing layer activations. The first step is centering: $\hat{x}_i = x_i - \frac{1}{B} \sum_{i=1}^B x_i$ (subtract batch mean).

History in Machine Learning#

  • 1809: Gauss applies least squares with mean-centering (astronomical data).

  • 1901: Pearson introduces PCA (assumes centered data).

  • 1933: Hotelling formalizes PCA (covariance matrix requires centering).

  • 1955: Penrose and Rao develop theory of projection matrices.

  • 2015: Batch normalization (Ioffe & Szegedy) makes centering a learned layer operation.

  • 2016: Layer normalization (Ba et al.) centers across features instead of batches.

  • 2019: Group normalization (Wu & He) centers within feature groups (used in computer vision).

Prevalence in Machine Learning#

Preprocessing: Nearly all classical ML algorithms (PCA, LDA, SVM, ridge regression) assume centered data. Scikit-learn’s StandardScaler first centers ($x \mapsto x - \bar{x}$) then scales ($x \mapsto x / \sigma$).

Deep learning normalization:

  • Batch norm: Centers and scales mini-batch statistics.

  • Layer norm: Centers and scales across feature dimension (used in Transformers).

  • Instance norm: Centers each example independently (style transfer, GANs).

Optimization: Adam optimizer maintains exponential moving averages of gradients and squared gradients. The first moment $m_t$ is effectively a centered gradient estimate.

Notes and Explanatory Details#

Shape discipline:

  • Input: $x \in \mathbb{R}^n$ (column vector).

  • All-ones vector: $\mathbf{1} \in \mathbb{R}^n$ (column vector).

  • Mean: $\bar{x} = \frac{1}{n} \mathbf{1}^\top x \in \mathbb{R}$ (scalar).

  • Centering matrix: $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \in \mathbb{R}^{n \times n}$.

  • Projected vector: $x_{\text{proj}} = Px \in \mathbb{R}^n$.

Verification of projection properties:

  1. Projects to zero-mean subspace: $\mathbf{1}^\top (Px) = \mathbf{1}^\top \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right) x = \mathbf{1}^\top x - \frac{1}{n} (\mathbf{1}^\top \mathbf{1})(\mathbf{1}^\top x) = \mathbf{1}^\top x - \mathbf{1}^\top x = 0$.

  2. Idempotent: $P^2 = \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right)^2 = I - \frac{2}{n} \mathbf{1}\mathbf{1}^\top + \frac{1}{n^2} \mathbf{1}(\mathbf{1}^\top \mathbf{1})\mathbf{1}^\top = I - \frac{2}{n} \mathbf{1}\mathbf{1}^\top + \frac{1}{n} \mathbf{1}\mathbf{1}^\top = P$.

  3. Symmetric: $P^\top = \left(I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top \right)^\top = I - \frac{1}{n} (\mathbf{1}\mathbf{1}^\top)^\top = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top = P$.

Numerical considerations: Computing $P$ explicitly (storing $n \times n$ matrix) is wasteful for large $n$. Instead, compute $Px = x - \bar{x} \mathbf{1}$ directly (subtracting the mean).

Connection to Machine Learning#

PCA. The centered data matrix $X_c = PX$ (apply $P$ to each column) ensures principal components capture variance, not mean. The covariance matrix $C = \frac{1}{n} X_c^\top X_c$ would be biased without centering.

Batch normalization. For a mini-batch $\{x_1, \ldots, x_B\}$, batch norm computes: $$ \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \quad \mu_B = \frac{1}{B} \sum_{i=1}^B x_i, \quad \sigma_B^2 = \frac{1}{B} \sum_{i=1}^B (x_i - \mu_B)^2 $$ The centering step $x_i - \mu_B$ is projection onto the zero-mean subspace.
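
A minimal sketch of just the centering step on a synthetic mini-batch (the division by $\sqrt{\sigma_B^2 + \epsilon}$ and the learned affine parameters are omitted):

import numpy as np

rng = np.random.default_rng(5)
B, d = 8, 3
X = rng.normal(loc=2.0, size=(B, d))     # synthetic mini-batch with nonzero mean

mu_B = X.mean(axis=0)                    # per-feature batch mean
X_centered = X - mu_B                    # each feature column projected onto the zero-mean subspace

print(f"batch means before: {mu_B}")
print(f"batch means after:  {X_centered.mean(axis=0)}  (~0)")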

Residuals in regression. In ordinary least squares, residuals $r = y - \hat{y} = (I - X(X^\top X)^{-1} X^\top)y$ are projections onto the orthogonal complement of $\text{col}(X)$. If $X$ includes a column of ones (intercept), residuals automatically have zero mean.
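
A quick numerical check, on synthetic data, that least-squares residuals are orthogonal to every column of $X$:

import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

w, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ w                            # residual vector

print(f"X^T r = {X.T @ r}")              # ~0: r lies in col(X)^perp
print(f"residual orthogonal to col(X): {np.allclose(X.T @ r, 0)}")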

Connection to Linear Algebra Theory#

Projection theorem. Every vector $x \in \mathbb{R}^n$ can be uniquely decomposed as $x = x_\parallel + x_\perp$ where $x_\parallel \in S$ and $x_\perp \in S^\perp$. For $S = \{x : \mathbf{1}^\top x = 0\}$, we have: $$ x_\parallel = Px = x - \bar{x} \mathbf{1}, \quad x_\perp = (I-P)x = \bar{x} \mathbf{1} $$

Rank of projection matrix. $P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top$ has rank $n-1$ because:

  • $\text{null}(P) = \text{span}\{\mathbf{1}\}$ (1D subspace).

  • By rank-nullity theorem, $\text{rank}(P) + \dim(\text{null}(P)) = n$, so $\text{rank}(P) = n-1$.

Eigenvalues of $P$. $P$ has eigenvalues $\lambda = 1$ (multiplicity $n-1$) and $\lambda = 0$ (multiplicity 1):

  • Eigenvectors with $\lambda = 1$: any $v \perp \mathbf{1}$ (orthogonal to all-ones).

  • Eigenvector with $\lambda = 0$: $v = \mathbf{1}$.

This confirms $P$ is a projection: eigenvalues are 0 or 1, characteristic of projection matrices.

Relation to covariance. The sample covariance matrix is: $$ C = \frac{1}{n} X_c^\top X_c = \frac{1}{n} (PX)^\top (PX) = \frac{1}{n} X^\top P^\top P X = \frac{1}{n} X^\top P X $$ using $P^\top = P$ and $P^2 = P$.
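
A quick numerical check of this identity on synthetic data (the explicit $P$ is formed only for verification; in practice one subtracts column means directly):

import numpy as np

rng = np.random.default_rng(7)
n, d = 30, 4
X = rng.normal(size=(n, d))

one = np.ones((n, 1))
P = np.eye(n) - (one @ one.T) / n        # centering matrix

C_centered = (X - X.mean(axis=0)).T @ (X - X.mean(axis=0)) / n
C_via_P = X.T @ P @ X / n

print(f"C from centered data equals X^T P X / n: {np.allclose(C_centered, C_via_P)}")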

Pedagogical Significance#

Concrete example of a subspace. Students often learn subspaces abstractly (“closed under addition and scaling”). This example gives a geometric and algebraic definition: $S = \{x : \mathbf{1}^\top x = 0\}$ (algebraic) is an $(n-1)$-dimensional hyperplane through the origin (geometric).

Projection as matrix multiplication. Projecting $x$ onto $S$ is simply $x_{\text{proj}} = Px$. This demystifies projections (often introduced with complicated formulas) by showing they’re linear maps.

Foundation for PCA. Understanding centering is essential before learning PCA. Many textbooks jump to “compute eigenvalues of $X^\top X$” without explaining why $X$ must be centered.

Computational perspective. Explicitly forming $P$ (storing $n^2$ entries) is wasteful. Implementing $Px$ as $x - \bar{x} \mathbf{1}$ (computing mean, subtracting) is much faster ($O(n)$ vs. $O(n^2)$).

References#

  1. Gauss, C. F. (1809). Theoria Motus Corporum Coelestium. Introduced least squares with mean-centering for orbit determination.

  2. Hotelling, H. (1933). “Analysis of a Complex of Statistical Variables into Principal Components.” Journal of Educational Psychology, 24(6), 417–441. Formalized PCA (assumes centered data).

  3. Ioffe, S., & Szegedy, C. (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” ICML 2015. Introduced batch normalization (centering + scaling layer activations).

  4. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press. Chapter 4 covers projection matrices and least squares.

  5. Horn, R. A., & Johnson, C. R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. Section 2.5 covers idempotent matrices (projections).

Problem. Project $x$ onto $S = \{x \in \mathbb{R}^n : \mathbf{1}^\top x = 0\}$ (zero-mean subspace).

Solution (math).

$S$ is a subspace (null space of $\mathbf{1}^\top$). The projection matrix is: $$ P = I - \frac{1}{n} \mathbf{1}\mathbf{1}^\top $$

Applying $P$ to $x$ gives: $$ x_{\text{proj}} = Px = x - \frac{1}{n} (\mathbf{1}^\top x) \mathbf{1} = x - \bar{x} \mathbf{1} $$ where $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$ is the mean of $x$. Verification: $\mathbf{1}^\top x_{\text{proj}} = \mathbf{1}^\top (x - \bar{x} \mathbf{1}) = \mathbf{1}^\top x - n \bar{x} = 0$.

Solution (Python).

import numpy as np

# Define vector x in R^5
n = 5
x = np.array([3., 1., 0., -2., 4.])

# Centering matrix P = I - (1/n) * 1 * 1^T
I = np.eye(n)
one = np.ones((n, 1))
P = I - (1 / n) * (one @ one.T)

# Project x onto zero-mean subspace
x_proj = P @ x

print(f"Original x: {x}")
print(f"Mean of x: {x.mean():.2f}")
print(f"Projected x_proj: {x_proj}")
print(f"Mean of x_proj: {x_proj.sum():.2e} (should be ~0)")
print(f"Verification: 1^T @ x_proj = {one.T @ x_proj.reshape(-1, 1)[0, 0]:.2e}")

Output:

Original x: [ 3.  1.  0. -2.  4.]
Mean of x: 1.20
Projected x_proj: [ 1.8 -0.2 -1.2 -3.2  2.8]
Mean of x_proj: 0.00e+00 (should be ~0)
Verification: 1^T @ x_proj = 0.00e+00

Worked Example 3: Model outputs form range(X)#

Introduction#

In linear regression $\hat{y} = Xw$, all possible predictions lie in the column space (range) of the feature matrix $X$. This fundamental constraint determines model expressiveness: if the target $y \notin \text{col}(X)$, the model cannot fit perfectly (residual is nonzero). Understanding $\text{col}(X)$ as a subspace clarifies when adding features helps, when features are redundant (linearly dependent), and how model capacity relates to matrix rank.

Purpose#

  • Identify $\text{col}(X)$ as a subspace: All vectors $Xw$ (for $w \in \mathbb{R}^d$) form a subspace of $\mathbb{R}^n$.

  • Relate expressiveness to rank: $\dim(\text{col}(X)) = \text{rank}(X) \leq \min(n, d)$.

  • Connect to ML: Model predictions span a $\text{rank}(X)$-dimensional subspace. If $\text{rank}(X) < n$, the model cannot fit arbitrary targets.

Importance#

Underdetermined vs. overdetermined systems.

  • Underdetermined ($n < d$): More features than examples. $\text{rank}(X) \leq n < d$, so infinitely many solutions exist (null space is nontrivial).

  • Overdetermined ($n > d$): More examples than features. $\text{rank}(X) \leq d < n$, so exact fit is impossible unless $y \in \text{col}(X)$ (rare). Least squares finds best approximation.

Multicollinearity. If $\text{rank}(X) < d$, features are linearly dependent (redundant). Example: including both “temperature in Celsius” and “temperature in Fahrenheit” as features makes $X$ rank-deficient. Solutions are non-unique ($w + v$ is also a solution for any $v \in \text{null}(X)$).
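
A sketch of the Celsius/Fahrenheit example; because Fahrenheit is an affine function of Celsius, an intercept column is included below so the dependence becomes exactly linear:

import numpy as np

celsius = np.array([0., 10., 20., 30., 40.])
intercept = np.ones(5)
X = np.column_stack([intercept, celsius])

fahrenheit = 1.8 * celsius + 32.0        # exact linear combination of the existing columns
X_aug = np.column_stack([X, fahrenheit])

print(f"rank before adding Fahrenheit: {np.linalg.matrix_rank(X)}")      # 2
print(f"rank after adding Fahrenheit:  {np.linalg.matrix_rank(X_aug)}")  # still 2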

What This Example Demonstrates#

  • Column space = set of all predictions: $\{Xw : w \in \mathbb{R}^d\} = \text{col}(X) = \text{span}\{x_1, \ldots, x_d\}$ where $x_j$ are columns of $X$.

  • Rank determines dimension: $\dim(\text{col}(X)) = \text{rank}(X)$.

  • NumPy verification: np.linalg.matrix_rank(X) computes rank via SVD.

Background#

Fundamental Theorem of Linear Algebra (Strang). For $A \in \mathbb{R}^{m \times n}$:

  1. $\text{col}(A)$ and $\text{null}(A^\top)$ are orthogonal complements in $\mathbb{R}^m$: $\mathbb{R}^m = \text{col}(A) \oplus \text{null}(A^\top)$.

  2. $\text{col}(A^\top)$ and $\text{null}(A)$ are orthogonal complements in $\mathbb{R}^n$: $\mathbb{R}^n = \text{col}(A^\top) \oplus \text{null}(A)$.

  3. $\dim(\text{col}(A)) = \dim(\text{col}(A^\top)) = \text{rank}(A)$.

  4. Rank-nullity theorem: $\text{rank}(A) + \dim(\text{null}(A)) = n$.

Historical Context: The notion of matrix rank goes back to the work of Sylvester and Frobenius in the nineteenth century, but the geometric interpretation as “dimension of column space” became standard only in the 20th century with abstract linear algebra.

Connection to Machine Learning#

Regularization and identifiability. If $\text{rank}(X) < d$, the normal equations $X^\top X w = X^\top y$ are singular ($X^\top X$ is not invertible). Ridge regression adds $\lambda I$ to make $(X^\top X + \lambda I)$ invertible, effectively restricting solutions to a preferred subspace.
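
A sketch, on synthetic collinear features, of a singular $X^\top X$ becoming invertible once $\lambda I$ is added (the value of $\lambda$ is arbitrary):

import numpy as np

rng = np.random.default_rng(8)
x1 = rng.normal(size=10)
X = np.column_stack([x1, 2 * x1, rng.normal(size=10)])   # first two columns collinear
y = rng.normal(size=10)

XtX = X.T @ X
print(f"rank(X^T X) = {np.linalg.matrix_rank(XtX)} (out of {X.shape[1]})")   # 2 < 3: singular

lam = 0.1
w_ridge = np.linalg.solve(XtX + lam * np.eye(3), X.T @ y)  # solvable for any lam > 0
print(f"ridge solution: {w_ridge}")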

Feature selection. Adding a feature that’s a linear combination of existing features (e.g., $x_{\text{new}} = 2x_1 + 3x_2$) does not increase $\text{rank}(X)$ or model capacity. Feature selection algorithms (LASSO, forward selection) aim to maximize rank while minimizing redundancy.

Low-rank approximation. If $\text{rank}(X) \ll \min(n, d)$, truncated SVD $X \approx U_k \Sigma_k V_k^\top$ captures most information with $k \ll d$ features. This is the basis of PCA, autoencoders, and matrix factorization (recommender systems).

References#

  1. Strang, G. (2016). Introduction to Linear Algebra (5th ed.). Wellesley–Cambridge Press. Chapter 3: “The Four Fundamental Subspaces.”

  2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. Chapter 2: “Linear Algebra” (discusses rank and span).

  3. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning (2nd ed.). Springer. Section 3.4: “Shrinkage Methods” (ridge regression, handling rank deficiency).

Problem. Interpret $\{Xw : w \in \mathbb{R}^d\}$ as a subspace and compute its dimension.

Solution (math).

The set $\{Xw : w \in \mathbb{R}^d\}$ is the column space (range) of $X$: $$ \text{col}(X) = \{Xw : w \in \mathbb{R}^d\} = \text{span}\{x_1, \ldots, x_d\} $$ where $x_j \in \mathbb{R}^n$ are the columns of $X \in \mathbb{R}^{n \times d}$. The dimension is: $$ \dim(\text{col}(X)) = \text{rank}(X) \leq \min(n, d) $$

Solution (Python).

import numpy as np

# Define feature matrix X (3 examples, 2 features)
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Compute rank (dimension of column space)
rank = np.linalg.matrix_rank(X)

print(f"X shape: {X.shape}")
print(f"X =\n{X}")
print(f"Rank(X) = {rank}")
print(f"Column space dimension = {rank}")
print(f"Predictions Xw span a {rank}-dimensional subspace of R^{X.shape[0]}")

Output:

X shape: (3, 2)
X =
[[1. 0.]
 [0. 1.]
 [1. 1.]]
Rank(X) = 2
Column space dimension = 2
Predictions Xw span a 2-dimensional subspace of R^3

Worked Example 4: Bias trick (affine → linear)#

Introduction#

Most machine learning models use affine transformations: $f(x) = Wx + b$ where $W$ is a weight matrix and $b$ is a bias vector. Affine maps are not linear (they don’t map zero to zero if $b \neq 0$), but there’s an elegant trick: augment the input with a constant 1, turning $f(x) = Wx + b$ into a purely linear map $f(x') = W' x'$ in a higher-dimensional space.

This “bias trick” (also called “homogeneous coordinates”) is ubiquitous in ML: neural network layers, logistic regression, SVMs, and computer graphics all use it. It simplifies implementation (one matrix multiply instead of multiply + add) and unifies the treatment of weights and biases.

Purpose#

  • Unify affine and linear maps: Convert $f(x) = Wx + b$ to $f(x') = W'x'$ where $x' = [x; 1]$ (augmented input) and $W' = [W \,|\, b]$ (concatenated weight matrix and bias).

  • Simplify backpropagation: Gradients w.r.t. $W'$ handle both weights and biases uniformly.

  • Connect to projective geometry: Homogeneous coordinates in computer graphics use the same augmentation.

Importance#

Neural network implementation. Every linear layer $h = Wx + b$ can be written as $h = W'x'$ where $x' = [x; 1]$ and $W' = [W \,|\, b]$. Many frameworks (PyTorch, TensorFlow) handle biases separately for efficiency, but conceptually this augmentation clarifies the math.

Logistic regression. The decision boundary $w^\top x + b = 0$ becomes $w'^\top x' = 0$ in augmented space, a linear classifier (hyperplane through the origin in $\mathbb{R}^{d+1}$).

Conditioning and regularization. Regularizing $\|w\|_2^2$ without penalizing $b$ (common practice) is harder to express if $w$ and $b$ are combined. Keeping them separate maintains flexibility, but the augmentation perspective clarifies that they live in different subspaces.

What This Example Demonstrates#

  • Affine maps become linear in augmented space: $f(x) = Wx + b$ (affine in $\mathbb{R}^d$) equals $f(x') = W'x'$ (linear in $\mathbb{R}^{d+1}$).

  • Augmentation preserves structure: Adding a constant 1 extends the input space without losing information.

  • Numerical verification: Compute both $Wx + b$ and $W'x'$, verify they’re identical.

Background#

Affine vs. linear. A map $f: \mathbb{R}^d \to \mathbb{R}^m$ is:

  • Linear if $f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$ for all $x, y, \alpha, \beta$. Equivalently, $f(x) = Ax$ for some matrix $A$.

  • Affine if $f(x) = Ax + b$ for some matrix $A$ and vector $b$. Affine maps preserve affine combinations (weighted averages with weights summing to 1) but not arbitrary linear combinations.

Homogeneous coordinates. In computer graphics, 3D points $(x, y, z)$ are represented as 4-vectors $(x, y, z, 1)$ to handle translations uniformly. The last coordinate acts as a “scaling factor” (1 for ordinary points, 0 for vectors). This is exactly the bias trick.
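
A tiny sketch of the graphics version of the same idea: translating a 3D point, an affine operation in $\mathbb{R}^3$, becomes a single $4 \times 4$ matrix multiply in homogeneous coordinates (the point and translation below are arbitrary):

import numpy as np

p = np.array([1., 2., 3.])                 # ordinary 3D point
t = np.array([10., 0., -5.])               # translation vector

# 4x4 homogeneous translation matrix: identity block plus t in the last column
T = np.eye(4)
T[:3, 3] = t

p_h = np.r_[p, 1.]                         # homogeneous coordinates (x, y, z, 1)
q_h = T @ p_h

print(q_h[:3])                             # [11.  2. -2.], i.e., p + t
print(np.allclose(q_h[:3], p + t))         # True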

Historical Context: Homogeneous coordinates were introduced by August Ferdinand Möbius (1827) for projective geometry. Their use in ML is a modern application of this classical idea.

Connection to Machine Learning#

Deep neural networks. Each layer computes $h_{l+1} = \sigma(W_l h_l + b_l)$ where $\sigma$ is a nonlinearity (ReLU, sigmoid). The affine transformation $W_l h_l + b_l$ can be written as $W'_l h'_l$ with $h'_l = [h_l; 1]$.

Batch processing. For a mini-batch $X \in \mathbb{R}^{B \times d}$ (rows are examples), the transformation $Y = XW^\top + \mathbf{1} b^\top$ (broadcasting bias) becomes $Y = X' W'^\top$ where $X' = [X \,|\, \mathbf{1}]$ (augmented batch) and $W' = [W \,|\, b]$.
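
A minimal sketch checking this batch identity numerically (the shapes $B = 4$, $d = 3$, $m = 2$ are illustrative):

import numpy as np

rng = np.random.default_rng(2)
B, d, m = 4, 3, 2
X = rng.standard_normal((B, d))
W = rng.standard_normal((m, d))
b = rng.standard_normal(m)

# Standard form: bias broadcasts across the batch
Y1 = X @ W.T + b                           # shape (B, m)

# Augmented form: X' = [X | 1], W' = [W | b]
X_aug = np.c_[X, np.ones(B)]               # shape (B, d+1)
W_aug = np.c_[W, b]                        # shape (m, d+1)
Y2 = X_aug @ W_aug.T                       # shape (B, m)

print(np.allclose(Y1, Y2))                 # True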

Regularization subtlety. Ridge regression penalizes $\|w\|_2^2$ but not $b$ (bias is unregularized). If we augment and use $w' = [w; b]$, regularizing $\|w'\|_2^2$ would incorrectly penalize the bias. This is why most implementations keep $w$ and $b$ separate despite the conceptual elegance of augmentation.

References#

  1. Möbius, A. F. (1827). Der barycentrische Calcul. Introduced homogeneous coordinates for projective geometry.

  2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. Section 6.1: “Feedforward Networks” (linear layers with biases).

  3. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. Section 3.1: “Linear Regression” (discusses augmentation for intercept terms).

Problem. Rewrite $f(x) = Wx + b$ as a linear map in augmented space.

Solution (math).

Define the augmented input $x' \in \mathbb{R}^{d+1}$: $$ x' = \begin{bmatrix} x \\ 1 \end{bmatrix} $$

and the augmented weight matrix $W' \in \mathbb{R}^{m \times (d+1)}$: $$ W' = \begin{bmatrix} W & b \end{bmatrix} $$

Then: $$ f(x) = Wx + b = W' x' = \begin{bmatrix} W & b \end{bmatrix} \begin{bmatrix} x \\ 1 \end{bmatrix} = Wx + b \cdot 1 $$

This is a linear map in $\mathbb{R}^{d+1}$ (no bias term needed).

Solution (Python).

import numpy as np

# Define weight matrix W (2x2) and bias vector b (2,)
W = np.array([[2., 1.],
              [-1., 3.]])
b = np.array([0.5, -2.])

# Input vector x (2,)
x = np.array([1., 4.])

# Standard affine transformation: Wx + b
y_affine = W @ x + b

# Augmented transformation: W' @ x'
# W' = [W | b] (concatenate b as a column)
W_aug = np.c_[W, b]  # Shape: (2, 3)

# x' = [x; 1] (append 1)
x_aug = np.r_[x, 1.]  # Shape: (3,)

# Linear transformation in augmented space
y_linear = W_aug @ x_aug

print(f"W =\n{W}")
print(f"b = {b}")
print(f"x = {x}\n")

print(f"Affine: Wx + b = {y_affine}")
print(f"\nAugmented W' =\n{W_aug}")
print(f"Augmented x' = {x_aug}")
print(f"Linear: W'x' = {y_linear}\n")

print(f"Are they equal? {np.allclose(y_affine, y_linear)}")

Output:

W =
[[ 2.  1.]
 [-1.  3.]]
b = [ 0.5 -2. ]
x = [1. 4.]

Affine: Wx + b = [ 6.5 9. ]

Augmented W' =
[[ 2.   1.   0.5]
 [-1.   3.  -2. ]]
Augmented x' = [1. 4. 1.]
Linear: W'x' = [ 6.5 9. ]

Are they equal? True

Worked Example 5: Attention outputs lie in span(V)#

Introduction#

The attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) is the core operation in Transformers, powering modern LLMs (GPT, BERT, LLaMA), vision models (ViT), and multimodal systems (CLIP, Flamingo). Attention computes weighted sums of value vectors $V$, with weights determined by query-key similarities. Crucially, attention outputs are constrained to lie in $\text{span}(V)$ — they cannot “invent” information outside the value subspace.

This example demonstrates that attention is a linear combination operation: the output $z = \sum_{i=1}^n \alpha_i v_i$ (where $\alpha_i$ are softmax-normalized attention scores) always lies in $\text{span}\{v_1, \ldots, v_n\}$, a subspace of $\mathbb{R}^{d_v}$.

Purpose#

  • Understand attention as weighted averaging: Output = $\sum_{i=1}^n \alpha_i v_i$ with $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$ (convex combination).

  • Identify the constraint: Attention outputs lie in $\text{span}(V)$, limiting expressiveness to the value subspace.

  • Connect to ML: Multi-head attention learns multiple subspaces (heads) in parallel, increasing capacity.

Importance#

Transformer architecture. Attention is the primary operation in Transformers, replacing recurrence (RNNs) and convolution (CNNs). Each layer computes: $$ \text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right) V $$ where $Q$ (queries), $K$ (keys), and $V$ (values) are linear projections of inputs. The output is a weighted sum of value vectors, constrained to $\text{span}(V)$.

Multi-head attention. Splitting into $h$ heads projects to $h$ different subspaces: $$ \text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h) W^O $$ where $\text{head}_i = \text{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$. Each head operates in a different $(d_v / h)$-dimensional subspace.

Expressiveness vs. efficiency. Attention can only combine existing values (linear combinations), not generate new directions. This is a feature, not a bug: it provides inductive bias (outputs depend on inputs) and computational efficiency (matrix multiplications).

What This Example Demonstrates#

  • Attention is linear combination: For softmax weights $\alpha \in \mathbb{R}^{1 \times n}$ and value matrix $V \in \mathbb{R}^{n \times d_v}$, the output $z = \alpha V \in \mathbb{R}^{1 \times d_v}$ is a linear combination of rows of $V$.

  • Outputs lie in subspace: $z \in \text{span}\{v_1, \ldots, v_n\}$ where $v_i \in \mathbb{R}^{d_v}$ are rows of $V$.

  • Convex combination: Since $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$, $z$ lies in the convex hull of $\{v_1, \ldots, v_n\}$.

Background#

Attention mechanism history:

  1. Bahdanau attention (2015): Introduced for neural machine translation. Computes alignment scores between encoder hidden states and decoder state, uses weighted sum for context.

  2. Scaled dot-product attention (Vaswani 2017): Simplified to $\text{softmax}(QK^\top / \sqrt{d_k})V$, enabling parallelization and scaling.

  3. Multi-head attention (Vaswani 2017): Projects to multiple subspaces (heads), learns different relationships.

Mathematical interpretation: Attention is a content-based addressing mechanism: queries “look up” relevant keys, retrieve corresponding values. The softmax ensures smooth interpolation (differentiable, convex weights).

Relation to kernels: Attention can be viewed as a kernel method where $K(q, k) = \exp(q^\top k / \sqrt{d_k})$ (unnormalized softmax). The output is a weighted sum in the kernel space (span of values).

Connection to Machine Learning#

Self-attention in Transformers. For input sequence $X \in \mathbb{R}^{n \times d}$, self-attention computes $Q = XW^Q$, $K = XW^K$, $V = XW^V$. Each output token is a linear combination of all input tokens’ values, weighted by query-key similarities.
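
A short sketch of single-head self-attention on random data, checking numerically that every output row stays in the span of the rows of $V$ (all weight matrices and inputs are randomly generated for illustration):

import numpy as np

rng = np.random.default_rng(3)
n, d, d_k, d_v = 3, 8, 4, 6                # 3 tokens, 6-dimensional values
X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((d, d_k))
W_K = rng.standard_normal((d, d_k))
W_V = rng.standard_normal((d, d_v))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V

# Scaled dot-product attention with a row-wise softmax
scores = Q @ K.T / np.sqrt(d_k)                       # shape (n, n)
scores -= scores.max(axis=1, keepdims=True)           # for numerical stability
A = np.exp(scores)
A /= A.sum(axis=1, keepdims=True)                     # each row: convex weights

Z = A @ V                                             # shape (n, d_v) attention outputs

# The rows of V span (at most) a 3-dimensional subspace of R^6.
# Stacking the outputs under V does not increase the rank,
# so every output row lies in the span of the rows of V.
print(np.linalg.matrix_rank(V))                       # 3
print(np.linalg.matrix_rank(np.vstack([V, Z])))       # still 3
print(np.allclose(A.sum(axis=1), 1.0))                # weights sum to 1: True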

Cross-attention in encoder-decoder models. Decoder queries attend to encoder keys/values. Example: in machine translation, the decoder (target language) attends to encoder representations (source language). Outputs lie in the span of encoder values.

Positional embeddings. Since attention is permutation-invariant (outputs are linear combinations regardless of input order), position information must be injected via positional encodings $p_i \in \mathbb{R}^d$ added to embeddings. This augments the value subspace.

Computational complexity. Attention requires $O(n^2 d_k)$ operations to form the $n \times n$ score matrix $QK^\top$ and $O(n^2 d_v)$ to multiply it by the value matrix. For long sequences (e.g., books, genomic data), this quadratic cost becomes prohibitive, motivating sparse attention and low-rank approximations.

Connection to Linear Algebra Theory#

Convex combinations and convex hulls. If $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$, then $z = \sum_i \alpha_i v_i$ lies in the convex hull $\text{conv}\{v_1, \ldots, v_n\}$, the smallest convex set containing all $v_i$. Geometrically, this is a polytope (bounded region) in $\mathbb{R}^{d_v}$.

Rank of attention output. The attention output matrix $Z = \text{softmax}(QK^\top / \sqrt{d_k}) V \in \mathbb{R}^{n \times d_v}$ has $\text{rank}(Z) \leq \min(n, \text{rank}(V))$. If $V$ is low-rank, attention cannot increase rank (outputs lie in a low-dimensional subspace).

Projection interpretation. If the attention weights concentrate entirely on one token ($\alpha_i = 1$, $\alpha_j = 0$ for $j \neq i$), the output is exactly $v_i$: attention selects a single value vector. Uniform weights ($\alpha_i = 1/n$) give the centroid $\bar{v} = \frac{1}{n} \sum_i v_i$ of the value vectors.

Orthogonality and attention scores. A large dot product $q^\top k$ indicates alignment between $q$ and $k$ (for fixed norms, a small angle). Orthogonal query-key pairs ($q^\top k = 0$) receive less attention weight than aligned pairs, and anti-aligned pairs ($q^\top k < 0$) receive even less after the softmax. This is the same geometric intuition as inner products measuring similarity.

Pedagogical Significance#

Concrete example of span. Attention outputs visibly demonstrate that linear combinations $\sum_i \alpha_i v_i$ lie in $\text{span}\{v_1, \ldots, v_n\}$. Students can compute actual numbers and verify the output is expressible as a weighted sum.

Geometric visualization. For 2D or 3D value vectors, plot $\{v_1, \ldots, v_n\}$ and the attention output $z$. $z$ lies inside the convex hull (polytope formed by connecting $v_i$).
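
A plotting sketch along these lines, assuming matplotlib is available; the value vectors and weights match the worked solution below:

import numpy as np
import matplotlib.pyplot as plt

V = np.array([[1., 0.], [0., 1.], [2., 2.]])   # three 2-D value vectors
alpha = np.array([0.2, 0.5, 0.3])              # softmax weights (sum to 1)
z = alpha @ V                                  # attention output, [0.8, 1.1]

fig, ax = plt.subplots()
hull = np.vstack([V, V[:1]])                   # close the triangle v1 -> v2 -> v3 -> v1
ax.plot(hull[:, 0], hull[:, 1], "k--", lw=1, label="convex hull of values")
ax.scatter(V[:, 0], V[:, 1], c="tab:blue", label="value vectors v_i")
ax.scatter(*z, c="tab:red", marker="*", s=150, label="attention output z")
ax.set_xlabel("dimension 1")
ax.set_ylabel("dimension 2")
ax.set_title("Attention output lies in the convex hull of the values")
ax.legend()
plt.show()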

Foundation for Transformers. Understanding attention as linear combination clarifies:

  • Why attention is permutation-invariant: Linear combinations don’t depend on order.

  • Why multi-head attention helps: Different heads explore different subspaces.

  • Limitations: Attention can only interpolate existing values, not generate new directions (nonlinearity comes from layer stacking and feedforward networks).

References#

  1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). “Attention is All You Need.” NeurIPS 2017. Introduced Transformer architecture with scaled dot-product attention and multi-head attention.

  2. Bahdanau, D., Cho, K., & Bengio, Y. (2015). “Neural Machine Translation by Jointly Learning to Align and Translate.” ICLR 2015. First attention mechanism for seq2seq models.

  3. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL 2019. Bidirectional self-attention for masked language modeling.

  4. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., … & Houlsby, N. (2020). “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” ICLR 2021. Vision Transformers (ViT) apply attention to image patches.

  5. Tay, Y., Dehghani, M., Bahri, D., & Metzler, D. (2020). “Efficient Transformers: A Survey.” arXiv:2009.06732. Reviews sparse attention, low-rank approximations, and other efficiency techniques.

Problem. Show attention output is in the span of value vectors.

Solution (math).

For value matrix $V \in \mathbb{R}^{n \times d_v}$ (rows $v_1, \ldots, v_n$) and attention weights $\alpha \in \mathbb{R}^{1 \times n}$ (from softmax, so $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$), the attention output is: $$ z = \alpha V = \sum_{i=1}^n \alpha_i v_i \in \mathbb{R}^{1 \times d_v} $$

This is a convex combination of the rows of $V$; identifying the $1 \times d_v$ row with a vector in $\mathbb{R}^{d_v}$, we have $z \in \text{span}\{v_1, \ldots, v_n\} \subseteq \mathbb{R}^{d_v}$.

Solution (Python).

import numpy as np

# Define value matrix V (3 tokens, 2-dimensional values)
V = np.array([[1., 0.],   # v_1
              [0., 1.],   # v_2
              [2., 2.]])  # v_3

# Define attention weights (from softmax, sum to 1)
A = np.array([[0.2, 0.5, 0.3]])  # Shape: (1, 3)

# Compute attention output z = A @ V
z = A @ V  # Shape: (1, 2)

print(f"Value matrix V (rows are values):\n{V}\n")
print(f"Attention weights A = {A[0]} (sum = {A.sum():.1f})\n")
print(f"Attention output z = A @ V = {z[0]}")
print(f"\nVerification as linear combination:")
print(f"z = {A[0,0]}*v_1 + {A[0,1]}*v_2 + {A[0,2]}*v_3")
print(f"  = {A[0,0]}*{V[0]} + {A[0,1]}*{V[1]} + {A[0,2]}*{V[2]}")
print(f"  = {A[0,0]*V[0] + A[0,1]*V[1] + A[0,2]*V[2]}")
print(f"\nz lies in span(V): True (z is a linear combination of rows of V)")

Output:

Value matrix V (rows are values):
[[1. 0.]
 [0. 1.]
 [2. 2.]]

Attention weights A = [0.2 0.5 0.3] (sum = 1.0)

Attention output z = A @ V = [0.8 1.1]

Verification as linear combination:
z = 0.2*v_1 + 0.5*v_2 + 0.3*v_3
  = 0.2*[1. 0.] + 0.5*[0. 1.] + 0.3*[2. 2.]
  = [0.8 1.1]

z lies in span(V): True (z is a linear combination of rows of V)
