Chapter 15
Sparse Matrices
Key ideas: Introduction

Introduction#

Most real-world data matrices are sparse: social networks are sparse graphs; text is sparse (a document uses only hundreds of words from a million-word vocabulary); images are sparse in wavelet domains; scientific simulations generate sparse discretizations. Dense matrices require $O(n^2)$ memory and $O(n^3)$ computation for factorization; sparse matrices require only $O(\text{nnz})$ memory. A sparse matrix with $\text{nnz} = O(n)$ or $O(n \log n)$ reduces complexity from cubic to linear or nearly linear. This is transformational: it enables algorithms that would be intractable on dense representations. Exploiting sparsity requires careful algorithm design: level-of-fill control, reordering strategies, specialized data structures, and communication patterns in distributed solvers.

Important ideas#

  1. Sparse matrix representations

    • Compressed Sparse Row (CSR): store nonzeros by row; three arrays (values, column indices, row pointers); $O(\text{nnz} + n)$ memory.

    • Compressed Sparse Column (CSC): symmetric role for columns; efficient for column access.

    • Coordinate format (COO): list of $(i, j, v)$ triples; inefficient for computation but flexible for construction.

    • Diagonal, tridiagonal, block-structured: exploit special structure for even greater efficiency.
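To make the CSR layout concrete, a minimal sketch on a small matrix (using scipy.sparse, which the worked examples below also use):

```python
import numpy as np
import scipy.sparse as sp

# 4x4 matrix with 5 nonzeros
A = np.array([[10., 0., 0., 2.],
              [ 0., 3., 0., 0.],
              [ 0., 0., 0., 0.],
              [ 7., 0., 8., 0.]])
A_csr = sp.csr_matrix(A)

print(A_csr.data)     # values in row-major order: [10.  2.  3.  7.  8.]
print(A_csr.indices)  # column index of each value: [0 3 1 0 2]
print(A_csr.indptr)   # row i occupies data[indptr[i]:indptr[i+1]]: [0 2 3 3 5]
```

Note that the empty row 2 costs nothing beyond a repeated pointer (`indptr[2] == indptr[3]`), which is why CSR storage is $O(\text{nnz} + n)$.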

  2. Sparsity patterns and fill-in

    • Reachability: a nonzero $(i, j)$ entry corresponds to an edge $i \to j$ in the matrix's graph; paths in this graph govern fill-in and reveal the communication pattern.

    • Fill-in: $L, U$ in $A = LU$ can have more nonzeros than $A$; reordering (minimum degree, nested dissection) minimizes fill-in.

    • Symbolic factorization: determine nonzero pattern of $L, U$ without computing values.
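The classic illustration of ordering-dependent fill-in is the arrow matrix: eliminating the dense row/column first fills the factor completely, while a fill-reducing ordering eliminates it last and creates no fill. A small sketch using SciPy's SuperLU wrapper (exact fill counts vary slightly with pivoting):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
# Arrow matrix: dense first row/column plus a strong diagonal
A = sp.lil_matrix((n, n))
A.setdiag(4.0)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = float(n)  # keep the matrix diagonally dominant
A = A.tocsc()

lu_nat = spla.splu(A, permc_spec='NATURAL')  # eliminates the dense node first
lu_amd = spla.splu(A, permc_spec='COLAMD')   # fill-reducing ordering
print("nnz(A):", A.nnz)                                 # 3n - 2
print("natural ordering, nnz(L)+nnz(U):", lu_nat.nnz)   # ~n^2: complete fill
print("COLAMD ordering,  nnz(L)+nnz(U):", lu_amd.nnz)   # stays O(n)
```

Worked Example 2 below measures the same effect on a larger system.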

  3. Matrix-vector products and efficient computation

    • Dense $y = A x$ costs $O(n^2)$; sparse costs $O(\text{nnz})$.

    • SpMV (sparse matrix-vector product) is bandwidth-limited on modern hardware; careful data layout crucial.

    • GPU sparse kernels: cuSPARSE; CPU sparse kernels: MKL Sparse BLAS, OpenMP parallelization.
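The $O(\text{nnz})$ SpMV cost is visible in the kernel itself, which touches each stored value exactly once. A pure-Python sketch (SciPy is used only to build a test matrix and check the result):

```python
import numpy as np
import scipy.sparse as sp

A = sp.random(200, 200, density=0.05, format='csr', random_state=0)
x = np.random.default_rng(0).standard_normal(200)

def spmv_csr(data, indices, indptr, x):
    """y = A @ x for a CSR matrix: one pass over the nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

y = spmv_csr(A.data, A.indices, A.indptr, x)
print(np.allclose(y, A @ x))  # True
```

The indirect load `x[indices[k]]` is exactly why SpMV is bandwidth-limited: each nonzero triggers an irregular memory access.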

  4. Direct sparse solvers

    • LU with partial pivoting: reorder to minimize fill-in, apply pivoting.

    • Cholesky for SPD: symmetric structure implies symmetric reordering; often very sparse.

    • Supernodal methods: group small dense blocks for cache efficiency; key to modern solvers (PARDISO, SuperLU, UMFPACK).

  5. Iterative sparse solvers

    • CG, GMRES, MINRES: require only matrix-vector products; no explicit factorization.

    • Preconditioning essential: ILU (incomplete LU), IC (incomplete Cholesky), algebraic multigrid (AMG) enable convergence.

    • Matrix-free methods: supply only matrix-vector product function; never form or store matrix.
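A matrix-free solve can be sketched with SciPy's LinearOperator: CG only ever calls the supplied matvec, so the matrix is never formed or stored. Here the operator applies the 1D Laplacian stencil directly (a minimal sketch; real applications wrap a PDE stencil or a Hessian-vector product):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 500

def laplacian_matvec(x):
    # Tridiagonal stencil [-1, 2, -1] applied without forming the matrix
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

A = LinearOperator((n, n), matvec=laplacian_matvec)
b = np.ones(n)
x, info = cg(A, b, maxiter=10 * n)
print("converged:", info == 0)
print("relative residual:", np.linalg.norm(laplacian_matvec(x) - b) / np.linalg.norm(b))
```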

  6. Graph algorithms and adjacency matrices

    • Adjacency matrix $A$ where $A_{ij} = 1$ if edge $(i, j)$ exists; number of nonzeros = $2 \times$ edges (undirected).

    • Shortest paths, PageRank, diffusion: powers of the adjacency matrix encode path counts and reachability.

    • Laplacian matrix $L = D - A$ (degree diagonal minus adjacency); eigenvectors yield graph partitions (spectral clustering).
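The spectral-clustering claim can be checked on a toy graph: two 5-node cliques joined by one bridge edge. The sign pattern of the Laplacian's second eigenvector (the Fiedler vector) recovers the two clusters. A minimal sketch:

```python
import numpy as np

# Two 5-node cliques connected by a single bridge edge (node 4 -- node 5)
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A
evals, evecs = np.linalg.eigh(L)         # eigenvalues in ascending order
print("smallest eigenvalue ~ 0:", abs(evals[0]) < 1e-10)
fiedler = evecs[:, 1]                    # eigenvector of second-smallest eigenvalue
print("cluster labels:", (fiedler > 0).astype(int))  # splits the two cliques
```

The sign orientation of the eigenvector is arbitrary, but nodes 0-4 always land on one side and nodes 5-9 on the other.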

  7. Storage and computational complexity trade-offs

    • Dense $A \in \mathbb{R}^{n \times n}$: $O(n^2)$ memory, $O(n^3)$ LU, $O(n^2)$ matrix-vector product.

    • Sparse $A$ with $\text{nnz} = O(n)$: $O(n)$ memory, $O(n^{1.5})$ LU with nested-dissection reordering (for 2D mesh problems), $O(n)$ SpMV.

    • For $n = 10^6$: a dense matrix requires $\sim 8$ TB; sparse CSR with $\text{nnz} = 10^7$ requires on the order of 100 MB.
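The arithmetic behind the last bullet, as a quick check (assuming float64 values and int32 indices):

```python
n, nnz = 10**6, 10**7
dense_bytes = 8 * n * n                       # n^2 float64 entries
csr_bytes = 8 * nnz + 4 * nnz + 4 * (n + 1)   # values + column indices + row pointers
print(f"dense: {dense_bytes / 1e12:.0f} TB")  # 8 TB
print(f"CSR:   {csr_bytes / 1e9:.2f} GB")     # ~0.12 GB
```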

Relevance to ML#

  • Text and NLP: Document-term matrices are sparse (million words, hundreds per document); sparse SVD enables topic models.

  • Graphs and networks: Social networks, citation graphs, knowledge graphs are sparse; graph neural networks leverage adjacency matrix sparsity.

  • Recommender systems: User-item ratings sparse ($\sim 0.1\%$ observed); matrix completion via sparse factorization.

  • Inverse problems and imaging: Medical imaging, seismic inversion, tomography discretizations are sparse; sparse solvers essential.

  • Large-scale optimization: SGD and proximal methods operate on sparse data; distributed solvers exploit communication sparsity.

Algorithmic development (milestones)#

  • 1960s: Sparse matrix computation recognized as practical necessity in structural engineering (finite element analysis).

  • 1967: Tinney & Walker develop minimum degree ordering; George (1973) extends to nested dissection.

  • 1979: Duff develops efficient symbolic factorization algorithms.

  • 1980s: Supernodal methods developed (Ashcraft, Eisenstat); enable cache-efficient dense kernels within sparse framework.

  • 1990s: PARDISO, SuperLU, UMFPACK emerge as production sparse solvers.

  • 2000s: Algebraic multigrid (AMG) becomes standard preconditioner for sparse iterative methods.

  • 2005–2010: GPU sparse kernels (cuSPARSE) enable acceleration; sparse matrix formats adapt to GPU memory.

  • 2010s: Sparse deep learning (neural networks on graph-structured data); graph convolutional networks (GCN), graph attention networks (GAT) exploit adjacency matrix sparsity.

Definitions#

  • Sparse matrix: $A \in \mathbb{R}^{n \times n}$ with $\text{nnz}(A) \ll n^2$ nonzeros; typically $\text{nnz} = O(n)$ or $O(n \log n)$.

  • Sparsity: $\text{sparsity} = 1 - \frac{\text{nnz}(A)}{n^2}$ (fraction of zeros); $\text{sparsity} > 0.99$ is typical for ML.

  • Compressed Sparse Row (CSR): Three arrays: values[nnz], col[nnz], row_ptr[n+1]; row_ptr[i] points to start of row $i$ in values/col.

  • Fill-in: Nonzero entries created during factorization that were zero in original matrix.

  • Reordering: Permutation $P$ such that $P A P^\top$ has lower fill-in; minimum degree, nested dissection are common strategies.

  • Graph of a matrix: Undirected graph $G$ where vertices are matrix indices, edge $(i, j)$ exists iff $A_{ij} \ne 0$.

  • Degree: Vertex $i$ has degree = number of nonzeros in row/column $i$.

  • Laplacian matrix: $L = D - A$ where $D = \text{diag}(d_1, \ldots, d_n)$ is degree matrix, $A$ is adjacency matrix.

Essential vs Optional: Theoretical ML

Theoretical (essential)#

  • Sparse matrix representations: CSR, CSC, COO formats; trade-offs between storage and access patterns. Reference: Golub & Van Loan (2013).

  • Fill-in and reordering: Minimum degree, nested dissection reduce fill-in; symbolic factorization predicts nonzero pattern. Reference: George & Liu (1981).

  • SpMV complexity: $O(\text{nnz})$ time, bandwidth-limited on modern hardware. Reference: Williams et al. (2009).

  • Direct sparse solvers: LU with reordering, supernodal methods, pivot selection for stability. Reference: Davis (2006) (SuiteSparse); Schenk & Gärtner (2004) (PARDISO).

  • Iterative sparse solvers: CG, GMRES on Krylov subspaces; preconditioning essential (ILU, IC, AMG). Reference: Saad (2003); Briggs et al. (2000) (multigrid tutorial).

  • Graph Laplacian and spectral methods: Eigenvalues/eigenvectors of $L = D - A$ for clustering, diffusion. Reference: Spielman & Teng (2004).

Applied (landmark systems)#

  • Sparse SVD for dimensionality reduction: ARPACK for large sparse matrices; used in topic modeling (LSA, LDA). Implementation: scikit-learn TruncatedSVD. Reference: Deerwester et al. (1990).

  • Direct sparse solvers (production): PARDISO (Intel MKL), SuperLU (LBNL), UMFPACK (SuiteSparse). Deployed in finite element, circuit simulation. Reference: Schenk & Gärtner (2004); Li (2005).

  • Iterative sparse solvers with preconditioning: GMRES + ILU in PETSc, Trilinos; ALS for matrix completion in Spark MLlib. Reference: Balay et al. (2021) (PETSc); Heroux et al. (2005) (Trilinos).

  • Graph neural networks: PyTorch Geometric adjacency matrix operations; sparse matrix multiplication for graph convolution. Reference: Kipf & Welling (2017).

  • Compressed sensing and sparse recovery: ADMM/FISTA for L1-regularized problems; applications in imaging, signal processing. Reference: Boyd et al. (2011); Beck & Teboulle (2009).

Key ideas: Where it shows up
  1. Text and NLP

    • Document-term matrices $X \in \mathbb{R}^{n \times d}$ with $n$ documents, $d$ vocabulary terms; $\text{nnz} \approx n \times$ (words per doc), far below $nd$.

    • Achievements: sparse SVD in LSA; probabilistic topic models (LDA) as an alternative; scikit-learn TfidfVectorizer produces sparse matrices. References: Deerwester et al. (1990) (LSA); Blei et al. (2003) (LDA).

  2. Graph neural networks and social networks

    • Adjacency matrix $A \in \mathbb{R}^{n \times n}$ sparse; graph convolution $X' = \sigma(AXW)$ exploits sparsity for $O(\text{nnz} \cdot d)$ per layer vs. $O(n^2 d)$ dense.

    • Achievements: PyTorch Geometric, DGL, Spektral enable billion-node GNNs; GCN (Kipf & Welling 2017), GAT (Velickovic et al. 2018) standard. References: Kipf & Welling (2017); Velickovic et al. (2018).

  3. Recommender systems and matrix completion

    • Rating matrices $R \in \mathbb{R}^{m \times n}$ (users × items) extremely sparse ($\sim 0.01\%$ observed); matrix factorization via alternating least squares.

    • Achievements: Sparse SVD in the Netflix Prize; ALS in Spark MLlib scales to billions of ratings. References: Koren et al. (2009) (matrix factorization); Zhou et al. (2008) (parallel ALS).

  4. Inverse problems and sparse regularization

    • Measurement matrix $A$ in compressed sensing, tomography often sparse or low-rank. Sparse solutions via L1 regularization (LASSO).

    • Achievements: ADMM (alternating direction method of multipliers) for large-scale L1-regularized problems; FISTA (fast iterative shrinkage-thresholding). References: Boyd et al. (2011) (ADMM); Beck & Teboulle (2009) (FISTA).

  5. Distributed and federated learning

    • Distributed optimization exploits sparsity in communication: gradient compression, local SGD reduce communication. Sparse parameter updates.

    • Achievements: gradient sparsification in federated learning (TensorFlow Federated); communication-efficient distributed SGD (Kairouz et al. 2021). References: Kairouz et al. (2021) (federated learning).

Notation
  • Sparse matrix: $A \in \mathbb{R}^{n \times n}$ with $\text{nnz}(A)$ nonzeros; typically $\text{nnz} \ll n^2$.

  • Sparsity: $\text{sparsity}(A) = \frac{n^2 - \text{nnz}(A)}{n^2}$ (fraction of zeros); $\text{sparsity}(A) > 0.99$ typical.

  • CSR format: Values v[1..nnz], column indices j[1..nnz], row pointers p[1..n+1] with p[i+1] - p[i] = nonzeros in row $i$.

  • Adjacency matrix: $A \in \{0,1\}^{n \times n}$ where $A_{ij} = 1$ if edge $(i, j)$ exists; $\text{nnz}(A) = 2m$ for $m$ edges (undirected).

  • Degree matrix: $D = \text{diag}(d_1, \ldots, d_n)$ where $d_i = \sum_j A_{ij}$ (degree of vertex $i$).

  • Laplacian: $L = D - A$ (graph Laplacian); $L$ is PSD with $\lambda_1 = 0$ (constant eigenvector).

  • Fill-in ratio: $\text{fill}(L, U) = \frac{\text{nnz}(L) + \text{nnz}(U)}{\text{nnz}(A)}$ (factorization creates nonzeros).

  • Example: Document-term matrix for 100K documents, 1M vocabulary words, ~200 words per doc: $\text{nnz} \approx 100\text{K} \times 200 = 20\text{M}$; density $\approx 0.02\%$ (sparsity $99.98\%$).
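The example's arithmetic, as a quick check:

```python
n_docs, vocab, words_per_doc = 100_000, 1_000_000, 200
nnz = n_docs * words_per_doc       # stored entries
total = n_docs * vocab             # possible entries
print(nnz)                                    # 20000000 (20M)
print(f"density: {100 * nnz / total:.3f}%")   # density: 0.020%
```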

Pitfalls & sanity checks
  • Never convert large sparse matrix to dense: Memory explosion; dense matrix for $n = 10^6$ requires terabytes.

  • Choose representation based on access pattern: CSR for row access/SpMV, CSC for column access/factorization.

  • Monitor fill-in in factorization: If fill-in ratio $> 100$, consider iterative solver + preconditioner instead.

  • Ensure reordering is applied: Production solvers (PARDISO, SuperLU) apply automatically; verify if using custom code.

  • Test on subset before scaling: Verify correctness on small sparse system before large deployment.

  • GPU sparse kernels have overhead: SpMV on GPU can be slower than CPU for small matrices; the crossover depends on hardware, often around $n \sim 10^3$–$10^4$.

  • Preconditioning setup vs. solve trade-off: Single solve: skip preconditioning. Multiple solves with same matrix: preconditioning pays off.

  • Sparsity pattern changes: If $A$ has varying sparsity over iterations, symbolic factorization must be updated.

References

Foundational sparse matrix theory

  1. George, J. A., & Liu, J. W. H. (1981). Computer Solution of Large Sparse Positive Definite Systems.

  2. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.).

  3. Davis, T. A. (2006). Direct Methods for Sparse Linear Systems.

Sparse matrix representations and algorithms

  1. Barrett, R., Berry, M., Chan, T. F., et al. (1994). Templates for the Solution of Linear Systems.

  2. Tinney, W. F., & Walker, J. W. (1967). Direct solutions of sparse network equations by optimally ordered triangular factorization.

  3. Schenk, O., & Gärtner, K. (2004). PARDISO: A high-performance serial and parallel sparse linear solver.

Sparse iterative solvers and preconditioning

  1. Saad, Y. (2003). Iterative Methods for Sparse Linear Systems (2nd ed.).

  2. Briggs, W. L., Henson, V. E., & McCormick, S. F. (2000). A Multigrid Tutorial (2nd ed.).

  3. Ruge, J. W., & Stüben, K. (1987). Algebraic Multigrid (AMG).

Sparse linear algebra in scientific computing

  1. Balay, S., Abhyankar, S., Adams, M. F., et al. (2021). PETSc Users Manual.

  2. Heroux, M. A., Bartlett, R. A., Howle, V. E., et al. (2005). An overview of the Trilinos project.

  3. Li, X. S. (2005). An overview of SuperLU: Algorithms, implementation, and user interface.

Topic modeling and sparse SVD

  1. Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis.

  2. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation.

  3. Lehoucq, R. B., Sorensen, D. C., & Yang, C. (1998). ARPACK Users’ Guide.

Graph neural networks and sparse operations

  1. Kipf, T., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks.

  2. Velickovic, P., Cucurull, G., Casanova, A., et al. (2018). Graph Attention Networks.

  3. Fey, M., & Lenssen, J. E. (2019). Fast graph representation learning with PyTorch Geometric.

Recommender systems and matrix completion

  1. Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems.

  2. Zhou, Y., Wilkinson, D., Schreiber, R., & Pan, R. (2008). Large-scale parallel collaborative filtering for the Netflix Prize.

Optimization and sparse recovery

  1. Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers.

  2. Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems.

  3. Kairouz, P., McMahan, H. B., Avent, B., et al. (2021). Advances and Open Problems in Federated Learning.

Five worked examples

Worked Example 1: Sparse matrix representations and storage efficiency#

Introduction#

Construct a sparse matrix (adjacency matrix of a large graph); compare storage and computation time across dense, CSR, and COO formats; demonstrate memory/time trade-offs.

Purpose#

Illustrate practical efficiency gains of sparse formats; motivate algorithm selection based on access patterns.

Importance#

Foundation for understanding when sparse methods are worthwhile; shows memory savings enable billion-scale problems.

What this example demonstrates#

  • Generate sparse graph (e.g., scale-free network or grid).

  • Store as dense array, then convert to CSR, CSC, COO.

  • Measure memory footprint for each format.

  • Benchmark matrix-vector product on dense vs. sparse.

  • Plot time vs. sparsity level.

Background#

Dense storage is simple but wasteful for sparse matrices. Sparse formats trade memory for computational complexity in accessing elements.

Historical context#

Sparse matrix formats were developed in the 1960s–1970s, when memory constraints in finite element analysis made dense storage infeasible.

History#

CSR became the de facto standard; GPU support followed via NVIDIA's cuSPARSE library.

Prevalence in ML#

Every large-scale ML system (Spark, Hadoop, TensorFlow) supports sparse matrices; default for text, graphs.

Notes#

  • CSR efficient for row access, SpMV; CSC efficient for column access, matrix-matrix products.

  • COO flexible for construction; convert to CSR/CSC before computation.

  • GPU memory bandwidth is bottleneck; careful format choice crucial.

Connection to ML#

Text vectorization produces sparse matrices; graphs produce sparse adjacency matrices. System must efficiently handle both.

Connection to Linear Algebra Theory#

Sparse matrix properties (connectivity, degree distribution) determine factorization cost and algorithm performance.

Pedagogical Significance#

Demonstrates performance trade-offs; motivates specialized algorithms.

References#

  1. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.).

  2. Barrett, R., Berry, M., Chan, T. F., et al. (1994). Templates for the Solution of Linear Systems.

  3. Davis, T. A. (2006). Direct Methods for Sparse Linear Systems.

Solution (Python)#

import numpy as np
import scipy.sparse as sp
import time

np.random.seed(30)

# Create sparse adjacency matrix (random graph)
n = 10000
density = 0.001  # 0.1% sparse
nnz = int(density * n * n)

# Generate random sparse matrix (duplicate (i, j) pairs are summed, so nnz may be slightly lower)
rows = np.random.randint(0, n, nnz)
cols = np.random.randint(0, n, nnz)
data = np.ones(nnz)

# CSR format (efficient for SpMV)
A_csr = sp.csr_matrix((data, (rows, cols)), shape=(n, n))

# CSC format (efficient for column access)
A_csc = A_csr.tocsc()

# COO format (flexible)
A_coo = sp.coo_matrix((data, (rows, cols)), shape=(n, n))

# Dense format
A_dense = A_csr.toarray()

# Measure memory via the underlying arrays
mem_csr = A_csr.data.nbytes + A_csr.indices.nbytes + A_csr.indptr.nbytes
mem_dense = A_dense.nbytes

print("Memory comparison for {}x{} matrix (density {:.4f}%)".format(n, n, 100*density))
print("  Dense: {:.2f} MB".format(mem_dense / 1e6))
print("  CSR: {:.2f} MB".format(mem_csr / 1e6))
print("  Savings: {:.1f}x".format(mem_dense / mem_csr))

# Benchmark matrix-vector product
x = np.random.randn(n)
iterations = 100

# Sparse SpMV
t0 = time.time()
for _ in range(iterations):
    y_sparse = A_csr @ x
t_sparse = time.time() - t0

# Dense gemv
t0 = time.time()
for _ in range(iterations):
    y_dense = A_dense @ x
t_dense = time.time() - t0

print("\nMatrix-vector product time ({} iterations):".format(iterations))
print("  Sparse: {:.4f} s".format(t_sparse))
print("  Dense: {:.4f} s".format(t_dense))
print("  Speedup: {:.1f}x".format(t_dense / t_sparse))

print("\nDensity: {:.2f}% (sparsity {:.2f}%)".format(
    100 * A_csr.nnz / (n*n), 100 * (1 - A_csr.nnz / (n*n))))
print("NNZ: {}".format(A_csr.nnz))

Worked Example 2: Fill-in and reordering in sparse LU factorization#

Introduction#

Construct a sparse matrix; compute LU factorization without reordering and with minimum-degree reordering; compare fill-in and factorization time.

Purpose#

Demonstrate how reordering dramatically reduces fill-in and speeds factorization.

Importance#

Critical for practical sparse solvers; shows that problem structure matters.

What this example demonstrates#

  • Construct sparse system (banded, block-structured, or random).

  • Perform symbolic LU factorization (predict nonzero pattern).

  • Compute LU without reordering; measure fill-in ratio.

  • Apply minimum-degree reordering; compute LU again.

  • Compare factorization time and fill-in.

Background#

Fill-in (new nonzeros created during factorization) can destroy sparsity; reordering (permutation of rows/columns) reduces fill-in.

Historical context#

Tinney & Walker (1967) minimum degree; George (1973) nested dissection; both standard by the Sparse BLAS era.

History#

All production sparse solvers (PARDISO, SuperLU, UMFPACK) include automatic reordering.

Prevalence in ML#

Used in sparse inverse problems, finite element discretizations; less visible in ML but critical for scalability.

Notes#

  • Minimum degree is heuristic; nested dissection is theoretically superior but more complex.

  • Fill-in ratio typically 5–100× for unordered; reordering reduces to 1–5×.

Connection to ML#

Sparse solve is bottleneck in iterative methods (Newton, interior-point algorithms). Reordering accelerates optimization.

Connection to Linear Algebra Theory#

Graph reachability (matrix powers) predicts fill-in; minimizing fill-in is a graph-theoretic (NP-hard) problem addressed by heuristics.

Pedagogical Significance#

Shows interplay between linear algebra and graph algorithms.

References#

  1. George, J. A., & Liu, J. W. H. (1981). Computer Solution of Large Sparse Positive Definite Systems.

  2. Davis, T. A. (2006). Direct Methods for Sparse Linear Systems.

  3. Schenk, O., & Gärtner, K. (2004). PARDISO: A high-performance serial and parallel sparse linear solver.

Solution (Python)#

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import time

np.random.seed(31)

# Create sparse tridiagonal system (mimics a discretized 1D PDE)
n = 1000
A_banded = sp.diags([np.ones(n-1), -2*np.ones(n), np.ones(n-1)],
                    [-1, 0, 1], shape=(n, n), format='csr')
# Add random entries to create irregular pattern
rows = np.random.choice(n, 50)
cols = np.random.choice(n, 50)
A_banded = A_banded + 0.01 * sp.csr_matrix((np.ones(50), (rows, cols)), shape=(n, n))
A_banded.eliminate_zeros()

# Convert to CSC for LU (standard for sparse direct solvers)
A_csc = A_banded.tocsc()

# LU without reordering
print("LU factorization comparison (n={})".format(n))
print("\nNo reordering:")
t0 = time.time()
LU_no_reorder = spla.splu(A_csc, permc_spec='natural')
t_no_reorder = time.time() - t0
nnz_no_reorder = LU_no_reorder.nnz

# LU with COLAMD reordering (minimum degree heuristic)
print("\nWith COLAMD reordering:")
t0 = time.time()
LU_reorder = spla.splu(A_csc, permc_spec='COLAMD')
t_reorder = time.time() - t0
nnz_reorder = LU_reorder.nnz

print("\nResults:")
print("  Original NNZ: {}".format(A_csc.nnz))
print("  No reorder - NNZ: {}, fill-in ratio: {:.2f}, time: {:.4f} s".format(
    nnz_no_reorder, nnz_no_reorder / A_csc.nnz, t_no_reorder))
print("  With reorder - NNZ: {}, fill-in ratio: {:.2f}, time: {:.4f} s".format(
    nnz_reorder, nnz_reorder / A_csc.nnz, t_reorder))
print("  Fill-in reduction: {:.1f}x".format(nnz_no_reorder / nnz_reorder))
print("  Time improvement: {:.1f}x".format(t_no_reorder / t_reorder))

Worked Example 3: Sparse SVD for topic modeling (LSA)#

Introduction#

Apply truncated SVD to sparse document-term matrix; extract latent topics; compare sparse vs. dense SVD efficiency.

Purpose#

Show practical use of sparse linear algebra in NLP; demonstrate dimensionality reduction on real-world sparse data.

Importance#

Sparse SVD is standard in topic modeling; ARPACK enables scalability to billions of words.

What this example demonstrates#

  • Generate synthetic document-term matrix (sparse; density well under 1%).

  • Compute truncated SVD via ARPACK (sparse).

  • Compute dense SVD for comparison (infeasible for large $n$).

  • Extract topics (top words per topic).

  • Measure reconstruction error vs. rank.

Background#

LSA (latent semantic analysis) applies SVD to tf-idf matrix; low-rank factors reveal latent topics.

Historical context#

Deerwester et al. (1990) pioneered LSA; ARPACK (Lehoucq et al. 1998) made scalable SVD practical.

History#

Standard in topic modeling pipelines; scikit-learn TruncatedSVD widely used.

Prevalence in ML#

Text mining, information retrieval, and recommender systems routinely use sparse SVD.

Notes#

  • Truncated SVD (rank $k \ll n$) computes only $k$ largest singular values; much cheaper than full SVD.

  • Sparse matrix multiplication dominates cost; $O(k \cdot \text{nnz})$ per iteration.

Connection to ML#

Topics are interpretable; document-topic and topic-word matrices enable downstream tasks.

Connection to Linear Algebra Theory#

ARPACK uses Krylov subspaces and implicitly restarted Arnoldi; exploits SpMV efficiency.

Pedagogical Significance#

Demonstrates power of combining sparse formats with specialized algorithms (Krylov methods).

References#

  1. Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis.

  2. Lehoucq, R. B., Sorensen, D. C., & Yang, C. (1998). ARPACK Users’ Guide.

  3. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation.

Solution (Python)#

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
import time

np.random.seed(32)

# Synthetic document-term matrix (sparse)
n_docs, n_terms = 10000, 50000
nnz_per_doc = 200

# Generate sparse matrix (realistic sparsity)
data = []
row_idx = []
col_idx = []
for doc in range(n_docs):
    terms = np.random.choice(n_terms, nnz_per_doc, replace=False)
    counts = np.random.poisson(2, nnz_per_doc) + 1
    data.extend(counts)
    row_idx.extend([doc] * nnz_per_doc)
    col_idx.extend(terms)

X = sp.csr_matrix((data, (row_idx, col_idx)), shape=(n_docs, n_terms))

# Normalize (tf-idf variant)
X = X.astype(np.float32)
row_sums = np.array(X.sum(axis=1)).flatten()
X.data /= row_sums[X.nonzero()[0]]

print("Document-term matrix: {}x{}, sparsity: {:.4f}%".format(
    n_docs, n_terms, 100*(1 - X.nnz/(n_docs*n_terms))))

# Sparse SVD (ARPACK)
k = 50  # Number of topics
print("\nTruncated SVD (rank {})...".format(k))
t0 = time.time()
U_sparse, s_sparse, Vt_sparse = svds(X, k=k, which='LM')
t_sparse = time.time() - t0
print("  Time: {:.4f} s".format(t_sparse))

# Dense SVD (for small subset for illustration)
print("\nFull dense SVD (on 100x1000 subset for illustration)...")
X_dense_small = X[:100, :1000].toarray()
t0 = time.time()
U_dense, s_dense, Vt_dense = np.linalg.svd(X_dense_small, full_matrices=False)
t_dense = time.time() - t0
print("  Time (100x1000): {:.4f} s".format(t_dense))
print("  Dense SVD on full matrix would take >> hours")

# Extract topics (top terms per topic; signs of singular vectors are arbitrary,
# so rank terms by absolute weight)
print("\nTop 5 terms per topic (first 3 topics):")
for topic_idx in range(min(3, k)):
    top_terms = np.argsort(np.abs(Vt_sparse[topic_idx]))[-5:][::-1]
    print("  Topic {}: terms {}".format(topic_idx, top_terms))

# Reconstruction error (restricted to the first 100 rows to avoid forming
# the full dense n_docs x n_terms matrix)
X_head = X[:100, :].toarray()
rank_list = [5, 10, 20, 50]
print("\nReconstruction error vs. rank:")
for r in rank_list:
    U_r = U_sparse[:100, -r:]   # svds returns singular values in ascending order
    s_r = s_sparse[-r:]
    Vt_r = Vt_sparse[-r:, :]
    X_recon = (U_r * s_r) @ Vt_r
    err = np.linalg.norm(X_head - X_recon) / np.linalg.norm(X_head)
    print("  Rank {}: relative error {:.4f}".format(r, err))

Worked Example 4: Preconditioned iterative solver on sparse system#

Introduction#

Construct a large sparse SPD system (from discretized PDE); solve via unpreconditioned and preconditioned CG; demonstrate preconditioning impact.

Purpose#

Show how preconditioning enables convergence on sparse systems where direct solvers would be expensive.

Importance#

Practical approach for million-scale sparse systems; avoids expensive fill-in of direct LU.

What this example demonstrates#

  • Construct sparse Laplacian or discretized diffusion matrix.

  • Solve unpreconditioned CG; count iterations.

  • Construct ILU or AMG preconditioner.

  • Solve preconditioned CG; count iterations.

  • Compare iteration counts and wall-clock time.

Background#

Iterative solvers avoid factorization fill-in; preconditioning clusters spectrum for fast convergence.

Historical context#

Preconditioned CG developed 1970s–1980s; ILU standard; AMG emerged 1980s–1990s.

History#

Standard in PETSc, Trilinos, deal.II (finite element libraries); production code for billion-node problems.

Prevalence in ML#

Used in large-scale optimization (Newton, interior-point), graph algorithms (diffusion on networks).

Notes#

  • Preconditioning setup cost amortized over many solves.

  • ILU relatively cheap; AMG more expensive but better convergence for difficult problems.

Connection to ML#

Large-scale optimization (e.g., logistic regression on massive sparse data) requires preconditioned iterative methods.

Connection to Linear Algebra Theory#

CG iteration count $\approx \sqrt{\kappa}$; preconditioning reduces $\kappa$ to enable convergence.

Pedagogical Significance#

Demonstrates integration of sparsity, preconditioning, and iterative methods.

References#

  1. Saad, Y. (2003). Iterative Methods for Sparse Linear Systems (2nd ed.).

  2. Briggs, W. L., Henson, V. E., & McCormick, S. F. (2000). A Multigrid Tutorial (2nd ed.).

  3. Barrett, R., Berry, M., Chan, T. F., et al. (1994). Templates for the Solution of Linear Systems.

Solution (Python)#

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

np.random.seed(33)

# Create sparse 2D Laplacian (discretized Poisson on a 100x100 grid,
# 5-point stencil built via Kronecker products)
m = 100
n = m * m
T = sp.diags([-np.ones(m-1), 2*np.ones(m), -np.ones(m-1)], [-1, 0, 1])
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsr()

# RHS
b = np.random.randn(n)

print("Sparse Laplacian system: {}x{}, nnz={}".format(n, n, A.nnz))
print("Condition number grows like O(m^2) = O(n) for the 2D Laplacian")

# Unpreconditioned CG
print("\nUnpreconditioned CG:")
residuals_unprecond = []
def callback_un(x):
    residuals_unprecond.append(np.linalg.norm(A @ x - b))

x_un, info_un = spla.cg(A, b, rtol=1e-6, callback=callback_un, maxiter=n)  # SciPy >= 1.12; older versions use tol=
print("  Iterations: {}".format(len(residuals_unprecond)))

# ILU preconditioner
print("\nBuilding ILU preconditioner...")
M_ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator((n, n), matvec=M_ilu.solve)

print("Preconditioned CG:")
residuals_precond = []
def callback_pre(x):
    residuals_precond.append(np.linalg.norm(A @ x - b))

x_pre, info_pre = spla.cg(A, b, M=M, rtol=1e-6, callback=callback_pre, maxiter=n)  # SciPy >= 1.12
print("  Iterations: {}".format(len(residuals_precond)))

print("\nSummary:")
print("  Unpreconditioned iterations: {}".format(len(residuals_unprecond)))
print("  Preconditioned iterations: {}".format(len(residuals_precond)))
print("  Speedup: {:.1f}x".format(len(residuals_unprecond) / len(residuals_precond)))
print("  Both converged:", info_un == 0 and info_pre == 0)

Worked Example 5: Graph neural network convolution on sparse adjacency matrix#

Introduction#

Construct a graph (sparse adjacency matrix); implement graph convolution layer; measure SpMM efficiency vs. dense multiplication.

Purpose#

Demonstrate sparse matrix operations in modern ML (graph neural networks).

Importance#

GNNs process billion-node graphs; sparsity is essential for feasibility.

What this example demonstrates#

  • Generate graph (scale-free, grid, or real network).

  • Build node feature matrix $X \in \mathbb{R}^{n \times d}$.

  • Implement sparse graph convolution: $X' = \sigma(A X W)$ with SpMM.

  • Compare sparse vs. dense GCN layer time/memory.

  • Verify output correctness.

Background#

Graph convolution aggregates features from neighbors via sparse $A$ multiplication.

Historical context#

Kipf & Welling (2017) GCN; PyTorch Geometric (2019) enables efficient sparse operations.

History#

Standard in graph learning frameworks (DGL, Spektral, PyTorch Geometric).

Prevalence in ML#

GNNs on social networks, citation graphs, molecular graphs; sparse adjacency matrix standard.

Notes#

  • SpMM (sparse-dense matrix product) is key operation; GPU acceleration available.

  • For $n$-node graph with $m$ edges and $d$ features: sparse is $O(m \cdot d)$, dense is $O(n^2 \cdot d)$.

Connection to ML#

GNNs enable learning on structured, graph-based data; scalable to billion-node networks via sparsity.

Connection to Linear Algebra Theory#

Graph convolution is polynomial filter on graph Laplacian; spectral perspective connects to Chebyshev polynomials.

Pedagogical Significance#

Integrates sparse linear algebra, graph theory, and modern deep learning.

References#

  1. Kipf, T., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks.

  2. Velickovic, P., Cucurull, G., Casanova, A., et al. (2018). Graph Attention Networks.

  3. Fey, M., & Lenssen, J. E. (2019). Fast graph representation learning with PyTorch Geometric.

Solution (Python)#

import numpy as np
import scipy.sparse as sp
import time

np.random.seed(34)

# Generate graph (scale-free network)
n = 10000  # Nodes
avg_degree = 10
edges = []
for i in range(n):
    # Connect each node to avg_degree uniformly random targets (Erdos-Renyi-like)
    targets = np.random.choice(n, min(avg_degree, n-1), replace=False)
    for j in targets:
        if i != j:
            edges.append((i, j))
            edges.append((j, i))  # Undirected

edges = np.array(edges)
A = sp.csr_matrix((np.ones(len(edges)), edges.T), shape=(n, n))
A.data[:] = 1.0  # duplicate edges are summed during construction; binarize

# Node features
d = 64  # Feature dimension
X = np.random.randn(n, d).astype(np.float32)

# Weight matrix (GCN parameters)
W = np.random.randn(d, d).astype(np.float32)

print("Graph: n={}, edges={}, sparsity={:.4f}%".format(
    n, A.nnz // 2, 100*(1 - A.nnz/(n*n))))
print("Features: n={}, d={}".format(n, d))

# Sparse graph convolution: X_out = A @ X @ W
print("\nSparse graph convolution:")
t0 = time.time()
X_intermediate = A @ X  # SpMM: sparse @ dense
X_out_sparse = X_intermediate @ W  # Dense @ dense
t_sparse = time.time() - t0
print("  Time: {:.4f} s".format(t_sparse))

# Dense graph convolution (for comparison on subset)
print("\nDense graph convolution (on 1000-node subgraph):")
A_dense_sub = A[:1000, :1000].toarray()
X_sub = X[:1000]
t0 = time.time()
X_out_dense_sub = A_dense_sub @ X_sub @ W
t_dense_sub = time.time() - t0
print("  Time (1000 nodes): {:.4f} s".format(t_dense_sub))
# Extrapolate to full graph
t_dense_extrapolated = t_dense_sub * (n/1000)**2
print("  Extrapolated to full graph: {:.2f} s".format(t_dense_extrapolated))

print("\nSpeedup (dense vs. sparse): {:.1f}x".format(t_dense_extrapolated / t_sparse))

# Apply nonlinearity (ReLU)
X_out_sparse = np.maximum(X_out_sparse, 0)

print("\nGCN layer output shape:", X_out_sparse.shape)
print("Output mean:", np.mean(X_out_sparse))
print("Output sparsity after ReLU:", 100 * np.sum(X_out_sparse == 0) / X_out_sparse.size, "%")
