Linear Algebra

Linear algebra is the mathematical foundation of machine learning and data science. NumPy’s np.linalg module provides functions for matrix multiplication, decompositions, eigenvalue computation, norms, determinants, and solving linear systems. Understanding these operations is essential because ML algorithms are fundamentally linear algebra: neural networks perform chains of matrix multiplications, PCA uses eigendecomposition, recommendation systems rely on SVD, and linear regression solves systems of equations.

import numpy as np
np.__version__

Matrix and Vector Products

np.dot(), np.matmul() (or @), np.inner(), np.outer(), and np.vdot() compute different types of products between arrays. dot and matmul perform standard matrix multiplication, inner computes the dot product along the last axis, outer produces the outer product (every pair of elements), and vdot flattens inputs before computing the dot product. Understanding which product to use is crucial – matrix multiplication powers forward passes in neural networks, dot products measure similarity between vectors, and outer products appear in attention mechanisms.
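As a warm-up before the questions, here is a small sketch contrasting the products (the array values are chosen arbitrarily and are not the ones used in the exercises below):

```python
import numpy as np

a = np.array([1, 3])
B = np.array([[2, 1],
              [0, 4]])

# dot / matmul: standard matrix multiplication (a is treated as a row here).
print(np.dot(a, B))    # [ 2 13]
print(a @ B)           # [ 2 13]

# inner: contracts along the *last* axis, so each row of B is dotted with a.
print(np.inner(B, a))  # [ 5 12]

# outer: every pairwise product of elements, shape (2, 2).
print(np.outer(a, a))

# vdot: flattens both inputs first, then takes the dot product.
C = np.array([[2, 5],
              [1, 0]])
print(np.vdot(C, B))   # 2*2 + 5*1 + 1*0 + 0*4 = 9
```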

Q1. Predict the results of the following code.

x = [1,2]
y = [[4, 1], [2, 2]]
#print(np.dot(x, y))
#print(np.dot(y, x))
#print(np.matmul(x, y))
#print(np.inner(x, y))
#print(np.inner(y, x))

Q2. Predict the results of the following code.

x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
#print(np.dot(y, x))
#print(np.matmul(y, x))

Q3. Predict the results of the following code.

x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
#print(np.vdot(x, y))
#print(np.vdot(y, x))
#print(np.dot(x.flatten(), y.flatten()))
#print(np.inner(x.flatten(), y.flatten()))
#print((x*y).sum())

Q4. Predict the results of the following code.

x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
#print(np.inner(x, y))
#print(np.inner(y, x))
#print(np.outer(x, y))
#print(np.outer(y, x))

Decompositions

Matrix decompositions factor a matrix into simpler components. Cholesky (np.linalg.cholesky) decomposes a positive-definite matrix into a lower-triangular matrix – used in efficient sampling from multivariate distributions. QR (np.linalg.qr) factors into orthogonal and upper-triangular matrices – used in least-squares solvers. SVD (np.linalg.svd) decomposes any matrix into singular vectors and values – the backbone of dimensionality reduction, data compression, and recommendation systems.
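Each factorization is verified the same way: multiply the pieces back together and compare with the original. A sketch, using a symmetric positive-definite matrix chosen for illustration (not one of the exercise matrices):

```python
import numpy as np

A = np.array([[25., 15., -5.],
              [15., 18.,  0.],
              [-5.,  0., 11.]])

# Cholesky: A = L @ L.T, with L lower-triangular (A must be positive-definite).
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
print(L)  # lower-triangular; here L = [[5, 0, 0], [3, 3, 0], [-1, 1, 3]]

# QR: A = Q @ R, with Q orthogonal (Q.T @ Q = I) and R upper-triangular.
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))

# SVD: A = U @ diag(s) @ Vt. Unlike Cholesky and QR above, SVD applies to
# any matrix, including rectangular ones.
U, s, Vt = np.linalg.svd(A)
assert np.allclose(U @ np.diag(s) @ Vt, A)
```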

Q5. Get the lower-triangular L in the Cholesky decomposition of x and verify it.

x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)

Q6. Compute the QR factorization of x and verify it.

x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)

Q7. Factor x by Singular Value Decomposition and verify it.

x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)

Matrix Eigenvalues

Eigenvalues and eigenvectors reveal the fundamental structure of a matrix. np.linalg.eig() computes both for a square matrix. The eigenvectors define the principal directions of a linear transformation, and the eigenvalues tell you how much the transformation stretches or compresses along each direction. PCA (Principal Component Analysis) is built entirely on eigendecomposition – the eigenvectors of the covariance matrix become the principal components, and the eigenvalues indicate how much variance each component explains.

Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)

x = np.diag((1, 2, 3))

Q9. Predict the results of the following code.

#print(np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs))

Norms and Other Numbers

Matrix norms, determinants, ranks, and traces provide scalar summaries of a matrix’s properties. The Frobenius norm measures overall matrix magnitude. The condition number indicates numerical stability (high values mean the matrix is near-singular). The determinant tells you whether a matrix is invertible and how it scales volume. The rank reveals how many linearly independent rows/columns exist. The trace (sum of diagonal elements) equals the sum of eigenvalues. These metrics are used in regularization, model diagnostics, and numerical stability checks.
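A sketch of how these summaries relate, using a deliberately singular 3×3 matrix (its rows are linearly dependent, so the determinant vanishes, the rank drops, and the condition number blows up):

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3).astype(float)

print(np.linalg.norm(A, 'fro'))  # sqrt(1^2 + ... + 9^2) = sqrt(285) ~ 16.88
print(np.linalg.det(A))          # ~0: the rows are linearly dependent
print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.cond(A))         # huge: the matrix is (numerically) singular
print(np.trace(A))               # 1.0 + 5.0 + 9.0 = 15.0
```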

Q10. Calculate the Frobenius norm and the condition number of x.

x = np.arange(1, 10).reshape((3, 3))

Q11. Calculate the determinant of x.

x = np.arange(1, 5).reshape((2, 2))

Q12. Calculate the rank of x.

x = np.eye(4)

Q13. Compute the sign and natural logarithm of the determinant of x.

x = np.arange(1, 5).reshape((2, 2))

Q14. Return the sum along the diagonal of x.

x = np.eye(4)

Solving Equations and Inverting Matrices

np.linalg.inv() computes the matrix inverse, and np.linalg.solve() solves the linear system Ax = b directly (which is faster and more numerically stable than computing the inverse). These operations underpin linear regression (solving the normal equations), Kalman filters, and any algorithm that requires finding exact solutions to systems of linear equations.
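A minimal sketch of both approaches on a hypothetical 2×2 system (3x + y = 9, x + 2y = 8), showing why solve is preferred:

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])

# Preferred: solve Ax = b directly.
x = np.linalg.solve(A, b)
print(x)  # solution: x = 2, y = 3

# Equivalent in exact arithmetic, but slower and less numerically stable:
# form the inverse explicitly, then multiply.
A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ b, x)
assert np.allclose(A @ A_inv, np.eye(2))
```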

Q15. Compute the inverse of x.

x = np.array([[1., 2.], [3., 4.]])