
SVD Calculator

Compute the Singular Value Decomposition of a matrix. Enter values for instant results with step-by-step formulas.


Formula

A = U * Sigma * V^T

The matrix A is decomposed into the product of three matrices: U (left singular vectors), Sigma (diagonal matrix of singular values), and V^T (transpose of right singular vectors). Singular values are the square roots of the eigenvalues of A^T * A.
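As a quick numerical check of the decomposition (a NumPy sketch, not part of the calculator itself), you can factor a matrix and confirm that multiplying the three factors back together reproduces it:

```python
import numpy as np

# Decompose a sample matrix and confirm A = U * Sigma * V^T.
A = np.array([[3.0, 2.0], [2.0, 3.0]])
U, s, Vt = np.linalg.svd(A)           # s holds the singular values, largest first

Sigma = np.diag(s)                    # build the diagonal matrix from the vector
reconstructed = U @ Sigma @ Vt

print(s)                              # singular values of A: [5. 1.]
print(np.allclose(A, reconstructed))  # True: the product reproduces A
```

Note that NumPy returns the singular values as a vector rather than a full diagonal matrix, so Sigma has to be rebuilt with np.diag before multiplying.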

Worked Examples

Example 1: SVD of a 2x2 Symmetric Matrix

Problem: Compute the singular values of A = [[3,2],[2,3]].

Solution:
A^T * A = [[13, 12], [12, 13]]
Characteristic polynomial: lambda^2 - 26*lambda + 25 = 0
Eigenvalues: lambda_1 = 25, lambda_2 = 1
Singular values: s1 = sqrt(25) = 5, s2 = sqrt(1) = 1
Condition number: 5/1 = 5
Frobenius norm: sqrt(9 + 4 + 4 + 9) = sqrt(26) ≈ 5.099

Result: Singular values: 5, 1 | Condition number: 5 | Rank: 2
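The numbers in Example 1 can be reproduced in a few lines of NumPy (a verification sketch, not the calculator's own code):

```python
import numpy as np

# Reproduce Example 1: singular values, condition number,
# Frobenius norm, and rank of A = [[3, 2], [2, 3]].
A = np.array([[3.0, 2.0], [2.0, 3.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values only

print(s)                                 # [5. 1.]
print(s[0] / s[-1])                      # condition number: 5.0
print(np.linalg.norm(A, 'fro'))          # sqrt(26), about 5.099
print(np.linalg.matrix_rank(A))          # 2
```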

Example 2: Rank-Deficient Matrix

Problem: Compute the singular values of A = [[1,2],[2,4]].

Solution:
A^T * A = [[5, 10], [10, 20]]
Characteristic polynomial: lambda^2 - 25*lambda = 0
Eigenvalues: lambda_1 = 25, lambda_2 = 0
Singular values: s1 = 5, s2 = 0
The matrix has rank 1 (row 2 = 2 * row 1)

Result: Singular values: 5, 0 | Condition number: Infinite | Rank: 1 (rank-deficient)
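Example 2 can be checked the same way (a NumPy sketch; note that in floating-point arithmetic the zero singular value typically comes back as a tiny number rather than exactly 0):

```python
import numpy as np

# Example 2: a rank-deficient matrix has at least one zero singular value.
A = np.array([[1.0, 2.0], [2.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)

print(s)                          # [5. 0.] (second value is ~0 up to rounding)
print(np.linalg.matrix_rank(A))   # 1
```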

Frequently Asked Questions

How is SVD used for image compression?

SVD enables image compression by approximating the original image matrix with a low-rank version that uses far less storage. A grayscale image is an m-by-n matrix of pixel values. Computing its SVD gives singular values in decreasing order of importance. By keeping only the k largest singular values and their corresponding columns of U and V (called a rank-k approximation), you create a compressed version of the image that captures the most important visual features. Instead of storing m*n pixel values, you store k*(m+n+1) values. For a 1000-by-1000 image, keeping just 50 singular values reduces storage by 90% while maintaining recognizable image quality. The Eckart-Young theorem guarantees this is the optimal rank-k approximation.
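The rank-k approximation described above can be sketched in a few lines of NumPy. The "image" here is a synthetic random matrix standing in for real pixel data:

```python
import numpy as np

# Rank-k approximation of a matrix, as used for image compression:
# keep only the k largest singular values and their singular vectors.
def rank_k_approx(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
image = rng.random((100, 100))      # stand-in for a 100x100 grayscale image
compressed = rank_k_approx(image, 20)

# Storage: k*(m+n+1) values instead of m*n.
print(20 * (100 + 100 + 1))         # 4020 values vs 10000 originally
```

By the Eckart-Young theorem, no other rank-20 matrix comes closer to the original in the Frobenius norm, so increasing k can only shrink the reconstruction error.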

What is the relationship between SVD and the pseudoinverse?

SVD provides a direct way to compute the Moore-Penrose pseudoinverse of any matrix, which generalizes the matrix inverse to non-square and singular matrices. Given A = U * Sigma * V^T, the pseudoinverse is A+ = V * Sigma+ * U^T, where Sigma+ is formed by taking the reciprocal of each nonzero singular value and transposing the matrix. The pseudoinverse gives the least-squares solution to overdetermined systems (more equations than unknowns) and the minimum-norm solution to underdetermined systems (fewer equations than unknowns). This makes SVD the foundation of least-squares fitting, which is used extensively in statistics, machine learning, signal processing, and any field that requires fitting models to data.
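The construction A+ = V * Sigma+ * U^T can be written out directly and checked against NumPy's built-in pseudoinverse (a sketch; the tolerance for treating a singular value as zero is a chosen assumption):

```python
import numpy as np

# Build the Moore-Penrose pseudoinverse A+ = V * Sigma+ * U^T by hand.
def pinv_via_svd(A, tol=1e-12):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_plus = np.array([1.0 / x if x > tol else 0.0 for x in s])  # invert nonzero singular values
    return Vt.T @ np.diag(s_plus) @ U.T

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 1.0]])   # overdetermined 3x2 system
b = np.array([1.0, 2.0, 3.0])

x = pinv_via_svd(A) @ b                               # least-squares solution
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```

The final check confirms the claim in the text: for an overdetermined system, A+ * b coincides with the least-squares solution.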

How does SVD relate to Principal Component Analysis?

Principal Component Analysis (PCA) is essentially SVD applied to centered data. When you have a data matrix X where each row is a data point and each column is a feature, PCA finds the directions of maximum variance. Computing the SVD of the centered data matrix X = U * Sigma * V^T reveals that the columns of V are the principal components (directions of maximum variance), the singular values in Sigma are proportional to the standard deviations along each component, and the matrix U * Sigma gives the coordinates of each data point in the principal component basis. This connection is why SVD is the preferred computational method for PCA, as it is more numerically stable than computing eigenvalues of the covariance matrix directly.
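The PCA-via-SVD correspondence can be demonstrated on synthetic data (a sketch; the data matrix and its variance scales are made up for illustration):

```python
import numpy as np

# PCA via SVD of the centered data matrix: rows of Vt are the principal
# components, and singular values encode the variance along each one.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.1])  # 3 features, unequal spread

Xc = X - X.mean(axis=0)                  # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_var = s**2 / (len(X) - 1)      # sample variance along each component
scores = U * s                           # coordinates in the PC basis (U @ diag(s))

print(explained_var)                     # decreasing variances
print(np.allclose(scores, Xc @ Vt.T))    # True: projecting onto V gives U * Sigma
```

The last line verifies the identity stated above: since Xc = U * Sigma * V^T, projecting the centered data onto the principal directions (Xc @ V) yields exactly U * Sigma.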

How is SVD used in recommendation systems?

Recommendation systems like those used by Netflix and Amazon use SVD to discover latent factors in user-item interaction data. The user-rating matrix (with users as rows and items as columns) is approximately factored using truncated SVD into low-rank matrices that capture hidden patterns. The singular values represent the importance of each latent factor, while the U and V matrices encode how users and items relate to these factors. Missing ratings can then be predicted by multiplying the low-rank factors. In practice, modified versions like regularized SVD or matrix factorization with stochastic gradient descent are used because the rating matrix is very sparse. The Netflix Prize competition famously demonstrated that SVD-based approaches could improve recommendation accuracy by over 10 percent.
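A toy version of the truncated-SVD factorization can be shown on a small, fully observed rating matrix (real systems must handle missing entries, typically with regularized matrix factorization; the ratings below are invented for illustration):

```python
import numpy as np

# Toy user-item rating matrix: two "taste" groups of users and items.
R = np.array([[5.0, 4.0, 1.0, 1.0],
              [4.0, 5.0, 1.0, 2.0],
              [1.0, 1.0, 5.0, 4.0],
              [1.0, 2.0, 4.0, 5.0]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                       # keep 2 latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 1))   # rank-2 reconstruction preserves the block structure
```

Here U[:, :k] encodes each user's affinity for the two latent factors, Vt[:k, :] encodes each item's loading on them, and their product predicts ratings.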

What are the computational costs and algorithms for SVD?

Computing the full SVD of an m-by-n matrix costs approximately O(min(mn^2, m^2n)) floating-point operations, making it about 3-5 times more expensive than LU decomposition. The most widely used algorithm is the Golub-Kahan bidiagonalization followed by implicit QR iteration, which is implemented in LAPACK and used by MATLAB, NumPy, and R. For very large matrices where only the top k singular values are needed, iterative methods like the Lanczos algorithm or randomized SVD provide much faster approximate results. Randomized SVD, introduced by Halko, Martinsson, and Tropp, can compute a rank-k approximation in O(mn*log(k)) time, making SVD practical for matrices with millions of rows and columns.
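The randomized approach can be sketched in a few lines (a minimal illustration in the spirit of Halko, Martinsson, and Tropp, using a plain Gaussian test matrix rather than the structured projections behind the O(mn*log(k)) bound; the oversampling amount is a conventional choice):

```python
import numpy as np

# Minimal randomized SVD: sketch the range of A with a random projection,
# then take an exact SVD of the much smaller projected matrix.
def randomized_svd(A, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sampled range
    B = Q.T @ A                           # small (k + oversample) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k, :]

# A matrix that is nearly rank 5: randomized SVD recovers its top spectrum.
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 80)) + 0.01 * rng.normal(size=(500, 80))

U, s, Vt = randomized_svd(A, k=5)
exact = np.linalg.svd(A, compute_uv=False)[:5]
print(np.max(np.abs(s - exact) / exact))  # small relative error vs the exact SVD
```

The key saving is that the exact SVD runs on the (k + oversample)-row matrix B instead of on A itself; for matrices with rapidly decaying spectra the approximation is typically accurate to a few digits.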

How do I interpret the result?

The singular values are listed in decreasing order; there are min(m, n) of them for an m-by-n matrix. The rank is the number of nonzero singular values, and the condition number is the ratio of the largest singular value to the smallest: a large condition number signals that the matrix is nearly singular and numerically ill-conditioned, while a rank-deficient matrix (one or more zero singular values) has an infinite condition number. Refer to the worked examples section on this page for real-world context.
