Singular Values Calculator

Our free singular values calculator computes the singular values of any matrix. Get worked examples, visual aids, and downloadable results.

Formula

sigma_i = sqrt(lambda_i(A^T A))

Singular values are the square roots of the eigenvalues of A-transpose times A. They represent the stretching factors of the linear transformation defined by the matrix.
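The formula can be checked numerically. The sketch below (using NumPy, which is an assumption about your environment, not part of the calculator) computes the eigenvalues of A-transpose times A, takes square roots, and compares against the library's SVD routine:

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

# Eigenvalues of A^T A (symmetric, so eigvalsh applies);
# clip tiny negative round-off before taking square roots
eigvals = np.linalg.eigvalsh(A.T @ A)
sigmas_from_eig = np.sqrt(np.clip(eigvals, 0.0, None))[::-1]  # descending order

# The library SVD routine should agree
sigmas_svd = np.linalg.svd(A, compute_uv=False)
print(sigmas_from_eig[:2])  # approximately [5. 3.]
print(sigmas_svd)           # approximately [5. 3.]
```

A 2x3 matrix has min(2, 3) = 2 singular values, so A^T A (3x3) contributes one extra zero eigenvalue that the SVD routine omits.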

Worked Examples

Example 1: Singular Values of a 2x3 Matrix

Problem: Find the singular values of A = [[3,2,2],[2,3,-2]].

Solution: Compute A A^T:
A A^T = [[17, 8], [8, 17]]
Eigenvalues: trace = 34, det = 17*17 - 8*8 = 225
lambda = (34 +/- sqrt(1156 - 900))/2 = (34 +/- 16)/2
lambda1 = 25, lambda2 = 9
Singular values: sigma1 = sqrt(25) = 5, sigma2 = sqrt(9) = 3

Result: sigma1 = 5, sigma2 = 3, condition number = 5/3 ≈ 1.667
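The hand calculation above can be reproduced step by step. This sketch (assuming NumPy is available) forms A A^T, applies the quadratic formula to its characteristic polynomial via the trace and determinant, and takes square roots:

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])
M = A @ A.T                              # [[17, 8], [8, 17]]

tr, det = np.trace(M), np.linalg.det(M)  # 34, 225
disc = np.sqrt(tr**2 - 4 * det)          # sqrt(1156 - 900) = 16
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2  # 25, 9

sigma1, sigma2 = np.sqrt(lam1), np.sqrt(lam2)
print(sigma1, sigma2)                    # 5.0 3.0 (up to round-off)
```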

Example 2: Rank-Deficient Matrix

Problem: Find the singular values of A = [[1,2,3],[2,4,6]] (rank 1).

Solution: Compute A A^T:
A A^T = [[14, 28], [28, 56]]
Eigenvalues: trace = 70, det = 14*56 - 28*28 = 0
lambda1 = 70, lambda2 = 0
Singular values: sigma1 = sqrt(70) ≈ 8.3666, sigma2 = 0
Rank = 1 (one nonzero singular value)

Result: sigma1 ≈ 8.3666, sigma2 = 0, rank = 1, condition = infinity
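Rank deficiency shows up directly in the computed singular values. A quick NumPy check (illustrative only; the library routines are assumed, not part of the worked solution):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # row 2 = 2 * row 1, so rank 1

sigmas = np.linalg.svd(A, compute_uv=False)
print(sigmas)                     # sigma1 ≈ 8.3666, sigma2 ≈ 0 (round-off)
print(np.linalg.matrix_rank(A))   # 1
```

In floating point the second singular value comes out as a tiny number near machine precision rather than an exact zero, which is why rank determination uses a threshold.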

Frequently Asked Questions

What are singular values of a matrix and what do they represent?

Singular values are non-negative real numbers that describe the stretching factors of a matrix when viewed as a linear transformation. For any matrix A, the singular values are the square roots of the eigenvalues of A-transpose times A (or equivalently, A times A-transpose). They are typically denoted sigma-1, sigma-2, etc., arranged in decreasing order. Geometrically, when a matrix transforms a unit sphere, it becomes an ellipsoid, and the singular values are the lengths of the semi-axes of that ellipsoid. The largest singular value gives the maximum stretching factor, the smallest gives the minimum, and their ratio (the condition number) measures how distorted the transformation is.

How is Singular Value Decomposition (SVD) related to singular values?

Singular Value Decomposition factors any m-by-n matrix A into three matrices: A = U times Sigma times V-transpose. U is an m-by-m orthogonal matrix whose columns are the left singular vectors, Sigma is an m-by-n diagonal matrix containing the singular values on its diagonal, and V is an n-by-n orthogonal matrix whose columns are the right singular vectors. The singular values appear explicitly as the diagonal entries of Sigma. SVD is one of the most important decompositions in all of mathematics and computing, providing the foundation for principal component analysis, image compression, recommender systems, and numerous other applications. Every matrix has an SVD, making it universally applicable.
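The factorization A = U Sigma V^T can be verified by reassembling the pieces. A minimal sketch, assuming NumPy (note that Sigma must be embedded in an m-by-n matrix to make the shapes conform):

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

U, s, Vt = np.linalg.svd(A)        # full SVD: U is 2x2, Vt is 3x3
Sigma = np.zeros(A.shape)          # m-by-n "diagonal" matrix
np.fill_diagonal(Sigma, s)         # singular values on the diagonal

A_rebuilt = U @ Sigma @ Vt
print(np.allclose(A, A_rebuilt))   # True
```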

How do singular values relate to the rank of a matrix?

The rank of a matrix equals the number of nonzero singular values. This provides a robust and reliable way to determine matrix rank, especially for matrices that are numerically rank-deficient (nearly singular). In practice, singular values that are extremely small relative to the largest singular value are treated as effectively zero, which defines the numerical rank. This threshold-based approach is more reliable than checking whether the determinant equals zero, because determinants can be misleadingly close to zero for well-conditioned matrices or misleadingly far from zero for ill-conditioned ones. The gap between consecutive singular values, particularly between the smallest nonzero one and zero, indicates how well-determined the rank is.
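The threshold-based approach described above can be sketched as a small helper. The function name and tolerance below are illustrative choices, not a standard API:

```python
import numpy as np

def numerical_rank(A, rtol=1e-12):
    """Count singular values above rtol * sigma_max (a common convention)."""
    s = np.linalg.svd(A, compute_uv=False)
    if s.size == 0 or s[0] == 0.0:
        return 0
    return int(np.sum(s > rtol * s[0]))

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # dependent: 2 * row 1
              [1.0, 1.0, 1.0]])   # independent of row 1
print(numerical_rank(A))          # 2
```

NumPy's own `np.linalg.matrix_rank` implements the same idea with a default tolerance scaled by the matrix dimensions and machine precision.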

What is the Frobenius norm and how does it relate to singular values?

The Frobenius norm of a matrix is the square root of the sum of the squares of all its entries, analogous to the Euclidean norm for vectors. It equals the square root of the sum of the squares of all singular values. This relationship provides an elegant connection between matrix entries and matrix geometry. The Frobenius norm gives a measure of the overall magnitude of the matrix. In low-rank approximation (truncated SVD), the Frobenius norm of the error matrix equals the square root of the sum of squares of the discarded singular values. This result (the Eckart-Young theorem) guarantees that truncated SVD gives the best possible low-rank approximation in the Frobenius norm sense.
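The entry-wise and singular-value formulas for the Frobenius norm can be confirmed side by side (a quick numerical check, assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

fro_entries = np.sqrt(np.sum(A**2))         # sqrt of sum of squared entries
s = np.linalg.svd(A, compute_uv=False)
fro_sigmas = np.sqrt(np.sum(s**2))          # sqrt(5^2 + 3^2) = sqrt(34)

print(fro_entries, fro_sigmas)              # both ≈ 5.8310
print(np.isclose(fro_entries, fro_sigmas))  # True
```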

How are singular values used in image compression and data reduction?

In image compression, an image is stored as a matrix of pixel values, and SVD decomposes it into singular value components. Each component captures a different level of detail, with larger singular values representing more important features. By keeping only the k largest singular values and their corresponding singular vectors, you create a rank-k approximation that captures the essential visual information while dramatically reducing storage. For example, a 1000-by-1000 image requires storing 1 million values. A rank-50 approximation stores only 50 times (1000 + 1000 + 1) values, roughly 100,000 total, achieving 10-to-1 compression while preserving most visual quality. This principle extends to any data matrix in machine learning and statistics.

How do singular values help in solving least-squares problems?

Singular values provide complete insight into least-squares problems. The SVD of the coefficient matrix A reveals both the solution and its sensitivity. The least-squares solution is x = V times Sigma-pseudoinverse times U-transpose times b, where Sigma-pseudoinverse replaces each nonzero singular value with its reciprocal. Small singular values cause their reciprocals to be large, amplifying noise in the right-hand side b. This is why ill-conditioned systems (with small singular values relative to large ones) produce unreliable least-squares solutions. Truncated SVD and Tikhonov regularization address this by either ignoring small singular values or damping their contribution, trading a small amount of bias for much better stability.
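The pseudoinverse formula x = V Sigma-pseudoinverse U^T b can be spelled out directly and checked against a library least-squares solver. A minimal sketch, assuming NumPy and a small made-up system:

```python
import numpy as np

# Illustrative overdetermined system: fit a line through three points
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Least-squares solution via the SVD pseudoinverse:
# x = V Sigma^+ U^T b, where Sigma^+ inverts each nonzero singular value
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

# Agrees with the library's least-squares solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```

The division by `s` is exactly where small singular values cause trouble: a tiny sigma produces a huge reciprocal, which is what truncated SVD and Tikhonov regularization are designed to tame.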
