Cholesky Decomposition Calculator

Our free linear algebra calculator solves Cholesky decomposition problems. Get worked examples, visual aids, and downloadable results.

Formula

A = L * L^T

The Cholesky decomposition factors a symmetric positive definite matrix A into the product of a lower triangular matrix L and its transpose L^T. For a 2x2 matrix: l11 = sqrt(a11), l21 = a12/l11, l22 = sqrt(a22 - l21^2).

Worked Examples

Example 1: Cholesky of a 2x2 Positive Definite Matrix

Problem: Find the Cholesky decomposition of A = [[4, 2], [2, 5]].

Solution:
Step 1: l11 = sqrt(4) = 2
Step 2: l21 = 2 / 2 = 1
Step 3: l22 = sqrt(5 - 1^2) = sqrt(4) = 2

L = [[2, 0], [1, 2]]
L^T = [[2, 1], [0, 2]]

Verification: L * L^T = [[4, 2], [2, 5]] = A
Determinant: det(A) = (l11 * l22)^2 = (2 * 2)^2 = 16
Check: det(A) = 4*5 - 2*2 = 16

Result: L = [[2, 0], [1, 2]] | det(A) = 16 | log(det) = 2.773
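The hand computation above can be checked numerically. A minimal sketch using NumPy, whose `numpy.linalg.cholesky` returns the lower-triangular factor (this is an illustration, not the calculator's own code):

```python
import numpy as np

# Matrix from Example 1 (symmetric positive definite)
A = np.array([[4.0, 2.0],
              [2.0, 5.0]])

# numpy.linalg.cholesky returns the lower-triangular factor L
L = np.linalg.cholesky(A)

# L should match the hand-computed [[2, 0], [1, 2]]
# det(A) = (l11 * l22)^2 and log(det) = 2 * sum(log(diag(L)))
det_from_L = np.prod(np.diag(L)) ** 2
log_det = 2.0 * np.sum(np.log(np.diag(L)))
```

Here `det_from_L` recovers 16 and `log_det` recovers about 2.773, matching the worked result.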

Example 2: Solving a System via Cholesky

Problem: Solve Ax = b where A = [[9, 3], [3, 5]] and b = [12, 7].

Solution:
Cholesky: l11 = sqrt(9) = 3, l21 = 3/3 = 1, l22 = sqrt(5 - 1^2) = 2
L = [[3, 0], [1, 2]]

Forward solve Ly = b: y1 = 12/3 = 4, y2 = (7 - 1*4)/2 = 1.5
Back solve L^T x = y: x2 = 1.5/2 = 0.75, x1 = (4 - 1*0.75)/3 ≈ 1.083

Solution: x ≈ [1.083, 0.75]

Result: x = [1.083, 0.75] | L = [[3, 0], [1, 2]]
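The forward/back substitution steps in Example 2 can be sketched in NumPy. The helper functions `forward_sub` and `back_sub` are illustrative names, not part of any library:

```python
import numpy as np

A = np.array([[9.0, 3.0],
              [3.0, 5.0]])
b = np.array([12.0, 7.0])

L = np.linalg.cholesky(A)  # [[3, 0], [1, 2]]

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U."""
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

y = forward_sub(L, b)   # [4.0, 1.5]
x = back_sub(L.T, y)    # [~1.083, 0.75]
```

In production code you would typically reach for a library routine such as SciPy's `cho_factor`/`cho_solve` rather than hand-rolled substitution loops.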

Frequently Asked Questions

How is the Cholesky decomposition computed step by step?

For a 2x2 matrix A = [[a, b], [b, d]], the Cholesky factor L = [[l11, 0], [l21, l22]] is computed as follows. First, l11 = sqrt(a). Then l21 = b / l11. Finally, l22 = sqrt(d - l21^2). The algorithm proceeds column by column for larger matrices. For column j, first compute l_jj = sqrt(a_jj - sum of l_jk^2 for k < j), then for each i > j compute l_ij = (a_ij - sum of l_ik * l_jk for k < j) / l_jj. The total cost for an n x n matrix is n^3/3 floating point operations, which is half the cost of standard LU decomposition.
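The column-by-column algorithm described above can be written out directly. A minimal sketch (the function name `cholesky_lower` is our own; NumPy's built-in `numpy.linalg.cholesky` does the same job):

```python
import numpy as np

def cholesky_lower(A):
    """Column-by-column Cholesky factorization: returns L with A = L @ L.T."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: l_jj = sqrt(a_jj - sum of l_jk^2 for k < j)
        s = A[j, j] - L[j, :j] @ L[j, :j]
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(s)
        # Entries below the diagonal: l_ij = (a_ij - sum l_ik * l_jk) / l_jj
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

For example, `cholesky_lower` applied to [[4, 2], [2, 5]] reproduces the factor [[2, 0], [1, 2]] from Example 1.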

Why is Cholesky decomposition preferred for solving symmetric positive definite systems?

Cholesky decomposition is preferred for several compelling reasons. First, it requires only n^3/3 operations compared to 2n^3/3 for general LU decomposition, making it roughly twice as fast. Second, it is backward stable, meaning roundoff errors are well-controlled. Third, it requires no pivoting, simplifying implementation and improving cache performance. Fourth, it guarantees real results since all intermediate quantities under square roots are positive. Fifth, it preserves symmetry and positive definiteness in the factors. These advantages make it the method of choice for large-scale systems arising in finite elements, statistics, optimization, and machine learning.

How is Cholesky decomposition used in statistics and machine learning?

In statistics, Cholesky decomposition is essential for working with multivariate normal distributions. The log-likelihood involves log(det(Sigma)), which equals 2*sum(log(L_ii)) using Cholesky, avoiding explicit determinant computation. Sampling from a multivariate normal N(mu, Sigma) is done by computing z = mu + L*u where u is a standard normal vector. In machine learning, Gaussian processes use Cholesky decomposition to solve kernel matrix systems efficiently. Bayesian inference frameworks use it for computing posterior distributions. The decomposition also appears in Kalman filters, factor analysis, and any algorithm requiring covariance matrix operations.
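Both statistical uses mentioned above, the log-determinant identity and multivariate normal sampling, fit in a few lines of NumPy. The covariance `Sigma` and mean `mu` below are illustrative values, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SPD covariance and mean for N(mu, Sigma)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
mu = np.array([1.0, -1.0])

L = np.linalg.cholesky(Sigma)

# log(det(Sigma)) = 2 * sum(log(L_ii)) -- no explicit determinant needed
log_det = 2.0 * np.sum(np.log(np.diag(L)))

# Sample z ~ N(mu, Sigma) via z = mu + L @ u, with u standard normal
u = rng.standard_normal(2)
z = mu + L @ u
```

The log-determinant form is preferred in likelihood code because `det(Sigma)` itself can underflow or overflow for large matrices while the sum of logs stays well-scaled.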

What is the relationship between Cholesky and LDL decomposition?

The LDL decomposition factors a symmetric matrix as A = L * D * L^T, where L is unit lower triangular (ones on the diagonal) and D is diagonal. It is closely related to Cholesky: if A = L_c * L_c^T is the Cholesky factorization, then L = L_c * D^(-1/2) and D = diag(L_c)^2. The LDL decomposition avoids computing square roots, making it faster and applicable to symmetric indefinite matrices (not just positive definite ones). LDL is preferred in some applications like interior point methods in optimization where the matrix may become indefinite during iterations.
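The conversion between the two factorizations described above can be verified numerically. A minimal sketch recovering the LDL^T factors from a Cholesky factor:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 5.0]])

Lc = np.linalg.cholesky(A)      # A = Lc @ Lc.T

# D = diag(Lc)^2, and L = Lc * D^(-1/2) is unit lower triangular
d = np.diag(Lc) ** 2
L = Lc / np.diag(Lc)            # divides each column j by Lc[j, j]

# Reconstruction: A = L @ diag(d) @ L.T
A_rebuilt = L @ np.diag(d) @ L.T
```

Note that a production LDL^T routine (e.g. a Bunch-Kaufman factorization) computes the factors directly without square roots; going through Cholesky as shown here is only to illustrate the relationship.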

Can Cholesky decomposition be used for matrix inversion?

Yes, Cholesky decomposition provides an efficient method for inverting symmetric positive definite matrices. After factoring A = L * L^T, the inverse is computed by first inverting L (which is triangular and therefore easy to invert) and then computing A^(-1) = (L^(-1))^T * L^(-1). Alternatively, you can solve A * x_i = e_i for each standard basis vector e_i using forward and back substitution with L and L^T. The inverse inherits the positive definite and symmetric properties of the original matrix. In practice, computing the full inverse is often unnecessary; solving Ax = b via forward and back substitution is more efficient and numerically stable.
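The inversion identity above, A^(-1) = (L^(-1))^T * L^(-1), is easy to check in NumPy. A sketch (using the general-purpose `np.linalg.inv` on the triangular factor for brevity; optimized libraries use a dedicated triangular inverse):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 5.0]])

L = np.linalg.cholesky(A)

# Invert the triangular factor, then A^(-1) = L^(-T) @ L^(-1)
L_inv = np.linalg.inv(L)
A_inv = L_inv.T @ L_inv
```

As the text notes, forming the full inverse is rarely necessary; solving Ax = b via two triangular solves is cheaper and more stable.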

What happens when Cholesky decomposition fails?

Cholesky decomposition fails when the matrix is not positive definite, which manifests as encountering a negative number or zero under a square root during the algorithm. This failure is actually useful as a definitive test for positive definiteness: a matrix is positive definite if and only if the Cholesky algorithm completes successfully. If the decomposition fails at step j, it means the j-th leading principal minor is non-positive. Common causes include: the matrix is indefinite, it is only positive semi-definite (has zero eigenvalues), numerical errors have destroyed positive definiteness, or the matrix was not truly symmetric to begin with.
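The failure mode described above doubles as a practical positive-definiteness test. A minimal sketch (the helper name `is_positive_definite` is our own; `numpy.linalg.cholesky` raises `LinAlgError` on non-positive-definite input):

```python
import numpy as np

def is_positive_definite(A):
    """A symmetric matrix is positive definite iff Cholesky succeeds."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

spd = np.array([[4.0, 2.0], [2.0, 5.0]])    # positive definite
indef = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues 3 and -1: indefinite
```

This attempt-and-catch test is both cheaper and more reliable than computing all eigenvalues when the only question is whether the matrix is positive definite.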
