Diagonalize Matrix Calculator
Solve matrix diagonalization problems step-by-step with our free calculator. See formulas, worked examples, and clear explanations.
Formula
A = P D P^(-1), where D = diag(lambda1, lambda2) and P = [v1 | v2]
The matrix A is decomposed into the product of three matrices: P (matrix of eigenvectors as columns), D (diagonal matrix of eigenvalues), and P^(-1) (inverse of P). This decomposition exists when A has enough linearly independent eigenvectors.
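The factorization can be checked numerically by multiplying P, D, and P^(-1) back together. A minimal sketch in plain Python, using hypothetical values for P and D (eigenvalues 2 and -1 with eigenvectors [1, 1] and [1, -1]); the helper names are our own:

```python
def matmul2(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # 2x2 inverse via the adjugate formula
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical example: eigenvalues 2 and -1, eigenvectors [1,1] and [1,-1]
P = [[1, 1], [1, -1]]
D = [[2, 0], [0, -1]]
A = matmul2(matmul2(P, D), inv2(P))  # A = P D P^(-1)
print(A)  # [[0.5, 1.5], [1.5, 0.5]]
```

Reversing the process (starting from this A and recovering P and D) is exactly what the calculator does.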
Worked Examples
Example 1: Diagonalizing a 2x2 Matrix
Problem: Diagonalize A = [[4, 1], [2, 3]].
Solution:
Characteristic equation: lambda^2 - 7*lambda + 10 = 0
(lambda - 5)(lambda - 2) = 0
Eigenvalues: lambda1 = 5, lambda2 = 2

For lambda1 = 5: (A - 5I)v = 0 -> [[-1, 1], [2, -2]]v = 0 -> v1 = [1, 1]
For lambda2 = 2: (A - 2I)v = 0 -> [[2, 1], [2, 1]]v = 0 -> v2 = [-1, 2]

P = [[1, -1], [1, 2]], D = [[5, 0], [0, 2]]
Result: A = PDP^(-1) where D = diag(5, 2), P = [[1, -1], [1, 2]]
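Both eigenpairs from Example 1 can be verified by checking the defining equation A*v = lambda*v directly (a quick sanity check in plain Python; `matvec2` is our own helper name):

```python
A = [[4, 1], [2, 3]]

# Eigenpairs from Example 1
pairs = [(5, [1, 1]), (2, [-1, 2])]

def matvec2(M, v):
    # 2x2 matrix times a 2-vector
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

for lam, v in pairs:
    assert matvec2(A, v) == [lam * v[0], lam * v[1]]  # A v = lambda v
```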
Example 2: Non-Diagonalizable Matrix
Problem: Try to diagonalize A = [[2, 1], [0, 2]].
Solution:
Characteristic equation: (lambda - 2)^2 = 0
Repeated eigenvalue: lambda = 2 (algebraic multiplicity 2)

For lambda = 2: (A - 2I)v = 0 -> [[0, 1], [0, 0]]v = 0 -> v = [1, 0]
Only one linearly independent eigenvector (geometric multiplicity = 1)

Since geometric < algebraic multiplicity, the matrix is defective.
Result: Not diagonalizable (defective) - Jordan form: [[2, 1], [0, 2]]
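The defect in Example 2 can be detected numerically: the geometric multiplicity is the nullity of A - lambda*I, i.e. n minus its rank. A minimal sketch for the 2x2 case (the `rank2` helper and its tolerance are our own choices):

```python
# A - 2I for the defective matrix [[2, 1], [0, 2]]
B = [[0, 1], [0, 0]]

def rank2(M, eps=1e-12):
    # rank of a 2x2 matrix: 2 if det is nonzero, 0 if all entries are zero, else 1
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    if abs(det) > eps:
        return 2
    if all(abs(x) <= eps for row in M for x in row):
        return 0
    return 1

geometric_mult = 2 - rank2(B)   # nullity = n - rank
algebraic_mult = 2              # from (lambda - 2)^2
print(geometric_mult < algebraic_mult)  # True -> defective
```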
Frequently Asked Questions
What does it mean to diagonalize a matrix?
Diagonalizing a matrix means finding an invertible matrix P and a diagonal matrix D such that A = P * D * P^(-1). The diagonal entries of D are the eigenvalues of A, and the columns of P are the corresponding eigenvectors. Diagonalization transforms a matrix into its simplest form, making computations like matrix powers trivial since A^n = P * D^n * P^(-1), and raising a diagonal matrix to a power simply raises each diagonal entry to that power. Not all matrices can be diagonalized, but those that can are significantly easier to work with in applications.
When is a matrix diagonalizable?
A matrix is diagonalizable if and only if it has n linearly independent eigenvectors, where n is the size of the matrix. This happens when the geometric multiplicity (number of independent eigenvectors) equals the algebraic multiplicity (multiplicity as root of characteristic polynomial) for every eigenvalue. Sufficient conditions include having n distinct eigenvalues (always diagonalizable) or being a real symmetric matrix (always diagonalizable with orthogonal eigenvectors). A matrix that is not diagonalizable is called defective. Defective matrices can still be decomposed using Jordan normal form, which is a generalization of diagonalization.
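The real-symmetric case can be illustrated concretely: for the hypothetical symmetric matrix [[2, 1], [1, 2]] (eigenvalues 3 and 1), the eigenvectors are not just independent but orthogonal:

```python
# Real symmetric matrix: always diagonalizable, with orthogonal eigenvectors
S = [[2, 1], [1, 2]]   # eigenvalues 3 and 1

v1 = [1, 1]    # eigenvector for lambda = 3
v2 = [1, -1]   # eigenvector for lambda = 1

# Check S v = lambda v for both eigenpairs
assert [S[0][0]*v1[0] + S[0][1]*v1[1], S[1][0]*v1[0] + S[1][1]*v1[1]] == [3, 3]
assert [S[0][0]*v2[0] + S[0][1]*v2[1], S[1][0]*v2[0] + S[1][1]*v2[1]] == [1, -1]

# The eigenvectors are orthogonal: their dot product is zero
assert v1[0]*v2[0] + v1[1]*v2[1] == 0
```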
How do you find the diagonalization of a 2x2 matrix?
For a 2x2 matrix, first find eigenvalues by solving the characteristic equation det(A - lambda*I) = 0, which gives a quadratic equation. Then find an eigenvector for each eigenvalue by solving (A - lambda*I)v = 0. Form the matrix P by placing eigenvectors as columns, and D is the diagonal matrix of eigenvalues in the same order. Finally compute P^(-1) using the 2x2 inverse formula. The decomposition A = PDP^(-1) can be verified by multiplying the three matrices together. If the eigenvalues are complex, the matrix is not diagonalizable over the real numbers; it is still diagonalizable over the complex numbers, since a real 2x2 matrix with complex eigenvalues always has two distinct (conjugate) eigenvalues.
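The steps above can be sketched as a small Python function. This is a minimal sketch covering only the distinct-real-eigenvalue case; the function name and the tolerance are our own choices:

```python
import math

def diagonalize_2x2(A):
    """Diagonalize a 2x2 matrix with two distinct real eigenvalues.
    Returns (P, D); raises ValueError otherwise."""
    a, b = A[0]
    c, d = A[1]
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det                # discriminant of the characteristic equation
    if disc <= 0:
        raise ValueError("needs two distinct real eigenvalues")
    l1 = (tr + math.sqrt(disc)) / 2
    l2 = (tr - math.sqrt(disc)) / 2

    def eigvec(lam):
        # Solve (A - lam*I) v = 0 using a nonzero row of A - lam*I
        if abs(b) > 1e-12:
            return [b, lam - a]         # row [a-lam, b] gives v = [b, lam-a]
        if abs(c) > 1e-12:
            return [lam - d, c]         # row [c, d-lam] gives v = [lam-d, c]
        return [1, 0] if abs(a - lam) < 1e-12 else [0, 1]   # A was already diagonal

    v1, v2 = eigvec(l1), eigvec(l2)
    P = [[v1[0], v2[0]], [v1[1], v2[1]]]   # eigenvectors as columns
    D = [[l1, 0], [0, l2]]
    return P, D

P, D = diagonalize_2x2([[4, 1], [2, 3]])
print(D)  # [[5.0, 0], [0, 2.0]]
```

Note that eigenvectors are only unique up to scaling, so the P returned here may differ from the P in Example 1 by a column-by-column scale factor; both are valid diagonalizations.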
What is the characteristic equation of a matrix?
The characteristic equation is det(A - lambda*I) = 0, where lambda is the eigenvalue variable and I is the identity matrix. For a 2x2 matrix [[a, b], [c, d]], this simplifies to lambda^2 - (a+d)*lambda + (ad-bc) = 0, or equivalently lambda^2 - trace(A)*lambda + det(A) = 0. The roots of this polynomial are the eigenvalues. The discriminant (trace^2 - 4*det) determines the nature of eigenvalues: positive means two distinct real eigenvalues, zero means a repeated eigenvalue, and negative means complex conjugate eigenvalues. The characteristic equation is fundamental to spectral analysis in linear algebra.
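The trace/determinant form of the characteristic equation makes the discriminant classification easy to compute. A small sketch (the function name is our own):

```python
def classify_eigenvalues(A):
    """Classify the eigenvalues of a 2x2 matrix from its characteristic
    equation lambda^2 - trace(A)*lambda + det(A) = 0."""
    a, b = A[0]
    c, d = A[1]
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det
    if disc > 0:
        return "two distinct real eigenvalues"
    if disc == 0:
        return "repeated real eigenvalue"
    return "complex conjugate eigenvalues"

print(classify_eigenvalues([[4, 1], [2, 3]]))   # two distinct real eigenvalues
print(classify_eigenvalues([[2, 1], [0, 2]]))   # repeated real eigenvalue
print(classify_eigenvalues([[0, -1], [1, 0]]))  # complex conjugate eigenvalues
```

The last matrix is a 90-degree rotation, a standard instance of complex conjugate eigenvalues (here +/- i).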
Why is diagonalization useful in computing matrix powers?
Diagonalization makes computing matrix powers extremely efficient. Since A = PDP^(-1), we have A^n = PD^nP^(-1). Raising a diagonal matrix to a power simply raises each diagonal entry to that power, which is trivial. Without diagonalization, computing A^100 by repeated multiplication would require 99 matrix multiplications. With diagonalization, you compute D^100 (for a 2x2 matrix, just raise two numbers to the 100th power), then multiply three matrices. This is especially important in applications like Markov chains (finding steady-state probabilities), solving systems of differential equations, and computing Fibonacci numbers, where matrix powers arise naturally.
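Using the P and D from Example 1, A^n = P D^n P^(-1) reduces to two scalar powers and two 2x2 multiplications (a minimal sketch; the function name is our own):

```python
def mat_power_diag(P, D, Pinv, n):
    # A^n = P D^n P^(-1); D^n just raises each diagonal entry to the n-th power
    Dn = [[D[0][0] ** n, 0], [0, D[1][1] ** n]]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(mul(P, Dn), Pinv)

# From Example 1: A = [[4, 1], [2, 3]] with D = diag(5, 2), P = [[1, -1], [1, 2]]
P = [[1, -1], [1, 2]]
D = [[5, 0], [0, 2]]
Pinv = [[2/3, 1/3], [-1/3, 1/3]]   # inverse of P (det(P) = 3)
A10 = mat_power_diag(P, D, Pinv, 10)
print(round(A10[0][0]))  # 6510758
```

Only two scalar exponentiations (5^10 and 2^10) are needed, no matter how large n gets.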
Can you diagonalize a matrix with a zero eigenvalue?
Yes, having a zero eigenvalue does not prevent diagonalization. A zero eigenvalue simply means the matrix is singular (non-invertible), but diagonalizability depends on having enough linearly independent eigenvectors, not on the eigenvalue values. For example, [[0, 0], [0, 1]] is already diagonal with eigenvalues 0 and 1. The matrix [[0, 1], [0, 0]] has eigenvalue 0 with algebraic multiplicity 2 but only one independent eigenvector, so it is not diagonalizable. The key distinction is between the algebraic and geometric multiplicities, not whether eigenvalues are zero or nonzero.
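A less obvious hypothetical example than the diagonal one above: [[1, 1], [0, 0]] is singular (det = 0, eigenvalues 1 and 0) yet diagonalizable, because its two eigenvalues are distinct:

```python
# Hypothetical singular matrix with eigenvalues 1 and 0 (det = 0)
A = [[1, 1], [0, 0]]
P = [[1, 1], [0, -1]]     # eigenvectors [1, 0] (lambda = 1) and [1, -1] (lambda = 0)
D = [[1, 0], [0, 0]]
Pinv = [[1, 1], [0, -1]]  # P happens to be its own inverse here

def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert mul(mul(P, D), Pinv) == A   # diagonalizable despite the zero eigenvalue
```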