Inverse Matrix Calculator
Our free linear algebra calculator solves inverse matrix problems. Get worked examples, visual aids, and downloadable results.
Formula
A^(-1) = (1/det(A)) * adj(A)
Where A^(-1) is the inverse matrix, det(A) is the determinant of A, and adj(A) is the adjugate (transpose of the cofactor matrix). The inverse exists only when det(A) is not zero.
Worked Examples
Example 1: 2x2 Matrix Inverse
Problem: Find the inverse of the matrix [[4, 7], [2, 6]].
Solution:
Determinant = 4(6) - 7(2) = 24 - 14 = 10
Adjugate = [[6, -7], [-2, 4]]
Inverse = (1/10) * [[6, -7], [-2, 4]] = [[0.6, -0.7], [-0.2, 0.4]]
Verification: [[4, 7], [2, 6]] * [[0.6, -0.7], [-0.2, 0.4]] = [[1, 0], [0, 1]]
Result: Inverse: [[0.6, -0.7], [-0.2, 0.4]] with determinant = 10
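The 2x2 arithmetic above can be sketched in a few lines of Python. The function name and error handling here are illustrative choices, not part of the calculator:

```python
# Sketch of the 2x2 inverse from Example 1: swap the diagonal entries,
# negate the off-diagonal entries, and divide by the determinant.

def inverse_2x2(m):
    """Invert a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (determinant is zero)")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```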
Example 2: 3x3 Matrix Inverse
Problem: Find the inverse of [[2, 1, 0], [1, 3, 0], [0, 0, 1]].
Solution:
Determinant = 2(3*1 - 0*0) - 1(1*1 - 0*0) + 0 = 6 - 1 = 5
Cofactor matrix computed for all 9 entries
Adjugate = [[3, -1, 0], [-1, 2, 0], [0, 0, 5]]
Inverse = (1/5) * [[3, -1, 0], [-1, 2, 0], [0, 0, 5]] = [[0.6, -0.2, 0], [-0.2, 0.4, 0], [0, 0, 1]]
Result: Inverse: [[0.6, -0.2, 0], [-0.2, 0.4, 0], [0, 0, 1]] with determinant = 5
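A sketch of the adjugate method from Example 2 in Python, assuming a 3x3 matrix stored as nested lists (the helper names are illustrative):

```python
# Compute all nine cofactors, transpose them (the adjugate), and divide
# by the determinant found via cofactor expansion along the first row.

def minor_det(m, row, col):
    """Determinant of the 2x2 submatrix left after deleting row and col."""
    sub = [[m[r][c] for c in range(3) if c != col]
           for r in range(3) if r != row]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def inverse_3x3(m):
    # Determinant by cofactor expansion along the first row.
    det = sum((-1) ** c * m[0][c] * minor_det(m, 0, c) for c in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # Entry [r][c] of the inverse is cofactor [c][r] / det: the swapped
    # indices implement the transpose that turns cofactors into the adjugate.
    return [[(-1) ** (r + c) * minor_det(m, c, r) / det for c in range(3)]
            for r in range(3)]

print(inverse_3x3([[2, 1, 0], [1, 3, 0], [0, 0, 1]]))
```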
Frequently Asked Questions
What is the inverse of a matrix and when does it exist?
The inverse of a matrix A is another matrix, denoted A^(-1), such that A * A^(-1) = A^(-1) * A = I, the identity matrix. A matrix inverse exists if and only if the determinant of the matrix is non-zero, meaning the matrix is non-singular. Square matrices that have an inverse are called invertible or non-singular matrices. For a 2x2 matrix, the inverse is computed by swapping the diagonal elements, negating the off-diagonal elements, and dividing by the determinant. For larger matrices, methods such as cofactor expansion, Gauss-Jordan elimination, or LU decomposition are typically used to find the inverse efficiently.
How is the inverse of a 3x3 matrix calculated?
For a 3x3 matrix, the inverse is the adjugate (the transpose of the cofactor matrix) divided by the determinant. First, compute the determinant using cofactor expansion along the first row. Then calculate each of the nine cofactors by finding the determinant of the 2x2 submatrix obtained by removing that element's row and column, applying the checkerboard sign pattern. The transpose of this cofactor matrix gives the adjugate. Finally, divide each element of the adjugate by the determinant. This method is known as the classical adjoint method; it is exact, though computationally expensive for matrices larger than 3x3.
What does it mean when a matrix is singular?
A singular matrix is one whose determinant equals zero, which means it has no inverse. Geometrically, a singular matrix collapses at least one dimension, mapping some nonzero vectors to the zero vector. This means the linear transformation represented by the matrix is not bijective and cannot be reversed. Common causes include having a row or column of all zeros, having two identical rows or columns, or having one row that is a scalar multiple or linear combination of other rows. In practical applications, near-singular matrices with very small determinants can cause numerical instability and should be handled with care.
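The singularity test can be sketched with a determinant check; the tolerance below is an illustrative choice, not a universal threshold:

```python
# Classify a 2x2 matrix by its determinant: exactly zero means singular,
# very small means near-singular (numerically risky to invert).

def det_2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def classify(m, tol=1e-12):
    d = det_2x2(m)
    if d == 0:
        return "singular"
    if abs(d) < tol:
        return "near-singular"
    return "invertible"

print(classify([[1, 2], [2, 4]]))  # second row is 2x the first -> singular
print(classify([[4, 7], [2, 6]]))  # invertible
```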
What are common applications of matrix inversion?
Matrix inversion is fundamental in solving systems of linear equations, where x = A^(-1)b gives the solution directly. In computer graphics, inverse matrices are used for coordinate transformations, camera projections, and undoing rotations or scaling operations. In statistics, the inverse of the covariance matrix appears in the multivariate normal distribution and in the least squares regression formula. Control systems engineering uses matrix inversion for state-space analysis and controller design. Ciphers such as the Hill cipher use modular matrix inversion for encoding and decoding messages, and in economics, input-output analysis relies on the Leontief inverse matrix.
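Solving a linear system via x = A^(-1)b can be sketched for the 2x2 case; in production code one would usually solve the system by elimination rather than form the inverse explicitly. The system below is a hypothetical example:

```python
# Solve the 2x2 system A x = b by forming A^(-1) and multiplying.

def solve_2x2(m, b):
    (a, bb), (c, d) = m
    det = a * d - bb * c
    if det == 0:
        raise ValueError("no unique solution: matrix is singular")
    inv = [[d / det, -bb / det], [-c / det, a / det]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# 4x + 7y = 18 and 2x + 6y = 12 have the exact solution x = 2.4, y = 1.2.
print(solve_2x2([[4, 7], [2, 6]], [18, 12]))
```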
What is the relationship between determinant and inverse?
The determinant and inverse are intimately connected. A matrix has an inverse if and only if its determinant is non-zero. The inverse is computed as the adjugate matrix divided by the determinant, so the determinant appears as a scaling factor in every element of the inverse. When the determinant is very small but non-zero, the inverse elements become very large, indicating numerical instability. The determinant of the inverse equals the reciprocal of the original determinant, and the determinant of the product of two matrices equals the product of their determinants. These properties make the determinant a quick screening tool for invertibility.
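These properties are easy to check numerically. The sketch below uses exact fractions for the inverse of the matrix from Example 1 so the reciprocal relationship comes out exactly (the helper names are illustrative):

```python
# Check det(A^(-1)) = 1/det(A) and det(AB) = det(A) * det(B) for 2x2 matrices.
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 7], [2, 6]]                           # det(A) = 10
A_inv = [[Fraction(6, 10), Fraction(-7, 10)],  # exact inverse of A
         [Fraction(-2, 10), Fraction(4, 10)]]
B = [[1, 2], [3, 5]]                           # det(B) = -1

print(det2(A_inv))             # 1/10, the reciprocal of det(A)
print(det2(matmul2(A, B)))     # -10
print(det2(A) * det2(B))       # -10
```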
How does matrix size affect inverse computation difficulty?
The computational complexity of matrix inversion grows rapidly with matrix size. For a 2x2 matrix, inversion requires just a few arithmetic operations using a direct formula. A 3x3 matrix requires computing nine cofactors and a determinant, involving dozens of operations. For an n-by-n matrix, naive cofactor expansion has factorial time complexity, making it impractical beyond small sizes. In practice, algorithms like Gaussian elimination with partial pivoting achieve cubic time complexity, meaning doubling the matrix size increases computation time by roughly eight times. For very large matrices, iterative methods or specialized decompositions like LU or QR are preferred.
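The cubic-time approach mentioned above can be illustrated with a Gauss-Jordan sketch: augment A with the identity and row-reduce until the left half becomes the identity, at which point the right half is A^(-1). The tolerance and names are illustrative assumptions:

```python
# Gauss-Jordan inversion with partial pivoting, O(n^3) for an n-by-n matrix.

def invert(a):
    n = len(a)
    # Build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry here.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular or nearly singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row to 1, then clear the column in every other row.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

print(invert([[4.0, 7.0], [2.0, 6.0]]))
```

Running it on the matrix from Example 1 reproduces the inverse found there, up to floating-point rounding.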