Pseudoinverse Calculator
Free pseudoinverse calculator with support for fractions. Enter matrix entries to get step-by-step solutions with formulas and graphs. No signup required.
Formula
A+ satisfies: AA+A = A, A+AA+ = A+, (AA+)^T = AA+, (A+A)^T = A+A
The Moore-Penrose pseudoinverse A+ is the unique matrix satisfying all four conditions. For invertible matrices, A+ equals the regular inverse. For singular matrices, it provides least-squares solutions.
Worked Examples
Example 1: Pseudoinverse of an Invertible 2x2 Matrix
Problem: Find the pseudoinverse of A = [[1, 2], [3, 4]].
Solution:
det(A) = (1)(4) - (2)(3) = 4 - 6 = -2
Since det is nonzero, A+ = A^(-1)
A^(-1) = (1/-2) * [[4, -2], [-3, 1]]
A^(-1) = [[-2, 1], [1.5, -0.5]]
Verify: A * A^(-1) = [[1, 0], [0, 1]] = I
Result: A+ = [[-2, 1], [1.5, -0.5]]
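This result can be checked numerically. A minimal sketch using NumPy, with the matrix from the example above:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_pinv = np.linalg.pinv(A)

# For an invertible matrix the pseudoinverse equals the regular inverse
print(np.allclose(A_pinv, np.linalg.inv(A)))            # True
print(np.allclose(A_pinv, [[-2.0, 1.0], [1.5, -0.5]]))  # True
```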
Example 2: Pseudoinverse of a Singular Matrix
Problem: Find the pseudoinverse of A = [[1, 2], [2, 4]] (rank 1).
Solution:
det(A) = (1)(4) - (2)(2) = 0, so A is singular and has no regular inverse
A^T A = [[5, 10], [10, 20]], det(A^T A) = 0, so (A^T A)^(-1) does not exist either
For a rank-1 matrix, A+ = A^T / sigma1^2, where sigma1 is the only nonzero singular value
sigma1^2 = nonzero eigenvalue of A^T A = 25
A+ = [[1, 2], [2, 4]]^T / 25 = [[1/25, 2/25], [2/25, 4/25]]
Result: A+ = [[0.04, 0.08], [0.08, 0.16]]
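The rank-1 shortcut agrees with the general-purpose pseudoinverse. A quick NumPy check on the matrix from this example:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-1, singular
A_pinv = np.linalg.pinv(A)

# Matches the rank-1 formula A+ = A^T / sigma1^2 with sigma1^2 = 25
print(np.allclose(A_pinv, A.T / 25.0))   # True
```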
Frequently Asked Questions
What is the Moore-Penrose pseudoinverse and when is it used?
The Moore-Penrose pseudoinverse (denoted A+) is a generalization of the matrix inverse that exists for every matrix, including non-square and singular matrices. While a regular inverse only exists for square matrices with nonzero determinant, the pseudoinverse always exists and is unique. It satisfies four defining properties known as the Moore-Penrose conditions: A times A+ times A equals A, A+ times A times A+ equals A+, and both A times A+ and A+ times A are Hermitian (symmetric for real matrices). The pseudoinverse is primarily used for solving overdetermined and underdetermined systems of linear equations, providing least-squares solutions and minimum-norm solutions respectively.
How is the pseudoinverse computed for an invertible matrix?
When a square matrix A is invertible (has nonzero determinant), the pseudoinverse is simply the regular inverse. For a 2x2 matrix [[a,b],[c,d]] with determinant ad-bc not equal to zero, the inverse is (1/det) times [[d,-b],[-c,a]]. This is because when A is invertible, all four Moore-Penrose conditions are automatically satisfied by the standard inverse. The relationship is straightforward: A-inverse times A equals the identity, which trivially satisfies A times A-inverse times A equals A. So the pseudoinverse is a true generalization that reduces to the ordinary inverse in the non-singular case, making it a more powerful and universally applicable concept.
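The 2x2 cofactor formula above can be sketched directly. The function name is illustrative, not part of any library:

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the cofactor formula (1/det) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; use the pseudoinverse instead")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```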
How does the pseudoinverse solve least-squares problems?
When a system Ax = b has no exact solution (overdetermined system with more equations than unknowns), the pseudoinverse provides the least-squares solution x = A+ times b. This solution minimizes the Euclidean norm of the residual vector Ax - b, meaning it finds the vector x that comes closest to satisfying all equations simultaneously. In statistics, this is exactly what happens in linear regression: the pseudoinverse of the design matrix produces the regression coefficients that minimize the sum of squared errors. For underdetermined systems (fewer equations than unknowns), x = A+ b gives the minimum-norm solution among all exact solutions. These properties make the pseudoinverse indispensable in data fitting and optimization.
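A small sketch of this idea, with hypothetical data: three equations in two unknowns, solved both via the pseudoinverse and via NumPy's dedicated least-squares routine, which minimize the same residual norm:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (hypothetical data)
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b                         # least-squares solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # reference solver

print(np.allclose(x, x_lstsq))  # True: both minimize ||Ax - b||
```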
What are the four Moore-Penrose conditions that define the pseudoinverse?
The four conditions are: (1) A times A+ times A equals A, meaning A+ acts as a weak inverse. (2) A+ times A times A+ equals A+, ensuring A+ is also weakly inverted by A. (3) A times A+ is Hermitian (equals its own conjugate transpose), making it an orthogonal projector onto the column space of A. (4) A+ times A is Hermitian, making it an orthogonal projector onto the row space of A. These four conditions uniquely determine A+ for any matrix A. If A is invertible, all conditions are trivially satisfied by the standard inverse. The elegance of these conditions is that they characterize the pseudoinverse purely through algebraic identities without referencing any optimization problem.
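The four conditions translate directly into a numerical check. A sketch for real matrices (where Hermitian means symmetric); the helper name is illustrative:

```python
import numpy as np

def check_moore_penrose(A, A_pinv, tol=1e-10):
    """Verify the four Moore-Penrose conditions for a real matrix A."""
    P, Q = A @ A_pinv, A_pinv @ A
    return (np.allclose(A @ A_pinv @ A, A, atol=tol) and        # (1) A A+ A = A
            np.allclose(A_pinv @ A @ A_pinv, A_pinv, atol=tol)  # (2) A+ A A+ = A+
            and np.allclose(P, P.T, atol=tol)                   # (3) A A+ symmetric
            and np.allclose(Q, Q.T, atol=tol))                  # (4) A+ A symmetric

A = np.array([[1.0, 2.0], [2.0, 4.0]])        # singular example from above
print(check_moore_penrose(A, np.linalg.pinv(A)))  # True
```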
How is the pseudoinverse computed using SVD?
The most reliable method for computing the pseudoinverse uses the Singular Value Decomposition. Given A = U times Sigma times V-transpose, the pseudoinverse is A+ = V times Sigma+ times U-transpose. Here Sigma+ is formed by taking the reciprocal of each nonzero singular value in Sigma and transposing the result. Singular values that are zero (or numerically very small) are left as zero rather than reciprocated, which provides numerical stability. This approach works for any matrix regardless of shape or rank. In practice, a threshold is applied: singular values below a certain tolerance (typically machine epsilon times the largest singular value times the matrix dimension) are treated as zero to avoid amplifying numerical noise.
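The SVD recipe above can be sketched in a few lines. This is a simplified version of what library routines do, with the tolerance rule described in the text:

```python
import numpy as np

def pinv_via_svd(A, rcond=1e-15):
    """Pseudoinverse via SVD: reciprocate singular values above a tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rcond * max(A.shape) * s.max()   # threshold for "numerically zero"
    s_inv = np.array([1.0 / x if x > cutoff else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T        # A+ = V Sigma+ U^T
```

This works for any shape and rank; comparing against `np.linalg.pinv` on a rectangular matrix gives matching results.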
What is the relationship between pseudoinverse and projection matrices?
The pseudoinverse creates two important projection matrices. The product A times A+ is the orthogonal projection onto the column space (range) of A, projecting any vector onto the subspace spanned by the columns. The product A+ times A is the orthogonal projection onto the row space of A, projecting onto the subspace spanned by the rows. These projections are idempotent (applying them twice gives the same result as applying once) and symmetric. The complementary projections I minus A times A+ and I minus A+ times A project onto the left null space and null space respectively. These projection properties are fundamental to understanding why the pseudoinverse gives least-squares and minimum-norm solutions.
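These projection properties are easy to confirm numerically. A sketch with an arbitrary rank-1 rectangular matrix chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])  # rank-1, 3x2
P_col = A @ np.linalg.pinv(A)    # orthogonal projector onto the column space
P_row = np.linalg.pinv(A) @ A    # orthogonal projector onto the row space

print(np.allclose(P_col @ P_col, P_col))  # True: idempotent
print(np.allclose(P_col, P_col.T))        # True: symmetric
print(np.allclose(P_row @ P_row, P_row))  # True: idempotent
```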