
Matrix Multiplication Calculator

Our free matrix multiplication calculator solves matrix multiplication problems. Get worked examples, visual aids, and downloadable results.


Formula

C(i,j) = sum of A(i,k) * B(k,j) for k = 1 to n

Each element of the result matrix C at position (i,j) is computed as the dot product of row i from matrix A and column j from matrix B. The number of columns in A must equal the number of rows in B.
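The formula above translates directly into code. Here is a minimal Python sketch of the triple-nested computation (the function name `matmul` is our own choice, not part of the calculator):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows: C[i][j] = sum_k A[i][k] * B[k][j]."""
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# The 2x2 example worked below:
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```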

Worked Examples

Example 1: Multiplying Two 2x2 Matrices

Problem: Multiply A = [[1, 2], [3, 4]] by B = [[5, 6], [7, 8]].

Solution:
C[0][0] = 1*5 + 2*7 = 5 + 14 = 19
C[0][1] = 1*6 + 2*8 = 6 + 16 = 22
C[1][0] = 3*5 + 4*7 = 15 + 28 = 43
C[1][1] = 3*6 + 4*8 = 18 + 32 = 50
Result C = [[19, 22], [43, 50]]

Result: [[19, 22], [43, 50]] | Sum: 134 | Total multiplications: 8

Example 2: Non-Commutative Example

Problem: Show that AB is different from BA for A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]].

Solution:
AB = [[19, 22], [43, 50]] (computed above)
BA:
C[0][0] = 5*1 + 6*3 = 5 + 18 = 23
C[0][1] = 5*2 + 6*4 = 10 + 24 = 34
C[1][0] = 7*1 + 8*3 = 7 + 24 = 31
C[1][1] = 7*2 + 8*4 = 14 + 32 = 46
BA = [[23, 34], [31, 46]]

Result: AB = [[19, 22], [43, 50]] vs BA = [[23, 34], [31, 46]] | AB is not equal to BA
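You can confirm the non-commutativity above in a few lines of Python (a self-contained sketch; `matmul` is a helper we define here, not part of the calculator):

```python
def matmul(A, B):
    # C[i][j] is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
AB = matmul(A, B)  # [[19, 22], [43, 50]]
BA = matmul(B, A)  # [[23, 34], [31, 46]]
print(AB != BA)    # True: order matters
```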

Frequently Asked Questions

Why is matrix multiplication not commutative?

Matrix multiplication is not commutative because it represents the composition of linear transformations, and the order in which transformations are applied matters. Rotating then translating produces a different result than translating then rotating. Consider a rotation matrix R and a scaling matrix S applied to a vector: RSv rotates the scaled vector, while SRv scales the rotated vector. Even for 2x2 matrices where both AB and BA are defined, the results typically differ because each element is a different combination of dot products. This non-commutativity is not a limitation but a feature that accurately models real-world sequential operations in physics, computer graphics, and engineering.

What are the dimension requirements for matrix multiplication?

For the product AB to be defined, the number of columns of A must equal the number of rows of B. If A is m x n and B is n x p, then AB is m x p. The shared dimension n determines the length of the dot products computed for each element. If A is 3x2 and B is 2x4, then AB is 3x4, but BA would require B to have as many columns as A has rows, which would need a 2x4 times 3x2 product, and this is undefined since 4 does not equal 3. Understanding these dimensional constraints is essential for setting up correct matrix equations in linear algebra, statistics, and machine learning applications.
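The dimension rule can be checked before doing any arithmetic. A small Python sketch (the function name `product_shape` is hypothetical, chosen for illustration):

```python
def product_shape(shape_a, shape_b):
    """Return the shape of AB, or raise if the inner dimensions disagree."""
    m, n = shape_a
    n2, p = shape_b
    if n != n2:
        raise ValueError(f"cannot multiply {m}x{n} by {n2}x{p}: {n} != {n2}")
    return (m, p)

print(product_shape((3, 2), (2, 4)))  # (3, 4)
# product_shape((2, 4), (3, 2)) raises ValueError: BA is undefined here
```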

What is the computational complexity of matrix multiplication?

The standard (naive) matrix multiplication algorithm for two n x n matrices requires n^3 scalar multiplications and n^2 * (n-1) additions, giving O(n^3) time complexity. For two 1000x1000 matrices, this means approximately one billion multiplications. The Strassen algorithm reduces this to approximately O(n^2.807) by cleverly reducing the number of multiplications at each recursive step. More advanced algorithms like Coppersmith-Winograd achieve O(n^2.376), though they have large constant factors making them impractical for reasonable matrix sizes. In practice, optimized BLAS libraries use cache-friendly blocking strategies with the standard algorithm, achieving near-peak hardware performance.
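The n^3 multiplication count is easy to verify empirically. This sketch instruments the naive triple loop with a counter (our own illustrative code, assuming square inputs):

```python
def matmul_counting(A, B):
    """Naive n x n multiply that also counts scalar multiplications."""
    n, count = len(A), 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

C, count = matmul_counting([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(count)  # 8, i.e. 2**3 -- matching the "Total multiplications: 8" above
```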

How is matrix multiplication used in computer graphics?

In computer graphics, 4x4 transformation matrices encode translation, rotation, scaling, and perspective projection. Multiplying these matrices together combines multiple transformations into a single matrix that can be applied to all vertices in a scene. This is far more efficient than applying each transformation separately. The model-view-projection (MVP) matrix is the product of model, view, and projection matrices. Modern GPUs are specifically designed to perform massive parallel matrix multiplications for rendering. Each pixel on screen results from matrix multiplications involving vertex positions, normals, texture coordinates, and lighting calculations, making matrix multiplication the most fundamental operation in real-time graphics.
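The idea of composing transformations into one matrix can be sketched with 4x4 translation and scaling matrices in homogeneous coordinates (a simplified illustration with plain Python lists, not any particular graphics API):

```python
def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, v):
    """Apply a 4x4 matrix to a homogeneous point (x, y, z, 1)."""
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

T = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # translate x by 5
S = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]  # uniform scale by 2

p = [1, 0, 0, 1]
print(apply(matmul4(T, S), p))  # [7, 0, 0, 1]: scale first, then translate
print(apply(matmul4(S, T), p))  # [12, 0, 0, 1]: translate first, then scale
```

One combined matrix per object is what makes applying the same transform to thousands of vertices cheap.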

What is the identity matrix and how does it relate to multiplication?

The identity matrix I is a square matrix with ones on the main diagonal and zeros everywhere else. It serves as the multiplicative identity for matrix multiplication, meaning AI = IA = A for any compatible matrix A. This property parallels the number 1 in scalar arithmetic. The identity matrix represents the transformation that leaves every vector unchanged. When solving Ax = b, finding A inverse (A^(-1)) such that A^(-1)A = I allows computing x = A^(-1)b. The identity matrix appears in eigenvalue problems as det(A - lambda*I) = 0, in matrix polynomials, and in iterative algorithms as the starting point or convergence target.
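The AI = IA = A property is straightforward to demonstrate in Python (a self-contained sketch with helper functions of our own naming):

```python
def identity(n):
    """n x n matrix with ones on the main diagonal, zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
I = identity(2)
print(matmul(A, I) == A and matmul(I, A) == A)  # True
```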

How does matrix multiplication relate to systems of linear equations?

A system of linear equations can be compactly written as the matrix equation Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the vector of constants. Matrix multiplication encodes all the equations simultaneously: each row of A multiplied by x gives one equation. Solving the system means finding x such that the matrix product Ax equals b. Methods like Gaussian elimination, LU decomposition, and iterative solvers all fundamentally rely on matrix multiplication. The existence of a unique solution depends on whether A is invertible, which requires a nonzero determinant. This connection makes matrix multiplication central to numerical computing.
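For the 2x2 case, Ax = b can be solved directly with Cramer's rule; larger systems use the elimination and decomposition methods mentioned above. A minimal sketch (the function `solve2` is our own illustrative name):

```python
def solve2(A, b):
    """Solve a 2x2 system Ax = b via Cramer's rule (requires det(A) != 0)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("matrix is singular: no unique solution")
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# The system: x + 2y = 5, 3x + 4y = 11
print(solve2([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```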
