- to introduce matrix multiplication
- to determine when two matrices can be multiplied
- to know how to find the shape of a product of matrices
- to be introduced to the algebraic properties of matrix multiplication
Row and column vector multiplication is introduced as a stepping stone before getting to matrix multiplication. An example is worked in a short video before the algebraic properties of matrix multiplication are covered.
Before you begin this lesson, you should be comfortable with the definition of a matrix, with adding and subtracting matrices, and with multiplying a matrix by a scalar. These topics are covered here; feel free to review them.
There is no easy way to approach the multiplication of matrices. While the concept itself is not difficult to grasp, the procedure does seem unnecessarily tedious when it is first encountered. So how do you multiply matrices?
To start, we will look at how to multiply vectors. A vector is just a matrix that consists of either a single row or a single column. In the first case the matrix is called a row vector, and in the second a column vector. The multiplication of vectors is a simpler version of the same process used to multiply larger matrices.
To multiply a 1 x n matrix (a row vector with n entries) by an n x 1 matrix (a column vector with n entries), you take the sum of the products of corresponding entries. This operation is called the dot product. Here is an example:
In general, we can express the dot product like this:
It is just the sum of the products of the two vectors' corresponding entries.
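As a sketch (the function name `dot` and the sample vectors below are illustrative, not part of the lesson), the sum-of-products rule takes only a few lines of Python:

```python
def dot(row, col):
    """Dot product: the sum of the products of corresponding entries."""
    if len(row) != len(col):
        raise ValueError("vectors must have the same number of entries")
    return sum(r * c for r, c in zip(row, col))

# A 1 x 3 row vector times a 3 x 1 column vector:
print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```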
Now that we have seen an example using row and column vectors, we'll move on to the real thing. Here is an example. Try to get a sense for what is happening just by looking, but don't worry, an explanation follows:
We find the entries in the product matrix by taking rows from the first matrix and columns from the second matrix and calculating their dot products. If a_ij is the entry of the product in the ith row and jth column, then it is the dot product of the ith row of the first matrix and the jth column of the second. Right away this leads us to an important fact: in order to multiply two matrices A and B, the number of columns in A must equal the number of rows in B; otherwise we cannot pair up the right rows and columns to take dot products. That is, if A is an n x m matrix and B is an m x r matrix, then the product AB exists, and the resulting matrix will have n rows and r columns.
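To make the procedure concrete, here is a minimal Python sketch of the rule just described (the helper `matmul` and the sample matrices are illustrative): entry (i, j) of the product is the dot product of row i of the first matrix and column j of the second.

```python
def matmul(A, B):
    """Multiply matrices stored as lists of rows.

    Entry (i, j) of the product is the dot product of
    row i of A with column j of B."""
    n, m = len(A), len(A[0])
    if len(B) != m:
        raise ValueError("columns of A must equal rows of B")
    r = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(r)]
            for i in range(n)]

A = [[1, 2], [3, 4], [5, 6]]    # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]   # 2 x 3
print(matmul(A, B))             # a 3 x 3 result; first row is [27, 30, 33]
```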
An example matrix multiplication is worked through in full.
Source: Colin O'Keefe, YouTube
Unlike ordinary multiplication of numbers, where xy = yx (e.g. 2*3 = 3*2 = 6), the multiplication of matrices is not commutative. This means that for two matrices A and B, AB does not generally equal BA. Here is an example that illustrates this:
In the example we see that AB is not the same matrix as BA. This is usually the case with matrix multiplication, but not always. For instance, if A = B, then AB = AA = BB = BA, so it is possible for commutativity to hold for certain special matrices, but such special cases are not the norm.
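A quick way to check non-commutativity numerically is with NumPy's `@` operator (the specific matrices below are illustrative, not the ones from the lesson's example):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])  # multiplying by B on the right swaps A's columns

print(A @ B)  # [[2 1], [4 3]]
print(B @ A)  # [[3 4], [1 2]] -- a different matrix, so AB != BA here
```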
Just like matrix addition, and just like the multiplication of regular numbers, matrix multiplication is associative.
So long as each adjacent pair in a chain of factors can be legally multiplied, the whole chain can be, regardless of where you put the parentheses. In the above, the product ABC will be an m x s matrix. We can see this easily: AB will be an m x r matrix, so (AB)C will be an m x s matrix. Likewise, BC will be an n x s matrix, so A(BC) will again be an m x s matrix.
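This shape bookkeeping can be verified with a small NumPy check (the concrete sizes 2 x 3, 3 x 4, and 4 x 5 stand in for m x n, n x r, and r x s):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # m x n = 2 x 3
B = np.arange(12).reshape(3, 4)   # n x r = 3 x 4
C = np.arange(20).reshape(4, 5)   # r x s = 4 x 5

left = (A @ B) @ C    # (2 x 4) times (4 x 5) -> 2 x 5
right = A @ (B @ C)   # (2 x 3) times (3 x 5) -> 2 x 5

print(left.shape)                   # (2, 5), i.e. m x s
print(np.array_equal(left, right))  # True: associativity holds
```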
Finally, just like scalar multiplication and ordinary multiplication of numbers, a matrix factor can be distributed across a sum: A(B + C) = AB + AC, and likewise (B + C)A = BA + CA. Note that the factor must stay on the same side in each term, since matrix multiplication is not commutative.
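Distributivity is also easy to spot-check with NumPy (the matrices below are illustrative):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[1, 0], [0, 1]])

# Left distributivity: A(B + C) = AB + AC
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True
# Right distributivity: (B + C)A = BA + CA
print(np.array_equal((B + C) @ A, B @ A + C @ A))  # True
```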