Objectives
- Learn to use invertible matrices to convert between coordinate systems.
- Learn to represent linear transformations with respect to given bases.
- Recipes: compute the $(\mathcal{B},\mathcal{C})$-matrix of a linear transformation.
- Vocabulary: $(\mathcal{B},\mathcal{C})$-matrix.
In this section, we study matrix representations for linear transformations with respect to bases for the domain and the codomain. We'll also get another perspective on bases in terms of the linear transformations they represent, and we'll see how to convert between different coordinate systems using matrices and their inverses.
If $A$ is an $m \times n$ matrix with columns $v_1, v_2, \dots, v_n$, then $\{v_1, \dots, v_n\}$ is a basis for the column span $\operatorname{Col}(A)$ if and only if the linear transformation $T(x) = Ax$ is one-to-one.
Indeed, the columns of $A$ span $\operatorname{Col}(A)$ by definition, and they're linearly independent if and only if $T$ is one-to-one, using a theorem in Section 4.2.
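This criterion is easy to test numerically. A minimal sketch in NumPy (the matrix below is an illustrative choice, not one from the text): one-to-one corresponds to full column rank.

```python
import numpy as np

# Hypothetical example (not from the text): a 3x2 matrix A whose columns
# span Col(A) by definition; they form a basis for Col(A) exactly when
# T(x) = Ax is one-to-one.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# The columns are linearly independent (T is one-to-one) exactly when
# the rank equals the number of columns.
one_to_one = np.linalg.matrix_rank(A) == A.shape[1]
```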
In particular, a list of vectors $v_1, \dots, v_n$ in $\mathbb{R}^n$ forms a basis $\mathcal{B}$ for all of $\mathbb{R}^n$ if and only if the square matrix $C$ with columns $v_1, \dots, v_n$ is invertible. These columns are clearly the coordinate vectors of the $v_i$ with respect to the standard basis for $\mathbb{R}^n$. Conversely, the columns of the inverse matrix $C^{-1}$ are the $\mathcal{B}$-coordinates of the standard basis vectors:
$$[e_j]_{\mathcal{B}} = C^{-1}e_j \quad (\text{the } j\text{th column of } C^{-1}).$$
This is because $C\,(C^{-1}e_j) = e_j$, so $e_j$ is the linear combination of $v_1, \dots, v_n$ whose coefficients are the entries of $C^{-1}e_j$, which is the definition of $C^{-1}e_j$ being the $\mathcal{B}$-coordinates of $e_j$. Multiplying by $C^{-1}$ and taking linear combinations we get:
If $C$ is an invertible $n \times n$ matrix, whose columns $v_1, \dots, v_n$ thus form a basis $\mathcal{B}$ of $\mathbb{R}^n$, then the $\mathcal{B}$-coordinates of any vector $x$ in $\mathbb{R}^n$ are given by
$$[x]_{\mathcal{B}} = C^{-1}x.$$
This says that (multiplication by) $C$ changes from the $\mathcal{B}$-coordinates to the usual coordinates, and $C^{-1}$ changes from the usual coordinates to the $\mathcal{B}$-coordinates.
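As a quick numerical illustration of this fact (the basis below is my own choice, not one from the text), NumPy can convert between coordinate systems by solving $Cx_{\mathcal{B}} = x$ rather than forming $C^{-1}$ explicitly:

```python
import numpy as np

# Basis B of R^2 given by the columns of C (vectors chosen for illustration).
C = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # invertible, so its columns form a basis

x = np.array([3.0, 2.0])

# C^{-1} converts usual coordinates -> B-coordinates ...
x_B = np.linalg.solve(C, x)       # solve C @ x_B = x instead of inverting C
# ... and C converts B-coordinates -> usual coordinates.
x_back = C @ x_B
```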
Suppose we're given bases $\mathcal{B} = \{v_1, \dots, v_n\}$ for $\mathbb{R}^n$ and $\mathcal{C} = \{w_1, \dots, w_m\}$ for $\mathbb{R}^m$. Let $T\colon \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. The $(\mathcal{B},\mathcal{C})$-matrix of $T$ is the matrix
$$M = \begin{pmatrix} [T(v_1)]_{\mathcal{C}} & [T(v_2)]_{\mathcal{C}} & \cdots & [T(v_n)]_{\mathcal{C}} \end{pmatrix}.$$
This generalizes the definition of the standard matrix of $T$: if we let $\mathcal{B}$ and $\mathcal{C}$ be the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, then the $(\mathcal{B},\mathcal{C})$-matrix of $T$ is the standard matrix of $T$.
By the discussion above, if $\mathcal{C}$ is the standard basis for $\mathbb{R}^m$ and $\mathcal{B} = \{v_1, \dots, v_n\}$ is another basis for $\mathbb{R}^n$, so the square matrix $C$ with columns $v_1, \dots, v_n$ is invertible, then the $(\mathcal{B},\mathcal{C})$-matrix of $T$ is $AC$, whose $j$th column is $T(v_j) = Av_j$, where $A$ is the standard matrix of $T$.
If $C$ is an invertible $n \times n$ matrix, whose columns thus form a basis $\mathcal{B}$ of $\mathbb{R}^n$, and $D$ is an invertible $m \times m$ matrix, whose columns thus form a basis $\mathcal{C}$ of $\mathbb{R}^m$, then the $(\mathcal{B},\mathcal{C})$-matrix of a linear transformation $T\colon \mathbb{R}^n \to \mathbb{R}^m$ is given by the matrix product
$$M = D^{-1}AC,$$
where $A$ is the standard matrix for $T$.
One way to read this formula is right-to-left: multiplying by $C$ converts from $\mathcal{B}$-coordinates to standard coordinates in $\mathbb{R}^n$; multiplying by the standard matrix $A$ then results in the standard coordinates in $\mathbb{R}^m$ of the result of acting with the transformation $T$; finally, multiplying by $D^{-1}$ converts to the $\mathcal{C}$-coordinates. Altogether, the matrix product takes the $\mathcal{B}$-coordinates of a vector $x$ to the $\mathcal{C}$-coordinates of the image $T(x)$:
$$M[x]_{\mathcal{B}} = [T(x)]_{\mathcal{C}}.$$
Conversely, we can recover the standard matrix of $T$ as
$$A = DMC^{-1};$$
this says that to do $T$ in standard coordinates is the same as to first convert to $\mathcal{B}$-coordinates (multiply by $C^{-1}$), then do $T$ in the $\mathcal{B}$- and $\mathcal{C}$-coordinates (multiply by $M$), and finally convert from $\mathcal{C}$-coordinates back to standard coordinates (multiply by $D$).
Suppose that $A$ is the standard matrix for $T\colon \mathbb{R}^n \to \mathbb{R}^m$, where $C$ is the invertible matrix corresponding to the basis $\mathcal{B}$ for $\mathbb{R}^n$, $D$ is the invertible matrix corresponding to the basis $\mathcal{C}$ for $\mathbb{R}^m$, and $M = D^{-1}AC$ is the $(\mathcal{B},\mathcal{C})$-matrix for $T$. To compute $T(x)$ for some $x$ in $\mathbb{R}^n$, one can do the following:
1. Multiply $x$ by $C^{-1}$ to obtain the $\mathcal{B}$-coordinates $[x]_{\mathcal{B}}$.
2. Multiply by $M$ to obtain $[T(x)]_{\mathcal{C}} = M[x]_{\mathcal{B}}$.
3. Multiply by $D$ to obtain $T(x) = D\,[T(x)]_{\mathcal{C}}$.
To summarize: if $M = D^{-1}AC$, then $A$ and $M$ do the same thing, only in different coordinate systems for $\mathbb{R}^n$ and $\mathbb{R}^m$.
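The recipe above can be sketched in NumPy. The matrices $A$, $C$, $D$ here are illustrative stand-ins (not matrices used elsewhere in this text); the check at the end confirms that $M$ takes $\mathcal{B}$-coordinates of $x$ to $\mathcal{C}$-coordinates of $T(x)$:

```python
import numpy as np

# Illustrative (not from the text): a standard matrix A and basis matrices C, D.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])              # standard matrix of T(x) = Ax
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # columns: basis B of R^2 (domain)
D = np.array([[2.0, 0.0],
              [0.0, 1.0]])              # columns: basis C of R^2 (codomain)

M = np.linalg.inv(D) @ A @ C            # the (B, C)-matrix of T

# Sanity check on a sample vector: M takes the B-coordinates of x
# to the C-coordinates of T(x).
x = np.array([1.0, 3.0])
lhs = M @ np.linalg.solve(C, x)         # M [x]_B
rhs = np.linalg.solve(D, A @ x)         # [T(x)]_C
```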
Consider matrices $A$, $C$, $D$, and $M$ as above, with
$$M = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
One can verify that $A = DMC^{-1}$; try it yourself. Let $\mathcal{B} = \{v_1, v_2\}$, where $v_1$ and $v_2$ are the columns of $C$, corresponding to a basis $\mathcal{B}$ of $\mathbb{R}^2$. Let $\mathcal{C} = \{w_1, w_2\}$, where $w_1$ and $w_2$ are the columns of $D$, corresponding to the basis $\mathcal{C}$, another basis of $\mathbb{R}^2$.
The matrix $M$ is diagonal with a one and a zero: it keeps the $v_1$-direction (sending it to the $w_1$-direction) and zeroes out the $v_2$-direction.
To compute $T(x)$: first we multiply $x$ by $C^{-1}$ to find the $\mathcal{B}$-coordinates of $x$; then we multiply by $M$; then we multiply by $D$.
To summarize: for any $x$ in $\mathbb{R}^2$,
$$T(x) = D\,M\,C^{-1}x = D\bigl(M[x]_{\mathcal{B}}\bigr).$$
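A sketch of this mechanism with hypothetical $C$ and $D$ (stand-ins chosen here, since the example's specific matrices are not shown above) and $M = \operatorname{diag}(1, 0)$: building $A = DMC^{-1}$ confirms that $T$ sends $v_1$ to $w_1$ and zeroes out $v_2$.

```python
import numpy as np

# Stand-in matrices, chosen only to illustrate M = diag(1, 0).
M = np.array([[1.0, 0.0],
              [0.0, 0.0]])
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # columns v1, v2: a basis B of R^2
D = np.array([[1.0, 0.0],
              [1.0, 1.0]])    # columns w1, w2: a basis C of R^2

A = D @ M @ np.linalg.inv(C)  # standard matrix with this (B, C)-matrix

v1, v2 = C[:, 0], C[:, 1]
w1 = D[:, 0]
# T keeps the v1-direction (sending v1 to w1) and zeroes out the v2-direction:
Tv1 = A @ v1                  # equals w1
Tv2 = A @ v2                  # equals the zero vector
```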
Let $T\colon \mathbb{R}^n \to \mathbb{R}^m$ and $U\colon \mathbb{R}^m \to \mathbb{R}^p$ be linear transformations, and let $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$ be bases for $\mathbb{R}^n$, $\mathbb{R}^m$, $\mathbb{R}^p$, respectively. Then the $(\mathcal{B},\mathcal{D})$-matrix of the composition $U \circ T$ is the product of the $(\mathcal{C},\mathcal{D})$-matrix of $U$ with the $(\mathcal{B},\mathcal{C})$-matrix of $T$.
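This composition rule can be checked numerically; the matrices below are arbitrary illustrative choices (any invertible basis matrices would do):

```python
import numpy as np

# Illustrative data: standard matrices of T: R^2 -> R^3 and U: R^3 -> R^2,
# plus invertible matrices whose columns give bases of R^2, R^3, R^2.
A_T = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 1.0]])   # standard matrix of T
A_U = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])     # standard matrix of U
B  = np.array([[1.0, 1.0], [0.0, 1.0]])                # basis B of R^2
Cm = np.eye(3) + np.diag([1.0, 1.0], k=1)              # basis C of R^3
Dm = np.array([[2.0, 1.0], [1.0, 1.0]])                # basis D of R^2

M_T  = np.linalg.inv(Cm) @ A_T @ B           # (B, C)-matrix of T
M_U  = np.linalg.inv(Dm) @ A_U @ Cm          # (C, D)-matrix of U
M_UT = np.linalg.inv(Dm) @ (A_U @ A_T) @ B   # (B, D)-matrix of U . T

# The composition's matrix is the product of the individual matrices.
ok = np.allclose(M_UT, M_U @ M_T)
```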
Suppose $T\colon \mathbb{R}^n \to \mathbb{R}^m$ is a linear map. Then we can pick bases, $\mathcal{B}$ for $\mathbb{R}^n$ and $\mathcal{C}$ for $\mathbb{R}^m$, relative to which the $(\mathcal{B},\mathcal{C})$-matrix for $T$ is
$$M = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix},$$
with $r$ ones on the diagonal, where $r$ is the rank of $T$, and zeroes everywhere else.
This is a very simple kind of diagonalisation theorem, which is made possible by the fact that we can choose bases for the domain and the codomain independently. In Chapter 6 we'll see that the situation is much more subtle for linear transformations where the domain and the codomain coincide and we want to use the same basis for both.
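One concrete way to produce such bases (a sketch of my own via the singular value decomposition, not necessarily the text's construction): take the right-singular vectors as $\mathcal{B}$, and the left-singular vectors, with the first $r$ rescaled by the singular values, as $\mathcal{C}$.

```python
import numpy as np

# A rank-1 standard matrix for T: R^3 -> R^2 (illustrative choice).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)            # A = U @ Sigma @ Vt
r = int(np.sum(s > 1e-10))             # rank of T

# Basis B of R^3: columns of V.  Basis C of R^2: columns of U, with the
# first r columns rescaled by the singular values, so that T(v_i) = w_i.
C = Vt.T
scale = np.concatenate([s[:r], np.ones(A.shape[0] - r)])
D = U * scale                          # multiply column i of U by scale[i]

M = np.linalg.inv(D) @ A @ C           # (B, C)-matrix: r ones, then zeros
```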