MA20216 Algebra 2A (Winter ’12)
Tutorials
I give tutorials for Group 3 at 16:15 on Thursdays in 8W 2.4, and for Group 6 at 10:15 on Thursdays in 8W 2.27. Both groups should hand in work to my folders in the pigeonholes for the module on the first floor of 4W, by 12:15 each Wednesday.
There will be no tutorials in week 1.
Course Website
The Moodle page for this module can be found here.
Week 2
I was in Oxford on Thursday afternoon of this week, so Group 3’s tutorial was taken by Alex Collins.
In the morning tutorial, we talked a bit about computing images and kernels of linear maps. I pointed out that if you know the dimensions of these spaces (the rank and the nullity), then you simply need to find enough independent elements in them to know the whole space. So if, for example, the rank is \(1\), then you need only find one non-zero vector in the image, and you know that the entire image is the span of that one vector.
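To illustrate with a small example, consider the linear map \(\mathbb{R}^2\to\mathbb{R}^2\) given by the matrix
\[
A=\begin{pmatrix}1&2\\2&4\end{pmatrix}.
\]
The second column is twice the first, so the rank is \(1\). The first column \((1,2)^{\mathsf{T}}\) is a non-zero vector in the image, so the image is exactly \(\operatorname{span}\{(1,2)^{\mathsf{T}}\}\). By rank–nullity the kernel is also \(1\)-dimensional, and \(A(2,-1)^{\mathsf{T}}=0\), so the kernel is \(\operatorname{span}\{(2,-1)^{\mathsf{T}}\}\).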
I also talked briefly about indexed collections of vectors as functions, while labouring under the misunderstanding that this viewpoint was being used in the lectures. In the notes on the Moodle page, however, many concepts are defined in terms of functions rather than directly in terms of elements, and it’s worth spending some time thinking about how the definitions are equivalent.
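To spell out the correspondence: an indexed collection \((v_s)_{s\in S}\) of vectors in \(V\) is the same data as a function \(v\colon S\to V\) with \(v(s)=v_s\). For instance, a list \(v_1,v_2,v_3\) of vectors in \(V\) is a function from \(\{1,2,3\}\) to \(V\), and saying that the list spans \(V\), or is linearly independent, is a statement about the values of this function.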
Week 3
Remember that when writing a vector in a general vector space as a column (for example when doing matrix calculations), you should write the coefficients of an expression for the vector in terms of a basis, not the actual basis elements. For example, if you’re working in the vector space of polynomials in \(t\) of degree at most \(3\), and you’re using the usual monomial basis consisting of the polynomials of the form \(t^i\), then the polynomial \(4+8t-6t^2+t^3\) is represented by the column \((4,8,-6,1)^{\mathsf{T}}\). If you use a different basis, you get a different column.
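To see how the column depends on the basis, take instead the basis \(1,\,1+t,\,t^2,\,t^3\) of the same space. Since
\[
4+8t-6t^2+t^3=-4\cdot 1+8(1+t)-6t^2+t^3,
\]
the same polynomial is now represented by the column \((-4,8,-6,1)^{\mathsf{T}}\).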
In the morning, some of you asked about the notation \(\mathbb{C}[t,t^{-1}]\) for the vector space of Laurent polynomials. This should be read as meaning polynomials with coefficients in \(\mathbb{C}\) in the variables \(t\) and \(t^{-1}\), subject to the implicit relation \(tt^{-1}=t^{-1}t=1\). If you need some practice with quotients of rings, you can think about why the ring of Laurent polynomials is isomorphic to the quotient of the polynomial ring \(\mathbb{C}[x,y]\) by the ideal generated by \(xy-1\). (If it’s not clear to you, first think about why the vector space of Laurent polynomials is also a ring.)
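As a hint for that exercise: the map \(\mathbb{C}[x,y]\to\mathbb{C}[t,t^{-1}]\) sending \(x\mapsto t\) and \(y\mapsto t^{-1}\) is a surjective ring homomorphism, and \(xy-1\) lies in its kernel because \(tt^{-1}=1\). The real content is to check that the kernel is exactly the ideal generated by \(xy-1\), which then gives the isomorphism
\[
\mathbb{C}[x,y]/(xy-1)\cong\mathbb{C}[t,t^{-1}].
\]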
Week 4
Remember that a vector space \(V\) over a field \(\mathbb{K}\) with a basis indexed by the set \(S\) is isomorphic to \(\mathbb{K}\langle S\rangle\), the set of functions from \(S\) to \(\mathbb{K}\) with finite support, i.e. those that are zero at all but finitely many points of \(S\). Such an isomorphism can be constructed as follows: take \(v\) in \(V\), write it in terms of the chosen basis, and define a function \(f\colon S\to\mathbb{K}\) by \(f(s)=c_s\), where \(c_s\) is the coefficient of the basis vector indexed by \(s\) in this expression. You should check that the map \(v\mapsto f\) is a linear isomorphism.
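For a concrete example, take \(V=\mathbb{K}[t]\), the space of all polynomials, with the monomial basis indexed by \(S=\mathbb{N}\), so that \(s\) indexes \(t^s\). A polynomial has only finitely many non-zero coefficients, which is exactly the finite support condition; for instance, \(4+8t-6t^2+t^3\) corresponds to the function \(f\) with
\[
f(0)=4,\quad f(1)=8,\quad f(2)=-6,\quad f(3)=1,
\]
and \(f(s)=0\) for all \(s\geq 4\).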
In the afternoon we talked a bit about dual spaces. Remember that you have seen at least one example of a dual vector so far—the trace is a vector in the dual of the vector space of \(n\times n\) matrices. As linear functionals can occur naturally, it is useful to be able to apply all of the theory of vector spaces that you’ve developed, such as rank-nullity, and expressing linear maps as matrices, to spaces of functionals. You’ll see more interesting things about dual spaces in the coming weeks.
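As a quick illustration of rank–nullity for functionals: the trace \(\operatorname{tr}\colon M_n(\mathbb{K})\to\mathbb{K}\) is a non-zero linear functional, so its image is all of \(\mathbb{K}\) and its rank is \(1\). Rank–nullity then gives
\[
\dim\ker(\operatorname{tr})=n^2-1,
\]
so the traceless matrices form a subspace of dimension \(n^2-1\).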
Week 5
As we spent most of the tutorials talking about dual spaces, there wasn’t any time to talk about annihilators, which can also be a little confusing, so I’ll try to give some intuition for them here. As they’re very much an aspect of a wider story about dual spaces, that’s where we should start.
We can think of the dual space as a kind of mirror image of the vector space: everything looks very much the same, but backwards. So for example, if we have some basis of a vector space \(V\), we get a basis of \(V^*\) in the mirror, the dual basis. If we have a map \(f\colon U\to V\), then this also defines a map \(f^*\) between the dual spaces, but because we’re now in the mirror world, the direction of the map has also been reversed, i.e. it maps \(V^*\to U^*\).
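In symbols: if \(v_1,\dotsc,v_n\) is a basis of \(V\), the dual basis \(v_1^*,\dotsc,v_n^*\) of \(V^*\) is defined by
\[
v_i^*(v_j)=\begin{cases}1&\text{if }i=j,\\0&\text{if }i\neq j,\end{cases}
\]
and the dual map \(f^*\colon V^*\to U^*\) is given by precomposition, \(f^*(\varphi)=\varphi\circ f\). The reversal of direction is forced: a functional \(\varphi\) on \(V\) eats outputs of \(f\), so \(\varphi\circ f\) is a functional on \(U\).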
One thing missing in our mirror picture is what happens to a subspace \(W\) of \(V\) when we pass to the dual. There are several candidates for what could be considered the mirror image of \(W\), but in some sense the best choice is \(W^0\), the annihilator of \(W\). One way the mirror property manifests itself is that the dimension is mirrored: \(\dim W^0 = \dim V - \dim W\).
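For a concrete example, take \(V=\mathbb{K}^3\) with standard basis \(e_1,e_2,e_3\) and \(W=\operatorname{span}\{e_1\}\). A functional \(\varphi=a e_1^*+b e_2^*+c e_3^*\) vanishes on \(W\) exactly when \(\varphi(e_1)=a=0\), so
\[
W^0=\operatorname{span}\{e_2^*,e_3^*\},\qquad \dim W^0=3-1=2,
\]
as the formula predicts.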
Many of the results you know translate into this analogy by saying that if you hold up a mirror to the dual (i.e. reflect the reflection), you end up back where you started. For example, at least for finite-dimensional \(V\), under the canonical identification of \(V^{**}\) with \(V\) the annihilator of the annihilator satisfies \((W^0)^0=W\).
Week 6
We talked a bit this week about quotients of vector spaces. One way to think of the quotient space \(V/U\) is to take \(V\), set everything in \(U\) to \(0\), and then alter the remaining space in the minimal possible way such that it becomes a vector space again. More precisely, if we take a basis \(v_1,\dotsc,v_n\) for \(V\), then the vectors \(v_1+U,\dotsc,v_n+U\) span \(V/U\), but may not be linearly independent anymore. If we were more careful, and obtained the basis of \(V\) by starting with a basis \(v_1,\dotsc,v_k\) of \(U\) and extending it, then the vectors \(v_1+U,\dotsc,v_k+U\) in the quotient are zero, and the rest form a basis of \(V/U\). So we see that everything in \(U\) becomes \(0\) when we take the quotient, and for every other vector, the “component parallel to \(U\)” becomes \(0\). So we can visualize the space \(V/U\) as the result of crushing \(V\) along the direction of \(U\).
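For instance, take \(V=\mathbb{R}^2\) and \(U=\operatorname{span}\{(1,0)\}\). Extending the basis \((1,0)\) of \(U\) to the basis \((1,0),(0,1)\) of \(V\), the coset \((1,0)+U\) is zero in \(V/U\), while \((0,1)+U\) is a basis of \(V/U\). Indeed
\[
(x,y)+U=(0,y)+U,
\]
so two vectors give the same element of \(V/U\) exactly when they have the same second coordinate, and \(V/U\cong\mathbb{R}\) via \((x,y)+U\mapsto y\): the plane has been crushed along the horizontal direction.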
Week 7
Remember that inner products on real vector spaces are bilinear, but inner products on complex vector spaces are only linear in one argument (in this course the convention is that it’s the second argument). They are sometimes called “conjugate linear” in the other argument.
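In symbols, with our convention, a complex inner product satisfies
\[
\langle u,\lambda v+\mu w\rangle=\lambda\langle u,v\rangle+\mu\langle u,w\rangle,
\qquad
\langle\lambda v+\mu w,u\rangle=\bar{\lambda}\langle v,u\rangle+\bar{\mu}\langle w,u\rangle.
\]
The conjugation in the first argument is forced by combining linearity in the second argument with the symmetry condition \(\langle v,u\rangle=\overline{\langle u,v\rangle}\).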
If you want to know more about spaces with non-degenerate, but not positive-definite, bilinear forms, Minkowski space is a good place to start.
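Concretely, Minkowski space is \(\mathbb{R}^4\) equipped with the symmetric bilinear form
\[
\langle x,y\rangle=x_0y_0-x_1y_1-x_2y_2-x_3y_3
\]
(up to a choice of sign convention). This form is non-degenerate, but \(\langle x,x\rangle\) can be zero or negative for non-zero \(x\), so it is not positive definite.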
Week 8
I talked briefly this week about categories and functors, after observing that the process of taking the adjoint of a linear map has some similarity to taking its transpose. If you want to find out a little more, a good place to start is with the Wikipedia articles on categories and category theory, and the references listed at the bottom of these pages.
Week 9
Finding the operator norm of a linear functional \(f\) usually happens in two stages. First you put a bound on \(\lvert f(v)\rvert\) over all \(v\) with norm at most \(1\). Then you show that this bound cannot be improved, either by finding a specific \(v\) such that \(\lvert f(v)\rvert\) attains your bound (such a \(v\) need not exist in general), or by finding a sequence \((v_n)\) such that \(\lvert f(v_n)\rvert\) converges to your bound.
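For a worked example, fix a non-zero vector \(u\) in an inner product space \(V\) and let \(f(v)=\langle u,v\rangle\), which is linear in \(v\) by our convention from Week 7. By the Cauchy–Schwarz inequality, \(\lvert f(v)\rvert\leq\lVert u\rVert\lVert v\rVert\leq\lVert u\rVert\) whenever \(\lVert v\rVert\leq 1\). The bound is attained: taking \(v=u/\lVert u\rVert\) gives
\[
f(v)=\frac{\langle u,u\rangle}{\lVert u\rVert}=\lVert u\rVert,
\]
so the operator norm of \(f\) is exactly \(\lVert u\rVert\).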
It is also worth remembering that inner products allow you to define the angle between two vectors in your space. In particular, they tell you when two vectors are orthogonal. It is important to remember that the orthogonal complement to a subspace of an inner product space depends on the inner product. If you equip the same underlying vector space with a different inner product, subspaces may have different orthogonal complements.
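For example, equip \(\mathbb{R}^2\) with the inner product \(\langle x,y\rangle'=x_1y_1+2x_2y_2\), and let \(W=\operatorname{span}\{(1,1)\}\). With the standard inner product the orthogonal complement of \(W\) is \(\operatorname{span}\{(1,-1)\}\), but under \(\langle-,-\rangle'\) a vector \((y_1,y_2)\) is orthogonal to \((1,1)\) exactly when \(y_1+2y_2=0\), so the complement becomes \(\operatorname{span}\{(2,-1)\}\).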
Week 10
You can think of the tensor product of two vector spaces in two different ways, and each has its own advantages. The first definition you know says that each bilinear map from the Cartesian product \(U\times V\) to a vector space \(W\) corresponds to a unique linear map from \(U\otimes V\) to \(W\), compatible with the natural map from \(U\times V\) to \(U\otimes V\). This definition explains what the tensor product does, and is often more useful for proving theorems.
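In symbols: for every bilinear map \(\beta\colon U\times V\to W\) there is a unique linear map \(\bar{\beta}\colon U\otimes V\to W\) with
\[
\bar{\beta}(u\otimes v)=\beta(u,v)\quad\text{for all }u\in U,\ v\in V,
\]
where \(u\otimes v\) denotes the image of \((u,v)\) under the natural map \(U\times V\to U\otimes V\).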
Another definition tells you what the tensor product is—i.e. it tells you what elements look like, and when two of them are equal to each other. This point of view can be more useful in working with specific examples. Being able to move easily between these two points of view is a useful skill.
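Concretely, every element of \(U\otimes V\) is a finite sum of elementary tensors \(u\otimes v\), and two such expressions represent the same element exactly when one can be obtained from the other using the bilinearity relations
\[
(u+u')\otimes v=u\otimes v+u'\otimes v,\qquad
u\otimes(v+v')=u\otimes v+u\otimes v',\qquad
(\lambda u)\otimes v=\lambda(u\otimes v)=u\otimes(\lambda v).
\]
Note that not every element is an elementary tensor: in \(\mathbb{K}^2\otimes\mathbb{K}^2\), for example, \(e_1\otimes e_1+e_2\otimes e_2\) cannot be written as \(u\otimes v\).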