MA20217 Algebra 2B (Spring ’15)


Tutorials

I give tutorials for Group 2 at 11:15 on Fridays in 1WN 3.11, and for Group 6 at 13:15 on Thursdays in CB 5.7. Work should be handed in to my folders in the pigeonholes for the module on the first floor of 4W, on the Tuesday two weeks after each sheet is set.

Course Website

The main course webpage for this module can be found here.

IMPORTANT: The definition of ring is not consistent across all sources. Wikipedia, for example, uses “ring” for what we call “ring with \(1\)” in this course, and “rng” for what we call “ring”. Thus if you look up the definition of “ring homomorphism” on Wikipedia, you will get the wrong thing; you want “rng homomorphism”.

Week 1

There are no tutorials in Week 1.

Week 2

One of the hurdles at the beginning of the course is to get used to doing calculations in arbitrary rings. Most importantly, you need to remember that a general ring is not commutative, and even non-zero elements may not have multiplicative inverses. So for example, while you have \[(x+y)^2=x^2+xy+yx+y^2\] for \(x\), \(y\) in a ring \(R\), you don’t necessarily have \(xy=yx\), so you shouldn’t combine these terms. Similarly, given the equation \[xy=-yx\] some of you tried to ‘cancel the \(y\)’ to prove that \(x=-x\). But this is not valid: firstly, \(y\) might not have a multiplicative inverse, and secondly, even if it does, multiplying by this inverse need not clear the \(y\) from both sides of the equation. For example, if \(y\) has inverse \(y^{-1}\), then we can multiply by it on the left to get \[y^{-1}xy=-y^{-1}yx=-x\] but since \(R\) might not be commutative, we needn’t have \(y^{-1}xy=x\). A good example to bear in mind when doing these calculations is a ring of matrices; matrix multiplication is not commutative, and not every non-zero matrix is invertible.
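
To make this concrete, here is a small worked example (my own choice; any similar pair of matrices works) in the ring of \(2\times 2\) real matrices. Take \[x=\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad y=\begin{pmatrix}0&0\\1&0\end{pmatrix}.\] Then \[xy=\begin{pmatrix}1&0\\0&0\end{pmatrix}\neq\begin{pmatrix}0&0\\0&1\end{pmatrix}=yx,\] and both \(x\) and \(y\) are non-zero but have no multiplicative inverses, since both have determinant \(0\).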

Week 3

We spent some time this week discussing well-definedness. It is easiest to describe what it means for a map to be well-defined by describing how this property can fail, and there are two main ways. As an example, we take a candidate map \(\cdot\colon R\times R\to R\), such as a candidate for the multiplication on \(R\). The first problem might be that given \(r,r'\in R\), the product \(r\cdot r'\) doesn’t lie in \(R\). (This is most common when \(R\) is a subset of some larger ring \(\hat{R}\) and we try to restrict the multiplication of \(\hat{R}\) to \(R\); it might not work.) The property that \(r\cdot r'\in R\) for all \(r,r'\in R\) is sometimes referred to as closure (particularly if \(R\) is a subset of something else), but is really just part of the map being well-defined.
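
For instance (an example of my own), the subset \(R\subseteq\mathbb{R}\) of irrational numbers is not closed under multiplication, since \[\sqrt{2}\cdot\sqrt{2}=2\notin R,\] so restricting the multiplication of \(\mathbb{R}\) to \(R\) does not give a well-defined map \(R\times R\to R\).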

The other problem can be that there are two ways of writing the same element, and our map appears to depend on the way the element is written. As an example, consider a map \(\phi\colon R/I\to S\), where \(I\) is some ideal of \(R\). Now we have \([r]=[s]\) in \(R/I\) if and only if \(r-s\in I\), but this doesn’t mean \(r=s\). Let’s say (as this is a common situation) that we want to induce our map \(\phi\colon R/I\to S\) from a map \(\hat{\phi}\colon R\to S\), by defining \(\phi([r])=\hat{\phi}(r)\); this is only allowable if \(\hat{\phi}(r)=\hat{\phi}(s)\) whenever \([r]=[s]\), since otherwise \(\phi\) is not defined unambiguously.
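
A concrete illustration (my own example, with square brackets denoting cosets in the relevant quotient): take \(R=\mathbb{Z}\), \(I=4\mathbb{Z}\) and \(\hat{\phi}\colon\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\) given by \(\hat{\phi}(r)=[r]\). If \([r]=[s]\) in \(\mathbb{Z}/4\mathbb{Z}\), then \(r-s\in4\mathbb{Z}\subseteq2\mathbb{Z}\), so \(\hat{\phi}(r)=\hat{\phi}(s)\), and hence \(\phi([r])=\hat{\phi}(r)\) is a well-defined map \(\mathbb{Z}/4\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\). By contrast, trying to define \(\psi\colon\mathbb{Z}/4\mathbb{Z}\to\mathbb{Z}/3\mathbb{Z}\) by \(\psi([r])=[r]\) fails: we have \([0]=[4]\) in \(\mathbb{Z}/4\mathbb{Z}\), but \([0]\neq[4]=[1]\) in \(\mathbb{Z}/3\mathbb{Z}\).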

Week 4

I was in Sweden this week, so tutorials were taken by Steven Pagett and George Frost.

Week 5

This week we mostly discussed quotient rings, and the solutions to questions 2 and 8 from Sheet 2. The main lesson was a psychological one rather than a mathematical one: the model solutions are polished, and not necessarily presented in the order in which you might think of the ideas! You are not expected to be able to look at the problem and immediately produce the model answer. Instead, you should allow time to experiment with the problem, and expect to take some wrong turns along the way to a solution. If anything, these wrong turns are often the most enlightening part of the process.

Week 6

The main message this week, which people often forget, is that a polynomial in \(R[x]\) may be reducible even if it has no roots in \(R\). For example, the polynomial \[(x^2+1)^2\in\mathbb{R}[x]\] has no roots, but is manifestly reducible. The reason for this confusion seems to be that you are used to seeing polynomials of low degree (here meaning less than \(4\)), where there is a relationship between reducibility and the presence of roots. If \(f\) is a polynomial of degree \(2\) or \(3\), and \(f=gh\), then we have \(\operatorname{deg}(f)=\operatorname{deg}(g)+\operatorname{deg}(h)\). If neither \(g\) nor \(h\) is a unit, so that \(f\) is reducible, then both \(g\) and \(h\) have positive degree. So either both have degree \(1\), or one has degree \(1\) and the other degree \(2\), depending on the degree of \(f\). In either case, there is a linear factor, which corresponds to a root. So if \(f\) has no roots, it must be irreducible. However, as soon as \(\operatorname{deg}(f)\geq 4\) it is possible to write \(\operatorname{deg}(f)=a+b\) with \(a,b\geq2\), and so this argument breaks down (as the above counterexample shows that it must!).
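
Another standard example of this phenomenon (my own addition, going beyond the counterexample above) is \[x^4+4=(x^2-2x+2)(x^2+2x+2)\in\mathbb{Q}[x],\] which has no roots in \(\mathbb{Q}\) (or even in \(\mathbb{R}\), since \(x^4+4>0\)), but is reducible, being a product of two quadratics; each quadratic factor is itself irreducible, having negative discriminant.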

Week 7

We mainly dealt with terminology this week. The common theme was that if you have an \(n\)-dimensional vector space \(V\), and you choose an (ordered) basis \(v_1,\dotsc,v_n\), then you get an isomorphism \(\Phi\colon\mathbb{R}^n\stackrel{\sim}{\to} V\) via the map that sends the standard basis vector \(e_i\) of \(\mathbb{R}^n\) to \(v_i\). You can then transfer various structures on \(\mathbb{R}^n\) to \(V\). For example, if you have an endomorphism \(T\colon\mathbb{R}^n\to\mathbb{R}^n\), it gives you an endomorphism \(\Phi\circ T\circ\Phi^{-1}\colon V\to V\); this is the linear map \(V\to V\) which has the same matrix with respect to \(v_1,\dotsc,v_n\) as \(T\) does with respect to \(e_1,\dotsc,e_n\). Similarly, you can define an inner product on \(V\) via \[u\cdot v=\Phi^{-1}(u)\cdot\Phi^{-1}(v)\] where the \(\cdot\) on the right-hand side denotes the standard dot product on \(\mathbb{R}^n\).
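
As a small illustration (with \(V\) and the basis of my own choosing), let \(V\) be the space of real polynomials of degree at most \(1\), with ordered basis \(v_1=1\), \(v_2=x\), so that \(\Phi(a,b)=a+bx\). The inner product transferred from the standard dot product on \(\mathbb{R}^2\) is then \[(a+bx)\cdot(c+dx)=(a,b)\cdot(c,d)=ac+bd.\]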

Week 8

Today we discussed field extensions. If you have an irreducible polynomial \(f\in k[x]\) for some field \(k\), you can find a field \(K\supseteq k\) in which \(f\) has a root (and, by iterating the construction, a field over which \(f\) splits into linear factors). This field can be taken to be \(k[y]/k[y]f\), which is a field since \(f\) is irreducible; this amounts to adjoining a root of \(f\) to the field \(k\).

As an example, if we consider \(f=x^2+1\in\mathbb{R}[x]\), this does not split into linear factors, since there is no square root of \(-1\) in \(\mathbb{R}\). To get a field containing such a square root, we first adjoin a new element \(y\) to \(\mathbb{R}\)—since everything should be a ring, this means we have to take the polynomial ring \(\mathbb{R}[y]\), so that multiplication and addition continue to be defined. Now we wish to enforce the equation \(y^2+1=0\), which amounts to taking a quotient by the smallest ideal containing \(y^2+1\), i.e. \(\mathbb{R}[y](y^2+1)\). (This is essentially what quotient constructions are designed to achieve.) Now the quotient \(K=\mathbb{R}[y]/\mathbb{R}[y](y^2+1)\) is a field, since \(y^2+1\) is an irreducible polynomial, and it ‘contains’ \(\mathbb{R}\) via the map \(\lambda\mapsto\lambda+\mathbb{R}[y](y^2+1)\), which is injective. Over \(K\), our polynomial \(x^2+1\) factors as \((x+[y])(x-[y])\), where \([y]\) denotes the coset \(y+\mathbb{R}[y](y^2+1)\)—check it! While this might look very abstract, it should be what we expected, as \(y\) (which becomes \([y]\) in the quotient) is the element we added with the aim of making a root of the polynomial \(x^2+1\)!
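
For the ‘check it!’, the computation uses only that \([y]^2=[y^2]=[-1]\) in \(K\): \[(x+[y])(x-[y])=x^2-[y]^2=x^2-[-1]=x^2+1.\]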

In the above example, you probably saw immediately that we could instead have taken \(K=\mathbb{C}\) to get our desired field extension, as \(f\) factors as \((x-\mathrm{i})(x+\mathrm{i})\) over \(\mathbb{C}\). In fact we have \(\mathbb{R}[y]/\mathbb{R}[y](y^2+1)\cong\mathbb{C}\) via the map \([p(y)]\mapsto p(\mathrm{i})\).

Week 9

This week we observed a useful fact about \(K\)-algebras with \(1\). If \(p\in K[x]\) is a polynomial with coefficients in \(K\), and \(\alpha\in A\) for some \(K\)-algebra \(A\), then we can make sense of the evaluation \(p(\alpha)\) as an element of \(A\). Indeed, if we replace \(x\) by \(\alpha\) each time it appears in an expression for \(p\), then evaluating this expression in \(A\) only requires taking powers of \(\alpha\), multiplying them by elements of \(K\), and adding them up; these are all well-defined operations in \(A\). Perhaps the most confusing thing is what to do with the constant term \(c\) of \(p\); this is replaced by \(c\cdot 1_A\), where \(1_A\) is the multiplicative identity of \(A\). (One way of thinking about this is that the constant term is the \(x^0\) term, and \(\alpha^0=1_A\) for all \(\alpha\in A\).)
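
As a quick worked example (with a matrix of my own choosing): for \(p=x^2+3x+2\in\mathbb{R}[x]\) and \(\alpha=\begin{pmatrix}0&1\\0&0\end{pmatrix}\) in the algebra of \(2\times2\) real matrices, we get \[p(\alpha)=\alpha^2+3\alpha+2\cdot 1_A=\begin{pmatrix}0&0\\0&0\end{pmatrix}+\begin{pmatrix}0&3\\0&0\end{pmatrix}+\begin{pmatrix}2&0\\0&2\end{pmatrix}=\begin{pmatrix}2&3\\0&2\end{pmatrix},\] where \(1_A\) is the identity matrix.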

For the rest of the course, which deals with Jordan normal form, you will be most interested in the cases \(A=\operatorname{End}(V)\) for \(V\) a \(K\)-vector space, and (very similarly!) \(A=M_{n\times n}(K)\), the set of \(n\times n\) matrices with entries in \(K\).

Week 10

A useful fact when calculating the minimal polynomial \(m_\alpha\) of an element \(\alpha\) of some algebra is that if \(p(\alpha)=0\) for some polynomial \(p\), then \(m_\alpha\) divides \(p\). This is particularly useful if \(\alpha\in\operatorname{End}(V)\) and you want to show that \(\alpha\) is diagonalisable. A sufficient (and in fact also necessary) condition for diagonalisability of \(\alpha\) is that \(m_\alpha\) is a product of distinct linear factors. Thus if \(p(\alpha)=0\) for some product \(p\) of distinct linear factors, then \(m_\alpha\) divides \(p\), so it is also such a product, and \(\alpha\) is diagonalisable.
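
A standard application (my own example): if \(\alpha\in\operatorname{End}(V)\) satisfies \(\alpha^2=\alpha\), then \(p(\alpha)=0\) for \[p=x^2-x=x(x-1),\] a product of distinct linear factors. Hence \(m_\alpha\) divides \(x(x-1)\), so \(m_\alpha\) is one of \(x\), \(x-1\) or \(x(x-1)\), each a product of distinct linear factors, and \(\alpha\) is diagonalisable (with eigenvalues among \(0\) and \(1\)).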