*Algebra*

## Section 2.4. Cyclic Groups

Exercise 2.4.1 Since \(a^7 = 1\), it follows that \(ab = a^{15}b\). We can use \(a^3 b = ba^3\) to exchange \(b\) with 3 factors of \(a\); doing this 5 times, we obtain \(a^{15}b = ba^{15}\). So \(ab = ba^{15} = ba\).

Exercise 2.4.2

- The \(n\)^{th} roots of unity are the \(n\) complex numbers \(e^{2\pi ik/n}\) with \(k = 0, 1, \ldots, n-1\). It is easily verified that this set contains 1 and is closed under multiplication and inverses, so it is a group. It is cyclic since it is generated by the element \(e^{2\pi i/n}\).
- If \(n\) is odd, then the \(n\)^{th} roots of unity occur in \((n-1)/2\) complex conjugate pairs together with 1, so the product is 1. If \(n\) is even, then there are \(n/2-1\) complex conjugate pairs together with 1 and \(-1\), so the product is \(-1\). In general, the product is \((-1)^{n+1}\).
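As a numerical sanity check (not part of the proof), the following Python snippet multiplies out the \(n\)^{th} roots of unity and compares the product with \((-1)^{n+1}\):

```python
import cmath

def roots_of_unity(n):
    """The n-th roots of unity e^(2*pi*i*k/n) for k = 0, ..., n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for n in range(1, 12):
    product = 1
    for z in roots_of_unity(n):
        product *= z
    # the product should be 1 for odd n and -1 for even n
    assert abs(product - (-1) ** (n + 1)) < 1e-9
```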

Exercise 2.4.3 Let \(n\) be the order of \(ab\). Then \(1 = (ab)^n = a(ba)^{n-1}b\). Left-multiplying by \(b\) and right-multiplying by \(b^{-1}\) yields \(1 = ba(ba)^{n-1} = (ba)^n\). Therefore the order of \(ba\) is at most \(n\). By exchanging the roles of \(a\) and \(b\) in the above argument, we see that the order of \(ab\) is also at most the order of \(ba\). Therefore the two orders are equal.
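This can be spot-checked by brute force in the smallest nonabelian group, \(S_3\) (permutations encoded as tuples; the helpers are my own):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def order(p):
    """Order of a permutation: least k >= 1 with p^k = identity."""
    e = tuple(range(len(p)))
    r, k = p, 1
    while r != e:
        r, k = compose(r, p), k + 1
    return k

S3 = list(permutations(range(3)))
for a in S3:
    for b in S3:
        assert order(compose(a, b)) == order(compose(b, a))
```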

Exercise 2.4.4 Suppose \(G\) is a group whose only subgroups are \(G\) itself and the trivial subgroup. If \(G\) is nontrivial, then \(G\) must be cyclic; otherwise the subgroup generated by some non-identity element of \(G\) would be a proper nontrivial subgroup. If \(G\) is cyclic and finite with order greater than 1, the order must be prime, since \(C_p\) is a proper nontrivial subgroup of \(C_{pq}\) whenever \(p, q > 1\). If \(G\) is infinite cyclic with generator \(x\), then it has proper nontrivial subgroups, such as \(\langle x^2 \rangle\). Thus, the possible groups are the trivial group and the cyclic groups of prime order.

Exercise 2.4.5 Let \(G\) be a cyclic group and \(H\) a subgroup of \(G\). Let \(x\) be a generator of \(G\), and write \(H = \{x^i \mid i \in S\}\) where the set \(S\) is possibly infinite. Let \(d\) be the GCD of \(S\), which is defined even if \(S\) is infinite; see Exercise 2.3.3. By definition, \(d\) is a linear combination of the elements of \(S\), say, \(d = \sum_{i \in S} c_i i\) where all \(c_i\)'s are integers and only finitely many are nonzero; and \(d\) generates the set of such linear combinations. This implies that \(x^d = \prod_{i \in S} (x^i)^{c_i}\), so \(H\) contains \(x^d\), and that \(d\) divides all \(i \in S\) so that all \(x^i\)'s are powers of \(x^d\). It follows that \(H\) is cyclic with generator \(x^d\).
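As a concrete illustration (my own example, identifying the cyclic group with \(\mathbb{Z}/n\) written additively so that exponents become residues), the subgroup generated by a set of exponents consists exactly of the multiples of their GCD with \(n\):

```python
from functools import reduce
from math import gcd

def subgroup(gens, n):
    """The subgroup of Z/n generated by gens: closure of {0} | gens under addition mod n."""
    H = {0} | set(gens)
    while True:
        new = {(a + b) % n for a in H for b in H} - H
        if not new:
            return H
        H |= new

n, gens = 30, {12, 18}       # a made-up example inside C_30
d = reduce(gcd, gens, n)     # gcd(30, 12, 18) = 6
assert subgroup(gens, n) == set(range(0, n, d))
```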

Exercise 2.4.6 In general, let \(G = C_n\) and let \(x\) be some generator of \(C_n\). Observe that \(m\) and \(n\) are relatively prime if and only if \(m\) and \(-n\) are relatively prime, so by Corollary 2.3.6, there exist \(r, s \in \mathbb{Z}\) such that \(rm - sn = 1\) if and only if \(m\) and \(n\) are relatively prime. If such \(r, s\) exist, then \((x^m)^r = x^{sn + 1} = x\), so \(x^m\) generates \(C_n\). Conversely, if \(x \in \langle x^m \rangle\), then there is some \(r\) with \(x = (x^m)^r = x^{mr}\), so \(x^{mr-1} = 1\), implying \(mr - 1 = sn\) for some \(s\), or \(rm - sn = 1\); so when \(m\) is not relatively prime to \(n\), \(x \notin \langle x^m \rangle\). Therefore the number of elements of \(C_n\) that generate \(C_n\) is precisely \(\varphi(n)\), the number of integers from 0 to \(n-1\), inclusive, that are relatively prime to \(n\). Therefore \(C_6\) has 2 generators, \(C_5\) has 4, and \(C_8\) has 4.
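The counts \(\varphi(6) = 2\), \(\varphi(5) = 4\), \(\varphi(8) = 4\) are easy to confirm by direct enumeration (stdlib only); the snippet also checks the statement itself by counting residues \(m\) whose multiples cover all of \(\mathbb{Z}/n\), i.e. for which \(x^m\) generates \(C_n\):

```python
from math import gcd

def phi(n):
    """Euler's totient: number of k in [0, n) with gcd(k, n) = 1."""
    return sum(1 for k in range(n) if gcd(k, n) == 1)

def generators(n):
    """Residues m such that the multiples of m exhaust Z/n."""
    return [m for m in range(n) if len({m * k % n for k in range(n)}) == n]

assert phi(6) == 2 and phi(5) == 4 and phi(8) == 4
assert all(len(generators(n)) == phi(n) for n in range(1, 40))
```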

Exercise 2.4.7 Since all non-identity elements of \(H\) have order 2, each such element is its own inverse, so each element of \(H\) has an inverse in \(H\). It remains to verify that \(H\) is closed under multiplication. We only need to establish that \(xy, x(xy), yx, y(xy), (xy)x\), and \((xy)y\) are in \(H\); the products with 1 are obviously in \(H\), and the squares of elements in \(H\) have already been discussed. Now \(x(xy) = (xx)y = 1y = y \in H\). Similar reasoning shows that \((xy)y \in H\). And \((yx)xy = y(xx)y = yy = 1\), so \(yx\) is the inverse of \(xy\), implying that \(yx = xy \in H\). This then allows us to establish that \(y(xy) = y(yx) = (yy)x = x \in H\) and likewise \((xy)x = (yx)x = y(xx) = y \in H\).
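As a concrete instance (my own example, not needed for the proof), the three products of disjoint transpositions in \(S_4\), together with the identity, form exactly such a subgroup, the Klein four-group:

```python
def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2, 3)
x = (1, 0, 3, 2)   # the transposition pair (0 1)(2 3)
y = (2, 3, 0, 1)   # the transposition pair (0 2)(1 3)
xy = compose(x, y)
H = {e, x, y, xy}

assert all(compose(p, p) == e for p in H)             # every element squares to 1
assert all(compose(p, q) in H for p in H for q in H)  # H is closed, hence a subgroup
assert compose(x, y) == compose(y, x)                 # and xy = yx, as proved above
```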

Exercise 2.4.8 I can't see how solving (a) first makes (b) easier, so instead I will give a standalone solution to (b) and reduce (a) to (b).

- Suppose \(A \in GL_n(\mathbb{R})\) is given. Let \(c = \det A\). Then \(A = BC\), where \(B = \operatorname{diag}(c, 1, 1, \ldots, 1)\) and \(C\) is identical to \(A\) except that each entry in the first row is equal to \(1/c\) times the corresponding entry of \(A\); in particular, \(\det C = 1\). By part (b), \(C\) can be written as a product of elementary matrices of the first type. \(B\) is an elementary matrix of the third type. Therefore \(A\) is a product of elementary matrices of the first and third types.
First, some notation. Let \(E_n(r, c, a)\) denote the \(n \times n\) elementary matrix of the first type with the entry \(a\) located at position \((r, c)\). Let \(S_n(i_1, i_2, c)\) denote the diagonal matrix with the entry \(c\) at \((i_1, i_1)\), the entry \(1/c\) at \((i_2, i_2)\), and ones elsewhere on the diagonal. Note that left-multiplication by \(S_n(i_1, i_2, c)\) has the effect of scaling rows \(i_1\) and \(i_2\) by a factor of \(c\) and \(1/c\), respectively, while leaving other rows unchanged. Let \(T_n(i_1, i_2)\) denote the matrix whose entries are as follows: \begin{equation*} T_n(i_1, i_2)_{r,c} = \begin{cases} 1 & \text{if $r = c$, and $r \neq i_1$, and $r \neq i_2$} \\ -1 & \text{if $r = i_1$ and $c = i_2$} \\ 1 & \text{if $r = i_2$ and $c = i_1$} \\ 0 & \text{otherwise} \end{cases} \end{equation*} Note that left-multiplication by \(T_n(i_1, i_2)\) has the effect of swapping rows \(i_1\) and \(i_2\), then changing the sign of the new row \(i_1\).

As the text suggests, we first consider the case \(n = 2\). If one can work out how to express \(S_2\) and \(T_2\) matrices in terms of \(E_2\) matrices, the idea can be extended to arbitrary \(n\). Here one can easily verify that \begin{align*} S_2(1, 2, c) &= \begin{pmatrix} c & 0 \\ 0 & 1/c \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ -1/c & 1 \end{pmatrix} \begin{pmatrix} 1 & c-1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & \frac{1-c}{c} \\ 0 & 1 \end{pmatrix} \\ &= E_2(2, 1, -1/c)\, E_2(1, 2,c-1)\, E_2(2, 1, 1)\, E_2(1, 2, (1-c)/c) \\ T_2(1, 2) &= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \\ &= E_2(1, 2, -1)\, E_2(2, 1, 1)\, E_2(1, 2, -1) \end{align*} This allows us to deduce a way of writing the \(S_n\) and \(T_n\) matrices as products of \(E_n\) matrices in general: \begin{align*} S_n(i_1, i_2, c) &= E_n(i_2, i_1, -1/c)\, E_n(i_1, i_2, c-1)\, E_n(i_2, i_1, 1) E_n(i_1, i_2, (1-c)/c) \\ T_n(i_1, i_2) &= E_n(i_1, i_2, -1)\, E_n(i_2, i_1, 1)\, E_n(i_1, i_2, -1) \end{align*}
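Both factorizations can be verified with exact rational arithmetic; here is a small Python check (the helper names are my own):

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def E2(r, c, a):
    """2x2 elementary matrix of the first type with entry a at (1-indexed) position (r, c)."""
    M = [[F(1), F(0)], [F(0), F(1)]]
    M[r - 1][c - 1] = F(a)
    return M

def prod(ms):
    out = ms[0]
    for M in ms[1:]:
        out = matmul(out, M)
    return out

# S_2(1, 2, c) = E_2(2,1,-1/c) E_2(1,2,c-1) E_2(2,1,1) E_2(1,2,(1-c)/c)
for c in (F(2), F(-3), F(5, 7)):
    S = prod([E2(2, 1, -1 / c), E2(1, 2, c - 1), E2(2, 1, 1), E2(1, 2, (1 - c) / c)])
    assert S == [[c, 0], [0, 1 / c]]

# T_2(1, 2) = E_2(1,2,-1) E_2(2,1,1) E_2(1,2,-1)
T = prod([E2(1, 2, -1), E2(2, 1, 1), E2(1, 2, -1)])
assert T == [[0, -1], [1, 0]]
```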

Now in general the idea is to use row reduction to write an arbitrary \(A \in SL_n(\mathbb{R})\) as the product of \(E_n\)'s. The \(S_n\) and \(T_n\) matrices correspond naturally to certain operations done during row reduction. We claim that if the first \(k\) columns of \(A\) are the same as those of the \(n \times n\) identity matrix, then \(A\) can be written as a product of \(E_n\)'s. The result will be proven by induction on \(n - k\).

*Base case*: If \(n - k = 1\), then by expansion in minors, \(\det A = A_{n,n}\), therefore \(A_{n,n} = 1\). Therefore \begin{equation*} A = E_n(1,n,A_{1,n})\, E_n(2,n,A_{2,n}) \ldots E_n(n-1,n,A_{n-1,n}) \end{equation*}

*Inductive case*: Suppose \(n - k \ge 2\) and the left \(k\) columns of \(A\) are the same as those of the identity matrix. We now work on column \(k+1\). If \(A_{k+1,k+1}\) is nonzero, let \(B = A\). Otherwise, since \(A\) has full rank, the \((k+1)\)^{th} column can't be a linear combination of the first \(k\) columns, so there must be some \(m > k\) with \(A_{m,k+1} \neq 0\). In the process of row reduction, we would swap rows \(k+1\) and \(m\), but since we are working with matrices of determinant 1, we also have to change the sign of one of the rows. Here, this is accomplished by writing \(A = T_n(k+1, m) B\). Note that the first \(k\) columns of \(B\) are also the same as those of the identity matrix, but now \(B_{k+1,k+1} \neq 0\). So we have reduced the problem of \(A\) to that of \(B\).

In the process of row reduction, we would next scale row \(k+1\) by \(1/c\), where \(c = B_{k+1,k+1}\), to make the entry at \((k+1,k+1)\) equal to 1. Since we are working with matrices of determinant 1, we have to scale some other row by \(c\) as well in order to preserve the determinant. Since \(n - k \ge 2\), we can write \(B = S_n(k+1, k+2, c)\, C\), where the first \(k\) columns of \(C\) are the same as those of the identity matrix and \(C_{k+1,k+1} = 1\). So we have reduced the problem of \(B\) to that of \(C\).

The final step of row reduction is to zero out all entries in column \(k+1\) other than the pivot. This is done using the \(E_n\)'s themselves (see (1.2.5) in the text). The result is that \(C = E_n(1, k+1, C_{1,k+1})\, E_n(2, k+1, C_{2,k+1}) \ldots D\), where the product contains a factor \(E_n(i, k+1, C_{i,k+1})\) for each row \(i \neq k+1\), and the first \(k+1\) columns of \(D\) are the same as those of the identity matrix. By the inductive hypothesis, \(D\) is a product of elementary matrices of the first type. This completes the induction.

By taking \(k = 0\), we recover the desired result in full generality.
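The induction above is effectively an algorithm. As a sanity check (my own code, not part of the text; positions are 0-indexed and arithmetic is exact over \(\mathbb{Q}\) via `Fraction`), the following sketch carries out the row reduction, emitting only type-1 elementary factors and verifying that their product recovers the original matrix:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def E(n, r, c, a):
    """n x n elementary matrix of the first type: entry a at 0-indexed (r, c)."""
    M = [[F(i == j) for j in range(n)] for i in range(n)]
    M[r][c] = F(a)
    return M

def decompose(A):
    """Factors of the first type whose product is A (requires det A = 1)."""
    n = len(A)
    R = [[F(x) for x in row] for row in A]
    facs = []  # invariant: A = (product of facs) * R
    for k in range(n - 1):
        if R[k][k] == 0:
            # pivot via T_n(k, m): swap rows k and m, negating the new row k
            m = next(i for i in range(k + 1, n) if R[i][k] != 0)
            facs += [E(n, k, m, -1), E(n, m, k, 1), E(n, k, m, -1)]
            R[k], R[m] = R[m], [-x for x in R[k]]
        c = R[k][k]
        if c != 1:
            # scale via S_n(k, k+1, c): row k by c, row k+1 by 1/c
            facs += [E(n, k + 1, k, -1 / c), E(n, k, k + 1, c - 1),
                     E(n, k + 1, k, 1), E(n, k, k + 1, (1 - c) / c)]
            R[k] = [x / c for x in R[k]]
            R[k + 1] = [x * c for x in R[k + 1]]
        for i in range(n):  # clear column k away from the pivot
            if i != k and R[i][k] != 0:
                a = R[i][k]
                facs.append(E(n, i, k, a))
                R[i] = [R[i][j] - a * R[k][j] for j in range(n)]
    for i in range(n - 1):  # base case: only the last column remains
        if R[i][n - 1] != 0:
            facs.append(E(n, i, n - 1, R[i][n - 1]))
    return facs

def prod(ms, n):
    out = E(n, 0, 0, 1)  # identity
    for M in ms:
        out = matmul(out, M)
    return out

for A in ([[1, 1, 1], [1, 2, 1], [1, 1, 2]],      # det 1
          [[0, -1, 0], [1, 0, 0], [0, 0, 1]]):    # det 1, needs a row swap
    assert prod(decompose(A), 3) == [[F(x) for x in row] for row in A]
```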

Exercise 2.4.9 A permutation \(P\) that equals its own inverse can be expressed as a product of disjoint transpositions, where each transposition \((a\ b)\) is formed from a pair of distinct elements with \(P(a) = b, P(b) = a\). Conversely, every product of disjoint transpositions is involutive. So the number of elements of order 2 in \(S_4\) is equal to the number of ways to select some disjoint pairs of elements from a set of size 4, not counting the empty set of pairs (corresponding to the identity, which has order 1). There are 6 permutations containing one transposition and 3 more containing two transpositions, for a total of 9.
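As a brute-force check, one can enumerate \(S_4\) directly:

```python
from itertools import permutations

identity = tuple(range(4))
involutions = [p for p in permutations(range(4))
               if p != identity and all(p[p[i]] == i for i in range(4))]

singles = [p for p in involutions if sum(p[i] == i for i in range(4)) == 2]  # one transposition
doubles = [p for p in involutions if all(p[i] != i for i in range(4))]       # two transpositions

assert len(singles) == 6 and len(doubles) == 3
assert len(involutions) == 9
```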

Exercise 2.4.10 Define \(A, B \in GL_2(\mathbb{R})\) as follows: \begin{align*} A &= \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \\ B &= \begin{pmatrix} -1 & 0 \\ 1 & 1 \end{pmatrix} \end{align*} Then \(A^2 = B^2 = I\), but \begin{align*} AB &= \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \\ (AB)^n &= \begin{pmatrix} 1 & 0 \\ n & 1 \end{pmatrix} \end{align*} so \(AB\) does not have finite order. But in an abelian group, if \(a\) has order \(m\) and \(b\) has order \(n\), then \((ab)^{mn} = (a^m)^n(b^n)^m = 1\), so \(ab\) has finite order too.
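The claimed powers are easy to confirm with exact integer arithmetic:

```python
A = [[-1, 0], [0, 1]]
B = [[-1, 0], [1, 1]]
I = [[1, 0], [0, 1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

assert mul(A, A) == I and mul(B, B) == I

P = mul(A, B)
assert P == [[1, 0], [1, 1]]

M = I
for n in range(1, 10):
    M = mul(M, P)
    assert M == [[1, 0], [n, 1]]  # the lower-left entry grows without bound
```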

Exercise 2.4.11

- Write \(P \in S_n\) as a product of disjoint cycles. It suffices to show that the transpositions generate the cycles. Suppose \(C = (a_1\ a_2\ \ldots\ a_k)\) is one of these cycles. Then it can be written as the product of transpositions \(C = (a_1\ a_2) (a_2\ a_3) \ldots (a_{k-1}\ a_k)\), and we are done.
- Write \(P \in A_n\) as a product of transpositions \(P = \tau_1 \tau_2 \ldots \tau_{2k-1} \tau_{2k}\). The number of these transpositions must be even; we can pair them up like so: \(P = (\tau_1\tau_2) (\tau_3\tau_4) \ldots (\tau_{2k-1}\tau_{2k})\). It suffices to write each parenthesized factor as a product of 3-cycles. If the two transpositions in a factor are equal, their product is the identity, the empty product of 3-cycles. If they are disjoint, we use the fact that if \(a, b, c, d\) are distinct, then \((a\ b)(c\ d) = (a\ d\ c)(a\ b\ c)\). Otherwise they share exactly one element; WLOG assume \(b = c\), and then \((a\ b)(b\ d) = (a\ b\ d)\). This completes the proof. Note that we didn't use the fact that \(n \geq 3\), so the result holds for \(n = 2\) as well; in this case \(A_n\) is trivial so it is generated by the (empty) set of 3-cycles.
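Both 3-cycle identities, and the cycle decomposition from the first part, can be verified mechanically (right-to-left composition, 0-indexed; the helpers are my own):

```python
def cyc(c, n):
    """The permutation (as a tuple acting on 0..n-1) given by the cycle c."""
    p = list(range(n))
    for a, b in zip(c, c[1:] + c[:1]):
        p[a] = b
    return tuple(p)

def compose(p, q):
    """(p o q)(i) = p(q(i)): q is applied first."""
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
# a k-cycle as a product of transpositions: (a1 a2 a3 a4) = (a1 a2)(a2 a3)(a3 a4)
rhs = cyc((0, 1), n)
for t in ((1, 2), (2, 3)):
    rhs = compose(rhs, cyc(t, n))
assert cyc((0, 1, 2, 3), n) == rhs

# (a b)(c d) = (a d c)(a b c) for distinct a, b, c, d
assert compose(cyc((0, 1), n), cyc((2, 3), n)) == \
       compose(cyc((0, 3, 2), n), cyc((0, 1, 2), n))

# (a b)(b d) = (a b d)
assert compose(cyc((0, 1), n), cyc((1, 2), n)) == cyc((0, 1, 2), n)
```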