Brian Bi

Problem 4.12.1

1. The class equation of $$D_N$$ (we use the convention that $$D_N$$ has $$2N$$ elements) is $2N = \begin{cases} 2 \cdot 1 + \frac{N-2}{2} \cdot 2 + 2 \cdot \frac{N}{2} & \text{if } N \text{ is even} \\ 1 \cdot 1 + \frac{N-1}{2} \cdot 2 + 1 \cdot N & \text{if } N \text{ is odd} \end{cases}$ where $$p \cdot q$$ denotes $$p$$ conjugacy classes of size $$q$$. Therefore, for $$N$$ even there are $$(N+6)/2$$ conjugacy classes in total, and for $$N$$ odd there are $$(N+3)/2$$ conjugacy classes.

The group $$D_N$$ is generated by two elements $$r, s$$ with $$r^N = s^2 = 1$$ and $$sr = r^{-1}s$$. To find one-dimensional representations, we need to find scalars $$R, S$$ that satisfy $$R^N = S^2 = 1, SR = R^{-1}S$$. The last identity implies $$R^2 = 1$$. In the case where $$N$$ is odd, -1 is not an $$N$$th root of unity, so $$R = 1$$ and $$S$$ may be +1 or -1, giving two representations $$\mathbb{C}_+$$ and $$\mathbb{C}_-$$, respectively; the former is the trivial representation while the latter is the sign representation, which maps orientation-preserving isometries to +1 and orientation-reversing isometries to -1. If $$N$$ is even, then both $$R$$ and $$S$$ may take on values +1 and -1 independently, so there are four nonisomorphic one-dimensional representations $$\mathbb{C}_{\pm\pm}$$ where $$\mathbb{C}_{++}$$ is the trivial representation and $$\mathbb{C}_{+-}$$ is the sign representation.

The irreducible two-dimensional representations can also be realized over $$\mathbb{R}$$. Namely, put $\rho_j(s) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \rho_j(r) = \begin{pmatrix} \cos(2\pi j/N) & -\sin(2\pi j/N) \\ \sin(2\pi j/N) & \phantom{-}\cos(2\pi j/N) \end{pmatrix}$ where for even $$N$$, $$j \in \{1, \ldots, N/2 - 1\}$$ and for odd $$N$$, $$j \in \{1, \ldots, (N-1)/2\}$$. That is, $$s$$ acts by reflection across the x-axis while $$r$$ acts as a rotation by $$2\pi j/N$$ around the origin. To see that these representations are irreducible, we simply have to observe that $$\rho_j(s)$$ has two eigenvectors with different eigenvalues, namely $$(1, 0)$$ and $$(0, 1)$$, and neither is an eigenvector of $$\rho_j(r)$$ as long as $$\sin(2\pi j/N) \ne 0$$, which will be true when $$j$$ is within the bounds given above. The eigenvalues of $$\rho_j(r)$$ are $$e^{\pm 2\pi ij/N}$$ so the representations are nonisomorphic for different values of $$j$$.

For odd $$N$$ we therefore have two one-dimensional irreps and $$(N-1)/2$$ two-dimensional irreps; and $$2\cdot 1^2 + \frac{N-1}{2} \cdot 2^2 = 2N$$ so we have found all the finite-dimensional irreps. For even $$N$$, similarly $$4 \cdot 1^2 + \frac{N-2}{2} \cdot 2^2 = 2N$$.
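These counts can be spot-checked numerically. The sketch below (plain Python; the helper names are our own) computes the character of each $$\rho_j$$ directly from the matrices given above and verifies the orthonormality relations that pairwise nonisomorphic irreducible characters must satisfy.

```python
import math

def mm(A, B):  # 2x2 real matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho(N, j):
    t = 2*math.pi*j/N
    R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]  # rotation by t
    S = [[1.0, 0.0], [0.0, -1.0]]                                  # reflection
    return R, S

def char_values(N, j):
    # character of rho_j on the 2N elements r^k and r^k s of D_N
    R, S = rho(N, j)
    vals, M = [], [[1.0, 0.0], [0.0, 1.0]]
    for k in range(N):
        vals.append(M[0][0] + M[1][1])        # chi(r^k) = 2cos(2*pi*j*k/N)
        MS = mm(M, S)
        vals.append(MS[0][0] + MS[1][1])      # chi(r^k s) = 0
        M = mm(M, R)                          # advance to r^(k+1)
    return vals

def inner(N, u, v):  # the characters here are real, so no conjugation needed
    return sum(a*b for a, b in zip(u, v)) / (2*N)
```

For instance, with $$N = 7$$, `inner(7, c1, c1)` evaluates to 1 and `inner(7, c1, c2)` to 0, matching irreducibility and nonisomorphism of $$\rho_1, \rho_2$$.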

2. Consider first the case of $$N$$ odd. Let us write out the character table. (We use $$\mathbb{C}^2_j$$ to denote the two-dimensional irreducible representation $$\rho_j$$ described in part (a).)

$$D_N$$ | $$r^k$$ | $$r^k s$$
$$\mathbb{C}_+$$ | 1 | 1
$$\mathbb{C}_-$$ | 1 | -1
$$\mathbb{C}^2_j$$ | $$2\cos\frac{2\pi jk}{N}$$ | 0

The standard representation $$V$$ is $$\mathbb{C}^2_1$$. We have $$\chi_{V\otimes V} = \chi_V^2$$, taking on the value $$4\cos^2 \frac{2\pi k}{N}$$ for $$r^k$$ and 0 for each $$r^k s$$.

The number of copies of $$\mathbb{C}_+$$ in $$V \otimes V$$ is \begin{align*} \langle \chi_{\mathbb{C}_+}, \chi_{V\otimes V} \rangle &= \frac{1}{2N} \sum_{g \in D_N} \chi_{\mathbb{C}_+}(g) \overline{\chi_{V \otimes V}(g)} \\ &= \frac{1}{2N} \sum_{k=0}^{N-1} 4 \cos^2 \frac{2\pi k}{N} \\ &= \frac{1}{N} \sum_{k=0}^{N-1} (1 + \cos \frac{4\pi k}{N}) \\ &= \frac{1}{N} (N + 0) = 1 \end{align*} and similarly $$V \otimes V$$ contains one copy of $$\mathbb{C}_-$$. The dimension of $$V \otimes V$$ is 4, so it must contain exactly one copy of $$\mathbb{C}^2_j$$ for some $$j$$ as well. We can find it simply by subtracting $$\chi_{\mathbb{C}_+}$$ and $$\chi_{\mathbb{C}_-}$$ from $$\chi_{V \otimes V}$$. Indeed, \begin{align*} \chi_{\mathbb{C}^2_j}(r^k) &= \chi_{V \otimes V}(r^k) - \chi_{\mathbb{C}_+}(r^k) - \chi_{\mathbb{C}_-}(r^k) \\ &= 4 \cos^2 \frac{2\pi k}{N} - 2 \\ &= 2 \cos \frac{4\pi k}{N} \\ &= \chi_{\mathbb{C}^2_2}(r^k) \end{align*} so $$j = 2$$ and the desired decomposition is $V \otimes V \cong \mathbb{C}_+ \oplus \mathbb{C}_- \oplus \mathbb{C}^2_2$ There are two special cases: $$N = 1$$ and $$N = 3$$. For $$N = 1$$ there are no irreducible two-dimensional representations at all, and the sum $$\sum_{k=0}^{N-1} \cos \frac{4\pi k}{N}$$ is equal to 1, not 0, so each of $$\mathbb{C}_+$$ and $$\mathbb{C}_-$$ occurs twice. For $$N = 3$$, there is, strictly speaking, no $$\mathbb{C}^2_2$$, but if this is interpreted in the natural way then it is the same as $$\mathbb{C}^2_1$$ (for $$N = 3$$ only).
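The decomposition just derived can be confirmed numerically from the character formulas alone; the sketch below (our own naming) computes the multiplicity of each irreducible character in $$\chi_V^2$$ for $$N = 7$$.

```python
import math

N = 7  # any odd N >= 5
elems = [("r", k) for k in range(N)] + [("rs", k) for k in range(N)]

def chi_triv(e):
    return 1.0

def chi_sign(e):
    return 1.0 if e[0] == "r" else -1.0

def chi2(j):  # character of the two-dimensional irrep rho_j
    return lambda e: 2*math.cos(2*math.pi*j*e[1]/N) if e[0] == "r" else 0.0

def inner(f, g):  # all characters involved are real-valued
    return sum(f(e)*g(e) for e in elems) / (2*N)

chi_V = chi2(1)                      # the standard representation
chi_VV = lambda e: chi_V(e)**2       # character of V tensor V

mult_triv = round(inner(chi_VV, chi_triv))    # expect 1
mult_sign = round(inner(chi_VV, chi_sign))    # expect 1
mults_2d = [round(inner(chi_VV, chi2(j)))
            for j in range(1, (N - 1)//2 + 1)]  # expect 1 only at j = 2
```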

For $$N$$ even, the character table is:

$$D_N$$ | $$r^k$$ | $$r^k s$$
$$\mathbb{C}_{++}$$ | 1 | 1
$$\mathbb{C}_{+-}$$ | 1 | -1
$$\mathbb{C}_{-+}$$ | $$(-1)^k$$ | $$(-1)^k$$
$$\mathbb{C}_{--}$$ | $$(-1)^k$$ | $$-(-1)^k$$
$$\mathbb{C}^2_j$$ | $$2\cos\frac{2\pi jk}{N}$$ | 0

Note that the rows $$\mathbb{C}_{++}$$ and $$\mathbb{C}_{+-}$$ are the same as $$\mathbb{C}_+$$ and $$\mathbb{C}_-$$ from the odd case, and similarly the row $$\mathbb{C}^2_j$$ is the same in both even and odd cases. Therefore, we still have $\chi_{V \otimes V} = \chi_{\mathbb{C}_{++}} + \chi_{\mathbb{C}_{+-}} + \chi_{\mathbb{C}^2_2}$ so the decomposition is analogous to that of the odd case: $V \otimes V \cong \mathbb{C}_{++} \oplus \mathbb{C}_{+-} \oplus \mathbb{C}^2_2$ Exceptions occur for $$N = 2$$ and $$N = 4$$. For $$N = 2$$, as with the $$N = 1$$ case, the sum $$\sum_{k=0}^{N-1} \cos \frac{4\pi k}{N}$$ does not vanish, but is instead equal to 2; so again each of $$\mathbb{C}_{++}$$ and $$\mathbb{C}_{+-}$$ occurs twice.

For $$N = 4$$, there is no irreducible representation $$\mathbb{C}^2_2$$. Instead, the result of subtracting $$\chi_{\mathbb{C}_{++}}$$ and $$\chi_{\mathbb{C}_{+-}}$$ from $$\chi_{V \otimes V}$$, which, as we found in the odd case, is $\chi(g) = \begin{cases} 2 \cos \frac{4\pi k}{N} & \text{if } g = r^k \\ 0 & \text{otherwise} \end{cases}$ equals $$2(-1)^k$$ on $$r^k$$ and 0 otherwise, which is the sum of the rows for $$\mathbb{C}_{-+}$$ and $$\mathbb{C}_{--}$$ in the character table. So for $$N = 4$$ each of the four one-dimensional representations occurs once in the decomposition of $$V \otimes V$$.
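The exceptional $$N = 4$$ case can be checked numerically as well. (Since $$r$$ acts by $$-1$$ in $$\mathbb{C}_{-+}$$ and $$\mathbb{C}_{--}$$, their characters on $$r^k s$$ are $$(-1)^k$$ and $$-(-1)^k$$ respectively; the naming below is our own.)

```python
import math

N = 4
elems = [("r", k) for k in range(N)] + [("rs", k) for k in range(N)]

def chi_V(e):  # standard representation rho_1
    return 2*math.cos(2*math.pi*e[1]/N) if e[0] == "r" else 0.0

# the four one-dimensional characters: r acts by the first sign, s by the second
one_dim = {
    "++": lambda e: 1.0,
    "+-": lambda e: 1.0 if e[0] == "r" else -1.0,
    "-+": lambda e: (-1.0)**e[1],
    "--": lambda e: (-1.0)**e[1] * (1.0 if e[0] == "r" else -1.0),
}

def inner(f, g):
    return sum(f(e)*g(e) for e in elems) / (2*N)

# each one-dimensional irrep should occur exactly once in V tensor V
mults = {name: round(inner(lambda e: chi_V(e)**2, f))
         for name, f in one_dim.items()}
```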

Problem 4.12.2

1. The matrices $A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$ generate $$G$$.

Left as an exercise for the reader.

Explicitly, the representation is given by $$\left[ \rho\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix}f \right] (x) = z^{cx-b}f(x-a)$$ This is a representation because $\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & a' & b' \\ 0 & 1 & c' \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & a + a' & b + b' + ac' \\ 0 & 1 & c + c' \\ 0 & 0 & 1 \end{pmatrix}$ and \begin{align*} z^{cx - b} (x \mapsto z^{c'x-b'} f(x-a'))(x - a) &= z^{cx-b} z^{c'(x-a)-b'} f(x-a-a') \\ &= z^{(c+c')x-(b+b'+ac')}f(x-(a+a')) \end{align*} so $$\rho(xy) = \rho(x)\rho(y)$$ for all $$x, y \in G$$. By inspection, this representation agrees with the explicit forms of $$\rho(A)$$ and $$\rho(B)$$ given in the text. Uniqueness then follows from the Lemma.

2. For $$z = 1$$, $$\rho(B)$$ reduces to the identity operator, so by the Lemma, any eigenspace of $$\rho(A)$$ is a subrepresentation, for example, the one-dimensional subspace of constant functions; thus $$R_1$$ is not irreducible.

For $$z \ne 1$$, a proof using characters is possible, but as Problem 4.12.9 asks us to compute the characters, we will instead give a direct proof here. The operator $$\rho(A)$$ has eigenvectors $$f_j$$ for $$j = 0, 1, \ldots, p-1$$ given by $$f_j(x) = z^{jx}$$ with eigenvalue $$z^{-j}$$. The space $$V$$ is $$p$$-dimensional and the $$p$$ eigenvalues $$1, z, \ldots, z^{p-1}$$ are distinct, so the eigenvectors $$f_j$$ form a basis of $$V$$. Any nonzero subrepresentation $$W \subseteq V$$ must contain an eigenvector of $$\rho(A)$$, which must be one of the $$f_j$$'s up to an irrelevant scaling factor. However, the subgroup generated by $$\rho(B)$$ acts transitively on this basis, so $$W$$ in fact must contain the entire basis, and therefore be $$V$$ itself.
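The irreducibility claim can be cross-checked by computing $$\langle \chi, \chi \rangle$$ for $$R_z$$ directly. The sketch below (our own naming) builds the $$p \times p$$ matrices of $$\rho$$ for $$p = 5$$ and a primitive $$p$$th root $$z$$, spot-checks the homomorphism property against the group law derived above, and confirms that the character has norm 1.

```python
import cmath
import itertools
import random

p = 5
z = cmath.exp(2j*cmath.pi/p)  # a primitive p-th root of unity (z != 1)

def rho(a, b, c):
    # matrix of [rho(g) f](x) = z^(cx-b) f(x-a) in the basis of delta functions
    M = [[0j]*p for _ in range(p)]
    for x in range(p):
        M[x][(x - a) % p] = z**((c*x - b) % p)
    return M

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(p)) for j in range(p)]
            for i in range(p)]

def heis_mul(g, h):
    # group law computed from the matrix product above
    (a, b, c), (A, B, C) = g, h
    return ((a + A) % p, (b + B + a*C) % p, (c + C) % p)

random.seed(0)
for _ in range(20):  # spot-check rho(gh) = rho(g) rho(h)
    g = tuple(random.randrange(p) for _ in range(3))
    h = tuple(random.randrange(p) for _ in range(3))
    P, Q = mul(rho(*g), rho(*h)), rho(*heis_mul(g, h))
    assert all(abs(P[i][j] - Q[i][j]) < 1e-9 for i in range(p) for j in range(p))

# <chi, chi> = 1 certifies irreducibility
norm = sum(abs(sum(rho(a, b, c)[x][x] for x in range(p)))**2
           for a, b, c in itertools.product(range(p), repeat=3)) / p**3
```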

3. Since $$A^p = B^p = I_3$$, and $$A$$ and $$B$$ generate $$G$$, there are at most $$p^2$$ distinct one-dimensional representations, in which $$A$$ and $$B$$ are chosen to act as some $$p$$th roots of unity. In fact, for each $$(i, j) \in \{0, 1, \ldots, p-1\}^2$$, we have a valid representation $\rho\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} = \omega^{ai + cj}$ where $$\omega$$ is a primitive $$p$$th root of unity. We will denote this representation by $$\mathbb{C}_{ij}$$. That these are valid representations follows from the fact that multiplication in the group $$G$$ is additive in $$a$$ and $$c$$. Since $$A$$ acts as $$\omega^i$$ and $$B$$ acts as $$\omega^j$$ in $$\mathbb{C}_{ij}$$, the $$p^2$$ representations of the form $$\mathbb{C}_{ij}$$ are pairwise nonisomorphic.

Let $$f_k \in R_1$$ be given by $$f_k(x) = \omega^{kx}$$ where $$\omega$$ is a primitive $$p$$th root of unity. Then $$f_k$$ has eigenvalue $$\omega^{-k}$$ with respect to $$\rho(A)$$, so $$\rho(A)$$ has $$p$$ distinct eigenvalues; thus $$R_1$$ is a direct sum (as vector spaces) of the $$p$$ one-dimensional eigenspaces generated by the $$f_k$$'s. Since $$\rho(B)$$ is the identity on $$R_1$$, these one-dimensional eigenspaces are also subrepresentations. On the subspace generated by $$f_k$$, since the eigenvalue of $$\rho(A)$$ is $$\omega^{-k}$$ and that of $$\rho(B)$$ is 1, we must have that in this subrepresentation, $$\omega^{ai + cj} = \omega^{-k}$$ when $$a = 1, c = 0$$ and $$\omega^{ai + cj} = 1$$ when $$a = 0, c = 1$$, which implies $$i = -k, j = 0$$. Therefore $$R_1 \cong \bigoplus_{k=0}^{p-1} \mathbb{C}_{-k,0}$$.

4. We have found $$p^2$$ irreducible representations of dimension 1, and $$p-1$$ irreducible representations of dimension $$p$$. But $$p^3 = p^2 \cdot 1^2 + (p-1) \cdot p^2$$, so these are all the irreducible representations of $$G$$.

Problem 4.12.3 Choose a basis $$\{e_1, \ldots, e_n\}$$ of $$V$$. A basis of $$\Lambda^m V$$ is given by the set of elements of the form $$e_{i_1} \wedge \ldots \wedge e_{i_m}$$ with $$1 \le i_1 < \ldots < i_m \le n$$. Denote this basis by $$B$$.

Let $$p_1, \ldots, p_n$$ be distinct primes. Let $$H \in GL(V)$$ be the operator such that $$H(e_i) = p_i e_i$$ for each $$i$$. The basis element $$e_{i_1} \wedge \ldots \wedge e_{i_m} \in \Lambda^m V$$ is then an eigenvector of $$H$$ with eigenvalue $$\prod_{j=1}^m p_{i_j}$$. By the unique factorization of integers, these eigenvalues are all distinct, so $$\Lambda^m V$$, as a vector space, is a direct sum of one-dimensional eigenspaces, each generated by one of the elements of $$B$$.

Let $$W \subseteq \Lambda^m V$$ be a nonzero subrepresentation. Then $$W$$ contains an eigenvector of $$\rho(H)$$, which must be (up to scaling) an element of $$B$$. Say $$w \in B \cap W$$ and write $$w = e_{i_1} \wedge \ldots \wedge e_{i_m}$$. Let $$w' \in B$$ and write $$w' = e_{j_1} \wedge \ldots \wedge e_{j_m}$$. There exists a permutation $$\sigma$$ of $$\{e_1, \ldots, e_n\}$$ such that $$\sigma(e_{i_k}) = e_{j_k}$$ for each $$k \in \{1, \ldots, m\}$$. The permutation $$\sigma$$ can be extended to a linear operator on $$V$$, which is an isomorphism since it maps a basis to a basis. Then $$\rho(\sigma)(w) = w'$$. Since $$w \in W$$, it follows that all $$w' \in B$$ belong to $$W$$, so $$W$$ is the entire space $$\Lambda^m V$$. We conclude that $$\Lambda^m V$$ is irreducible.

The symmetric algebra $$SV$$ is isomorphic to $$\mathbb{C}[x_1, \ldots, x_n]$$ where $$e_i$$ corresponds to $$x_i$$, so we will just use juxtaposition to denote multiplication in $$SV$$. A basis $$B'$$ for the grade-$$m$$ subspace, that is, $$S^m V$$, is given by elements of the form $$e_{i_1} \cdot \ldots \cdot e_{i_m}$$ where $$1 \le i_1 \le \ldots \le i_m \le n$$. The operator $$H$$ previously described acts on $$S^m V$$ as a diagonal matrix with distinct eigenvalues, so $$S^m V$$ is a direct sum of one-dimensional eigenspaces, each generated by one element of $$B'$$. So far this is essentially the same as what happens with $$\Lambda^m V$$. Let $$W$$ be a nonzero subrepresentation of $$S^m V$$; it must then contain some $$w \in B'$$ and we want to show that $$B' \subseteq W$$.

In the algebra $$\mathbb{C}[x, y]$$, the following holds for all positive integers $$n$$: $$n(x^n + y^n) = \sum_{i=0}^{n-1} (x + \omega^i y)^n$$ where $$\omega$$ is a primitive $$n$$th root of unity.

This will be true if all the mixed terms on the RHS cancel. The coefficient of $$x^i y^{n-i}$$ there is $$\binom{n}{i} \sum_{j=0}^{n-1} \omega^{j(n-i)}$$, which vanishes when $$0 < i < n$$.

Corollary: $$y^n$$ is a linear combination of $$\{x^n\} \cup \{(x + \omega^i y)^n \mid i \in \{0, 1, \ldots, n-1\}\}$$.
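The Lemma is easy to sanity-check numerically at random complex arguments (the helper name is our own):

```python
import cmath
import random

def lemma_gap(n, x, y):
    # |n(x^n + y^n) - sum_i (x + w^i y)^n| for w a primitive n-th root of unity
    w = cmath.exp(2j*cmath.pi/n)
    lhs = n*(x**n + y**n)
    rhs = sum((x + w**i*y)**n for i in range(n))
    return abs(lhs - rhs)

random.seed(1)
gaps = [lemma_gap(n,
                  complex(random.uniform(-1, 1), random.uniform(-1, 1)),
                  complex(random.uniform(-1, 1), random.uniform(-1, 1)))
        for n in range(1, 9)]
```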

We now return to the problem. We will first show that $$W$$ contains a basis element of the form $$e_i^m$$. Write $$w = e_{i_1}^{p_1} \cdot \ldots \cdot e_{i_q}^{p_q}$$ where $$i_1 < \ldots < i_q$$ and $$p_1, \ldots, p_q \ge 1$$. If $$q = 1$$, we are done. Otherwise, for each $$j$$, let $$T_j \in \End V$$ be the unique linear operator that maps $$e_{i_1}$$ to $$e_{i_1} + \omega^j e_{i_2}$$ where $$\omega$$ is a primitive root of unity of degree $$p_1$$, and is the identity on all other basis elements of $$V$$. Then each $$T_j$$ is invertible and so belongs to $$GL(V)$$. Then $$T_j w = (e_{i_1} + \omega^j e_{i_2})^{p_1} e_{i_2}^{p_2} \ldots e_{i_q}^{p_q}$$. Using the Corollary above, we see that $$w$$ together with the elements $$T_j w$$ for $$j \in \{0, 1, \ldots, p_1-1\}$$ may be combined linearly to produce $$e_{i_2}^{p_1} e_{i_2}^{p_2} \ldots e_{i_q}^{p_q}$$, or in other words $$e_{i_2}^{p_1 + p_2} e_{i_3}^{p_3} \ldots e_{i_q}^{p_q}$$. Iterating this process eventually gives a power of $$e_{i_q}$$. So $$e_{i_q}^m \in W$$.

Having found that $$W$$ contains $$e_{i_q}^m$$, it is easy to see that $$W$$ also contains $$v^m$$ for all $$v \in V$$. To show that $$W = S^m V$$, we need to reverse this process, showing that $$W$$ contains all mixed basis elements as well. The following Lemma allows us to do so:

In the algebra $$\mathbb{C}[x, y]$$, the following holds for all positive integers $$n, j$$ with $$0 < j < n$$: $$n\binom{n}{j} x^j y^{n-j} = \sum_{i=0}^{n-1} \omega^{ij} (x + \omega^i y)^n$$ where $$\omega$$ is a primitive $$n$$th root of unity.

The coefficient of $$x^k y^{n-k}$$ on the RHS of the identity is $$\binom{n}{k} \sum_{i=0}^{n-1} \omega^{ij} \omega^{i(n-k)} = \binom{n}{k} \sum_{i=0}^{n-1} \omega^{i(j-k)}$$. For $$k = j$$ this is just $$\binom{n}{j} \sum_{i=0}^{n-1} 1 = n\binom{n}{j}$$. For $$k \ne j$$, we have $$\sum_{i=0}^{n-1} \omega^{i(j-k)} = 0$$.
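This identity, too, can be sanity-checked numerically (the helper name is our own):

```python
import cmath
import math
import random

def lemma2_gap(n, j, x, y):
    # |n*binom(n,j)*x^j*y^(n-j) - sum_i w^(ij) (x + w^i y)^n|
    w = cmath.exp(2j*cmath.pi/n)   # primitive n-th root of unity
    lhs = n*math.comb(n, j)*x**j*y**(n - j)
    rhs = sum(w**(i*j)*(x + w**i*y)**n for i in range(n))
    return abs(lhs - rhs)

random.seed(2)
gaps = [lemma2_gap(n, j,
                   complex(random.uniform(-1, 1), random.uniform(-1, 1)),
                   complex(random.uniform(-1, 1), random.uniform(-1, 1)))
        for n in range(2, 9) for j in range(1, n)]
```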

Suppose $$w = e_{i_1}^{p_1} \cdot \ldots \cdot e_{i_q}^{p_q}$$. If $$q = 1$$ then we already know $$w \in W$$. Otherwise, since $$W$$ contains $$(e_{i_1} + \omega^j e_{i_2})^m$$ for each $$j \in \{0, 1, \ldots, m-1\}$$ where $$\omega$$ is a primitive $$m$$th root of unity, the Lemma shows that $$W$$ contains $$e_{i_1}^{m-p_2} e_{i_2}^{p_2}$$. If $$q = 2$$ we are done. Otherwise there are invertible linear operators $$T_j$$ that map $$e_{i_1}$$ to $$e_{i_1} + \omega^j e_{i_3}$$ with $$\omega$$ a primitive root of unity of degree $$m-p_2$$, so $$W$$ contains the $$m-p_2$$ elements of the form $$(e_{i_1} + \omega^j e_{i_3})^{m-p_2} e_{i_2}^{p_2}$$, and applying the Lemma again shows that $$e_{i_1}^{m-p_2-p_3} e_{i_2}^{p_2} e_{i_3}^{p_3}$$ belongs to $$W$$. Iterating this process, we eventually arrive at $$w \in W$$. This completes the proof that $$S^m V$$ is irreducible.

Problem 4.12.4 Let $$n$$ be the number of vertices in $$\Gamma$$. The automorphism group of $$\Gamma$$ is isomorphic to the group $$G$$ of permutation matrices on $$\mathbb{C}^n$$ that commute with the adjacency matrix $$A$$ of $$\Gamma$$. Suppose $$A$$ has $$n$$ distinct eigenvalues. Let $$Q$$ be a matrix such that $$QAQ^{-1}$$ is diagonal. Suppose $$P \in G$$, so $$P$$ commutes with $$A$$. It follows that $$QPQ^{-1}$$ commutes with $$QAQ^{-1}$$. But since the latter matrix is diagonal with distinct entries, the former matrix must also be diagonal. Since diagonal matrices commute, it follows that elements of the subset $$QGQ^{-1}$$ all commute with each other; but this is just a conjugate subgroup to $$G$$ in $$GL_n(\mathbb{C})$$, so in fact $$G$$ is also abelian.
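As a concrete illustration (our own example, not part of the problem), the path graph on 4 vertices has the four distinct adjacency eigenvalues $$2\cos(k\pi/5)$$, $$k = 1, \ldots, 4$$, and a brute-force enumeration confirms that its automorphism group is abelian:

```python
from itertools import permutations

# path graph 0-1-2-3; its adjacency eigenvalues 2*cos(k*pi/5) are distinct
n = 4
edges = {(0, 1), (1, 2), (2, 3)}
adj = lambda u, v: (u, v) in edges or (v, u) in edges

# all vertex permutations preserving adjacency
autos = [s for s in permutations(range(n))
         if all(adj(u, v) == adj(s[u], s[v])
                for u in range(n) for v in range(n))]

def compose(s, t):
    return tuple(s[t[i]] for i in range(n))

abelian = all(compose(s, t) == compose(t, s) for s in autos for t in autos)
```

Here the automorphism group is just the order-2 group generated by reversing the path.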

Problem 4.12.5

1. Suppose $$g \in A_5$$ splits the 12 vertices of the icosahedron into orbits $$O_1 \cup \ldots \cup O_n$$. To each orbit $$O_i$$, assign the subspace $$U_i \subseteq F(I)$$ consisting of functions that vanish on all vertices outside $$O_i$$. Evidently $$F(I) \cong U_1 \oplus \ldots \oplus U_n$$ as vector spaces. Now $$U_i$$ has a basis of eigenvectors of $$g$$. Explicitly, let $$v \in O_i$$ so that $$O_i = \{v, gv, \ldots, g^{|O_i|-1}v\}$$, and for each $$j \in \{0, 1, \ldots, |O_i|-1\}$$, assign the function $$f_{ij}$$ such that $$f_{ij}(g^k v) = \omega^{jk}$$ where $$\omega$$ is a primitive $$|O_i|$$th root of unity, and $$f_{ij}$$ vanishes on all vertices that are not in $$O_i$$. The function $$f_{ij}$$ is an eigenvector of $$g$$ with eigenvalue $$\omega^{-j}$$, so the $$f_{ij}$$'s for given $$i$$ have distinct eigenvalues and therefore form a basis of $$U_i$$. The collection of $$f_{ij}$$'s for all $$i, j$$ is therefore an eigenbasis of $$g$$ for $$F(I)$$.

The eigenbasis obtained in this way may be different for different elements of $$A_5$$, but this does not matter because for each $$g \in A_5$$, we can compute $$\chi(g)$$ in the eigenbasis of $$g$$, in which $$\rho(g)$$ is a diagonal matrix. Now $$\chi(g) = \tr\rho(g) = \sum_{j=1}^n \sum_{k=0}^{|O_j|-1} \exp(2\pi ik/|O_j|)$$ where $$n$$ is the number of orbits (and here $$i = \sqrt{-1}$$). The inner sum vanishes whenever $$|O_j| > 1$$, so $$\chi(g)$$ is just the number of vertices fixed by $$g$$. Using the identification of conjugacy classes of $$A_5$$ with the rotations they induce given in section 4.8, we obtain the character of $$F(I)$$ as follows:

$$A_5$$ | $$\Id$$ | $$(123)$$ | $$(12)(34)$$ | $$(12345)$$ | $$(13245)$$
# | 1 | 20 | 15 | 12 | 12
$$F(I)$$ | 12 | 0 | 0 | 2 | 2

It is evident that $$F(I)$$ contains a copy of the representation $$\mathbb{C}^5$$ described in section 4.8, as well as a copy of the trivial representation (the subspace of constant functions). Subtracting the characters of these two representations from that of $$F(I)$$ yields $$(6,0,-2,1,1)$$. By inspection, this is the sum of the rows for $$\mathbb{C}^3_+$$ and $$\mathbb{C}^3_-$$, so the desired decomposition is $F(I) \cong \mathbb{C} \oplus \mathbb{C}^3_+ \oplus \mathbb{C}^3_- \oplus \mathbb{C}^5$

2. The icosahedron has 20 faces. The elements of order 3 in $$A_5$$ each fix a pair of opposite faces while the elements of order 2 and 5 leave no face fixed. The icosahedron has 30 edges, and the elements of order 2 in $$A_5$$ each fix a pair of opposite edges while the elements of order 3 and 5 leave no edge fixed. The characters of the representation $$V_f$$ of functions on the set of faces and $$V_e$$ of functions on the set of edges are therefore

$$A_5$$ | $$\Id$$ | $$(123)$$ | $$(12)(34)$$ | $$(12345)$$ | $$(13245)$$
# | 1 | 20 | 15 | 12 | 12
$$V_f$$ | 20 | 2 | 0 | 0 | 0
$$V_e$$ | 30 | 0 | 2 | 0 | 0

Using Theorem 4.5.1 and the character table given in section 4.8, we obtain \begin{align*} V_f &\cong \mathbb{C} \oplus \mathbb{C}^3_+ \oplus \mathbb{C}^3_- \oplus (\mathbb{C}^4)^2 \oplus \mathbb{C}^5 \\ V_e &\cong \mathbb{C} \oplus \mathbb{C}^3_+ \oplus \mathbb{C}^3_- \oplus (\mathbb{C}^4)^2 \oplus (\mathbb{C}^5)^3 \end{align*}
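The multiplicity computations in both parts of this problem can be verified numerically from the $$A_5$$ character table of section 4.8. In the sketch below (names ours), `phi` denotes the golden ratio $$(1+\sqrt 5)/2$$, which appears in the characters of $$\mathbb{C}^3_\pm$$.

```python
import math

phi = (1 + math.sqrt(5)) / 2
sizes = [1, 20, 15, 12, 12]  # class sizes: Id, (123), (12)(34), (12345), (13245)
irreps = {
    "C1":  [1, 1, 1, 1, 1],
    "C3+": [3, 0, -1, phi, 1 - phi],
    "C3-": [3, 0, -1, 1 - phi, phi],
    "C4":  [4, 1, 0, -1, -1],
    "C5":  [5, -1, 1, 0, 0],
}

def mults(chi):  # multiplicities of each irrep (all characters here are real)
    return {name: round(sum(n*a*b for n, a, b in zip(sizes, chi, psi)) / 60)
            for name, psi in irreps.items()}

m_FI = mults([12, 0, 0, 2, 2])   # functions on vertices
m_Vf = mults([20, 2, 0, 0, 0])   # functions on faces
m_Ve = mults([30, 0, 2, 0, 0])   # functions on edges
```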

Problem 4.12.6 For brevity, we will denote the element $$x \mapsto ax + b$$ of $$G$$ by $$(a, b)$$.

It is well-known that the multiplicative group of a finite field is cyclic, so let $$g \in \mathbb{F}_q^\times$$ be a generator and let $$\omega = e^{2\pi i/(q-1)}$$; then, for each $$j \in \{0, 1, \ldots, q-2\}$$ there is a one-dimensional representation $$\mathbb{C}_j$$ in which $$(g, b)$$ acts as $$\omega^j$$ for each $$b \in \mathbb{F}_q$$, and hence $$(a, b)$$ acts as $$\omega^{je}$$ where $$a = g^e$$. These $$q-1$$ one-dimensional representations are pairwise nonisomorphic.

Let $$V'$$ be the $$q$$-dimensional representation consisting of functions $$f : \mathbb{F}_q \to \mathbb{C}$$, with $$\rho'(a,b)f(x) = f(ax+b)$$. As we did in problem 4.12.5, we observe that the trace of $$\rho'(g)$$ equals the number of fixed points of the action of $$g$$ on the domain of $$f$$. Now $$(1, 0)$$ fixes $$q$$ elements, $$(1, b)$$ fixes 0 elements for $$b \ne 0$$, and $$(a, b)$$ fixes 1 element when $$a \ne 1$$. The representation $$V'$$ is not irreducible; it contains a copy of the trivial representation (as the subspace of $$V'$$ consisting of the constant functions). We may form the quotient representation $$V = V'/\mathbb{C}$$ with character given by $$\chi_V = \chi_{V'} - \chi_{\mathbb{C}}$$. That is, $$\chi_V(a, b) = \begin{cases} q - 1 & \text{if } a = 1 \text{ and } b = 0 \\ -1 & \text{if } a = 1 \text{ and } b \ne 0 \\ 0 & \text{otherwise} \end{cases}$$ Theorem 4.5.1 shows that $$V$$ is irreducible.

By the sum of squares formula, $$V$$ together with the one-dimensional representations $$\mathbb{C}_j$$ constitute all the irreducible representations of $$G$$.

The character of the one-dimensional representation $$\mathbb{C}_j$$ is just the map from each element of $$G$$ to the scalar by which it acts in $$\mathbb{C}_j$$. If $$j, k$$ are given, the representation $$\mathbb{C}_j \otimes \mathbb{C}_k$$ is also one-dimensional so it must be isomorphic to $$\mathbb{C}_m$$ for some $$m$$; namely the one for which $$\chi_{\mathbb{C}_m} = \chi_{\mathbb{C}_j}\chi_{\mathbb{C}_k}$$. Since $$g$$ acts by $$\omega^j$$ and $$\omega^k$$ in $$\mathbb{C}_j$$ and $$\mathbb{C}_k$$, respectively, it follows that $\mathbb{C}_j \otimes \mathbb{C}_k \cong \mathbb{C}_{j+k}$ with the subscript taken mod $$q-1$$.

If we take any tensor product $$\mathbb{C}_j \otimes V$$, we find that the character is the same as that of $$V$$ since $$\chi_{\mathbb{C}_j}(a, b) = 1$$ whenever $$a = 1$$ (that is, whenever $$\chi_V$$ is nonzero). Therefore $\mathbb{C}_j \otimes V \cong V$.

Finally, for $$V \otimes V$$ the character is $\chi_{V\otimes V}(a, b) = \begin{cases} (q - 1)^2 & \text{if } a = 1 \text{ and } b = 0 \\ 1 & \text{if } a = 1 \text{ and } b \ne 0 \\ 0 & \text{otherwise} \end{cases}$ so $$\langle \chi_{V\otimes V}, \chi_V \rangle = \frac{1}{q(q-1)}(1 \cdot (q-1)^3 + (q-1)\cdot(-1)) = q - 2$$ and, for each $$j$$, $$\langle \chi_{V\otimes V}, \chi_{\mathbb{C}_j} \rangle = \frac{1}{q(q-1)}(1\cdot (q-1)^2 + (q-1)\cdot(1)) = 1$$. Thus $V \otimes V \cong V^{q-2} \oplus \bigoplus_{j=0}^{q-2} \mathbb{C}_j$
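These inner products can be checked numerically for a concrete value of $$q$$ (here $$q = 7$$; the names are our own):

```python
q = 7                      # a prime, for concreteness
order = q*(q - 1)          # |G|
elems = [(a, b) for a in range(1, q) for b in range(q)]

def chi_V(a, b):           # character of the (q-1)-dimensional irrep V
    if a != 1:
        return 0
    return q - 1 if b == 0 else -1

chi_VV = {e: chi_V(*e)**2 for e in elems}

# multiplicity of V in V tensor V: expect q - 2
mult_V = sum(chi_VV[e]*chi_V(*e) for e in elems) / order

# multiplicity of each C_j: chi_j has modulus 1 everywhere and equals 1
# whenever a = 1, which is the only place chi_VV is nonzero; expect 1
mult_C = sum(chi_VV[(1, b)] for b in range(q)) / order
```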

Problem 4.12.7

1. For each $$(a, b) \in \mathbb{C}^2$$ with $$|a|^2 + |b|^2 = 1$$ we have $\begin{pmatrix} a & -\overline{b} \\ b & \overline{a} \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$ where the matrix on the left is an element of $$SU(2)$$. The inverse of this matrix is also in $$SU(2)$$, and multiplying it by $$(a, b)$$ yields back $$(1, 0)$$. Since all nonzero elements of $$\mathbb{C}^2$$ can be rescaled by a real number to give a vector $$v \in \mathbb{C}^2$$ with $$|v_1|^2 + |v_2|^2 = 1$$, this establishes that every nonzero vector in $$\mathbb{C}^2$$ (regarded as a real representation) is cyclic, so this representation is irreducible.

2. It is obvious that $$\mathbb{H}$$ is closed under multiplication. By Schur's lemma, every nonzero element of $$\mathbb{H}$$ is an isomorphism, so $$\mathbb{H}$$ is a division algebra. One way to find an explicit description of $$\mathbb{H}$$ is to write out the $$4 \times 4$$ matrices of elements of $$SU(2)$$ (regarded as real endomorphisms) and use a computer algebra system to find the conditions for matrices to commute with all the matrices in $$SU(2)$$. We will give a slightly different approach that avoids $$4 \times 4$$ matrices. This is based on the following result:

Let $$C : V \to V$$ act as complex conjugation in the standard basis, $$C(a, b) = (\overline{a}, \overline{b})$$. Then,

1. For all $$2 \times 2$$ complex matrices $$P$$, we have $$C \circ P = \overline{P} \circ C$$, where $$\overline{P}$$ denotes the matrix obtained by taking the complex conjugate of each entry of $$P$$.
2. Let $$A$$ be an $$\mathbb{R}$$-linear endomorphism on $$V$$, regarded only as a vector space. Then, there exists a unique pair of $$\mathbb{C}$$-linear endomorphisms $$(B, B')$$ such that $$A = B + B'C$$.

Part 1 is verified by a straightforward calculation. For part 2, let $$B = \frac{1}{2}A - \frac{i}{2}Ai, \qquad B' = \frac{1}{2}AC - \frac{i}{2}ACi$$ where $$i$$ represents the operator $$i I_2$$. Direct calculation shows that the equation $$A = B + B'C$$ is satisfied by these choices of $$B$$ and $$B'$$ (note that $$Ci = -iC$$). It is clear that $$B$$ and $$B'$$ are $$\mathbb{R}$$-linear; direct calculation shows that $$B(iv) = i(Bv)$$ and $$B'(iv) = i(B'v)$$ for all $$v \in V$$, so $$B$$ and $$B'$$ are also $$\mathbb{C}$$-linear. For uniqueness, suppose $$A = B + B'C$$ with $$B$$ and $$B'$$ both $$\mathbb{C}$$-linear; substituting this expression into the right-hand sides of the formulas above and using the $$\mathbb{C}$$-linearity of $$B$$ and $$B'$$ (together with $$Ci = -iC$$) shows that the formulas recover this same pair $$(B, B')$$.

We will not need to use these formulas explicitly. The characterization of $$\mathbb{R}$$-linear endomorphisms on $$V$$ allows us to determine the centralizer of $$SU(2)$$ by working with $$2\times 2$$ matrices instead of $$4\times 4$$ matrices. Indeed, suppose $$A \in SU(2)$$ and $$P \in \End_{\mathbb{R}}(V)$$; write $$P = Q + Q'C$$ whereupon: \begin{align*} AP &= PA \\ A(Q + Q'C) &= (Q + Q'C)A \\ AQ + AQ'C &= QA + Q'\overline{A}C \end{align*} and by uniqueness, a necessary condition for $$AP = PA$$ to hold is that $$AQ = QA$$ and $$AQ' = Q'\overline{A}$$. By reversing the steps, we see that this condition is also sufficient. Since $$V$$ is irreducible as a complex representation and $$Q$$ must commute with every $$A \in SU(2)$$, Schur's lemma implies that $$Q$$ must be a multiple of the identity matrix. Using the following two elements of $$SU(2)$$ $\begin{pmatrix} i & 0 \\ 0 & -i\end{pmatrix} \qquad \begin{pmatrix}0 & i \\ i & 0\end{pmatrix}$ we find that $$Q'$$ must take the form $$\begin{pmatrix} 0 & z \\ -z & 0 \end{pmatrix}$$, and using the fact that all elements of $$SU(2)$$ are of the form $$\begin{pmatrix}a & -\overline{b} \\ b & \overline{a} \end{pmatrix}$$, we can verify that this condition on $$Q'$$ is also sufficient.

A basis of $$\mathbb{H}$$ is therefore given by $P_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad P_i = \begin{pmatrix} i & 0 \\ 0 & i \end{pmatrix}, \quad P_j = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}C, \quad P_k = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}C$

3. If we identify the symbols $$1, i, j, k$$ with $$P_1, P_i, P_j, P_k$$, respectively, then the required properties follow from a direct calculation, which is simplified by the use of part 1 of the Lemma.
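The direct calculation can be mechanized. In the sketch below (encoding and names our own), an $$\mathbb{R}$$-linear operator is stored as a pair $$(Q, Q')$$ standing for $$Q + Q'C$$, and composition is derived from $$CM = \overline{M}C$$ (part 1 of the Lemma) and $$C^2 = \mathrm{id}$$.

```python
def mm(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cj(A):     # entrywise complex conjugate
    return [[A[i][j].conjugate() for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def compose(P1, P2):
    (Q1, R1), (Q2, R2) = P1, P2
    # (Q1 + R1 C)(Q2 + R2 C) = (Q1 Q2 + R1 conj(R2)) + (Q1 R2 + R1 conj(Q2)) C
    return (add(mm(Q1, Q2), mm(R1, cj(R2))),
            add(mm(Q1, R2), mm(R1, cj(Q2))))

Z = [[0j, 0j], [0j, 0j]]
P1 = ([[1+0j, 0j], [0j, 1+0j]], Z)
Pi = ([[1j, 0j], [0j, 1j]], Z)
Pj = (Z, [[0j, 1+0j], [-1+0j, 0j]])
Pk = (Z, [[0j, 1j], [-1j, 0j]])
minus1 = ([[-1+0j, 0j], [0j, -1+0j]], Z)
```

One then checks $$i^2 = j^2 = k^2 = -1$$, $$ij = k$$, $$jk = i$$, $$ki = j$$, and similarly $$ij = -ji$$.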

4. These identities follow from direct calculation using the properties of $$i, j, k$$ given in part (c).

5. This is a well-known isomorphism; we omit the explicit description here.

6. It is not obvious that $$qxq^{-1} \in V$$ for all $$x \in V$$, so let us show this first. For a unit quaternion $$q$$ we have $$q\overline{q} = 1$$, so $$q^{-1} = \overline{q}$$. Therefore $\Re(qxq^{-1}) = \frac{1}{2}(qxq^{-1} + \overline{qxq^{-1}}) = \frac{1}{2}(qxq^{-1} + \overline{q^{-1}}\overline{x}\overline{q}) = \frac{1}{2}(qxq^{-1} + q\overline{x}q^{-1}) = q\frac{x+\overline{x}}{2}q^{-1} = q\Re(x)q^{-1} = \Re(x)$ Since $$\Re(x) = 0$$ for $$x \in V$$, it follows that $$qxq^{-1} \in V$$.

The norm preserved by conjugation is the standard Euclidean norm on $$V$$ with respect to the basis $$i, j, k$$. This induces the standard inner product on $$V$$, defined by $$\langle x, y \rangle = \frac{1}{2}(\|x+y\|^2 - \|x\|^2-\|y\|^2) = \frac{1}{2}(x\overline{y}+y\overline{x}) = -\frac{1}{2}(xy+yx)$$, where the last equality uses $$\overline{x} = -x$$ for $$x \in V$$.

Let $$\rho : G \to O(3)$$ be the map such that $$\rho(q)(x) = qxq^{-1}$$ for each $$q \in G, x \in V$$; $$\rho$$ is a homomorphism. If $$\varphi : SU(2) \to G$$ is an isomorphism, then $$h = \rho \circ \varphi$$ is a homomorphism from $$SU(2)$$ to $$O(3)$$, and $$\im h = \im \rho$$. Now $$G$$ is connected and $$\rho$$ is continuous, so $$\im \rho$$ must be connected, so $$\im \rho$$ is contained within the identity component of $$O(3)$$, which is $$SO(3)$$.

From this point onward, we regard $$\rho$$ and $$h$$ as having codomain $$SO(3)$$. To show that $$h$$ is surjective, it suffices to show that $$\rho$$ is surjective. For this, it suffices to show that $$\im \rho$$ contains the stabilizer $$SO(3)^v$$ of every unit vector $$v \in V$$, since every element of $$SO(3)$$ fixes some unit vector. Let $$v \in V$$ be a given unit vector. Define $$f : \mathbb{R} \to G$$ by $$f(\theta) = \cos\theta + v \sin\theta$$. Observe that $$f(\theta)$$ commutes with $$v$$ for all $$\theta$$ and thus $$\rho(f(\theta))$$ fixes $$v$$ for all $$\theta$$. The maps $$f$$ and $$\rho$$ are both continuous, so the image of $$\rho \circ f$$ is a connected subset of $$SO(3)^v$$. Obviously $$\rho(f(0))$$ is the identity, and if $$w \in V$$ is a unit vector that satisfies $$\langle v, w \rangle = 0$$, then we have $$vw + wv = 0$$, or $$vw = -wv$$. Therefore $$vwv^{-1} = -wvv^{-1} = -w$$. Now the map $$x \mapsto vxv^{-1}$$ is $$\rho(f(\pi/2))$$ and geometrically corresponds to a 180-degree rotation, as only 180-degree rotations send a nonzero vector to its additive inverse. Thus the image of $$\rho \circ f$$ contains both the 0 degree rotation and the 180 degree rotation. But the image of $$\rho\circ f$$ is connected, so it must contain a counterclockwise rotation of each amount between 0 and 180 degrees, or a counterclockwise rotation of each amount between 180 and 360 degrees. In either case, since $$\rho(f(-\theta)) = \rho(f(\theta))^{-1}$$, the image of $$\rho\circ f$$ must in fact contain all rotations that fix $$v$$, and we can conclude that $$\rho$$ and $$h$$ are indeed surjective.

The kernel of the map $$\rho$$ consists of those elements of $$G$$ (unit quaternions) that commute with all elements of $$V$$ (purely imaginary quaternions). The property of commuting with all purely imaginary quaternions is equivalent to commuting with all quaternions, which in turn is equivalent to commuting with all unit quaternions. Therefore $$\ker \rho$$ is the centre of the group $$G$$, which implies that $$\ker h$$ is the centre of the group $$SU(2)$$ since $$SU(2) \cong G$$. We determined in part (b) that the only matrices that commute with all matrices in $$SU(2)$$ are the scalar matrices, of which only $$\pm I$$ belong to $$SU(2)$$. Therefore $$\ker h = \{I, -I\}$$.

Problem 4.12.8

1. A proof along the lines given in the Hint can be found in Artin's Algebra (section 6.12 in the second edition). This is a standard result and we will not elaborate further here.

2. Let $$h : SU(2) \to SO(3)$$ denote the surjective homomorphism described in Problem 4.12.7(f).

The only element of $$SU(2)$$ that has order 2 is $$-I$$.

Exercise.

Corollary: If $$x \in SU(2)$$ and the order of $$h(x)$$ is an even integer $$n$$, then the order of $$x$$ is $$2n$$.

Now, there are potentially two different types of finite subgroups $$G \subseteq SU(2)$$: those that don't contain $$-I$$, and are therefore isomorphic to their images in $$SO(3)$$, and those that do contain $$-I$$. We consider the former case first.

If $$G$$ doesn't contain $$-I$$, the Lemma implies that $$G$$ doesn't contain any element of order 2, therefore neither does $$h(G)$$. Therefore $$h(G)$$ has odd order. Using the classification of part (a), the only possibility is that $$h(G)$$ (and therefore $$G$$) is cyclic. $$SU(2)$$ does, in fact, contain a cyclic subgroup of order $$n$$ for all $$n$$ (and therefore all odd $$n$$); it can for instance be generated by the element $\begin{pmatrix} e^{2\pi i/n} & 0 \\ 0 & e^{-2\pi i/n} \end{pmatrix}$

We now consider the case $$-I \in G$$. Here we have $$G = h^{-1}(h(G))$$. By considering all possible finite subgroups of $$SO(3)$$ and examining their preimages, we can classify all finite subgroups of $$SU(2)$$ that contain $$-I$$. First, we should ask the question of whether two isomorphic finite subgroups of $$SO(3)$$ have isomorphic preimages in $$SU(2)$$; for example, when considering the preimage of $$A_4$$, does it matter which of the infinitely many copies of $$A_4$$ in $$SO(3)$$ we consider? The answer is no, because isomorphic subgroups of $$SO(3)$$ are conjugate to each other (we will not give a proof of this fact here) and therefore their preimages are also conjugate (and hence isomorphic). Since the image of a cyclic subgroup of $$SU(2)$$ of order $$2n$$ is a cyclic subgroup of $$SO(3)$$ of order $$n$$, it follows that the preimage of every cyclic subgroup of $$SO(3)$$ is a cyclic subgroup of $$SU(2)$$. The structures of the other finite subgroups of $$SU(2)$$, which are the preimages of a dihedral group or one of the groups $$A_4, S_4, A_5$$, are more complicated, but we will rely on our result for cyclic groups in what follows.

The dihedral group $$D_n$$ with $$2n$$ elements has the presentation $$\langle x, y \mid x^n = y^2 = (xy)^2 = 1\rangle$$. (It can be realized as a subgroup of $$SO(3)$$ by letting $$x$$ be a rotation around the z-axis by angle $$2\pi/n$$ and letting $$y$$ be a rotation around the x-axis by angle $$\pi$$, although we won't need this explicit description.) Choose some $$x, y \in SO(3)$$ that satisfy the defining relations. Let $$X, Y \in SU(2)$$ such that $$h(X) = x, h(Y) = y$$. Now $$h^{-1}(\langle y \rangle)$$ is a cyclic group of order 4, and $$Y$$ has order 4 according to the Corollary above, so $$Y$$ generates $$h^{-1}(\langle y \rangle)$$. $$X$$ has order either $$n$$ or $$2n$$, but in the case that $$X$$ has order $$n$$, it must be that $$n$$ is odd and that $$-I \notin \langle X \rangle$$, and we can replace $$X$$ by $$-X$$, which then has order $$2n$$; since $$h^{-1}(\langle x \rangle)$$ is a cyclic group of order $$2n$$, this means we can choose $$X$$ such that $$\langle X\rangle = h^{-1}(\langle x\rangle)$$. We also have $$h(XY) = h(X)h(Y) = xy$$, which is of order 2 in $$SO(3)$$, therefore $$XY$$ is of order 4. We note that $$X$$ and $$Y$$ generate $$h^{-1}(D_n)$$ since the preimage of $$x^i y^j$$ must be $$\{X^i Y^j, -X^i Y^j\}$$ (and $$-I$$ is just $$Y^2$$). This suggests the presentation $2D_n = \langle X, Y \mid X^n = Y^2 = (XY)^2 = -I \rangle$ The only question that remains is whether the $$X$$'s and $$Y$$'s satisfy other relations not derived from these. In fact no further relations are needed: the quotient of the abstract group defined by this presentation by its central subgroup $$\{I, -I\}$$ is just $$D_n$$, so the abstract group has order at most $$4n$$; since it surjects onto $$h^{-1}(D_n)$$, which has order exactly $$4n$$, the group $$2D_n$$ defined above is precisely the preimage of $$D_n$$, and $$|2D_n| = 4n$$. The group $$2D_n$$ is also known as the dicyclic group $$\mathrm{Dic}_n$$.
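
For small $$n$$, the order and the relations of $$2D_n$$ can be confirmed by brute force. The sketch below (helper names are ours) takes $$X$$ to be the diagonal matrix with entries $$e^{\pm i\pi/n}$$ and $$Y$$ the matrix with rows $$(0, 1)$$ and $$(-1, 0)$$, which are one concrete choice of preimages, and generates the closure under multiplication:

```python
import cmath

# Matrices are stored as nested tuples so they can be hashed (after rounding)
# for the closure computation.

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def mat_pow(A, k):
    P = ((1, 0), (0, 1))
    for _ in range(k):
        P = mat_mul(P, A)
    return P

def key(A, nd=6):
    return tuple((round(z.real, nd), round(z.imag, nd)) for row in A for z in row)

def closure(gens):
    # all products of the generators; in a finite group this is the
    # subgroup they generate
    elems = {key(g): g for g in gens}
    frontier = list(gens)
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                p = mat_mul(a, g)
                if key(p) not in elems:
                    elems[key(p)] = p
                    new.append(p)
        frontier = new
    return list(elems.values())

def dicyclic_gens(n):
    w = cmath.exp(1j * cmath.pi / n)
    X = ((w, 0), (0, w.conjugate()))   # order 2n
    Y = ((0, 1), (-1, 0))              # order 4
    return X, Y

n = 5
X, Y = dicyclic_gens(n)
G = closure([X, Y])
minus_I = ((-1, 0), (0, -1))
assert len(G) == 4 * n
assert key(mat_pow(X, n)) == key(minus_I)
assert key(mat_pow(Y, 2)) == key(minus_I)
assert key(mat_pow(mat_mul(X, Y), 2)) == key(minus_I)
```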

One presentation of the tetrahedral group is: $A_4 = \langle x, y \mid x^3 = y^3 = (xy)^2 = 1 \rangle$ We can reason similarly to the previous paragraph. Choose $$X, Y \in SU(2)$$ such that $$h(X) = x, h(Y) = y$$; without loss of generality we can do this so that $$X$$ and $$Y$$ respectively generate $$h^{-1}(\langle x\rangle)$$ and $$h^{-1}(\langle y \rangle)$$, and therefore $$X^3 = Y^3 = -I$$ and $$X, Y$$ generate $$h^{-1}(A_4)$$. We know that $$h(XY)$$ has order 2 in $$SO(3)$$, so $$XY$$ has order 4 and thus $$(XY)^2 = -I$$. Therefore, the preimage of $$A_4$$ in $$SU(2)$$, known as the binary tetrahedral group, has the presentation: $2A_4 = \langle X, Y \mid X^3 = Y^3 = (XY)^2 = -I \rangle$ The following presentation of the octahedral group $S_4 = \langle x, y \mid x^4 = y^3 = (xy)^2 = 1 \rangle$ likewise gives rise to the following presentation for its preimage, the binary octahedral group: $2S_4 = \langle X, Y \mid X^4 = Y^3 = (XY)^2 = -I \rangle$ For the icosahedral group, we can use the presentation $A_5 = \langle x, y \mid x^2 = y^3 = (xy)^5 = 1 \rangle$ Here, choosing $$X$$ and $$Y$$ as in the previous cases might yield either $$(XY)^5 = I$$ or $$(XY)^5 = -I$$; replacing $$X$$ with $$-X$$ yields the other. So one possible presentation of its preimage, the binary icosahedral group (which is not isomorphic to $$S_5$$), is: $2A_5 = \langle X, Y \mid X^2 = Y^3 = (XY)^5 = -I \rangle$ (These presentations of the binary polyhedral groups are not the standard ones, but we will not concern ourselves with that detail here. It is much easier to work with these presentations than with actual matrices, so our work here is essentially done.)
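
The order $$|2A_4| = 24$$ and the relations above can be confirmed with exact quaternion arithmetic. The generators below, $$s = \frac{1}{2}(1+i+j+k)$$ and $$t = \frac{1}{2}(1+i+j-k)$$, are one concrete choice of unit quaternions mapping to order-3 rotations (an assumption of this sketch, not taken from the text):

```python
from fractions import Fraction as F

# Quaternions as 4-tuples (a, b, c, d) standing for a + bi + cj + dk,
# with exact rational arithmetic.

def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def closure(gens):
    # all products of the generators = the subgroup they generate
    elems = set(gens)
    frontier = list(gens)
    while frontier:
        new = []
        for x in frontier:
            for g in gens:
                p = qmul(x, g)
                if p not in elems:
                    elems.add(p)
                    new.append(p)
        frontier = new
    return elems

half = F(1, 2)
s = (half, half, half, half)
t = (half, half, half, -half)
minus_one = (F(-1), F(0), F(0), F(0))

def cube(q):
    return qmul(qmul(q, q), q)

G = closure([s, t])
assert cube(s) == minus_one and cube(t) == minus_one   # X^3 = Y^3 = -1
assert qmul(qmul(s, t), qmul(s, t)) == minus_one       # (XY)^2 = -1
assert len(G) == 24                                    # |2A_4| = 24
```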

In conclusion, the finite subgroups of $$SU(2)$$ are the cyclic groups $$C_n$$, the dicyclic groups $$\mathrm{Dic}_n$$, and the binary polyhedral groups $$2A_4, 2S_4, 2A_5$$.

Problem 4.12.9 For brevity, write $(a, b, c) = \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix}$ The character of the representation $$\mathbb{C}_{ij}$$ from Problem 4.12.2 is simply the scalar by which each group element acts, $\chi_{ij}(a, b, c) = \omega^{ai + cj}$ The character of the tensor product $$\mathbb{C}_{ij} \otimes \mathbb{C}_{i'j'}$$ is therefore $$\chi(a, b, c) = \omega^{ai + cj} \omega^{ai' + cj'} = \omega^{a(i + i') + c(j + j')} = \chi_{i+i',j+j'}(a, b, c)$$. Therefore $\mathbb{C}_{ij} \otimes \mathbb{C}_{i'j'} \cong \mathbb{C}_{i+i',j+j'}$ We now turn our attention to the $$p$$-dimensional representations $$R_z$$ where $$z$$ is a primitive $$p$$th root of unity. Recall $$\pref{eqn:heisenrz}$$: $(\rho(a, b, c)f)(x) = z^{cx-b} f(x-a)$ If we choose a basis $$f_0, \ldots, f_{p-1}$$ where $$f_j(x) = \delta_{jx}$$, then it is clear that $$\tr \rho(a, b, c) = 0$$ whenever $$a \ne 0$$. If $$a = 0$$, then the $$f_j$$'s are eigenvectors with eigenvalue $$z^{cj-b}$$, so $\tr \rho(a, b, c) = \sum_{j=0}^{p-1} z^{cj - b} = z^{-b} \sum_{j=0}^{p-1} z^{cj}$ When $$c \ne 0$$, this sum is a sum over the $$p$$ $$p$$th roots of unity and therefore vanishes. When $$c = 0$$, the sum is just $$p$$. Therefore $\chi_z(a, b, c) = \begin{cases} pz^{-b} & \text{if a = c = 0} \\ 0 & \text{otherwise} \end{cases}$ Consider the tensor product $$\mathbb{C}_{ij} \otimes R_z$$. The latter's character has support only when $$a = c = 0$$, but here $$\chi_{ij} = 1$$. So $$\chi_{ij} \chi_z = \chi_z$$, and $\mathbb{C}_{ij} \otimes R_z \cong R_z$ All that remains is to find the decomposition of $$R_z \otimes R_{z'}$$ into irreducibles. 
The character is \begin{align*} (\chi_z \chi_{z'})(a, b, c) &= \begin{cases} p^2(zz')^{-b} & \text{if $a = c = 0$} \\ 0 & \text{otherwise} \end{cases} \\ &= p\chi_{zz'}(a, b, c) \end{align*} where the last equality requires $$zz' \ne 1$$, so that $$zz'$$ is again a primitive $$p$$th root of unity. In that case $R_z \otimes R_{z'} \cong R_{zz'}^p$ If $$zz' = 1$$, the product character equals $$p^2$$ when $$a = c = 0$$ and vanishes elsewhere, so each one-dimensional character $$\chi_{ij}$$ appears with multiplicity $$\frac{1}{p^3} \cdot p \cdot p^2 = 1$$, and $$R_z \otimes R_{z'} \cong \bigoplus_{i,j=0}^{p-1} \mathbb{C}_{ij}$$ (cf. Problem 4.12.2).
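
These character identities are easy to confirm numerically for a small prime, say $$p = 3$$; the sketch below (function names are ours) computes $$\chi_z$$ directly from the defining formula for $$\rho$$:

```python
import cmath
from itertools import product

p = 3
w = cmath.exp(2j * cmath.pi / p)  # primitive p-th root of unity

def chi_R(z, a, b, c):
    # tr(rho(a,b,c)) on R_z, using rho(a,b,c) f_j = z^(c(j+a)-b) f_{(j+a) mod p};
    # diagonal entries exist only when j + a = j (mod p), i.e. a = 0
    return sum(z ** (c * (j + a) - b) for j in range(p) if (j + a) % p == j)

def chi_one_dim(i, j, a, b, c):
    return w ** (a * i + c * j)

for (a, b, c) in product(range(p), repeat=3):
    # character formula for R_z
    pred = p * w ** (-b) if (a == 0 and c == 0) else 0
    assert abs(chi_R(w, a, b, c) - pred) < 1e-9
    # R_w tensor R_w = p copies of R_{w^2} (w * w is still primitive)
    assert abs(chi_R(w, a, b, c) ** 2 - p * chi_R(w * w, a, b, c)) < 1e-9
    # z z' = 1: the product character is the sum of all p^2 one-dim characters
    lhs = chi_R(w, a, b, c) * chi_R(w ** 2, a, b, c)
    rhs = sum(chi_one_dim(i, j, a, b, c) for i in range(p) for j in range(p))
    assert abs(lhs - rhs) < 1e-9
```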

Problem 4.12.10 The proof follows the Hint given in the text.

Lemma 1: Let $$n$$ be a nonnegative integer. Then $$\mathbb{C}^n$$ is not the union of a finite number of its proper subspaces.

By induction. In the case $$n = 0$$, there are no proper subspaces, so the statement is true. In the case $$n = 1$$, which we choose as the base case, every proper subspace is the trivial space, so a union of proper subspaces can only be the trivial space. For the induction case, $$n \ge 2$$. Let $$V_1, \ldots, V_m$$ be proper subspaces of $$\mathbb{C}^n$$. Each proper subspace of $$\mathbb{C}^n$$ is at most $$(n-1)$$-dimensional. There are infinitely many $$(n-1)$$-dimensional subspaces of $$\mathbb{C}^n$$ (one for each nonzero projective $$n$$-tuple), so it is possible to find an $$(n-1)$$-dimensional subspace $$V \subseteq \mathbb{C}^n$$ that is not equal to any $$V_i$$. Therefore, for each $$i$$, $$V \cap V_i$$ is a subspace of $$V$$ that is at most $$(n-2)$$-dimensional. By the induction hypothesis, $$\bigcup_i V \cap V_i$$ is a proper subset of $$V$$, so there is an element of $$V$$ that is not in any of the $$V_i$$'s. Therefore $$\bigcup_i V_i$$ is a proper subset of $$\mathbb{C}^n$$.

Lemma 2: Let $$n$$ be a positive integer and let $$v_1, v_2, \ldots, v_m$$ be distinct nonzero vectors in $$\mathbb{C}^n$$. There exists an ordered basis $$B = \{b_1, \ldots, b_n\}$$ such that with respect to $$B$$, the first coordinate of $$v_1$$ is equal to 1, and for all $$i \in \{2, \ldots, m\}$$, the first coordinate of $$v_i$$ is unequal to 1.

For notational convenience, we will write $$B_j(v)$$ for the $$j$$th component of the vector $$v$$ with respect to ordered basis $$B$$.

Choose $$b_1 = v_1$$. If $$n = 1$$ then we are done; $$\{b_1\}$$ is the desired basis. We will proceed under the assumption that $$n \ge 2$$. Extend $$\{b_1\}$$ to a basis $$B' = \{b_1', b_2', \ldots, b_n'\}$$ (where $$b_1' = b_1$$). Clearly $$B'_1(v_1) = 1$$, but there might be some other $$i$$ such that $$B'_1(v_i) = 1$$ also. To solve this, we will perturb $$B'$$ by putting $$b_i = b_i' + \epsilon_i b_1$$ for some scalars $$\epsilon_2, \ldots, \epsilon_n$$; this mapping is one-to-one so $$B = \{b_1, b_2, \ldots, b_n\}$$ will also be a basis. We now concentrate on finding a vector $$\epsilon = (\epsilon_2, \ldots, \epsilon_n) \in \mathbb{C}^{n-1}$$ that makes this construction work.

Let $$S$$ be the set of all $$i$$ such that $$v_i$$ is not a scalar multiple of $$v_1$$. For all $$i \in S$$, let $$\hat{v}_i$$ be the vector of the last $$n-1$$ components of $$v_i$$ with respect to $$B'$$; that is, $$\hat{v}_i = (B'_2(v_i), \ldots, B'_n(v_i)) \in \mathbb{C}^{n-1}$$. For all $$i \in S$$, $$\hat{v}_i$$ is nonzero. Let $$V_i$$ be the subspace of $$\mathbb{C}^{n-1}$$ orthogonal to $$\hat{v}_i$$ with respect to the usual inner product. Then $$\bigcup_{i\in S} V_i$$ is a finite union of proper subspaces of $$\mathbb{C}^{n-1}$$, and by Lemma 1, there is therefore some $$u = (u_2, \ldots, u_n) \in \mathbb{C}^{n-1}$$ such that $$u \cdot \hat{v}_i \ne 0$$ for all $$i \in S$$.

If we put $$\epsilon = ku$$ for some $$k \in \mathbb{C}$$, then for each $$i \in \{1, \ldots, m\}$$, we have \begin{align*} v_i &= \sum_{j=1}^n B'_j(v_i) b'_j \\ &= B'_1(v_i) b'_1 + \sum_{j=2}^n B'_j(v_i) b'_j \\ &= B'_1(v_i) b'_1 - (ku\cdot \hat{v}_i)b'_1 + (ku\cdot \hat{v}_i)b'_1 + \sum_{j=2}^n B'_j(v_i) b'_j \\ &= (B'_1(v_i) - ku\cdot\hat{v}_i)b'_1 + \sum_{j=2}^n (ku_j B_j'(v_i) b'_1 + B'_j(v_i) b'_j) \\ &= (B'_1(v_i) - ku\cdot\hat{v}_i)b'_1 + \sum_{j=2}^n B_j'(v_i)(b'_j + ku_j b'_1) \\ &= (B'_1(v_i) - ku\cdot\hat{v}_i)b_1 + \sum_{j=2}^n B_j'(v_i) b_j \end{align*} so the coordinates of $$v_i$$ in $$B$$ are $$(B'_1(v_i) - ku\cdot\hat{v}_i, B'_2(v_i), \ldots, B'_n(v_i))$$. For $$i \in S$$, there is exactly one scalar $$k_i$$ such that $$B_1(v_i) = B'_1(v_i) - k_i(u\cdot\hat{v}_i)$$ is equal to 1. For $$i \in \{2, \ldots, m\} \setminus S$$, $$B_1(v_i)$$ is independent of $$k$$, but is already unequal to 1 since $$v_i$$ is a multiple of $$v_1$$ but not equal to $$v_1$$ itself. By choosing $$k \in \mathbb{C}$$ not equal to any of the $$k_i$$'s, we obtain the desired basis $$B$$.

Corollary: Let $$v_1, \ldots, v_m$$ be distinct nonzero vectors in $$\mathbb{C}^n$$ and let $$X$$ be a given basis for $$\mathbb{C}^n$$. For each $$i$$, there is a polynomial $$P_i \in \mathbb{C}[x_1, \ldots, x_n]$$ such that $$P_i(v_j) = \delta_{ij}$$, where $$P_i(v_j)$$ is interpreted as $$P_i(X_1(v_j), \ldots, X_n(v_j))$$. (Remark: These can be thought of as the analogue of the Lagrange interpolating polynomials for vectors.)

Let $$B^i$$ be a basis with respect to which $$B^i_1(v_j) = 1$$ if and only if $$j = i$$, as in the Lemma. Let $$M_i$$ be the matrix of the basis vectors in $$B^i$$ written with respect to $$X$$. Then the coordinate vector of each $$v_j$$ with respect to basis $$B^i$$ is $$M_i^{-1}v_j$$, which is a vector of $$n$$ linear polynomials in the components of $$v_j$$ with respect to $$X$$. The polynomial $$Q_i$$ defined by composing $$x_1 - 1$$ with $$M_i^{-1}$$ therefore vanishes on $$v_j$$ if and only if $$j = i$$. Having defined $$Q_1, \ldots, Q_m$$ in this fashion, define $$P'_i = \prod_{j \ne i} Q_j$$, which therefore vanishes on $$v_j$$ if and only if $$j \ne i$$. Finally, define $$P_i = P'_i/P'_i(v_i)$$. (Remark: The polynomials constructed in this way have degree $$m-1$$, which is the best possible as we know from the case $$n = 1$$. But we don't need to use this bound for the problem as written.)
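
Here is a numerical illustration of the Corollary. Note that it uses a shortcut rather than the basis-perturbation construction above: a generic linear functional $$u$$ separates the $$v_j$$ (which the sketch checks rather than proves), after which one-variable Lagrange interpolation in $$u \cdot x$$ produces polynomials of the same degree $$m-1$$, over $$\mathbb{R}$$ for simplicity:

```python
import random

random.seed(0)
n, m = 3, 4
vs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# a random functional u; generically the values u.v_j are pairwise distinct
u = [random.gauss(0, 1) for _ in range(n)]
t = [sum(ui * vi for ui, vi in zip(u, v)) for v in vs]
assert len({round(x, 9) for x in t}) == m  # u separates the v_j

def P(i, x):
    """Lagrange-style polynomial of degree m-1 in the scalar u.x; P(i, v_j) = delta_ij."""
    tx = sum(ui * xi for ui, xi in zip(u, x))
    out = 1.0
    for j in range(m):
        if j != i:
            out *= (tx - t[j]) / (t[i] - t[j])
    return out

for i in range(m):
    for j in range(m):
        assert abs(P(i, vs[j]) - (1.0 if i == j else 0.0)) < 1e-9
```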

We return to the problem. Since $$V$$ is faithful, for each $$g \in G \setminus \{1\}$$, the matrix $$\rho(g)$$ is not the identity, and therefore $$(\rho(g)^T)^{-1}$$ is not the identity. Thus, in the dual representation $$V^*$$, only the identity of $$G$$ acts as the identity matrix. The subspace of $$V^*$$ that is fixed by a non-identity element $$g$$, call it $$(V^*)^g$$, is therefore not the entire space. By Lemma 1, the finite union $$\bigcup_{g \ne 1} (V^*)^g$$ is not all of $$V^*$$, so there is some $$u \in V^*$$ that is not fixed by any $$g \in G \setminus \{1\}$$, i.e., its stabilizer in $$G$$ is the trivial subgroup.

According to the universal property for symmetric powers, for every symmetric $$k$$-linear map $$f : V^k \to \mathbb{C}$$, there is a unique linear map $$g : S^k V \to \mathbb{C}$$ such that $$f = g \circ \iota$$ where $$\iota$$ is the canonical inclusion map. If $$\varphi \in V^*$$, then such a symmetric $$k$$-linear map is $$f_\varphi(v_1, \ldots, v_k) = \varphi(v_1) \cdot \ldots \cdot \varphi(v_k)$$. Write $$f_\varphi = g_\varphi \circ \iota$$. Then, we can naturally interpret $$S^k V$$ as a space of polynomials on $$V^*$$ where each term has degree $$k$$, as follows: if $$w \in S^k V$$, then $$w(\varphi) = g_\varphi(w)$$. Extending by linearity, we can likewise regard $$SV$$ as a space of polynomials on $$V^*$$. Note that by polynomial on $$V^*$$ we mean a polynomial involving the components of $$\varphi \in V^*$$ with respect to a basis $$v_1^*, \ldots, v_n^*$$, and it is not hard to see that the mapping from $$SV$$ to such polynomials is surjective.

Let $$\gamma : SV \to \mathbb{C}^G$$ be defined so that $$\gamma(w)(g) = w(gu)$$, where $$u$$ was previously defined as having trivial stabilizer in $$G$$, and $$w$$ acts on $$V^*$$ as previously described. Let $$g_1, g_2 \in G$$. If $$g_1 u = g_2 u$$, then $$g_1^{-1} g_2 u = g_1^{-1} g_1 u = u$$, and by the definition of $$u$$, it follows that $$g_1^{-1} g_2 = 1$$, and $$g_1 = g_2$$. So the elements of $$V^*$$ given by $$gu$$ for $$g \in G$$ are all distinct; they are also nonzero (since each $$g$$ acts as an invertible matrix). Let $$f \in \mathbb{C}^G$$. By the Corollary to Lemma 2, there are polynomials $$P_g$$ for $$g \in G$$ such that $$P_g(g'u) = \delta_{gg'}$$ for all $$g' \in G$$. The polynomial $$P = \sum_{g\in G} f(g) P_g$$ therefore satisfies $$P(gu) = f(g)$$ for all $$g \in G$$. Therefore $$f = \gamma(P)$$ where $$P$$ is regarded as an element of $$SV$$. Therefore $$\gamma$$ is surjective.
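
To make the surjectivity of $$\gamma$$ concrete, here is a sketch for $$G = S_3$$ acting by its two-dimensional real orthogonal standard representation (the symmetries of a triangle). Since the representation is orthogonal, $$V^* \cong V$$ and we can work with the orbit of $$u$$ directly; the particular $$u$$ and the degree bound 3 are arbitrary choices of ours:

```python
import math

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

c120, s120 = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
r = ((c120, -s120), (s120, c120))   # rotation by 120 degrees
s = ((1.0, 0.0), (0.0, -1.0))       # reflection across the x-axis
I = ((1.0, 0.0), (0.0, 1.0))
G = [I, r, mul(r, r), s, mul(r, s), mul(mul(r, r), s)]

u = (0.31, 0.77)  # generic: fixed by no nonidentity element of G
orbit = [tuple(sum(g[i][j] * u[j] for j in range(2)) for i in range(2)) for g in G]
assert len({(round(x, 9), round(y, 9)) for x, y in orbit}) == 6  # points distinct

# Evaluate all monomials x^a y^b with a + b <= 3 at the six orbit points; if
# the 6 x 10 evaluation matrix has rank 6, every function G -> R arises from
# a polynomial, i.e. gamma is surjective.
monos = [(a, b) for a in range(4) for b in range(4) if a + b <= 3]
M = [[x ** a * y ** b for (a, b) in monos] for (x, y) in orbit]

def rank(rows, tol=1e-9):
    # Gaussian elimination
    rows = [list(row) for row in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

assert rank(M) == 6
```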

This allows us to show that $$SV$$ contains a copy of the regular representation $$\mathbb{C}[G]$$. We shall find it convenient to regard the elements of $$V^*$$ as row vectors, so that the operation of a linear functional on a vector is simply matrix multiplication, and $$g\varphi = \varphi \rho(g)^{-1}$$ where $$\varphi \in V^*$$. If $$w \in SV$$, then $$w$$ acts on $$\varphi$$ by distributing left-multiplication of $$\varphi$$ over the vector factors in $$w$$. In $$gw(\varphi)$$ the factor $$\rho(g)$$ is interposed, so it is equivalent to distributing left-multiplication of $$\varphi \rho(g)$$ over $$w$$. But $$\varphi\rho(g) = g^{-1}\varphi$$, so we have the identity $gw(\varphi) = w(g^{-1}\varphi)$ Thus, for each $$h \in G$$, we have \begin{align*} \gamma(gw)(h) &= gw(hu) \\ &= w(g^{-1}hu) \\ &= \gamma(w)(g^{-1}h) \end{align*} If we identify each $$f \in \mathbb{C}^G$$ with the element of $$\mathbb{C}[G]$$ given by $$\sum_{g\in G} f(g) g$$, this induces the linear map $$\gamma' : SV \to \mathbb{C}[G]$$ where \begin{align*} \gamma'(gw) &= \sum_{h\in G} gw(hu) h \\ &= \sum_{h\in G} w(g^{-1}hu) h \\ &= g \sum_{h\in G} w(g^{-1}hu) g^{-1}h \\ &= g \sum_{h'\in G} w(h'u) h' \\ &= g\gamma'(w) \end{align*} If $$w$$ is chosen so that $$\gamma'(w) = 1$$ (which is possible since $$\gamma$$, and hence $$\gamma'$$, is surjective), then by linearity, for each $$x \in \mathbb{C}[G]$$ we have $$\gamma'(xw) = x\gamma'(w) = x$$. Therefore the map $$x \mapsto xw$$ is injective, so $$\gamma': \mathbb{C}[G]w \to \mathbb{C}[G]$$ is an isomorphism of representations. By Maschke's theorem, $$\mathbb{C}[G]w$$, which is contained within $$SV$$, contains a copy of each irreducible representation of $$G$$. Let $$U$$ be such an irrep, regarded as a subrepresentation of $$SV$$. For each $$n \in \mathbb{N}$$, the projection $$\pi_n : SV \to S^n V$$ onto the degree-$$n$$ component is a homomorphism of representations, so its restriction to $$U$$ is either zero or injective, because its kernel is a subrepresentation of the irreducible $$U$$. Since $$U \ne 0$$, there is some $$n$$ with $$\pi_n(U) \ne 0$$, and then $$\pi_n(U) \subseteq S^n V$$ is a copy of $$U$$; that is, $$U$$ embeds in $$S^n V$$.

Problem 4.12.11 Before we tackle this problem, we should reflect on the nature of the representations $$\End V$$ and $$S^2 V$$ that the problem concerns.

Example 3.1.2 describes one way of constructing a representation with underlying vector space $$\End V$$ given a representation $$V$$. The resulting representation is isomorphic to $$V^n$$ where $$n = \dim V$$. However, this cannot be the type of representation that $$\End V$$ is intended to be in this problem. Instead, elements of $$SO(3)$$ are regarded as rotations of the coordinate axes, so they must act on $$\End V$$ by conjugation: $$R \cdot A = RAR^{-1}$$.

The matrix $$A_P$$ described in the problem conceptually lives in $$\End V$$, not $$V \otimes V$$; indeed, since $$g$$ is a diffeomorphism, $$A_P$$ should be invertible. A small deformation can be regarded as living in the tangent space $$\mathfrak{gl}(V)$$ and changes in coordinates should act on this tangent space as the adjoint representation, $$\ad_Q(X) = QXQ^{-1}$$. If $$Q$$ is restricted to lie in $$SO(3)$$, then, since $$SO(3)$$ preserves the Euclidean inner product, we can decompose $$\mathfrak{gl}(V)$$ into self-adjoint and anti-self-adjoint invariant subspaces with respect to this inner product; with the standard basis of $$V$$, these will be symmetric and antisymmetric matrices, respectively corresponding to the distortion and rotation parts.

To make a long story short, $$SO(3)$$ acts by conjugation on both of the representations concerned. However, in the case of $$S^2 V$$, whether we consider $$SO(3)$$ to act in the usual manner $$g(u \otimes v) = gu \otimes gv$$ or as conjugation on the corresponding matrices makes no difference because $$V$$ and $$V^*$$ are isomorphic as representations.

1. We claim that the trivial representation $$\mathbb{R}$$ is embedded in $$\End V$$ as the scalar matrices $$\mathbb{R}I_3$$, the standard representation is embedded as the antisymmetric matrices, and the 5-dimensional representation $$W$$ is embedded as the traceless symmetric matrices.

It's obvious that there is exactly one way to write an arbitrary $$3\times 3$$ real matrix as the sum of a scalar matrix and a traceless matrix. It's also not hard to see that a traceless matrix can be written as the sum of a symmetric matrix and an antisymmetric matrix in exactly one way, and since the antisymmetric part is traceless, so is the symmetric part. So in fact $$\End V$$ is the direct sum $$\mathbb{R} \oplus V' \oplus W$$ as vector spaces, where $$V'$$ denotes the subspace of antisymmetric matrices in $$\End V$$.

We need to show that these direct summands are also subrepresentations of $$\End V$$. For the scalar matrices this is obvious; they are invariant under conjugation. As conjugation preserves the trace, $$V' \oplus W$$ is also an invariant subspace of $$\End V$$. For all $$R \in SO(3), A \in \End V$$, observe that $$(RAR^{-1})^T = (R^{-1})^T A^T R^T = RA^T R^{-1}$$; thus, if $$A^T = kA$$, then $$(RAR^{-1})^T = RA^T R^{-1} = k(RAR^{-1})$$. This shows that $$V'$$ and $$W$$ are individually invariant subspaces of $$\End V$$. Finally, we need to show that $$V' \cong V$$ as representations. To do so, identify $$v \in V$$ with the operator $$T_v$$ where $$T_v(w) = v \times w$$ (the usual cross product). Observe that $$\langle T_v w, x\rangle = (v \times w) \cdot x = -w \cdot (v \times x) = -\langle w, T_v x\rangle$$, so each $$T_v$$ is indeed an antisymmetric matrix; since $$v \mapsto T_v$$ is an injective linear map from $$V$$ to $$V'$$, and $$V'$$ is also three-dimensional, this is a linear isomorphism. And since $$T_{Rv}(w) = Rv \times w = R(v \times R^{-1}w) = R T_v R^{-1}w$$, it is an isomorphism of representations.
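
Both identities used above, $$T_{Rv} = RT_vR^{-1}$$ and the preservation of (anti)symmetry and trace under conjugation, can be spot-checked numerically; the sample rotation and matrices below are arbitrary choices of ours:

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rot_z(t):
    return [[math.cos(t), -math.sin(t), 0], [math.sin(t), math.cos(t), 0], [0, 0, 1]]

def rot_x(t):
    return [[1, 0, 0], [0, math.cos(t), -math.sin(t)], [0, math.sin(t), math.cos(t)]]

def T(v):
    # matrix of the map w -> v x w
    return [[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]]

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

R = mul(rot_z(0.7), rot_x(-1.2))  # sample rotation (arbitrary angles)
v = [0.3, -1.1, 2.0]

# T_{Rv} = R T_v R^{-1}, with R^{-1} = R^T for a rotation
assert close(T(apply(R, v)), mul(mul(R, T(v)), transpose(R)))

# conjugation preserves both symmetry types and the trace
S = [[2.0, 0.5, -1.0], [0.5, -3.0, 0.25], [-1.0, 0.25, 1.0]]  # symmetric
conjS = mul(mul(R, S), transpose(R))
assert close(conjS, transpose(conjS))
assert abs(sum(conjS[i][i] for i in range(3)) - sum(S[i][i] for i in range(3))) < 1e-9
conjA = mul(mul(R, T(v)), transpose(R))
assert close(conjA, [[-x for x in row] for row in transpose(conjA)])
```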

For $$S^2 V$$, which we argued is just the symmetric subspace of $$\End V$$, the decomposition $$\mathbb{R} \oplus W$$ follows from the same arguments.

2. Let $$\tilde{V}$$ be the complexification of $$V$$. Suppose $$v = (v_1, v_2, v_3) \in \tilde{V} \setminus \{0\}$$. Let $$U$$ be the subrepresentation generated by $$v$$. Without loss of generality, suppose $$v_1 \ne 0$$. Let $$R$$ be the rotation by angle $$\pi$$ around the x-axis; then $$Rv = (v_1, -v_2, -v_3)$$, and therefore $$(2v_1, 0, 0) = v + Rv \in U$$. Therefore $$(1, 0, 0) \in U$$. Rotations by angle $$\pi/2$$ can be used to transform this into $$(0, 1, 0)$$ and $$(0, 0, 1)$$, so it is clear that in fact $$U = \tilde{V}$$. Since this holds for any starting vector $$v$$, we conclude that $$\tilde{V}$$ is irreducible.

Let $$\tilde{W}$$ be the complexification of $$W$$. In this case we will find it convenient to consider $$W$$ as a subspace of $$S^2 V$$. A basis for $$W$$ (and hence $$\tilde{W}$$) is given by $$e_1 e_2, e_2 e_3, e_3 e_1$$, and any two of $$e_1^2 - e_2^2, e_2^2 - e_3^2, e_3^2 - e_1^2$$. Suppose $$w \in \tilde{W} \setminus \{0\}$$. We will show that the subrepresentation $$U$$ generated by $$w$$ contains the basis, and hence, all of $$\tilde{W}$$.

We begin by observing that there exists $$R \in SO(3)$$ such that $$Re_1 = e_2, Re_2 = e_3, Re_3 = e_1$$. Therefore, if $$U$$ contains any one of $$e_1 e_2, e_2 e_3, e_3 e_1$$, it must contain the other two, and if $$U$$ contains any one of $$e_1^2 - e_2^2, e_2^2 - e_3^2, e_3^2 - e_1^2$$, it must contain the other two. Furthermore, let $$S$$ denote a rotation through angle $$\pi/4$$ around the z-axis, so that $$Se_1 = \frac{1}{\sqrt{2}}e_1 + \frac{1}{\sqrt{2}}e_2, Se_2 = -\frac{1}{\sqrt{2}}e_1 + \frac{1}{\sqrt{2}}e_2$$. Then $$S(e_1e_2) = (Se_1)(Se_2) = -\frac{1}{2}e_1^2 + \frac{1}{2}e_2^2$$; thus, if $$e_1 e_2 \in U$$, then $$e_1^2 - e_2^2 \in U$$, and vice versa using the inverse of $$S$$. Therefore if $$U$$ contains any one of the six elements $$e_1 e_2, e_2 e_3, e_3 e_1, e_1^2 - e_2^2, e_2^2 - e_3^2, e_3^2 - e_1^2$$, it also contains the other five. So, for each nonzero given $$w$$, it suffices to show that $$U$$ contains one of these basis elements.

First consider the case where $$w = c_1 e_2 e_3 + c_2 e_3 e_1 + c_3 e_1 e_2$$. Without loss of generality, say $$c_1 \ne 0$$. Let $$R_x$$ denote the rotation through angle $$\pi$$ around the x-axis. Then $$R_x w = c_1 e_2 e_3 - c_2 e_3 e_1 - c_3 e_1 e_2$$, so $$w + R_x w = 2c_1 e_2 e_3 \in U$$, thus $$e_2 e_3 \in U$$, and we are done.

If on the other hand $$w$$ doesn't have the form described in the previous paragraph, we can still employ a similar construction. If $$R_x, R_y$$ are the rotations through angle $$\pi$$ around the x and y axes, respectively, observe that $$w' = (I + R_y)(I + R_x)w \in U$$, and that if $$w = c_{11} e_1^2 + c_{22}e_2^2 + c_{33}e_3^2 + c_{12}e_1e_2 + c_{23}e_2e_3 + c_{31}e_3 e_1$$, then $$w' = 4(c_{11}e_1^2 + c_{22}e_2^2 + c_{33}e_3^2)$$. Let $$T_x, T_y, T_z$$ be rotations through angle $$\pi/2$$ around the x, y, and z axes, respectively; then $$T_xw' = 4(c_{11}e_1^2 + c_{33}e_2^2 + c_{22}e_3^2)$$, and $$(1 - T_x)w' = 4(e_2^2 - e_3^2)(c_{22} - c_{33})$$. Likewise $$(1 - T_y)w' = 4(e_3^2 - e_1^2)(c_{33} - c_{11})$$ and $$(1 - T_z)w' = 4(e_1^2 - e_2^2)(c_{11} - c_{22})$$. Since $$c_{11} + c_{22} + c_{33} = 0$$ but at least one of the three addends is nonzero, there must be some pair that are unequal, so at least one of $$(1 - T_x)w', (1 - T_y)w', (1 - T_z)w'$$ must be a nonzero multiple of the corresponding basis element $$e_2^2 - e_3^2, e_3^2 - e_1^2, e_1^2 - e_2^2$$, and we are done.

We have shown that for any nonzero $$w \in \tilde W$$, the subrepresentation generated by $$w$$ contains a basis for $$\tilde W$$, so it is the entirety of $$\tilde W$$. Thus, we conclude that $$\tilde{W}$$ is irreducible.
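
Irreducibility can also be probed numerically (over $$\mathbb{R}$$, which suffices to illustrate the point): starting from a single traceless symmetric matrix, its images under a handful of rotations already span the full five-dimensional space. The particular rotations and starting matrix below are arbitrary choices of ours:

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def rot_z(t):
    return [[math.cos(t), -math.sin(t), 0], [math.sin(t), math.cos(t), 0], [0, 0, 1]]

def rot_x(t):
    return [[1, 0, 0], [0, math.cos(t), -math.sin(t)], [0, math.sin(t), math.cos(t)]]

def act(R, A):
    # the SO(3) action on End V by conjugation; R^{-1} = R^T
    return mul(mul(R, A), transpose(R))

def rank(rows, tol=1e-9):
    # Gaussian elimination
    rows = [list(r) for r in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

# an arbitrary traceless symmetric starting matrix
w0 = [[1.0, 0.3, 0.0], [0.3, -2.0, 0.7], [0.0, 0.7, 1.0]]
rots = [rot_z(0.9), rot_x(1.3), mul(rot_z(2.1), rot_x(0.4)), rot_z(-0.5), rot_x(-2.2)]
images = [w0] + [act(R, w0) for R in rots] + \
         [act(R, act(S, w0)) for R in rots for S in rots]
vecs = [[A[i][j] for i in range(3) for j in range(3)] for A in images]

# all images stay traceless symmetric, and together they span all of W
assert all(abs(A[0][0] + A[1][1] + A[2][2]) < 1e-9 for A in images)
assert rank(vecs) == 5
```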

We claim that a homomorphism from $$S^2 V \cong \mathbb{R} \oplus W$$ to $$\End V \cong \mathbb{R} \oplus V \oplus W$$ will also be a homomorphism between their complexifications $$\mathbb{C} \oplus \tilde W$$ and $$\mathbb{C} \oplus \tilde{V} \oplus \tilde{W}$$. To see this, let $$\varphi : S^2 V \to \End V$$ be such a homomorphism, so that $$\varphi(RA) = R\varphi(A)$$ for all $$R \in SO(3), A \in S^2 V$$. Then, if $$\tilde{A} \in S^2 \tilde{V}$$, we will have $$\varphi(R\tilde{A}) = \varphi(R(\Re \tilde A + i \Im \tilde A)) = \varphi(R(\Re \tilde A)) + i \varphi(R(\Im \tilde A)) = R(\varphi(\Re \tilde A)) + iR(\varphi(\Im \tilde A)) = R(\varphi(\Re \tilde A + i \Im \tilde A)) = R(\varphi(\tilde A))$$. In this setting we can apply Schur's lemma profitably since $$\mathbb{C}$$ is algebraically closed; a homomorphism from $$\mathbb{C} \oplus \tilde{W}$$ to $$\mathbb{C} \oplus \tilde{V} \oplus \tilde{W}$$ must take the form $$\varphi(x + y) = Kx + \mu y$$ where $$x \in \mathbb{C}, y \in \tilde{W}$$ for some constants $$K, \mu \in \mathbb{C}$$; thus the image of $$\varphi$$ lies in $$S^2 \tilde{V}$$; that is, the stress tensor is always symmetric. (Of course, $$K$$ and $$\mu$$ must be real since $$\varphi$$ is a homomorphism of real representations.)