The affine Weyl group

The visualisation below is a work-in-progress to show features of affine Weyl groups corresponding to the irreducible rank-2 root systems. Click and drag to move around, or hold shift and scroll to zoom in and out. An overview of what each option means, and of the particular conventions used, is below.

Definition of the affine Weyl group

Let (R \subseteq V, R^\vee \subseteq V^*) be a reduced, irreducible root system. Each root \alpha \in R and integer k \in \bbZ defines an affine hyperplane H_{\alpha, k}^\vee = \set{\mu \in V^* \mid \innprod{\mu, \alpha} = k} in the dual space V^*, and, together with the corresponding coroot \alpha^\vee, an affine reflection r_{\alpha, k}(\mu) = \mu - (\innprod{\mu, \alpha} - k)\alpha^\vee fixing H_{\alpha, k}^\vee pointwise. Let \Aff(V^*) denote the group of invertible affine transformations of V^*; the subgroup generated by all the affine reflections r_{\alpha, k} is called the affine Weyl group associated to (R, R^\vee), denoted by W. The finite Weyl group W_f is naturally a subgroup, generated by those reflections with k = 0. Hence in our setup we have W_f \subseteq W \subseteq \Aff(V^*), and the visualisation above is a picture of V^*.

Note: Entirely analogously there is an affine Weyl group which is a subgroup of \Aff(V), and it too is often called the affine Weyl group. If the root system (R, R^\vee) is simply-laced then these groups are isomorphic, but otherwise they differ. The convention we have adopted is standard in the literature on Coxeter groups, for example in Bourbaki. We warn that when looking at the literature on algebraic groups, the affine Weyl group appearing in the linkage principle is the subgroup of \Aff(V), not of \Aff(V^*).

The affine hyperplanes cut the space V^* into open simplices called alcoves, upon which W acts freely and transitively, hence after choosing a reference alcove \Delta we can identify an element w \in W with the alcove w \Delta. If a base has already been chosen for the underlying finite root system, then the standard choice of \Delta is the unique alcove in the dominant chamber whose closure contains the origin. The walls of \Delta are then the hyperplanes H_{\alpha, 0}^\vee corresponding to the simple roots \alpha of the finite Weyl group W_f, together with the wall H_{\theta, 1}^\vee, where \theta is the highest root of the finite root system. The finite simple reflections S_f together with the affine reflection r_{\theta, 1} generate the affine Weyl group W as a Coxeter group.

Options

The labelling option puts a number or polynomial on top of each alcove.

  • Length labels each group element by its length l(w).
  • # Reduced expressions labels each group element w by the number of reduced expressions for w. The reduced expressions themselves are not enumerated (there can be exponentially many in the length), but rather they are only counted, using the recursion c(\id) = 1 and c(w) = \sum_{s \in L(w)} c(sw), where L(w) denotes the left descent set of w. (A sketch of this recursion appears after this list.)
  • KL polynomials labels each group element w with the Kazhdan-Lusztig polynomial h_{w, x}, where x is the selected (purple) element under the cursor. We are using the Soergel normalisation of the Hecke algebra - more details are below in the Hecke algebras section.
  • KL polynomials (v = 1) labels each group element w with the Kazhdan-Lusztig polynomial h_{w, x} evaluated at v = 1.
  • \mu-coefficients labels each group element w with the v^1-coefficient appearing in the Kazhdan-Lusztig polynomial h_{w, x}. These coefficients determine the structure of the cells.
  • AS KL polynomials labels each dominant alcove with the antispherical Kazhdan-Lusztig polynomial n_{w, x}, where x is the element under the cursor. More details are in the antispherical modules section.
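
As an illustration of the counting recursion above (this is a sketch for exposition, not the code running on this page), here is a short Python snippet using the symmetric group S_4 as a stand-in for the Weyl group; the function names are made up for the example.

```python
from functools import lru_cache

N = 4  # use the symmetric group S_4 (finite type A_3) as a stand-in

def left_mult(i, w):
    """s_i * w: swap the values i and i+1 in the one-line notation of w."""
    w = list(w)
    a, b = w.index(i), w.index(i + 1)
    w[a], w[b] = w[b], w[a]
    return tuple(w)

def length(w):
    """Coxeter length = number of inversions."""
    return sum(1 for a in range(N) for b in range(a + 1, N) if w[a] > w[b])

@lru_cache(maxsize=None)
def num_reduced_expressions(w):
    """c(id) = 1, and c(w) is the sum of c(sw) over the left descents s of w."""
    if length(w) == 0:
        return 1
    return sum(num_reduced_expressions(left_mult(i, w))
               for i in range(1, N) if length(left_mult(i, w)) < length(w))

w0 = tuple(range(N, 0, -1))             # the longest element of S_4
print(num_reduced_expressions(w0))      # 16
```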

The shading option displays a set of elements by shading them a darker colour. Sometimes this set depends on the element x under the cursor.

  • The Bruhat order is the order generated by w \leq wt whenever t is a reflection (a conjugate of a generator) and l(w) < l(wt). In the case of finite and affine Weyl groups, the set of reflections is exactly the set of reflections r_{\alpha, k} used to define the group. When this option is selected, the set \set{w \in W \mid w \leq x} of elements less than or equal to the element under the cursor is shown. When the covering option is set, the set of elements which x covers (those y \leq x with l(y) = l(x) - 1) is shown in green.
  • The right weak order is generated by w \leq ws whenever l(w) < l(ws), and the displayed elements work like the Bruhat option.
  • The left weak order is generated by w \leq sw whenever l(w) < l(sw), and the displayed elements work like the Bruhat option.
  • The cone type of x is the set T(x) = \set{w \in W \mid l(xw) = l(x) + l(w)}.
  • The dihedral elements of W are those contained in some dihedral reflection subgroup.

Cells: upon choosing left, right, or two-sided cells, the cells for a limited subset of the Kazhdan-Lusztig basis are computed and shown. The cell ordering is also computed, and shown on the left.

The NF Tree option shows the tree associated to the chosen normal form.

The p-canonical option shows p-canonical basis elements for the antispherical module, for select types. After selecting a dataset from the p-canonical list, two more labelling options become available: one showing the expansion of a p-canonical basis element into the canonical basis, and another for those coefficients evaluated at v = 1. The # p-can in can option shows the number of nonzero terms in the expansion of each p-canonical basis element in terms of the canonical basis elements. Unlike everything else on this page (which is calculated on-the-fly in your browser), the p-canonical datasets were calculated offline - each requiring several hours of CPU time. The algorithm used is due to Thorge Jensen and Geordie Williamson.

Hecke algebras

We use the “Soergel normalisation” of the Hecke algebra, which matches the conventions used in the book Introduction to Soergel bimodules. We give a few details here so that the reader can check what conventions we are using; see Chapter 3 of Introduction to Soergel bimodules for a proper introduction to the Hecke algebra.

The Hecke algebra H = H(W, S) is a free \bbZ[v^{\pm}]-module, with a standard basis \set{\delta_x \mid x \in W}. The quadratic relation used is \delta_s^2 = (v^{-1} - v)\delta_s + 1, hence the right-multiplication-by-\delta_s formula for s \in S is \delta_x \delta_s = \begin{cases} \delta_{xs} & \text{if } x < xs, \\ \delta_{xs} + (v^{-1} - v) \delta_x & \text{if } x > xs. \end{cases} The bar involution is the \bbZ-linear map sending p(v) \delta_x to \overline{p(v) \delta_x} = p(v^{-1}) \delta_{x^{-1}}^{-1}. On generators, we have \delta_s^{-1} = \delta_s + (v - v^{-1}). The Kazhdan-Lusztig basis or canonical basis is the unique set \set{b_x \mid x \in W} \subseteq H of elements which are self-dual (fixed points of the bar involution), and furthermore satisfy the degree bound condition b_x = \delta_x + \sum_{y < x} h_{y, x} \delta_y, \quad \text{for some } h_{y, x} \in v \bbZ[v]. We have b_{\id} = \delta_{\id} = 1 \in H and b_s = \delta_s + v. The Laurent polynomials h_{y, x} are called Kazhdan-Lusztig polynomials. Canonical basis elements can be calculated in a relatively straightforward fashion by using the right-multiplication formulas \delta_x b_s = \begin{cases} \delta_{xs} + v \delta_x & \text{if } x < xs, \\ \delta_{xs} + v^{-1} \delta_x & \text{if } x > xs, \end{cases} b_y b_s = \begin{cases} b_{ys} + \sum_{zs < z} \mu(z, y) b_z & \text{if } y < ys, \\ (v + v^{-1})b_y & \text{if } y > ys, \end{cases} where \mu(y, x) denotes the coefficient of v in h_{y, x}.
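
To make this calculation concrete, here is a minimal Python sketch (for exposition only, not the code used on this page) which computes the canonical basis in the Soergel normalisation for the symmetric group S_3: it builds b_x from b_{xs} b_s and strips off the terms violating the degree bound. Laurent polynomials in v are stored as dictionaries from exponents to coefficients; all names are made up for the example.

```python
from itertools import permutations

N = 3  # the symmetric group S_3, standing in for the (finite or affine) Weyl group

def right_mult(w, i):
    """w * s_i: swap positions i-1 and i in the one-line notation of w."""
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def length(w):
    return sum(1 for a in range(N) for b in range(a + 1, N) if w[a] > w[b])

# Laurent polynomials in v are dicts {exponent: coefficient}.
def padd(p, q, scale=1):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + scale * c
        if r[e] == 0:
            del r[e]
    return r

def pshift(p, k):
    return {e + k: c for e, c in p.items()}   # multiply by v^k

def times_bs(elt, i):
    """Right-multiply an element (dict: x -> coefficient of delta_x) by b_s = delta_s + v."""
    out = {}
    for w, p in elt.items():
        ws = right_mult(w, i)
        k = 1 if length(ws) > length(w) else -1   # delta_w b_s = delta_ws + v^{+-1} delta_w
        out[ws] = padd(out.get(ws, {}), p)
        out[w] = padd(out.get(w, {}), pshift(p, k))
    return out

identity = tuple(range(1, N + 1))
canonical = {identity: {identity: {0: 1}}}        # b_id = delta_id

# Build b_x for all x in order of increasing length: start from b_{xs} b_s and subtract
# the canonical basis elements whose coefficients violate the degree bound (constant terms).
for x in sorted(permutations(range(1, N + 1)), key=length):
    if x == identity:
        continue
    i = next(i for i in range(1, N) if length(right_mult(x, i)) < length(x))
    e = times_bs(canonical[right_mult(x, i)], i)
    for z in sorted(e, key=length, reverse=True):
        c = e[z].get(0, 0)
        if z != x and c != 0:
            for y, p in canonical[z].items():
                e[y] = padd(e.get(y, {}), p, -c)
    canonical[x] = {y: p for y, p in e.items() if p}

# The KL polynomials h_{y, x} can now be read off; for the longest element of S_3 they
# are the monomials v^{l(x) - l(y)}.
print(canonical[(3, 2, 1)])
```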

Other normalisations

There are some other normalisations of the Hecke algebra which are used in practice. (Prepare to get into the weeds here).

As a preliminary, after fixing a Coxeter system (W, S), a base commutative ring R, and two parameters \lambda, \mu \in R, there is a Hecke algebra \cH_R(\lambda, \mu) defined as the free module over R with basis \set{\delta_x \mid x \in W} indexed by W, such that the braid relations hold amongst the generators \set{\delta_s \mid s \in S} and we have the quadratic relation \delta_s^2 = \lambda \delta_s + \mu for each generator. (This is not a vacuous statement: work has to be done to show that this does in fact describe an algebra structure). It is important to keep this in mind when trying to distinguish different-looking Hecke algebras: basically, all you need to describe the whole algebra structure is the quadratic relation. From this point on I’ll reserve the notation \delta_x explicitly for the Soergel normalisation.

In their original work, Kazhdan and Lusztig start with the “standard” Hecke algebra defined over \bbZ[q] with the quadratic relation (T_s + 1)(T_s - q) = 0, and then extend scalars to the ring of Laurent polynomials \bbZ[q^{\pm \frac{1}{2}}]. (So here q is an arbitrary parameter, subject to it being invertible and square-rootable). The relationship between their Hecke algebra normalisation and Soergel’s is q = v^{-2}, \quad T_x = v^{-l(x)}\delta_x. For instance, applying this to the KL quadratic relation (T_s + 1)(T_s - q) = 0 gives (v^{-1}\delta_s + 1)(v^{-1}\delta_s - v^{-2}) = 0; clearing denominators and rearranging gives Soergel’s relation (\delta_s + v)(\delta_s - v^{-1}) = 0. The bar involution is defined the same way: it sends q to q^{-1}, and T_x to T_{x^{-1}}^{-1}. Kazhdan and Lusztig then define two different canonical bases, which they label C_w and C_w'. The second basis is the equivalent of Soergel’s, and it is defined by the condition that it is self-dual and C_w' = q_w^{-\frac{1}{2}} \sum_{y \leq w} P_{y, w} T_y, where q_w = q^{l(w)} (a shorthand used throughout their paper), P_{w, w} = 1, and P_{y, w} is a polynomial in q of degree at most \frac{1}{2}(l(w) - l(y) - 1) for y < w. The P_{y, w} are the Kazhdan-Lusztig polynomials, and are related to the h_{y, w} in Soergel’s normalisation by h_{y, w} = v^{l(w) - l(y)} P_{y, w} = q^{\frac{1}{2}(l(y) - l(w))} P_{y, w}. The generating canonical basis elements are C_s' = q^{-\frac{1}{2}}(T_s + 1), and we again have the multiplication formula C'_{sw} = C'_s C'_w - \sum_{\substack{z < w \\ sz < z}}\mu(z, w) C_z', where \mu(z, w) is the coefficient of the degree \frac{1}{2}(l(w) - l(z) - 1) term in the KL polynomial P_{z, w}. Since P_{z, w} is a polynomial in q and not in q^{\frac{1}{2}}, we will have nonzero \mu(z, w) only when the length difference between z and w is odd.

The rule for right multiplication by T_s in the KL normalisation is T_w T_s = \begin{cases} T_{ws} & \text{if } ws > w, \\ qT_{ws} + (q - 1)T_w & \text{if } ws < w, \end{cases} with right multiplication by T_s + 1 being particularly nice: T_w (T_s + 1) = \begin{cases} T_{ws} + T_w & \text{if } ws > w, \\ qT_{ws} + qT_w & \text{if } ws < w. \end{cases}

If we use the KL normalisation to compute the KL polynomials P_{y, w}, we should be computing the elements q_w^{\frac{1}{2}} C_w' in the Hecke algebra: at the risk of being very confusing, define C_w'' = q_w^{\frac{1}{2}} C_w', so for example C_s'' = T_s + 1. Then multiplying the earlier equation through by a scaling factor gives C_{ws}'' = (T_s + 1) C_w'' - \sum_{\substack{z < w \\ zs < z}} \mu(z, w) C_z'' q^{\frac{1}{2}(1 + l(w) - l(z))}.

The symmetry in the formula for right multiplication by T_s + 1 shows us that if ws > w then (aT_{ws} + bT_w)(T_s + 1) = (aq + b)T_{ws} + (aq + b)T_w, and so by induction, if s is a right descent of w, then P_{z, w} = P_{zs, w} for all z. The analogous property holds for left descents.

Table of normalisations

There are two main presentations of the Hecke algebra, together with a normalisation of the standard basis, which come up in the literature. The definitions of the standard basis, KL polynomials, and R-polynomials change slightly depending on which normalisation is used (the bar involution and canonical basis remain the same). It can be tiresome to track how these normalisations are done across the literature, especially once notation is brought along for the ride. The purpose of this section is to lay out some notational touchstones across influential pieces of the literature, so that the reader can triangulate their position from there.

The first I will call the Lusztig normalisation, where the Hecke algebra H_q(W, S) is a module over \mathbb{Z}[q^{± 1}] (although square roots of q are usually adjoined), with the standard basis generators T_s satisfying any of the three equivalent quadratic relations (T_s + 1)(T_s - q) = 0, T_s^2 = T_s(q - 1) + q, T_s^{-1} = q^{-1} T_s + (q^{-1} - 1). The second I will call the Soergel normalisation, where the Hecke algebra H_v(W, S) is a module over \mathbb{Z}[v^{± 1}], with the standard basis elements \delta_s satisfying any of the three equivalent quadratic relations (\delta_s + v)(\delta_s - v^{-1}) = 0, \delta_s^2 = (v^{-1} - v)\delta_s + 1, \delta_s^{-1} = \delta_s + (v - v^{-1}).

Note: Lusztig was not the first to write down the Hecke algebra, and Soergel was not the first to use the Soergel normalisation. These names are chosen because they seem semi-common in the literature, and some name is better than no name.

If we identify q = v^{-2}, then we can view \mathbb{Z}[q^{± 1}] sitting inside \mathbb{Z}[v^{± 1}]. By requiring that \delta_x = v^{l(x)} T_x, we may then view H_q(W, S) ⊆ H_v(W, S) as \mathbb{Z}-modules. Under this identification, the Kazhdan-Lusztig bar involution i on H_v(W, S) may be defined as the unique \mathbb{Z}-algebra automorphism which acts as i(v) = v^{-1} and i(\delta_s) = \delta_s^{-1} on the algebra generators: it then acts by i(\delta_x) = \delta_{x^{-1}}^{-1} on the standard basis. The reader may check that under the chosen embedding H_q(W, S) ⊆ H_v(W, S), this agrees with the involution j on H_q(W, S) defined as j(q) = q^{-1} and j(T_s) = T_s^{-1}. Hence the embeddings of algebras are compatible with the bar involution.

Now that we have compatible algebra-with-involution structures, it makes sense to compare elements which are written down in different papers. Each column of the table below corresponds to a paper or book, with the entries in that column written in the notation used there. Each row of the table names a term (the (L) or (S) indicates the Lusztig or Soergel normalisation), and all entries in that row are equal. For example, by looking at the second row, we can see that the standard basis elements \widetilde{T}_x, H_x, and \delta_x are all equal, but T_x itself is not: rather, the rescaled element q^{-l(x)/2} T_x equals \delta_x.

Note also the variable substitution q = v^{-2} when comparing Laurent polynomials. Take for example h_{\id, x} = v^2 + v^4 for x = ustu in type \widetilde{A}_2. We have P_{x, y} = v^{l(x) - l(y)} h_{x, y}, which gives P_{\id, x} = v^{-4}(v^2 + v^4) = 1 + v^{-2} = 1 + q.

The “normalisation table” is below.

Name | [KL79] | [Lus83] | [Soe97] | [EMTW20]
Standard basis (L) | T_x | q^{l(x)/2} \widetilde{T}_x | v^{-l(x)} H_x | v^{-l(x)} \delta_x
Standard basis (S) | q^{-l(x)/2} T_x | \widetilde{T}_x | H_x | \delta_x
Canonical basis | C'_x | C'_x | \underline{H}_x | b_x
KL polynomial (L) | P_{x, y} | q^{l(y)/2 - l(x)/2} P^*_{x, y} | v^{l(x) - l(y)} h_{x, y} | v^{l(x) - l(y)} h_{x, y}
KL polynomial (S) | q^{l(x)/2 - l(y)/2} P_{x, y} | P^*_{x, y} | h_{x, y} | h_{x, y}
R-polynomial | R_{x, y} | v^{l(x) - l(y)} R^*_{x, y} | - | -
R*-polynomial | q^{l(x)/2 - l(y)/2} R_{x, y} | R^*_{x, y} | - | -

The references in the table are:

  • [KL79]: Kazhdan and Lusztig, Representations of Coxeter groups and Hecke algebras, Inventiones Mathematicae, 1979. DOI.
  • [Lus83]: Lusztig, Left cells in Weyl groups, Lie Group Representations I, Springer 1983. DOI.
  • [Soe97]: Soergel, Kazhdan-Lusztig polynomials and a combinatoric for tilting modules, Represent. Theory, 1997. DOI.
  • [EMTW20]: Elias, Makisumi, Thiel and Williamson, Introduction to Soergel Bimodules, RSME Springer Series, 2020. DOI.

Antispherical modules

Let I \subseteq S be a subset of the Coxeter generators; the subgroup W_I \subseteq W generated by I is called a standard parabolic subgroup of W. The length function on (W_I, I) is equal to the restriction of the length function on (W, S) to W_I. Each right coset W_I x contains a unique minimal-length coset representative x_0 = \argmin_{w \in W_I x} l(w). Let {^I W} \subseteq W denote the set of these minimal-length coset representatives: they can be characterised as the elements which cannot be made shorter by left multiplication by a generator in I: {^I W} = \set{w \in W \mid l(sw) > l(w) \text{ for all } s \in I}. We get a bijection of sets W_I \times {^I W} \to W by multiplying group elements, which satisfies l(uv) = l(u) + l(v) for u \in W_I and v \in {^I W}. The example we will be most interested in is when I is the set of simple reflections of the underlying finite root system, so W_I is the usual finite Weyl group. In this case, {^I W} corresponds to the set of alcoves lying in the dominant chamber of the finite system.
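
As an illustration of the factorisation W_I \times {^I W} \to W (a sketch for exposition, not the code used here), the following Python snippet factorises elements of the symmetric group S_4 with I = \set{s_1, s_2} by greedily stripping left descents lying in I.

```python
N, I = 4, [1, 2]   # the symmetric group S_4 with I = {s_1, s_2}, chosen purely for illustration

def left_mult(i, w):
    """s_i * w: swap the values i and i+1 in the one-line notation of w."""
    w = list(w)
    a, b = w.index(i), w.index(i + 1)
    w[a], w[b] = w[b], w[a]
    return tuple(w)

def length(w):
    return sum(1 for a in range(N) for b in range(a + 1, N) if w[a] > w[b])

def parabolic_factorise(w):
    """Return (u, v) with u in W_I, v in ^I W and w = u v."""
    word = []
    while True:
        i = next((i for i in I if length(left_mult(i, w)) < length(w)), None)
        if i is None:
            break                       # no generator in I shortens w, so w now lies in ^I W
        word.append(i)
        w = left_mult(i, w)
    u = tuple(range(1, N + 1))
    for i in reversed(word):            # rebuild u as the product of the stripped letters
        u = left_mult(i, u)
    return u, w

w = (3, 1, 4, 2)
u, v = parabolic_factorise(w)
print(u, v, length(u) + length(v) == length(w))   # (3, 1, 2, 4) (1, 2, 4, 3) True
```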

The Hecke algebra H_I = H(W_I, I) is naturally a subalgebra of the Hecke algebra H. Let \cL = \bbZ[v^{\pm 1}] denote the ring of Laurent polynomials. The quadratic relation implies that \set{v^{-1}, -v} are the eigenvalues for the action of right multiplication by any \delta_s upon H, hence we may define an algebra homomorphism H_I \to \cL by sending \delta_s to (-v) for s \in I. This makes \cL into a right H_I-module, denoted \cL(-v), which can be induced to a right H-module N = \cL(-v) \otimes_{H_I} H, called the antispherical module. The elements \set{1 \otimes \delta_x \mid x \in {^I W}} form a standard basis for N.

The bar involution on H induces a \bbZ-linear involution on N, by setting \overline{p \otimes h} = \overline{p} \otimes \overline{h}. Writing n_x = 1 \otimes \delta_x for the standard basis elements, we can again define a canonical basis of N by looking for a set \set{d_x \mid x \in {^I W}} of self-dual elements which satisfy the degree bound condition d_x = n_x + \sum_{y < x} n_{y, x} n_y, \quad \text{for some } n_{y, x} \in v \bbZ[v]. We have a right multiplication formula n_x b_s = \begin{cases} n_{xs} + v n_x & \text{if } x < xs \text{ and } xs \in {^I W}, \\ n_{xs} + v^{-1} n_x & \text{if } x > xs \text{ and } xs \in {^I W}, \\ 0 & \text{if } xs \notin {^I W}, \end{cases} which can be used to calculate the canonical basis of N, using a similar idea to the calculation of the canonical basis of H: when x < xs then d_x b_s = d_{xs} + (\text{junk}), and the junk can be identified as the terms violating the degree bound, and removed by subtracting an appropriate multiple of d_y for various y < x.
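
Here is a sketch (exposition only) of that calculation, continuing the S_3 Hecke algebra snippet from the previous section with I = \set{s_1}; it assumes the helpers defined there are in scope.

```python
# Continues the S_3 Hecke algebra sketch above: `N`, `right_mult`, `length`, `padd`,
# `pshift` and `identity` are assumed to be in scope.
from itertools import permutations

def in_IW(w):
    """Minimal coset representatives for I = {s_1}: those w with l(s_1 w) > l(w)."""
    return w.index(1) < w.index(2)

def n_times_bs(elt, i):
    """Right-multiply an element of N (dict: x in ^I W -> coefficient of n_x) by b_s."""
    out = {}
    for w, p in elt.items():
        ws = right_mult(w, i)
        if not in_IW(ws):
            continue                                     # n_w b_s = 0 in this case
        k = 1 if length(ws) > length(w) else -1          # n_w b_s = n_ws + v^{+-1} n_w
        out[ws] = padd(out.get(ws, {}), p)
        out[w] = padd(out.get(w, {}), pshift(p, k))
    return out

antispherical = {identity: {identity: {0: 1}}}           # d_id = n_id
for x in sorted(filter(in_IW, permutations(range(1, N + 1))), key=length):
    if x == identity:
        continue
    i = next(i for i in range(1, N) if length(right_mult(x, i)) < length(x))
    e = n_times_bs(antispherical[right_mult(x, i)], i)
    for z in sorted(e, key=length, reverse=True):        # strip the degree-bound violations
        c = e[z].get(0, 0)
        if z != x and c != 0:
            for y, p in antispherical[z].items():
                e[y] = padd(e.get(y, {}), p, -c)
    antispherical[x] = {y: p for y, p in e.items() if p}

print(antispherical)   # the antispherical KL polynomials n_{y, x} can be read off
```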

Cells

Suppose that H is an algebra, equipped with a choice of basis \set{h_x \mid x \in W} for some indexing set W. (We will always be considering H to be the Hecke algebra, and the indexing set to be the affine Weyl group W). When multiplying a particular basis element h_w on the left by an arbitrary element of H, the product can be re-expanded in the chosen basis, yielding some terms on the right: n h_w = \sum_{z} \lambda_z h_z. Whenever h_z appears in the expansion with a nonzero coefficient, we write z \xleftarrow{L} w, which can be read informally as “h_w produces an h_z under left multiplication”. More formally, we define the binary relation z \xleftarrow{L} w if and only if there exists some n \in H such that n h_w = \sum_{y} \lambda_y h_y with \lambda_z \neq 0.

The relation \xleftarrow{L} is reflexive (z \xleftarrow{L} z for all z), but unless the basis h_x is extremely special it will not be transitive. We take its transitive closure by declaring that z \leq_L w if there exists some path z \xleftarrow{L} \cdots \xleftarrow{L} w. By construction \leq_L is reflexive and transitive and so is a preorder (a preorder is like a partial order, but missing the axiom that x \leq y and y \leq x implies x = y). By taking its strongly connected components (equivalently, the strong components of the directed graph defined by \xleftarrow{L}), we obtain equivalence classes called left cells, and a partial order \leq_L on those left cells.

Analogously, we can define the binary relation z \xleftarrow{R} w for when h_z appears in a product like h_w m, and z \xleftarrow{LR} w for when h_z appears in a product like n h_w m. (The preorder generated by \xleftarrow{LR} is the same as the preorder generated by the union of the \xleftarrow{L} and \xleftarrow{R} relations). Analogously we get right cells, and two-sided cells.

In the Hecke algebra, the standard basis does not produce interesting cells. Every standard basis element \delta_x is invertible, and hence produces the relation x \xleftarrow{L} y for all x, y \in W, and hence a single cell containing every element. On the other hand, the cells defined by the KL basis are very interesting. We give a short account of how the cells are computed, after all the KL basis elements have been computed.

For y, x \in W let \mu(y, x) be the coefficient of v in the polynomial h_{y, x}. Then we have the following formulae for right and left multiplication by the canonical basis element b_s corresponding to a simple generator: b_y b_s = \begin{cases} (v + v^{-1})b_y & \text{if } ys < y \\ b_{ys} + \sum_{zs < z} \mu(z, y) b_z & \text{if } ys > y \end{cases}

b_s b_y = \begin{cases} (v + v^{-1})b_y & \text{if } sy < y \\ b_{sy} + \sum_{sz < z} \mu(z, y) b_z & \text{if } sy > y \end{cases}

In order to generate the preorder \leq_L, it is enough to consider multiplication by some generating set of the algebra. The generating set we choose are the KL basis elements \set{b_s \mid s \in S} corresponding to the simple reflections, since we have those nice formulae above showing how such products expand back into the KL basis. It is then simply a matter of running through all elements y \in W and all s \in S such that sy > y: the product b_s b_y contributes an edge sy \leftarrow y, together with an edge z \leftarrow y for each z with \mu(z, y) \neq 0 and sz < z. After collecting all edges, we run a beautiful algorithm due to Robert Tarjan to generate the strong components of the graph: these are the cells. This algorithm outputs the cells in a topological ordering, so the “condensed graph” (the directed acyclic graph where vertices are cells, and edges are induced by the edges on the original graph) can be easily generated. The transitive closure of this condensed graph is the cell ordering. To display the cell ordering however, we take the transitive reduction of the graph: this deletes edges until all we are left with are covering relations.
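
The following sketch (exposition only) continues the S_3 snippet from the Hecke algebras section: it builds the edge set just described and extracts the left cells as the mutually-reachable classes, with a simple double-reachability check standing in for Tarjan’s algorithm.

```python
# Continues the S_3 sketch from the Hecke algebras section: `N`, `canonical` and
# `length` are assumed to be in scope.
from itertools import permutations

def left_mult(i, w):
    """s_i * w: swap the values i and i+1 in the one-line notation of w."""
    w = list(w)
    a, b = w.index(i), w.index(i + 1)
    w[a], w[b] = w[b], w[a]
    return tuple(w)

def mu(z, y):
    """Coefficient of v^1 in h_{z, y} (zero when z does not appear below y)."""
    return canonical[y].get(z, {}).get(1, 0)

elements = list(permutations(range(1, N + 1)))
edges = {y: set() for y in elements}        # an edge z <- y means b_z appears in some b_s b_y
for y in elements:
    for i in range(1, N):
        sy = left_mult(i, y)
        if length(sy) > length(y):
            edges[y].add(sy)                # the b_{sy} term
            for z in elements:
                if mu(z, y) and length(left_mult(i, z)) < length(z):
                    edges[y].add(z)         # the mu(z, y) terms with sz < z

def below(y):
    """Everything reachable from y, i.e. all z with z <=_L y."""
    seen, stack = {y}, [y]
    while stack:
        for nxt in edges[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

down = {y: below(y) for y in elements}
cells = {frozenset(z for z in elements if y in down[z] and z in down[y]) for y in elements}
print(cells)   # four left cells: {id}, {s1, s2 s1}, {s2, s1 s2}, {w0}
```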

Normal forms

After fixing an order on the generating set S, there are two commonly used normal forms for elements of a Coxeter group W. Clearly a normal form should be a reduced expression, but which one?

  • The ShortLex normal form for w \in W is the lexicographically least reduced word in the generators representing w. If \mathtt{ShortLex}(w) = (s_1, \ldots, s_n), then s_1 is the “smallest possible starting letter” for w, in that s_1 is the least generator s such that l(s w) < l(w). Furthermore, (s_2, \ldots, s_n) is the normal form for s_1 w. Every suffix of a normal form is again a normal form.
  • The InvShortLex normal form for w \in W is (s_1, \ldots, s_n), where s_n is the “smallest possible ending letter” for w. The InvShortLex language (as a formal language contained in S^*) is the reverse of the ShortLex language, and we have the equality \mathtt{Reverse}(\mathtt{InvShortLex}(w)) = \mathtt{ShortLex}(w^{-1}). Every prefix of a normal form is again a normal form.

Since any prefix of a normal form is again a normal form, the normal forms define a tree structure on the group, with an edge x \xrightarrow{s} y iff NF(x) \cdot s = NF(y) as words.
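
As an illustration (not the code used here), the greedy description of ShortLex translates directly into a short Python function; the symmetric group S_4 again stands in for the Weyl group, with the generators ordered s_1 < s_2 < s_3.

```python
N = 4  # the symmetric group S_4, used as a stand-in

def left_mult(i, w):
    """s_i * w: swap the values i and i+1 in the one-line notation of w."""
    w = list(w)
    a, b = w.index(i), w.index(i + 1)
    w[a], w[b] = w[b], w[a]
    return tuple(w)

def length(w):
    return sum(1 for a in range(N) for b in range(a + 1, N) if w[a] > w[b])

def shortlex(w):
    """Strip the least possible first letter until reaching the identity."""
    word = []
    while length(w) > 0:
        i = next(i for i in range(1, N) if length(left_mult(i, w)) < length(w))
        word.append(i)
        w = left_mult(i, w)
    return word

print(shortlex((4, 3, 2, 1)))   # the longest element; its ShortLex form is [1, 2, 1, 3, 2, 1]
```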

The p-dilated affine Weyl group

Recall that the affine Weyl group W \subseteq \Aff(V^*) is generated by the reflections r_{\alpha, k} where \alpha is a finite root, and k is an integer. For p > 0, the subgroup generated by the r_{\alpha, k} with k \in p \bbZ is called the p-dilated affine Weyl group W_p \subseteq W. In a fractal-like way, it is both a subgroup of W (when both W and W_p are viewed as sitting inside \Aff(V^*)), and it is abstractly isomorphic to W.

W is generated by the simple reflections s_1, \ldots, s_n (which are all linear reflections), and the affine reflection s_0 = r_{\theta, 1} where \theta is the highest root of the underlying finite root system. W_p is generated by the same linear reflections s_1, \ldots, s_n, and the affine reflection s_0^{(1)} = r_{\theta, p}. These generators give the same Coxeter presentation, and so the isomorphism W \to W_p may be defined by sending s_1 \mapsto s_1, \ldots, s_n \mapsto s_n, and s_0 \mapsto s_0^{(1)}.

Now the fractal part: we want to define a homomorphism W \to W_p \injto W which embeds W inside of itself. (This is because, to compute using W and W_p, we really treat them as finitely presented groups, in which guise they look exactly the same). In order to compute this map, we need to know how to express the affine reflection s_0^{(1)} in terms of (s_1, \ldots, s_n, s_0). Purely by the definition of r_{\theta, k} we can see that s_0 = t_{\theta^\vee} s_\theta, where t_\mu \colon V^* \to V^* is translation by \mu \in V^*. Similarly we have s_0^{(1)} = t_{p \theta^\vee} s_\theta. So if we know how to express s_\theta in terms of the finite simple reflections s_1, \ldots, s_n, then we can get t_{\theta^\vee} in terms of the affine simple generators, and we will then have s_0^{(1)} = (t_{\theta^\vee})^p s_\theta.

In order to find s_\theta we use the general fact that w r_\alpha w^{-1} = r_{w \alpha} (which holds in any reflection-faithful root system). When constructing the root system we indexed roots by increasing depth (how far, in terms of reflections, they are away from being simple), and recorded this data in a reflection table. This leads to a simple process for finding a sequence of simple reflections such that x_n \cdots x_2 x_1 \theta = \alpha_s for some simple root \alpha_s. Then \theta = x_1 x_2 \cdots x_n \alpha_s, and hence, letting w = x_1 \cdots x_n, we have r_\theta = r_{w \alpha_s} = w s w^{-1}, and so t_{\theta^\vee} = s_0 w s w^{-1}. Finally we have s_0^{(1)} = t_{p \theta^\vee} s_\theta = (s_0 s_\theta)^p s_\theta = (s_0 s_\theta)^{p - 1} s_0.

It would also be handy to calculate the factorisation map W \to W_p \times {^p W}, where {^p W} is the set of minimal coset representatives (which behaves well by general reflection subgroup theory). Here we can use a similar idea to the one for parabolic subgroups, based on the fact (still true for general reflection subgroups) that each coset W_p x has a unique element of minimal length (with length as measured in W). Therefore to factorise an element w, try to multiply on the left by any of the subgroup generators (s_1, \ldots, s_n, s_0^{(1)}) and see if the length goes down. Eventually we get to some product x_k \cdots x_1 w of minimal length in the coset W_p w, and we have w = (x_1 \cdots x_k)(x_k \cdots x_1 w), a product of an element of W_p with a minimal coset representative. (In fact, during this algorithm we can treat any x_i = s_0^{(1)} as a simple generator for the left factor, and as a product of simple generators for the right factor, which means we can treat the left generator as either being inside W or inside W_p, implicitly using the isomorphism W \to W_p).

(In fact one thing is not obvious from the description above: why can we necessarily move to a minimal-length element only by multiplication on the left by the simple generators of W_p? Clearly we cannot choose just any generating set of W_p. The reason this works is that the set of generators for W_p is canonical with respect to the reflection subgroup W_p \subseteq W …)
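
To make the factorisation algorithm concrete, here is a toy sketch (exposition only) in affine type \widetilde{A}_1, where W is the infinite dihedral group acting on the real line and elements are stored as affine maps x \mapsto \varepsilon x + t; the representation and names are chosen purely for the example.

```python
# Toy model of affine type A_1: an element is a pair (eps, t) representing the map
# x -> eps*x + t, with t an even integer.

def compose(a, b):
    """(a o b) as affine maps of the line."""
    (e1, t1), (e2, t2) = a, b
    return (e1 * e2, e1 * t2 + t1)

def length(w):
    """Number of integer hyperplanes separating the fundamental alcove (0,1) from w(0,1)."""
    e, t = w
    n = t if e == 1 else t - 1          # w(0,1) is the alcove (n, n+1)
    return abs(n)

identity = (1, 0)
s1 = (-1, 0)                            # reflection in 0
s0 = (-1, 2)                            # reflection in 1, the affine generator of W

def factorise(w, p):
    """Return (u_word, v): u_word is a word in the generators of W_p, v is the
    minimal-length representative of the coset W_p w, and their product is w."""
    s0p = (-1, 2 * p)                   # reflection in p, the affine generator of W_p
    u_word = []
    while True:
        g = next((g for g in (s1, s0p) if length(compose(g, w)) < length(w)), None)
        if g is None:
            return u_word, w            # no generator of W_p shortens w any further
        u_word.append(g)
        w = compose(g, w)

w = compose(s0, compose(s1, s0))        # s0 s1 s0, the reflection in the point 2
print(factorise(w, 2))                  # ([(-1, 4)], (1, 0)): w = s_0^{(1)} * id when p = 2
print(factorise(s0, 2))                 # ([], (-1, 2)): s0 itself is a minimal coset rep
```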

It is not hard to see that the cardinality of {^p W} is p^{|S_f|}: the fundamental alcove and the p-dilated fundamental alcove are similar simplices in an |S_f|-dimensional space (S_f being the set of finite simple reflections), therefore the ratio of their volumes is p^{|S_f|}.

How alcoves are drawn

The affine fundamental chamber is the open simplicial cone spanned by the vectors \Delta = (\Lambda_1^\vee, \Lambda_2^\vee, \Lambda_3^\vee). Using the usual formula for the simple reflections on the coweight basis, this chamber gets moved around, generating the other chambers w \Delta for w \in W. In order to draw triangles in the plane, we need to take each generating vector v of \Delta, treat it as a projective line \bbR v, and intersect with the hyperplane \innprod{-, \delta} = 1. This is done by converting from the (\Lambda_1^\vee, \Lambda_2^\vee, \Lambda_3^\vee) basis into the (\varpi_1^\vee, \varpi_2^\vee, \delta^*) basis, then doing the usual projectivisation thing where the vector is normalised so that the last coordinate is 1. (This works because \innprod{\varpi_1^\vee, \delta} = \innprod{\varpi_2^\vee, \delta} = 0). The basis conversion map is x_1 \Lambda_1^\vee + x_2 \Lambda_2^\vee + x_3 \Lambda_3^\vee \mapsto x_1 \varpi_1^\vee + x_2 \varpi_2^\vee + (a_1 x_1 + a_2 x_2 + x_3) \delta^*.

Note that in this approach the coordinates of the simplex start as basis vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1). It’s only after basis conversion and scaling that we end up with (1/a_1, 0, 1), (0, 1/a_2, 1), (0, 0, 1).
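
A small sketch of this conversion-and-scaling step (exposition only; the marks a_1 = a_2 = 1 are those of type \widetilde{A}_2):

```python
a1, a2 = 1, 1   # coefficients of the highest root; both equal 1 in type A_2

def to_plane(x1, x2, x3):
    """Convert Lambda-coordinates to (varpi_1, varpi_2, delta*) coordinates, then
    rescale so that the delta*-coordinate (the pairing with delta) equals 1."""
    y1, y2, y3 = x1, x2, a1 * x1 + a2 * x2 + x3
    return (y1 / y3, y2 / y3)

# The vertices of the fundamental alcove start life as the basis vectors:
print([to_plane(*v) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]])
# -> [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)], i.e. (1/a_1, 0), (0, 1/a_2), (0, 0)
```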

Geometry

We will model the 2D Euclidean space for the affine Weyl group inside the 3D coweight space of the affine root system, living inside the plane \innprod{-, \delta} = 1. We use the adjoint realisation of the affine root datum, so the coweight space is the space V^* with basis (\Lambda_1^\vee, \Lambda_2^\vee, \Lambda_3^\vee) of affine fundamental coweights, and the weight space V has basis (\alpha_1, \alpha_2, \alpha_3) of simple roots. These are in perfect pairing: \innprod{\Lambda_i^\vee, \alpha_j} = \delta_{ij}.

An alternative basis of V is (\alpha_1, \alpha_2, \delta), with \alpha_3 = \delta - \widetilde{\alpha}, where \widetilde{\alpha} = a_1 \alpha_1 + a_2 \alpha_2 is the highest root \theta of the finite root system generated by the two finite simple roots. The corresponding alternative basis of V^* is (\varpi_1^\vee, \varpi_2^\vee, \delta^*), which is the dual basis to (\alpha_1, \alpha_2, \delta), and the relation to the old basis is \Lambda_1^\vee = \varpi_1^\vee + a_1 \delta^*, \quad \Lambda_2^\vee = \varpi_2^\vee + a_2 \delta^*, \quad \Lambda_3^\vee = \delta^*.

Detecting cursor position

Given a point (x_1, x_2) inside a triangle on the screen, which open simplex w\Delta does it correspond to? We embed this point into the affine space by adding \delta^*, then rewrite it in fundamental coweight coordinates: x_1 \varpi_1^\vee + x_2 \varpi_2^\vee + \delta^* \mapsto x_1 \Lambda_1^\vee + x_2 \Lambda_2^\vee + (-a_1 x_1 - a_2 x_2 + 1)\delta^*, which gives us a point in the interior of an alcove. This alcove is the fundamental alcove if all coordinates are nonnegative; otherwise some coordinate (say the i-th) is negative, and the simple reflection s_i can be applied to move the point to an alcove of smaller length. Eventually we reach the fundamental alcove, and the sequence of reflections applied gives us a reduced word in the Weyl group generators for w.
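
Here is the walk written out as a sketch (exposition only), hardcoded for type \widetilde{A}_2 so that a_1 = a_2 = 1, with index 2 playing the role of the affine generator:

```python
# A[i][j] = <alpha_i^vee, alpha_j> is the affine Cartan matrix of type A_2.
A = [[2, -1, -1],
     [-1, 2, -1],
     [-1, -1, 2]]

def alcove_word(x1, x2, tol=1e-9):
    """Walk the point back to the fundamental alcove, recording the reflections used;
    the recorded sequence is a reduced word for the alcove's label w."""
    c = [x1, x2, 1.0 - x1 - x2]          # coordinates in the (Lambda_i^vee) basis
    word = []
    while True:
        i = next((i for i in range(3) if c[i] < -tol), None)
        if i is None:
            return word                  # all coordinates nonnegative: inside (the closure of) Delta
        ci = c[i]
        c = [c[j] - ci * A[i][j] for j in range(3)]   # apply the simple reflection s_i
        word.append(i)

print(alcove_word(0.4, 0.3))   # [] : the point is already in the fundamental alcove
print(alcove_word(1.2, 0.9))   # [2, 1, 0] : a length-3 alcove
```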

Drawing

We only actually need to draw the coloured walls in order to generate “the look” of the affine Weyl group. This is easy enough: if w\Delta = (v_1, v_2, v_3), then the right-multiplication-by-s_1 wall is the segment [v_2, v_3], etc. We may also need to shade various simplices.

There are three different coordinate systems in play: (\Lambda_1^\vee, \Lambda_2^\vee, \Lambda_3^\vee) \quad (\varpi_1^\vee, \varpi_2^\vee, \delta^*) \quad (x, y) where the last coordinate system is “screen coordinates” in pixels. The coordinates of a simplex never change in the first two bases, but they will change in the last basis as we zoom and pan around.