Back to problem bank

PUMaC 2017 · Power Round · Problem 11

PUMaC 2017 — Power Round — Problem 11

Topic
Discrete Math
Difficulty
L3
Source
PUMaC

Problem Details

1. There are two places where you may ask questions about the test. The first is Piazza. Please ask your coach for instructions to access our Piazza forum. On Piazza, you may ask any question so long as it does not give away any part of your solution to any problem. If you ask a question on Piazza, all other teams will be able to see it. If such a question reveals all or part of your solution to a power round question, your team's power round score will be penalized severely. For any questions you have that might reveal part of your solution, or if you are not sure if your question is appropriate for Piazza, please email us at pumac@math.princeton.edu. We will email coaches with important clarifications that are posted on Piazza.

Introduction and Advice

The topic of this power round is Lie algebras ("Lie" pronounced as "Lee"). Lie algebras are mathematical objects with a simple set of operations that leads to a powerful classification. Rather than being a trivial matter untouched by modern mathematicians, Lie algebras are integral to a great deal of cutting-edge research; just last year, mathematicians fully cracked the structure of the algebra corresponding to the E_8 Dynkin diagram (shown on page 20). Sections 1 and 2 introduce fundamental ideas of linear algebra. Sections 3 and 4 provide some theory of Lie algebras through a problem-solving approach. Section 5 discusses graphs known as Dynkin diagrams and works through elegant cases of Serre's Theorem. This is not intended to be a complete course in Lie algebras; in any event, a contest is far from the best way to provide a complete undertaking. Rather, think of this as a groundwork for Section 5, which contains some truly beautiful results, in a sort of "greatest hits" of linear algebra.
Instead of having very few problems with many steps apiece, the guiding philosophy behind Sections 3-5 is that the majority of the problems are intended to be solved using only a handful of leaps. This is meant to reward understanding, as the progression in these sections is meant to be incremental. Here is some further advice with regard to the Power Round:

• Read the text of every problem! Many important ideas are included in problems and may be referenced later on. In addition, some of the theorems you are asked to prove are useful or even necessary for later problems.

• Make sure you understand the definitions, especially in the last few sections. If you don't, then you will not be able to do the problems. Feel free to ask clarifying questions about the definitions on Piazza (or email us).

• Don't make stuff up: on problems that ask for proofs, you will receive more points if you demonstrate legitimate and correct intuition than if you fabricate something that looks rigorous just for the sake of having "rigor."

• Check Piazza often! Clarifications will be posted there, and if you have a question it is possible that it has already been asked and answered in a Piazza thread (and if not, you can ask it, assuming it does not reveal any part of your solution to a question). If in doubt about whether a question is appropriate for Piazza, please email us at pumac@math.princeton.edu.

Good luck, and have fun! – Zachary Stier

We'd like to acknowledge and thank many individuals and organizations for their support; without their help, this Power Round (and the entire competition) could not exist. Please refer to the solution of the power round for full acknowledgments.

Contents

0 Whitelist
1 Linear Algebra I (20 points)
  1.1 Vector Spaces (5 points)
  1.2 Linear Mappings, Inner Products (11 points)
  1.3 Matrices (4 points)
2 Linear Algebra II (22 points)
  2.1 Eigen- (7 points)
  2.2 Trace (12 points)
  2.3 Semisimplicity & nilpotency (3 points)
3 Lie Algebras I (77 points)
  3.1 What is a Lie algebra? (26 points)
  3.2 Ideals and Subalgebras (16 points)
  3.3 Ado's Theorem (9 points)
  3.4 The adjoint representation (26 points)
4 Lie Algebras II (55 points)
  4.1 The Killing form and semisimple Lie algebras (24 points)
  4.2 Root space decomposition (31 points)
5 Root systems (125 points)
  5.1 What is a root system? (37 points)
  5.2 Dynkin diagrams (48 points)
  5.3 Dynkin diagrams of Lie algebras (40 points)

Notation

• ∀ : for all. ex.: ∀ x ∈ {1, 2, 3} means "for all x in the set {1, 2, 3}"
• f ◦ g : function composition. ex.: (f ◦ g)(x) = f(g(x)).
• A ⊂ B : proper subset. ex.: {1, 2} ⊂ {1, 2, 3}, but {1, 2} ⊄ {1, 2}
• A ⊆ B : subset, possibly improper. ex.: {1}, {1, 2} ⊆ {1, 2}
• f : x ↦ y : f maps x to y. ex.: if f(n) = n − 3 then f : 20 ↦ 17 and f : n ↦ n − 3 are both true.
• {x ∈ S | C(x)} : the set of all x in the set S satisfying the condition C(x). ex.: {n ∈ N | √n ∈ N} is the set of perfect squares.
• N : the natural numbers {1, 2, 3, . . .}.
• R : the real numbers.
• C : the complex numbers, {x + iy | x, y ∈ R and i = √−1}.
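The set-builder example in the Notation list can be enumerated directly. A small illustrative sketch in Python (purely supplemental, not part of the round):

```python
# {n in N | sqrt(n) in N}: the set of perfect squares, enumerated for n up to 50.
import math

squares = {n for n in range(1, 51) if math.isqrt(n) ** 2 == n}
print(sorted(squares))  # [1, 4, 9, 16, 25, 36, 49]
```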
0 Whitelist

Linear algebra can be tough to grasp on a first go. For teams that might like a more in-depth look at certain aspects, only the following resources may be referenced to enhance understanding:

• Robert Beezer, A First Course in Linear Algebra, http://linear.ups.edu/fcla/index.html – I would recommend this link first, as it appears to be an intermediate stage between the exposition of this Power Round and the following two texts.
• Jim Hefferon, Linear Algebra, 3rd ed., http://joshua.smcvt.edu/linearalgebra/book.pdf
• David Cherney et al., Linear Algebra, https://www.math.ucdavis.edu/~linear/linear-guest.pdf

These are the only resources you may access while working on the Power Round. Using any other resource constitutes cheating. In addition, you may only cite the texts when those results are consistent with only what has been featured prior to that point in the Power Round. This is to ensure that the texts are purely supplemental to the Round. What follows is a brief crash course in linear algebra, which will begin to lay down the foundations for the crux of this Power Round. If it appears that the coming sections are rather disjointed, understand that a great deal of content has been omitted so as to include only the topics essential for our course of study.

1 Linear Algebra I (20 points)

1.1 Vector Spaces (5 points)

Definition 1.1.A. A vector space is a set V of elements (known as vectors) that is associated with a field F. In what follows, F will be either R or C (though these are not the only fields).

• A vector space is closed under two operations:
  – addition (v + w is an element of V for any v, w ∈ V)
  – scalar multiplication (av is an element of V for any a ∈ F and v ∈ V)
• There is a zero element 0 ∈ V that functions as an additive identity.
• Scalar multiplication distributes over vector addition (a(v + w) = av + aw) and vice-versa ((a + b)v = av + bv), plus we have the following associativity: a(bv) = (ab)v.

Note that this implies the existence of additive inverses in V, i.e. for any v ∈ V there is another element, w, for which v + w = 0.

Two illustrative initial examples are R and C, which are each vector spaces over themselves. We see that each is closed under addition and scalar multiplication (which is actually just multiplication distributing over addition), and multiplying by −1 gives additive inverses. Rⁿ and Cⁿ, the ordered n-tuples (a_1, a_2, . . . , a_n) of real or complex numbers, are vector spaces for the same reasons.

Definition 1.1.B. W ⊆ V is a subspace if W is itself a vector space.

For instance, {(z, 0) | z ∈ C} ⊂ C² is a subspace relation. If two vector spaces are "independent from each other," in that their only common element is 0, then we write V_1 ⊕ V_2 = {v_1 + v_2 | v_1 ∈ V_1, v_2 ∈ V_2}. For instance, if V_1 = {(r, 0) | r ∈ R} and V_2 = {(0, r) | r ∈ R} then V_1 ⊕ V_2 = R². This lends itself to a notion of "adding" disjoint vector spaces.

Definition 1.1.C. A set of vectors {v_1, v_2, . . . , v_n} is linearly independent if there are no c_1, c_2, . . . , c_n ∈ F, not all of them 0, such that c_1 v_1 + c_2 v_2 + · · · + c_n v_n = 0.

For instance, (2, 0), (1, 7) ∈ R² are linearly independent because they cannot be combined to 0. However, no two nonzero real numbers are linearly independent.

Definition 1.1.D. A subset V′ ⊂ V spans V if, given v ∈ V, there is a collection of values c_1, . . . , c_{|V′|} for which, if V′ = {v′_1, . . . , v′_{|V′|}}, then v = c_1 v′_1 + · · · + c_{|V′|} v′_{|V′|}.

Definition 1.1.E. The dimension of V is the smallest integer n such that there exists a spanning set {v_1, v_2, . . . , v_n} ⊂ V, and such a set is called a basis. If there is no such integer n then V is infinite-dimensional; otherwise V is finite-dimensional. We denote this by n = dim V.

For instance, R³ = {(a, b, c) | a, b, c ∈ R} has dimension 3; one basis is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
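The linear-independence example after Definition 1.1.C can be spot-checked mechanically: two vectors in R² are linearly independent exactly when the determinant of the 2×2 matrix they form is nonzero. A minimal pure-Python sketch (the helper independent_2d is ours, purely illustrative):

```python
# Two vectors in R^2 are linearly independent iff their 2x2 determinant is nonzero.
def independent_2d(v, w):
    return v[0] * w[1] - v[1] * w[0] != 0

print(independent_2d((2, 0), (1, 7)))  # True: the example from Definition 1.1.C
print(independent_2d((3, 0), (5, 0)))  # False: two nonzero reals on one axis
```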
Solutions

PUMaC 2017 Power Round Solutions
Zachary Stier

Contents

0 Acknowledgements
1 Linear Algebra I (20 points)
  1.1 Vector Spaces (5 points)
  1.2 Linear Mappings, Inner Products (11 points)
  1.3 Matrices (4 points)
2 Linear Algebra II (22 points)
  2.1 Eigen- (7 points)
  2.2 Trace (12 points)
  2.3 Semisimplicity & nilpotency (3 points)
3 Lie Algebras I (77 points)
  3.1 What is a Lie algebra? (26 points)
  3.2 Ideals and Subalgebras (16 points)
  3.3 Ado's Theorem (9 points)
  3.4 The adjoint representation (26 points)
4 Lie Algebras II (55 points)
  4.1 The Killing form and semisimple Lie algebras (24 points)
  4.2 Root space decomposition (31 points)
5 Root systems (125 points)
  5.1 What is a root system? (37 points)
  5.2 Dynkin diagrams (48 points)
  5.3 Dynkin diagrams of Lie algebras (40 points)
0 Acknowledgements

I'd like to take a moment to thank the individuals without whose contributions this Power Round would not have been nearly as successful:

• Asilata Bapat and Kevin Carde, who taught a class on Lie algebras at Canada/USA Mathcamp 2015, where I originally learned this material and whose notes provided a basis for the structure of the round;
• Bill Huang '19, whose insightful comments on an earlier draft kept this round from being the equivalent of a preposterous number of problem sets; and
• Evan Chen, whose thorough and thoughtful remarks helped guide this round to completion in the weeks leading up to PUMaC 2017.

A good deal of content was drawn from Asilata and Kevin's notes, as well as Introduction to Lie Algebras by Karin Erdmann and Mark Wildon. Students looking for a more comprehensive treatment may want to look into Erdmann and Wildon's text, and an even more rigorous treatment can be found in Lie Algebras by James Humphreys (which was the basis for Asilata and Kevin's course). The title of the round, "I Ain't Even Lie-in'. . . ," was inspired by "Dunked On" by Froggy Fresh. In addition, huge thanks go to the entire staff and volunteers of PUMaC 2017.

1 Linear Algebra I (20 points)

1.1 Vector Spaces (5 points)

Problem 1.1.1. (2 points) Show that any linearly independent set of size dim V < ∞ in the vector space V over the field F is a basis of V.

Solution: Suppose not, so that span{v_1, . . . , v_n} ⊂ V. Add in v_{n+1} ∈ V \ span{v_1, . . . , v_n}; we will show that {v_1, . . . , v_n, v_{n+1}} is linearly dependent. Give V the basis w_1, . . . , w_n, which we know exists by the definition of dimension. Since V = span{w_i} we can write v_1 = a_1 w_1 + · · · + a_n w_n where, without loss of generality, a_1 ≠ 0. So we can rearrange, and then we find that we can replace w_1 with v_1 in the basis. We repeat with each of the other elements until we find that in fact {v_1, . . . , v_n} are sufficient to span.

Problem 1.1.2. (3 points) Show that R[x] is a real vector space, and that C[x] is a complex vector space (where F[x] is the set of finite-degree polynomials with coefficients in F). Is R[x] a complex vector space? Is C[x] a real vector space?

Solution: First part: Adding polynomials over a given space is clearly valid, since it will just give a different polynomial, also with coefficients in the field (since the field is closed). Second part: no; 1 ∈ R[x] but i · 1 ∉ R[x], making it not closed under complex scalar multiplication. Third part: yes, since a complex number times a complex number is still complex.

1.2 Linear Mappings, Inner Products (11 points)

Problem 1.2.1. (2 points each) Fix N ∈ N. Let a(x) take the form a(x) = Σ_{n=0}^N a_n x^n and let F_N be the set of degree-N polynomials in F[x]. For each of the following functions f_i : F_N → F[x], determine with proof whether or not it is a linear map.

(i) f_1 : a(x) ↦ a(x) + a(1)
(ii) f_2 : a(x) ↦ a(x) + a(1)x^{N+1}
(iii) f_3 : a(x) ↦ Σ_{n=0}^N (a_n − a_{N−n}) x^n

Solution:
(i) Yes; f_1((αa + βb)(x)) = (αa + βb)(x) + (αa + βb)(1) = αa(x) + βb(x) + αa(1) + βb(1) = αf_1(a(x)) + βf_1(b(x)) (where α, β ∈ F; a, b ∈ F[x]).
(ii) Yes; f_2((αa + βb)(x)) = αa(x) + βb(x) + (αa(1) + βb(1))x^{N+1} = α(a(x) + a(1)x^{N+1}) + β(b(x) + b(1)x^{N+1}) = αf_2(a(x)) + βf_2(b(x)) (where α, β ∈ F; a, b ∈ F[x]).
(iii) Yes; f_3((αa + βb)(x)) = Σ_{n=0}^N (αa_n + βb_n − αa_{N−n} − βb_{N−n}) x^n = αf_3(a(x)) + βf_3(b(x)) (where α, β ∈ F; a, b ∈ F[x]).

Problem 1.2.2. (3 points) Describe as precisely as possible all possible inner products on C².

Solution: Using the properties in Definition 1.2.F, we find that
⟨(c_1, c_2), (c_3, c_4)⟩ = c_1 c̄_3 ⟨(1,0),(1,0)⟩ + c_1 c̄_4 ⟨(1,0),(0,1)⟩ + c_2 c̄_3 ⟨(0,1),(1,0)⟩ + c_2 c̄_4 ⟨(0,1),(0,1)⟩.
We now must simply determine the inner product of each pair of basis vectors ⟨e_i, e_j⟩, where e_i = (2 − i, i − 1). We know that ⟨e_i, e_i⟩ ∈ R^+ by the first and fourth properties, so we can

call r_i = ⟨e_i, e_i⟩. We know that ⟨e_1, e_2⟩ and ⟨e_2, e_1⟩ are complex conjugates, so we can let c = ⟨e_1, e_2⟩ ≠ 0. Then we get the inner product
⟨(c_1, c_2), (c_3, c_4)⟩ = c_1 c̄_3 r_1 + c_1 c̄_4 c + c_2 c̄_3 c̄ + c_2 c̄_4 r_2
for any r_1, r_2 ∈ R^+ and c ∈ C \ {0}, which can be verified by checking the properties in Definition 1.2.F.

Problem 1.2.3. (2 points) If φ is an isomorphism, show that φ^{−1} is also an isomorphism.

Solution: Since φ is a bijective homomorphism, its inverse φ^{−1} is also a homomorphism and is 1-to-1. Thus φ^{−1} is a homomorphism whose inverse (namely φ) is well-defined and a homomorphism, making φ^{−1} also a bijective homomorphism.

1.3 Matrices (4 points)

Problem 1.3.1. (1 point) Does matrix multiplication associate? Why or why not? Answer the same question about commutativity.

Solution: First question: yes, because application of linear maps associates. Second question: no;
[ 0 0 ; 1 0 ] · [ 0 1 ; 0 0 ] = [ 0 0 ; 0 1 ]  but  [ 0 1 ; 0 0 ] · [ 0 0 ; 1 0 ] = [ 1 0 ; 0 0 ].

Problem 1.3.2. (1 point) Show that, for linear mappings T_1, T_2 : R^m → R^n represented by the n × m matrices {a_{i,j}} and {b_{i,j}}, respectively, the linear mapping T_1 + T_2 : R^m → R^n can be represented by the n × m matrix {a_{i,j} + b_{i,j}} (where in each instance i ranges along 1 to m and j from 1 to n).

Solution: Give R^m the basis {v_1, . . . , v_m} and R^n the basis {w_1, . . . , w_n}. We'll write the action of T_1 as sending v_k ↦ a_{1,k} w_1 + · · · + a_{n,k} w_n and of T_2 : v_k ↦ b_{1,k} w_1 + · · · + b_{n,k} w_n. Adding these maps vector-by-vector can be shown to compare directly with what the sum of the matrices looks like.

Problem 1.3.3. (2 points) Show that, for matrices T_1 : R^n → R^p and T_2 : R^m → R^n and vector v ∈ R^m, T_1(T_2 v) = (T_1 × T_2)v.

Solution: Give R^m the basis {v_1, . . . , v_m}, R^n the basis {w_1, . . . , w_n}, and R^p the basis {x_1, . . . , x_p}. We'll write the action of T_2 as sending v_k ↦ a_{1,k} w_1 + · · · + a_{n,k} w_n and of T_1 : w_k ↦ b_{1,k} x_1 + · · · + b_{p,k} x_p. Composing these maps vector-by-vector can be shown to compare directly with what the product of the matrices looks like.

2 Linear Algebra II (22 points)

2.1 Eigen- (7 points)

Problem 2.1.1. (1 point) Show that each eigenspace V_λ is a subspace of V.

Solution: If v, w ∈ V_λ then M(av + bw) = aMv + bMw = aλv + bλw = λ(av + bw), as desired.

Problem 2.1.2. (1 point) Why does any mapping of a finite-dimensional space have a finite number of distinct eigenvalues?

Solution: Suppose not, and there were an infinite number. Then in particular we can choose more eigenvectors corresponding to distinct eigenvalues than the dimension of the space. These eigenvectors are linearly independent since otherwise they would all belong to the same eigenspace, so this contradicts the definition of dimension.
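Problem 1.3.1's counterexample, and the associativity claim, can be checked concretely. A pure-Python sketch (the matmul helper is ours, illustrative only):

```python
# Matrix multiplication associates but does not commute, using 1.3.1's pair.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 0], [1, 0]]
B = [[0, 1], [0, 0]]
C = [[1, 2], [3, 4]]

print(matmul(A, B))  # [[0, 0], [0, 1]]
print(matmul(B, A))  # [[1, 0], [0, 0]], so AB != BA
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True: associative
```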

Problem 2.1.3. (2 points) Suppose M : R² → R² is a linear map with matrix [ 2 0 ; 1 7 ]. Find each eigenvector and the corresponding eigenvalue of M. (Hint: which linear map sends the vector v to cv (c ∈ R)?)

Solution: M[ x ; y ] = [ 2x ; x + 7y ] = [ cx ; cy ]. We immediately find 2 cases: c = 2 or x = 0. In the former case, x + 7y = 2y ⟹ x = −5y. We thus get the 2-eigenvector [ −5t ; t ] for t ∈ R. If x = 0 then 7y = cy. y = 0 is not an option (this would just generate the trivial eigenspace) so we find c = 7, and the 7-eigenspace is generated by [ 0 ; t ] for t ∈ R.

Problem 2.1.4. (3 points) For each function in Problem 1.2.1, if it is a linear map, determine its eigenvalues and classify as best you can the associated eigenvectors.

Solution:
(i) If a(x) + a(1) = λa(x) then (λ − 1)a(x) = a(1) for all x. This means that either a(x) is a constant or λ − 1 = 0. In the former case, if a(x) is a nonzero constant then we get λ = 2, and V_2 = F. In the latter case, we get a(1) = 0, so V_1 is everything in F[x] with sum of coefficients equal to zero.
(ii) a(x) + a(1)x^{N+1} = λa(x) only when a(1) = 0, because x^{N+1} does not appear on the right-hand side. Thus we get that V_1 is everything in F[x] with sum of coefficients equal to zero.
(iii) Let ã(x) = Σ_n a_{N−n} x^n. Then in order to have a(x) − ã(x) = λa(x) we need ã(x) = (1 − λ)a(x); this means a_{N−n} = (1 − λ)a_n = (1 − λ)² a_{N−n}; the only solutions are λ = 0, 2. V_0 is the polynomials that are symmetric; V_2 is the polynomials that are antisymmetric (i.e. a_n = −a_{N−n}).

2.2 Trace (12 points)

Problem 2.2.1. (2 points) What is dim sl_n? Find a basis.

Solution: We want to have trace zero, so we have n − 1 degrees of freedom on the main diagonal. The remaining n² − n entries not on the main diagonal can be anything, so there are n² − n extra degrees of freedom. In total, dim sl_n = n² − 1. A basis could be: for the diagonal, the n − 1 matrices with a 1 at one diagonal entry, a −1 at the next, and zeroes elsewhere (some number of zeroes along the diagonal before the 1, and the rest after the −1); the other basis elements have a 1 at row i and column j, one for each i and j (i ≠ j).

Problem 2.2.2. (10 points) Classify as best you can the elements of so_n and sp_n; find dim so_n and dim sp_n.

Solution:

so_{2n+1}: Elements here take the form
[ 0 C^T −B^T ; B M P ; −C Q −M^T ]
where B and C each have n rows and 1 column, and P and Q both satisfy X^T = −X. P and Q have zeroes along the diagonal and total freedom in the upper triangle, for 2 · n(n−1)/2 degrees of freedom. M

has n² degrees of freedom. B and C each have n degrees of freedom. Thus the total dimension is 2n² + n.

so_{2n}: Elements here take the form [ M P ; Q −M^T ] where P and Q both satisfy X^T = −X. P and Q have zeroes along the diagonal and total freedom in the upper triangle, for 2 · n(n−1)/2 degrees of freedom. M has n² degrees of freedom. Thus the total dimension is 2n² − n.

sp_{2n}: Elements here take the form [ M P ; Q −M^T ] where P and Q both satisfy X^T = X. P and Q have total freedom in the upper triangle that includes the main diagonal, for 2 · n(n+1)/2 degrees of freedom. M has n² degrees of freedom. Thus the total dimension is 2n² + n.

2.3 Semisimplicity & nilpotency (3 points)

Problem 2.3.1. (3 points) Show that if X and Y are matrices that commute and X is nilpotent then tr(XY) = 0.

Solution: We note that if X^k = 0 then (XY)^k = X^k Y^k = 0, so XY is also nilpotent. It then just remains to be shown that all nilpotent matrices have trace zero. We know that X^n v = λ^n v when v is a λ-eigenvector. However, setting n = k gives λ^k = 0, so λ = 0. Since trace is the sum of eigenvalues, we are done.

3 Lie Algebras I (77 points)

3.1 What is a Lie algebra? (26 points)

Problem 3.1.1. (1 point) Let [f, g] = f ◦ g − g ◦ f for f, g ∈ gl_n. Show that the Jacobi identity holds for f, g, h ∈ gl_n, and therefore that gl_n is a Lie algebra.

Solution: [f, [g, h]] = [f, gh − hg] = fgh − fhg − ghf + hgf. [g, [h, f]] = [g, hf − fh] = ghf − gfh − hfg + fhg. [h, [f, g]] = [h, fg − gf] = hfg − hgf − fgh + gfh. Adding gives cancellation to 0.

Problem 3.1.2. (1 point) Explain why two Lie algebras of different dimension cannot be isomorphic.

Solution: Their underlying vector spaces must be isomorphic.

Problem 3.1.3. (4 points) Show that if L_1 and L_2 are abelian Lie algebras, they are isomorphic if and only if they are of the same dimension.
Solution: →: Trivial, since an isomorphism sends a basis to a basis (so a basis of n elements maps to another basis of n elements). ←: Consider any isomorphism of the underlying vector spaces (this exists because all complex vector spaces of the same dimension are isomorphic) and call it φ. For any x, y ∈ L_1, 0 = φ(0) = φ([x, y]) = [φ(x), φ(y)], so the bracket is preserved.

Problem 3.1.4. (4 points) Assume that for all x, y, z ∈ L, [x, [y, z]] = [[x, y], z]. What is the most general statement you can make about elements of L?

Solution: [L, L] ⊆ Z(L) (Z(L) is L's center), or, put less symbolically, anything bracketed with anything else commutes with anything in L. Another way to put it is that

[L, [L, L]] = 0, or anything that uses this but with generic elements of L. This is due to the Jacobi identity: 0 = [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = [y, [z, x]] by cancellation; since this is the case for any x, y, z, [L, L] ⊆ Z(L) is all we can say.

Problem 3.1.5. Fix n. Suppose there is some subalgebra S ⊆ gl_n for which φ : sl_2 → S is a bijective Lie algebra homomorphism (where gl_n has the bracket as in Problem 3.1.1). If
v_1 = [ 0 1 ; 0 0 ], v_2 = [ 0 0 ; 1 0 ], and v_3 = [ 1 0 ; 0 −1 ],
let V_i = φ(v_i). Let λ_3 and λ′_3 be the greatest and least eigenvalue, respectively, of V_3, and let v, v′ ∈ Cⁿ be vectors such that V_3 v = λ_3 v and V_3 v′ = λ′_3 v′.

(i) (5 points) Find, with proof, V_1 v and V_2 v′.
(ii) (7 points) Find, with proof, λ_3 and λ′_3.

Solution:
(i) We first compute [v_3, v_1] = 2v_1 and [v_3, v_2] = −2v_2. φ([v_3, v_1])v = [φ(v_3), φ(v_1)]v, and expanding gives V_3 V_1 v = (λ_3 + 2)V_1 v. But we assumed λ_3 was the maximal real eigenvalue; this is a contradiction if V_1 v ≠ 0. Thus we conclude that it is indeed 0. By analogous computation, V_3 V_2 v = (λ_3 − 2)V_2 v, and V_3 V_2 v′ = (λ′_3 − 2)V_2 v′, which forces V_2 v′ = 0.
(ii) We repeat the process of computing V_3 V_2 v to get a string of eigenvectors (v =) v_λ, v_{λ−2}, . . . , v_{−λ+2}, v_{−λ}. We know that the string must stop here because each is an eigenvector of V_3 and tr V_3 = tr φ(v_3) = tr(φ([v_1, v_2])) = tr(V_1 V_2 − V_2 V_1) = 0. Then
Σ_{k=0}^n (λ − 2k) = 0
(n + 1)λ − n(n + 1) = 0
λ_3 = n,
and so the minimum must be λ′_3 = −n.

Problem 3.1.6. (4 points) Up to isomorphism, how many Lie algebras have dimension 1? dimension 2? List them all.

Solution: Dimension 1: any such Lie algebra is abelian, since its elements take the form ax for some nonzero vector x, and thus [ax, bx] = ab[x, x] = 0. This is unique up to isomorphism by Problem 3.1.3.
Dimension 2: If it's abelian then we've found the only one up to isomorphism, again by Problem 3.1.3. If it's not abelian, then we know that the set of post-bracket operations (e.g. [v, w] for v, w ∈ L) is spanned by [e, f] for e, f the basis of L. (This is because all elements are of the form ae + bf, so [ae + bf, ce + df] = (ad − bc)[e, f].) If [e, f] = 0 then this is an abelian algebra. Otherwise [e, f] = x for some x ≠ 0. We can then take a change-of-basis isomorphism to make x one of the basis elements and y the other. Thus up to isomorphism the last option is given by [x, y] = x.

3.2 Ideals and Subalgebras (16 points)

Problem 3.2.1. (i) (1 point) If L is an abelian Lie algebra, show that any subspace of L is a subalgebra and an ideal.

Solution: Let V ⊂ L be a subspace. If v ∈ V, x ∈ L then [v, x] = 0 ∈ V, so V is an ideal and thus a subalgebra.

(ii) (4 points) Find a nonabelian Lie algebra on a space of n × n matrices for some n > 2 that has a subalgebra that is not an ideal. Specify that subalgebra and briefly justify why it is not an ideal.

Solution: Let L = gl_3 and let S ⊂ L be the subalgebra of diagonal matrices. S is a subalgebra because it is a vector space with bracket identically 0. However, consider
x = [ 1 0 0 ; 0 2 0 ; 0 0 1 ] ∈ S and y = [ 1 1 0 ; 0 2 0 ; 0 0 3 ] ∈ L \ S. Then [x, y] = [ 0 −1 0 ; 0 0 0 ; 0 0 0 ] ∉ S.

Problem 3.2.2. (5 points) Take I = {x ∈ L | x nilpotent}, where L is a subspace of gl_n. Is I a subalgebra? an ideal? Fully justify your responses.

Solution: The intuitive reasoning is that either I is a subalgebra but not an ideal, or it's both. As it turns out, however, neither is the case. Consider N_1 = [ 0 1 ; 0 0 ], N_2 = [ 0 0 ; 1 0 ]. N_1, N_2 ∈ I. [N_1, N_2] = [ 1 0 ; 0 −1 ], whose powers are never 0 (they have period 2). This extends to N_i being the n × n matrix whose entries are all zero except a 2 − i in the upper-right corner and an i − 1 in the lower-left corner; here, in the general case, [N_1, N_2] ∉ I for any value of n, making I not even closed under the Lie bracket.

Problem 3.2.3. (3 points) Let [L_1, L_2] = {[v_1, v_2] | v_i ∈ L_i} for Lie algebras L_1 and L_2. Is [I, J] an ideal for ideals I and J of the same Lie algebra?

Solution: Yes. Take a ∈ [I, J], z ∈ L. We wish to have [a, z] ∈ [I, J]. a is of the form [x, y] for some x ∈ I, y ∈ J. The Jacobi identity gives that [[x, y], z] = −[[y, z], x] − [[z, x], y]. [y, z] ∈ J and [z, x] ∈ I, so indeed [[x, y], z] ∈ [I, J].

Problem 3.2.4. (3 points) Show that ker φ is an ideal and im φ is a subalgebra of the domain and image space, respectively, for φ a Lie algebra homomorphism.

Solution: If x ∈ ker φ then φ(x) = 0. [y, x] ∈ ker φ iff φ([y, x]) = 0, and φ([y, x]) = [φ(y), φ(x)] = [φ(y), 0] = 0, as desired.
For the image, we can show the Jacobi identity because any three elements appear as φ(x), φ(y), φ(z) for some x, y, z in the domain space; then [φ(x), [φ(y), φ(z)]] = [φ(x), φ([y, z])] = φ([x, [y, z]]); summing cyclically gets what we want. The other properties are quicker to check.

3.3 Ado's Theorem (9 points)

Problem 3.3.1. Consider the additive vector space V_d of degree-d polynomials with real coefficients.

(i) (1 point) Find the dimension of V_3, and find a basis.

Solution: V_3 is polynomials of the form a + bx + cx² + dx³, so the basis could be {1, x, x², x³} and the dimension is 4.

(ii) (5 points) Say V_2 is given a bracket operation: for p(x) = α_0 + α_1 x + α_2 x² and q(x) = β_0 + β_1 x + β_2 x² elements of V_2,
[p, q] = (α_1 β_2 − α_2 β_1) + (α_2 β_0 − α_0 β_2)x + (α_0 β_1 − α_1 β_0)x².
Find a subspace S of gl_n for some n such that there is an isomorphism between V_2 and S; explicitly describe such an isomorphism that preserves the bracket.

PUMaC 2017 Power Round Page 8   0 − α α 2 1   Solution : One possibility: n = 3. φ : p 7 → α 0 − α , and 2 0 − α α 0 1 0         0 0 1 0 1 0 0 0 0   φ       V ' span 0 0 0 , − 1 0 0 , 0 0 1 . 2   − 1 0 0 0 0 0 0 − 1 0 2 Problem 3.3.2. (3 points) Suppose R is given a Lie bracket that makes it abelian. Find a 2 subspace S of gl for some n such that there is an isomorphism between R and S ; explicitly n describe the isomorphism. [ ] {[ ] [ ]} φ a 0 1 0 0 0 2 Solution : One possibility: n = 2. φ : ( a, b ) 7 → , and R ' span , . 0 b 0 0 0 1 3.4 The adjoint representation (26 points) Problem 3.4.1. (1 point) Verify that ad x is a linear map. Solution : For any z ∈ L , (ad ( ax + by ))( z ) = [ ax + by, z ] = a [ x, z ] + b [ y, z ] = a (ad x ) z + b (ad y ) z . Problem 3.4.2. (i) (2 points) Explain how ad x can be thought of as an element of gl if x ∈ sl . 2 3 Solution : sl is a 3-dimensional algebra, which means that it has 3 basis elements; 2 ad x : L → L , so to write it as a matrix we would consider how the basis elements are transformed under the function. (ii) (5 points) Find an explicit isomorphism from the Lie algebra sl to a subalgebra of gl . 2 3   0 0 − 2   Solution : [ v , v ] = 0 , [ v , v ] = v , [ v , v ] = − 2 v , so ad v = 0 0 0 . Simi- 1 1 1 2 3 1 1 1 1 0 1 0     0 0 0 2 0 0     lar computation gives ad v = 0 0 2 and ad v = 0 − 2 0 . We then find 2 3 − 1 0 0 0 0 0 ad x ∈ gl by finding x = ae + be + ce and setting ad x = a ad e + b ad e + c ad e . 1 2 3 1 2 3 3 Problem 3.4.3. (5 points) Show that the set of 3 × 3 complex upper-triangular matrices is a Lie algebra, find a basis, and write ad e explicitly for each basis vector e . Find ad e ’s i i i eigenvalues for each basis vector e . i       0 1 0 0 0 1 0 0 0       Solution : L has the basis e = 0 0 0 , e = 0 0 0 , e = 0 0 1 . [ e , e ] = 1 2 3 1 1 0 0 0 0 0 0 0 0 0   0 0 0   0 , [ e , e ] = 0 , [ e , e ] = e , so ad e = 0 0 1 . 
Similar computation gives ad e_2 = 0 and ad e_3 = [[0,0,0],[−1,0,0],[0,0,0]]. These were computed by considering how each basis vector transforms under action by each adjoint. Each ad e_i is nilpotent ((ad e_1)² = (ad e_3)² = 0, and ad e_2 = 0 outright), so every eigenvalue of every ad e_i is 0.
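These small computations are easy to verify by machine. The following is our own sketch (not part of the official solution; helper names like `bracket`, `coords`, and `ad` are ours), recomputing the three adjoint matrices of Problem 3.4.3 from the basis e_1 = E_{1,2}, e_2 = E_{1,3}, e_3 = E_{2,3}:

```python
# Recompute ad e_i for the 3x3 strictly-upper-triangular basis of Problem 3.4.3.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):  # the matrix commutator [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def E(i, j):  # elementary matrix with a single 1 in row i, column j (1-indexed)
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

e1, e2, e3 = E(1, 2), E(1, 3), E(2, 3)
basis = [e1, e2, e3]

def coords(M):  # coordinates of M in the basis (e1, e2, e3)
    return [M[0][1], M[0][2], M[1][2]]

def ad(x):  # matrix of ad x: column j holds [x, e_j] in coordinates
    cols = [coords(bracket(x, e)) for e in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

print(ad(e1))  # [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
print(ad(e2))  # the zero matrix
print(ad(e3))  # [[0, 0, 0], [-1, 0, 0], [0, 0, 0]]
```

Squaring any of these gives the zero matrix, confirming that every eigenvalue is 0.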

Problem 3.4.4. (5 points) Show that [x, y] = 0 implies (ad x) ∘ (ad y) = (ad y) ∘ (ad x).

Solution: Consider applying these functions to any z ∈ L. (ad x ∘ ad y)(z) = ad x([y, z]) = [x, [y, z]] = −[y, [z, x]] − [z, [x, y]] = −[y, [z, x]] = [y, [x, z]] = ad y((ad x)(z)) = (ad y ∘ ad x)(z). Facts used: Jacobi identity, anticommutativity.

Problem 3.4.5. (6 points) If x is nilpotent, is ad x nilpotent? If ad x is nilpotent, is x nilpotent? Justify your response.

Solution: For the first question: yes. Say x^k = 0. Then for any y,

(ad x)^{2k} y = Σ_{m=0}^{2k} (−1)^{f(m)} c_m x^m y x^{2k−m}

for integer coefficients c_m and some function f of m, both immaterial: for each m, either m ≥ k or 2k − m ≥ k, so every term contains a factor x^k = 0, and all terms vanish regardless of y. For the second question: no: (ad I_n)y = 0 for every y, making ad I_n trivially nilpotent, but I_n^k = I_n ≠ 0 for every k.

Problem 3.4.6. (2 points) Show that ad x satisfies the relation (ad x)([y, z]) = [(ad x)(y), z] + [y, (ad x)(z)] for any x, y, z ∈ L. (Such functions are called derivations.)

Solution: (ad x)([y, z]) = [x, [y, z]] = −[y, [z, x]] − [z, [x, y]] = [y, [x, z]] + [[x, y], z] = [y, (ad x)(z)] + [(ad x)(y), z].

4 Lie Algebras II (55 points)

4.1 The Killing form and semisimple Lie algebras (24 points)

Problem 4.1.1. (1 point) Verify that the Killing form is symmetric and bilinear.

Solution: Linearity: ad is linear, so κ(ax, by) = tr(ad(ax) ∘ ad(by)) = tr(ab ad x ∘ ad y) = ab tr(ad x ∘ ad y) = ab κ(x, y). Also κ(x, y + z) = tr(ad x ∘ ad(y + z)) = tr(ad x ∘ (ad y + ad z)) = tr(ad x ∘ ad y + ad x ∘ ad z) = κ(x, y) + κ(x, z). The analogue holds in the other component. Symmetry: κ(x, y) = tr(ad x ∘ ad y) = tr(ad y ∘ ad x) = κ(y, x). Facts used: tr(XY) = tr(YX), linearity of tr.

Problem 4.1.2. (5 points) Show that κ([x, y], z) = κ(x, [y, z]).
Solution: Since ad is a homomorphism of Lie algebras, ad[x, y] = ad x ∘ ad y − ad y ∘ ad x, so

κ([x, y], z) = tr(ad[x, y] ∘ ad z) = tr(ad x ∘ ad y ∘ ad z − ad y ∘ ad x ∘ ad z) = tr(ad x ∘ ad y ∘ ad z) − tr(ad y ∘ ad x ∘ ad z) = tr(ad x ∘ ad y ∘ ad z) − tr(ad x ∘ ad z ∘ ad y) = tr(ad x ∘ (ad y ∘ ad z − ad z ∘ ad y)) = tr(ad x ∘ ad[y, z]) = κ(x, [y, z]).

Facts used: tr(XY) = tr(YX), linearity of tr and κ.

Problem 4.1.3. (10 points) Show that x ∈ L is semisimple if and only if ad x is semisimple.

Solution: We note that ad is injective: if ad x = ad y then ad(x − y) = 0, but then κ(x − y, z) = 0 for all z ∈ L, so x − y = 0 by nondegeneracy. Any element of a matrix space can be written uniquely as the sum of a semisimple and a nilpotent component (those components commute, though this won't be necessary for us right now). Thus if we can show that x semisimple implies ad x semisimple we'll be done, since the inverse of the ad map will take a semisimple adjoint with zero nilpotent part back to an element of L with zero nilpotent part. Suppose x has eigenvectors e_i with eigenvalues λ_i. We may give ad L's matrix space the basis of the e_{i,j}, the dim L × dim L matrix with a 1 at row i and column j. Computation gives (ad x)e_{i,j} = (λ_i − λ_j)e_{i,j}, so the sum of the dimensions of ad x's eigenspaces is maximal, which is equivalent to being diagonalizable (since the structure of eigenspaces is preserved by similarity). Therefore x semisimple implies ad x semisimple, and we are done.

Problem 4.1.4.

(i) (3 points) Let I′ = {x ∈ L | κ(x, y) = 0 ∀y ∈ I} for I ⊆ L an ideal. Show that I′ is also an ideal.

(ii) (5 points) If L is semisimple, show that there are n simple subalgebras L_i (1 ≤ n ≤ dim L) such that

L = ⊕_{i=1}^n L_i.

Solution:

(i) Take any z ∈ L and x ∈ I′; we must show [x, z] ∈ I′, i.e. that κ([x, z], y) = 0 for any y ∈ I. Manipulating κ as in Problem 4.1.2 gives κ([x, z], y) = κ(x, [z, y]). Since I is an ideal, [z, y] ∈ I, so κ([x, z], y) = κ(x, [z, y]) = 0, as desired.

(ii) We will actually show this for ideals. Let I ⊆ L have least nonzero dimension among all ideals. (If I = L then we're finished.) We may write L = I ⊕ I′, since I ∩ I′ is trivial. By additivity of dimension, dim I′ < dim L, so we descend by continuing this process on I′.

4.2 Root space decomposition (31 points)

Problem 4.2.1. (5 points) Show that each λ_i is linear.

Solution: (αλ(h_1) + βλ(h_2))v = α[h_1, v] + β[h_2, v] = [αh_1 + βh_2, v] = λ(αh_1 + βh_2)v, as desired.

Problem 4.2.2. (3 points) Show that L is isomorphic to the vector space ⊕_{λ: H → C} L_λ.

Solution: Each λ is a linear map, so L contains the span of the λ-eigenvectors. In addition, each vector in L belongs to one such eigenspace. To see this, note that L has a basis of simultaneous eigenvectors of H, which is how we know that the λ's exist.
If v_1 ∈ L_α and v_2 ∈ L_β, then [h, v_1 + v_2] = [h, v_1] + [h, v_2] = α(h)v_1 + β(h)v_2, which equals γ(h)(v_1 + v_2) for some γ ∈ H* only when α(h) = β(h) for all h ∈ H. We therefore find that when α ≠ β these spaces never have nonzero overlap, and thus they are linearly independent from each other, and we can add the spaces in the desired manner.

Problem 4.2.3. (6 points) Suppose α and β are roots of L having maximal toral subalgebra H. Under what condition on α and β is it true that κ(v_1, v_2) ≠ 0 for some v_1 ∈ L_α, v_2 ∈ L_β?

Solution: The condition is α + β = 0. Equivalently, we show: if α + β ≠ 0, then κ(v_1, v_2) = 0. By nondegeneracy there is h ∈ H with (α + β)(h) ≠ 0. Then α(h)κ(v_1, v_2) = κ(α(h)v_1, v_2) = κ([h, v_1], v_2) = −κ([v_1, h], v_2) = −κ(v_1, [h, v_2]) = −κ(v_1, β(h)v_2) = −β(h)κ(v_1, v_2). Adding gives (α + β)(h)κ(v_1, v_2) = 0, so κ(v_1, v_2) = 0, and we are done.
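The vanishing condition of Problem 4.2.3 can be checked concretely on sl_2, whose adjoint matrices appeared in Problem 3.4.2(ii). This is our own numerical sketch, not part of the official solution; the labels e, f, h and the root values 2, −2, 0 are the standard sl_2 conventions, with κ(x, y) = tr(ad x ∘ ad y):

```python
# Killing form of sl_2 on the basis e, f, h with roots 2, -2, 0:
# kappa(x, y) is nonzero exactly when the roots of x and y sum to zero.
ad = {
    "e": [[0, 0, -2], [0, 0, 0], [0, 1, 0]],   # root  2
    "f": [[0, 0, 0], [0, 0, 2], [-1, 0, 0]],   # root -2
    "h": [[2, 0, 0], [0, -2, 0], [0, 0, 0]],   # root  0
}
root = {"e": 2, "f": -2, "h": 0}

def kappa(x, y):  # tr(ad x . ad y), expanded without forming the product
    A, B = ad[x], ad[y]
    return sum(A[i][k] * B[k][i] for i in range(3) for k in range(3))

for x in ad:
    for y in ad:
        k = kappa(x, y)
        print(x, y, k)
        # kappa vanishes unless the two roots sum to zero
        assert (k != 0) == (root[x] + root[y] == 0)
```

In particular κ(e, f) = 4 and κ(h, h) = 8, while every pairing with nonzero root sum is 0.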

Problem 4.2.4. (6 points) Show that ⟨·,·⟩: H* × H* → C as defined above is an inner product.

Solution: We know that this pairing is a symmetric bilinear form via what we know about κ. We need to check that ⟨λ, λ⟩ is always positive and real for nonzero λ ∈ H*:

⟨λ, λ⟩ = κ(t_λ, t_λ) = Σ_{α ∈ R_H} α(t_λ)² = Σ_{α ∈ R_H} κ(t_α, t_λ)² = Σ_{α ∈ R_H} ⟨α, λ⟩² ∈ R⁺.

(R_H is the set of roots in H*.) This follows from several results: H has simultaneous eigenvectors; the definition of the inner product on H*; and the fact that the inner product is rational-valued on R_H, since κ is the trace of matrices with integer eigenvalues.

Problem 4.2.5. (6 points) Show that [x, y] = κ(x, y)t_λ for x ∈ L_λ, y ∈ L_{−λ}.

Solution: For any h ∈ H, κ(h, [x, y]) = κ([h, x], y) = λ(h)κ(x, y) = κ(t_λ, h)κ(x, y) = κ(h, κ(x, y)t_λ). Subtracting gives 0 = κ(h, [x, y] − κ(x, y)t_λ) for every h. The second argument is independent of h, so by nondegeneracy we find [x, y] = κ(x, y)t_λ, as desired.

Problem 4.2.6. (5 points) Show that for λ ∈ H*, κ(t_λ, t_λ)κ(h_λ, h_λ) = 4, and show that h_λ = 2t_λ / κ(t_λ, t_λ).

Solution: λ(h_λ) = 2 by construction. Thus, also by construction,

2 = λ(h_λ) = κ(t_λ, h_λ) = κ(t_λ, [e_λ, f_λ]) = κ(t_λ, κ(e_λ, f_λ)t_λ) = κ(t_λ, t_λ)κ(e_λ, f_λ).

As h_λ = [e_λ, f_λ] = κ(e_λ, f_λ)t_λ, we substitute κ(t_λ, t_λ)κ(e_λ, f_λ) = 2 to find h_λ = 2t_λ / κ(t_λ, t_λ), the second desired result. We use this to directly derive the first desired result:

κ(h_λ, h_λ) = κ(2t_λ / κ(t_λ, t_λ), 2t_λ / κ(t_λ, t_λ)) = 4κ(t_λ, t_λ) / κ(t_λ, t_λ)² = 4 / κ(t_λ, t_λ),

which rearranges to the first desired result.

5 Root systems (125 points)

5.1 What is a root system? (37 points)

Problem 5.1.1. (1 point) Find all root systems in R.
Solution: We know that a root system must span its vector space, and contain only two scalar multiples of any given element. Thus all possible root systems are {r, −r} for any given r > 0.

Problem 5.1.2. (3 points) In condition 3, note that v and w are interchangeable. Use this and the identity ⟨v, w⟩ = ‖v‖·‖w‖ cos φ (where φ is the angle between the vectors) to show that 4cos²φ ∈ {0, 1, 2, 3} whenever v ≠ ±w.

Solution: (2⟨v, w⟩/⟨v, v⟩)(2⟨w, v⟩/⟨w, w⟩) = 4cos²φ, and v and w are in R only if both factors on the left are integers. Thus 4cos²φ is a nonnegative integer; since v ≠ ±w, we have cos²φ < 1, so 4cos²φ is no larger than 3.

Problem 5.1.3. (8 points) Think about the construction of the root systems in the text, as well as Problem 5.1.2; use these to understand why the subsets of R² shown in Figure 1 are root systems. Draw (on separate coordinate axes) all possible root systems in R².

Solution: There are 4 total possible root systems in R². Two were mentioned in the contest. The following are B_2 and G_2, respectively (figures omitted).

In the former, α = (1, 0) and β = (−1, 1). In the latter, α = (1, 0) and β = (−3/2, √3/2).

Problem 5.1.4. (10 points) Show that any root system can be written as the union of irreducible root systems. (Use the fact that a root system R is irreducible if there is no subset R′ such that for α ∈ R′ and β ∈ R∖R′, ⟨α, β⟩ = 0.)

Solution: Let a ≡ b when there is a path of edges in the Dynkin diagram from the node corresponding to a to that of b, and let the R_i enumerate the equivalence classes of this relation. Each R_i is a root system (by checking the root system properties), and each is necessarily irreducible: an individual R_i may not be split into mutually orthogonal sets, since otherwise there would be S ⊂ R_i for which ⟨s, r⟩ = 0 for each s ∈ S, r ∈ R_i∖S, which would contradict the reachability of all elements of R_i from all other elements.

Problem 5.1.5. (15 points) Show that any root system has a base.

Solution: Clearly if dim V = 1 there is a base: for the root system {r, −r}, the set {r} suffices. Now say dim V ≥ 2. Let R^⊥ = {x ∈ V | ⟨x, v⟩ ≠ 0 ∀v ∈ R} (a nonempty set, because it would require uncountably many (n−1)-dimensional objects to fully cover an n-dimensional space). Fix some x ∈ R^⊥, let R⁺ = {v ∈ R | ⟨x, v⟩ > 0}, and let B be the elements of R⁺ that are not the sum of two other elements of R⁺.

Suppose B were not linearly independent. Then there are real values r_v for which Σ_{v∈B} r_v v = 0. Let B⁺ be the elements with r_v > 0 and B⁻ those with r_v < 0, and set

x′ = Σ_{v∈B⁺} r_v v = Σ_{v′∈B⁻} (−r_{v′}) v′.

Now we take the inner product of these two expressions for x′:

⟨x′, x′⟩ = Σ_{v∈B⁺, v′∈B⁻} r_v (−r_{v′}) ⟨v, v′⟩.

We note that the inner product of any pair of distinct elements of B must be non-positive, i.e. they form non-acute angles; this is because we must satisfy the restrictions on the possible angle cosines (which may only have absolute value 0, 1/2, √2/2, √3/2, or 1, so we are constrained in our choice of angles). However, ⟨x′, x′⟩ ≥ 0, while the sum above is a sum of positive quantities multiplied by non-positive inner products, hence ≤ 0. Thus ⟨x′, x′⟩ = 0, i.e. x′ = 0. Taking ⟨x′, x⟩ gives 0 = ⟨0, x⟩ = Σ_{v∈B⁺} r_v ⟨v, x⟩; here each r_v > 0 and each ⟨v, x⟩ > 0, so B⁺ must be empty, and by the same argument so must B⁻. Hence r_v = 0 for all v ∈ B, contradicting B's linear dependence. Additionally, for every v ∈ R, either v or −v is in R⁺ (since ⟨v, x⟩ > 0 or ⟨v, x⟩ < 0), so B spans V and |B| = dim V.

We must also show that B satisfies the other base property. Take any w ∈ R, which lies
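The construction in this proof is effective, and can be sketched numerically (our own illustration, not part of the official solution) on the hexagonal root system A_2: fix a generic x, take R⁺ = {v : ⟨x, v⟩ > 0}, and keep the elements of R⁺ that are not sums of two elements of R⁺. All names below are ours.

```python
# Problem 5.1.5's base construction on the A2 root system {±a, ±b, ±(a+b)}.
import math

a = (1.0, 0.0)
b = (-0.5, math.sqrt(3) / 2)
ab = (a[0] + b[0], a[1] + b[1])
R = [a, b, ab, tuple(-c for c in a), tuple(-c for c in b), tuple(-c for c in ab)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = (1.0, 0.1)                    # generic: orthogonal to no root
Rplus = [v for v in R if dot(x, v) > 0]

def close(u, v):                  # float-tolerant equality of vectors
    return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9

def is_sum(w):                    # is w a sum of two elements of R+?
    return any(close((u[0] + v[0], u[1] + v[1]), w) for u in Rplus for v in Rplus)

B = [v for v in Rplus if not is_sum(v)]
print(sorted(B))                  # the two simple roots: |B| = dim V = 2
```

For this x, R⁺ = {a, a + b, −b} and a = (a + b) + (−b) is excluded, leaving a base of two roots, as the proof predicts.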

5.2 Dynkin diagrams (48 points)

Problem 5.2.1. (10 points) Show that a root system is irreducible if and only if it has a connected Dynkin diagram.

Solution:

⇒: If a root system is irreducible, then it has a connected Dynkin diagram. Suppose not: that there were an irreducible root system with a disconnected diagram. Let R_1 be one of the disjoint components. Then there are no edges from any of R_1 to the rest of the root system. But we have defined the edge count between v and w to be in nonzero proportion with ⟨v, w⟩², which means that ⟨v, w⟩ = 0 for v ∈ R_1, w ∈ R∖R_1. This contradicts irreducibility.

⇐: If a Dynkin diagram is connected, then its root system is irreducible. Again we suppose not, and that it is reducible. Then there would be a subset R_1 ⊂ R for which ⟨v, w⟩ = 0 whenever v ∈ R_1, w ∈ R∖R_1. However, by connectedness the vertices in the Dynkin diagram corresponding to these vectors are connected to at least something outside of R_1. Again, we have defined the edge count between v and w to be in nonzero proportion with ⟨v, w⟩², which means that ⟨v, w⟩ ≠ 0 for some v ∈ R_1, w ∈ R∖R_1. This contradicts reducibility.

Problem 5.2.2. (7 points) Show that no Dynkin diagram may have a cycle.

Solution: We actually show a stronger result: no set that we can turn into a Dynkin diagram may have as many edges as vertices, as a cycle would (since it'd be a sub-diagram). Suppose the vectors {v_1, ..., v_n} can form a valid Dynkin diagram, and set v = Σ v_i. Then

0 < ⟨v, v⟩ = n + 2Σ_{i<j} ⟨v_i, v_j⟩

(since the v_i are unit vectors). So n > −2Σ_{i<j} ⟨v_i, v_j⟩ ≥ C, since whenever v_i and v_j are connected they have ⟨v_i, v_j⟩ ≤ 0 and 4⟨v_i, v_j⟩² ∈ {1, 2, 3}, so each connected pair contributes at least 1 to the middle sum, where C is the number of such pairs. Thus we find that n > C, as desired, and the proof is finished.

Problem 5.2.3.
(8 points) Show that no node in a Dynkin diagram may have more than three total edges connected to it.

Solution: Say v is connected to v_1, ..., v_n. There are no cycles, so ⟨v_i, v_j⟩ = 0 when i ≠ j. We may find a unit vector v_0 with ⟨v, v_0⟩ ≠ 0, orthogonal to each v_i, such that span{v_0, v_1, ..., v_n} = span{v, v_1, ..., v_n}. Thus we may write v = Σ_{i=0}^n ⟨v, v_i⟩v_i, and taking the inner product with v, since v is a unit vector, gives 1 = ⟨v, v⟩ = Σ_{i=0}^n ⟨v, v_i⟩². Since 0 < ⟨v, v_0⟩² < 1, we find Σ_{i=1}^n ⟨v, v_i⟩² < 1. Since 4⟨v, v_i⟩² ∈ {1, 2, 3}, each ⟨v, v_i⟩² ≥ 1/4, so the total edge count at v, namely Σ_{i=1}^n 4⟨v, v_i⟩², is less than 4; in particular n may not be 4 or more.

Problem 5.2.4. (4 points) The Shrinking Lemma states that if a Dynkin diagram on the base B has n vertices v_1, v_2, ..., v_n that are in a line and v = Σ_{i=1}^n v_i, then the base (B∖{v_1, v_2, ..., v_n}) ∪ {v} corresponds to the root system with the intermediate vertices "combined" into a single vertex (as before). (See Figure 4.) Use the Shrinking Lemma to show that no Dynkin diagram may have more than one branch, more than one double edge, or both a branch and a double edge.

Solution: The first case is of two double edges. Some pair of them is connected by a line; invoke the Shrinking Lemma to turn that line into a single node, with two double edges attached to it. This violates Problem 5.2.3. The second case is similar: there are at least two branching points, with some pair of them connected by a line. The third case follows for the same reason; we use the Shrinking Lemma to bring the branching point and the double edge together, giving a node with four edges attached.
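The bound in Problem 5.2.3 is finite enough to enumerate by machine. The following is our own sketch (not part of the official solution), assuming only the two facts used above: each neighbor v_i contributes ⟨v, v_i⟩² = k_i/4 with k_i ∈ {1, 2, 3} edges, and these squares must sum to strictly less than 1:

```python
# Enumerate the feasible total edge counts at a single Dynkin-diagram node.
from itertools import product

feasible = set()
for n in range(1, 5):                    # try up to 4 neighbors
    for ks in product((1, 2, 3), repeat=n):   # k_i = edge multiplicity to v_i
        if sum(ks) / 4 < 1:              # constraint: sum of <v, v_i>^2 < 1
            feasible.add(sum(ks))
print(sorted(feasible))  # [1, 2, 3] -- a node carries at most 3 edges
```

No assignment of multiplicities reaches a total of 4, matching the conclusion that a node has at most three edges.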

Figure 1: An impossible Dynkin diagram. The Shrinking Lemma gives that if there is a root system corresponding to a variant of this diagram (with n nodes amongst the ⋯), then there is one corresponding to the variants with n−1, n−2, ..., 1, 0 vertices.

Problem 5.2.5. (4 points) Prove that if a Dynkin diagram has a double edge with both ends connected to other nodes then it must be F_4.

Solution: Suppose there are m nodes to the left of the double edge and n to the right (including the nodes that form the edge itself). Number them v_1, ..., v_m and w_1, ..., w_n, respectively, where 4⟨v_m, w_n⟩² = 2 (so ⟨v_m, w_n⟩² = 1/2); the other v_i and w_j do not "touch," and thus ⟨v_i, w_j⟩ = 0 otherwise. Additivity of inner products means that if we set v = Σ_i i·v_i and w = Σ_j j·w_j, then ⟨v, w⟩² = ⟨m v_m, n w_n⟩² = m²n²⟨v_m, w_n⟩² = m²n²/2. We know that v and w are non-parallel vectors with some angle θ between them, so ⟨v, w⟩ = ‖v‖‖w‖ cos θ < ‖v‖‖w‖, i.e. ⟨v, w⟩² < ‖v‖²‖w‖² = ⟨v, v⟩⟨w, w⟩. We already know how to evaluate these quantities, as in Section 5.2: ⟨v, v⟩ = m(m+1)/2 and ⟨w, w⟩ = n(n+1)/2. Substituting in what we know, m²n²/2 < m(m+1)n(n+1)/4, which simplifies to (m−1)(n−1) < 2. We want m, n > 1, which forces m = n = 2, as desired.

Figure 2: F_4, the Dynkin diagram discussed in Problem 5.2.5.

Problem 5.2.6. (15 points) Imagine a Dynkin diagram with a "branching point": there is some vector with three lines coming off of it. Show that one of those lines must have length one (i.e. only one edge), and show that of the other two, either

• if one has length 1 then the third may have any length;

• if one has length 2 then the other's length may not be more than 4.

Solution: We know that there are no double edges, by Problem 5.2.4. Suppose the branching node is z, and it is joined to x_n, w_m, v_p, where each of those is connected to a line running down to index 1, such as x_{n−1}, ..., x_1.
We create the "super-vertices" x, w, and v as specified earlier. Each is orthogonal to the others; normalize them to x′, w′, v′. We can give span{z, x, w, v} the basis {z_0, x′, w′, v′} for z_0 a unit vector not orthogonal to z but orthogonal to each of x′, w′, v′. It is rather readily seen that z = ⟨z, x′⟩x′ + ⟨z, w′⟩w′ + ⟨z, v′⟩v′ + ⟨z, z_0⟩z_0. z too is a unit vector (since it lies in a normalized base). Taking another inner product with z gives

1 = ⟨z, x′⟩² + ⟨z, w′⟩² + ⟨z, v′⟩² + ⟨z, z_0⟩², i.e. ⟨z, x′⟩² + ⟨z, w′⟩² + ⟨z, v′⟩² < 1

(as 0 < ⟨z, z_0⟩² < 1). We've already computed the inner products of x, w, v with themselves, and from the Dynkin diagram we know that ⟨z, x⟩² = ⟨z, n x_n⟩² = n²/4 (since z and x_n are connected by a single edge). Substituting:

n²/(4 · n(n+1)/2) + m²/(4 · m(m+1)/2) + p²/(4 · p(p+1)/2) < 1,

and rearranging gets

1/(n+1) + 1/(m+1) + 1/(p+1) > 1.

Without loss of generality take n ≥ m ≥ p. Then 1/(n+1) ≤ 1/(m+1) ≤ 1/(p+1), so 1 < 3/(p+1), giving p < 2, i.e. p = 1. Then m < 3, i.e. m = 1 or 2. If m = 2 then n < 5; otherwise n can be any natural number.

5.3 Dynkin diagrams of Lie algebras (40 points)

Problem 5.3.1. (10 points each) For n ≥ 1 in all cases...

(i) draw the Dynkin diagram for so_{2n+1}

(ii) draw the Dynkin diagram for sl_n

(iii) draw the Dynkin diagram for sp_{2n}

and show your reasoning. In particular, many details of the computations were omitted above, but you will be expected to (briefly) justify the numbers you find.

Solution:

(i) As L's elements satisfy xᵀR + Rx = 0, we know that x can be written in the block form

[[0, Cᵀ, −Bᵀ], [B, M, P], [−C, Q, −Mᵀ]]

where B and C each have n rows and 1 column, M is any n × n matrix, and P and Q are n × n matrices each equal to its negative transpose. H is the set of diagonal matrices in L, so H = {Σ_{i=1}^n c_i(e_{i,i} − e_{i+n,i+n}) | c_i ∈ C} (indexed from zero this time). We can give L the additional basis elements m_{i,j} = e_{i,j} − e_{j+n,i+n} (1 ≤ i ≠ j ≤ n), p_{i,j} = e_{i,j+n} − e_{j,i+n} (1 ≤ i < j ≤ n), p_{i,j}ᵀ, B_i = e_{i,0} − e_{0,i+n} (1 ≤ i ≤ n), and C_i = e_{0,i} − e_{i+n,0} (1 ≤ i ≤ n) to go along with those of H. We find, for an arbitrary element h ∈ H of the form specified in H's specification,

[h, m_{i,j}] = (c_i − c_j)m_{i,j}
[h, p_{i,j}] = (c_i + c_j)p_{i,j}
[h, p_{i,j}ᵀ] = −(c_i + c_j)p_{i,j}ᵀ
[h, B_i] = c_i B_i
[h, C_i] = −c_i C_i

so our roots are:

root : eigenspace
λ_i − λ_j (i ≠ j) : span{m_{i,j}}
λ_i + λ_j (i < j) : span{p_{i,j}}
−(λ_i + λ_j) (i < j) : span{p_{i,j}ᵀ}
λ_i : span{B_i}
−λ_i : span{C_i}

For H*, a vector space having the basis {λ_i}, we see that {λ_1 − λ_2, λ_2 − λ_3, ..., λ_{n−1} − λ_n, λ_n} is a base; this base will be what we use to build our Dynkin diagram. We set α_i = λ_i − λ_{i+1} and β = λ_n. When i < n, taking e_{α_i} = m_{i,i+1} gives h_{α_i} = e_{i,i} − e_{i+n,i+n} − e_{i+1,i+1} + e_{i+n+1,i+n+1}; taking e_β = B_n gives h_β = 2(e_{n,n} − e_{2n,2n}). We compute

[h_{α_i}, e_{α_j}] = 2e_{α_j} if i = j; −e_{α_j} if |i − j| = 1; 0 otherwise
[h_β, e_{α_i}] = −2e_{α_i} if i = n − 1; 0 otherwise
[h_{α_i}, e_β] = −e_β if i = n − 1; 0 otherwise

Computation gives that

⟨λ_i − λ_{i+1}, λ_j − λ_{j+1}⟩ = 2 if i = j; −1 if |i − j| = 1; 0 otherwise
⟨λ_i − λ_{i+1}, λ_n⟩ = −2 if i = n − 1; 0 otherwise
⟨λ_n, λ_i − λ_{i+1}⟩ = −1 if i = n − 1; 0 otherwise

Since e(v, w) = ⟨v, w⟩·⟨w, v⟩, our Dynkin diagram is n nodes in a line, with a double edge joining the last pair (type B_n).

(ii) H is the set of diagonal matrices in L, so H = {Σ_{i=1}^{n−1} c_i(e_{i,i} − e_{n,n}) | c_i ∈ C}. We can give L the additional basis elements e_{i,j} (1 ≤ i ≠ j ≤ n) to go along with those of H. We find, for an arbitrary element h ∈ H of the form specified in H's specification, [h, e_{i,j}] = (c_i − c_j)e_{i,j}, so our roots are λ_i − λ_j with eigenspaces span{e_{i,j}}. For H*, a vector space having the basis {λ_i}, we see that {λ_1 − λ_2, λ_2 − λ_3, ..., λ_{n−1} − λ_n} is a base; this base will be what we use to build our Dynkin diagram. We set α_i = λ_i − λ_{i+1}. When i < n, taking e_{α_i} = e_{i,i+1} gives h_{α_i} = e_{i,i} − e_{i+1,i+1}. We compute

[h_{α_i}, e_{α_j}] = 2e_{α_j} if i = j; −e_{α_j} if |i − j| = 1; 0 otherwise

Computation gives that

⟨λ_i − λ_{i+1}, λ_j − λ_{j+1}⟩ = 2 if i = j; −1 if |i − j| = 1; 0 otherwise

Since e(v, w) = ⟨v, w⟩·⟨w, v⟩, our Dynkin diagram is a line of ℓ singly-connected nodes, where ℓ = n − 1 (type A_{n−1}).

(iii) As L's elements satisfy xᵀR̃ + R̃x = 0, we know that x can be written in the block form

[[M, P], [Q, −Mᵀ]]

for any n × n matrix M and n × n matrices P and Q both equal to their transposes. H is the set of diagonal matrices in L, so H = {Σ_{i=1}^n c_i(e_{i,i} − e_{i+n,i+n}) | c_i ∈ C}. We can give L the additional basis elements m_{i,j} = e_{i,j} − e_{j+n,i+n} (1 ≤ i ≠ j ≤ n), p_{i,j} = e_{i,j+n} + e_{j,i+n} (1 ≤ i < j ≤ n), p_{i,j}ᵀ, p_{i,i} = e_{i,i+n} (1 ≤ i ≤ n), and p_{i,i}ᵀ to go along with those of H. We find, for an arbitrary element h ∈ H of the form specified in H's specification,

[h, m_{i,j}] = (c_i − c_j)m_{i,j}
[h, p_{i,j}] = (c_i + c_j)p_{i,j}
[h, p_{i,j}ᵀ] = −(c_i + c_j)p_{i,j}ᵀ

(where in the latter two lines i may equal j), so our roots are:

root : eigenspace
λ_i − λ_j (i ≠ j) : span{m_{i,j}}
λ_i + λ_j (i < j) : span{p_{i,j}}
−(λ_i + λ_j) (i < j) : span{p_{i,j}ᵀ}
2λ_i : span{p_{i,i}}
−2λ_i : span{p_{i,i}ᵀ}

For H*, a vector space having the basis {λ_i}, we see that {λ_1 − λ_2, λ_2 − λ_3, ..., λ_{n−1} − λ_n, 2λ_n} is a base; this base will be what we use to build our Dynkin diagram. We set α_i = λ_i − λ_{i+1} and β = 2λ_n. When i < n, taking e_{α_i} = m_{i,i+1} gives h_{α_i} = e_{i,i} − e_{i+n,i+n} − e_{i+1,i+1} + e_{i+n+1,i+n+1}; taking e_β = p_{n,n} gives h_β = e_{n,n} − e_{2n,2n}. We compute

[h_{α_i}, e_{α_j}] = 2e_{α_j} if i = j; −e_{α_j} if |i − j| = 1; 0 otherwise
[h_β, e_{α_i}] = −e_{α_i} if i = n − 1; 0 otherwise
[h_{α_i}, e_β] = −2e_β if i = n − 1; 0 otherwise

Computation gives that

⟨λ_i − λ_{i+1}, λ_j − λ_{j+1}⟩ = 2 if i = j; −1 if |i − j| = 1; 0 otherwise
⟨λ_i − λ_{i+1}, 2λ_n⟩ = −1 if i = n − 1; 0 otherwise
⟨2λ_n, λ_i − λ_{i+1}⟩ = −2 if i = n − 1; 0 otherwise

Since e(v, w) = ⟨v, w⟩·⟨w, v⟩, our Dynkin diagram is again n nodes in a line with a double edge joining the last pair (type C_n).

Problem 5.3.2. (10 points) State all isomorphisms between the classical Lie algebras, and the Dynkin diagrams that you used to reach those conclusions. (Do not provide the isomorphisms explicitly; just briefly state your reasoning.)

Solution: so_3 ≅ sp_2 ≅ sl_2 via root systems of type A1. so_5 ≅ sp_4 via root systems of types B2 and C2. so_6 ≅ sl_4 via root systems of types D3 and A3.
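A cheap consistency check on these isomorphisms (ours, not the official reasoning: the dimension formulas below are the standard ones for the classical algebras) is that isomorphic algebras must at least have equal dimensions:

```python
# Dimension check for the classical isomorphisms of Problem 5.3.2.
def dim_so(n):      # antisymmetric n x n matrices: n(n-1)/2
    return n * (n - 1) // 2

def dim_sl(n):      # traceless n x n matrices: n^2 - 1
    return n * n - 1

def dim_sp(two_n):  # sp_{2n} has dimension n(2n + 1)
    n = two_n // 2
    return n * (2 * n + 1)

assert dim_so(3) == dim_sp(2) == dim_sl(2) == 3    # type A1
assert dim_so(5) == dim_sp(4) == 10                # types B2 = C2
assert dim_so(6) == dim_sl(4) == 15                # types D3 = A3
print("all dimension checks pass")
```

Of course, matching dimensions alone do not prove an isomorphism; the matching root systems cited above do.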