Colors, Kohnert's Rule, and Flagged Kostka Coefficients

by Henry Ehrhard

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Mathematics)

August 2023

Copyright 2023 Henry Ehrhard

Acknowledgements

I would like to express my deep gratitude to my advisor, Sami Assaf, for her guidance, assurance, and always generous feedback. Our weekly chats have been a source of warmth and inspiration, and my best would not have been nearly as good without her. I would like to thank my committee members Jason Fulman, Paolo Zanardi, and once again, Sami Assaf, for their time and consideration. I would like to thank my colleagues, collaborators, and everyone with whom I have had the pleasure or frustration of thinking about a problem. There is no shared experience like it. I thank my friends, both in and out of the department, for their companionship, empathy, and for keeping me from becoming insulated. They are responsible for some of my fondest memories and provide not always the right amount of distraction. Finally, I thank my family for their unconditional love and support. My parents especially have been the most consistent presence in my life. One mathematics dissertation later and I cannot enumerate all they have given me.

Table of Contents

Acknowledgements
List of Figures
Abstract
Chapter 1: Introduction
  1.1 Kohnert's Rule
  1.2 Colors
  1.3 Flagged Kostka Coefficients
Chapter 2: Kohnert's rule for flagged Schur modules
  2.1 Introduction
  2.2 Modules and polynomials
    2.2.1 Schur modules for diagrams
    2.2.2 Demazure modules and characters
    2.2.3 Kohnert polynomials
  2.3 Crystal graphs
    2.3.1 Tableaux crystals
    2.3.2 Kohnert crystals
    2.3.3 Kohnert tableaux
  2.4 Recurrence for Kohnert polynomials
    2.4.1 Corollaries
    2.4.2 Percent avoiding diagrams
Chapter 3: A crystal analysis of P-arrays
  3.1 Introduction
  3.2 Chromatic Symmetric Functions
  3.3 Crystals
    3.3.1 The Diagram Crystal
  3.4 The r Alignment
  3.5 The P-Array Crystal Operators
  3.6 s-Positivity
  3.7 Natural Unit Interval Orders
  3.8 Two-Row P-Tableaux
Chapter 4: Complete Flagged Homogeneous Polynomials
  4.1 Introduction
  4.2 Complete Flagged Homogeneous Polynomials
  4.3 h as a Character
  4.4 Keys, Atoms, and Nonsymmetric Macdonald Polynomials
  4.5 Flagged RSK
  4.6 Schubert Expansions
  4.7 The Inverse Kostka Analogue
References

List of Figures

2.1 Examples of diagrams.
2.2 Two tableaux for a diagram D.
2.3 An illustration of the northwest condition.
2.4 Applying Magyar's recurrence to a northwest diagram (left) to compute the character of the flagged Schur module.
2.5 The poset of Kohnert diagrams for the bottom diagram.
2.6 When D is northwest with row i a subset of row i+1, the positions indicated with x must be vacant.
2.7 The Demazure crystals B^w_{(2,1,1,0)} for several w.
2.8 The Kohnert crystal on Kohnert diagrams for the bottom-left diagram, which is northwest.
2.9 An illustration of T (left) and S (right) in Lemma 2.3.14.
2.10 The labeling algorithm applied to a Kohnert diagram for D(3,0,5,1,4).
2.11 Examples of label rectification.
2.12 The label rectification of a diagram.
2.13 Left to right: a northwest diagram D given the row labeling, a diagram T with the same column weight as D, and the Kohnert labeling of T with respect to D.
2.14 The labeled Kohnert diagrams for D the leftmost diagram and the corresponding flagged tableaux indexing the basis.
3.1 The Hasse diagram (left) and incomparability graph (right) of a (3+1)-free poset.
3.2 Two P-arrays with the one on the right being a P-tableau.
3.3 A portion of the diagram crystal.
3.4 An r pre-alignment for a P-array with P as in Fig. 3.1.
3.5 The r alignment for a P-array with P as in Fig. 3.1.
3.6 The columns in the alignment of A whose entries are to be row swapped by f_r.
3.7 The columns in the alignment of A whose elements are to be row swapped by e_r.
3.8 A segment of the P-array crystal for the given poset.
3.9 A diagram filling with respect to the P-tableau in Fig. 3.2.
4.1 The three possible Kohnert moves on the top diagram.
4.2 A southwest diagram (left) and non-southwest diagram (right).
4.3 The diagram D(1,0,3,6,1,0,2).
4.4 The positions and orientation for co-inversion triples.
4.5 An SSKT (left) and a reverse SSAF (right) of shape (1,0,3,6,1,0,2).
4.6 A pair of tableaux in N^n.
4.7 The insertion procedure ← 3.
4.8 An illustration of the maps τ and ρ.
4.9 The insertion procedure ↽_7 3.
4.10 A rim hook of λ = (8,7,6,5,3,2).
4.11 A diagram with enumerated weakly connected components.
4.12 A snake of b = (3,7,0,2,5,8,6).
4.13 Two special snake tabloids of shape (3,7,0,2,5,8,6), with respective weights (10,10,0,7,0,4,0) and (8,7,0,11,1,1,3) and signs −1 and +1.
4.14 Special snake tabloids with the same weight and shape but opposite signs.
4.15 Special snake tabloids whose terms cancel in equation (4.7.4) but not in equation (4.7.3).
4.16 A filling T ∈ G(S) with S highlighted.
4.17 The map ι.

Abstract

We consider three topics with some common themes. First we show that Kohnert's combinatorial model for key polynomials computes the characters of a large class of flagged Schur modules indexed by northwest diagrams. Moreover, within the class of %-avoiding diagrams it is precisely the northwest diagrams whose corresponding characters are described by the model. Second, we define a normal crystal structure on the P-arrays considered by Gasharov. While not a highest weight crystal, its connected components have Schur positive characters, thereby refining the results of Gasharov, and of Shareshian and Wachs, describing how the chromatic symmetric functions of incomparability graphs of (3+1)-free posets are Schur positive. Finally, we define a new basis for the polynomial ring which lifts the complete homogeneous symmetric polynomials while retaining representation theoretic significance.
Using a specialized RSK algorithm we give an ex- plicit nonnegative expansion into key polynomials and, generalizing the special rim hook tabloids of E˘ gecio˘ glu and Remmel, give an explicit signed expansion for key polynomials into this new basis. vii Chapter 1 Introduction This dissertation encompasses three papers that have appeared separately. Broadly speaking, they are all concerned with combinatorial models for symmetric polynomials and related objects. Most of these models can be stated in terms of diagrams D⊂ Z + × Z + which we think of as sets of boxes or “cells” arranged at lattice points, or fillings of these cells with some type of entry. For instance, in the French convention a Ferrers diagram is a set of cells that is both left and bottom justified. Fillings of these diagrams with positive integers whose values weakly increase left to right and strictly increase bottom to top are called semi-standard Young tableaux. The objects are used in the canonical combinatorial definition of the Schur functions, which is s λ = ∑ T ∏ i≥ 1 x # of entries i in T i summed over semi-standard Young tableaux whose Ferrers diagram corresponds to the integer par- titionλ. Though not immediately obvious from this definition, the Schur functions are symmetric, which is to say they are fixed under bijective renumberings of the indeterminates. The Schur poly- nomials are obtained by restricting to a finite set of indeterminates but contain the same essential information. Schur polynomials have been generalized in many ways including to the key polynomials, also called Demazure characters [17, 18]. A combinatorial model for the Schur polynomials and key polynomials can be given the additional structure of Kashiwara’s crystal graphs [29]. Conversely, 1 the existence of a crystal graph structure on a set of objects can be used to write the polynomial they combinatorialize in terms of Schur or key polynomials. Crystal graphs have been described as the combinatorial skeletons of representations. They reflect that the polynomials in question are certain characters relevant to the study of representation theory. A final throughline to point out for this dissertation is then the central role of Schur functions, key polynomials, and their associated crystal graphs in some combination. Owing, in part, to the representation theory, it is often interesting to learn when and how a polynomial or symmetric function expands as a nonnegative sum of key polynomials or Schur functions. These properties are concisely known as key positivity and Schur positivity. Questions and answers of this sort arise several times in these pages. The following chapters were originally written to be read in a largely self-contained manner and they have been included with negligible alterations. As such, they slightly vary in their notation, definitions, stylistic choices, and other conventions. Chapter 4 uses some results from Chapter 2 but refers to the previously published version. 1.1 Kohnert’s Rule Chapter 2 is joint work with Sam Armon, Sami Assaf, and Grant Bowling. It appears in [1]. In this chapter we consider the family of flagged Schur modules and their characters. The flagged Schur modules are B-modules indexed by diagrams, where B is the group of invertible lower triangular matrices. The modules introduced by Demazure are a special case indexed by so- called key diagrams associated to weak compositions [17, 18]. 
Their characters are polynomials in the entries of a diagonal matrix given by the trace of the action of such a matrix, and they correspondingly generalize the key polynomials. Kohnert gave a combinatorial model for the key polynomials that involves raising the cells of the corresponding diagram in a certain way [34]. Assaf showed that the same rule determines Schubert polynomials [7] which are the characters of flagged Schur modules indexed by Rothe 2 diagrams, coinciding with the modules introduced by Kra´ skiewicz and Pragacz [35, 36]. Both key diagrams and Rothe diagrams exhibit the northwest property (or southwest depending on one’s conventions), and it was conjectured by Assaf and Searles that Kohnert’s rule is valid for any northwest diagram [4]. In Chapter 2 the conjecture is proven true, greatly expanding the applicability of a model that already describes two widely studied classes of polynomials. This is accomplished by showing that the polynomials given by Kohnert’s rule obey Magyar’s recurrence relation for the charac- ters [46]. The other main tool is Assaf’s crystal on diagrams [8] which we use to decompose the set of diagrams generated by Kohnert’s rule into components that more directly combinatorialize the transformations in Magyar’s recurrence. The recurrence actually holds for characters corre- sponding to a larger class of diagrams which are called %-avoiding, but we show that Kohnert’s rule sharply fails for the rest of this class; the northwest property precisely detects the validity of Kohnert’s rule. 1.2 Colors Chapter 3 is our attempt to better understand a result by Gasharov [25] which was motivated by a still open conjecture of Stanley and Stembridge [65]. Given a graph we may consider all of the ways to assign number or “colors” to its vertices such that no edge is incident to the same color twice. These are called proper colorings and they are used to define the chromatic symmetric func- tion of a graph. An incomparability graph is one whose edges correspond to a pair of incomparable elements in an underlying partial order on the vertices. When the partial order obeys a (3+1) sub- poset avoidance condition, Gasharov gave a combinatorial model to write the chromatic symmetric function as a nonnegative sum of Schur functions. 3 The Stanley-Stembridge conjecture is that these chromatic symmetric function expand as non- negative sums of elementary symmetric functions [65], a stronger statement since elementary sym- metric functions are themselves nonnegative sums of Schur functions. Another conjectured gener- alization by Stanley is that a positive Schur expansion still exists when the graph is directly subject to a corresponding avoidance condition, allowing into consideration graphs for which there is no underlying poset [63]. Both of these questions continue to motivate research, but even Gasharov’s more limited result remains somewhat mysterious. For instance there is no known direct bijective proof, the existence of which is equivalent to putting a highest weight crystal structure on the so- called P-arrays that define Gasharov’s model. Such a proof would constitute a generalization of the Robinson-Schensted correspondence [63]. One hopes that a better understanding in this vein would also aid attempts to prove the above generalizations. Chapter 3 refines Gasharov’s result in this direction as well as Shareshian and Wachs’ own refinement [59] by decomposing the P-arrays into smaller subsets which still model Schur positive symmetric functions. 
The strategy is to show that the P-arrays can be given the structure of a normal crystal graph whose connected components yield such symmetric functions. This is not a highest weight crystal which corresponds directly to Schur functions. In fact, even Schur positivity is not a property generally associated with normal crystals. Our work therefore does not conclude the story but it comes with a neat surprise: a class of normal crystals that seemingly “just happen” to yield Schur positive symmetric functions. 1.3 Flagged Kostka Coefficients We have alluded to the way in which key polynomials are a nonsymmetric analogue of the Schur polynomials. Chapter 4 grew out of conversations with Sami Assaf who wondered what other classical symmetric bases have compelling analogues comparable to the key polynomials. A moti- vating application, which eventually fell out of focus, was whether one could form a nonsymmetric 4 analogue of Gasharov’s result [25], replacing the chromatic symmetric function with a nonsym- metric version, and Schur positivity with key positivity. Emulating Gasharov’s original proof then called for an analogue of the complete homogeneous symmetric polynomials in particular. In this chapter we introduce such objects which we call the complete flagged homogeneous polynomials. They turn out to be incredibly natural, while also generalizing existing combinatorics in interesting and nontrivial ways. The chapter has three main pillars. The first, is the matter of defining these polynomials and arguing that, roughly speaking, they occupy the same setting as key polynomials. More specifically these form a basis for the polynomial ring which lifts the corresponding symmetric basis, and can be interpreted as characters of flagged Schur modules. The second pillar centers around the question of how this new basis expands as a sum of key polynomials. The expansion of complete symmetric functions into Schur functions is governed by the Kostka coefficients which are easily described in terms of semi-standard Young tableaux. We find that the analogous flagged Kostka coefficients are naturally described by Haglund, Haiman, and Loehr’s combinatorics of non-attacking fillings of key diagrams [27]. The classical version of this expansion is strongly related to the Robinson-Schensted-Knuth (RSK) correspondence which puts matrices with entries inN into bijection with pairs of semi-standard Young tableaux of match- ing Ferrers diagram. Although not strictly necessary for determining the flagged Kostka coeffi- cients we introduce an RSK analogue that makes sense of these numbers in a similar way. Third, we give a combinatorial description for the coefficients in the opposite expansion of a key polynomial into complete flagged homogeneous polynomials. Our solution generalizes the classical answer given by the special rim hook tabloids of E˘ gecio˘ glu and Remmel [19]. In so doing, we generalize the notion of rim hooks of Ferrers diagrams to the notion of snakes of key diagrams. Rim hooks appear in other applications such as the Murnaghan-Nakayama rule and LLT polynomials [51, 52, 38], so it is tempting to imagine that snakes could find additional use cases as well. 5 Chapter 2 Kohnert’s rule for flagged Schur modules 2.1 Introduction The irreducible representations{S λ } of the general linear group GL n were constructed explicitly by Weyl using the Ferrers diagrams of partitionsλ. Similarly, for any diagram D, a finite collection of cells inZ + × Z + , one can construct the Schur moduleS D . 
These modules have been widely studied, and their characters include important generalizations of the Schur polynomials such as skew Schur polynomials and Stanley symmetric polynomials [64]. More generally, one can con- struct the flagged Schur module S flag D , which carries the action of the Borel subgroup B⊂ GL n of lower triangular matrices. These modules are also well-studied for certain families of diagrams, and their characters include type A Demazure characters [17], flagged skew Schur polynomials [55], and Schubert polynomials [40, 36]. The Borel-Weil theorem realizes the irreducible representations for the general linear group as spaces of sections of line bundles of the flag manifold. That is, for λ a dominant weight for G = GL n , the dual Weyl module is isomorphic to the Borel-Weil-Bott module H 0 (G/B,L − λ ), whereL is the line bundle over G/B associated to the antidominant weight− λ. More gener- ally, Demazure [18] considered the B-modules H 0 (X w ,L − λ ), where X w = BwB/B is the Schubert variety associated with the permutation w. This gives rise to the Demazure moduleS w λ , which coincides with the flagged Schur module S flag D when D is the key diagram for w acting on the Ferrers diagram ofλ by permuting the rows. 6 More generally, Magyar [46] considered configuration varieties F D of families of subspaces of a fixed vector space with incidence relations specified by a diagram D. When the diagram D has the northwest property, Magyar [46] gave an explicit resolution proving F D is normal with rational singularities. Since the corresponding line bundle has no higher cohomology, its space of sections recovers the Schur moduleS flag D . Magyar [47] uses the geometry to give a recurrence for the character in terms of degree-preserving divided difference operators that arise in Demazure’s character formula [17]. Reiner and Shimozono [56] use this recurrence to show the characters of the corresponding flagged Schur modules expand nonnegatively into Demazure characters, or, equivalently, thatS flag D admits a Demazure flag. In this paper, we prove the characters of flagged Schur modules for northwest diagrams can be computed using Kohnert’s rule [34]. That is, we show in Corollary 2.4.1 that for D northwest, char(S flag D )=K D , (2.1.1) whereK D is the Kohnert polynomial defined by Assaf and Searles [4] as a common generalization of Demazure characters [34] and Schubert polynomials [7]. Schubert polynomials are polynomial representatives of Schubert classes which originated in the study of the cohomology ring for the complete flag manifold by Bernstein, Gelfand, and Gelfand [13] and Demazure [17], with their combinatorics developed extensively by Lascoux and Sch¨ utzenberger [40]. Kohnert polynomials can be computed algorithmically from the diagram D using Kohnert’s rule [34]. Kraskiewicz and Pragacz [35, 36] showed Schubert polynomials are characters of flagged Schur modules indexed by certain northwest shapes. Thus (2.1.1) also proves Kohnert’s rule for Schubert polynomials [7]. We prove (2.1.1) by using the Demazure crystal structure [41, 30] for Kohnert polynomials [8] to show they also satisfy Magyar’s recurrence. Thus we obtain as well an explicit crystal model for flagged Schur modules, which gives a new proof that characters of flagged Schur modules decompose as nonnegative sums of Demazure characters. 
Of course, this also gives an explicit representation theoretic interpretation for Kohnert polynomials indexed by northwest diagrams, resolving a conjecture of Assaf and Searles [4]. The specific case of the Kraśkiewicz–Pragacz modules is also studied by Fink, Mészáros, and St. Dizier [20, 21], who use Magyar's model to derive tableau formulas for Schubert polynomials. Similarly, our proof of (2.1.1) allows us to use Kohnert tableaux to give tableau models for all northwest diagrams. Moreover, we give a natural set indexing a basis for flagged Schur modules.

Magyar's recurrence holds for the more general class of %-avoiding diagrams, which includes northwest diagrams. We show our result is tight in the sense that for %-avoiding diagrams D that are not northwest, the Kohnert polynomial does not agree with the character of the flagged Schur module. Interestingly, Mészáros, Setiabrata, and St. Dizier [50] extend Magyar's recurrence to K-theory, so one naturally wonders if there is a K-theoretic Kohnert's rule extending (2.1.1).

Our paper is structured as follows. In Section 2.2, we review flagged Schur modules and Kohnert polynomials, along with the associated combinatorics. We state Magyar's recurrence for characters of flagged Schur modules, and begin to show it holds for Kohnert polynomials as well. In Section 2.3, we review crystals and reinterpret the last term in the desired recurrence for Kohnert polynomials as a statement about Demazure operators on crystals. For this, we delve into the combinatorics of Kohnert diagrams to prove the desired result for Demazure operators on Kohnert crystals. Thus we prove Kohnert polynomials satisfy the same recurrence, and so coincide with characters of flagged Schur modules. In Section 2.4, we give several corollaries and prove our result is tight.

2.2 Modules and polynomials

In this section, we define the basic objects of study for this paper: flagged Schur modules, Demazure characters, and Kohnert polynomials.

2.2.1 Schur modules for diagrams

Consider the infinite, doubly indexed family of indeterminates {z_{i,j}}_{i,j=1}^{∞}. A matrix A ∈ GL_N acts on C[z_{i,j}] in the usual way,

A · z_{k,l} = ∑_{i=1}^{N} A_{i,k} z_{i,l} if k ≤ N, and A · z_{k,l} = z_{k,l} if k > N.

A diagram D is a finite collection of cells in Z_+ × Z_+. We use matrix convention, so that the northwest corner has index (1,1). The Ferrers diagram of a partition λ = (λ_1 ≤ λ_2 ≤ ··· ≤ λ_n) places λ_i cells, left justified, in row i. Generalizing this, the key diagram of a weak composition a = (a_1, a_2, ..., a_n), denoted by D(a), places a_i cells, left justified, in row i. The Rothe diagram of a permutation w, denoted by D(w), is given by

D(w) = { (i, w(j)) | i < j, w(i) > w(j) }.

For example, Fig. 2.1 shows the Ferrers diagram for a partition, the skew diagram (set-theoretic difference) for nested partitions, the key diagram for a weak composition, the Rothe diagram for a permutation, and a generic diagram.

Figure 2.1: Examples of diagrams, showing the Ferrers diagram of λ = (0,0,1,3,4), the skew diagram of ν/µ = (0,1,4,6)/(0,0,1,2), the key diagram of a = (0,1,4,0,3), the Rothe diagram of w = 13728456, and a generic diagram.

A tableau or filling on D is any map T : D → N. Abusing notation, we let D also denote the row tableau on the shape D where each cell is assigned its row index. For example, Fig. 2.2 shows a generic tableau (left) and the row tableau (right).

Figure 2.2: Two tableaux for a diagram D, namely a generic tableau T (left) and the row tableau D (right).

Definition 2.2.1. For D a diagram and T a tableau of shape D, set

Δ_T = ∏_j (T^{(j)} | D^{(j)})   (2.2.1)

where T^{(j)} is the array of values in the jth column of T, and for arrays a, b ∈ N^m, we set (a | b) = det(z_{a_i, b_j}) ∈ C[z_{i,j}].

For example, taking T to be the left tableau in Fig. 2.2, Δ_T is the product of the determinants of the matrices [[z_{1,1}, z_{1,2}], [z_{3,1}, z_{3,2}]], [z_{2,3}], and [[z_{3,2}, z_{3,3}], [z_{4,2}, z_{4,3}]], that is,

Δ_T = (z_{1,1} z_{3,2} − z_{1,2} z_{3,1}) (z_{2,3}) (z_{3,2} z_{4,3} − z_{3,3} z_{4,2}).

Notice Δ_T = 0 whenever T has repeated column entries.

Definition 2.2.2. The Schur module S_D is the C-span of { Δ_T | T : D → [n] }.

In particular, taking D to be the Ferrers diagram of the partition λ as in the leftmost diagram of Fig. 2.1, the Schur module S_D is the irreducible representation of GL_n indexed by λ, which we denote by S_λ.

For B ⊂ GL_n the subalgebra of lower triangular matrices, there is a B-stable ideal I = ⟨ z_{i,j} | i > j ⟩. We consider the B-module spanned by those fillings that respect this ideal. Say a filling T is flagged if all entries in row i are at most i.

Definition 2.2.3. The flagged Schur module S^{flag}_D is the B-module S_D / (S_D ∩ I) spanned by { Δ_T | T is flagged }.

Notice that while the flagged tableaux span the module, this construction does not immediately give a basis for the module. Many flagged Schur modules arise naturally in the context of geometry: for D a key diagram, the module S^{flag}_D coincides with those introduced by Demazure [18, 17] that give a filtration of irreducible modules with respect to the Weyl group; and for D a Rothe diagram, the module S^{flag}_D coincides with the Schubert modules introduced by Kraśkiewicz and Pragacz [35, 36].

The character of a B-module is the trace of the action of the matrix X = diag(x_1, ..., x_n). Characters of Kraśkiewicz–Pragacz modules indexed by Rothe diagrams are Schubert polynomials of Lascoux and Schützenberger [40], which compute intersection multiplicities for the flag manifold. Characters of Demazure modules indexed by key diagrams have connections to Schubert polynomials [39] and to nonsymmetric Macdonald polynomials [53, 57]. In what follows, we study these flagged Schur modules through their characters.

The weight of a filling T is the weak composition wt(T) whose ith part is the number of entries of T with label i. Given a, let x^a = x_1^{a_1} ··· x_n^{a_n}.

Lemma 2.2.4. For a flagged filling T of D, Δ_T (mod I) is an eigenvector with eigenvalue x_1^{wt(T)_1} ··· x_n^{wt(T)_n} under the action of X = diag(x_1, ..., x_n). In particular, the monomial x^{wt(T)} appears with positive multiplicity in char(S^{flag}_D).

Proof. Since each entry in T^{(j)} is no greater than the corresponding entry in D^{(j)}, the diagonal of the matrix defining (T^{(j)} | D^{(j)}) includes only indeterminates z_{k,ℓ} for which k ≤ ℓ, ensuring that (T^{(j)} | D^{(j)}) ∉ I. Since I is prime we can be sure that Δ_T ∉ I as well; namely, Δ_T is not zero in S^{flag}_D.

Note that X · z_{k,ℓ} = x_k z_{k,ℓ} for any k, ℓ. The ith row vector of the matrix of (T^{(j)} | D^{(j)}) contains only indeterminates z_{k,ℓ} with k = T^{(j)}_i. Hence X · (T^{(j)} | D^{(j)}) is the result of multiplying each row i in (T^{(j)} | D^{(j)}) by x_{T^{(j)}_i}. Multi-linearity of the determinant yields X · (T^{(j)} | D^{(j)}) = x^{wt(T^{(j)})} (T^{(j)} | D^{(j)}). Therefore,

X · Δ_T = ∏_j x^{wt(T^{(j)})} (T^{(j)} | D^{(j)}) = x^{wt(T)} ∏_j (T^{(j)} | D^{(j)}) = x^{wt(T)} Δ_T.

Take a set of flagged fillings which indexes a basis for S^{flag}_D in the natural way. This basis is in fact an eigenbasis for the action of a diagonal matrix, and all eigenvalues are positive monomials.
Since x wt(T) is an eigenvalue, the result follows. 11 The following is used to determine when the character of a flagged Schur module differs from the Kohnert polynomial (defined below). Given position integers r< s, let α r,s be the sequence with rth entry 1, sth entry− 1, and all others 0. Proposition 2.2.5. Let D be a diagram and C a set of column indices in which row s contains a cell but row r< s does not. Then x wt(D)+|C|α r,s occurs in the monomial expansion of char(S flag D ). Proof. We will define a flagged filling T on D. For j / ∈ C we simply let T ( j) = D ( j) . Otherwise, D ( j) =(a 1 ,...,a m ) contains the entry s but not r. Say k is the least index so that a k > r and a ℓ = s. Then define T ( j) =(...,a k− 1 ,r,a k ,a k+1 ,...,a ℓ− 1 ,a ℓ+1 ,...). That is, the kth entry becomes r and entries of index i with k< i≤ ℓ become a i− 1 with all other entries the same as in D ( j) . The second case is vacuous if k=ℓ. Since the entries in T are no greater than the corresponding entries in D, T is indeed a flagged filling. It is clear to see that wt(T ( j) )= wt(D ( j) )+α r,s if j∈ C, and wt(T ( j) )= wt(D ( j) ) otherwise. Therefore wt(T)= wt(D)+|C|α r,s . The proof is completed by Lemma 2.2.4. 2.2.2 Demazure modules and characters Each irreducible GL n -moduleS λ decomposes into weight spaces asS λ = L a S a λ . Demazure [17] considered the B-action on the extremal weight spaces ofS λ , those whose weak composition weight a is a rearrangement of the highest weightλ. The extremal weights are naturally indexed by the pair(λ,w), whereλ is the partition weight and w is the minimum length permutation that rearrangesλ to a. Definition 2.2.6 ([17]). For a partition λ and a permutation w, the Demazure moduleS w λ is the B-submodule ofS λ generated by the weight spaceS w·λ λ . At one extreme, the Demazure moduleS id λ for the identity permutation is the one-dimensional subspace ofS λ containing the highest weight element. At the other extreme, the Demazure mod- uleS w 0 λ for the long element ofS n is the full moduleS λ . In general, the Demazure modules give 12 a filtration from the highest weight element, namely S u λ ⊂ S w λ whenever u≺ w in weak Bruhat order. The characters of Demazure modules can be computed by degree-preserving divided differ- ence operators [18], as well as myriad combinatorial models. For a positive integer i, the divided difference operator∂ i is the linear operator that acts on polynomials f ∈Z[x 1 ,x 2 ,...] by ∂ i ( f)= f− s i · f x i − x i+1 , (2.2.2) where s i ∈S ∞ is the simple transposition that exchanges x i and x i+1 . Fulton [23] connected the di- vided difference operators with intersection theory, providing a direct geometric context for Schu- bert polynomials [40]. An isobaric variation, sometimes called the Demazure operator, is defined by π i ( f)=∂ i (x i f). (2.2.3) Theπ i satisfy the braid relations, and so for a permutation w, we may define π w =π i k ··· π i 1 (2.2.4) for any expression s i k ··· s i 1 = w with k minimal. Theorem 2.2.7 ([17]). The character of the Demazure moduleS w λ is char(S w λ )=π w (x λ 1 1 ··· x λ n n ). (2.2.5) In particular, when s i w≺ w in Bruhat order, we have the following identity, char(S w λ )=π i char(S s i w λ ) . (2.2.6) Two Demazure modulesS u µ andS v ν correspond precisely when u· µ= v· ν. While this implies µ =ν, the permutations u,v can differ. 
Therefore the more natural indexing set for Demazure characters is obtained by specifying the weak composition given by the permutation acting on the partition. Define

κ_a = char(S^w_λ),   (2.2.7)

where w is the shortest permutation that sorts the weak composition a to the partition λ. Letting D(a) denote the key diagram of a, we also have κ_a = char(S^w_λ) = char(S^{flag}_{D(a)}). In this sense, the flagged Schur modules generalize Demazure characters.

Both key diagrams indexing Demazure modules and Rothe diagrams indexing Kraśkiewicz–Pragacz modules exhibit the following property.

Definition 2.2.8. A diagram D is northwest if whenever (j,k), (i,l) ∈ D with i < j and k < l, then (i,k) ∈ D; see Fig. 2.3.

Figure 2.3: An illustration of the northwest condition.

Northwest in this context is equivalent to southwest as used by Assaf and Searles [4, 8] for Kohnert polynomials.

Magyar [47] gave a recurrence for computing characters of flagged Schur modules for %-avoiding shapes (see Definition 2.4.5). To state the recurrence, let C_k denote the diagram with cells in rows 1, 2, ..., k of the first column. Given a diagram D, let s_r D denote the diagram obtained from D by permuting the cells in rows r, r+1. Restricting Magyar's result to northwest shapes gives the following.

Theorem 2.2.9. The characters of the flagged Schur modules of northwest shapes satisfy the following recurrence:

(M1) char(S^{flag}_∅) = 1;

(M2) if the first column of D is exactly C_k, then

char(S^{flag}_D) = x_1 x_2 ··· x_k char(S^{flag}_{D − C_k});   (2.2.8)

(M3) if every cell in row r has a cell in row r+1 in the same column, then

char(S^{flag}_D) = π_r( char(S^{flag}_{s_r D}) ).   (2.2.9)

Figure 2.4: Applying Magyar's recurrence to a northwest diagram (left) to compute the character of the flagged Schur module; the diagram is reduced to the empty diagram by the successive steps x_1 x_2, π_1, π_2, x_1 x_2, x_1.

For example, Magyar's recurrence allows us to compute the character for the flagged Schur module indexed by the leftmost diagram in Figure 2.4 as:

char(S^{flag}_D) = x_1 x_2 · π_1( π_2( x_1 x_2 · (x_1 · 1) ) ) = x_1^3 x_2^2 + x_1^3 x_2 x_3 + x_1^2 x_2^3 + x_1^2 x_2^2 x_3 + x_1 x_2^3 x_3.

Notice in this example we use Theorem 2.2.9(M3) only when row r is empty. In fact, when D is northwest, we may always restrict our use of Theorem 2.2.9(M3) to this case, which will simplify many proofs to follow.

Proposition 2.2.10. For D northwest, there exists a sequence of northwest diagrams D = D_m, D_{m−1}, ..., D_1, D_0 = ∅ such that for each 0 < i < m, either D_i = D_{i+1} − C_k for some k, or D_i = s_r D_{i+1} and row r of D_{i+1} is empty.

Proof. Suppose D is a northwest diagram, say with k cells in the first column. If these cells lie in rows 1 through k, then we apply (M2) and the result remains northwest. Otherwise, let r be the lowest row such that row r, column 1 is empty but row r+1, column 1 is occupied. Since D is northwest, row r must be empty, allowing us to apply (M3) with row r empty. We may iterate this on the first column until all cells lie in rows 1 through k, at which point we may apply (M2). We may iterate this until D is empty.

Perhaps unsurprisingly when comparing (2.2.6) and (2.2.9), Reiner and Shimozono [56] use Magyar's recurrence for the character of flagged Schur modules [47] to prove that the characters of flagged Schur modules decompose as a nonnegative sum of Demazure characters; see [56] for precise definitions.

Theorem 2.2.11 ([56]). For D a northwest diagram, we have

char(S^{flag}_D) = ∑_a c^D_a κ_a,

where c^D_a is the number of D-peelable tableaux whose left-nil key is a.

For example, the character of the flagged Schur module for the left diagram in Fig. 2.4 expands into Demazure characters as char(S^{flag}_D) = κ_{(1,3,1)} + κ_{(2,3,0)}. One consequence of our main result is a new proof of this positivity and a new combinatorial interpretation for the coefficients in terms of Kohnert diagrams.

2.2.3 Kohnert polynomials

Kohnert [34] gave a combinatorial model for Demazure characters in terms of the following operation on diagrams.

Definition 2.2.12 ([34]). A Kohnert move on a diagram selects the rightmost cell of a given row and moves the cell up, staying within its column, to the first available position above, if it exists, jumping over other cells in its way as needed.

Given a diagram D, let KD(D) denote the set of diagrams that can be obtained by some sequence of Kohnert moves on D. Such diagrams are called Kohnert diagrams of D. For example, Fig. 2.5 shows the Kohnert diagrams for the bottom diagram, where an edge from S up to T indicates T can be obtained from S by a single Kohnert move.

Figure 2.5: The poset of Kohnert diagrams for the bottom diagram.

Theorem 2.2.13 ([34]). The Demazure character κ_a is

κ_a = ∑_{T ∈ KD(D(a))} x_1^{wt(T)_1} ··· x_n^{wt(T)_n},

where the weight of a diagram D, denoted by wt(D), is the weak composition whose ith part is the number of cells of D in row i.

Assaf and Searles define the Kohnert polynomial of any diagram as follows.

Definition 2.2.14 ([4]). The Kohnert polynomial for a diagram D is

K_D = ∑_{T ∈ KD(D)} x_1^{wt(T)_1} ··· x_n^{wt(T)_n}.

By convention, set K_∅ = 1 for the empty diagram.

For example, the Kohnert polynomial for the bottom diagram in Fig. 2.5 is

K_D = x_1^3 x_2^2 + x_1^3 x_2 x_3 + x_1^2 x_2^3 + x_1^2 x_2^2 x_3 + x_1 x_2^3 x_3.

Assaf and Searles conjectured [4] and Assaf proved [8, Thm 6.1.7] that for D northwest, the Kohnert polynomial expands nonnegatively into Demazure characters, with coefficients counting a certain subset of Kohnert diagrams (see [8, Def 6.1.2]).

Theorem 2.2.15 ([8]). For D northwest, we have

K_D = ∑_a c^D_a κ_a,

where c^D_a is the number of Yamanouchi Kohnert diagrams for D of weight a.

For the running example, the Kohnert polynomial expands into Demazure characters as K_D = κ_{(1,3,1)} + κ_{(2,3,0)}. Comparing Theorem 2.2.11 with Theorem 2.2.15, and as evidenced by the examples in Figs. 2.4 and 2.5, one begins to suspect the following.

Conjecture 2.2.16 ([4]). For D a northwest diagram, we have

char(S^{flag}_D) = K_D.   (2.2.10)

We prove Conjecture 2.2.16 by showing Kohnert polynomials of northwest diagrams satisfy the recurrence in Theorem 2.2.9. Specifically, we show the following.

Theorem 2.2.17. Kohnert polynomials of northwest diagrams satisfy the following:

(K1) K_∅ = 1;

(K2) if the first column of D is exactly C_k, then K_D = x_1 x_2 ··· x_k K_{D − C_k};

(K3) if every cell in row r has a cell in row r+1 in the same column, then K_D = π_r(K_{s_r D}).

Condition (K1) holds by definition, and establishing (K2) is straightforward.

Lemma 2.2.18. Let D be a northwest diagram such that the first column is exactly C_k. Then D − C_k is also northwest, and we have

K_D = x_1 x_2 ··· x_k K_{D − C_k}.   (2.2.11)

Proof. As Kohnert moves preserve the columns of cells and no moves are possible in column 1 of D, all diagrams in KD(D) must coincide in column 1 with cells in rows 1, ..., k.
In particular, since column 1 is the leftmost column and admits no Kohnert moves, there is a poset isomorphism between KD(D) and KD(D− C k ) obtained by deleting all cells in column 1. Since those cells contribute x 1 x 2 ··· x k to the weight, the result follows. To establish (K3), we relate the sets KD(D) and KD(s r D), where s r acts on D by permuting rows r and r+ 1. To begin, we consider the case when row r is a subset of row r+ 1, meaning the set of occupied columns in row r is a subset of the occupied columns of row r+ 1. In this case, all cells in row r+1 without cells above them in row r lie to the right of all cells in row r; see Fig. 2.6. i i+1 ggg gg x ggg x ggg gg Figure 2.6: When D is northwest with row i a subset of row i+ 1, the positions indicated with x must be vacant. Lemma 2.2.19. If D is a northwest diagram and row i is contained in row i+ 1, then s i D is also northwest and s i D∈ KD(D). In particular, KD(s i D)⊆ KD(D). 19 Proof. Let c be the rightmost occupied column in row i of D. Let m be the number of cells in row i+ 1 strictly to the right of column c. By choice of c, we may apply a sequence of m Kohnert moves to row i+ 1 of D, resulting in all cells strictly to right of column c moving from row i+ 1 to row i. Since row i is a subset of row i+ 1, any column weakly left of column c either has no cells in rows i,i+ 1 or cells in both rows i,i+ 1. Thus this sequence of Kohnert moves yields s i D. In particular, this showsK D − K s i D is monomial positive. Since π i (F)− F is also monomial positive for F a positive sum of Demazure characters, as is the case for Kohnert polynomials, this makes Theorem 2.2.17(K3) plausible. To understand the precise expansion forK D − K s i D , we turn to Demazure crystals. 2.3 Crystal graphs Littelmann [41] proved there is a crystal structure for Demazure modules which proves the De- mazure character formula in classical types. Thus we shift our paradigm to Demazure operators on Demazure subsets of highest weight crystals. 2.3.1 Tableaux crystals A crystal graph [29] is a combinatorial model for a highest weight module, consisting of a vertex setB, corresponding to the crystal base (see also the canonical basis [42]), and directed, colored edges, corresponding to deformations of the Chevalley generators. For a highest weightλ for GL n , the crystal base forS λ is naturally indexed by SSYT n (λ), the semistandard Young tableaux of shape λ with entries in{1,2,...,n}. We adopt the French (coordinate) convention for SSYT in which entries weakly increase left to right within rows and strictly increase bottom to top within columns. Kashiwara and Nakashima [31] and Littelmann [41] defined the crystal edges on tableaux as follows. 20 Definition 2.3.1. For T ∈ SSYT n (λ) and 1≤ i< n, we i-pair the cells of T with entries i or i+ 1 iteratively by i-pairing an unpaired i+ 1 with an unpaired i weakly to its right whenever all entries i or i+ 1 lying between them are already i-paired. The raising and lowering operators on crystals are maps e i , f i :B→B∪{0}. Definition 2.3.2. For 1≤ i< n, the crystal raising operator e i acts on T∈ SSYT n (λ) as follows: • if all entries i+ 1 of T are i-paired, then e i (T)= 0; • otherwise, e i changes the leftmost unpaired i+ 1 to i. Definition 2.3.3. For 1≤ i< n, the crystal lowering operator f i acts on T∈ SSYT n (λ) as follows: • if all entries i of T are i-paired, then f i (T)= 0; • otherwise, f i changes the rightmost unpaired i to i+ 1. 
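The pairing rule in Definitions 2.3.1–2.3.3 is easy to mechanize. The following Python sketch (ours, not part of the original text) applies the rule to a word of entries; to use it on a semistandard Young tableau one would first fix a reading order for the cells, a convention we leave as an assumption here since Definition 2.3.1 is stated directly on the cells of T.

```python
# A minimal sketch of the i-pairing rule of Definitions 2.3.1-2.3.3, acting on a
# word (a list of positive integers).  Under that rule a letter i+1 pairs with an
# unpaired letter i occurring weakly to its right; the greedy stack matching
# below is one standard way to realize the iterative pairing.

def i_pair(word, i):
    """Return the positions of unpaired letters i+1 and unpaired letters i."""
    open_stack = []           # so-far-unpaired letters i+1
    unpaired_low = []         # unpaired letters i
    for pos, letter in enumerate(word):
        if letter == i + 1:
            open_stack.append(pos)
        elif letter == i:
            if open_stack:
                open_stack.pop()          # pair with the nearest unpaired i+1
            else:
                unpaired_low.append(pos)  # remains unpaired
    return open_stack, unpaired_low

def e_i(word, i):
    """Raising operator: change the leftmost unpaired i+1 to i (None means 0)."""
    unpaired_high, _ = i_pair(word, i)
    if not unpaired_high:
        return None
    out = list(word)
    out[min(unpaired_high)] = i
    return out

def f_i(word, i):
    """Lowering operator: change the rightmost unpaired i to i+1 (None means 0)."""
    _, unpaired_low = i_pair(word, i)
    if not unpaired_low:
        return None
    out = list(word)
    out[max(unpaired_low)] = i + 1
    return out

# In the word 1 2 1, the 2 pairs with the 1 to its right, so the first 1 is the
# unique unpaired 1 and f_1 changes it to 2; e_1 undoes this.
assert f_i([1, 2, 1], 1) == [2, 2, 1]
assert e_i([2, 2, 1], 1) == [1, 2, 1]
```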
We draw a crystal graph with an i-colored edge from b to b′ whenever b′ = f_i(b), omitting edges to 0. For example, see the rightmost graph in Fig. 2.7.

The crystal base is endowed with a map, wt : B → Z^n, to the weight lattice. Using this, we define the character of a crystal B by

char(B) = ∑_{b ∈ B} x_1^{wt(b)_1} ··· x_n^{wt(b)_n}.   (2.3.1)

Thus if B is the crystal for a module V, then we have char(B) = char(V). In particular, for B_λ the crystal on SSYT_n(λ), we have

char(B_λ) = char(S_λ) = s_λ(x_1, ..., x_n),   (2.3.2)

where the latter is the Schur polynomial indexed by λ in n variables.

Definition 2.3.4. An element u ∈ B of a highest weight crystal B is a highest weight element if e_i(u) = 0 for all 1 ≤ i < n.

Each connected crystal B_λ has a unique highest weight element b, and we have wt(b) = λ. Part of the beauty of highest weight elements, and indeed of crystals, lies in the following decomposition combining (2.3.1) and (2.3.2),

char(B) = ∑_{u ∈ B, e_i(u) = 0 for all i} s_{wt(u)}(x_1, ..., x_n).   (2.3.3)

Littelmann [41] considered a crystal structure for Demazure modules as certain truncations of highest weight crystals, giving a new proof of the Demazure character formula. Kashiwara [30] generalized this to all symmetrizable Kac–Moody algebras.

Definition 2.3.5. Given a subset X of a highest weight crystal B and an index 1 ≤ i < n, the Demazure operator D_i is given by

D_i(X) = { b ∈ B | e_i^k(b) ∈ X for some k ≥ 0 }.   (2.3.4)

Definition 2.3.6 ([41]). Given a highest weight crystal B_λ and a permutation w, the Demazure crystal B^w_λ is given by

B^w_λ = D_{i_m} ··· D_{i_1}({u_λ}),   (2.3.5)

where u_λ is the highest weight element of B_λ, and s_{i_m} ··· s_{i_1} is any reduced expression for the permutation w.

For example, Fig. 2.7 constructs the Demazure crystal filtration

B^{1234}_{(2,1,1,0)} --D_3--> B^{1243}_{(2,1,1,0)} --D_2--> B^{1423}_{(2,1,1,0)} --D_1--> B^{4123}_{(2,1,1,0)} --D_2--> B^{4213}_{(2,1,1,0)} --D_3--> B_{(2,1,1,0)}.

More generally, a subset X ⊆ B of a crystal B is a Demazure crystal if each connected component of X is isomorphic to a connected Demazure crystal. Just as Demazure crystals combinatorialize Demazure characters, the Demazure operator combinatorializes the isobaric divided difference operator.

Figure 2.7: The Demazure crystals B^w_{(2,1,1,0)} for several w, shown for B^{id}, B^{s_3}, B^{s_2 s_3}, B^{s_1 s_2 s_3}, B^{s_2 s_1 s_2 s_3}, together with the full crystal B_{(2,1,1,0)}.

Proposition 2.3.7. Let X be a Demazure subset of a highest weight crystal B. Then on the level of characters, we have

char(D_i(X)) = π_i(char(X)).   (2.3.6)

Proof. For a subset X ⊆ B to be a Demazure crystal, we must have a decomposition

X ≅ B^{w^{(1)}}_{λ^{(1)}} ⊔ ··· ⊔ B^{w^{(k)}}_{λ^{(k)}},
char(X) = char(B^{w^{(1)}}_{λ^{(1)}}) + ··· + char(B^{w^{(k)}}_{λ^{(k)}}),

for some partitions λ^{(1)}, ..., λ^{(k)} and permutations w^{(1)}, ..., w^{(k)}, where k is the number of connected components of the crystal. The Demazure operator D_i acts on each connected component separately, and so factors through the decomposition. Thus it suffices to show for λ a partition and w a permutation, we have char(D_i(B^w_λ)) = π_i(char(B^w_λ)), which is precisely what Littelmann shows in [41].
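Since Proposition 2.3.7 lets us pass from Demazure operators on crystals to the operators π_i on characters, it is convenient to have ∂_i and π_i from (2.2.2)–(2.2.3) available symbolically. The sketch below (ours, written with sympy and not part of the original text) encodes them and reproduces the character computed after Figure 2.4 via Magyar's recurrence.

```python
# A small sympy check of the operators d_i and pi_i(f) = d_i(x_i f), together
# with the Magyar-recurrence computation carried out for the diagram of Fig. 2.4.
from sympy import symbols, cancel, expand

x = symbols('x1:5')  # x[0], x[1], x[2], x[3] stand for x_1, ..., x_4

def divided_difference(f, i):
    """d_i(f) = (f - s_i.f) / (x_i - x_{i+1}), where s_i swaps x_i and x_{i+1}."""
    swapped = f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)
    return cancel((f - swapped) / (x[i - 1] - x[i]))

def pi(f, i):
    """Demazure (isobaric divided difference) operator pi_i(f) = d_i(x_i f)."""
    return expand(divided_difference(x[i - 1] * f, i))

# char(S^flag_D) = x1 x2 * pi_1(pi_2(x1 x2 * (x1 * 1))) for the running example.
char_D = expand(x[0] * x[1] * pi(pi(x[0] * x[1] * x[0], 2), 1))
expected = (x[0]**3 * x[1]**2 + x[0]**3 * x[1] * x[2] + x[0]**2 * x[1]**3
            + x[0]**2 * x[1]**2 * x[2] + x[0] * x[1]**3 * x[2])
assert expand(char_D - expected) == 0
```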
Thus our approach to proving (K3) is to show KD(D) = D_i(KD(s_i D)) and take characters using Proposition 2.3.7. For this, we need the Demazure crystal on Kohnert diagrams constructed in [8].

2.3.2 Kohnert crystals

Assaf [8] defined a crystal structure on diagrams that intertwines the crystal operators on tableaux via the injective, weight-reversing map from KD(a) to SSYT_n(sort(a)) defined by Assaf and Searles [5, Def 4.5].

Definition 2.3.8 ([8]). For T a diagram and 1 ≤ i < n, we i-pair the cells of T in rows i, i+1 iteratively by i-pairing an unpaired cell in row i+1 with an unpaired cell in row i weakly to its left whenever all cells in rows i or i+1 lying between them are already i-paired.

Definition 2.3.9 ([8]). For 1 ≤ i < n, the Kohnert raising operator e_i acts on a diagram T as follows:

• if all cells in row i+1 of T are i-paired, then e_i(T) = 0;
• otherwise, e_i moves the rightmost unpaired cell in row i+1 to row i.

Note that a Kohnert move does not, in general, coincide with a Kohnert raising operator, and vice versa; for instance, a Kohnert move may move a cell up multiple rows. See Figure 2.9 for an example of the converse, where a Kohnert raising operator moves a cell which is not at the end of its row.

Definition 2.3.10 ([8]). For 1 ≤ i < n, the Kohnert lowering operator f_i acts on a diagram T as follows:

• if all cells in row i of T are i-paired, then f_i(T) = 0;
• otherwise, f_i moves the leftmost unpaired cell in row i to row i+1.

Figure 2.8: The Kohnert crystal on Kohnert diagrams for the bottom-left diagram, which is northwest.

Given the asymmetry between raising and lowering operators for Demazure crystals, we define Kohnert crystals using the former. Assaf [8, Thm 4.1.1] proves the Kohnert raising operators are closed within the set of Kohnert diagrams, provided the initial diagram is northwest.

Theorem 2.3.11 ([8]). For D a northwest diagram, if T ∈ KD(D) and e_i(T) ≠ 0, then e_i(T) ∈ KD(D).

We remark the analog of Theorem 2.3.11 for Kohnert lowering operators is false in general, though true under certain circumstances considered below. Nonetheless, for D northwest, Theorem 2.3.11 makes the following well-defined.

Definition 2.3.12 ([8]). For D a northwest diagram, the Kohnert crystal on D is the set KD(D) together with the Kohnert raising operators.

For example, Fig. 2.8 shows the Kohnert crystal for the bottom-left diagram. Notice there are two connected components, and the leftmost corresponds precisely with the rightmost Demazure crystal in Fig. 2.7. One can easily verify the rightmost component is also a Demazure crystal. Theorem 5.3.4 of [8] proves the Kohnert crystal for northwest diagrams is always a union of Demazure crystals.

Theorem 2.3.13 ([8]). For D a northwest diagram, the Kohnert crystal on KD(D) is a disjoint union of Demazure crystals. That is to say, there exist partitions λ^{(1)}, ..., λ^{(m)} and permutations w^{(1)}, ..., w^{(m)} such that

KD(D) ≅ B^{w^{(1)}}_{λ^{(1)}} ⊔ ··· ⊔ B^{w^{(m)}}_{λ^{(m)}}.

For example, from Fig. 2.8, we have the decomposition

KD(D) ≅ B^{4213}_{(2,1,1,0)} ⊔ B^{3124}_{(2,2,0,0)}.

To establish Theorem 2.2.17(K3) and prove our main result, by Proposition 2.3.7, it is enough to show that for D northwest with row r a subset of row r+1, we have

KD(D) = D_r(KD(s_r D)).   (2.3.7)

For the example in Fig.
2.8, row 2 is a subset of row 3, and the crystal for KD(s 2 D), which consists of the unshaded elements, satisfies D 2 (KD(s 2 D)) ∼ = D 2 B 4123 (2,1,1,0) ⊔B 3124 (2,2,0,0) = D 2 (B 4123 (2,1,1,0) )⊔D 2 (B 3124 (2,2,0,0) ) = B 4213 (2,1,1,0) ⊔B 3124 (2,2,0,0) = KD(D). To prove (2.3.7), we first show we have containments KD (s r D)⊆ KD(D)⊆ D r (KD(s r D)). The first containment was shown in Lemma 2.2.19. To prove the second, we have the following technical result, illustrated in Fig. 2.9. 26 Lemma 2.3.14. Let D be a diagram, and r a row index such that row r is contained in row r+ 1. Given T∈ KD(D), there exists S∈ KD(s r D) such that (1) S and T agree in all rows t̸= r,r+ 1; (2) if S,T differ in some column c at row r or r+1, then T has a cell only in row r+1 and S has a cell only in row r. (3) if, at rows r,r+ 1, S,T differ in column c and agree in some column b< c, and if, in column b, row r has a cell, so does row r+ 1. c c r r+1 g g gg g ggg ggg x g r r+1 g g ggg x g ggg gg g Figure 2.9: An illustration of T (left) and S (right) in Lemma 2.3.14. We postpone the proof of Lemma 2.3.14 until the end of this section, but use it to establish that KD(s r D), KD(D) and D r (KD(s r D)) all have the same number of connected components, each with the same highest weight. Theorem 2.3.15. Let D be a northwest diagram and r a row index such that row r is contained in row r+ 1. By Theorem 2.3.13, suppose the Kohnert crystals decompose into Demazure crystals (with minimal length permutations) as KD(s r D) ∼ = s G i=1 B u (i) µ (i) and KD(D) ∼ = t G i=1 B v (i) ν (i) . Then s= t, and for all i, we haveµ (i) =ν (i) and u (i) ⪯ v (i) in Bruhat order. Moreover, we have the containments KD(s r D)⊆ KD(D)⊆ D r (KD(s r D)). Proof. Each connected Demazure crystalB w λ has a unique highest weight element, say T , and T satisfies wt(T)=λ. Thus there exist elements T 1 ,...,T s ∈ KD(s r D) such that e i (T j )= 0 for all 27 i, j, and wt(T j )=µ ( j) . By Lemma 2.2.19, we have T j ∈ KD(D), and since the crystal operators are independent of the ambient diagram, each T j is also a highest weight for KD(D). For the converse, we claim that if e r (T)= 0 for T ∈ KD(D), then T ∈ KD(s r D). To see this, let S∈ KD(s r D) be as in Lemma 2.3.14. Suppose there exists a column c where T and S disagree. By condition (2), T has a cell x in row r+1 but not in row r of this column. Any column b< c that has a cell y in row r agrees with S by condition (2), hence by condition (3) also has a cell in row r+ 1 to which y must be r-paired, and so x cannot be r-paired to y. This contradicts the premise that e r (T)= 0 so we conclude T and S are identical, proving the claim. In particular, if T is a highest weight element of KD(D), then e r (T)= 0 and so T∈ KD(s r D). By Lemma 2.2.19 we have KD(s r D)⊆ KD(D) from which it follows thatB u (i) λ (i) ⊆ B w (i) λ (i) hence u (i) ⪯ w (i) . Finally, for any T∈ KD(D) let k be maximal such that e k r (T)̸= 0. Then using the above claim once again, we have e k r (T)∈ KD(s r D) which is to say T∈D r (KD(s r D)), demonstrating the second containment. Proof of Lemma 2.3.14. We proceed by induction on the number of Kohnert moves used to obtain T from D. For the base case, observe s r D is obtained by applying raising operators e r to D by the proof of Lemma 2.2.19, and so for T = D we may take S= s r D. Suppose for some T∈ KD(D), we have constructed S∈ KD(s r D) satisfying conditions (1)-(3). Suppose T ′ is obtained by a Kohnert move on T , say moving a cell x lying in column c. 
We construct S ′ by a (possibly trivial) sequence of Kohnert moves on S such that conditions (1)-(3) are satified for S ′ and T ′ . Case A: x not in rows r,r+ 1 of T . • Case A1: T ′ is obtained by moving x between rows that lie either strictly above row r or strictly below row r+ 1. By condition (1), S and T agree in all rows t̸= r,r+ 1, so we can perform the same Kohnert move on S to obtain S ′ . Since this Kohnert move does not move any cells in rows r or r+ 1, conditions (1)-(3) hold by induction. • Case A2: T ′ is obtained by moving x from row s> r+ 1 to row q< r. 28 In order to perform this Kohnert move, T must have cells in rows r and r+ 1 of column c. By condition (2) S and T must then agree in rows r and r+1 of column c. Thus by condition (1) we can perform the same Kohnert move on S to obtain S ′ , and conditions (1)-(3) once again hold by induction since this Kohnert move does affect any cells in rows r or r+ 1. • Case A3: T ′ is obtained by moving x from row s> r+ 1 to row r+ 1. Since x moves into row r+1 from T to T ′ , condition (2) implies that S and T agree in column c; otherwise, T would have a cell in row r+ 1, column c, which would block x from moving into row r+ 1. Then by condition (1) we can again perform the same Kohnert move on S to obtain S ′ . In particular, S ′ and T ′ agree in column c, so conditions (1) and (2) immediately follow by induction. Since S ′ and T ′ both have a cell in row r+ 1, column c, the implication of condition (3) is satisfied as well. • Case A4: T ′ is obtained by moving x from row s> r+ 1 to row r. Here, S and T may not agree in column c by condition (2). If this is the case, then in rows r and r+ 1 of column c, T has a cell only in row r+ 1 while S has a cell only in row r. However, after moving x, T ′ has cells in rows r and r+ 1 of column c. Thus in S, we can move x up to row r+ 1 so that S ′ and T ′ both have cells in rows r and r+ 1 of column c. If, on the other hand, S and T do agree in column c, then we can perform the same Kohnert move on S which we performed on T to obtain the same result: S ′ and T ′ both have cells in rows r and r+ 1 of column c, and therefore the columns are identical in either case by condition (1). Conditions (1) and (2) again follow by induction, and condition (3) holds because S ′ and T ′ both have cells in rows r and r+ 1 of column c. Case B: x in row r+ 1 of T . • Case B1: S and T agree in column c. In this case we can perform the same Kohnert move on S to obtain S ′ . Conditions (1) and (2) then hold by induction. For condition (3), notice that since x is able to move out of row 29 r+ 1 from T to T ′ , there can be no columns right of c in which S and T differ; otherwise by condition (2) there would be a cell in row r+ 1 of T which blocks x from moving. Thus condition (3) holds vacuously for every column weakly right of c, and holds by induction for every column left of c. • Case B2: S and T differ in column c. By condition (2), T has a cell only in row r+ 1, while S only has a cell in row r, in column c. In this case x lies in row r in T ′ , and we define S ′ = S so that S and T agree in column c. Conditions (1) and (2) hold by induction. Condition (3) also holds by induction because, as we noted above, there can be no columns right of c in which S and T differ. Case C: x in row r of T . By condition (2), S and T must agree in column c. • Case C1: S and T agree in all columns right of c. In this case, we can perform the same Kohnert move on S to obtain S ′ so that S ′ and T ′ agree in column c. 
Then conditions (1) and (2) hold by induction. By assumption column c is right of all columns in which S and T differ, so condition (3) also holds by induction. • Case C2: S and T differ in some column right of c. Here S and T both have cells in row r, column c and there exists some column d > c in which S and T differ. By condition (3), S and T must both have cells in rows r and r+ 1 of column c. Furthermore T has no cells in row r right of column c — otherwise x is blocked from moving — and this implies that there is at most one cell in rows r and r+1 per column right of c in S. Indeed, for any column d> c either (i) S and T differ in rows r and r+ 1 of column d, and condition (2) implies that S has a single cell in row r, column d, or (ii) S and T agree in rows r and r+ 1 of column d, in which case S has no cells in row r, column d, since T does not either. 30 We define S ′ as follows: first, move every cell in row r+1 of S which lies in a column strictly right of c into row r via a sequence of Kohnert moves to obtain an intermediate diagram ˜ S. Let y be the cell in row r+ 1, column c of S (which is in the same position in ˜ S). Then perform a Kohnert move on ˜ S by moving y to obtain S ′ . By construction, y lands in the same position in S ′ as x lands in T ′ . Condition (1) holds as the only new cell introduced outside of rows r and r+ 1 lies in the same position in S ′ and T ′ . Condition (2) holds in all columns left of c by induction, and we now show it holds in all columns weakly right of c. In rows r and r+1 of column c, T ′ has a cell only in row r+ 1 and S ′ has a cell only in row r, so condition (2) holds. For any column d> c, we have two possibilities: (i) S and T differ in rows r and r+ 1. Then S has a cell in row r, column d, and this cell is in the same place in S ′ . Since T and T ′ agree right of column c, condition (2) holds. (ii) S and T agree in rows r and r+ 1. Without loss of generality column d is nonempty, so that S and T have a cell only in row r+ 1 of column d. Then in S ′ this cell is moved to row r, column d, while it stays fixed in T ′ . Therefore S ′ and T ′ differ in column d in rows r and r+ 1; T ′ has a cell only in row r+ 1 and S ′ has a cell only in row r, so condition (2) is satisfied. We have just shown that S ′ and T ′ differ in rows r and r+ 1 in all columns right of c in accordance with condition (2). Since we assumed that S and T differ in some column right of c, condition (3) holds by induction. Thus S ′ is constructed so that (1)-(3) hold. The result follows by induction. 2.3.3 Kohnert tableaux Since we have established KD(s r D)⊆ KD(D)⊆ D r (KD(s r D)) 31 by Theorem 2.3.15, in order to prove the latter two sets are equal, we will showD r (KD(D))= KD(D) whenever D is northwest with row r a subset of row r+ 1. This amounts to showing the lowering analog of [8, Thm 4.1.1], which is straightforward for the composition case. Lemma 2.3.16. For a composition a with 0≤ a r ≤ a r+1 , we haveD r (KD(D(a)))= KD(D(a)). Proof. Letλ be the partition rearrangement of a, and let w be the minimal length permutation such that w· λ = a. Then KD(D(a)) ∼ =B w λ . If a r+1 = a r , then w≺ s r w in Bruhat order. By (2.3.5), we haveD r (B w λ )=B s r w λ =B w λ , where the latter holds because s r w· λ = w· λ. If a r+1 > a r , then s r w≺ w in Bruhat order. In particular, w has a reduced expression of the form s r s i k ··· s i 1 . Then we have KD(D(a)) ∼ =B w λ ∼ =D r D i k ··· D i 1 ({u λ }). The result follows from the observationD 2 r (X)=D r (X) for any X. 
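Since everything that follows is phrased in terms of the sets KD(D), a small computational sketch may help fix ideas. The Python snippet below is only an illustration, with function names of my own choosing, and it assumes the usual Kohnert move from Section 2.2.3: the rightmost cell of a chosen row jumps up to the closest empty position above it in its column.

```python
def kohnert_moves(diagram):
    """Yield every diagram obtained from `diagram` by one Kohnert move.

    A diagram is a frozenset of (row, col) cells in matrix convention (row 1 on
    top).  Assumed move: the rightmost cell of a chosen row jumps up to the
    closest empty position above it in the same column."""
    for r in {row for row, _ in diagram}:
        c = max(col for row, col in diagram if row == r)   # rightmost cell of row r
        free = [rr for rr in range(r - 1, 0, -1) if (rr, c) not in diagram]
        if free:
            yield frozenset(diagram - {(r, c)} | {(max(free), c)})

def kohnert_diagrams(diagram):
    """Return KD(D): the closure of {D} under Kohnert moves."""
    seen, frontier = {frozenset(diagram)}, [frozenset(diagram)]
    while frontier:
        for U in kohnert_moves(frontier.pop()):
            if U not in seen:
                seen.add(U)
                frontier.append(U)
    return seen

# For the key diagram D(0,2) = {(2,1),(2,2)} this yields three diagrams, whose
# row weights give the three monomials of the Demazure character for (0,2).
print(len(kohnert_diagrams(frozenset({(2, 1), (2, 2)}))))   # 3
```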
To prove KD(D) = D_r(KD(s_r D)) for any northwest diagram D, we leverage labelings of Kohnert diagrams introduced in [5] and generalized in [8]. Unlike tableaux, Kohnert diagrams do not have labels for their cells. In this section we consider a canonical labeling of the cells of a diagram that allows one to determine if a given diagram belongs to a set KD(D) for a particular northwest diagram D. Assaf and Searles [5, Def 2.5] give an algorithm by which the cells of a diagram T are labeled with respect to a weak composition a in order to determine whether or not T ∈ KD(D(a)), for D(a) the key diagram of a.

Definition 2.3.17 ([5]). For a weak composition a and a diagram T ∈ KD(D(a)), the Kohnert labeling of T with respect to a, denoted by L_a(T), assigns labels to cells of T as follows. Assuming all columns right of column c have been labeled, assign labels {i | a_i ≥ c} to cells of column c from top to bottom by choosing the smallest label i such that the i in column c+1, if it exists, is weakly higher.

For example, the columns of the diagram in Figure 2.10 are labeled with respect to (3,0,5,1,4), from right to left, with entries {3}, {3,5}, {1,3,5}, {1,3,5}, {1,3,4,5}.

Figure 2.10: The labeling algorithm applied to a Kohnert diagram for D(3,0,5,1,4).

To handle the ambiguity where a given diagram is a Kohnert diagram for multiple weak compositions, Assaf and Searles [5, Def 2.3] define Kohnert tableaux to be labelings that carry the information of the weak composition as well.

Definition 2.3.18 ([5]). For a weak composition a of length n, a Kohnert tableau of content a is a diagram labeled with 1^{a_1}, 2^{a_2}, ..., n^{a_n} satisfying
(i) there is exactly one i in each column from 1 through a_i;
(ii) each entry in row i is at least i;
(iii) the cells with entry i weakly ascend from left to right;
(iv) if i < j appear in a column with i below j, then there is an i in the column immediately to the right of and strictly below j.

Assaf and Searles [5, Thm 2.8] prove for any diagram T and weak composition a for which T and D(a) have the same column weight, T ∈ KD(D(a)) if and only if L_a(T) is a Kohnert tableau. Thus the labeling algorithm provides the bijection between Kohnert tableaux of content a and Kohnert diagrams for a. In fact, the algorithm ensures conditions (i), (iii), and (iv) always hold, so we emphasize the missing condition with the following.

Definition 2.3.19. A labeled diagram is flagged if entries in row i are at least i.

Thus we may restate [5, Thm 2.8] as follows.

Proposition 2.3.20 ([5]). Given a diagram T with the column weight of D(a) for a weak composition a, T ∈ KD(D(a)) if and only if L_a(T) is flagged.

To state the analogous definitions for northwest diagrams, we use rectification [8, Def 4.2.4]. Rectification maps any diagram to one that is a Kohnert diagram for a composition by transposing the diagram, applying all possible Kohnert lowering operators (in any order), and then transposing back. Rectification is a powerful tool for studying Kohnert crystals precisely because it commutes with the crystal operators, in the sense that for any diagram T we have e_r(R_c(T)) = R_c(e_r(T)), where R_c is the rectification operator on column c [8, Thm 4.2.5].
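As an aside, the column-by-column labeling of Definition 2.3.17 and the flagged test of Definition 2.3.19 are easy to carry out by machine. The sketch below (in Python, with names of my own choosing) follows the definitions literally; by Proposition 2.3.20 the two routines together decide membership in KD(D(a)) for a key diagram D(a).

```python
def kohnert_labeling(T, a):
    """Kohnert labeling L_a(T) of Definition 2.3.17 (a sketch; names are mine).

    T is a set of (row, col) cells in matrix convention and a is a weak
    composition given as a list, a[i-1] being the i-th part.  Columns are
    labeled right to left; in column c the labels {i : a_i >= c} are placed top
    to bottom, each cell taking the smallest unused label i whose occurrence in
    column c+1, if any, lies weakly above it.  Raises ValueError if the greedy
    choice fails, which by [5, Thm 2.8] cannot happen when T is in KD(D(a))."""
    labeling, right = {}, {}                 # right: label -> its row in column c+1
    for c in range(max((col for _, col in T), default=0), 0, -1):
        col_cells = sorted(r for r, col in T if col == c)        # top to bottom
        available = [i for i in range(1, len(a) + 1) if a[i - 1] >= c]
        used = set()
        for r in col_cells:
            i = min(j for j in available
                    if j not in used and (j not in right or right[j] <= r))
            labeling[(r, c)] = i
            used.add(i)
        right = {labeling[(r, c)]: r for r in col_cells}
    return labeling

def is_flagged(labeling):
    """The flagged condition of Definition 2.3.19: every entry in row r is at least r."""
    return all(label >= r for (r, _), label in labeling.items())
```

Rectification, described above, is what allows this key-diagram labeling to be extended to arbitrary northwest diagrams.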
Moreover, rectification facilitates a generalization of the labeling algorithm and Kohnert tableaux, for which we present a modified version of rectification that takes labels into account [8, Def 5.1.5]. As with crystal operators, we first present a pairing rule [8, Def 5.1.4].

Definition 2.3.21 ([8]). Let T be a diagram with labeling L. Given a column index c, the label c-pairing of T with respect to L is defined on cells of column c+1 as follows: assuming all cells below x in column c+1 have been label paired, label pair x with the cell weakly below it in column c with the largest label that is weakly smaller than L(x), if it exists, and otherwise leave x unpaired.

It follows from the label pairing rule (and its equivalence to the column pairing rule for rectification proved in [8, Lem 5.2.3]) and [5, Lemma 2.2] that a diagram is a Kohnert diagram for some weak composition if and only if for all i, all cells in column i+1 are label i-paired. We call any such diagram rectified. In particular, notice a cell is unpaired only if there is no cell immediately to its left.

Definition 2.3.22 ([8]). Let T be a diagram with labeling L. Given a column c, the rectified labeling rect_c(L) of the rectified diagram rect_c(T) is constructed recursively as follows. Let x_1, ..., x_m be the cells in column c+1 of T that are not label c-paired, taken from lowest to highest. Initially set L′ = L. Then
• for i from 1 to m, if there exist label c-paired cells y in column c and z in column c+1 with z below x_i such that L(y) ≤ L′(x_i) < L′(z), then choose z so that L′(z) is maximal and swap the labels of x_i and z;
• for every cell z in column c+1, if z is label c-paired with y in column c, then set L′(z) = L(y).
Move all unpaired cells in column c+1 left to column c, maintaining their rows, and set rect_c(L) to be the labeling of rect_c(T) obtained by maintaining labels from L′ as cells move left.

For examples of label rectification, see Fig. 2.11.

Figure 2.11: Examples of label rectification.

While we will use individual steps in this procedure, we are also interested in the fully rectified diagram obtained as follows.

Definition 2.3.23 ([8]). Let T be a diagram with labeling L. The rectified labeling rect(L) of the rectified diagram rect(T) is constructed by applying rect*_c = rect_m ∘ rect_{m−1} ∘ ··· ∘ rect_c for c from m to 1, where m+1 is the rightmost occupied column of T.

For example, consider the leftmost diagram in Fig. 2.12, where we omit rect*_c when it acts trivially. Notice the fully rectified diagram matches Fig. 2.10, which is a Kohnert diagram for a composition.

Figure 2.12: The label rectification of a diagram.

Given diagrams T, D of the same column weight, following [8, Def 5.1.8], we label T with respect to D by a greedy algorithm that uses rectification to follow the Assaf–Searles labeling algorithm.
For this, given a diagram T and a column c, partition T at column c by T ≤ c ⊔T >c , where 35 the former contains cells in columns weakly left of c, and the latter contains cells in columns strictly right of c. Definition 2.3.24 ([8]). For diagrams T,D of the same column weight, construct the Kohnert labeling of T with respect to D, denoted byL D (T), as follows. Once all columns of T right of column c have been labeled, set T ′ = T ≤ c ⊔ rect(T >c ), where the leftmost occupied column of the latter is c+ 1. Bijectively assign labels {r| D has a cell in column c, row r} to cells in column c of T from smallest to largest by assigning label r to the topmost unlabeled cell x such that if there exists a cell z in column c+1 of rect(T >c ) with label r, then x lies weakly below z. For example, we label the middle diagram in Fig. 2.13 with respect to the left diagram, which is northwest, to get the labeling on the right. When labeling column c, we must first label rectify columns c+ 1 through 7, which is done in Fig. 2.12. g 1 g 1 g 3 g 3 g 3 g 4 g 4 g 5 g 5 g 5 g 5 g 6 g 6 ggg g g gg g gg g gg g 1 g 1 g 4 g 5 g 4 g 3 g 5 g 6 g 3 g 3 g 6 g 5 g 5 Figure 2.13: Left to right: a northwest diagram D given the row labeling, a diagram T with the same column weight as D, and the Kohnert labeling of T with respect to D. Generalizing the characterization for left-justified diagrams to northwest diagrams, combining Theorems 5.2.2 and 5.2.4 from [8] proves the following. Theorem 2.3.25 ([8]). For D a northwest diagram,L D is well-defined and flagged for T if and only if T∈ KD(D). By Theorem 2.3.25, we must showL D is well-defined and flagged for f r (T) whenever T ∈ KD(D) for D northwest with row r a subset of row r+ 1. Using Proposition 2.2.10, it is enough to consider the case when row r is empty. 36 Theorem 2.3.26. Let D be a northwest diagram such that row r is empty. For T ∈ KD(D) with f r (T)̸= 0, f r (T)∈ KD(D). That is,D r (KD(D))= KD(D). Proof. Let U = f r (T). By Theorem 2.3.25 we haveL D (T) is well-defined and flagged. By [8, Lemma 5.3.3], we knowL D (U) is well-defined as well. By [8, Thm 5.3.2] rect (L D (T))= L a (rect(T)) and rect(L D (U))=L b (rect(U)) for some compositions a and b, with the former a Kohnert tableau with respect to a sinceL D (T) is flagged. This is to say that rect (T)∈ KD(D(a)). Using [8, Lemma 5.3.3] once more we also note that a= b. Since row r of D is empty, we have a r = 0. From Lemma 2.3.16, we have f r (rect(T))∈ KD(a). Since crystals moves commute with rectification [8, Thm 4.2.5] we write rect (U)∈ KD(a) which impliesL a (rect(U))= rect(L D (U)) is flagged. Rectification does not affect the flagged condition [8, Lemma 5.3.1] so we finally have L D (U) is flagged and so U∈ KD(D) by [8, Thm 5.2.4]. 2.4 Recurrence for Kohnert polynomials We can now establish the third and final term in Magyar’s recurrence, and so prove Theorem 2.2.17, restated below for convenient reference. Theorem 2.2.17. Kohnert polynomials of northwest diagrams satisfy the following: (K1) K ∅ = 1; (K2) if the first column of D is exactly C k , then K D = x 1 x 2 ··· x k K D− C k ; (2.4.1) (K3) if every cell in row r has a cell in row r+ 1 in the same column, then K D =π r (K s r D ). (2.4.2) 37 Proof. We have observed (K1) by definition, and (K2) is proved in Lemma 2.2.18. Assume now row r is empty. By Theorem 2.3.26, we have D r (KD(D))= KD(D). 
Thus applying D_r to the nested sequence in Theorem 2.3.15 gives

D_r(KD(s_r D)) ⊆ D_r(KD(D)) ⊆ D_r(D_r(KD(s_r D))) = D_r(KD(s_r D)),

and so D_r(KD(s_r D)) = KD(D). By Proposition 2.3.7, we have

K_D = char(KD(D)) = char(D_r(KD(s_r D))) = π_r(char(KD(s_r D))) = π_r(K_{s_r D}).

Thus (2.4.2) holds. By Proposition 2.2.10, we have char(S^flag_D) and K_D satisfy the same recurrence, and so must coincide. Finally, (M3) for char(S^flag_D) implies (K3) for K_D.

2.4.1 Corollaries

Theorem 2.2.17 has several corollaries, the first being (2.1.1).

Corollary 2.4.1. For D a northwest diagram, we have char(S^flag_D) = K_D = ∑_{T ∈ KD(D)} x_1^{wt(T)_1} ··· x_n^{wt(T)_n}.

Definition 2.2.3 gives a spanning set for the flagged Schur modules indexed by flagged tableaux. However, just as the indexing set of the basis for the Schur modules in Definition 2.2.2 is the subset of semistandard Young tableaux, it would be nice to have an explicit subset of flagged tableaux such that the corresponding ∆_T give a basis for the flagged Schur module. Indeed, by Corollary 2.4.1, we also have such a combinatorial model for a basis. The subset of flagged tableaux consists precisely of those obtained from the Kohnert tableaux by moving each cell of the Kohnert diagram to the row indicated by its entry and changing the entry to the row index from which it came; see Fig. 2.14.

Figure 2.14: The labeled Kohnert diagrams for D the leftmost diagram and the corresponding flagged tableaux indexing the basis.

Corollary 2.4.2. For D a northwest diagram, a basis for S^flag_D is given by {∆_T | D(T) ∈ KD(D)}, where D(T) is the diagram with a cell in row r, column c if and only if T has a cell in column c with entry r.

By [8, Thm 5.3.4], the Kohnert crystal is a disjoint union of Demazure crystals, the characters of which are Demazure characters, thus giving a new proof of the positivity shown in [56, Thm 20]. By [8, Def 6.1.2], a Kohnert diagram for D is Yamanouchi if its labeling rectifies to a composition diagram with every cell in row i labeled by i. For example, the left and right diagrams in Fig. 2.14 are the only two Yamanouchi Kohnert diagrams. By [8, Thm 6.1.3],

K_D = ∑_{T ∈ YKD(D)} κ_{wt(T)},  (2.4.3)

where YKD(D) is the set of Yamanouchi Kohnert diagrams for D.

Corollary 2.4.3. For D a northwest diagram, char(S^flag_D) is a nonnegative sum of Demazure characters. In particular, char(S^flag_D) = ∑_{T ∈ YKD(D)} κ_{wt(T)}.

Considering the case when D is the Rothe diagram of a permutation, Kraskiewicz and Pragacz [36] showed the character of the flagged Schur module is a Schubert polynomial, and so we give another proof of Kohnert's rule for Schubert polynomials, which is proved bijectively in [7].

Corollary 2.4.4. For D(w) the Rothe diagram of a permutation w, K_{D(w)} equals the Schubert polynomial S_w.

2.4.2 Percent avoiding diagrams

Magyar's recurrence, Theorem 2.2.9, holds for the more general class of %-avoiding diagrams. A natural question is whether our result holds for this more general class. The answer is a resounding no.

Definition 2.4.5. A diagram D is %-avoiding if whenever (j,k), (i,l) ∈ D with i < j and k < l, then either (i,k) ∈ D or (j,l) ∈ D.

Notice every northwest shape is, in particular, %-avoiding.

Lemma 2.4.6. Let z be a cell in row s of a diagram D. Let U be the subdiagram of D consisting of the rows strictly above s and weakly below some r < s, plus the cells right of z inclusive in row s.
If U is northwest and there exists a cell in row r in the same column as z, then there exists no T∈ KD(D) with wt(T)= wt(D)+ Kα r,s where K is the number of cells right of z inclusive in row s. Proof. Assume we can construct such a diagram T from Kohnert moves on D. Notice Kohnert moves cannot move a cell to a row strictly above r or from a row strictly below row s since we must preserve the weights of rows strictly above r and also those strictly below s. Thus those two regions coincide for D and T . Let c denote the column containing z. Suppose there is a cell x in row k column b≤ c of U but that this position is vacant in T . Then x must have been moved by a Kohnert move so the number of cells in column b strictly above k must be strictly greater in T than in D. In particular, there must be some cell y in row k ′ with r≤ k ′ < k column b of T whose position is vacant in D. The northwest condition on x and the cell in row r column c, both lying in U, imply there is a cell in row r column b of D, so in fact r< k ′ < k. Furthermore, the presence of x along with the vacancy of row k ′ column b in D imply that row k ′ has no cells right of b in U, hence in D which has an identical row k ′ . 40 To preserve the row weight of row k ′ between D and T , there must be a column b ′ < b such that row k ′ column b ′ has a cell x ′ in D but not in T . If we replace x,k, and b with x ′ , k ′ , and b ′ respectively and we repeat the argument then it never terminates, implying our diagram T does not actually exist. We know that in order to get from D to T , precisely K cells must be removed from row s with none more entering. These must be the rightmost K cells in s which includes z. Thus, we begin the above contradictory process by letting x= z. Theorem 2.4.7. If D is %-avoiding but not northwest, then char(S flag D )̸=K D . Proof. Pick rows r and s with r< s that exhibit a violation of the northwest condition and are minimally separated. That is, the only violations of the northwest condition in the subdiagram consisting of rows weakly between r and s occur with cells in the rows r and s themselves. Let y be the rightmost cell in row s for which there is a cell strictly right of its column c in row r, but not in column c of row r. Let K be the number of cells of D in row s weakly right of y for which there is not a corresponding cell in the same column of row r. By Proposition 2.2.5 we have that x wt(D)+Kα r,s is a monomial in the monomial expansion of char(S flag D ). It now suffices to show that x wt(D)+Kα r,s is not a monomial inK D . Let z be Kth cell from the right in row s. Since D is %-avoiding and there is not a cell in the same column as y of row r, for any cell in row r right of y, of which there is at least one by assumption, there is a corresponding cell in the same column of row s. This corresponding cell in row s is not one of those counted in the definition of K. Therefore the number of cells weakly right of y in row s is strictly greater than K which is to say that z lies strictly to the right of y. Now let U be the subdiagram of D consisting only of the rows strictly above s weakly below r, and additionally the cells weakly right of z in row s. By choice of r,s and y we know U is northwest. Moreover, if there were no cell in row r of the same column as z, then since U is northwest there would be no cells right of this column in row r in either U or D. 
However, this means the K cells weakly right of z are precisely those cells weakly right of y that do not have a corresponding cell in the same column of row r. This is impossible since y is included in this count but lies left of z. Thus 41 there must be a cell in row r in the same column as z. By Lemma 2.4.6, there is no T ∈ KD(D) with wt(T)= wt(D)+ Kα r,s , and so x wt(D)+Kα r,s does not appear in the monomial expansion of K D . 42 Chapter 3 A crystal analysis of P-arrays 3.1 Introduction The chromatic symmetric function generalizes the chromatic polynomial of a graph [61]. The Stanley-Stembridge Conjecture [65, Conjecture 5.5] states that the chromatic symmetric function of the incomparability graph of a (3+1)-free poset is a positive sum of elementary symmetric functions. The conjecture has long motivated research into the chromatic symmetric functions of these graphs. In particular, Gasharov showed the weaker result that these symmetric functions are a posi- tive sum of Schur functions, or are “s-positive” [25]. This was accomplished by reinterpreting the chromatic symmetric functions as generating functions for P-arrays, a combinatorial object corre- sponding to proper colorings. Moreover, the coefficients of the expansion into Schur functions can be stated in terms of P-arrays. Stanley expressed the desire for a direct bijective proof of Gasharov’s result, which would generalize the Robinson-Schensted correspondence [63]. A better understanding of Gasharov’s result could help in addressing the natural suspicion that it holds for the larger class of claw-free graphs [63, Conjecture 1.4]. Some thought has gone towards finding such a Robinson-Schensted correspondence for P-arrays [45],[67],[16],[32] but the problem in its full generality remains open. 43 Among the immediate consequences of this hypothetical correspondence would be a way to inter- pret each Schur function in the chromatic symmetric function as an explicit generating function for P-arrays in its own right. In this paper we similarly refine the chromatic symmetric function as a sum of smaller s- positive generating functions. Crystals were introduced by Kashiwara [30] and highest weight crystals have served as a paradigm for interpreting and achieving s-positivity results due to their characters being Schur functions. So-called normal crystals have characters that are symmetric, but not necessarily s-positive. We define normal crystals on the set of P-arrays which, nevertheless, do have s-positive characters. The structure determines a sign-reversing involution, similar to the one in Gasharov’s proof, which we use to show that the characters of the components are s-positive. This refines the s-positivity theorem of Gasharov. Guay-Paquet reduced the Stanley-Stembridge Conjecture to the case of unit interval orders [26]. More recently, Shareshian and Wachs defined a quasisymmetric generalization of the chro- matic symmetric function which, when applied to incomparability graphs of natural unit interval orders, suggests a strengthened version of the conjecture and yields an analogously refined s- positivity theorem [59]. We show that our construction is also applicable to this quasisymmetric setting, further refining the s-positivity theorem. Shareshian and Wachs use an involution similar to Gasharov’s, but more efficient in some sense. By the same measure, our involution is optimal. 
Our goals are similar to [32] which, subject to additional poset avoidance conditions that are conjectured to be unnecessary, refines s-positivity of chromatic quasisymmetric functions for nat- ural unit interval orders. The paper also presents a Robinson-Schensted correspondence in this case. In comparison, our work more generally deals with underlying (3+1)-free posets without additional constraints. We begin in Section 3.2 by reviewing the necessary context around P-arrays and the chromatic symmetric function. In Section 3.3 we explain what we mean by a crystal and discuss the diagram crystal defined in [8]. In Section 3.4 we define the r alignment of a P-array which is a sort of “pair- ing rule” that allows us to define the normal crystal on P-arrays in Section 3.5. The s-positivity 44 property of this crystal is proved in Section 3.6, and its applicability to the chromatic quasisym- metric function for natural unit interval orders demonstrated in Section 3.7. Finally, in Section 3.8 we push our construction further by developing a highest weight crystal interpretation for the Schur functions in the chromatic symmetric function when they correspond to partitions of length 1 or 2. This last development relies heavily on the diagram crystal in [8], and we are interested to see if it can be extended to partitions of greater lengths. There is also the question of whether a structure like ours can be defined on the proper colorings of more general claw-free graph, which would prove s-positivity of their chromatic symmetric functions. 3.2 Chromatic Symmetric Functions A proper coloring of a graph G=(V,E) is a map κ : V →Z + such that κ(v)̸=κ(w) whenever (v,w)∈ E. An independent subset of V contains no two elements that form an edge. Proper colorings κ : V →Z + are then equivalently defined by requiring that κ − 1 (i) is an independent subset for all i∈Z + . Definition 3.2.1. [61] The chromatic symmetric function of a finite graph G is given by X G (x)= ∑ κ ∏ v∈V x κ(v) where the sum is over all proper colorings of G. It is evident that these functions are indeed symmetric. Given a poset(P,≤ P ), its incomparability graph, denoted inc(P), has vertex set P and edges (u,v) whenever u and v are incomparable in P, henceforth denoted by u∥ v. See Fig. 3.1 for an example of a poset and its incomparability graph. Notice that independent subsets of inc(P) are precisely the chains in P. That is, there is a canonical ordering on any independent subset. This 45 observation motivates the following definition of objects that correspond to proper colorings of the incomparability graph. a b c d e f g h a b c d e f g h Figure 3.1: The Hasse diagram (left) and incomparability graph (right) of a (3+1)-free poset. Definition 3.2.2. [25] Let(P,≤ P ) be a finite poset. A P-array is an indexing{A i, j } of the elements of P such that if A i, j is defined with j> 1 then A i, j− 1 is also defined and A i, j− 1 < P A i, j . We letA P denote the set of P-arrays. We visualize a P-array{A i, j } as the placement of the elements of P on a grid indexed by matrix convention, where A i, j is placed in position(i, j). Fig. 3.2 shows two P-arrays for the poset in Fig. 3.1. h d a f e g b c a b c d f g h e Figure 3.2: Two P-arrays with the one on the right being a P-tableau. Given a P-array{A i, j }, the corresponding proper coloring ofκ : P→Z + of inc(P) is defined by κ(A i, j )= i. 
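As a quick data-structure aside (the names below are my own, not from the text), a P-array is naturally stored as a list of rows, in which case the defining condition of Definition 3.2.2 and the coloring A_{i,j} ↦ i just described are immediate to compute; the inverse construction is recalled next.

```python
def is_P_array(rows, less_than):
    """Definition 3.2.2: each row, read left to right, must be a chain of P.

    rows[i-1] lists the entries A_{i,1}, A_{i,2}, ... of row i (so the
    left-justification in the definition is automatic), and less_than(x, y)
    decides the strict relation x <_P y."""
    return all(less_than(row[j - 1], row[j])
               for row in rows for j in range(1, len(row)))

def coloring(rows):
    """The proper coloring of inc(P) attached to a P-array: kappa(A_{i,j}) = i."""
    return {x: i for i, row in enumerate(rows, start=1) for x in row}
```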
Given a proper coloring κ : P→Z + , the corresponding P-array is obtained by 46 letting A i,1 ,A i,2 ,...,A i,k be the elements of κ − 1 (i) in increasing order. The weight of a P-array A is the weak composition wt(A) whose ith part is the number of elements in row i. If κ is the corresponding proper coloring then wt(A) i = #κ − 1 (i) so we realize that X inc(P) = ∑ A∈A P ∏ i≥ 1 x wt(A) i i . Developing these combinatorial objects further, we have the following definition. Definition 3.2.3. [25] Let (P,≤ P ) be a poset. A P-tableau is a P-array with the additional con- straint that whenever A i, j is defined with i> 1, then A i− 1, j is also defined and A i− 1, j ̸> P A i, j . The second P-array in Fig. 3.2 is a P-tableau. We see that the weight of a P-tableau is a partition in general, i.e. if T is a P-tableau then wt(T) i ≥ wt(T) i+1 for all i≥ 1. A poset is (a+b)-free, for natural numbers a and b, if it does not contain as an induced subposet the disjoint union of a a-chain with a b-chain. The poset in Fig. 3.1 is (3+1)-free for instance. Let s λ denote the Schur function indexed by a partitionλ. We say a symmetric function is s- positive if it is a sum of Schur functions. A graph is said to be s-positive if its chromatic symmetric function is. The following theorem by Gasharov now reveals the motivation behind the definition of P-tableaux. Theorem 3.2.4. [25] Let (P,≤ P ) be a (3+1)-free poset. Then X inc(P) =∑ λ c λ s λ where c λ is the number of P-tableaux of weightλ. In particular, inc(P) is s-positive. The theorem and its proof don’t give us an idea of which monomials (P-arrays) “contribute” to which Schur functions (P-tableaux) in any canonical way. Our primary goal in this paper is then to refine this result by developing a finer structure on P-arrays for a more localized explanation of s-positivity. 47 3.3 Crystals In this section we define what we mean by a crystal, as well as some associated terminology. We use the notationZ ∗ to represent the set ofZ-valued sequences with finitely many nonzero entries. For r∈Z + we letα r =(0,...,0,1,− 1,0,...)∈Z ∗ with the 1 as the rth entry. The following is a specialization of Kashiwara’s definition to type A. Definition 3.3.1. [30] A crystal is a setB, called the vertex set, together with maps • wt :B→Z ∗ called the weight map, • ε r ,ϕ r :B→Z for each r∈Z + , and • e r , f r :B→B∪{0} for all r∈Z + called the crystal raising and lowering operators. satisfying the following axioms: C1 ϕ r (b)− ε r (b)= wt r (b)− wt r+1 (b) for all r∈Z + and b∈B, C2 for b∈B with e r (b)̸= 0 we have wt(e r (b))= wt(b)+α r ,ε r (e r (b))=ε r (b)− 1, andϕ r (e r (b))=ϕ r (b)+ 1, C2’ for b∈B with f r (b)̸= 0 we have wt( f r (b))= wt(b)− α r ,ε r ( f r (b))=ε r (b)+ 1, andϕ r ( f r (b))=ϕ r (b)− 1, and C3 for b,b ′ ∈B we have e r (b)= b ′ if and only if f r (b ′ )= b. Notice that axioms C2 and C2’ are equivalent given axiom C3. Also, be warned that we will overload the symbols wt,ε r ,ϕ r ,e r , f r to be used on multiple vertex sets. However, there should be no ambiguity as it will be clear to which vertex set the maps are being applied. A crystal is normal if ε r (b)= max{k∈Z ≥ 0 : e k r (b)̸= 0} and ϕ r (b)= max{k∈Z ≥ 0 : f k r (b)̸= 0}. 48 In this case,ε r andϕ r are called the upper and lower string lengths. The character of a crystal with vertex setB is char(B)= ∑ v∈B ∏ r≥ 1 x wt(v) r r . Lemma 3.3.2. The character of a normal crystal is symmetric. Proof. LetB be a normal crystal. Let a∈Z ∗ and assume a r > a r+1 . 
It suffices to show there is a bijection between elements ofB with weight a and weight s r a where s r is the simple transposition (r r+ 1) acting onZ ∗ in the obvious way. Let b∈B with wt(b)= a. Normality implies the string lengths are non-negative. Then by C1 we haveϕ r (b)≥ a r − a r+1 . Since the crystal is normal, we have f a r − a r+1 r (b)̸= 0. By C2’ we have wt( f a r − a r+1 r (b))= wt(b)− (a r − a r+1 )α r = a− (a r − a r+1 )α r = s r a. So f a r − a r+1 r is a map from elements of weight a to elements of weight s r a. Symmetrically, e a r − a r+1 r is a map from elements of weight s r a to elements of weight a, and on these sets it is the inverse of f a r − a r+1 r by C3. A highest weight element of a crystal with vertex setB is some b∈B such that e r (b)= 0 for all r∈Z ≥ 0 . Stembridge gave a local axiomatization of the highest weight crystalsB λ indexed by partitionsλ, which are each associated with an irreducible representation of the general linear group [66]. We do not define these crystals, but we recall the important facts that B λ has a unique highest weight element, this element has weight λ, and char(B λ )= s λ . A more comprehensive description of crystals can be found in [14]. In the following sections we will define a normal crystal on A P for any (3+1)-free poset(P,≤ P ), with wt :A P →Z ∗ as already defined. We see that char (A P ) coincides with X inc(P) and is therefore s-positive by Theorem 3.2.4. We then generalize this result by showing that the character of any connected component of the crystal is s-positive. In Section 3.5 we define the operators e r and f r , and in Section 3.6 we prove the refined s-positivity result. 49 3.3.1 The Diagram Crystal For the remainder of this section we will discuss Assaf’s diagram crystal whose components are, in fact, highest weight crystals in the cases we care about [8]. This is not a prerequisite for un- derstanding anything up through Section 3.7. However, it may be a helpful example of a crystal with nice properties, and it is necessary for Section 3.8 where we partially succeed in pushing our refinement of Theorem 3.2.4 even further. By a diagram we mean a finite subset of Z + × Z + which we will interpret as positions in a grid using matrix convention. The elements of a diagram are referred to as its cells. We letD denote the set of diagrams. The weight map onD is defined so that wt(D) i is the number of cells in row i of D∈D. Definition 3.3.3. [8] Let D be a diagram and r∈Z + . The set of r-pairs of D is a set of disjoint pairs of cells between rows r and r+ 1 defined iteratively as follows. We say that two unpaired cells x and y in rows r and r+ 1 respectively with x weakly left of y form an r-pair whenever every other cell in rows r and r+ 1 in a column weakly between x and y is already part of an r-pair. Definition 3.3.4. [8] We define e r :D→D∪{0} on D∈D as follows. If every cell in row r+ 1 of D is r-paired then e r (D)= 0. Otherwise, take (r+ 1,c) to be the rightmost cell in row r+ 1 of D that is not r-paired and set e r (D)=(D\{(r+ 1,c)})∪{(r,c)}. That is, we “move” the cell (r+ 1,c) to row r. Definition 3.3.5. [8] Given the above definition, the crystal lowering operator f r :D→D∪{0} is implicitly defined as follows. Let D∈D. If every cell in row r of D is r-paired then f r (D)= 0. Otherwise, take (r,c) to be the leftmost cell in row r of D that is not r-paired and set f r (D)= (D\{(r,c)})∪{(r+ 1,c)}. Fig. 3.3 shows the diagrams reachable from{(1,1),(1,2),(2,2),(2,3)}∈D using only edges colored 1 and 2. 
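Before moving on, here is a short Python sketch (function names are mine) of the r-pairing and the operators of Definitions 3.3.3-3.3.5. The left-to-right scan below is one standard way to realize the iterative pairing rule, with cells of row r acting as openers and cells of row r+1 as closers.

```python
def r_pairing(D, r):
    """r-pairs between rows r and r+1 of a diagram D (Definition 3.3.3).

    D is a set of (row, col) cells, matrix convention.  Scanning columns left
    to right, a row r+1 cell pairs with the rightmost still-unpaired row r cell
    weakly to its left; returns (pairs, unpaired_row_r, unpaired_row_r1)."""
    top = {c for (i, c) in D if i == r}
    bottom = {c for (i, c) in D if i == r + 1}
    stack, pairs, unpaired_bottom = [], [], []
    for c in sorted(top | bottom):
        if c in top:
            stack.append(c)                      # row r cell waiting to be paired
        if c in bottom:
            if stack:
                pairs.append((stack.pop(), c))   # (column in row r, column in row r+1)
            else:
                unpaired_bottom.append(c)
    return pairs, stack, unpaired_bottom

def e_r(D, r):
    """Definition 3.3.4: move the rightmost unpaired row r+1 cell up, else None."""
    _, _, unpaired_bottom = r_pairing(D, r)
    if not unpaired_bottom:
        return None
    c = max(unpaired_bottom)
    return (set(D) - {(r + 1, c)}) | {(r, c)}

def f_r(D, r):
    """Definition 3.3.5: move the leftmost unpaired row r cell down, else None."""
    _, unpaired_top, _ = r_pairing(D, r)
    if not unpaired_top:
        return None
    c = min(unpaired_top)
    return (set(D) - {(r, c)}) | {(r + 1, c)}

# The diagram of Fig. 3.3: both cells of row 2 are 1-paired, so e_1 vanishes,
# while f_2 drops the cell (2,2) to row 3.
D = {(1, 1), (1, 2), (2, 2), (2, 3)}
print(e_r(D, 1), sorted(f_r(D, 2)))   # None [(1, 1), (1, 2), (2, 3), (3, 2)]
```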
We will also find useful the notion of column c-pairing, which is defined similarly to r-pairing.

Figure 3.3: A portion of the diagram crystal.

Definition 3.3.6. [8] Let D be a diagram and c ∈ Z_+. The column c-pairing of D is an iterative construction where we say that two cells x and y in columns c and c+1 respectively, with x in a row weakly below that of y, are column c-paired whenever every other cell in columns c and c+1 in a row weakly between x and y is already column c-paired.

The following is a consequence of [8, Theorem 4.2.5].

Proposition 3.3.7. The diagram crystal raising and lowering operators do not change the number of column c-pairs.

We will find this proposition applicable when a diagram D shares a connected component in the crystal with a diagram D′ that is "top-justified", i.e. such that when (r,c) ∈ D with r > 1 we have (r−1,c) ∈ D as well. In this case, Proposition 3.3.7 implies every cell in the shorter of the two columns c and c+1 is column c-paired.

A second invariant of components in the diagram crystal that we will find useful is given by the next lemma.

Lemma 3.3.8. Let D be a diagram and consider sequences of cells (r_1,c_1), ..., (r_k,c_k) in D such that r_i < r_{i+1} and c_i ≤ c_{i+1}. The maximal length of such a sequence is unchanged by the crystal raising and lowering operators.

Proof. Suppose (r_1,c_1), ..., (r_k,c_k) ∈ D is a sequence of cells in a diagram D with r_i < r_{i+1} and c_i ≤ c_{i+1} of maximal length. We will show that there is such a sequence of cells of length k in e_r(D) if e_r(D) ≠ 0. The only way this might not be the case is if r+1 = r_i for some 1 ≤ i ≤ k and e_r were to remove (r_i,c_i) from D. In this case, we have (r_i − 1, c_i) ∈ e_r(D). Unless i > 1 and r_{i−1} = r_i − 1, we get another sequence of cells in e_r(D) with the desired properties by simply replacing (r_i,c_i) in the sequence with (r_i − 1, c_i). So assume i > 1 and r_{i−1} = r_i − 1. We know (r_i,c_i) is not r-paired in D. Then (r_{i−1},c_{i−1}) must be r-paired with some (r_i,d) with c_{i−1} ≤ d < c_i. Replacing (r_i,c_i) in the sequence (r_1,c_1), ..., (r_k,c_k) with (r_i,d) then gives us a sequence in e_r(D) of length k with the desired properties. It is similar to show that if f_r(D) ≠ 0 then f_r(D) has such a sequence of cells of length k.

Once again considering the cases of a component containing a top-justified diagram, the lemma says that the maximal length of a sequence of cells as described above is the maximal number of cells in a column. Assaf's Theorem 5.3.4 specializes to the following statement.

Theorem 3.3.9. [8] Let D be a top-justified diagram. Then the component of the diagram crystal containing D is a highest weight crystal, with character s_{wt(D)} and unique highest weight element D.

3.4 The r Alignment

For the remainder of this paper, (P, ≤_P) is always a finite (3+1)-free poset. In this section we introduce the r alignment of a P-array. The r alignment is fundamental for defining the crystal operators on P-arrays in Section 3.5. It is a way of horizontally spacing out the elements of rows r and r+1, and can be thought of as a "pairing rule," analogous to the idea of r-pairs for the diagram crystal, which helps us determine how e_r and f_r affect rows r and r+1.

Definition 3.4.1. Let A be a P-array and let r ≥ 1. Let the chains a_1 <_P ··· <_P a_m and b_1 <_P ··· <_P b_n be the elements of rows r and r+1 of A respectively.
Let C :{a 1 ,...,a m }→Z + be the function inductively defined so that C(a k )= max({i| b i < P a k }∪{C(a i )| 1≤ i< k})+ 1. Then we define the r pre-alignment of A to be the map {a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + such that each a k maps to(r,C(a k )), and each b k to(r+ 1,k). Fig. 3.4 shows a visualization of the r pre-alignment for a P-array A if P is as in Fig. 3.1, row r of A contains the elements b,c,d and row r+ 1 contains a,h. r b c d r+ 1 a h Figure 3.4: An r pre-alignment for a P-array with P as in Fig. 3.1. Definition 3.4.2. Let A be a P-array and let r≥ 1. Let the chains a 1 < P ··· < P a m and b 1 < P ··· < P b n be the elements of rows r and r+ 1 of A respectively. Letφ 0 be the r pre-alignment. We construct the r alignment of A as follows. Suppose we have some φ k :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + . Select the rightmost element x mapped to some(i,c) such that column c+ 1 ofφ k is nonempty and contains no y> P x, if such an x exists. Then we define φ k+1 :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + so that φ k+1 (x)=(i,c+ 1) and φ k+1 coincides with φ k elsewhere. If no such x exists then the r alignment of A is defined to be φ k . 53 Fig. 3.5 shows the r alignment for a P-array with rows r and r+ 1 as in Fig. 3.4. r b c d r+ 1 a h Figure 3.5: The r alignment for a P-array with P as in Fig. 3.1. Proposition 3.4.3. The r alignment of a P-array A is well-defined. Additionally, if a 1 < P ··· < P a m and b 1 < P ··· < P b n are the elements of rows r and r+ 1 of A respectively, then in the r alignment each a i is in a column strictly left of a i+1 and each b i is in a column strictly left of b i+1 . Proof. Our attention is restricted entirely to rows r and r+ 1 of A. That the r pre-alignment φ 0 satisfies the condition on a 1 ,...,a m and b 1 ,...,b n is immediate from the definition. Let d be the rightmost occupied column in the r pre-alignment. Suppose we haveφ k as in Definition 3.4.2 that satisfies the condition on a 1 ,...,a m and b 1 ,...,b n , and the rightmost occupied column ofφ k is d. Should these inductive hypotheses continue to hold, we will have shown that the process to determine the r alignment must terminate, as we are limited by d in how far we can shift each element to the right. The other obstacle to well-definiteness is the question of whether x, as in the definition, is unique when it exists. To this end, consider some column c inφ k which contains two elements, some a i and b j . If column c+ 1 contains an element inφ k then by assumption it is either a i+1 or b j+1 . Thus, there is at most one element in column c for which there is no greater element in column c+ 1. This indeed proves that x is uniquely determined when it exists. Now suppose x exists and resides in some column c of φ k . By choice of x, column c+ 1 is nonempty inφ k , so c< d. Inφ k+1 we know x is mapped to column c+ 1, so any column right of d remains unoccupied. Moreover, if x= a i for some 1≤ i≤ m, then by the inductive hypothesis a i− 1 (if it exists) is strictly left of column c in bothφ k andφ k+1 . If a i+1 > P x exists then it is strictly right of column c+ 1 ofφ k andφ k+1 using both the inductive hypothesis and the choice of x. The same argument can be made when x= b i for some 1≤ i≤ n. Then the inductive hypothesis on a 1 ,...,a m and b 1 ,...,b n continues to hold. 54 Lemma 3.4.4. 
In the r alignment of a P-array A, any element in row r+ 1 that does not share a column with an element in row r is strictly left of any element in row r that does not share a column with an element in row r+ 1. Proof. This is clear for the row r pre-alignment. Prior to moving an element from some column c to column c+ 1 while obtaining the r alignment, column c must contain an element in each row while column c+ 1 contains exactly one element. Thus, the property is maintained as we move elements to the right. In Section 3.5 we will define our crystal operators on A P using the r alignment. To prove that our operators are essentially inverses as required, we will want to see that they act predictably on the r alignment. This motivates us to write a more static characterization of the r alignment, as opposed to the procedural definition. Definition 3.4.5. Let A be a P-array. Let the chains a 1 < P a 2 < P ··· < P a m and b 1 < P b 2 < P ··· < P b n be the elements of rows r and r+ 1 of A respectively. A weak r alignment of A is a map φ :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + satisfying the following properties: 1. each a i is mapped to row r in a column strictly right of a i− 1 if it exists, and each b i is mapped to row r+ 1 in a column strictly right of b i− 1 if it exists, 2. ifφ maps an element to some column c> 1, then it also maps an element to column c− 1, 3. if x< P y with x mapped to row r+ 1 and y mapped to row r then x is mapped to a column strictly left of y, and 4. if some x is mapped to some column c, then eitherφ maps some y> P x to column c+ 1 or it maps no elements to column c+ 1. Definition 3.4.6. Let A be a P-array. Let the chains a 1 < P a 2 < P ··· < P a m and b 1 < P b 2 < P ··· < P b n be the elements of rows r and r+ 1 of A respectively. Let φ,ψ :{a 1 ,...,a m ,b 1 ,...,b n }→ {r,r+ 1}× Z + . We write φ ⪯ ∗ ψ when the position of each element in φ is in a column weakly 55 left of its position inψ. This defines a partial order on such maps. If φ andψ are weak r alignments of A we writeφ⪯ ψ. Proposition 3.4.7. Let A be a P-array and r≥ 1. The r alignment is the unique minimal weak r alignment according to the partial order⪯ . Proof. That the r alignment satisfies (1) and (4) follows by definition. Property (3) is clear in the pre-alignment, and x can never be shifted right into a column containing y. So (3) holds in the alignment as well. Let φ 0 denote the r pre-alignment. For every element in a column c> 1 of φ 0 there is a lesser element in column c− 1. Therefore there is some chain of elements in consecutive columns x 1 < P ··· < P x ℓ where x 1 is in the first column and x ℓ is in the rightmost nonempty column, call it d. No element can move right of d as we construct the r alignment, and therefore each element in the chain x 1 < P ··· < P x ℓ is in the same position in the r alignment. This proves that the r alignment satisfies (2), and is therefore a weak r alignment. Now letψ be a weak r alignment. For any a k there is some sequence b 1 ,...,b j ,a i ,...,a k (possibly with j = 0) that is increasing in ≤ P and occupy consecutive columns in the r pre- alignment, similarly to above. By properties (1) and (3), each element in this sequence must be in a column strictly right of the preceding one in ψ. Thus, the position of a k in ψ is weakly right of its position in the r pre-alignment. The same is certainly true for each b k soφ 0 ⪯ ∗ ψ. Let φ k be an intermediate step between the r pre-alignment and r alignment as in Definition 3.4.2, and assumeφ k ⪯ ∗ ψ. 
Suppose there exists a rightmost x inφ k such that the adjacent column to its right is nonempty and contains no element greater than x. We will show x lies strictly right in ψ of its position inφ k . This is true if x is in the rightmost nonempty column ofψ, which is weakly right of the rightmost nonempty column ofφ k , which is in turn strictly right of x inφ k . Otherwise, there is some y> P x one column right of x inψ by (4). 56 Say x lies in column c inφ k . If y lies in a column strictly right of c inφ k , it must be in column c+ i for some i≥ 2 by choice of x. Then y lies strictly right of column c+ i− 1 inψ which means x lies strictly right of c+ i− 2≥ c inψ. Suppose instead y lies in some column weakly left of c inφ k . Then x and y lie in opposite rows. However, this means any element right of x inφ k is greater than x which contradicts our choice of x. By induction we can now say thatφ k+1 ⪯ ∗ ψ. Then ifφ k is the r alignment we haveφ k ⪯ ψ. Using Proposition 3.4.7, we end the section with a final useful lemma. Lemma 3.4.8. If the r alignment of A maps a unique element x∈ P to some column c, then any element strictly left of c is smaller than any element weakly right of c in the r alignment. Proof. Let y be an element strictly left of c, and z an element weakly right of c. If y and z share a row, then y< P z by (1) in Definition 3.4.5. We have y< P x by (4) in Definition 3.4.5, so if z shares a row with x we have y< P z as well. Thus, assume y shares a row with x and z does not. If z is in row r+ 1, let z ′ be the element in the same column of row r which exists by Lemma 3.4.4. We have z̸< P z ′ by (3) in Definition 3.4.5 so y< P z by the (3+1)-free condition. Otherwise, it suffices to assume z is the leftmost element in row r in a column strictly right of c. If z does not share a column with any element in row r then once again we have y< P z by (4) in Definition 3.4.5. So suppose we have some z ′ in the same column as z in row r+ 1. If we had y̸< P z in this circumstance, that would imply z< P z ′ by the (3+1)-free condition, and also x̸< P z. Then moving the position of z one column to the left would still satisfy conditions (1)-(4) in Definition 3.4.5, contradicting Proposition 3.4.7. 3.5 The P-Array Crystal Operators We now have the tools to define the crystal operators on A P . Definition 3.5.1. The crystal lowering operator f r :A P →A P ∪{0} acts on A∈A P as follows. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+ 1 of A respectively. 57 • If every column of the r alignment with an entry in row r also contains an entry in row r+ 1 then define f r (A)= 0. • Otherwise let p be minimal such that a p does not share a column with an element in row r+ 1 in the r alignment. Let t≥ 0 be minimal such that there is no b i > P a p+t one column right of a p+t . Then we move a p ,...,a p+t to row r+ 1, and any b i that shares a column with one of these entries to row r. The following proposition shows that the operation is well-defined, and also shows the main content of axiom C2’. Proposition 3.5.2. The crystal lowering operator f r is well-defined, and if we have A ∈A P with f r (A)̸= 0 then wt( f r (A))= wt(A)− α r . Proof. Let A∈A P and let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+1 of A respectively. Select a p as in the definition, assuming it exists. By Lemma 3.4.4 every nonempty column strictly right of the column containing a p in the r alignment must contain an entry in row r. 
In particular, there is no b i in any column strictly right of a m and it therefore makes sense to select t≥ 0 as in the definition. The entries a p ,...,a p+t reside in consecutive columns of the r alignment by Lemma 3.4.4 and property (1) of weak r alignments, and for each 0≤ i< t there is some b j ̸> P a p+i that shares a column with a p+i+1 by choice of t. Let q≥ 1 be minimal such that either b q lies in a column strictly right of a p or does not exist. The relevant columns of the r alignment are then as in Fig. 3.6 (which degenerates to just the column containing a p if t= 0). a p a p+1 a p+2 ··· a p+t b q b q+1 ··· b q+t− 1 Figure 3.6: The columns in the alignment of A whose entries are to be row swapped by f r . We must show (i) b q− 1 < P a p 58 (ii) a p+t < P b q+t (iii) a p− 1 < P b q (iv) b q+t− 1 < P a p+t+1 whenever these indices are valid. The point being that (i) and (ii) show row r+ 1 of f r (A) is a chain, and (iii) and (iv) show row r of f r (A) is a chain. When t = 0 we need not consider (iii) or (iv). We have (i) immediately from Lemma 3.4.8. If b q+t is one column right of a p+t then (ii) follows by choice of a p+t . Otherwise, by Lemma 3.4.4 there is some a p+t+i with i> 2 that shares a column with b q+t . By (3) in Definition 3.4.5, b q+t ̸< P a p+t+i so we get (ii) by the (3+1)-free condition. If t̸= 0 we know a p+t− 1 ̸< P b q+t− 1 so (iv) follows from the (3+1)-free condition. By property (3) in Definition 3.4.5 we have b q ̸< P a p+1 so (iii) follows from the (3+1)-free condition. Fig. 3.6 makes clear that wt( f r (A))= wt(A)− α r . Lemma 3.5.3. Let A be a P-array. Let a p < P ··· < P a p+t and b q < P ··· < P b q+t− 1 be the elements whose rows are swapped by f r as in Fig. 3.6. Then a p+i+1 ∥ b q+i for each 0≤ i< t. Proof. We have a p+i ̸< P b q+i by choice of a p+t , thus a p+i+1 ̸< P b q+i . Since a p+i+1 shares a column with b q+i in the r alignment, we have a p+i+1 ̸> P b q+i by property (3) in Definition 3.4.5. As with the crystal lowering operator, we will soon define the crystal raising operator using the r alignment of a P-array. To eventually see that these two operations are inverses, we then want to know how f r affects the r alignment. The answer is given by the following lemma. Lemma 3.5.4. Applying f r to a P-array A does not change the column of any entry in the r align- ment. 59 Proof. Let a 1 < P a 2 < P ··· < P a m and b 1 < P b 2 < P ··· < P b n be the entries of rows r and r+ 1 of A respectively. Proposition 3.5.2 demonstrates that the entries of rows r and r+ 1 in f r (A) are respectively given by a 1 ,...,a p− 1 ,b q ,...,b q+t− 1 ,a p+t+1 ,...,a m and b 1 ,...,b q− 1 ,a p ,...,a p+t ,b q+t ,...,b n for p and t as in the definition of f r , and q≥ 1 minimal such that b q either does not exist or lies in a column strictly right of a p in the r alignment of A. Define φ :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + so that it agrees with the r alignment, except on a p ,...a p+t which are sent to row r+ 1, and b q ,...,b q+t− 1 which are sent to row r, without changing column assignments. This amounts to swapping the rows in Fig. 3.6. We must show thatφ is the r alignment of f r (A). We see thatφ inherits from the r alignment of A properties (1), (2), and (4) in Definition 3.4.5. The only entries in row r of φ weakly left of some b i with 1≤ i< q are among a 1 ,...,a p− 1 . All these entries occupy the same position in the r alignment of A so cannot violate (3). 
For q+ t ≤ i≤ n, the only entries in row r of φ left of b i that do not also occupy row r in the r alignment of A are b j for q≤ j< q+ t which are less than b i and therefore do not violate (3) in Definition 3.4.5. The only elements of row r left of a p inφ are a i that are lesser than a p . Each a p+i for 1≤ i≤ t shares a column with an incomparable element by Lemma 3.5.3, so cannot violate (3). Soφ is a weak r alignment of f r (A). Suppose we have a weak r alignmentψ of f r (A) such thatψ⪯ φ. We will showψ =φ. Since a p is the leftmost element in row r of the r alignment of A that does not share a column with an entry in row r+ 1, we know from Lemma 3.4.4 that b 1 ,...,b q− 1 ,a p ,...,a m occupy adjacent columns in the r alignment of A, hence in φ. Properties (1) and (3) of weak r alignments force 60 each element in this chain to be strictly right of the previous element inψ, so the elements of this chain occupy the same columns inψ andφ. Take b i for q+t≤ i≤ n and suppose b j occupies the same column inψ andφ for any j> i. The only elements greater than b i are those b j , and possibly some a k for k> p+ t. We have assumed that each such greater element occupies the same column in ψ and φ. Recalling that a p is the leftmost element in row r of the r alignment of A that does not share a column with an entry in row r+ 1, Lemma 3.4.4 precludes b i from being strictly right of a m in the r alignment, hence inφ. Then inφ, (3) and (4) place b i one column left of the leftmost entry greater than b i . Property (4) forces b i to occupy the same position inψ. Since by choice of b q we have a p+i ̸< P b q+i for each 0≤ i< t, we know b q+i < P a p+i+2 by the (3+1)-free condition (if a p+i+2 exists). We also have b q+i ∥ a p+i+1 by Lemma 3.5.3. This means that inφ, each such b q+i is one column left of the leftmost entry greater than b q+i if such an entry exists. We have already determined that φ and ψ agree on the positions of entries greater than any of b q ,...,b q+t− 1 , other than these entries themselves. Now properties (1) and (4) of weak r alignments prohibitψ from placing any of these b q+i strictly left of its position inφ. It remains only to show thatφ andψ agree on a 1 ,...,a p− 1 . Define ξ :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + to agree with the r alignment of A (hence the column assignments of φ), except on the positions of a 1 ,...,a p− 1 where it instead agrees withψ. We knowξ places each a i left of a i+1 so it satisfies (1). Column assignments inψ all agree withξ which therefore satisfies (2) and (4). If b j < P a i for some p≤ i≤ m and 1≤ j≤ n then the positions of these entries inξ agrees with the r alignment of A and therefore they do not violate (3). For 1≤ i< p we can only have b j < P a i for some j< q. The positions of these elements in ξ agree with ψ so cannot violate (3) either. Thenξ is a weak r alignment that precedes the r alignment of A according⪯ . Thenξ is, in fact, the r alignment of A by Proposition 3.4.7. In particular, for i< p we haveψ(a i )=ξ(a i )=φ(a i ). Thereforeφ =ψ soφ is the alignment of f r (A) by Proposition 3.4.7. 61 We next give a series of analogous statements for the crystal raising operator. The proofs are mostly, but not entirely, symmetric. Definition 3.5.5. The crystal raising operator e r :A P →A P ∪{0} acts on A∈A P as follows. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+ 1 of A respectively. 
• If every column of the r alignment with an entry in row r+ 1 also contains an entry in row r then define e r (A)= 0. • Otherwise let p be maximal such that b p does not share a column with an entry in row r in the r alignment. Let t≥ 0 be minimal such that there is no a i > P b p+t one column right of b p+t . Then we move b p ,...,b p+t to row r, and any a i that shares a column with one of these entries to row r+ 1. Proposition 3.5.6. The crystal raising operator e r is well-defined, and if we have A ∈A P with e r (A)̸= 0 then wt(e r (A))= wt(A)+α r . Proof. Let A∈A P and let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+1 of A respectively. Select b p as in the definition, assuming it exists. If the column immediately right of b n in the r alignment contains some a i , then b n < P a i by Lemma 3.4.8, and it therefore makes sense to select t≥ 0 as in the definition of e r . The entries b p ,...,b p+t reside in consecutive columns of the r alignment by choice of b p and property (3) of weak r alignments, and for each 0≤ i< t there is some a j ̸> P b p+i that shares a column with b p+i+1 . Let q≥ 1 be minimal such that either a q lies in a column strictly right of b p or does not exist. The relevant columns of the r alignment are then as shown in Fig. 3.7 (which degenerates to just the column containing b p if t= 0). a q a q+1 ··· a q+t− 1 b p b p+1 b p+2 ··· b p+t Figure 3.7: The columns in the alignment of A whose elements are to be row swapped by e r . To show that both rows of e r (A) are chains, it suffices to show 62 (i) a q− 1 < P b p (ii) b p+t < P a q+t (iii) b p− 1 < P a q (iv) a q+t− 1 < P b p+t+1 whenever these indices are valid. When t= 0 we need not consider (iii) or (iv). We have (i) immediately from Lemma 3.4.8. By choice of b p we know that a q+t , if it exists, lies one column right b p+t . Then (ii) follows by choice of b p+t . If t̸= 0 we have b p+t− 1 ̸< P a q+t− 1 so (iv) follows from the (3+1)-free condition. We also know b p ̸< P a q so we must have a q ̸< P b p+1 or we could violate the characterization of the r alignment of A given in Proposition 3.4.7 by shifting a q one position to the left. Thus (iii) follows from the (3+1)-free condition. Fig. 3.7 makes clear that wt(e r (A))= wt(A)+α r . Lemma 3.5.7. Let A be a P-array. Let b p < P ··· < P b p+t and a q < P ··· < P a q+t− 1 be the elements whose rows are swapped by e r as in Fig. 3.7. Then b p+i+1 ∥ a q+i for each 0≤ i< t. Proof. We have b p+i ̸< P a q+i by choice of b p+t . Were we to have a q+i < P b p+i+1 , we could shift a q ,...,a q+i left one position in the r alignment of A and we would still have a weak r alignment, contradicting Proposition 3.4.7. On the other hand, a q+i ̸> P b p+i+1 by property (3) of weak r alignments. Lemma 3.5.8. Applying e r to a P-array A does not change the column of any element in the r alignment. Proof. Let a 1 < P a 2 < P ··· < P a m and b 1 < P b 2 < P ··· < P b n be the entries of rows r and r+ 1 of A respectively. Proposition 3.5.6 demonstrates that rows r and r+ 1 in e r (A) are given by a 1 ,...,a q− 1 ,b p ,...,b p+t ,a q+t ,...,a m 63 and b 1 ,...,b p− 1 ,a q ,...,a q+t− 1 ,b p+t+1 ,...,b n where p and t are as in the definition of e r , and q≥ 1 is minimal such that a q either does not exist or lies in a column strictly right of b p in the r alignment of A. 
Define φ :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + so that it agrees with the r alignment of A, except on b p ,...,b p+t which are sent to row r, and a q ,...,a q+t− 1 which are sent to row r+ 1. This amounts to swapping the rows in Fig. 3.7. We must show thatφ is the r alignment of e r (A). We see thatφ inherits from the r alignment of A properties (1), (2), and (4) in Definition 3.4.5. Any entry in row r of φ weakly left of b i for some 1≤ i< p is also in row r weakly left of b i in the r alignment of A and therefore cannot violate (3). The only elements in row r ofφ weakly left of b i for p+t< i≤ n are either in the same position in the alignment of A, or are some b j < P b i . Either way, such b i cannot be part of a violation of (3). Each a q+i with 0≤ i< t shares a column in φ with an incomparable element by Lemma 3.5.7 and so cannot violate (3) either. Soφ is a weak r alignment. Suppose we have a weak r alignmentψ such thatψ⪯ φ. We will show thatψ =φ. Since b p is the rightmost entry in row r+ 1 of the r alignment of A that does not share a column with an entry in row r, we know from Lemma 3.4.4 that b 1 ,...,b p+t ,a q+t ,...,a m occupy adjacent columns in the alignment of A, hence inφ. Properties (1) and (3) of weak r alignments force each element in this chain to lie strictly right of the previous one inψ, so the elements of this chain occupy the same columns inψ andφ. Let x be some element of the chain a q ,...,a q+t− 1 ,b p+t+1 ,...,b n , and assume we have deter- mined all elements greater than x in the chain to occupy the same position in both φ and ψ. In this case, we have in fact determined by the previous paragraph that all entries greater than x, in the chain or otherwise, occupy the same position in both φ and ψ. If q= m+ 1, i.e. a m lies in a 64 column left of b p in the r alignment of A, then this chain is vacuous as there cannot exist b p+1 in a column strictly right of a m by choice of b p . Assuming q̸= m+ 1 we then know x must lie weakly left of a m inφ, hence inψ, by choice of b p . Then the position of x is uniquely determined by (3) and (4) in bothφ andψ to be one column left of the leftmost greater element y. Since y occupies the same position inφ andψ by induction, so to does x. It remains to showφ andψ agree on a 1 ,...,a q− 1 . Define ξ :{a 1 ,...,a m ,b 1 ,...,b n }→{r,r+ 1}× Z + to agree with the r alignment of A (hence the column assignments of φ), except on the positions of a 1 ,...,a q− 1 where it instead agrees withψ. We knowξ places each a i left of a i+1 so it satisfies (1). Column positions ofψ agree withξ which therefore satisfies (2) and (4). The positions of each a i with q≤ i≤ m and each b j agree betweenξ and the r alignment of A, so no such a i can violate (3) inξ . For 1≤ i< q, we can only have b j < P a i if j< p. The positions of these elements in ξ agree with ψ so cannot violate (3) either. Then ξ is a weak r alignment that precedes the r alignment of A according to⪯ . Then ξ is, in fact, the r alignment of A by Proposition 3.4.7. In particular, for i< q we haveψ(a i )=ξ(a i )=φ(a i ). Thereforeφ =ψ soφ is the alignment of e r (A) by Proposition 3.4.7. Proposition 3.5.9. Let A be a P-array. Consider the induced subgraph G of inc(P) restricted to vertices in rows r and r+ 1 of A. Applying f r or e r to A swaps the rows of the elements in a single connected component of G. Proof. We will prove this for f r , with the argument for e r being symmetric. 
If f r swaps the row of some x∈ P, and if y∈ P is incomparable to x, then y cannot share a row with x either before or after applying f r which is to say f r swaps the row of y as well. Thus f r swaps the rows of elements in a union of connected components of G. Now we must show that all elements whose rows are swapped by f r belong to the same con- nected component. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+ 1 of 65 A respectively. Let a p ,...,a p+t and b q ,...,b q+t− 1 be the entries whose rows will be changed by f r , as in Fig. 3.6. We have that a p+i+1 ∥ b q+i for 0≤ i< t by Lemma 3.5.3. That is to say each column in Fig. 3.6 is a subset of a connected component. This also implies b q+i ̸< P a p+i , and we have b q+i ̸> P a p+i by definition of f r . Thus there is an edge in G between each column in Fig. 3.6. So all of a p ,...,a p+t and b q ,...,b q+t− 1 belong to the same connected component as desired. With the crystal operators well-defined, we now define the string lengths for A∈A P by ε r (A)= max{k∈Z ≥ 0 : e k r (A)̸= 0} and ϕ r (b)= max{k∈Z ≥ 0 : f k r (A)̸= 0} as required of a normal crystal. Theorem 3.5.10. The setA P together with the crystal operators and string lengths defined in this section, and the weight map defined in Section 3.2, form a normal crystal. Proof. We need only show that we have a crystal, as normality follows by definition of the string lengths. We first demonstrate axiom C3. If f r (A)= A ′ then by Lemma 3.5.4 and Lemma 3.4.4, the leftmost entry x in row r of the r alignment of A that does not share a column with an entry in row r+1, is the rightmost element in row r+1 of the alignment of A ′ that does not share a column with an entry in row r. By Proposition 3.5.9, both f r and e r operate by swapping the rows of the entries in the connected component of x of the induced subgraph of inc(P) restricted to vertices in rows r and r+ 1 of A, or equivalently A ′ . Thus e r (A ′ )= A. The other direction is similar, so C3 holds. To prove C2 holds, let A∈A P with e r (A)̸= 0. We have wt(e r (A))= wt(A)+α r by Propo- sition 3.5.6. We have ε r (e r (A))= ε r (A)− 1 by definition of ε r . By definition of ϕ r we have, ϕ r ( f r (e r (A)))=ϕ r (e r (A))− 1 which reduces toϕ r (e r (A))=ϕ r (A)+1 in light of C3. Then axiom C2 is satisfied, as is axiom C2’ since we have already proved axiom C3. We have that f r (A)̸= 0 for A∈A P exactly when there is a column in the r alignment of A that contains an entry in row r but not in row r+ 1. In this case, by Lemma 3.5.4 and the definition of f r , we know that in the r alignment of f r (A), there is exactly one fewer such column than in 66 the r alignment of A. We deduce thatϕ r (A) is the number of columns in the r alignment of A that contain an entry in row r but not in row r+ 1. Similarly, ε r (A) is the number of columns in the r alignment of A that contain an entry in row r+ 1 but not row r. Thus, we see thatϕ r (A)− ε r (A)= wt r (A)− wt r+1 (A) in accordance with axiom C1. Fig. 3.8 shows an example of a poset P and a subset ofA P with edges given by the crystal. The illustrated crystal component contains two highest weight elements, so we know our structure is not a highest weight crystal. Even so, in Section 3.6 we see that the crystal components have s-positive characters, which is not a general fact about normal crystals. 
Figure 3.8: A segment of the P-array crystal for the given poset.
To end this section, we show that the P-tableaux now hold crystal-theoretic significance in our context.
Theorem 3.5.11. The highest weight elements of the crystal on $\mathcal{A}_P$ are the P-tableaux.
Proof. Let $T\in\mathcal{A}_P$ and take $r\geq 1$ to be arbitrary. Let $a_1 <_P a_2 <_P \cdots <_P a_m$ and $b_1 <_P b_2 <_P \cdots <_P b_n$ be the entries of rows $r$ and $r+1$ of $T$ respectively. Suppose $e_r(T)=0$. Then every nonempty column in the $r$ alignment of $T$ contains some $a_i$. By property (2) of weak $r$ alignments this means each $a_i$ is in column $i$ of the $r$ alignment. Each $b_i$ is necessarily in column $i$ or greater, so $b_i \not<_P a_i$ by property (3). This shows that rows $r$ and $r+1$ of $T$ satisfy the required column condition of P-tableaux. Conversely, suppose $m\geq n$ and $b_i \not<_P a_i$ for each $1\leq i\leq n$. Then each $a_i$ is mapped to column $i$ in the $r$ pre-alignment of $T$. As entries shift right to obtain the $r$ alignment, the chain $a_1,\ldots,a_m$ must stay put, and the entries $b_1,\ldots,b_n$ never move past the rightmost occupied column $m$. Thus every nonempty column in the $r$ alignment of $T$ contains some $a_i$, which means $e_r(T)=0$. We have shown that $e_r(T)=0$ is equivalent to the P-tableau column condition on elements in rows $r$ and $r+1$. Therefore $T$ is a highest weight element if and only if it is a P-tableau.
3.6 s-Positivity
The crystal on $\mathcal{A}_P$ induces a normal crystal on any subset. In this section we prove our awaited refinement of Theorem 3.2.4 by analyzing the characters of connected components of $\mathcal{A}_P$. If $C$ is a connected component, we will use the notation $C_a$ to denote the set of P-arrays in $C$ with weight $a$. The proof follows in the spirit of Gasharov's by using the Jacobi–Trudi identity and the usual inner product on symmetric functions to get an alternating formula for the Schur coefficients in terms of the monomial coefficients [25]. However, the involution we use to cancel terms is based on our crystal operators, which allows us to stay within a connected component.
Theorem 3.6.1. Let $C$ be a connected component of $\mathcal{A}_P$. Then
$$\mathrm{char}(C)=\sum_\lambda c_\lambda s_\lambda$$
where $c_\lambda$ is the number of P-tableaux in $C$ of weight $\lambda$.
Proof. First note that $\mathrm{char}(C)$ is a symmetric function by Lemma 3.3.2. Let $\sum_\lambda c_\lambda s_\lambda$ be the expansion of $\mathrm{char}(C)$ in the Schur basis for symmetric functions. Set $n=\#P$ and let $\lambda=(\lambda_1,\ldots,\lambda_\ell)$ be a partition of $n$. For $\pi\in S_\ell$ we let $\pi(\lambda)$ denote the weak composition with $i$th component $\pi(\lambda)_i=\lambda_{\pi(i)}-\pi(i)+i$ for $1\leq i\leq\ell$. Notice that $\pi(\lambda)\neq\sigma(\lambda)$ when $\pi\neq\sigma$, as we have $\pi(i)<\sigma(i)$ for some $i$, which yields
$$\pi(\lambda)_i-\sigma(\lambda)_i=(\lambda_{\pi(i)}-\lambda_{\sigma(i)})+(\sigma(i)-\pi(i))>0.$$
The Jacobi–Trudi identity says
$$s_\lambda=\det\big(h_{\lambda_j-j+i}\big)_{i,j=1}^{\ell}=\sum_{\pi\in S_\ell}\mathrm{sgn}(\pi)\,h_{\pi(\lambda)}$$
where $h_a$ here denotes the complete homogeneous symmetric function of the weak composition $a$. If $a$ is not a weak composition, i.e. it has a negative part, then $h_a$ is considered to be zero. Recall that the Schur functions are an orthonormal basis with respect to the inner product on symmetric functions defined by $\langle m_\lambda,h_\mu\rangle=\delta_{\lambda\mu}$, where $m_\lambda$ stands for the monomial symmetric function indexed by $\lambda$. Therefore
$$c_\lambda=\langle\mathrm{char}(C),s_\lambda\rangle=\Big\langle\mathrm{char}(C),\sum_{\pi\in S_\ell}\mathrm{sgn}(\pi)h_{\pi(\lambda)}\Big\rangle=\sum_{A\in\bigsqcup_{\pi\in S_\ell}C_{\pi(\lambda)}}\mathrm{sgn}(A) \qquad (3.6.1)$$
where $\mathrm{sgn}(A)$ takes the value $\mathrm{sgn}(\pi)$ when $A\in C_{\pi(\lambda)}$.
We will define an involution $\iota:\bigsqcup_{\pi\in S_\ell}C_{\pi(\lambda)}\to\bigsqcup_{\pi\in S_\ell}C_{\pi(\lambda)}$. Let $A=\{A_{i,j}\}\in C_{\pi(\lambda)}$ for some $\pi\in S_\ell$. If $A$ is a P-tableau then set $\iota(A)=A$.
Otherwise, there is a minimal c such that some A i+1,c exists with i> 0, but either A i,c does not exist or A i,c > P A i+1,c . With c so fixed, let r be maximal among such i. For convenience denote rows r and r+1 of A by a 1 < P ··· < P a π(λ) r and b 1 < P ··· < P b π(λ) r+1 . For k< c we have a k ̸> p b k which puts a k in column k of the r pre-alignment of A. If b c+1 does not exist, then a c is placed in column c 1 of the r pre-alignment. This is to say column c+ 1 either contains a c or b c+1 or is emtpy. Since P is (3+1)-free and a k ̸> P b k when k< c, we have that such a k are less than b c+1 , if it exists. Then every entry in a column strictly left of c+ 1 in the r pre-alignment is less than any entry in column c+1. This prohibits any of these lesser entries from 69 occupying column c+ 1 in the r alignment, as none of them can enter column c+ 1 until both a c and b c+1 have vacated, but neither can an entry be moved into a previously empty column. This is to say that in the r alignment, the entries weakly left of column c are exactly a 1 ,...,a c− 1 and b 1 ,...,b c . The remaining entries a c ,...,a π(λ) r and b c+1 ,...,b π(λ) r+1 now lie strictly right of column c in the r alignment. So ifπ(λ) r ≥ π(λ) r+1 then there are at least π(λ) r − π(λ) r+1 + 1=π(λ) r − (πs r )(λ) r such a i that do not share a column with an entry in row r+ 1. Then it makes sense to define ι(A)= f π(λ) r − (πs r )(λ) r r (A). If insteadπ(λ) r <π(λ) r+1 then there are at least π(λ) r+1 − π(λ) r − 1=(πs r )(λ) r − π(λ) r entries b i strictly right of column c that do not share a column with an entry in row r. Here, it makes sense to define ι(A)= e (πs r )(λ) r − π(λ) r r (A). In either of these cases, note that A gets mapped into C (πs r )(λ) . This means both thatι reverses the sign of A, and our codomain is correct. In the case where π(λ) r ≥ π(λ) r+1 we have seen that the π(λ) r − (πs r )(λ) r entries in row r of the r alignment that do not share a column with an entry in row r+ 1 lie strictly right of column c. Similarly, in the case where π(λ) r <π(λ) r+1 , we know the rightmost (πs r )(λ) r − π(λ) r in row r+ 1 of the r alignment that do not share a column with an entry in row r all lie strictly right of column c. In both cases, a 1 ,...,a c− 1 and b 1 ,...,b c lie weakly left of column c. So ι will not change the rows of any a 1 ,...,a c− 1 or b 1 ,...,b c . That is,ι does not change the indices of any A i, j 70 with j< c, or with j= c and i> r. The selection of both c and r between A and ι(A) therefore agree, and to applyι 2 to A is to apply e (πs r s r )(λ) r − (πs r )(λ) r r ◦ f π(λ) r − (πs r )(λ) r r or f (πs r )(λ) r − (πs r s r )(λ) r r ◦ e (πs r )(λ) r − π(λ) r r which are both the identity map by axiom C3. Sinceι reverses the sign of P-arrays that are not P-tableaux, terms cancel in equation 3.6.1 to get c λ = ∑ T sgn(T) summed over P-tableaux in F π∈S ℓ C π(λ) . If π(i)>π(i+ 1) then π(λ) i <π(λ) i+1 which means each such P-tableau is in C λ as it must have partition weight. Furthermore, these P-tableaux have positive sign so we conclude c λ is the number of P-tableaux of weightλ in C. Recall that by Proposition 3.5.9, our crystal operators, and therefore our involution ι, work by flipping connected components of the incomparability graph restricted to relevant rows. This is necessary if we want to obtain another P-array from our input. 
As a quick point of compari- son between our involution and previous variations used to obtain coarser s-positivity theorems, Gasharov would have us flip all such connected components right of what we called column c [25], while Shareshian and Wachs would have us flip the subset of those connected components which have odd cardinality [59, Proof of Thm 6.3]. Our involution is even more restrained, flipping a minimal number of these components. 71 3.7 Natural Unit Interval Orders A unit interval order is a poset isomorphic to a finite subset P ofR with the relation u< P v when u+1< v. These are axiomatized by the requirements that they be (3+1)-free and (2+2)-free, which can be seen as a modification to the argument presented in [58] or more directly in [10]. Definition 3.7.1. A natural unit interval order is a finite poset (P,≤ P ) on a subset ofN such that • u< P v implies u< v as natural numbers, and • if u< P w with v∥ w and v∥ u, then u< v< w as natural numbers. These define the same isomorphism classes as the unit interval orders [59]. Definition 3.7.2. [59] Given a finite graph G=(V,E) with V ⊂ N, define the chromatic qua- sisymmetric function by X G (x,q)= ∑ κ q asc(κ) ∏ v∈V x κ(v) summed over all proper colorings of G, where asc(κ)= #{(u,v)∈ E|κ(u)<κ(v) and u< v as natural numbers}. An edge that counts towards asc(κ) is an ascent. We can recover the chromatic symmetric function as X G (x)= X G (x,1). If P is a (3+1)-free poset on a subset ofN, then for A∈A P we write asc(A) to mean asc(κ) whereκ is the proper coloring of inc(P) that corresponds to A by sending row i of A to the number i. In other words, asc(A) is the number of pairs (x,y)∈ P× P such that x∥ y, x< y as natural numbers, and x occurs in a higher row of A than y. When P is a natural unit interval order we show that the crystal onA P respects this statistic. Theorem 3.7.3. If P is a natural unit interval order then asc is constant on connected components of the crystal onA P . 72 Proof. Let A∈A P with f r (A)̸= 0. We must show asc(A)= asc( f r (A)). Let G be the graph induced by inc(P) by restricting to the vertices in rows r and r+ 1 of A. By Proposition 3.5.9, f r (A) is obtained from A by swapping the rows of all elements in some connected components C of G. Because G is bipartite and P is (3+1)-free, the degree of each vertex is at most 2 and therefore C is a path or a cycle. Since wt( f r (A))= wt(A)− α r we know C contains some t+ 1 elements in row r of A, and t elements in row r+ 1. Since G is bipartite and C has an odd number of elements, C is in fact a path. Suppose we have some u∈ P\C and v∈ P with u∥ v. If v / ∈ C then neither element changes rows between A and f r (A) so the pair(u,v) is an ascent in both or neither of A and f r (A). If v∈ C then u cannot reside in either row r or r+ 1 of A. Again, the pair (u,v) is an ascent in both or neither of A and f r (A). To show asc(A)= asc( f r (A)) we must finally show that there is a 1-1 correspondence between ascents in A and ascents in f r (A) which occur entirely within C. In both cases, we will there is exactly one ascent involving each given v∈ C from row r+ 1 of A (equivalently row r of f r (A)), which is enough since each ascent must include exactly one such element. Since C is a path containing t+1 elements from row r and t elements from row r+1 of A, each element in row r+1 has degree 2 in G. Then there are u,w∈ C in row r of A with u< P w, and with v incomparable to both. By definition of a natural unit interval order, u< v< w as natural numbers. 
Then(u,v) is an ascent in A but not f r (A), and(v,w) is an ascent in f r (A) but not A. These are the only elements in C incomparable to v, so these are the only possible ascents contained in C involving v. Now if C is a connected component ofA P for a natural unit interval order, it makes sense to define asc (C) as asc(A) for any A∈ C. Therefore we have the following corollary. Corollary 3.7.4. If P is a natural unit interval order then X inc(P) (x,q)= ∑ C q asc(C) char(C) summed over connected components of the crystal onA P . 73 Still for a natural unit interval order, Shareshian and Wachs showed X inc(P) (x,q) is symmetric in x, and in fact s-positive in the sense that the coefficient of each q i is s-positive [59]. Corollary 3.7.4 together with Theorem 3.6.1 provide a new proof of these facts. 3.8 Two-Row P-Tableaux We again assume only that the poset(P,≤ P ) is (3+1)-free. We have seen that we can refine A P into connected components which have s-positive characters, but these characters are still not single Schur functions in general. Take, for instance, the component shown in Fig. 3.8 which contains two P-tableaux. However, we will develop an explicit method to write the Schur functions corresponding to certain P-tableaux as generating functions of disjoint connected subsets ofA P . These connected subsets implicitly have the structure of a highest weight crystal which bears relation to, though does not necessarily coincide with, the crystal onA P . The P-tableaux T we consider are those such that wt(T) i = 0 for all i> 2, and we call them two-row P-tableaux. If T is a two-row P-tableau we also say the diagram of T is the image of the 1 alignment of T , which we denoteD(T). Lemma 3.8.1. If T is a two-row P-tableau then each cell in row 2 ofD(T) has a cell directly above it in row 1. Proof. By Theorem 3.5.11 we have e 1 (T)= 0 from which the result follows. For T a two-row P-tableau we have s wt(T) = s wt(D(T)) which, by Lemma 3.8.1 and Theorem 3.3.9, is the character of the connected component ofD(T) in the diagram crystal. In order to write s wt(T) as the generating function for a set of P-arrays, we therefore want to associate the diagrams with highest weightD(T) to some P-arrays in a weight preserving way. LetD(T) denote this set of diagrams. Definition 3.8.2. Let T be a two-row P-tableau, and D∈D(T). The filling or diagram filling of D with respect to T is a bijection L T (D) : P→ D constructed column by column from left to right as follows. 74 Suppose we have determined the entries mapped to all cells strictly left of some column c. If column c of the 1 alignment of T contains a unique entry, we map it to the unique cell in column c of D. Otherwise, column c of the 1 alignment contains two entries x 1 ,x 2 ∈ P in rows 1 and 2 respectively. We assign x 1 and x 2 to the cells in column c according to the first of the following rules whose prerequisites are met. 1. Suppose some y∈ P is mapped to the topmost cell(r,c− 1) in column c− 1 of D, and that there is exactly one x i greater than y. Then we let s≤ r be maximal such that(s,c)∈ D and we map x i to(s,c) and the remaining poset element to the remaining cell. 2. Suppose some y∈ P is mapped to the lowest cell (r,c− 1) in column c− 1 of D, and that there is exactly one x i greater than y. Then we let s≤ r be maximal such that(s,c)∈ D and we map x i to(s,c) and the remaining poset element to the remaining cell. 3. Map x 1 and x 2 to the upper and lower cells in column c of D respectively. 
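Since the rule selection above is somewhat intricate, the following minimal Python sketch may help fix the procedure. The data model is our own illustrative assumption (not notation from the text): alignment columns are given as tuples, cells of $D$ are pairs (row, column) with row indices increasing downward so that "topmost" means smallest index, and the partial order $<_P$ is supplied as a callable.

```python
def diagram_filling(align_cols, D, less):
    """Sketch of the column-by-column construction in Definition 3.8.2.

    Assumed (hypothetical) data model:
      align_cols[c-1] -- entries of column c of the 1 alignment of T, either
                         (x,) or (x1, x2) with x1 from row 1 and x2 from row 2;
      D               -- set of cells (row, col), row 1 topmost, with the same
                         number of cells per column as the alignment;
      less(a, b)      -- True exactly when a <_P b in the poset P.
    """
    fill, occupant = {}, {}            # element -> cell, cell -> element

    def place(x, cell):
        fill[x] = cell
        occupant[cell] = x

    for c, entries in enumerate(align_cols, start=1):
        rows = sorted(r for (r, cc) in D if cc == c)        # top to bottom
        if len(entries) == 1:                                # unique entry in column c
            place(entries[0], (rows[0], c))
            continue
        x1, x2 = entries
        prev_rows = sorted(r for (r, cc) in D if cc == c - 1)
        done = False
        # rule (1) looks at the topmost cell of column c-1, rule (2) at the lowest
        for anchor in ([prev_rows[0], prev_rows[-1]] if prev_rows else []):
            y = occupant[(anchor, c - 1)]
            bigger = [x for x in (x1, x2) if less(y, x)]
            if len(bigger) == 1:
                # lowest cell of column c weakly above the anchor row
                # (this exists by Proposition 3.8.3)
                s = max(r for r in rows if r <= anchor)
                other_row = [r for r in rows if r != s][0]
                other = x2 if bigger[0] == x1 else x1
                place(bigger[0], (s, c))
                place(other, (other_row, c))
                done = True
                break
        if not done:                                         # rule (3)
            place(x1, (rows[0], c))                          # upper cell
            place(x2, (rows[-1], c))                         # lower cell
    return fill
```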
An example of a diagram filling is shown in Fig. 3.9.
Figure 3.9: A diagram filling with respect to the P-tableau in Fig. 3.2.
Proposition 3.8.3. Let $T$ be a two-row P-tableau, and $D\in\mathcal{D}(T)$. The diagram filling $L_T(D)$ is well-defined.
Proof. If a column $c>1$ contains two cells in $D$, then Proposition 3.3.7 implies each cell in column $c-1$ must be column $(c-1)$-paired. Then for any $(r,c-1)\in D$ there exists $s\leq r$ with $(s,c)\in D$, as presumed in rules (1) and (2).
We denote the set of diagram fillings by
$$\mathrm{DF}_P=\{L_T(D)\mid T\ \text{is a two-row P-tableau and}\ D\in\mathcal{D}(T)\}$$
and the set of diagram fillings for a fixed two-row P-tableau $T$ by
$$\mathrm{DF}_{P,T}=\{L_T(D)\mid D\in\mathcal{D}(T)\}.$$
The point of this construction is that each row of a diagram filling increases in $\leq_P$ from left to right, so the filling specifies a P-array. We will prove this shortly.
Lemma 3.8.4. Let $T$ be a two-row P-tableau, and $D\in\mathcal{D}(T)$. Let $x_1,x_2\in P$ be in the same column $c$ of the 1 alignment of $T$ with $x_1$ in row 1 and $x_2$ in row 2. Suppose they satisfy the condition that whenever we have a distinct element $y<_P x_1$ we have $y<_P x_2$ as well. Then $L_T(D)(x_1)$ is in a row above $L_T(D)(x_2)$. In particular, this applies when $x_1<_P x_2$.
Proof. The proof is by induction on the column index. If $c=1$ then the column is governed by rule (3), so we are done. Suppose $c>1$ and that the column's assignments are defined according to rule (2). This is to say there is some $y\in P$ assigned to the lowest cell $(r,c-1)$ in column $c-1$ of $D$, and $y<_P x_2$, since we cannot have only that $y<_P x_1$. We must have that $(r,c-1)$ is distinct from the topmost cell in column $c-1$, since rule (1) does not apply. By Proposition 3.3.7, both cells in column $c-1$ are column $(c-1)$-paired, which means that both cells in column $c$ are in rows weakly above $r$. Therefore, when we take $s\leq r$ maximal such that $(s,c)\in D$, it must be the case that $(s,c)$ is the lowest cell in column $c$ and is the image of $x_2$.
Otherwise, by definition of weak r alignments there is one entry in column c 2 that is greater than x, and one entry in column c 2 that is not. Rule (1) of the definition of diagram fillings therefore applies, and ensures that the entry greater than x is assigned to(r 1 ,c 2 ). That is, x< P y. Suppose instead that there is (r,c 1 )∈ D with r< r 2 , hence r< r 1 by assumption. Say that L T (D)(z)=(r,c 1 ). If column c 2 of the filling is defined according to rule (1), then there is a single entry w in column c 2 of the 1 alignment of T that is greater than z. We must have that z and w share a row in T . Moreover, w̸= y since w must be mapped to a row weakly less than r< r 1 . So x and y share a row in T implying x< P y. If column c 2 of the filling is not defined according to rule 77 (1), and every entry in column c 2 of the 1 alignment of T is greater than x, we again have x< P y in particular. If column c 2 of the filling is not defined according to rule (1) and there is exactly one entry in column c 2 of the alignment greater than x, then column c 2 of the filling is defined according to rule (2). This ensures that the entry greater than x is mapped to(r 1 ,c 2 ), which is to say x< P y. Now suppose c 2 > c 1 + 1, still assuming there is no column d with c 1 < d≤ c 2 containing just one cell. Proposition 3.3.7 implies that each cell in a column d with c 1 ≤ d< c 2 is column d-paired. This further implies that the topmost cell in each such column d is in a row weakly below the topmost cell in column d+ 1. Then(r 1 ,c 2 ) cannot be the topmost cell in column c 2 , else there would be some(r,c 2 − 1)∈ D with r 1 ≤ r≤ r 2 . So(r 1 ,c 2 ) is the lowest cell in its column. Since the 1 alignment of T is a weak 1 alignment, there is a chain x< P z< P w of entries in columns c 1 ,c 1 + 1,c 2 . If y= w we are done, so assume not. We must have y̸< P w by Lemma 3.8.4 since (r 1 ,c 2 ) is the lowest cell in column c 2 . Then x< P y by the (3+1)-free condition. In particular, if we apply Proposition 3.8.6 when r 1 = r 2 , we see that the entries in each row of the diagram fillings form a chain. We have therefore justified our earlier claim that each diagram filling gives us a P-array. Explicitly, for L∈ DF P we write array(L) to mean the P-array{A i, j } where L(A i, j ) is the jth cell from the left in row i of Im(L). Let DFA P ={array(L)| L∈ DF P } denote the set of P-arrays obtained from diagram fillings, and let DFA P,T ={array(L)| L∈ DF P,T } denote the set of P-arrays obtained from diagram fillings with a fixed two-row P-tableau T . Having shown that diagram fillings essentially give us a map from pairs of P-tableau and di- agrams to P-arrays, we want to see that this association is injective and compatible with weight 78 maps, which allows us to impose the structure of a highest weight crystal onto each DFA P,T . First we have a series of somewhat technical lemmas. Lemma 3.8.7. Let T a two-row P-tableau. We have array(L T (D(T)))= T . Proof. If some column c of L T (D(T)) is not defined according to rule (1) or (2) in Definition 3.8.2, then the column assignments of entries in this column agree with T . Rules (1) and (2) only apply if there are some x,y in columns c− 1 and c of the 1 alignment of T respectively with x̸< P y. Then y must be placed in the row opposite to x in both T and array(L T (D(T))). By induction, we see that T and array(L T (D(T))) must agree on the rows of all elements in column c of the 1 alignment of T . Lemma 3.8.8. Let T be a two-row P-tableau and D∈D(T). 
Let x∈ P and say L T (D)(x)=(r,c). If column c+1 of D contains two cells, then there exists y> P x with L T (D)(y)=(s,c+1) for some s≤ r. Proof. Suppose(r,c) is the topmost cell in column c of D. If there is exactly one y> P x in column c+ 1 of L T (D) then rule (1) in the definition of L T (D) ensures it resides in a row weakly above r. Otherwise, both elements in column c+ 1 are greater than L T (D)(r,c), with at least one still being weakly above row r by Proposition 3.3.7. Suppose instead that(r,c) is the lower cell in column c and that it is distinct from the topmost cell. Then all cells in column c+ 1 of D are weakly above row r by Proposition 3.3.7, and at least one such cell corresponds to an entry greater than x by Proposition 3.4.7. Lemma 3.8.9. Let T be a two-row P-tableau and D∈D(T). Suppose for some x,y∈ P we have L T (D)(x)=(r 1 ,c 1 ) and L T (D)(y)=(r 2 ,c 2 ) with r 1 ≤ r 2 and c 1 ≤ c 2 . Then x̸> P y. Proof. If there is some column d with c 1 < d≤ c 2 that contains only one cell in D, then the result follows by Lemma 3.4.8. So assume not. In light of Lemma 3.8.8 we may take a chain x= x 1 < P x 2 < P ··· < P x k where each consecutive element is one column to the right, and in a row weakly above the position of the previous element, 79 and x k is in column c 2 of L T (D). If x k = y we are done, and if not then x k is in a weakly row above r 1 and therefore strictly above r 2 . Now y< P x< P x k would be a contradiction of Lemma 3.8.4. Lemma 3.8.10. Let T be a two-row P-tableau and let D∈D(T)\{D(T)}. Let r≥ 1 be minimal such that e r (D)̸= 0. Let x∈ P be the element such that L T (D)(x) is the rightmost cell in row r+ 1 that is not r-paired. Then x is the rightmost entry in row r+ 1 of the r alignment of array(L T (D)) that does not share a column with an entry in row r. Proof. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+ 1 in array(L T (D)) respectively. Let p be maximal such that L T (D)(b p ) is not r-paired. Say L T (D)(b p )=(r+ 1,c). Suppose we have some a k with k maximal such that L T (D)(a k ) is in a column strictly left of column c. Say L T (D)(a k )=(r,c ′ ) and assume a k ̸< P b p . By Lemma 3.3.8, there can be no sequence of cells(r 1 ,c 1 ),(r 2 ,c 2 ),(r 3 ,c 3 )∈ D with each r i < r i+1 and c i ≤ c i+1 . Thus, L T (D)(a k ) cannot be (r− 1)-paired without forming such a sequence together with the (r− 1)-paired cell and(r+ 1,c). By choice of r, we must then have r= 1. Since a k ̸< P b p we can say that column c ′ + 1 of the 1 alignment of T contains two entries using Lemma 3.4.8. Then in D, we must have (1,c ′ + 1) column c ′ -paired with L T (D)(a k )=(1,c ′ ) contradicting either the choice of a k , or the fact that L T (D)(b p )=(2,c) is not 1-paired. Then we must have a k < P b p . Because L T (D)(b p ) is not 1-paired, each L T (D)(a i ) with i≤ k must be 1-paired to some L T (D)(b j ) with j< p. Any L T (D)(a k− i ) must then be in a column weakly left of L T (D)(b p− i− 1 ) for i≥ 0. Then a k− i ̸> P b p− i− 1 by Lemma 3.8.9. In particular, this puts a k in a column strictly left of b p in the 1 pre-alignment of array(L T (D)). In the 1 alignment of array(L T (D)), a k remains in a column strictly left of b p since a k < P b p . We must determine the positions of the remaining a i in the r alignment of array(L T (D)) (no longer assuming r = 1 since a k as defined above may not exist). Let q≥ 1 be minimal such that either a q does not exist, or L T (D)(a q ) is in a column strictly right of L T (D)(b p ). 
Note that n− p≤ m− q+ 1 since any L T (D)(b i ) with i> p is r-paired with some L T (D)(a j ) with j≥ q by choice of p. This also means each L T (D)(b p+i ) with i≥ 1 is in a column weakly right of L T (D)(a q+i− 1 ). 80 Suppose b p+k ̸< P a q+k for some k≥ 0, and L T (D)(b p+k ) lies in a column strictly left of L T (D)(a q+k ). If k> 0, further assume L T (D)(b p+k ) shares a column with L T (D)(a q+k− 1 ). We claim b p+k+1 exists and L T (D)(b p+k+1 ) shares a column with L T (D)(a q+k ). If k = 0 so that L T (D)(b p ) does not share a column with any cell in row r, this fact is necessitated by Proposition 3.8.6 and our remark that L T (D)(b p+k+1 ) is in a column weakly right of L T (D)(a q+k ). When k> 0 we have L T (D)(a q+k− 1 ) and L T (D)(b p+k ) sharing a column. The former cannot be (r− 1)-paired without violating Lemma 3.3.8, so we must have r = 1 by choice of r. Since b p+k ̸< P a q+k , Lemma 3.4.8 implies L T (D)(a q+k ) shares a column with another cell. This cell can only be in row 2, and therefore corresponds to b p+k+1 , again by Lemma 3.3.8. Therefore, there is someℓ≥ 0 such that (perhaps vacuously) (a) b p+i ̸< P a q+i for all 0≤ i<ℓ, (b) L T (D)(b p+i ) and L T (D)(a q+i− 1 ) share a column for all 1≤ i<ℓ, and (c) if a q+ℓ exists then b p+ℓ < P a q+ℓ . Additionally, ifℓ> 0 we know r= 1. Recall that each L T (D)(b p+i ) with i≥ 1 lies weakly to the right of the column containing L T (D)(a q+i− 1 ), and therefore b p+i ̸< P a q+i− 1 by Lemma 3.8.9. We can now say that in the r pre-alignment of array(L T (D)), the entries b 1 ,...,b p+ℓ ,a q+ℓ ,...,a m lie in consecutive columns. If a q+ℓ actually exists, then a m lies in the rightmost occupied column p− q+ m+ 1≥ n. If a q+ℓ does not exist, then b n lies in the rightmost occupied column. In either case, the entries b 1 ,...,b p+ℓ ,a q+ℓ ,...,a m will remain in consecutive column in the r alignment of array(L T (D)). Moreover, the entries b p+ℓ+1 ,...,b n will be among the columns containing a q+ℓ ,...,a m . To complete the proof, it remains to show that a q ,...,a q+ℓ− 1 occupy columns p+ 1,..., p+ℓ in the r alignment. This relies, mainly, on showing that a q+i ̸< P b q+i+1 for each 0≤ i<ℓ. This is all vacuous whenℓ= 0 so assumeℓ> 0 hence r= 1. Assume to the contrary that a q+i < P b q+i+1 for some 0≤ i<ℓ. Lemma 3.8.4 asserts that a q+i is in row 1, and b q+i+1 in row 2 of T . In fact, a q ,...,a q+i must all reside in row 1 of T , and 81 b p ,...,b q+i+1 in row 2 since b p+ j ̸< P a q+ j for each 0≤ j≤ i. There must then be some y∈ P in row 1 of T such that L T (D)(y) shares a column with L T (D)(b p )=(2,c). If we have some x< P y< P a q , then since b p ̸< P a q , the (3+1)-free condition gives x< P b p . By Lemma 3.8.4, we then have L T (D)(y) in a row above L T (D)(b p ), which is to say row 1. This contradicts the choice of p such that L T (D)(b p ) is not r-paired. Now a q+ℓ− 1 ̸< P b p+ℓ and we have seen b p+ℓ lies in column p+ℓ of the r alignment of array(L T (D)). We also saw that column p+ℓ+ 1 is either empty or contains a q+ℓ . Property (4) of weak r alignments therefore requires a q+ℓ− 1 to be in column p+ℓ. If 0< i<ℓ and a q+i lies in column p+ i+ 1, then by Property (4) again we must have a q+i− 1 in column p+ i, as a q+i− 1 is not smaller than b p+i in column p+ i. To summarize, the entries a q ,...,a m lie in columns p+ 1,..., p+ m− q+ 1 of the r alignment of array(L T (D)), while b p+1 ,...,b n lie in a subset of those same columns. 
Since each a i with i< q lies in a column strictly left of b p in the r alignment, b p is the rightmost entry in row r+ 1 of the r alignment of array(L T (D)) that does not share a column with an entry in row r. There is a loose form of compatibility between diagram crystals and the crystal onA P as shown in the next proposition. Proposition 3.8.11. Let T be a two-row P-tableau and let D∈D(T)\{D(T)}. Let r≥ 1 be minimal such that e r (D)̸= 0. Then array(L T (e r (D)))= e r (array(L T (D))). Proof. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries of rows r and r+ 1 in array(L T (D)) respectively. Let p be maximal such that the cell corresponding to b p in L T (D) is not r-paired. Let G be the induced subgraph of inc(P) on the vertices in a 1 ,...,a m and b 1 ,...,b n . By Lemma 3.8.10 and Proposition 3.5.9, we obtain e r (array(L T (D))) from array(L T (D)) by taking the vertices in the connected component of G containing b p , and swapping them between rows r and r+ 1. We must show the same description holds for array(L T (e r (D))). Say L T (D)(b p )=(r+ 1,c). Notice that L T (D) and L T (e r (D)) agree in all columns strictly left of c, as the diagram fillings are defined column by column from left to right, and D and e r (D) coincide in those columns. 82 Next we consider how column c differs in L T (D) and L T (e r (D)). We claim L T (e r (D))(b p )= (r,c). This is clear if column c of the 1 alignment of T contains a single entry. Otherwise, column c of both diagram fillings will be defined according to the same rule (1), (2), or (3) in the definition. We see that(r+ 1,c) is the topmost cell in column c of D if and only if(r,c) is the topmost cell in column c of e r (D). So if column c is defined by rule (3) we indeed get L T (e r (D))(b p )=(r,c). If (r,c− 1) / ∈ D, then for any (s,c− 1)∈ D, (r+ 1,c) is the lowest cell in column c weakly above row s in D if and only if(r,c) is the lowest cell in column c weakly above row s in e r (D). Similarly, if there is(r ′ ,c)∈ D distinct from(r+1,c), then it is the lowest cell in column c weakly above row s in D if and only if this is also true in e r (D). Therefore if column c is defined according to rules (1) or (2), we will again get that L T (e r (D))(b p )=(r,c). If instead(r,c− 1)∈ D, then also(r+ 1,c− 1)∈ D as(r+ 1,c) is not r-paired. In this case we must have r= 1 by choice of r because(r,c− 1) cannot be(r− 1)-paired without violating Lemma 3.3.8. It would also violate Lemma 3.3.8 to have a cell in row 3 or greater of column c in D, so we are back in the case where column c of the diagram fillings contains only the entry b p . Let q≥ 1 be minimal such that either a q does not exist, or lies in a column strictly right of b p in the 1 alignment of T . Suppose for some k≥ 0 and all 0≤ i≤ k we have (a) L T (D)(b p+i )=(r+ 1,c+ i) and L T (e r (D))(b p+i )=(r,c+ i), and (b) if i> 0 then L T (D)(a q+i− 1 )=(r,c+ i) and L T (e r (D))(a q+i− 1 )=(r+ 1,c+ i). We will show that if a q+k exists with L T (D)(a q+k )=(r,c+ k+ 1), and b p+k ̸< P a q+k then the same hypotheses apply for k+ 1, and otherwise that column c+ k+ 1 is identical in L T (D) and L T (e r (D)). First assume L T (D)(a q+k )=(r,c+k+1) and b p+k ̸< P a q+k . The latter assumption with Propo- sition 3.8.6 implies either (r,c+ k)∈ D or (r+ 1,c+ k+ 1)∈ D. In either case, we have cells in rows r and r+1 of the same column, so the cell in row r cannot be(r− 1)-paired without violating Lemma 3.3.8. This means r= 1 by choice of r. 
If we are in the case (1,c+ k)=(r,c+ k)∈ D then k> 0 hence L T (D)(a q+k− 1 )=(1,c+ i). Since b p+k ̸< P a q+k , L T (D)(a q+k )= (1,c+ k+ 1) cannot be the only cell in its column using 83 Lemma 3.4.8. A cell in row 3 or greater of column c+ i+ 1 would again contradict Lemma 3.3.8 together with (1,c+ k) and (2,c+ k). Therefore we have (r+ 1,c+ k+ 1)=(2,c+ k+ 1)∈ D anyway. It must be the case that L T (D)(b p+k+1 )=(2,c+ k+ 1). Now L T (e r (D)) must bijectively assign a q+k and b p+k+1 to the cells(1,c+ k+ 1) and(2,c+ k+ 1). We have L T (e r (D))(b p+k ) = (1,c+ k), and b p+k ̸< P a q+k . For L T (e r (D)) to have in- creasing rows as per Proposition 3.8.6, the only choice is L T (e r (D))(a q+k )=(2,c+ k+ 1) and L T (e r (D))(b p+k+1 )=(1,c+ k+ 1). Then (a) and (b) hold true for k+ 1 as claimed. Assume a q+k does not exist or L T (D)(a q+k ) is not in column c+k+1. Equivalently,(r,c+k+ 1) / ∈ D. We will show L T (D) and L T (e r (D)) coincide in column c+ k+ 1. If b p+k+1 exists then by choice of p the cells of b p+1 ,...,b p+k+1 are r-paired with a subset of the cells of a q ,...,a m . In particular, a q+k would have to also exist and L T (D)(b p+k+1 ) lies in a column weakly right of L T (D)(a q+k ), hence strictly right of column c+k+1. Therefore, we can see that(r+1,c+k+1)/ ∈ D. If k> 0 then we have seen r= 1 and(1,c+ k),(2,c+ k)∈ D. Using Lemma 3.3.8, there can be no cell in column c+ k+ 1 of D in row 3 or greater. Thus column c+ k+ 1 of D is vacant, and the two diagram fillings trivially agree in this column. Suppose instead k= 0. Any(s,c+1)∈ D is the lowest cell in its column weakly above the row of L T (D)(b p )=(r+ 1,c) in D if and only if it is the lowest cell in its column weakly above the row of L T (e r (D))(b p )=(r,c) in e r (D). Similarly if there is(r ′ ,c)∈ D distinct from(r+ 1,c) then (s,c+ 1) is the lowest cell in its column weakly above r ′ in D if and only if this is also the case in e r (D). It follows that L T (D) and L T (e r (D)) will be defined the same way in column c+ 1. Finally, assume L T (D)(a q+k )=(r,c+ k+ 1), but b p+k < P a q+k . We will once again show the two diagram fillings agree in column c+ k+ 1. If the column contains only one cell then this is immediate so we will assume it contains two. We have several cases. • Case: k= 0 and(r+ 1,c+ 1) / ∈ D Any (s,c+ 1)∈ D is the lowest cell in its column weakly above the row of L T (D)(b p )= (r+ 1,c) in D if and only if it is the lowest cell in its column weakly above the row of L T (e r (D))(b p )=(r,c) in e r (D). If there is(r ′ ,c)∈ D distinct from(r+ 1,c) then(s,c+ 1) 84 is similarly the lowest cell in its column below row r ′ in D if and only if the same is true in e r (D). Column c+ 1 will therefore be defined identically in L T (D) and L T (e r (D)). • Case: k= 0 and(r+ 1,c+ 1)∈ D If column c+ 1 is defined in L T (D) and L T (e r (D)) by rule (3) we are done, so assume not. We must have b p less than the entry corresponding to(r+1,c+1) in D, hence every entry in column c+1. If L T (D)(b p ) is the topmost cell in its column, then the conditions for defining column c+ 1 of L T (D) or L T (e r (D)) according to rule (1) are not satisfied. If L T (D)(b p ) is the lowest cell in its column, then the conditions for defining column c+ 1 of L T (D) or L T (e r (D)) according to rule (2) are not satisfied. 
Thus, there must be (r ′ ,c)∈ D distinct from (r+ 1,c), and column c+ 1 of both diagram fillings is determined by mapping the unique entry in column c+1 of the 1 alignment of T to the lowest cell in column c+1 weakly above row r ′ . • Case: k> 0 We have seen that we must have r= 1, and we know(1,c+ k),(2,c+ k)∈ D∩ e r (D). Using Lemma 3.3.8, the two cells in column c+ k+ 1 of D∩ e r (D) can only be (1,c+ k+ 1) and (2,c+ k+ 1). Thus L T (D)(b p+k+1 )=(2,c+ k+ 1). We know b p+k− 1 ̸< P a q+k so the (3+ 1)-free condition implies a q+k < P b p+k+1 . Thus, every entry in column c+ k+ 1 of the 1 alignment of T is greater than every entry in column c+ k. Then column c+ k+ 1 of L T (D) and L T (e r (D)) are defined according to rule (3). Now we fix k≥ 0 to be maximal such that for all 0≤ i< k hypotheses (a) and (b) hold. We have summarily shown that array(L T (e r (D))) is obtained from array(L T (D)) by putting a q ,...,a q+k− 1 in row r+ 1 and putting b p ,...,b p+k in row r. Together, these entries form a union C of connected components of G, since array(L T (e r (D))) is, in fact, a P-array. To finish, we must show that C is actually a single connected component of G, i.e. the connected component containing b p . Assume k> 0 hence r = 1, else this is trivial. For every 0≤ i< k we have L T (D)(a q+i ) above L T (D)(b p+i+1 ) in the same column and L T (e r (D))(a q+i ) below 85 L T (e r (D))(b p+i+1 ) in the same column. Then we must have a q+i ∥ b p+i+1 by Lemma 3.8.4. We also know b p+i ̸< P a q+i for each 0≤ i< k, and it now suffices to show b p+i ̸> P a q+i . If we had b p+i > P a q+i > P a q+i− 1 for i> 0, this would contradict that a q+i− 1 ∥ b p+i as just shown. Then assume b p > P a q . If b p is the only entry in its column of the 1 alignment of T , then it is in row 1 of T and a q in row 2. This is impossible by property (3) of weak 1 alignments. Then there must be some x∈ P with L T (D)(x)=(s,c) and s> 2. Because b p and a q cannot share a row in T , x and a q do share a row. Thus x< P a q < P b p but this contradicts Lemma 3.8.4. We conclude C is a connected component of G and we are done. Finally, we can use the previous proposition to help summarize the key properties of diagram fillings. Theorem 3.8.12. The map ϕ : G T two-row P-tableau {(T,D)| D∈D(T)}→ DFA P given by(T,D)7→ array(L T (D)) is a bijection, and wt(ϕ(T,D))= wt(D). Additionally, DFA P,T is a connected subset of the crystal onA P for each two-row P-tableau T . Proof. We have that ϕ is onto by definition of DFA P , and it is also immediate that wt(D) = wt(ϕ(T,D)) whenever D∈D(T). Let T be a two-row P-tableau and D∈D(T). If D̸=D(T) then we may take r≥ 1 to be minimal such that e r (D)̸= 0 and we get that e r (ϕ(T,D))=ϕ(T,e r (D)) by Proposition 3.8.11. Repeated application shows that some sequence of raising operations applied toϕ(T,D) gives us ϕ(T,D(T))= T , by Lemma 3.8.7, with the corresponding sequence of P-arrays being contained in DFA P,T . Therefore DFA P,T is a connected subsets of the P-array crystal. Moreover, we claim that r as chosen is minimal such that e r (ϕ(T,D))̸= 0. To see this, let 1≤ s< r. Let a 1 < P ··· < P a m and b 1 < P ··· < P b n be the entries in rows s and s+ 1 ofϕ(T,D) respectively. By choice of r, the cell of each b i in L T (D) is s-paired. In particular, a i exists and must lie in a column weakly left of b i . By Lemma 3.8.9, a i ̸> P b i which confirms that e s (A)= 0. 
86 Therefore, the sequence of raising operations applied toϕ(T,D) to obtain T depends only on ϕ(T,D), not D or T per se. This is to say that if we have ϕ(T,D)=ϕ(T ′ ,D ′ ) for some two-row P-tableau T ′ and diagram D ′ ∈D(T ′ ) then the same sequence of raising operations applied to ϕ(T,D) yields T ′ , which thus coincides with T . Finally, we must show that when D and D ′ are distinct diagrams inD(T) then ϕ(T,D)̸= ϕ(T,D ′ ). We may take some (r,c)∈ D\ D ′ and we get that row r of ϕ(T,D) contains an entry from column c of the 1 alignment of T , while row r ofϕ(T,D ′ ) contains no such entry. Therefore ϕ is a bijection. Using Theorem 3.3.9 and the map ϕ from Theorem 3.8.12, we can have DFA P,T inherit the structure of a highest weight crystal fromD(T) when T is a two-row P-tableau. In particular s wt(T) = ∑ A∈DFA P,T ∏ i≥ 1 x wt(A) i i . It would be nice to extend this result to account for P-tableaux with an arbitrary number of rows, thereby defining highest weight crystals on the full set of P-arrays, and achieving a Robinson- Schensted correspondence for P-arrays. However, we do not expect such an extension to be readily available. For one thing, we have the notable property that DFA P,T is a connected subset of the crystal defined in Section 3.5 but after removing this subset for the two-row P-tableau in Fig. 3.8 the remaining maximal connected subsets do not individually have s-positive characters. 87 Chapter 4 Complete Flagged Homogeneous Polynomials 4.1 Introduction The classical theory of symmetric functions is beautifully described by the combinatorics of Young tableaux and has applications to the representation theory of the general linear group in particular. The distinguished symmetric basis{s λ } of Schur polynomials corresponds to irreducible represen- tations. A related picture emerges when one considers subrepresentations of the Borel subgroup B⊂ GL n of lower triangular matrices. The charactersκ a are the key polynomials also called De- mazure characters and they are no longer symmetric but rather a basis for the polynomial ring [17, 18]. These objects remain of high combinatorial interest with many models that generalize the Schur polynomials. The key polynomials can be thought of as truncated, nonsymmetric, or “flagged” versions of the Schur polynomials (although “flagged Schur polynomials” refers to a different set of objects). The other classical symmetric bases have not found comparable counterparts, but we define a strong candidate for the complete homogeneous symmetric polynomials{h λ }. We call them the complete flagged homogeneous polynomials h a . These are a basis for the polynomial ring that lift the{h λ } basis in a suitable way. Like the Schur polynomials, the complete symmetric polynomials are characters of the general linear group so we might ask thath a is the character of a B-module, which pans out. This is to say that the new basis enjoys a similar relationship to the complete symmetric polynomials as the key polynomials do to Schur polynomials. 88 On another axis, the relationship between{κ a } and{h a } refines the relationship between {s λ } and{h λ }. The classical case is most easily encoded by the well-known Kostka coefficients, which describe the Schur expansion of h λ and are given combinatorial meaning by the RSK correspon- dence. 
We determine a similar model for the key expansion of $\mathsf{h}_a$ by utilizing Haglund, Haiman, and Loehr's combinatorial formula for nonsymmetric Macdonald polynomials [27] and a nonsymmetric Cauchy identity discovered in [37]. We reprove the latter by introducing an RSK analogue comparable to the one given by Mason [48], but more specifically suited to the setting. The inverse Kostka coefficients for the opposite expansion were given a signed combinatorial formula by Eğecioğlu and Remmel [19]. Here too, we obtain a satisfying model for the flagged analogues. The resulting combinatorics of snakes generalizes the notion of rim hooks, alternatively called ribbons or border strips. We discuss how the new expansion explains the cancellation that occurs in the classical formula.
Our paper adheres to the following outline. In Section 4.2 we define the new basis and prove some basic facts. Section 4.3 establishes a representation theoretic interpretation. In Section 4.4 we review the combinatorics of non-attacking fillings used in Haglund, Haiman, and Loehr's formula [27] and give our preferred interpretation of the flagged Kostka analogues. Section 4.5 develops our flagged RSK algorithm and is not logically necessary for understanding the rest of our paper. We show in Section 4.6 that the $\mathsf{h}_a$ polynomials and, in fact, their products are nonnegative sums of Schubert polynomials. In Section 4.7 we define snakes as generalizations of rim hooks, and use them to describe the flagged inverse Kostka analogues.
4.2 Complete Flagged Homogeneous Polynomials
We begin with some basic notions. A weak composition $a=(a_1,a_2,\ldots,a_n)$ of length at most $n$ is an element of $\mathbb{N}^n$, and we can always identify $\mathbb{N}^n$ with $\mathbb{N}^n\times\{0\}\subset\mathbb{N}^{n+1}$. The value $a_i$ is called the $i$th part of $a$, and the sum of the parts is the size of $a$. A partition is a weak composition whose parts are in weakly decreasing order. Usually we will have an implicit fixed value of $n$, but we will sometimes be more explicit, particularly in Section 4.5. Given a weak composition $a$ and a set of indeterminates $x_1,x_2,\ldots,x_n$, we write $x^a=\prod_{1\leq i\leq n}x_i^{a_i}$. We let $\mathrm{sort}(a)$ denote the partition obtained by putting the parts of $a$ into weakly decreasing order, and $\mathrm{rev}(a)$ denotes the weak composition $(a_n,a_{n-1},\ldots,a_1)$.
The complete homogeneous symmetric polynomial of degree $k$ is
$$h_k(x_1,\ldots,x_n)=\sum_{1\leq i_1\leq i_2\leq\cdots\leq i_k\leq n}x_{i_1}x_{i_2}\cdots x_{i_k}.$$
More generally, the complete homogeneous symmetric polynomial of a partition $\lambda$ is
$$h_\lambda(x_1,\ldots,x_n)=\prod_{1\leq i\leq n}h_{\lambda_i}(x_1,\ldots,x_n). \qquad (4.2.1)$$
These form a basis for the space of symmetric polynomials. We let $\mathcal{M}_n$ denote the set of $n\times n$ matrices with entries in $\mathbb{N}$, and let $\mathcal{L}_n$ denote the subset of lower triangular matrices. Another well-known characterization of these polynomials is
$$h_\lambda(x_1,\ldots,x_n)=\sum_{\substack{M\in\mathcal{M}_n\\ \mathrm{row}(M)=\lambda}}x^{\mathrm{col}(M)} \qquad (4.2.2)$$
where the row and column sums $\mathrm{row}(M)$ and $\mathrm{col}(M)$ are the weak compositions whose $i$th part is the sum of the entries in the $i$th row (respectively column) of $M$.
The complete homogeneous symmetric function indexed by the partition $\lambda$ is the stable limit of $h_\lambda(x_1,\ldots,x_n)$ as $n\to\infty$. We will typically use the symbol $h_\lambda$ alone to denote the stable limit, and specify the arguments when we want the polynomial. This notation makes more sense when one considers that the polynomial is recovered by setting all but the first $n$ indeterminates in $h_\lambda$ to zero.
What follows is our nonsymmetric analogue of the complete homogeneous symmetric polynomials.
Definition 4.2.1.
The complete flagged homogeneous polynomial indexed by a weak composition $a$ is defined by
$$\mathsf{h}_a=\prod_{i\geq 1}h_{a_i}(x_1,\ldots,x_i).$$
Alternatively, we may think of these polynomials as sums over certain lower triangular matrices.
Lemma 4.2.2. We have
$$\mathsf{h}_a=\sum_{\substack{L\in\mathcal{L}_n\\ \mathrm{row}(L)=a}}x^{\mathrm{col}(L)}.$$
Proof. We have
$$\mathsf{h}_a=\prod_{i\geq 1}h_{a_i}(x_1,\ldots,x_i)=\prod_{i\geq 1}\sum_{\substack{c\in\mathbb{N}^i\\ |c|=a_i}}x^c=\sum_{\substack{L\in\mathcal{L}_n\\ \mathrm{row}(L)=a}}x^{\mathrm{col}(L)},$$
the last equality identifying a choice of $c\in\mathbb{N}^i$ with the $i$th row vector of a matrix.
In order for us to consider these polynomials a convincing analogue of the complete homogeneous symmetric basis, they should at minimum lift that basis and themselves form a basis for polynomials. The next two propositions establish these facts.
Proposition 4.2.3. The complete flagged homogeneous polynomials have the stable limit
$$\lim_{k\to\infty}\mathsf{h}_{0^k\times a}(x_1,x_2,\ldots,x_k,0,0,\ldots)=h_{\mathrm{sort}(a)}.$$
Proof. For a weak composition $b$, the coefficient of $x^b$ in $h_{\mathrm{sort}(a)}$ is the number of $\mathbb{N}$-matrices with row sums $\mathrm{sort}(a)$ and column sums $b$. This is the same as the number of $\mathbb{N}$-matrices with row sums $0^k\times a$ and column sums $b$. The coefficient of $x^b$ in $\mathsf{h}_{0^k\times a}(x_1,\ldots,x_k,0,\ldots,0)$ is the number of $\mathbb{N}$-matrices with row sums $0^k\times a$, column sums $b$, and support in the first $k$ columns. These objects coincide as soon as $k\geq n$.
The dominance order on weak compositions is defined by $a\trianglelefteq b$ if and only if $a_1+\cdots+a_k\leq b_1+\cdots+b_k$ for all $k\geq 1$. In this case we say $b$ dominates $a$.
Proposition 4.2.4. The set $\{\mathsf{h}_a\}$ indexed by weak compositions of length at most $n$ is a $\mathbb{Z}$-basis for $\mathbb{Z}[x_1,\ldots,x_n]$.
Proof. Any nonzero entry in one of the first $k$ rows of a lower triangular $\mathbb{N}$-matrix $L$ is also contained in one of the first $k$ columns. Therefore $\mathrm{col}(L)\trianglerighteq\mathrm{row}(L)$. Using the characterization of $\mathsf{h}_a$ in Lemma 4.2.2, we then have that $x^b$ can appear in the monomial expansion of $\mathsf{h}_a$ only if $b$ dominates $a$. Therefore the transition matrix from $\{\mathsf{h}_a\}$ to $\{x^a\}$ is upper triangular if both sets are ordered according to a linear extension of dominance order on the indexing compositions. In fact the transition matrix is upper uni-triangular, since $x^a$ appears in the expansion of $\mathsf{h}_a$ exactly once, indexed by the unique diagonal matrix with row sums $a$. So the transition matrix is invertible. Since $\{x^a\}$ is a basis for $\mathbb{Z}[x_1,\ldots,x_n]$, so is $\{\mathsf{h}_a\}$.
It is worth noting that the $\{\mathsf{h}_a\}$ basis does not contain its symmetric counterpart. For example, $h_{11}(x_1,x_2)=\mathsf{h}_{02}+\mathsf{h}_{11}-\mathsf{h}_{20}$.
4.3 $\mathsf{h}$ as a Character
The polynomial $\mathsf{h}_a$ also has a representation theoretic interpretation, which we can see through the combinatorics of Kohnert polynomials. We use the notation $[n]=\{1,2,\ldots,n\}$. A finite subset $D$ of $\mathbb{Z}_+\times[n]$ is called a diagram and its elements are referred to as cells. A cell is conceptualized as a box sitting at a lattice point in the first quadrant of the Cartesian coordinate plane. Thus, when we speak of a cell being "weakly left" of some other cell or column index, we are saying its first coordinate is weakly less than the comparable value. A cell lies "strictly above" row $r$ if its second index is strictly greater than $r$, etc.
Definition 4.3.1 ([34]). Given a diagram $D$ we may perform a Kohnert move by removing the rightmost cell $(c,r)$ in some row of $D$ and appending a cell in the topmost vacant position in column $c$ strictly below row $r$, assuming such a position exists. The set of diagrams obtainable from $D$ through some sequence of Kohnert moves is denoted $\mathrm{KD}(D)$. Such diagrams are Kohnert diagrams of $D$.
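To make the move concrete, here is a minimal Python sketch of a single Kohnert move and of generating $\mathrm{KD}(D)$. Cells are encoded as (column, row) pairs following the first-quadrant convention just stated; the function names are our own and not taken from the text.

```python
def kohnert_moves(D):
    """All diagrams obtainable from D by one Kohnert move (Definition 4.3.1).

    D is a set of cells (c, r) with column c >= 1 and row r >= 1,
    rows increasing upward.
    """
    results = []
    for r in {row for (_, row) in D}:
        c = max(col for (col, row) in D if row == r)   # rightmost cell of row r
        # vacant positions in column c strictly below row r, top to bottom
        vacant = [s for s in range(r - 1, 0, -1) if (c, s) not in D]
        if vacant:                                      # move to the topmost one
            results.append(frozenset(D) - {(c, r)} | {(c, vacant[0])})
    return results

def kohnert_diagrams(D):
    """The closure KD(D) of D under Kohnert moves."""
    seen = {frozenset(D)}
    frontier = [frozenset(D)]
    while frontier:
        T = frontier.pop()
        for U in kohnert_moves(T):
            if U not in seen:
                seen.add(U)
                frontier.append(U)
    return seen
```

Iterating over kohnert_diagrams(D) and recording the number of cells in each row recovers the monomials of the Kohnert polynomial defined next.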
Figure 4.1: The three possible Kohnert moves on the top diagram. Definition 4.3.2 ([4]). The Kohnert polynomial of a diagram D is K D = ∑ T∈KD(D) x wt(T) where wt(T) is the weak composition defined by wt(T) i being the number of cells in row i of T . By conventionK / 0 = 1. A diagram D also indexes a B-moduleS flag D called a flagged Schur module , where B⊂ GL n is the subalgebra of lower triangular matrices. The character ofS flag D is the trace of the action of the diagonal matrix diag(x 1 ,...,x n ). Precise definitions can be found in [56] or [1]. A diagram D is southwest if (d,r),(c,s)∈ D with c< d and r< s implies (c,r)∈ D. The following was conjectured in [4]. Theorem 4.3.3 ([1]). For a southwest diagram D the character ofS flag D coincides with the Kohnert polynomialK D . Thus, we can show that h a is the character of a flagged Schur module by showing it is the Kohnert polynomial of a southwest diagram. 93 Figure 4.2: A southwest diagram (left) and non-southwest diagram (right). Theorem 4.3.4. The polynomialh a is the character of the B-moduleS flag D a where D a = ( (c,r)∈ Z + × [n]| r− 1 ∑ k=1 a k < c≤ r ∑ k=1 a k ) . Figure 4.3: The diagram D (1,0,3,6,1,0,2) . Proof. The diagram is southwest as we see(d,r),(c,s)∈ D a with r< s implies d≤ c. Therefore it suffices to show h a =K D a by Theorem 4.3.3. We will show there is a bijection φ : KD(D a ) ∼ →{L∈L n | row(L)= a} such that the diagram weight corresponds to the column sums of the matrix. For T ∈ KD(D a ) we define φ(T)= ( (c,r)∈ T| r= j, i− 1 ∑ k=1 a k < c≤ i ∑ k=1 a k ) ! i, j . There is exactly one cell in each column of T up to∑ n k=1 a j so for fixed i we have n ∑ j=1 ( (c,r)∈ T| r= j, i− 1 ∑ k=1 a k < c≤ i ∑ k=1 a k ) = a i . 94 That is, row(φ(T))= a. Additionally, for(c,r)∈ T with∑ i− 1 k=1 a k < c≤ ∑ i k=1 a k we know that the only cell in column c of D a is(c,i), which implies r≤ i. Thereforeφ(T) is always lower triangular, soφ is well-defined. Next for fixed j we have n ∑ i=1 ( (c,r)∈ T| r= j, i− 1 ∑ k=1 a k < c≤ i ∑ k=1 a k ) =|{(c,r)∈ T| r= j}|= wt(T) j soφ translates weights to column sums as intended. We claim that any T∈ KD(D a ) has the property that if r− 1 ∑ k=1 a k ,b 1 ! , r− 1 ∑ k=1 a k + 1,b 2 ! ,..., r ∑ k=1 a k ,b a r ! are the (unique) cells in these columns then b 1 ≥ b 2 ≥···≥ b a r . This is true for D a where r= b 1 = ··· = b a r . Supposing the property holds for some T ∈ KD(D a ), a Kohnert move can only replace a cell ∑ r− 1 k=1 a k + i,b i with ∑ r− 1 k=1 a k + i,b i − 1 since there is only one cell in each column. Even then, this is only possible if there is no cell in the same row to the right of ∑ r− 1 k=1 a k + i,b i , which is to say b i > b i+1 when i< a r . The property b 1 ≥ b 2 ≥···≥ b a r then must be maintained under Kohnert moves. The above implies that for L∈L n with row(L)= a there is only one diagram of which L can possibly be the image. Namely, we must have that within the columns ∑ r− 1 k=1 a k ,...,∑ r k=1 a k the rightmost L r,1 columns must have cells in row 1, the next rightmost L r,2 columns have cells in row 2, and so on to L r,r . For any such L the diagram just described is a Kohnert diagram. Indeed, starting with the diagram D a and r= 1, within the columns∑ r− 1 k=1 a k ,...,∑ r k=1 a k we use Kohnert moves to put the rightmost put the cells in the rightmost L r,1 columns into row 1, followed by the next rightmost L r,2 columns into row 2, etc. We repeat this procedure for r= 2 then r= 3 and so on. 
These are all valid sequences of Kohnert moves because the cells in columns∑ r− 1 k=1 a k ,...,∑ r k=1 a k start in a row strictly below any other cell to the right. 95 Thereforeφ is a bijection which allows us to conclude K D a = ∑ L∈L n row(L)=a x col(L) =h a by Lemma 4.2.2. It follows as a corollary thath a is a positive sum of key polynomials defined in Section 4.4. Definitions of peelable tableaux and Yamanouchi Kohnert diagrams can be found in [56] and [1] respectively. Corollary 4.3.5. We have h b = ∑ a ˜ K ab κ a where ˜ K ab are non-negative integers. In particular, ˜ K ab is the number of D b -peelable tableaux whose left key is a, and the number of Yamanouchi Kohnert diagrams of D b with weight a. Proof. Sinceh b is the character ofS flag D b with D b southwest, the two combinatorial interpretations for the coefficients follow immediately from [56, Thm 20] and [1, Cor 4.1.3] respectively. In Section 4.4 we provide another combinatorial interpretation for the numbers ˜ K ab that is more closely analogous to the usual model for Kostka coefficients. 4.4 Keys, Atoms, and Nonsymmetric Macdonald Polynomials Macdonald’s symmetric functions P λ (x;q,t) [43] are symmetric functions indexed by partitions λ with rational coefficients in q,t. They interpolate the Hall–Littlewood symmetric functions H λ (x;t)= P λ (x;0,t) and the Jack symmetric functions J λ,α (x)= lim t→1 P λ (x;t α ,t). They further specialize to the classical bases m λ = P λ (x;q,1), s λ = P λ (x;q,q), and e λ ′ = P λ (x;1,t) 96 regardless of the values of the free parameters. The nonsymmetric Macdonald polynomials E a (x 1 ,...,x n ;q,t) were introduced by Opdam [53] and Macdonald [44]. They recover their symmetric analogues by P λ (x 1 ,...,x n ;q,t)= ∏ u∈D(a) 1− q leg(u)+1 t arm(u)+1 ∏ u∈D(rev(λ)) 1− q leg(u) t arm(u)+1 E 0 n × a (x 1 ,...,x n ;q,t) (4.4.1) where a is any rearrangement of λ, and arm and leg are defined below [27, Cor 5.2.2]. Haglund, Haiman and Loehr gave a combinatorial formula for the monomial expansion of nonsymmetric Macdonald polynomials as follows [27]. Given a weak composition a, its key diagram is D(a)={(c,r)∈Z + × [n]| c≤ a r }. (4.4.2) A filling of a diagram is a map from the diagram to[n]. The weight wt(T) of a filling T is the weak composition defined by wt i (T)=|T − 1 (i)|. When T is a filling of a key diagram the shape sh(T) of T is the corresponding weak composition. Given a filling T :D(a)→ [n] of D(a), define the augmented filling ˆ T on ˆ D(a) =D(a)⊔ {(0,1),...,(0,n)} by ˆ T(u)= T(u) if u∈D(a) i if u=(0,i) . The cells{(0,1),...,(0,n)} are called the basement. A pair of cells attack each other if they lie in the same column or in adjacent columns with the cell on the left strictly higher than the cell on the right. A filling T is non-attacking if no pair of attacking cells have the same value in T . The arm arm(u) of a cell u∈D(a) is the number of cells below u in the same column plus the number of cells above u in the left adjacent column of ˆ D(a). The leg leg(u) of a cell u∈D(a) is the number of cells weakly to its right in the same row. Given a non-attacking filling T , the 97 major index of T , denoted by maj(T), is the sum of the legs of all cells u such that T(u) is strictly greater than the entry of the cell immediately to its left. Similarly, the co-major index, denoted by comaj(T), is the sum of the legs of all cells u such that T(u) is strictly less than the entry immediately left of u. 
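Since maj and comaj involve only the leg statistic and the entry immediately to the left (with the basement supplying the entries of column 0), they are mechanical to compute. The following Python sketch is a minimal illustration under my own conventions: a filling is stored as a dictionary keyed by (column, row) cells, and the helper names are hypothetical. The arm statistic is omitted since it does not enter maj or comaj.

```python
def key_diagram(a):
    """Key diagram D(a): cells (c, r) with 1 <= c <= a_r, written (column, row)."""
    return {(c, r) for r, part in enumerate(a, start=1) for c in range(1, part + 1)}

def augmented(filling, a):
    """Augmented filling: adjoin basement cells (0, r) with entry r for r = 1..len(a)."""
    hat = dict(filling)
    for r in range(1, len(a) + 1):
        hat[(0, r)] = r
    return hat

def leg(cell, a):
    """leg(u): the number of cells weakly to the right of u in its row of D(a)."""
    c, r = cell
    return a[r - 1] - c + 1

def maj(filling, a):
    """Sum of leg(u) over cells u whose entry exceeds the entry immediately to its left."""
    hat = augmented(filling, a)
    return sum(leg(u, a) for u in key_diagram(a) if hat[u] > hat[(u[0] - 1, u[1])])

def comaj(filling, a):
    """Sum of leg(u) over cells u whose entry is less than the entry immediately to its left."""
    hat = augmented(filling, a)
    return sum(leg(u, a) for u in key_diagram(a) if hat[u] < hat[(u[0] - 1, u[1])])

a = (2, 0, 1)
T = {(1, 1): 1, (2, 1): 1, (1, 3): 2}          # a filling of D((2, 0, 1))
print(maj(T, a), comaj(T, a))                  # prints: 0 1
```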
The conditions maj(T)= 0 and comaj(T)= 0, which will concern us, respectively say that the entries of T are weakly decreasing and increasing within rows from left to right. A triple of a non-attacking filling T ofD(a) is three cells with distinct entries of either the form Type I (r,c),(r,c+ 1),(s,c)∈D(a) with r< s and a r > a s , or Type II (r,c+ 1),(s,c),(s,c+ 1)∈D(a) with r< s and a r ≤ a s . Say that i, j,k are the entries of a triple where k is left of i in the same row. The triple is a co- inversion triple if i< j< k or j< k< i or k< i< j. An inversion triple satisfies one of the reverse inequalities. The Type I and Type II co-inversion triples are illustrated in Fig. 4.4. Let coinv(T) (resp. inv(T)) denote the number of co-inversion (resp. inversion) triples of T . j . . . ⟲ k i k i ⟳ . . . j i< j< k or j< k< i or k< i< j Figure 4.4: The positions and orientation for co-inversion triples. Remark 4.4.1. Note that our definition of inversion triples is not equivalent to the one used in [27] and elsewhere. The authors allow their inversion triples to contain repeat entries and use coinv ′ to denote the statistic we call inv. Remark 4.4.2. We will only care about co-inversion triples when the major index is zero, and inversion triples when the co-major index is zero. In these cases, the only realizable inequalities given one of the arrangements in Figure 4.4 are i< j< k and i> j> k respectively. The combinatorial identity is now as follows. 98 Theorem 4.4.3 ([27]). The nonsymmetric Macdonald polynomial is given by E a (x;q,t)= ∑ T :D(a)→[n] ˆ T non− attacking q maj( ˆ T) t coinv( ˆ T) x wt(T) ∏ u∈D(a) ˆ T(u)̸= ˆ T(left(u)) 1− t 1− q leg(u)+1 t arm(u)+1 . where left(u) is the cell immediately left of u in ˆ D(a). Additionally, E a (x;q − 1 ,t − 1 )= ∑ T :D(a)→[n] ˆ T non− attacking q comaj( ˆ T) t inv( ˆ T) x wt(T) ∏ u∈D(a) ˆ T(u)̸= ˆ T(left(u)) 1− t 1− q leg(u))+1 t arm(u)+1 . The Ferrers diagrams of a partition λ areD(λ) andD(rev(λ)). The former is known as the French convention and the latter is the English convention. We use whichever is convenient. Definition 4.4.4. A semi-standard Young tableau (SSYT) is a filling of a Ferrers diagram in the French convention such that column entries strictly increase bottom to top and row entries weakly increase left to right. The set of SSYT is denoted SSYT. This leads to the combinatorial definition of Schur polynomials: s λ (x 1 ,...,x n )= ∑ a∈N n K λa x a where K λa =|{Q∈ SSYT| sh(Q)=λ,wt(Q)= a}| are the classical Kostka coefficients . It is well-known that K λa depends only onλ and sort(a). The stable limit of a Schur polynomial as n→∞ is the Schur function s λ with notation completely analogous to h λ . The Schur polynomials and complete homogeneous symmetric polynomials are related by the expansion h µ (x 1 ,...,x n )= ∑ λ K λµ s λ (x 1 ,...,x n ) 99 summed over partitions. This is a consequence of the RSK algorithm which is reviewed in Section 4.5. The Schur polynomials are generalized by the key polynomials κ a indexed by weak compo- sitions. These were first introduced by Demazure and are also known as Demazure characters [18]. They coincide with the characters ofS flag D(a) . A simple consequence of Sanderson’s rela- tion between Macdonald polynomials and affine Demazure characters [57] is that nonsymmetric Macdonald polynomials specialize to key polynomials, which was also shown combinatorially by Assaf [2]. Theorem 4.4.5 ([57]). The key polynomialκ a (x) is given by κ a (x)= E a (x;0,0). 
(4.4.3)

A reverse partition yields the Schur polynomial:
$$s_\lambda(x_1,\ldots,x_n) = \kappa_{\mathrm{rev}(\lambda)}. \qquad (4.4.4)$$

Let $t_{i,j}$ denote the transposition exchanging $i$ and $j$ with $i<j$. For a permutation $w=w_1\cdots w_k$ let $\ell(w)$ denote the number of pairs $i<j$ such that $w_i>w_j$. Bruhat order on the symmetric group is defined to be the transitive closure of the cover relation $w\leq wt_{i,j}$ whenever $\ell(wt_{i,j})=\ell(w)+1$.

The Demazure atoms $\mathcal{A}_a(x)$ were originally studied under the name standard bases by Lascoux and Schützenberger [39]. They are a monomial positive $\mathbb{Z}$-basis for polynomials characterized by
$$\kappa_a = \sum_{b\leq a} \mathcal{A}_b$$
where $b\leq a$ means there exists a partition $\lambda$ and permutations $\sigma\leq\tau$ such that $b_i=\lambda_{\sigma(i)}$ and $a_i=\lambda_{\tau(i)}$ for all $i$. See [54] for an overview. Mason showed that the Demazure atoms are also specializations of nonsymmetric Macdonald polynomials.

Theorem 4.4.6 ([49]). The Demazure atom $\mathcal{A}_a(x)$ is given by
$$\mathcal{A}_a(x) = E_{\mathrm{rev}(a)}(x_n,x_{n-1},\ldots,x_1;\infty,\infty). \qquad (4.4.5)$$

Theorem 4.4.3 along with the specializations (4.4.3) and (4.4.5) motivate the following definitions.

Definition 4.4.7 ([2]). A semi-standard key tableau (SSKT) is a filling $T$ of a key diagram such that $\hat{T}$ is non-attacking and $\mathrm{maj}(\hat{T})=\mathrm{coinv}(\hat{T})=0$. The set of SSKT is denoted $\mathrm{SSKT}$.

Definition 4.4.8. A reverse semi-skyline augmented filling (reverse SSAF) is a filling $T$ of a key diagram such that $\hat{T}$ is non-attacking and $\mathrm{comaj}(\hat{T})=\mathrm{inv}(\hat{T})=0$. The set of reverse SSAF is denoted $\mathrm{SSAF}$.

Figure 4.5: An SSKT (left) and a reverse SSAF (right) of shape $(1,0,3,6,1,0,2)$.

The reverse SSAF are equivalent to Mason's semi-skyline augmented fillings up to reversal of the alphabet and shape [48]. What follow are some simple facts about SSKT that will find their use in later sections.

Lemma 4.4.9. If row $r$ of an SSKT is weakly shorter than row $s$ with $r<s$, then any entry in row $r$ is strictly less than the entry in the same column of row $s$.

Proof. This is true in the basement column. Suppose in some column we have entries $i<j$ in rows $r,s$ respectively. If there are entries $i',j'$ immediately right of $i,j$ respectively, then we must have $i'<j'$, else $j'<i'<j$ would be a Type II co-inversion triple. We are done by induction.

Lemma 4.4.10. Suppose row $r$ is strictly longer than row $s$ in an SSKT $S$ with $r<s$. Suppose for some column $c>1$ we have $S(c,r)<S(c,s)$. Then $S(c-1,r)<S(c-1,s)$ as well.

Proof. If not, then $S(c,r)<S(c-1,s)<S(c-1,r)$ creates a Type I co-inversion triple.

Lascoux [37] discovered the Cauchy-like identity
$$\prod_{i+j\leq n+1}(1-x_iy_j)^{-1} = \sum_{a\in\mathbb{N}^n}\mathcal{A}_{\mathrm{rev}(a)}(x)\,\kappa_a(y) \qquad (4.4.6)$$
which was generalized to classical types by Fu and Lascoux [22]. More recently, Choi and Kwon [15] gave a crystal theoretic interpretation and proof. Azenhas and Emami [9] also generalized the result to truncated staircase shapes using Mason's translation of RSK to skyline fillings [49]. Reversing the alphabet $x_1,\ldots,x_n$ and applying Theorems 4.4.5 and 4.4.6, we get (4.4.6) in the following equivalent form.

Theorem 4.4.11 ([37]). We have
$$\prod_{1\leq j\leq i\leq n}(1-x_iy_j)^{-1} = \sum_{a\in\mathbb{N}^n} E_a(x;\infty,\infty)\,E_a(y;0,0). \qquad (4.4.7)$$

In Section 4.5 we reprove this fact by describing a bijection, equivalent to RSK, between lower triangular $\mathbb{N}$-matrices and
$$\{(T,S)\in\mathrm{SSKT}\times\mathrm{SSAF} \mid \mathrm{sh}(S)=\mathrm{sh}(T)\}.$$
The section can be skipped without loss of continuity.
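The non-attacking condition of Definitions 4.4.7 and 4.4.8 is equally mechanical to test. The sketch below is mine and only partial: together with the maj and comaj computations sketched earlier it verifies the non-attacking and major-index conditions, but not the co-inversion or inversion triple conditions.

```python
from itertools import combinations

def attacks(u, v):
    """Two cells attack if they share a column, or lie in adjacent columns with the
    left cell strictly higher than the right cell.  Cells are (column, row) pairs."""
    (c1, r1), (c2, r2) = sorted((u, v))        # ensures c1 <= c2
    return c1 == c2 or (c2 == c1 + 1 and r1 > r2)

def non_attacking(hat):
    """True if no two attacking cells of the (augmented) filling share an entry."""
    return all(hat[u] != hat[v] for u, v in combinations(hat, 2) if attacks(u, v))

# With key_diagram, augmented, maj and comaj as in the earlier sketch, a filling T
# of D(a) satisfies the non-attacking and maj conditions of Definition 4.4.7 when
#     non_attacking(augmented(T, a)) and maj(T, a) == 0,
# and the analogous conditions of Definition 4.4.8 when comaj(T, a) == 0 is used
# instead; the remaining triple conditions are not checked here.
```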
The main new result for the current section is now a new interpretation for the coefficients $\tilde{K}_{ab}$.

Theorem 4.4.12. The complete flagged homogeneous polynomials expand into key polynomials by
$$h_b = \sum_a \tilde{K}_{ab}\,\kappa_a$$
where $\tilde{K}_{ab} = |\{T\in\mathrm{SSAF} \mid \mathrm{sh}(T)=a,\ \mathrm{wt}(T)=b\}|$.

Proof. We have
$$\prod_{1\leq j\leq i\leq n}(1-x_iy_j)^{-1} = \sum_{a\in\mathbb{N}^n} E_a(x;\infty,\infty)\,E_a(y;0,0) = \sum_{a\in\mathbb{N}^n} E_a(x;\infty,\infty)\,\kappa_a(y)$$
using Theorem 4.4.11 for the first equality and Theorem 4.4.5 for the second. Note that the left hand side is
$$\sum_{A\ \text{lower triangular}\ \mathbb{N}\text{-matrix}} x^{\mathrm{row}(A)}\,y^{\mathrm{col}(A)},$$
so its $x^b$ coefficient is $h_b(y)$ by Lemma 4.2.2. We know from Theorem 4.4.3 that
$$E_a(x;\infty,\infty) = \sum_{\substack{T\in\mathrm{SSAF} \\ \mathrm{sh}(T)=a}} x^{\mathrm{wt}(T)}.$$
Then using Theorem 4.4.3 to take the coefficient of $x^b$ in $\sum_{a\in\mathbb{N}^n} E_a(x;\infty,\infty)\kappa_a(y)$ we get
$$\sum_{\substack{T\in\mathrm{SSAF} \\ \mathrm{wt}(T)=b}} \kappa_{\mathrm{sh}(T)}(y) = \sum_a \tilde{K}_{ab}\,\kappa_a$$
with $\tilde{K}_{ab}$ as described.

Given this combinatorial interpretation of $\tilde{K}_{ab}$, Mason [48] showed bijectively that these numbers are related to classical Kostka coefficients by
$$K_{\lambda b} = \sum_{\mathrm{sort}(a)=\lambda} \tilde{K}_{ab}. \qquad (4.4.8)$$
This should not be too surprising, as one can also consider taking the expansion $h_b = \sum_a \tilde{K}_{ab}\kappa_a$ to its symmetric stable limit, an argument that one can make more precise by using Lemma 4.7.19 to see that the coefficients do not change when they are suitably identified under the limiting operation. The bijection used by Mason is described explicitly in Section 4.5.

Since key polynomials are a positive sum of Demazure atoms, Theorem 4.4.12 implies the same is true for $h_a$. We can directly find the coefficients using the same tools as before.

Theorem 4.4.13. We have
$$h_b = \sum_a \tilde{K}_{ab}\,\mathcal{A}_a$$
where $\tilde{K}_{ab} = |\{T\in\mathrm{SSKT} \mid \mathrm{sh}(T)=\mathrm{rev}(a),\ \mathrm{wt}(T)=\mathrm{rev}(b)\}|$.

Proof. The left hand side of (4.4.6) is unchanged by interchanging the $x$'s and $y$'s. So
$$\prod_{i+j\leq n+1}(1-x_iy_j)^{-1} = \sum_{a\in\mathbb{N}^n} \kappa_a(x)\,\mathcal{A}_{\mathrm{rev}(a)}(y).$$
Reversing the alphabet $x_1,\ldots,x_n$ and applying Theorem 4.4.5 we get
$$\prod_{1\leq j\leq i\leq n}(1-x_iy_j)^{-1} = \sum_{a\in\mathbb{N}^n} \kappa_a(x_n,\ldots,x_1)\,\mathcal{A}_{\mathrm{rev}(a)}(y_1,\ldots,y_n) = \sum_{a\in\mathbb{N}^n} E_a(x_n,\ldots,x_1;0,0)\,\mathcal{A}_{\mathrm{rev}(a)}(y_1,\ldots,y_n).$$
The coefficient of $x^b$ on the left is $h_b$ as before. Using Theorem 4.4.3, the coefficient of $x^b$ on the right is
$$\sum_{\substack{T\in\mathrm{SSKT} \\ \mathrm{wt}(T)=\mathrm{rev}(b)}} \mathcal{A}_{\mathrm{rev}(\mathrm{sh}(T))}(y_1,\ldots,y_n) = \sum_a \tilde{K}_{ab}\,\mathcal{A}_a(y_1,\ldots,y_n),$$
which is therefore equated to $h_b(y_1,\ldots,y_n)$.

Here also there is a relation to the classical Kostka coefficients. Namely,
$$K_{\lambda b} = \tilde{K}_{\lambda b} \qquad (4.4.9)$$
for a partition $\lambda$. In short, this is because $s_\lambda(x_1,\ldots,x_n)=\kappa_{\mathrm{rev}(\lambda)}$.

4.5 Flagged RSK

The celebrated RSK correspondence associates to each $\mathbb{N}$-matrix a pair of SSYT of the same shape, in such a way that the weights of the SSYT correspond to the column and row sums of the matrix. Myriad resources such as [62] and [24] give an in-depth introduction to the bijection. Mason gave an equivalent characterization of RSK using the combinatorics of semi-skyline augmented fillings [48], which was generalized by Haglund, Mason and Remmel [28]. Azenhas and Emami [9] used Mason's algorithm to prove (4.4.6) as a special case of their result, which is ultimately our application as well.

Our RSK analogue is equivalent to the restriction of the full correspondence to lower triangular matrices, and is therefore more specialized than Mason's. In our context it is consequently simpler and better suited. In comparison to Mason, who associates to each matrix a pair of semi-skyline augmented fillings whose shapes are rearrangements of each other, we put the lower triangular matrices into explicit bijection with pairs of SSKT and reverse SSAF that share a shape.
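Before turning to the insertion algorithms, note that Lemma 4.2.2 already gives the most direct way to compute $h_a$ by machine: enumerate the lower triangular $\mathbb{N}$-matrices with row sums $a$ and collect their column sums. The brute-force Python sketch below is mine and purely illustrative; it is exponential and only meant for small examples.

```python
from collections import Counter
from itertools import product

def compositions(total, parts):
    """All weak compositions of `total` into `parts` nonnegative parts."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def lower_triangular_matrices(a):
    """All N-matrices with row sums a and support in columns 1..i of each row i,
    encoded as tuples of row tuples padded with zeros to length len(a)."""
    n = len(a)
    row_choices = [[row + (0,) * (n - i) for row in compositions(a[i - 1], i)]
                   for i in range(1, n + 1)]
    yield from product(*row_choices)

def h_monomials(a):
    """Monomial expansion of h_a as a Counter of exponent vectors (Lemma 4.2.2)."""
    n = len(a)
    expansion = Counter()
    for L in lower_triangular_matrices(a):
        expansion[tuple(sum(row[j] for row in L) for j in range(n))] += 1
    return expansion

print(h_monomials((0, 2)))     # x2^2 + x1*x2 + x1^2, each with coefficient 1
```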
Just as the RSK algorithm shows how the monomial expansion of a complete homogeneous symmetric function factors through the Schur expansion, our "flagged" RSK specialization shows how the monomial expansion of $h_a$ factors through the key expansion.

Up until this point there has been an implicit dependence on a fixed value $n$, particularly in the definitions of our combinatorial objects $\mathrm{SSYT}$, $\mathrm{SSKT}$, $\mathrm{SSAF}$. In this section it will help to make the dependence explicit, so we instead use the notation $\mathrm{SSYT}_n$, $\mathrm{SSKT}_n$, $\mathrm{SSAF}_n$. There is one more combinatorial object we must introduce in order to continue.

Definition 4.5.1. A reverse semi-standard Young tableau (reverse SSYT) is a filling of a Ferrers diagram in the French convention with entries such that columns strictly decrease bottom to top and rows weakly decrease left to right. The set of reverse SSYT with entries in $[n]$ is denoted $\overline{\mathrm{SSYT}}_n$.

For any set $\mathrm{SSYT}_n$, $\overline{\mathrm{SSYT}}_n$, $\mathrm{SSAF}_n$, $\mathrm{SSKT}_n$ of tableau-like objects we may append an argument restricting the shape. For instance, $\mathrm{SSKT}_n(a)=\{S\in\mathrm{SSKT}_n \mid \mathrm{sh}(S)=a\}$.

First we review the full RSK correspondence. Given a reverse SSYT $P$ and positive integer $j$, we define $P\leftarrow j$ to be the reverse SSYT obtained by the following procedure (a small computational sketch of this insertion appears after Definition 4.5.2 below).

1. Let $r=1$.
2. Place $j$ in the leftmost position in the $r$th row from the bottom not occupied by a weakly larger entry, removing the entry $j'$ that occupies the position if necessary.
3. If an entry $j'$ was indeed removed from the row, go back to step 2, replacing the value $j$ with $j'$ and $r$ with $r+1$.

This insertion algorithm is the central component of RSK. We also need to know how to interpret an $\mathbb{N}$-matrix in $\mathcal{M}_n$ as a biword, which is a multiset in the alphabet $\big\{\binom{i}{j} \mid i,j\in[n]\big\}$. A matrix $A$ corresponds to the biword whose number of elements $\binom{i}{j}$ is equal to the $(i,j)$ entry of $A$. We represent a biword as a two-line array
$$\begin{pmatrix} i_1 & i_2 & \cdots & i_\ell \\ j_1 & j_2 & \cdots & j_\ell \end{pmatrix}$$
where the $\binom{i_k}{j_k}$ are the elements of the biword, $i_k\leq i_{k+1}$, and if $i_k=i_{k+1}$ then $j_k\geq j_{k+1}$. Let
$$\mathcal{N}_n = \{(P,Q)\in\overline{\mathrm{SSYT}}_n\times\mathrm{SSYT}_n \mid \mathrm{sh}(P)=\mathrm{sh}(Q)\}. \qquad (4.5.1)$$
Given $(P,Q)\in\mathcal{N}_n$ define
$$(P,Q)\leftarrow\tbinom{i}{j} = (P\leftarrow j,\,Q')$$
where $Q'$ is obtained from $Q$ by adding the box $\mathrm{sh}(P\leftarrow j)\setminus\mathrm{sh}(P)$ with entry $i$. Then we define
$$A \overset{\mathrm{RSK}}{\longmapsto} \Big(\cdots\big((\varnothing,\varnothing)\leftarrow\tbinom{i_1}{j_1}\big)\leftarrow\tbinom{i_2}{j_2}\cdots\Big)\leftarrow\tbinom{i_\ell}{j_\ell}$$
where $A$ is the matrix corresponding to the biword $\binom{i_1\,i_2\,\cdots\,i_\ell}{j_1\,j_2\,\cdots\,j_\ell}$. This is a bijection from $\mathcal{M}_n$ to $\mathcal{N}_n$ which differs from the usual construction only in the superficial way that we have replaced $\mathrm{SSYT}\times\mathrm{SSYT}$ with $\overline{\mathrm{SSYT}}\times\mathrm{SSYT}$. When $A\overset{\mathrm{RSK}}{\longmapsto}(P,Q)$ we call $P$ the insertion tableau and $Q$ the recording tableau. By construction, $\mathrm{wt}(P)$ and $\mathrm{wt}(Q)$ are the column and row sums respectively of the corresponding matrix. As an example, the tableaux in Figure 4.6 are the image of
$$\begin{pmatrix} 1 & 3 & 3 & 4 & 4 & 4 & 5 & 5 & 5 & 5 & 6 & 7 & 7 \\ 1 & 3 & 2 & 4 & 3 & 1 & 4 & 4 & 3 & 2 & 1 & 6 & 3 \end{pmatrix}.$$
The final insertion computation $\leftarrow 3$ is shown in Figure 4.7.

Figure 4.6: A pair of tableaux in $\mathcal{N}_n$.

Before describing the flagged RSK algorithm, let us see how its proposed image
$$\mathcal{N}^{\mathrm{flag}}_n = \{(S,T)\in\mathrm{SSKT}_n\times\mathrm{SSAF}_n \mid \mathrm{sh}(S)=\mathrm{sh}(T)\} \qquad (4.5.2)$$
embeds into $\mathcal{N}_n$.

Figure 4.7: The insertion procedure $\leftarrow 3$.

Definition 4.5.2 ([48]). The map $\rho : \mathrm{SSAF}_n\to\mathrm{SSYT}_n$ takes $T\in\mathrm{SSAF}_n$ and yields the unique SSYT $\rho(T)$ whose $i$th column consists of the same set of entries as does the $i$th column of $T$. The map is clearly weight-preserving.
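Here is the promised sketch of the insertion procedure $P\leftarrow j$ and of the map $A\mapsto(P,Q)$. It is illustrative only and mine rather than the thesis's: a reverse SSYT is stored as a list of rows from bottom to top, each a weakly decreasing list, and the biword is passed as a list of pairs already in two-line order. As with classical RSK, only the row condition is manipulated explicitly; the column condition is preserved automatically by the insertion.

```python
def insert(P, j):
    """Insert j into the reverse SSYT P following steps 1-3 above.  P is modified
    in place; the (row, column) index of the newly created box is returned."""
    r = 0
    while True:
        if r == len(P):                      # start a new row at the top
            P.append([j])
            return r, 0
        row = P[r]
        # leftmost position in row r not occupied by a weakly larger entry
        c = next((k for k, entry in enumerate(row) if entry < j), len(row))
        if c == len(row):                    # vacant position: place j and stop
            row.append(j)
            return r, c
        row[c], j = j, row[c]                # bump the displaced entry to the next row up
        r += 1

def rsk(biword):
    """Map a biword [(i_1, j_1), ..., (i_l, j_l)] to (P, Q): P the reverse-SSYT
    insertion tableau and Q the SSYT recording tableau of the same shape."""
    P, Q = [], []
    for i, j in biword:
        r, _ = insert(P, j)
        if r == len(Q):
            Q.append([])
        Q[r].append(i)                       # a new box always appears at the end of its row
    return P, Q

# The running example: the thirteen-letter biword displayed above.
biword = [(1, 1), (3, 3), (3, 2), (4, 4), (4, 3), (4, 1), (5, 4), (5, 4),
          (5, 3), (5, 2), (6, 1), (7, 6), (7, 3)]
P, Q = rsk(biword)
for row in reversed(P):                      # print the insertion tableau, top row first
    print(row)
```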
The map $\rho$ is moreover a well-defined bijection, where $\rho^{-1}(Q)$ is constructed column by column from left to right by placing each entry in the corresponding column of $Q$, from smallest to largest, into the topmost available position immediately right of a weakly lesser entry (allowing the basement column into consideration) [48]. See Figure 4.8.

Definition 4.5.3 ([3]). The map $\tau : \mathrm{SSKT}_n\to\overline{\mathrm{SSYT}}_n$ takes $S\in\mathrm{SSKT}_n$ and yields the unique reverse SSYT $\tau(S)$ whose $i$th column consists of the same set of entries as does the $i$th column of $S$. Let $\tau_a : \mathrm{SSKT}_n(a)\to\overline{\mathrm{SSYT}}_n(\mathrm{sort}(a))$ denote the restriction of $\tau$.

In contrast to $\rho$, $\tau$ is not a bijection, but $\tau_a$ is a well-defined embedding. The latter has the left inverse $\tau^{\dagger}_a$, which takes $P\in\overline{\mathrm{SSYT}}_n(\mathrm{sort}(a))$ and yields a filling $\tau^{\dagger}_a(P) : D(a)\to[n]$ defined column by column from right to left and bottom to top, at each cell selecting the smallest remaining entry in the column set that maintains the decreasing row condition [3]. The filling $\tau^{\dagger}_a(P)$ is an SSKT exactly when $P$ is in the image of $\tau_a$. Once again, see Figure 4.8.

Figure 4.8: An illustration of the maps $\tau$ and $\rho$.

Abusing notation, we let $\tau\times\rho$ refer to the actual product map restricted to $\mathcal{N}^{\mathrm{flag}}_n\to\mathcal{N}_n$. This is our embedding.

Lemma 4.5.4. The map $\tau\times\rho : \mathcal{N}^{\mathrm{flag}}_n\to\mathcal{N}_n$ is well-defined and injective, with the left inverse $(P,Q)\mapsto(\tau^{\dagger}_{\mathrm{sh}(\rho^{-1}(Q))}(P),\,\rho^{-1}(Q))$.

Proof. Let $S\in\mathrm{SSKT}_n$. The partition conjugate to $\mathrm{sort}(\mathrm{sh}(S))$ is also the partition conjugate to $\mathrm{sh}(\tau(S))$, the $i$th part of either being the number of cells in column $i$ of $S$. Thus $\mathrm{sort}(\mathrm{sh}(S))=\mathrm{sh}(\tau(S))$. In the same way, $\mathrm{sort}(\mathrm{sh}(T))=\mathrm{sh}(\rho(T))$ for $T\in\mathrm{SSAF}_n$. Then for $(S,T)\in\mathcal{N}^{\mathrm{flag}}_n$ we have $\mathrm{sh}(S)=\mathrm{sh}(T)$, which implies $\mathrm{sh}(\tau(S))=\mathrm{sh}(\rho(T))$, so indeed $(\tau\times\rho)(S,T)\in\mathcal{N}_n$ and the map is well-defined.

For $(S,T)\in\mathcal{N}^{\mathrm{flag}}_n$ let $(P,Q)=(\tau(S),\rho(T))$. We have $\mathrm{sh}(S)=\mathrm{sh}(T)$, so
$$(\tau^{\dagger}_{\mathrm{sh}(\rho^{-1}(Q))}(P),\,\rho^{-1}(Q)) = (\tau^{\dagger}_{\mathrm{sh}(\rho^{-1}(\rho(T)))}(\tau(S)),\,\rho^{-1}(\rho(T))) = (\tau^{\dagger}_{\mathrm{sh}(T)}(\tau_{\mathrm{sh}(T)}(S)),\,T) = (S,T),$$
which demonstrates the left inverse for $\tau\times\rho$.

Remark 4.5.5. Since SSYT and reverse SSYT are completely determined by their column sets, Lemma 4.5.4 says that an element of $\mathcal{N}^{\mathrm{flag}}_n$ is also completely determined by the column sets of its constituent fillings.

We now construct the flagged RSK correspondence. Given $S\in\mathrm{SSKT}_n$ and $j\in[n]$, define $S\leftharpoondown_n j$ to be the filling obtained as follows, always allowing entries in the basement column into consideration.

1. Let $c=\infty$.
2. Let $c'\leq c$ be maximal such that column $c'$ contains strictly fewer entries weakly greater than $j$ than does column $c'-1$.
3. Place the entry $j$ in the topmost position in column $c'$ not occupied by a weakly larger entry and immediately right of an entry weakly greater than $j$. Remove the entry $j'$ that occupies the position if it exists.
4. If an entry $j'$ was indeed removed, go back to step 2, replacing the value $c$ with $c'$ and $j$ with $j'$.

Remark 4.5.6. We consider $\mathrm{SSYT}_n$, $\overline{\mathrm{SSYT}}_n$, $\mathrm{SSAF}_n$, $\mathrm{SSKT}_n$, $\mathcal{N}_n$, $\mathcal{N}^{\mathrm{flag}}_n$ to all be subsets of $\mathrm{SSYT}_{n+1}$, $\overline{\mathrm{SSYT}}_{n+1}$, $\mathrm{SSAF}_{n+1}$, $\mathrm{SSKT}_{n+1}$, $\mathcal{N}_{n+1}$, $\mathcal{N}^{\mathrm{flag}}_{n+1}$ respectively, in the obvious way. In particular, we may write $S\leftharpoondown_n j$ for $S\in\mathrm{SSKT}_i$ with $i\leq n$. The subtlety obscured by the notation $\hat{S}$ is that the number of cells in the basement depends on whether we view $S$ as an element
When we write S↽ n j we are implicitly considering S as an element of SSKT n , and because of the basement this is not necessarily equivalent to S↽ i j. Given(S,T)∈N flag n and j∈[n] define (S,T)↽ n j =(S↽ n j,T ′ ) where T ′ is obtained from T by adding the cellD(sh(S↽ n j))\D(sh(S)) with entry n. Given a biword i 1 i 2 ··· i ℓ j 1 j 2 ··· j ℓ with each j k ≤ i k , i.e. corresponding to a lower triangular matrix L, we define L RSK flag 7→ ··· (/ 0, / 0)↽ i 1 j 1 ↽ i 2 j 2 ··· ↽ i ℓ j ℓ . The pair of fillings in Figure 4.5 is in fact the image of 1 3 3 4 4 4 5 5 5 5 6 7 7 1 3 2 4 3 1 4 4 3 2 1 6 3 whose final insertion computation ↽ 7 3 is shown in Figure 4.9. 7 6 5 4 3 2 1 3 ↓ 6 2 4 4 3 3 2 1 3 3 1 1 7 6 5 4 3 2 1 2 ↓ 6 2 4 4 3 3 3 1 3 3 1 1 7 6 5 4 3 2 1 1 ↓ 6 2 4 4 3 3 3 1 3 3 2 1 7 6 5 4 3 2 1 6 1 2 4 4 3 3 3 1 3 3 2 1 Figure 4.9: The insertion procedure↽ 7 3. 111 Lemma 4.5.7. If S∈ SSKT n , and j 0 ∈[n], then the operation S↽ n j 0 is well-defined and the result is in SSKT n . Proof. Suppose we are at the beginning of step (2) of the insertion S↽ n j 0 and we have reached step (4) exactly t times. Let j 0 , j 1 ,... j t be the successive values of j in the procedure so far,∞= c 0 ,c 1 ,...,c t the successive values of c, and S= S 0 ,S 1 ,...,S t the successive intermediate fillings. Assume that the procedure thus far has been well-defined, and S t ∈ SSKT n . It can be seen from the insertion definition that j 0 > j 1 >··· > j t . Our first order of business is to show that the next steps (2), (3), (4) of the insertion are well- defined. If c ′ is determined as in step (2) then step (3) is well-defined, and (4) certainly is as well. So we need only show step (2) makes sense. If t = 0 so that c=∞, a column c ′ as described in step (2) certainly exists as the basement column contains the entry j 0 . Suppose t> 0. Column c t cannot contain an entry equal to j t , since an entry j t has just been replaced by a strictly larger entry and the previous filling could only have had a single entry j t in the column due to the non-attacking condition on S t . Then there is a leftmost column b≤ c which does not contain the entry j t in row j t , and b> 0. Since the rows of S t weakly decrease left to right and there is an entry j t in row j t column b− 1, any entry in row j t column b must be strictly less than j t . Therefore column b contains strictly fewer entries weakly greater than j t than does column b− 1, so step (2) is well-defined. Now we must show that the filling S t+1 obtained by applying step (3) is an SSKT. maj( ˆ S t+1 )= 0 The entry immediately left of the insertion position is weakly greater than j t , and if an entry exists immediately to the right of the insertion position then it is weakly less than the entry j t+1 < j t that j t replaces. Therefore maj( ˆ S t+1 ) remains zero. ˆ S t+1 is non-attacking We have seen above that column c t of S t does not contain an entry j t . Then none of the columns c ′ ,c ′ + 1,...,c t can contain an entry j t without contradicting the choice of c ′ . Therefore no two cells share an entry in column c ′ of S t+1 . 112 If there is an entry j t in column c ′ − 1 strictly above the row of the newly inserted entry j t , then the position immediately right of the former entry cannot contain another entry j t , but this contradicts the insertion position. Therefore ˆ S t+1 remains non-attacking. 
ˆ S t+1 contains no Type I co-inversion triples Because maj( ˆ S t+1 )= 0, recall that the only possi- ble Type I co-inversion triples are of the form u< v< w with u,w in the strictly longer row r below the row s containing v (cf. i< j< k in Fig. 4.4). If j t takes the role of u, then there must be an entry v ′ > j t immediately right of v to justify the insertion position. Then there must be an entry immediately right of j t since the row is strictly longer than row s. This means that in ˆ S t there must have been an entry j t+1 < j t occupying the position of j t . Then j t+1 < v< w is a co-inversion triple in ˆ S t which is a contradiction. If j t takes the role of w, then we must have that the previous entry j t+1 < j t in the position satisfies j t+1 < v. To be consistent with the choice of insertion position, the entry v ′ im- mediately left of v satisfies v ′ < j t . The entry w ′ immediately left of the insertion position satisfies w ′ ≥ j t > v ′ ≥ v> j t+1 . In particular j t+1 < v ′ < w ′ is a co-inversion triple in ˆ S t , a contradiction. Suppose j t takes the role of v. Since u< j t < w with u in column c ′ +1, we must have c t = c ′ and t > 0 by choice of c ′ . Now we know that j t lies in column c t = c ′ of ˆ S t− 1 . It must lie weakly below row r, else j t < w contradicts Lemma 4.4.9, or we again have a co-inversion triple u< j t < w. Say that v ′ is the entry in (c t − 1,s), i.e. immediately left of j t in ˆ S t+1 and j t+1 in ˆ S t− 1 . In ˆ S t− 1 the row containing j t strictly below row s must be strictly longer than row s, else j t+1 < j t < v ′ is a Type II co-inversion triple. Since j t− 1 is inserted into the position occupied by j t in ˆ S t− 1 , and j t− 1 > j t+1 , we must have v ′ < j t− 1 ≤ p to justify this insertion position. Then j t < v ′ < p is a co-inversion triple in ˆ S t− 1 which is impossible. Finally, we must consider that in obtaining ˆ S t+1 we may have changed the relative length of rows and to create new Type I triples that do not contain the newly inserted entry. That is, 113 suppose row r,s are the same length in ˆ S t , but row r is strictly longer in ˆ S t+1 . By Lemma 4.4.9 every entry in row s is strictly greater than the entry in the same column of row r in ˆ S t . This carries over to ˆ S t+1 which means none of the new Type I triples can be co-inversion triples. ˆ S t+1 contains no Type II co-inversion triples Since maj( ˆ S t+1 )= 0, the only Type II co-inversion triples u< v< w have u,w in the weakly longer row s above the row r containing v. First we consider the special case where we have such a co-inversion triple in a pair of rows whose relative lengths have been changed by the insertion of j t . This is to say, in ˆ S t+1 the entry j t lies in the insertion position(c ′ ,s), and some entry q of the cell(c ′ ,r) is the rightmost entry in row r of both ˆ S t and ˆ S t+1 . Let q ′ be the entry immediately left of q, and p the entry of(c ′ − 1,s). If q< j t then q< p which means p> q ′ to avoid a Type I co-inversion triple in ˆ S t . In this case, by Lemma 4.4.10, every entry of row s is strictly greater than the entry in row r of the same row, in both ˆ S t and ˆ S t+1 . This prevents any co-inversion triples in these rows. If instead q> j t , recalling the position immediately right of q is empty, by choice of insertion column c ′ we must have that c ′ = c t and t > 0. Then in ˆ S t− 1 we must have the entry j t in column c t , necessarily weakly below row r by Lemma 4.4.9. 
Let h be the entry immediately left of j t in ˆ S t− 1 . We must have p< h since j t− 1 gets inserted immediately right of h rather than p. This leads to the Type I co-inversion triple j t < p< h in ˆ S t− 1 which is a contradiction. Now we consider the Type II co-inversion triples that contain the inserted entry j t . If j t takes the role of u, the same position must be empty in ˆ S t else it would be occupied by some j t+1 < j t leading to the co-inversion triple j t+1 < v< w. This puts us back into the above special case which has already been discounted. We cannot have j t take the role of v as this would contradict the choice of insertion position. If j t takes the role of w then it must have replaced an entry j t+1 in ˆ S t . So the row s is weakly longer than row r in ˆ S t as well. By Lemma 4.4.9 we then have u> v, a contradiction. 114 We are done by induction on t. We now see that the flagged insertion algorithm is a special case of the usual insertion algorithm for RSK. Proposition 4.5.8. For j 0 ∈[n] the diagram SSKT n SSKT n SSYT n SSYT n ↽ n j 0 τ τ ← j 0 commutes. Proof. Consider step (2) of the insertion algorithmτ(S)← j 0 . We have some value of j we must insert, some row value r, and also some column c from which the entry j was just removed (taking c=∞ if j= j 0 ). The entry j is to be inserted in the leftmost position in the rth row not occupied by a weakly larger entry, say in column c ′ . Before applying step (2) of the algorithm, column c ′ then contains exactly r− 1 entries weakly greater than j since the column entries are sorted. If c ′ > 1 then the entry in row r column c ′ − 1 is weakly greater than j by choice of insertion position. Then if c ′ > 1, column c ′ contains strictly fewer entries strictly greater than j than does column c ′ − 1. In fact c ′ is the maximal column index c ′ ≤ c with this property as it is exactly the lowest r− 1 entries in columns c ′ + 1,c ′ + 2,...,c that are weakly greater than j. This is to say we can restate the algorithmτ(S)← j 0 in terms of column sets as follows. 1. Set j= j 0 and c=∞. 2. Let c ′ be the rightmost column c ′ ≤ c such that either c ′ = 1 or column c ′ contains strictly fewer entries strictly greater than j than does column c ′ − 1. Let j ′ be the largest entry j ′ < j such that columns c ′ − 1 and c ′ contain the same number of entries weakly greater than j ′ , if it exists. 3. Place j in column c ′ and remove j ′ if it exists. 4. Go back to step (2) setting j= j ′ and c= c ′ . 115 Note that if j ′ did not satisfy the condition we place on it, then it would be reinserted into the same column, a step that this characterization freely ignores. If we show that the column sets of S↽ n j 0 are characterized in this same way then we are done by definition of τ. Now considering step (3) of the insertion S↽ n j 0 , if there are strictly fewer entries weakly greater than j ′ in column c ′ than in column c ′ − 1, then this is still true after replacing j ′ by j. Then j ′ will be inserted into the same column c ′ , which is ignorable if we care only about column sets. Suppose there is an entry k in column c ′ , maximal such that k< j and both columns c ′ and c ′ − 1 contain the same number of entries weakly greater than k. If j is to be inserted immediately right of the entry u≥ j> k then by choice of k there exists j ′ immediately right of u with j ′ ≥ k. If j ′ > k then j ′ will be inserted back into the same columns as above. We conclude as follows that the column set characterization above holds for S↽ n j 0 . 
If there is no entry k as described then we may terminate they algorithm as any additional steps will see us merely adding and removing entries from the same fixed column. If there is such an entry k then the algorithm for S↽ n j 0 will eventually remove k from the column, and any intermediate steps are ignorable in terms of column sets. The entry k is determined exactly the same way as j ′ in the column set characterization forτ(S)← j 0 , so we have found that the same algorithm computes the column sets of S↽ n j 0 . Remark 4.5.9. Notice that the operation 3 2 1 2 1 ↽ 3 3 ! = 3 2 1 3 2 1 requires only one iteration of its defining algorithm whereas the equivalent insertion 1 2 ← 3 = 1 2 3 requires more replacements to be made: 2 by 3 and 1 by 2. In fact it is true in general that the classical insertion algorithm requires at least as many applications of its core loop as does the 116 equivalent flagged operation. Consider that when an entry j is added to a column set by classical insertion, each existing entry in the column less than j and greater than the entry j ′ that is ultimately removed from the column (if it exists) must have its position shifted up. Meanwhile the flagged insertion algorithm shuffles some subset of these entries. We are ready to show the flagged RSK algorithm is a restriction of the classical algorithm. Theorem 4.5.10. The map RSK flag :L n →N flag n is well-defined and the diagram L n N flag n M n N n RSK flag τ× ρ RSK commutes. Proof. Assume that for any biword of length at mostℓ, its image under the flagged RSK algorithm is inN flag n and the maps commute. Take a biword i 1 ··· i ℓ+1 j 1 ··· j ℓ+1 with each j k ≤ i k . Let (S,T) and (P,Q) be the respective images of i 1 ··· i ℓ j 1 ··· j ℓ under RSK flag and RSK, so that (P,Q)=(τ(S),ρ(T)) by assumption. Let (S ′ ,T ′ )= RSK flag i 1 ··· i ℓ+1 j 1 ··· j ℓ+1 =(S,T)↽ i ℓ+1 j ℓ+1 and (P ′ ,Q ′ )= RSK i 1 ··· i ℓ+1 j 1 ··· j ℓ+1 =(P,Q)← i ℓ+1 j ℓ+1 . By Proposition 4.5.8 we knowτ(S ′ )= P ′ . Then, the column c ofD(sh(S ′ ))\D(sh(S)) is also the column of sh(P ′ )\ sh(P). Therefore both T ′ and Q ′ are obtained by adding the entry i ℓ+1 to column c of T and Q respectively. Since Q=τ(T) the column sets of T ′ and Q ′ still match and the only question is whether T ′ is in SSAF n . We will showρ − 1 (Q ′ )= T ′ to prove this is the case. 117 Note that for a reverse SSYT R and p≥ q, the box added from R← p is in a column strictly left of the second box added from(R← p)← q. This is because we can inductively assume the first entry inserted into a row r is weakly greater than the second entry inserted into row r if it exists, and consequently the first entry removed from the row is strictly left of and weakly greater than the second removed entry, if it exists. In particular, this fact implies any entry of Q ′ strictly right of column c is strictly less than i ℓ+1 . The constructions of ρ − 1 (Q ′ ) and ρ − 1 (Q) are identical in the first c− 1 columns where the column sets are identical. In the construction ofρ − 1 (Q ′ ), the entry i ℓ+1 is the last to be added to column c, so the positions of all other entries of the column coincide in both reverse SSAF. Since all entries strictly right of column c are strictly less than i ℓ+1 , the rest of theρ − 1 (Q ′ ) andρ − 1 (Q) construction proceed identically. Therefore,ρ − 1 (Q ′ ) can be obtained from T =ρ − 1 (Q) by placing the entry i ℓ+1 in the topmost empty position of ˆ T immediately right of an occupied cell (treating T as an element of SSAF i ℓ+1 ). 
By Lemma 4.4.9 the entries of column c− 1 of ˆ S immediately left of an empty position must be increasing from bottom to top. It follows that the cellD(sh(S ′ ))\D(sh(S)) is also the topmost empty position of ˆ S immediately right of an occupied cell (treating S and an element of SSAF i ℓ+1 ). Thereforeρ − 1 (Q ′ )= T ′ which shows T ′ ∈ SSAF n . We are done by induction onℓ. We can now show that the flagged RSK algorithm is indeed a bijection as with the classical algorithm. Theorem 4.5.11. The map RSK flag :L n →N flag n is a bijection. Moreover, for L∈L n and(S,T)= RSK flag (L), the column and row sums of L respectively correspond to wt(S) and wt(T). Proof. The second assertion is by construction. Since RSK :M n →N flag n is a bijection, it is also immediate from Theorem 4.5.10 that RSK flag is injective. Let F :M n →L 2n be the map i 1 ··· i ℓ j 1 ··· j ℓ 7→ i 1 + n ··· i ℓ + n j 1 ··· j ℓ 118 to see we have the following commutative diagram L n M n L 2n M 2n N flag n N n N flag 2n N 2n RSK flag RSK F RSK flag RSK τ× ρ τ× ρ where the mapN n →N flag 2n is RSK flag ◦ F◦ RSK − 1 . Let G :N n →N 2n be the map that adds n to each entry of the recording tableau. From defi- nitions we have RSK◦ F = G◦ RSK so G is contextualized by the commutative diagram. Also let (τ× ρ) † be the left inverse forτ× ρ from Lemma 4.5.4. For(S,T)∈N flag n , say we increment the entries of T by n and shift each entry in both fillings up by n rows. Call the result (S ↑ ,T ↑ ). Each entry in the first column of T ↑ is still in the row matching its entry, so T ↑ ∈ SSAF 2n . The only non-trivial property to check to ensure S ↑ ∈ SSKT 2n is that there are no Type II co-inversion triples involving the basement column, but this follows from Lemma 4.4.9. Notice that(S ↑ ,T ↑ ) have the same column sets as (τ× ρ) † ◦ G◦ (τ× ρ)(S,T) and so must be the object we get by following the diagram fromN flag n toN flag 2n . Now instead take a biword W = i 1 ··· i ℓ j 1 ··· j ℓ not associated with a lower triangular ma- trix, so i t < j t for some minimal t. When we insert j t into the SSKT during the operation RSK flag (F(W)) it will be the largest entry thus far and therefore be placed in the first column of row i t +n. Letting(S ′ ,T ′ )= RSK flag (F(W)) this implies that the S ′ (1,i t +n)≥ j t . Then there is no S∈ SSKT n for which S ′ = S ↑ , as that would require ˆ S(1,i t )≥ j t > i t = ˆ S(0,i t ) hence maj( ˆ S)> 0. The image ofτ× ρ :N flag n →N n is therefore contained in, and equal to, the image of RSK| L n : L n →N n . We conclude RSK flag is a bijection. 119 Remark 4.5.12. While RSK flag is perhaps best thought of as a restriction of RSK as suggested by Theorem 4.5.10, the proof of Theorem 4.5.11 is interesting in that it implies RSK flag is actually equivalent to RSK. Indeed, we have RSK= G − 1 ◦ (τ× ρ)◦ RSK flag ◦ F (abusing notation so that G − 1 is really just the obvious left inverse for G). Finally, our motivating application for this bijection was a new proof of Theorem 4.4.11. Proof of Theorem 4.4.11. We can see ∏ 1≤ j≤ i≤ n (1− x i y j ) − 1 = ∑ L∈L n x row(L) y col(L) . By Theorem 4.4.3 we also have ∑ a∈N n E a (X;∞,∞)E a (Y ;0,0)= ∑ (S,T)∈N flag n x wt(T) y wt(S) . The right hand sides are equal by Theorem 4.5.11. 4.6 Schubert Expansions Another generalization of the Schur polynomials are the Schubert polynomialsS w [40]. They are indexed by permutations w∈ S ∞ that fix all but finitely many positive integers and represent Schu- bert classes in the cohomology of the complete flag variety. 
A permutation $v$ is a $k$-grassmannian if $v(i)<v(i+1)$ whenever $i\neq k$. The $k$-grassmannians correspond to partitions $\lambda$ of at most $k$ parts by taking $\lambda_{k+1-i}=v(i)-i$, and we write $v=v(\lambda,k)$. We recover the Schur polynomials as
$$s_\lambda(x_1,\ldots,x_k)=\mathfrak{S}_{v(\lambda,k)}.$$
One can find a discussion of Schubert polynomials in [24]. Using a Pieri rule we will see that the complete flagged homogeneous polynomials are a positive sum of Schubert polynomials.

The $k$-Bruhat order defined by Bergeron and Sottile [12] is the transitive closure of the cover relation $w\leq_k wt_{i,j}$ whenever $\ell(wt_{i,j})=\ell(w)+1$ and $i\leq k<j$. The following Pieri rule was conjectured by Bergeron and Billey [11], before being proved geometrically by Sottile [60] and combinatorially by Kogan and Kumar [33].

Theorem 4.6.1 ([60]). For $u$ a permutation and $m,k$ positive integers, we have
$$\mathfrak{S}_u\cdot\mathfrak{S}_{v((m),k)} = \sum_{\substack{u\leq_k w \\ w=ut_{i_1,j_1}\cdots t_{i_m,j_m} \\ j_1,\ldots,j_m\ \text{distinct}}} \mathfrak{S}_w.$$

We saw the $h$-basis does not contain the complete homogeneous symmetric polynomials and so is not truly a multiplicative basis. Its structure constants can even be negative. For instance,
$$h_{(0,1)}^2 = h_{(0,2)} + h_{(1,1)} - h_{(2,0)}.$$
Interestingly, it follows from Theorem 4.6.1 that the product of $h$'s remains Schubert positive.

Corollary 4.6.2. Given compositions $a$ and $b$, we have
$$h_a h_b = \sum_w c^w_{a,b}\,\mathfrak{S}_w$$
where the $c^w_{a,b}$ are nonnegative integers.

Proof. For a weak composition $0^k\times(m)$ with at most one nonzero part, we have
$$h_{0^k\times(m)} = s_{(m)}(x_1,\ldots,x_k) = \mathfrak{S}_{v((m),k)}.$$
Any product of complete flagged homogeneous polynomials is therefore a product of such Schubert polynomials, which has a non-negative Schubert expansion by Theorem 4.6.1.

We will explicitly describe the Schubert expansion of a single $h_a$ polynomial. To reduce notation, we say $w/u$ is a horizontal $m$-strip in $k$-Bruhat order if $u\leq_k w$ and there exists a saturated chain in $k$-Bruhat order of length $m$, say exemplified by $w=ut_{i_1,j_1}\cdots t_{i_m,j_m}$, where $j_1,\ldots,j_m$ are all distinct.

Theorem 4.6.3. For a weak composition $b=(b_1,\ldots,b_m)$, we have
$$h_b = \sum_w C_{w,b}\,\mathfrak{S}_w,$$
where $C_{w,b}$ is the number of sequences $\mathrm{id}=w^{(0)},\ldots,w^{(m)}=w$ such that each $w^{(k)}/w^{(k-1)}$ is a horizontal $(b_k)$-strip in $k$-Bruhat order.

Proof. We proceed by induction on $m$. For $m=0$ (i.e. $b$ has no nonzero parts) we have $h_b=1=s_{\varnothing}=\mathfrak{S}_{\mathrm{id}}$. The only sequence enumerated by a $C_{w,b}$ is $\mathrm{id}=w^{(0)}$, so $C_{\mathrm{id},b}=1$ is the only nonzero value. This proves the base case. Now take $m>0$ and assume the result holds for weak compositions of length $m-1$. We have
$$h_b = h_{(b_1,\ldots,b_{m-1})}\,s_{(b_m)}(x_1,\ldots,x_m) = \sum_w C_{w,(b_1,\ldots,b_{m-1})}\,\mathfrak{S}_w\,\mathfrak{S}_{v((b_m),m)},$$
and applying Theorem 4.6.1 yields the result by induction.

4.7 The Inverse Kostka Analogue

Eğecioğlu and Remmel found a signed combinatorial formula for the entries of the classical inverse Kostka matrix [19]. We generalize their notion of a rim hook tabloid from the context of Ferrers diagrams to key diagrams in order to analogously expand the key polynomials into the basis of complete flagged homogeneous polynomials. First let us reconstruct the definition of a rim hook.

Young's lattice is the set of partitions with length at most $n$, partially ordered by the relation $\lambda\preceq_Y\mu$ whenever $\lambda_i\leq\mu_i$ for all $1\leq i\leq n$. We will say two cells are connected if they are either vertically or horizontally adjacent. Taking the transitive closure of the "connected" relation decomposes a diagram into equivalence classes, which we call connected components.
We say the diagram is connected if there is a unique connected component. Definition 4.7.1. A rim hook R of a partitionµ is a subset of the Ferrers diagramD(rev(µ)) such that 1. R is connected, 2. D(rev(µ))\ R=D(rev(λ)) for someλ⪯ Y µ, and 3. R does not contain a quadruple of cells(c,r),(c+ 1,r),(c,r+ 1),(c+ 1,r+ 1). Figure 4.10: A rim hook ofλ =(8,7,6,5,3,2). Now the replacement for Young’s lattice on weak compositions is the following partial order. Definition 4.7.2 ([6]). The key poset on weak compositions of length at most n is the partial order defined by a⪯ b whenever (i) a i ≤ b i for all 1≤ i≤ n, and (ii) if a i > a j for some 1≤ i< j≤ n then b i > b j . 123 As an example (0,6,0,1,2,8,4)⪯ (3,7,0,2,5,8,6) but (0,6,0,1,5,8,2)̸⪯ (3,7,0,2,5,8,6). We say that two cells are weakly connected if they share a column or lie in adjacent columns with the cell on the left in a weakly higher row (cf. attacking cells). As before, this generates an equivalence relation on the cells in a diagram, whose classes we call weakly connected compo- nents, and we say the diagram is weakly connected if there is a unique weakly connected compo- nent. An example of the concept is illustrated in Figure 4.11. 3 2 3 1 1 3 3 3 1 Figure 4.11: A diagram with enumerated weakly connected components. Rim hooks are now generalized by the following definition. Definition 4.7.3. A snake S of a weak composition b is a subset of the key diagramD(b) such that 1. S is weakly connected, 2. D(b)\ S=D(a) for some weak composition a⪯ b, and 3. S does not contain a triple of cells(c,s),(c+ 1,s),(c+ 1,r) with r< s. The next result shows that the generalization is a natural one. Proposition 4.7.4. Letµ be a partition. The snakes of rev(µ) are exactly the rim hooks ofµ. Proof. Suppose λ ⪯ Y µ for a partition λ. Then rev(λ)⪯ rev(µ) as (ii) in Definition 4.7.2 is vacuously satisfied. Conversely, if b⪯ rev(µ) then the fact that the parts of rev(µ) are weakly 124 Figure 4.12: A snake of b=(3,7,0,2,5,8,6). increasing implies the same is true for b. Thus b= rev(λ) for a partitionλ. That is,λ 7→ rev(λ) is an injective morphism of posets and its image is closed under “going down” in the poset. Now say that R=D(rev(µ))\D(rev(λ)) is a rim hook ofµ withλ⪯ Y µ. A connected pair of cells is also weakly connected, so R is weakly connected. We have also seen that rev(λ)⪯ rev(µ). Suppose R contains a triple(c,s),(c+1,s),(c+1,r) of cells with r< s. Then(c,s),(c+1,s)/ ∈ D(rev(λ)) which implies (c,s− 1),(c+ 1,s− 1) / ∈D(rev(λ)) since the parts of rev(λ) weakly increase. However, (c,s− 1),(c+ 1,s− 1)∈D(rev(µ)) since rev(µ) s ≥ rev(µ) r ≥ c+ 1. In particular(c,s),(c+ 1,s),(c,s− 1),(c+ 1,s− 1) all belong to R which contradicts that R is a rim hook. We must conclude that R is a snake of rev(µ). On the other hand, suppose S is a snake of rev(µ). We have seen that (2) in Definition 4.7.3 implies S=D(rev(µ))\D(rev(λ)) for a partition λ ⪯ Y µ. Condition (3) in Definition 4.7.3 is actually stronger than (3) in Definition 4.7.1 so the latter is satisfied. It remains to argue S is connected. Say u,v∈ S form a weakly connected pair. Suppose first that they lie in the same column with u below v. Since the parts of rev(µ) weakly increase and u∈D(rev(µ)) we have that every cell above u in the same column is inD(rev(µ)). Similarly, since v / ∈D(rev(λ)) every cell below v in the same column is not inD(rev(λ)). Then there is a connected vertical strip of cells in S between u and v putting them in the same connected component. 
Now suppose u,v lie in adjacent columns, say with v left of, and weakly above u. Take the cell w immediately left of u. As before, v / ∈D(rev(λ)) implies w / ∈D(rev(λ)). So w∈ S, (w,u) is connected, and w and v share a connected component by the above argument. It follows that u and 125 v lie in the same component. Therefore the unique weakly connected component of S is also the unique connected component of S, so S is a rim hook. A snake is special if it is empty or contains the lowest cell in column 1 of the key diagram. The objects that index our expansion into theh basis are the special snake tabloids. Definition 4.7.5. A special snake tabloid(S 1 ,...,S n ) of shape b with at most n parts is a decom- positionD(b)= S 1 ⊔···⊔ S n such that each S i is a special snake ofD(b)\(S 1 ⊔···⊔ S i− 1 ), with S i = / 0 if and only if row i of this subdiagram is empty. The weight of a special snake tabloid U =(S 1 ,...,S n ) is the weak composition wt(U)=(|S 1 |,...,|S n |). (4.7.1) The height of a snake S, denoted ht(S), is the number of rows in which S contains a cell, except where S= / 0 in which case the height is 1. The sign of a snake is sign(S)=(− 1) ht(S)− 1 , (4.7.2) and the sign of a special snake tabloid is the product of signs of its snakes. 4 4 4 4 1 1 6 6 6 6 4 2 2 2 4 2 1 1 1 4 1 2 2 2 2 2 2 1 1 1 1 7 7 7 4 2 2 6 4 4 4 4 4 4 4 5 4 1 1 1 4 4 2 2 2 2 2 1 1 1 1 1 Figure 4.13: Two special snake tabloids of shape (3,7,0,2,5,8,6), with respective weights (10,10,0,7,0,4,0) and(8,7,0,11,1,1,3) and signs− 1 and+1. We can now state our theorem. 126 Theorem 4.7.6. For a weak composition b we have κ b = ∑ a∈N n ˜ K − 1 ab h a (4.7.3) where ˜ K − 1 ab = ∑ U sign(U) summed over special snake tabloids U of weight a and shape b. The notation ˜ K − 1 ab reflects that these coefficients, and the coefficients ˜ K ab define inverse transi- tion matrices. It is interesting to compare Theorem 4.7.6 with E˘ gecio˘ glu and Remmel’s result. In light of Proposition 4.7.4 it can be stated as follows. Theorem 4.7.7 ([19]). For a partitionµ we have s µ (x 1 ,...,x n )= ∑ λ K − 1 λµ h λ (x 1 ,...,x n ) (4.7.4) summed over partitions, and with K − 1 λµ = ∑ U sign(U) summed over special snake tabloids U of weightλ and shapeµ. Applying Theorem 4.7.6 to κ rev(µ) = s µ (x 1 ,...,x n ) we see that the same objects index the expansion of s µ into both bases{h λ } and{h a }. This is in spite of the fact that the former basis is not contained in the latter, but consistent with Proposition 4.2.3. There is no known cancellation-free formula for the inverse Kostka coefficients K − 1 λµ . Theorem 4.7.6 is not cancellation-free either as seen by Figure 4.14, however it is cancellation free in the special case of Schur polynomials. 127 2 1 1 2 2 2 1 1 1 2 2 2 2 1 1 1 1 1 Figure 4.14: Special snake tabloids with the same weight and shape but opposite signs. Theorem 4.7.8. For a partitionµ, the formula ˜ K − 1 arev(µ) = ∑ U sign(U) (4.7.5) summed over special snake tabloids U of weight a and shape rev(µ) is cancellation-free, i.e. does not contain both positive and negative terms. Proof. For a partition λ =(λ 1 ,...,λ ℓ ,0,0,...) we claim that a special snake S of rev(λ) is com- pletely determined byλ and the size of S. Since S is a rim hook by Proposition 4.7.4 we work with Definition 4.7.1. If |S|= 0 the claim is obvious so we may assume(1,n+ 1− k)∈ S. SayD(rev(µ))=D(rev(λ))\S withµ⪯ Y λ. Suppose(r,c)∈ S with c< rev(λ) r− 1 . We have rev(µ) r < c which implies rev(µ) r− 1 < c. 
Then(r,c),(r,c+ 1),(r− 1,c),(r− 1,c+ 1)∈ S which contradicts Definition 4.7.1. So we must have c≥ rev(λ) r− 1 for any(r,c)∈ S with r> 1. Let V ={(r,c)∈D(rev(λ))| r= 1 or c≥ rev(λ) r− 1 } and take a cell (r,c)∈ V . If (r,c+ 1)∈ V then c< rev(λ) r which implies (r+ 1,c) / ∈ V . If (r− 1,c)∈ V then c≤ rev(λ) r− 1 implies(r,c− 1)/ ∈ V . Then taking U to be the connected compo- nent of V containing(1,n+ 1− ℓ) we can write U ={u 1 ,u 2 ,...,u m } where each pair(u i ,u i+1 ) is connected with u i+1 to the right or above u i . Note that if m> 2 then the row and column differences of u 1 and u m sum to m and therefore(u 1 ,u m ) is not connected. It must be the case that u 1 =(1,n+ 1− ℓ) since this is the leftmost cell in the lowest row of D(rev(λ)). Now S is a connected subset of U containing u 1 which can be nothing but S= {u 1 ,u 2 ,...,u k } for k the cardinality of S. This proves the claim that S is determined by its size and λ. 128 Given a special snake tabloid U=(S 1 ,...,S n ) of shape rev(µ) we know by induction that each S i is a rim hook of a Ferrers diagramD(rev(λ))\(S 1 ⊔···⊔ S i− 1 ). It therefore follows from the claim that U is completely determined by µ and wt(U). Not only is the sum (4.7.5) cancellation- free: it has at most one term. From Theorem 4.7.8 we then see that the entirety of cancellation that occurs in Theorem 4.7.7 is explained as a result of distinct complete flagged homogeneous polynomials stabilizing to the same complete homogeneous symmetric function. For instance, E˘ gecio˘ glu and Remmel give the tabloids in Figure 4.15 as an example of objects whose terms cancel in equation (4.7.4) [19]. This occurs because their sorted weights yield the same partition. However the weights are necessarily distinct as weak compositions so the terms do not cancel in equation (4.7.3); just their stable limits do. 4 4 2 2 2 1 4 3 3 3 1 1 Figure 4.15: Special snake tabloids whose terms cancel in equation (4.7.4) but not in equation (4.7.3). Further applications for snakes are outside the scope of this paper, however it is worth consid- ering that these objects could extend definitions and results that traditionally rely on rim hooks. For instance, the Murnaghan-Nakayama rule can be used to write the power sum symmetric poly- nomials in terms of rim hooks and the Schur basis [51, 52]. Replacing rim hooks with snakes and Schur polynomials with key polynomials defines what may be an interesting power sum analogue. Additionally, the polynomials of Lascoux, Leclerc, and Thibon are defined in terms of rim hooks and may also be candidates for generalization [38]. The remainder of the section builds up to the proof of Theorem 4.7.6. Given a snake S of b, our next lemmas require the set G(S)={T :D(b)→[n]| T| D(b)\S ∈ SSKT and T| S = 1}. (4.7.6) 129 Notice that the generating function of this set can be written ∑ T∈G(S) x wt(T) = x |S| 1 κ sh(D(b)\S) where sh(D(b)\ S) is the weak composition indexing the key diagramD(b)\ S 7 7 3 1 1 1 6 5 4 4 3 2 2 2 4 3 1 1 1 1 1 2 2 2 2 2 1 1 1 1 1 Figure 4.16: A filling T∈G(S) with S highlighted. Lemma 4.7.9. Let S be a snake ofD(b). If T∈G(S) then maj( ˆ T)= coinv( ˆ T)= 0. Proof. Since T| S is identically 1 and T| D(b)\S is an SSKT we must have maj( ˆ T)= 0. SayD(a)= S\D(b) so that a⪯ b. Take rows r< s such that b r ≤ b s . It follows that a r ≤ a s as well. Then any co-inversion triple of ˆ T within these rows must have an entry in S, else it would also be a co-inversion triple in ˆ T| D(a) . 
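The snake conditions are also easy to test by machine. The Python sketch below is my own illustration of Definitions 4.7.2 and 4.7.3 (function names and the (column, row) encoding of cells are not the thesis's); it checks weak connectivity, that the complement is a key diagram for a composition below $b$ in the key poset, and the absence of the forbidden triple.

```python
def key_diagram(b):
    """Key diagram D(b): cells (c, r) with 1 <= c <= b_r."""
    return {(c, r) for r, part in enumerate(b, start=1) for c in range(1, part + 1)}

def key_poset_leq(a, b):
    """a is below b in the key poset of Definition 4.7.2."""
    n = len(b)
    if any(a[i] > b[i] for i in range(n)):
        return False
    return all(b[i] > b[j] for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

def weakly_connected(cells):
    """True if `cells` forms a single weakly connected component (or is empty)."""
    cells = set(cells)
    if not cells:
        return True
    def neighbours(u):
        c, r = u
        return [(d, s) for (d, s) in cells if (d, s) != u
                and (d == c                              # same column
                     or (d == c + 1 and r >= s)          # u on the left, weakly higher
                     or (c == d + 1 and s >= r))]        # (d, s) on the left, weakly higher
    seen, stack = set(), [next(iter(cells))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(neighbours(u))
    return seen == cells

def is_snake(S, b):
    """Check conditions (1)-(3) of Definition 4.7.3 for S inside D(b)."""
    S, D = set(S), key_diagram(b)
    if not S <= D or not weakly_connected(S):
        return False
    rest = D - S
    a = tuple(sum(1 for (_, r) in rest if r == i) for i in range(1, len(b) + 1))
    if rest != key_diagram(a) or not key_poset_leq(a, b):
        return False
    return not any((c + 1, s) in S and (c + 1, r) in S
                   for (c, s) in S for r in range(1, s))

print(is_snake({(1, 1), (2, 1)}, (2, 2)))    # the full first row of D((2, 2)): True
```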
However, a r ≤ a s implies that any cell in row s of S does not share a column with a cell in row r ofD(a). So any (necessarily Type II) triple of T whose top right cell is in S must also have its bottom cell in S. The triple then contains two entries 1 and is not a co-inversion triple. Now take rows r< s with b r > b s . If a r ≤ a s then by Lemma 4.4.9 we have T(c,r)< T(c,s) whenever c≤ a r . Since T(c,r)= 1 for all a r < c≤ b r , it follows that T(c,r)≤ T(c,s) for all c≤ b s . In this case there are no co-inversion triples of T within these rows. If instead a r > a s , then any (Type I) co-inversion triple of T within these rows must have entries 1< i< j with the entry 1 in S. The premise a r > a s then further implies that the cell of i is in S, so i= 1 which is a contradiction. Definition 4.7.10. Given a snake S ofD(b) and T ∈G(S), we say an ordered pair of cells x,y∈ D(b) is an S-attack if 130 (a) x and y are attacking cells with y either above x or in the column to its right, (b) x∈ S, (c) both cells have entry 1, and (d) if x and y lie in distinct columns then S does not contain the cell immediately left of y. Condition (d) is a bit of a technical point, but an S-attack is approximately just a violation of the non-attacking condition for T with some symmetry-breaking stipulations. Lemma 4.7.11. Let S be a special snake ofD(b). If T ∈G(S) is not an SSKT then T contains a S-attack. Proof. By Lemma 4.7.9, maj( ˆ T)= coinv( ˆ T)= 0 but T is not an SSKT so it must contain a pair of attacking cells with entry 1 and at least one of those cells in S. If S is composed only of the first row of D(b). Any cell y with entry 1 attacking a cell in S must in fact lie in the same column as some cell x in S so this pair is an S-attack. If S contains cells in multiple rows, let y be the leftmost, then lowest cell in S that is not in row 1. In order for S to be weakly connected, y must share a column with a cell x in row 1 and again this pair is an S-attack. The central step in our proof of theκ toh expansion is showing the existence of the following involution on the set F ={(S,T)| S a special snake ofD(b), T∈G(S) and T / ∈ SSKT}. (4.7.7) when b 1 > 0. Definition 4.7.12. With b 1 > 0 and (S,T)∈F define S ′ as follows. Take x to be the rightmost, then topmost cell that occurs as the first cell in an S-attack. Let y be the rightmost, then lowest cell such that x,y is an S-attack. Let B(y) be the set containing y and all cells in the same row to 131 its right inD(b). Then S ′ is defined to be the symmetric difference of S and B(y). We also write ι(S,T)=(S ′ ,T). Remark 4.7.13. A choice of x as in Definition 4.7.12 exists by Lemma 4.7.11, so the procedure is well-defined. We claimι is a sign-reversing involution onF . The content of the proof is in several interme- diate results. The next lemma should help clarify the nature of the procedure. 7 7 3 3 3 2 6 5 4 4 4 4 3 1 4 3 1 1 1 1 1 2 2 2 1 1 1 1 1 1 1 ι ←→ 7 7 3 3 3 2 6 5 4 4 4 4 3 1 4 3 1 1 1 1 1 2 2 2 1 1 1 1 1 1 1 Figure 4.17: The mapι. Lemma 4.7.14. In the setting of Definition 4.7.12, either B (y)⊂ S or B(y)∩ S= / 0. Equivalently, either S ′ = S\ B(y) or S ′ = S∪ B(y). Proof. Suppose z is the leftmost cell in S∩ B(y). If z is not in column 1 then in order for S to be weakly connected its highest cell w in the column left of z must be weakly above some cell v∈ S in the same column as z. If w does not attack z, then v is distinct from z and attacks z. Either way, we see that z must be y so B(y)⊂ S. Lemma 4.7.15. 
In the setting of Definition 4.7.12, S ′ is a special snake of b. Proof. By Lemma 4.7.14, either S ′ = S∪ B(y) or S ′ = S\ B(y). We first show that either set is weakly connected. Both S and B(y) are individually weakly connected. Moreover y is weakly connected to x in S, so S∪ B(y) is weakly connected. In the case B(y)⊂ S, suppose S\ B(y) fails to be weakly connected. There must be a cell z∈ B(y) attacking a cell w∈ S\ B(y) such that x and w do not share a weakly connected component of S\B(y). If the cell w ′ immediately left of w is in S, then w ′ 132 must lie in a column strictly left of x. The pair of w ′ and z then constitute an S-attack contradicting the choice of x. If there is no such w ′ in S then w and z in some order constitute an S-attack which still contradicts the choice of x. Next we showD(b)\ S ′ =D(a ′ ) for some a ′ ⪯ b. LetD(a)=D(b)\ S so that a⪯ b. When S∩ B(y)= / 0 it is clear thatD(b)\(S∪ B(y))=D(a)\ B(y) is indeed a key diagramD(a ′ ). Say we have r< s with a r ≥ a s . Since T| D(a) is a SSKT we can apply Lemma 4.4.9 to see that no entry 1 in row s of T| D(a) can share a column with a cell in row r ofD(a). Every entry of B(y) must be 1, since y has entry 1 maj( ˆ T)= 0. Thus, B(y) does not lie in row s. It follows that a ′ r ≥ a ′ s , so a ′ ⪯ a⪯ b. If B(y)⊂ S then the cell immediately left of y cannot be contained in S without violating either (3) in Definition 4.7.3 or (d) in Definition 4.7.10. Thus D(b)\(S\ B(y)) is also a key diagram D(a ′ ). Take row indices r< s with b r ≥ b s . Say there is a cell (c,s)∈ S\ B(y). If (r,c)∈D(a) then we must have (r,c)∈ S since a⪯ b. Thus, we will have a ′ r ≥ a ′ s unless (r,c)∈ B(y). In the latter case,(r,c) and(s,c) constitute an S-attack with(r,c) contradicting the choice of x. Therefore a ′ ⪯ b. We finally check (3) in Definition 4.7.3. Since S is a snake, S\ B(y) maintains compliance. Suppose S∩B(y)= / 0 and we have z∈ B(y) with some w∈ S sharing a column. If w is below z then w,z is an S-attack forcing z= y. A cell immediately left of z can then not be in B(y) or S which means it will not violate (3) together with w and z. If w is above z and there is a cell z ′ immediately right of z then w,z ′ would be an S-attack in contradiction of our choice of x and y. If such z ′ does not exists, then the fact that w∈ S and z / ∈ S contradicts a⪯ b. Therefore S∪ B(y) must satisfy (3). We have shown that S ′ is a snake. It is in fact a special snake since y being the second cell in an S-attack precludes it from being(1,1). Lemma 4.7.16. In the setting of Definition 4.7.12, we have (S ′ ,T)∈F . Proof. Since S ′ is a special snake by Lemma 4.7.15, we just need to show T ∈G(S ′ ). We have T| S ′ = 1 by construction so we are further reduced to the claim that T| D(b)\S ′ is an SSKT. 133 SayD(a)=D(b)\ S andD(a ′ )=D(b)\ S ′ . IfD(a ′ )=D(a)\ B(y) then ˆ T| D(a ′ ) will remain non-attacking with weakly decreasing rows left to right. Suppose T| D(a ′ ) contains a Type II (resp. Type I) co-inversion triple i< j< k as in Fig. 4.4. This is only possible if the row s containing i,k is strictly (resp. weakly) shorter than the row r containing j in T| D(a) . Since T| D(a) is an SSKT, the Type I case is impossible by Lemma 4.4.9. For the Type II case we have r< s and a r > a s . Then there is an entry j ′ of T| D(a) immediately right of j, with i< j ′ ̸= 1. In particular j ′ ̸= 1, so the corresponding cell is inD(b ′ ) as well, where a ′ r ≤ a ′ s ≤ a s . So there is an entry i ′ immediately right of i in T| D(a) as well, with i ′ ≤ i< j ′ . 
This argument repeats indefinitely by replacing i, j with i′, j′, so we must reject the existence of the Type II triple. In this case T|_{D(a′)} is an SSKT.

If instead D(a′) = D(a) ∪ B(y), then in addition to ruling out co-inversion triples, we must ensure none of the new 1 entries from B(y) attack any existing 1 entries in T|_{D(a)}, which we deal with first. Suppose we do have such a pair of attacking cells z, w with z ∈ B(y) ⊂ S and w ∈ D(a). This pair satisfies (b) and (c) in Definition 4.7.10. Since w ∉ S, any cell immediately left of w is also not in S, so (d) is satisfied as well. Since z is in a column strictly right of x, or above x in the same column, satisfying (a) would contradict our choice of x, so we must take (a) to be false. The first way this may occur is if w lies below z in the same column. If y = z then x would be in position to attack w, contradicting our choice of y, so the cell z′ immediately left of z must be in B(y) ⊂ S. This makes z′, w an S-attack, which contradicts our choice of x. So (a) must instead fail by having w in the column immediately left of z, necessarily in a row s above the row r of z. We see a_r ≤ a_s, as z but not w is in S. However, T|_{D(a)}(w) = 1, which is therefore at most the entry in the same column of row r, contradicting Lemma 4.4.9. Thus T|_{D(a′)} is non-attacking.

Now suppose once again that there exists a co-inversion triple i < j < k in T|_{D(a′)}, say with i, k in row r and j in row s. If it is a Type I triple with r < s and a′_r > a′_s, then we must have j > 1, so the corresponding cell is in D(a). Since the entries do not form a co-inversion triple in T|_{D(a)}, we must either have a_r ≤ a_s or the cell containing i not in D(a), which implies a_r ≤ a_s anyway. Then j < k contradicts Lemma 4.4.9.

Instead assume we have a Type II triple with r > s and a′_r ≥ a′_s. Similar to before, we must have a_r < a_s to avoid a co-inversion triple in T|_{D(a)}. That is, a_r < a_s = a′_s ≤ a′_r with B(y) in row r. This implies (a_s, r) ∈ B(y). Additionally, a ⪯ b implies a_s ≤ a′_r ≤ b_r < b_s, so (a_s + 1, s) ∈ S. Then (a_s, r) and (a_s + 1, s) create an S-attack, which contradicts the choice of x. So the Type II co-inversion triple is impossible. We conclude that T|_{D(a′)} is an SSKT, which completes the proof.

Lemma 4.7.17. In the setting of Definition 4.7.12, a pair of cells z, w ∈ D(b) is an S-attack if and only if it is an S′-attack.

Proof. Recall that S′ is a snake and T ∈ G(S′) by Lemmas 4.7.15 and 4.7.16. Suppose the pair (z,w) is an S-attack. Conditions (a) and (c) in Definition 4.7.10 do not depend on S, so they are satisfied for S′. By the choice of x we must have z ∉ B(y), so z ∈ S′. Suppose w is in the column right of z. We know x must lie in a column weakly right of z, and if they share a column then x is weakly above z. Then y is in a row strictly above w or a column weakly right of w. The cell immediately left of w is then not in B(y), and since it cannot be in S by (d), it is not in S′ either. Then (d) is satisfied for S′, so z, w is an S′-attack.

Conversely, assume z, w is an S′-attack. Again, conditions (a) and (c) are already satisfied for S. To show z ∈ S we must show z ∉ B(y). Assuming z ∈ B(y) to the contrary, we have S′ = S ∪ B(y), and w ∈ S, else T|_{D(b)\S} violates the non-attacking condition. Note that the cell immediately left of w is not contained in S′, hence not in S, as it would violate either (d) in Definition 4.7.10 or (3) in Definition 4.7.3. Since S is weakly connected, it must contain a cell v attacking w ≠ (1,1) from the bottom or left, so that (v,w) is an S-attack.
We cannot have x in a column strictly left of v or strictly right of z ∈ B(y), so x shares a column with v or w. If x shares a column with w, this forces z into the same column. We must have that z = y, with x strictly below y = z strictly below w. Since z ∉ S and w ∈ S, there must exist a cell z′ immediately right of z in D(b) to satisfy (2) in Definition 4.7.3. But now the S-attack (w, z′) contradicts the choice of x. Now suppose that instead of sharing a column with w, x shares a column with v, immediately left of w. By the choice of x, x lies above v. Then (x,w) is an S-attack, so by the choice of y, y lies below w in the same column. Since w but not y is in S, (2) in Definition 4.7.3 requires there to exist a cell y′ immediately right of y. Now (w, y′) is an S-attack that contradicts the choice of x. Therefore it must be the case that z ∉ B(y), so z ∈ S.

Finally, to demonstrate (d) in Definition 4.7.10, suppose w is in the column right of z. Let w′ be the cell immediately left of w, which cannot be in S′ since z, w is an S′-attack. If w′ were in S, then w′, z would be an S-attack, forcing x to lie in a column strictly right of w′, or above w′ in the same column. This would prevent w′ from being in B(y) and therefore contradict w′ ∉ S′. We have thus shown that z, w is an S-attack.

We can now put everything together to see that ι is a sign-reversing involution.

Theorem 4.7.18. Let b be a weak composition with b_1 > 0. Then ι : F → F is a well-defined involution that reverses the sign of the snake.

Proof. Suppose (S′,T) = ι(S,T). We have (S′,T) ∈ F by Lemma 4.7.16. By Lemma 4.7.17, an S-attack is equivalent to an S′-attack, and it is therefore apparent from the definition that ι(S′,T) = (S,T). From Lemma 4.7.14 we see that S and S′ differ in exactly one row, in which exactly one of S and S′ contains cells. Thus sign(S) = −sign(S′).

The involution ι is only defined when b is a weak composition with b_1 > 0. Before proving Theorem 4.7.6 we need one more lemma to sidestep this restriction, and some notation. Suppose we have a set I ⊂ [n] with elements i_1 < ··· < i_ℓ. Define the set of weak compositions

    C_{I,k} = {a : |a| = k and a_i > 0 implies i ∈ I}.

For any a ∈ C_{I,k}, and any other subset J ⊂ [n] with the same number of elements j_1 < ··· < j_ℓ, we let a^J denote the weak composition in C_{J,k} given by a^J_{j_m} = a_{i_m}.

Lemma 4.7.19. Let I, J ⊂ {1,...,n} with elements i_1 < ··· < i_ℓ and j_1 < ··· < j_ℓ respectively. Then the following diagram commutes, where the horizontal maps are the identity and the vertical isomorphisms send h_a ↦ h_{a^J} and κ_a ↦ κ_{a^J}, with inverses h_a ↦ h_{a^I} and κ_a ↦ κ_{a^I}:

    Span{h_a | a ∈ C_{I,k}}  --id-->  Span{κ_a | a ∈ C_{I,k}}
            |                                 |
      h_a ↦ h_{a^J}                     κ_a ↦ κ_{a^J}
            ↓                                 ↓
    Span{h_a | a ∈ C_{J,k}}  --id-->  Span{κ_a | a ∈ C_{J,k}}

Proof. Any entry in the first column of a reverse SSAF T must match its row index. Then wt(T)_m = 0 implies sh(T)_m = 0. By Theorem 4.4.12 we can therefore see Span{h_a | a ∈ C_{I,k}} ⊂ Span{κ_a | a ∈ C_{I,k}}, and restricting to any complete homogeneous subspace shows they coincide due to dimension. For a ∈ C_{I,k} we have (a^J)^I_{i_m} = a^J_{j_m} = a_{i_m}, so we see that h_a ↦ h_{a^J} and κ_a ↦ κ_{a^J} are indeed isomorphisms with the specified inverses. By Theorem 4.4.12 we have

    h_{a^J} = ∑_{T ∈ SSAF, wt(T) = a^J} κ_{sh(T)},

so we are done once we show

    ∑_{T ∈ SSAF, wt(T) = a^J} κ_{sh(T)} = ∑_{T ∈ SSAF, wt(T) = a} κ_{sh(T)^J}.

Indeed, if T is a reverse SSAF of weight a, we obtain a new reverse SSAF T^J of weight a^J and shape sh(T)^J by setting T^J(c, j_r) = j_s whenever T(c, i_r) = i_s. Entries in the first column still match their row index. Moreover, for a pair of cells (c, i_r), (d, i_s) in T, the corresponding pair of cells (c, j_r), (d, j_s) in T^J maintains the same relative ordering of rows, columns, and entries. So T^J will in fact remain a reverse SSAF. The inverse map T ↦ T^I is defined symmetrically, and we are done.
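The relabelling maps a ↦ a^J and T ↦ T^J used above are straightforward to compute. The following is a minimal sketch, assuming weak compositions are stored as 0-indexed Python lists and reverse SSAF as dictionaries keyed by (column, row) cells; the function names and data layout are illustrative choices rather than notation from the text.

```python
# Hypothetical helpers.  The rows and entries of a reverse SSAF of weight a in
# C_{I,k} all lie in I, so relabelling by the order-preserving bijection I -> J
# is well defined.

def relabel_composition(a, I, J):
    """Send a in C_{I,k} to a^J in C_{J,k}, i.e. a^J_{j_m} = a_{i_m}."""
    aJ = [0] * len(a)
    for i_m, j_m in zip(I, J):
        aJ[j_m - 1] = a[i_m - 1]      # 1-indexed positions, 0-indexed lists
    return aJ

def relabel_filling(T, I, J):
    """Send a reverse SSAF T of weight a to T^J, where T^J(c, j_r) = j_s
    whenever T(c, i_r) = i_s."""
    to_J = dict(zip(I, J))            # order-preserving bijection I -> J
    return {(c, to_J[r]): to_J[T[(c, r)]] for (c, r) in T}
```

For instance, with I = {1,3}, J = {2,4} and a = (2,0,1,0), the sketch returns a^J = (0,2,0,1); composing with the symmetric map for I recovers a, matching the inverse maps in Lemma 4.7.19.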
Proof of Theorem 4.7.6. The statement is trivial for b = (0,0,...), so we take b to have at least one nonzero part and proceed under the assumption that the expansion holds for any weak composition with strictly fewer nonzero parts.

We first assume b_1 > 0. If S is a special snake of D(b), then it contains all of the first row, so D(b) \ S has a shape with strictly fewer nonzero parts than b. Therefore, using the inductive hypothesis,

    ∑_{sh(U)=b} sign(U) h_{wt(U)} = ∑_S sign(S) h_{(|S|,0,0,...)} ∑_{sh(U′)=sh(D(b)\S)} sign(U′) h_{wt(U′)}
                                  = ∑_S sign(S) x_1^{|S|} κ_{sh(D(b)\S)}
                                  = ∑_S sign(S) x_1^{|S|} ∑_{T ∈ SSKT, sh(T)=sh(D(b)\S)} x^{wt(T)}
                                  = ∑_{(S,T), T ∈ G(S)} sign(S) x^{wt(T)},

where the special snake tabloids U of D(b) decompose into a special snake S of D(b) and a special snake tabloid U′ of D(b) \ S. It is now our goal to show

    ∑_{(S,T), T ∈ G(S)} sign(S) x^{wt(T)} = ∑_{T ∈ SSKT, sh(T)=b} x^{wt(T)} = κ_b.        (4.7.8)

The snake S is weakly connected, and for cells in distinct rows to be weakly connected is for them to be attacking. Then if T ∈ G(S) is an SSKT, S can be nothing but the first row of D(b) and we have sign(S) = 1. Conversely, for T ∈ SSKT we have T ∈ G(S) if we take S to be the first row of D(b). This is to say

    ∑_{(S,T), T ∈ G(S)} sign(S) x^{wt(T)} = ∑_{T ∈ SSKT, sh(T)=b} x^{wt(T)} + ∑_{(S,T) ∈ F} sign(S) x^{wt(T)}.

The last summation is zero by Theorem 4.7.18, so (4.7.8) follows.

We now drop the assumption that b_1 > 0. Take I = {i_1 < ··· < i_ℓ} to be the nonzero indices of b. Let J = {j_1 < ··· < j_ℓ} be any subset of [n] of the same size with j_1 = 1. Then b^J_1 = b_{i_1} > 0 and b^J has the same number of nonzero parts as b. The special snake tabloid expansion is therefore valid for b^J. Using Lemma 4.7.19 we have

    κ_b = κ_{(b^J)^I} = ∑_{sh(U)=b^J} sign(U) h_{wt(U)^I}.

Given a special snake tabloid U = (S_1,...,S_n) of shape b^J, we obtain a special snake tabloid U′ = (S′_1,...,S′_n) of shape b and weight wt(U)^I by requiring that (c, i_r) ∈ U′_{i_s} if and only if (c, j_r) ∈ U_{j_s}. Given a pair of cells (c, j_r), (d, j_s) in U, the corresponding pair of cells (c, i_r), (d, i_s) in U′ maintains the same relative order of their row, column, and snake indices. So U′ indeed remains a special snake tabloid, and the inverse map from special snake tabloids of shape b to those of shape b^J is defined symmetrically. Additionally, each U_{j_m} contains cells in the same number of rows as does U′_{i_m}, so sign(U) = sign(U′). We see that

    κ_b = ∑_{sh(U)=b^J} sign(U) h_{wt(U)^I} = ∑_{sh(U)=b} sign(U) h_{wt(U)},

completing the inductive step and the proof.

References

[1] Sam Armon, Sami Assaf, Grant Bowling, and Henry Ehrhard. “Kohnert’s rule for flagged Schur modules”. Journal of Algebra 617 (2023), pp. 352–381. ISSN: 0021-8693. DOI: https://doi.org/10.1016/j.jalgebra.2022.10.032.
[2] Sami Assaf. “Nonsymmetric Macdonald polynomials and a refinement of Kostka–Foulkes polynomials”. Trans. Amer. Math. Soc. 370.12 (2018), pp. 8777–8796. ISSN: 0002-9947.
[3] Sami Assaf and Anne Schilling. “A Demazure crystal construction for Schubert polynomials”. Algebraic Combinatorics 1 (2018), pp. 225–247.
[4] Sami Assaf and Dominic Searles. “Kohnert polynomials”. Exp. Math. 31.1 (2022), pp. 93–119.
[5] Sami Assaf and Dominic Searles. “Kohnert tableaux and a lifting of quasi-Schur functions”. J. Combin. Theory Ser. A 156 (2018), pp. 85–118.
[6] Sami Assaf and Stephanie van Willigenburg. “Skew key polynomials and a generalized Littlewood–Richardson rule”. European Journal of Combinatorics 103 (2022), p. 103518. ISSN: 0195-6698. DOI: https://doi.org/10.1016/j.ejc.2022.103518.
[7] Sami H. Assaf. “A bijective proof of Kohnert’s rule for Schubert polynomials”. Comb. Theory 2.1 (2022), Paper No. 5, 9.
[8] Sami H. Assaf. “Demazure crystals for Kohnert polynomials”. Trans. Amer. Math. Soc. 375.3 (2022), pp. 2147–2186.
[9] O. Azenhas and A. Emami. “An analogue of the Robinson-Schensted-Knuth correspondence and non-symmetric Cauchy kernels for truncated staircases”. European J. Combin. 46 (2015), pp. 16–44.
[10] Barry Balof and Kenneth Bogart. “Simple Inductive Proofs of the Fishburn and Mirkin Theorem and the Scott-Suppes Theorem”. Order 20.1 (2003), pp. 49–51.
[11] Nantel Bergeron and Sara Billey. “RC-graphs and Schubert polynomials”. Experiment. Math. 2.4 (1993), pp. 257–269.
[12] Nantel Bergeron and Frank Sottile. “Schubert polynomials, the Bruhat order, and the geometry of flag manifolds”. Duke Math. J. 95.2 (1998), pp. 373–423.
[13] I. N. Bernstein, I. M. Gelfand, and S. I. Gelfand. “Schubert cells, and the cohomology of the spaces G/P”. Uspehi Mat. Nauk 28.3(171) (1973), pp. 3–26.
[14] Daniel Bump and Anne Schilling. Crystal Bases: Representations And Combinatorics. New Jersey: World Scientific Publishing Company, 2017. ISBN: 9789814733434.
[15] Seung-Il Choi and Jae-Hoon Kwon. “Lakshmibai-Seshadri paths and non-symmetric Cauchy identity”. Algebr. Represent. Theory 21.6 (2018), pp. 1381–1394.
[16] Timothy Y. Chow. “Descents, Quasi-Symmetric Functions, Robinson-Schensted for Posets, and the Chromatic Symmetric Function”. Journal of Algebraic Combinatorics 10 (1999), pp. 227–240.
[17] Michel Demazure. “Désingularisation des variétés de Schubert généralisées”. Ann. Sci. École Norm. Sup. (4) 7 (1974). Collection of articles dedicated to Henri Cartan on the occasion of his 70th birthday, I, pp. 53–88.
[18] Michel Demazure. “Une nouvelle formule des caractères”. Bull. Sci. Math. (2) 98.3 (1974), pp. 163–172.
[19] Ömer Eğecioğlu and Jeffrey B. Remmel. “A combinatorial interpretation of the inverse Kostka matrix”. Linear and Multilinear Algebra 26.1-2 (1990), pp. 59–84. DOI: 10.1080/03081089008817966.
[20] Alex Fink, Karola Mészáros, and Avery St. Dizier. “Schubert polynomials as integer point transforms of generalized permutahedra”. Adv. Math. 332 (2018), pp. 465–475.
[21] Alex Fink, Karola Mészáros, and Avery St. Dizier. “Zero-one Schubert polynomials”. Math. Z. 297.3-4 (2021), pp. 1023–1042.
[22] Amy M. Fu and Alain Lascoux. “Non-symmetric Cauchy kernels for the classical groups”. J. Combin. Theory Ser. A 116.4 (2009), pp. 903–917.
[23] William Fulton. “Flags, Schubert polynomials, degeneracy loci, and determinantal formulas”. Duke Math. J. 65.3 (1992), pp. 381–420.
[24] William Fulton. Young Tableaux. With Applications to Representation Theory and Geometry. London Mathematical Society Student Texts 35. Cambridge: Cambridge University Press, 1997.
[25] Vesselin Gasharov. “Incomparability graphs of (3 + 1)-free posets are s-positive”. Discrete Mathematics 157.1 (1996), pp. 193–197. ISSN: 0012-365X. DOI: https://doi.org/10.1016/S0012-365X(96)83014-7.
[26] Mathieu Guay-Paquet. A modular relation for the chromatic symmetric functions of (3+1)-free posets. 2013. DOI: 10.48550/ARXIV.1306.2400.
[27] J. Haglund, M. Haiman, and N. Loehr. “A combinatorial formula for nonsymmetric Macdonald polynomials”. Amer. J. Math. 130.2 (2008), pp. 359–383.
[28] James Haglund, Sarah Mason, and Jeffrey Remmel. “Properties of the nonsymmetric Robinson-Schensted-Knuth algorithm”. J. Algebraic Combin. 38.2 (2013), pp. 285–327.
[29] Masaki Kashiwara. “Crystalizing the q-analogue of universal enveloping algebras”. Comm. Math. Phys. 133.2 (1990), pp. 249–260.
[30] Masaki Kashiwara. “The crystal base and Littelmann’s refined Demazure character formula”. Duke Math. J. 71.3 (1993), pp. 839–858.
[31] Masaki Kashiwara and Toshiki Nakashima. “Crystal graphs for representations of the q-analogue of classical Lie algebras”. J. Algebra 165.2 (1994), pp. 295–345.
[32] Dongkwan Kim and Pavlo Pylyavskyy. “Robinson–Schensted correspondence for unit interval orders”. Sel. Math. New Ser. 27 (2021).
[33] Mikhail Kogan and Abhinav Kumar. “A proof of Pieri’s formula using the generalized Schensted insertion algorithm for rc-graphs”. Proc. Amer. Math. Soc. 130.9 (2002), pp. 2525–2534.
[34] Axel Kohnert. “Weintrauben, Polynome, Tableaux”. Bayreuth. Math. Schr. 38 (1991). Dissertation, Universität Bayreuth, Bayreuth, 1990, pp. 1–97.
[35] Witold Kraśkiewicz and Piotr Pragacz. “Foncteurs de Schubert”. C. R. Acad. Sci. Paris Sér. I Math. 304.9 (1987), pp. 209–211.
[36] Witold Kraśkiewicz and Piotr Pragacz. “Schubert functors and Schubert polynomials”. European J. Combin. 25.8 (2004), pp. 1327–1344. ISSN: 0195-6698.
[37] Alain Lascoux. “Double crystal graphs”. Studies in memory of Issai Schur. Birkhäuser, Boston, 2003, pp. 95–114.
[38] Alain Lascoux, Bernard Leclerc, and Jean-Yves Thibon. “Ribbon tableaux, Hall–Littlewood functions, quantum affine algebras, and unipotent varieties”. Journal of Mathematical Physics 38.2 (Feb. 1997), pp. 1041–1068. ISSN: 0022-2488. DOI: 10.1063/1.531807.
[39] Alain Lascoux and Marcel-Paul Schützenberger. “Keys & standard bases”. Invariant theory and tableaux (Minneapolis, MN, 1988). Vol. 19. IMA Vol. Math. Appl. Springer, New York, 1990, pp. 125–144.
[40] Alain Lascoux and Marcel-Paul Schützenberger. “Polynômes de Schubert”. C. R. Acad. Sci. Paris Sér. I Math. 294.13 (1982), pp. 447–450.
[41] Peter Littelmann. “Crystal graphs and Young tableaux”. J. Algebra 175.1 (1995), pp. 65–87.
[42] G. Lusztig. “Canonical bases arising from quantized enveloping algebras”. J. Amer. Math. Soc. 3.2 (1990), pp. 447–498. ISSN: 0894-0347.
[43] I. G. Macdonald. “A new class of symmetric functions”. Actes du 20e Séminaire Lotharingien 372 (1988), pp. 131–171.
[44] I. G. Macdonald. “Affine Hecke algebras and orthogonal polynomials”. Astérisque 237 (1996). Séminaire Bourbaki, Vol. 1994/95, Exp. No. 797, 4, 189–207.
[45] Alexander Magid. “Enumeration of convex polyominoes: A generalization of the Robinson-Schensted correspondence and the dimer problem”. PhD thesis. Brandeis University, 1992.
[46] Peter Magyar. “Borel-Weil theorem for configuration varieties and Schur modules”. Adv. Math. 134.2 (1998), pp. 328–366.
[47] Peter Magyar. “Schubert polynomials and Bott-Samelson varieties”. Comment. Math. Helv. 73.4 (1998), pp. 603–636.
[48] Sarah Mason. “A decomposition of Schur functions and an analogue of the Robinson-Schensted-Knuth algorithm”. Sém. Lothar. Combin. 57 (2008), B57e, 24 p.
[49] Sarah Mason. “An explicit construction of type A Demazure atoms”. J. Algebraic Combin. 29.3 (2009), pp. 295–313.
[50] Karola Mészáros, Linus Setiabrata, and Avery St. Dizier. “An orthodontia formula for Grothendieck polynomials”. Trans. Amer. Math. Soc. 375.2 (2022), pp. 1281–1303.
[51] F. D. Murnaghan. “On the Representations of the Symmetric Group”. Amer. J. Math. 59.3 (1937), pp. 437–488. ISSN: 00029327, 10806377.
[52] Tadasi Nakayama. “On some modular properties of irreducible representations of a symmetric group, I”. Jpn. J. Math. 17 (1940), pp. 165–184. DOI: 10.4099/jjm1924.17.0_165.
[53] Eric M. Opdam. “Harmonic analysis for certain representations of graded Hecke algebras”. Acta Math. 175.1 (1995), pp. 75–121.
[54] Ying Anna Pun. “On decomposition of the product of Demazure atoms and Demazure characters”. PhD thesis. University of Pennsylvania, 2016.
[55] Victor Reiner and Mark Shimozono. “Key polynomials and a flagged Littlewood-Richardson rule”. J. Combin. Theory Ser. A 70.1 (1995), pp. 107–143.
[56] Victor Reiner and Mark Shimozono. “Percentage-avoiding, northwest shapes and peelable tableaux”. J. Combin. Theory Ser. A 82.1 (1998), pp. 1–73.
[57] Yasmine B. Sanderson. “On the connection between Macdonald polynomials and Demazure characters”. J. Algebraic Combin. 11.3 (2000), pp. 269–275.
[58] Dana Scott and Patrick Suppes. “Foundational Aspects of Theories of Measurement”. The Journal of Symbolic Logic 23.2 (1958), pp. 113–128. ISSN: 00224812.
[59] John Shareshian and Michelle L. Wachs. “Chromatic quasisymmetric functions”. Advances in Mathematics 295 (2016), pp. 497–551. ISSN: 0001-8708. DOI: https://doi.org/10.1016/j.aim.2015.12.018.
[60] Frank Sottile. “Pieri’s formula for flag manifolds and Schubert polynomials”. Ann. Inst. Fourier (Grenoble) 46.1 (1996), pp. 89–110.
[61] Richard P. Stanley. “A Symmetric Function Generalization of the Chromatic Polynomial of a Graph”. Advances in Mathematics 111.1 (1995), pp. 166–194. ISSN: 0001-8708. DOI: https://doi.org/10.1006/aima.1995.1020.
[62] Richard P. Stanley. Enumerative Combinatorics. Vol. 2. Cambridge: Cambridge University Press, 1999.
[63] Richard P. Stanley. “Graph colorings and related symmetric functions: ideas and applications. A description of results, interesting applications, & notable open problems”. Discrete Mathematics 193.1 (1998), pp. 267–286. ISSN: 0012-365X. DOI: https://doi.org/10.1016/S0012-365X(98)00146-0.
[64] Richard P. Stanley. “On the number of reduced decompositions of elements of Coxeter groups”. European J. Combin. 5.4 (1984), pp. 359–372.
[65] Richard P. Stanley and John R. Stembridge. “On immanants of Jacobi-Trudi matrices and permutations with restricted position”. Journal of Combinatorial Theory, Series A 62.2 (1993), pp. 261–279. ISSN: 0097-3165. DOI: https://doi.org/10.1016/0097-3165(93)90048-D.
[66] John R. Stembridge. “A local characterization of simply-laced crystals”. Trans. Amer. Math. Soc. 355.12 (2003), pp. 4807–4823. ISSN: 0002-9947.
[67] Thomas S. Sundquist, David G. Wagner, and Julian West. “A Robinson–Schensted Algorithm for a Class of Partial Orders”. Journal of Combinatorial Theory, Series A 79.1 (1997), pp. 36–52. ISSN: 0097-3165. DOI: https://doi.org/10.1006/jcta.1997.2769.
Abstract
We consider three topics with some common themes. First we show that Kohnert's combinatorial model for key polynomials computes the characters of a large class of flagged Schur modules indexed by northwest diagrams. Moreover, within the class of %-avoiding diagrams it is precisely the northwest diagrams whose corresponding characters are described by the model. Second, we define a normal crystal structure on the P-arrays considered by Gasharov. While not a highest weight crystal, its connected components have Schur positive characters thereby refining the results of Gasharov, and Shareshian and Wachs, describing how the chromatic symmetric functions of incomparability graphs of (3+1)-free posets are Schur positive. Finally, we define a new basis for the polynomial ring which lifts the complete homogeneous symmetric polynomials while retaining representation theoretic significance. Using a specialized RSK algorithm we give an explicit nonnegative expansion into key polynomials and, generalizing the special rim hook tabloids of Eğecioğlu and Remmel, give an explicit signed expansion for key polynomials into this new basis.
Conceptually similar
Characters of flagged Schur modules and compatible slides
A Pieri rule for key polynomials
Categorical operators and crystal structures on the ring of symmetric functions
Advances in the combinatorics of shifted tableaux
A proposed bijection for the K-Kohnert rule for Grothendieck polynomials
Applications on symmetric and quasisymmetric functions
Center and trace of the twisted Heisenberg category
Koszul-perverse sheaves and the A⁽¹⁾₂ dual canonical basis
On the Kronecker product of Schur functions
The structure and higher representation theory of odd categorified sl(2)
Limit theorems for three random discrete structures via Stein's method
The physics of membrane protein polyhedra
Branes, topology, and perturbation theory in two-dimensional quantum gravity
Computation of class groups and residue class rings of function fields over finite fields
Applications of quantum error-correcting codes to quantum information processing
Moyal star formalism in string field theory
Imposing classical symmetries on quantum operators with applications to optimization
Efficient inverse analysis with dynamic and stochastic reductions for large-scale models of multi-component systems
Eigenfunctions for random walks on hyperplane arrangements
Performant, scalable, and efficient deployment of network function virtualization
Asset Metadata
Creator
Ehrhard, Henry (author)
Core Title
Colors, Kohnert's rule, and flagged Kostka coefficients
School
College of Letters, Arts and Sciences
Degree
Doctor of Philosophy
Degree Program
Mathematics
Degree Conferral Date
2023-08
Publication Date
06/01/2023
Defense Date
05/09/2023
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
algebraic combinatorics,chromatic symmetric function,complete homogeneous symmetric polynomial,crystal graph,Demazure character,flagged Schur module,key polynomial,Kohnert's rule,OAI-PMH Harvest,Schur polynomial,symmetric function theory
Format
theses (aat)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Assaf, Sami (committee chair), Fulman, Jason (committee member), Zanardi, Paolo (committee member)
Creator Email
hehrhard@gmail.com,hehrhard@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC113144561
Unique identifier
UC113144561
Identifier
etd-EhrhardHen-11912.pdf (filename)
Legacy Identifier
etd-EhrhardHen-11912
Document Type
Dissertation
Rights
Ehrhard, Henry
Internet Media Type
application/pdf
Type
texts
Source
20230602-usctheses-batch-1051 (batch), University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
cisadmin@lib.usc.edu