The Computation of Kostka Numbers and Littlewood-Richardson Coefficients is #P-complete
Hariharan Narayanan, University of Chicago



Young tableaux
A Young tableau of shape λ = (4, 3, 2) and content μ = (3, 3, 2, 1).
 The numbers in each row are non-decreasing from left to right.
 The numbers in each column are strictly increasing from top to bottom.
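The two conditions above completely determine the objects being counted. As a sanity check, here is a minimal brute-force enumerator (the function name `count_tableaux` is ours, not from the talk; exponential time, tiny examples only) that reproduces the talk's example value K = 4 for this shape and content.

```python
def count_tableaux(shape, content):
    """Brute-force count of Young tableaux of the given shape and content:
    rows weakly increasing left to right, columns strictly increasing
    top to bottom.  Exponential time; intended only for tiny examples."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    remaining = list(content)          # remaining[v-1] = copies of value v left
    rows = [[] for _ in shape]

    def fill(idx):
        if idx == len(cells):
            return 1
        r, c = cells[idx]
        total = 0
        for v in range(1, len(remaining) + 1):
            if remaining[v - 1] == 0:
                continue
            if c > 0 and rows[r][c - 1] > v:    # row must stay weakly increasing
                continue
            if r > 0 and rows[r - 1][c] >= v:   # column must stay strictly increasing
                continue
            remaining[v - 1] -= 1
            rows[r].append(v)
            total += fill(idx + 1)
            rows[r].pop()
            remaining[v - 1] += 1
        return total

    return fill(0)

# The example from the talk: shape λ = (4, 3, 2), content μ = (3, 3, 2, 1).
print(count_tableaux((4, 3, 2), (3, 3, 2, 1)))   # prints 4
```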

Skew tableaux
 As with tableaux, the numbers in each row of a skew tableau are non-decreasing from left to right, and the numbers in each column are strictly increasing from top to bottom.
A skew tableau of shape (2)*(2,1) and content μ = (1, 1, 2, 1).

LR (skew) tableaux
 A (skew) tableau is said to be LR if, when its entries are read right to left, top to bottom, then at every moment the number of copies of i encountered is not less than the number of copies of i+1 encountered, for each i.
An LR skew tableau of shape (2)*(2,1) and content (2, 2, 1).
A skew tableau of shape (2)*(2,1) and content (2, 2, 1) that is not LR.
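The LR condition is a statement about prefixes of the reading word, so it can be checked in one pass. A small sketch (the function name `is_lattice` is ours):

```python
def is_lattice(reading_word):
    """Check the LR (lattice) condition on a reading word (entries read
    right to left, top to bottom): in every prefix, i must occur at
    least as often as i+1, for every i."""
    seen = {}
    for x in reading_word:
        seen[x] = seen.get(x, 0) + 1
        if x > 1 and seen[x] > seen.get(x - 1, 0):
            return False
    return True

print(is_lattice([1, 1, 2, 1, 2]))   # True: every prefix has #1 >= #2
print(is_lattice([1, 2, 2, 1]))      # False: after three letters, #2 > #1
```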

Kostka numbers
 The Kostka number K_λμ is the number of Young tableaux having shape λ and content μ.
If λ = (4, 3, 2) and μ = (3, 3, 2, 1), then K_λμ = 4; the four tableaux are shown on the slide.

Littlewood-Richardson coefficients
Let α and λ be partitions and ν be a vector with non-negative integer components. The Littlewood-Richardson coefficient c_λα^ν is the number of LR skew tableaux of shape λ*α that have content ν.
If λ = (2, 1), α = (2, 1) and ν = (3, 2, 1), then c_λα^ν = 2; the two LR skew tableaux are shown on the slide.
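Combining the two previous definitions gives a brute-force counter for c_λα^ν. This is a sketch under an assumption about the convention for λ*α: we place α to the upper right of λ, so the two components share no columns (the function name `lr_coefficient` is ours). It reproduces the talk's example value 2.

```python
def lr_coefficient(lam, alpha, nu):
    """Brute-force count of LR skew tableaux of shape lam*alpha with
    content nu.  Convention assumed here: alpha sits to the upper right
    of lam, so the two components of the skew shape share no columns."""
    shift = lam[0]
    # each row of the skew diagram: (starting column, length)
    rows = [(shift, a) for a in alpha] + [(0, l) for l in lam]
    # cells listed in reading order: each row right to left, top to bottom
    cells = []
    for r, (start, length) in enumerate(rows):
        for col in range(start + length - 1, start - 1, -1):
            cells.append((r, col))
    remaining = list(nu)
    filled = {}
    seen = [0] * (len(nu) + 1)      # seen[v] = copies of v read so far

    def fill(idx):
        if idx == len(cells):
            return 1
        r, col = cells[idx]
        total = 0
        for v in range(1, len(nu) + 1):
            if remaining[v - 1] == 0:
                continue
            if v > 1 and seen[v] + 1 > seen[v - 1]:
                continue                          # lattice (LR) condition
            right = filled.get((r, col + 1))
            if right is not None and v > right:
                continue                          # rows weakly increasing
            above = filled.get((r - 1, col))
            if above is not None and v <= above:
                continue                          # columns strictly increasing
            remaining[v - 1] -= 1
            seen[v] += 1
            filled[(r, col)] = v
            total += fill(idx + 1)
            remaining[v - 1] += 1
            seen[v] -= 1
            del filled[(r, col)]
        return total

    return fill(0)

# The example from the talk: λ = (2, 1), α = (2, 1), ν = (3, 2, 1).
print(lr_coefficient((2, 1), (2, 1), (3, 2, 1)))   # prints 2
```

Filling cells in reading order lets all three conditions be checked incrementally, so invalid fillings are pruned early.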

Representation theory
Consider the group SL_n(C) of n×n complex matrices that have determinant 1. Any matrix G = (g_ij) in SL_n(C) acts on the formal variables x_i by x_i ↦ Σ_j g_ij x_j. This leads to an action on the vector space T of all polynomials f in the variables {x_i} according to (G·f)(x) = f(G⁻¹x).

Representation theory
One can decompose T into a direct sum of vector spaces V_λ, where λ = (λ_1, …, λ_k) ranges over all partitions (of all natural numbers), such that each V_λ is invariant under the action of SL_n(C) and cannot be decomposed further into a non-trivial direct sum of SL_n(C)-invariant subspaces.

Representation theory
Let H be the subgroup of SL_n(C) consisting of all diagonal matrices. Though V_λ cannot be split into a direct sum of SL_n(C)-invariant subspaces, it can be split into a direct sum of one-dimensional H-invariant subspaces V_λμ:
V_λ = ⊕_μ K_λμ · V_λμ,
where μ ranges over all partitions of the same number that λ is a partition of, and the multiplicity with which V_λμ occurs in V_λ is K_λμ, the Kostka number defined earlier.

Representation theory
The Littlewood-Richardson coefficient appears as the multiplicity of V_ν in the decomposition of the tensor product V_λ ⊗ V_α:
V_λ ⊗ V_α = ⊕_ν c_λα^ν · V_ν.
The Kostka numbers and the Littlewood-Richardson coefficients also play an essential role in the representation theory of the symmetric groups [F97].

Related work
 Probabilistic polynomial-time algorithms exist that calculate the set of all non-zero Kostka numbers, and Littlewood-Richardson coefficients, for certain fixed parameters, in time polynomial in the total size of the input and output [BF97].
 Vector partition functions have been used to calculate Kostka numbers and Littlewood-Richardson coefficients [C03], and to study their properties [BGR04].
 In his thesis, E. Rassart described some polynomiality properties of Kostka numbers and Littlewood-Richardson coefficients, and asked whether they could be computed in polynomial time [R04].

The problems are in #P
Kostka numbers:
 A tableau is fully described by the number of copies of j present in its i-th row. This description has polynomial length.
 Given such a description, one can verify in polynomial time whether it corresponds to a tableau of the required kind, by checking
 that the columns have strictly increasing entries from top to bottom, and
 that the shape and content are correct.

The problems are in #P
Littlewood-Richardson coefficients:
 An LR skew tableau of shape λ*α is fully described by its shape and the number of copies of j present in its i-th row. This description has polynomial length.
 Given such a description, one can verify whether it corresponds to an LR skew tableau of the required kind by checking
 that the columns have strictly increasing entries from top to bottom,
 that the shape and content are correct, and
 that the skew tableau is LR.
 Each condition can be verified in polynomial time.
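For the Kostka case, the verification can be done directly on the compact description, without expanding the tableau. A sketch (function name and encoding `c[i][j]` are ours): column strictness holds iff, for consecutive rows i and i+1 and every value v, the number of entries ≤ v in row i+1 is at most the number of entries ≤ v-1 in row i, which prefix sums check in time polynomial in the description size.

```python
def is_valid_description(c, shape, content):
    """Verify a compact tableau description in polynomial time.
    c[i][j] = number of copies of value j+1 in row i.  Checks shape,
    content, and column strictness without expanding the tableau."""
    k = len(content)
    if [sum(row) for row in c] != list(shape):
        return False            # row i must contain exactly shape[i] entries
    if [sum(row[j] for row in c) for j in range(k)] != list(content):
        return False            # value j+1 must occur content[j] times overall
    # Columns strictly increase iff for consecutive rows i, i+1 and every v:
    #   #(entries <= v in row i+1)  <=  #(entries <= v-1 in row i).
    for i in range(len(c) - 1):
        upper = lower = 0       # running prefix counts for rows i and i+1
        for v in range(k):
            lower += c[i + 1][v]        # lower = N_{i+1}(v+1)
            if lower > upper:           # upper = N_i(v) at this point
                return False
            upper += c[i][v]
        # after the loop, every prefix condition for this pair of rows held
    return True

# Description of the tableau with rows [1,1,1,2], [2,2,3], [3,4]:
print(is_valid_description([[3, 1, 0, 0], [0, 2, 1, 0], [0, 0, 1, 1]],
                           (4, 3, 2), (3, 3, 2, 1)))   # True
# Same shape/content, but row 2 puts a 1 below a 1 (columns not strict):
print(is_valid_description([[2, 2, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]],
                           (4, 3, 2), (3, 3, 2, 1)))   # False
```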

Hardness Results
We prove that the computation of the Kostka number K_λμ is #P-hard, by reducing to it the #P-complete problem [DKM79] of computing the number of 2 × k contingency tables with given row and column sums.
The problem of computing the Littlewood-Richardson coefficient c_λα^ν is shown to be #P-hard by reducing to it the computation of the Kostka number K_λμ.

Contingency tables
A contingency table is a matrix of non-negative integers having prescribed row and column sums.

Contingency tables
Counting the number of 2 × k contingency tables with given row and column sums is #P-complete [DKM79].
Example: a contingency table with row sums a = (4, 3) and column sums b = (3, 2, 2).
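For the 2-row case, such tables are easy to enumerate directly, since choosing the first row forces the second. A small sketch (function name is ours), run on the example above:

```python
from itertools import product

def count_2xk_tables(a, b):
    """Count 2 x k contingency tables with row sums a = (a1, a2) and
    column sums b, by enumerating the first row; the second row is then
    forced, and is non-negative because each entry is bounded by b[j]."""
    assert len(a) == 2 and sum(a) == sum(b)
    return sum(1 for first in product(*(range(x + 1) for x in b))
               if sum(first) == a[0])

# The example from the talk: row sums (4, 3), column sums (3, 2, 2).
print(count_2xk_tables((4, 3), (3, 2, 2)))   # prints 8
```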

Reduction to computing Kostka numbers
We shall exhibit a reduction from the problem of counting the number of 2 × k contingency tables with row sums a = (a_1, a_2), a_1 ≥ a_2, and column sums b = (b_1, …, b_k), to that of counting the Young tableaux having shape λ = (a_1 + a_2, a_2) and content μ = (b, a_2).

RSK correspondence
The Robinson-Schensted-Knuth (RSK) correspondence gives a bijection between the set I(a, b) of contingency tables having row sums a and column sums b, and the set ∪_λ' T(λ', a) × T(λ', b) of pairs of tableaux of a common shape λ' having contents a and b respectively.

RSK correspondence
(The slides step through the RSK insertion on the example table, building the tableau pair (P, Q) one entry at a time.)
Content of P = (3, 2, 2) = b; content of Q = (4, 3) = a.

RSK correspondence
Thus, the RSK correspondence gives us the identity |I(a, b)| = Σ_λ' K_λ'a K_λ'b.
 K_λ'a > 0 implies that λ'_1 ≥ a_1 and that λ'_2 ≤ a_2, but b is arbitrary, so the summation is over a set exponential in the size of (a, b).

RSK correspondence P Q Content P = b Content Q = a  Q is fully determined by its shape and content since it has only 1’s and 2’s. K λ’a > 0 In other words K λ’a > 0 implies that K λ’a = 1.

RSK correspondence
 The common shape of P and Q can be any s = (s_1, s_2) such that s_1 ≥ a_1 and s_2 ≤ a_2, but none other.

Reduction to computing Kostka numbers
 Extend P, by padding it with copies of k+1, to a tableau T of shape λ and content μ, where λ = (a_1 + a_2, a_2) and μ = (b, a_2).
In our example, shape λ = (7, 3) and content μ = (3, 2, 2, 3).

From contingency tables to Kostka numbers
Row sums a = content of Q = (4, 3); column sums b = content of P = (3, 2, 2); shape λ = (7, 3); content μ = (3, 2, 2, 3).
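The reduction can be checked numerically on this example: the number of contingency tables with row sums (4, 3) and column sums (3, 2, 2) should equal the Kostka number K_λμ for λ = (7, 3), μ = (3, 2, 2, 3). A self-contained brute-force sketch (function names are ours, not from the talk):

```python
from itertools import product

def count_tableaux(shape, content):
    """Brute-force count of Young tableaux with the given shape/content."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    remaining = list(content)
    rows = [[] for _ in shape]

    def fill(idx):
        if idx == len(cells):
            return 1
        r, c = cells[idx]
        total = 0
        for v in range(1, len(remaining) + 1):
            if remaining[v - 1] == 0:
                continue
            if c > 0 and rows[r][c - 1] > v:    # rows weakly increasing
                continue
            if r > 0 and rows[r - 1][c] >= v:   # columns strictly increasing
                continue
            remaining[v - 1] -= 1
            rows[r].append(v)
            total += fill(idx + 1)
            rows[r].pop()
            remaining[v - 1] += 1
        return total

    return fill(0)

def count_2xk_tables(a, b):
    """Count 2 x k contingency tables by enumerating the first row."""
    return sum(1 for first in product(*(range(x + 1) for x in b))
               if sum(first) == a[0])

# The reduction on the slide's example:
a, b = (4, 3), (3, 2, 2)
lam = (a[0] + a[1], a[1])        # λ = (7, 3)
mu = b + (a[1],)                 # μ = (3, 2, 2, 3)
print(count_2xk_tables(a, b) == count_tableaux(lam, mu))   # prints True
```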

From Kostka numbers to Littlewood-Richardson coefficients
Given a shape λ and content μ = (μ_1, …, μ_s), let α = (μ_2, μ_2 + μ_3, …, μ_2 + … + μ_s), and let ν = λ + α.
Claim: K_λμ = c_λα^ν.

From Kostka numbers to Littlewood-Richardson coefficients
(The slide shows a tableau T and the corresponding LR skew tableau S.)

From Kostka numbers to Littlewood-Richardson coefficients
Any tableau of shape λ and content μ can be embedded in an LR skew tableau of shape λ*α and content ν.


From Kostka numbers to Littlewood-Richardson coefficients
Any LR skew tableau of shape λ*α and content ν, when restricted to λ, has content μ.


Future directions
Do there exist Fully Polynomial Randomized Approximation Schemes (FPRAS) for the evaluation of these quantities?

Thank You!

References
[BF97] A. Barvinok and S. V. Fomin, Sparse interpolation of symmetric polynomials, Advances in Applied Mathematics, 18 (1997).
[BGR04] S. Billey, V. Guillemin, E. Rassart, A vector partition function for the multiplicities of sl_k(C), Journal of Algebra, 278 (2004), no. 1.
[C03] C. Cochet, Kostka numbers and Littlewood-Richardson coefficients, preprint (2003).
[DKM79] M. Dyer, R. Kannan and J. Mount, Sampling contingency tables, Random Structures and Algorithms, (1979).
[F97] W. Fulton, Young Tableaux, London Mathematical Society Student Texts 35 (1997).
[R04] E. Rassart, Geometric approaches to computing Kostka numbers and Littlewood-Richardson coefficients, preprint (2003).