A Multidimensional Lorenz Dominance Relation


Standard of living has multiple attributes; hence, methods of measuring inequality need to be extended to the multidimensional context. As in the unidimensional theory, there are two approaches to comparing the degrees of inequality of alternative distributions: (1) complete ordering by means of inequality indices [see, for instance, Kolm (1977) and Tsui (1995, 1999)], though different indices may lead to different orderings; (2) partial ordering by dominance criteria, which is not necessarily complete but intuitively more acceptable.

In the unidimensional theory the most widely used dominance relation is Lorenz dominance. This paper considers the problem of obtaining a multidimensional Lorenz dominance relation (MLDR). If the relative weights of the different attributes are known, the problem is trivial; otherwise there is no obvious or unique answer, and the literature contains several suggestions. This paper develops a general definition of an MLDR, proposes two axioms that an MLDR may reasonably be expected to satisfy, notes that the existing literature does not seem to contain an example of an MLDR satisfying both axioms, and constructs one that does.

We shall be concerned with relative inequality and confine attention to discrete distributions. There are n individuals and m attributes; X = ((x_pj)) is the distribution matrix, where x_pj is the amount of the j-th attribute allocated to the p-th individual, and x_p and x_j are, respectively, the p-th row and the j-th column of X. X is non-negative, with at least one positive entry in each column. For technical reasons, assume that no attribute is equally distributed. Let 𝒳 denote the set of all such matrices X. A binary relation D on 𝒳 is called a multidimensional inequality dominance relation (MIDR) if it satisfies the following conditions:

Continuity (CONT), Quasi-ordering (QORD), Ratio Scale Invariance (RSI), Invariance w.r.t. Row Permutations (IRP), and Uniform Lorenz Majorization (ULM) [if X and Y in 𝒳 are such that X = BY for some positive, symmetric, bistochastic matrix B, then X D Y]. ULM is similar to (but technically independent of) Kolm's (1977) axiom of Uniform Majorization [if X = BY for some bistochastic matrix B, then society prefers X to Y]. The antecedent of ULM implies that as we move from Y to X, the Lorenz curve of each attribute moves “upward” in such a way that the area between the line of equality and the Lorenz curve decreases in the same proportion for each attribute.
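The effect described in the ULM antecedent can be illustrated numerically. B and Y below are our own hypothetical example (B is positive, symmetric, and bistochastic), and the helper functions are kept inline so the snippet stands alone:

```python
def lorenz_points(x):
    """Cumulative shares of the total, individuals sorted poorest first."""
    xs, total, out, cum = sorted(x), sum(x), [], 0.0
    for v in xs:
        cum += v
        out.append(cum / total)
    return out

def lorenz_dominates(x, y):
    """x L y: the Lorenz curve of x lies nowhere below that of y."""
    return all(a >= b - 1e-12 for a, b in zip(lorenz_points(x), lorenz_points(y)))

def matmul(B, Y):
    return [[sum(B[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(B))]

# Positive, symmetric, bistochastic B: rows and columns each sum to 1.
B = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
Y = [[0, 1],
     [2, 3],
     [4, 8]]
X = matmul(B, Y)  # averaging across individuals, column totals preserved
```

Each column of X = BY Lorenz-dominates the corresponding column of Y, consistent with the “upward” movement of the Lorenz curves described above.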

For all X and Y in 𝒳, X D Y is interpreted to mean that the overall degree of inequality in the distribution matrix X is no more than that in the distribution matrix Y, whatever the method of measuring inequality may be. Let L be the unidimensional Lorenz dominance relation on the set of non-negative distribution vectors. A multidimensional Lorenz dominance relation (MLDR), L_M, on 𝒳 is an MIDR such that if m = 1, then L_M = L.
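The unidimensional relation L can be made concrete with a short pure-Python sketch (the function names are our own; a small tolerance guards against floating-point noise):

```python
def lorenz_points(x):
    """Cumulative shares of the total amount, individuals sorted poorest first."""
    xs, total, out, cum = sorted(x), sum(x), [], 0.0
    for v in xs:
        cum += v
        out.append(cum / total)
    return out

def lorenz_dominates(x, y):
    """x L y: the Lorenz curve of x lies nowhere below that of y."""
    return all(a >= b - 1e-12 for a, b in zip(lorenz_points(x), lorenz_points(y)))
```

For instance, lorenz_dominates([1, 2, 3], [0, 1, 5]) holds: the cumulative shares 1/6, 1/2, 1 lie nowhere below 0, 1/6, 1.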

The literature contains several specific MLDRs. Two examples: (1) for all X and Y in 𝒳, X L_M Y if and only if (Xw) L (Yw) for all w in R^m_++; (2) for all X and Y in 𝒳, X L_M Y if and only if x_j L y_j for all j = 1, 2, …, m. We impose the following two axioms on any MLDR L_M. Axiom 1 [Comonotizing Majorization (CM)]: for all X and Y in 𝒳 such that (i) X is mixed monotonic and (ii) Y is a comonotonization of X, X P_M Y, where P_M is the asymmetric component of L_M.
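Example (2) is straightforward to check in code, column by column; Example (1), by contrast, quantifies over all positive weight vectors and so cannot be verified by a finite loop in general. A sketch of the column-wise relation (helpers repeated so the snippet stands alone; names are our own):

```python
def lorenz_dom(x, y):
    """Unidimensional Lorenz dominance x L y."""
    def pts(v):
        vs, total, out, cum = sorted(v), sum(v), [], 0.0
        for t in vs:
            cum += t
            out.append(cum / total)
        return out
    return all(a >= b - 1e-12 for a, b in zip(pts(x), pts(y)))

def columnwise_mldr(X, Y):
    """Example (2): X L_M Y iff each column of X Lorenz-dominates
    the corresponding column of Y."""
    return all(lorenz_dom(xc, yc) for xc, yc in zip(zip(*X), zip(*Y)))
```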

[A real vector whose components are in non-increasing (resp. non-decreasing) order is non-increasing (resp. non-decreasing) monotonic. A matrix is comonotonic if either all its columns are non-increasing monotonic or all of them are non-decreasing monotonic. It is mixed monotonic if each of its columns is monotonic (non-increasing or non-decreasing) but the matrix itself is not comonotonic. For any matrix, its comonotonization is a comonotonic matrix obtained by rearranging the entries in each column, if necessary. Obviously, a matrix has two comonotonizations (non-increasing and non-decreasing).]
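The comonotonization described above amounts to sorting every column into one common order; a minimal sketch:

```python
def comonotonize(X, decreasing=False):
    """Return the non-decreasing (or, if requested, non-increasing)
    comonotonization of X: each column sorted into the same order."""
    cols = [sorted(c, reverse=decreasing) for c in zip(*X)]
    return [list(row) for row in zip(*cols)]
```

For the mixed monotonic matrix [[3, 1], [2, 2], [1, 3]] (first column non-increasing, second non-decreasing), the non-decreasing comonotonization is [[1, 1], [2, 2], [3, 3]].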

Axiom 2 [Prioritization of Attributes under Comonotonicity (PAC)]: for all comonotonic X in 𝒳 and for all i, j = 1, 2, …, m such that x_j L x_i, if y in R^n_+ is such that (i) y ≠ 0 and y ≠ k·1_n for any scalar k, (ii) y is comonotonic with the columns of X, and (iii) y L x_i for all i = 1, 2, …, m, then X_{−i,y} L_M X_{−j,y}, where X_{−i,y} and X_{−j,y} are the matrices obtained by replacing the i-th and the j-th columns of X, respectively, by y. Intuitively, PAC means that if x_j Lorenz-dominates x_i and y Lorenz-dominates all the columns of X, then replacing x_i by y in X yields a better distribution (in terms of L_M) than that obtained by making the same replacement for x_j; i.e., it is “more important” to reduce the more acute inequalities.

CM and PAC are independent. The existing literature does not seem to contain an example of an MLDR that satisfies both CM and PAC. For instance, Example (1) above [dominance by all positive weights] satisfies CM but violates PAC, while Example (2) [column-wise dominance] satisfies PAC but violates CM. Does there exist an MLDR satisfying both CM and PAC? The answer is in the affirmative. For all X in 𝒳, let X^ denote the ‘scaled’ version of X obtained by dividing each entry by the arithmetic mean of the column containing it.

Let Com(X^) be a comonotonization of X^ and C(X^) the matrix of covariances of the columns of Com(X^). For all X in 𝒳, let w(X^) be the first eigenvector (i.e., the eigenvector associated with the maximal eigenvalue) of C(X^). Consider the following relation L* on 𝒳: for all X and Y in 𝒳, X L* Y if and only if [X^w(X^)] L [Y^w(Y^)]. Proposition: L* is an MLDR satisfying both CM and PAC.
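The construction of L* can be sketched end-to-end in pure Python. The helper functions and the power-iteration routine for the first eigenvector are our own choices; since the covariances of a comonotonic matrix are non-negative, the iteration (started from a positive vector) settles on non-negative weights. The sketch assumes, as above, that no attribute is equally distributed (otherwise the covariance matrix could vanish):

```python
def scale(M):
    """X^: divide each entry by the arithmetic mean of its column."""
    n = len(M)
    mu = [sum(c) / n for c in zip(*M)]
    return [[row[j] / mu[j] for j in range(len(mu))] for row in M]

def comonotonize(M):
    """Non-decreasing comonotonization: each column sorted ascending."""
    cols = [sorted(c) for c in zip(*M)]
    return [list(r) for r in zip(*cols)]

def covariance_matrix(M):
    """Covariances of the columns of M."""
    n, cols = len(M), list(zip(*M))
    mu = [sum(c) / n for c in cols]
    return [[sum((cols[i][p] - mu[i]) * (cols[j][p] - mu[j]) for p in range(n)) / n
             for j in range(len(cols))] for i in range(len(cols))]

def first_eigenvector(C, iters=500):
    """Power iteration for the eigenvector of the maximal eigenvalue."""
    m = len(C)
    w = [1.0] * m
    for _ in range(iters):
        v = [sum(C[i][j] * w[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in v) ** 0.5
        w = [x / norm for x in v]
    return w

def lorenz_points(x):
    xs, total, out, cum = sorted(x), sum(x), [], 0.0
    for v in xs:
        cum += v
        out.append(cum / total)
    return out

def l_star(X, Y):
    """X L* Y iff (X^ w(X^)) Lorenz-dominates (Y^ w(Y^))."""
    def weighted(M):
        S = scale(M)
        w = first_eigenvector(covariance_matrix(comonotonize(S)))
        return [sum(r[j] * w[j] for j in range(len(w))) for r in S]
    return all(a >= b - 1e-12 for a, b in
               zip(lorenz_points(weighted(X)), lorenz_points(weighted(Y))))
```

On a toy pair of matrices, l_star([[1, 1], [2, 2], [3, 3]], [[0, 0], [1, 1], [5, 5]]) holds while the reverse comparison fails, matching the intuition that the first matrix is the more equal distribution.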