Some Illustrations of Econometric Problems (Topic 1)

Econometrics attempts to measure quantitatively the concepts developed by economic theory and to use those measures to support or refute the theory.

Problem 1: Estimating the demand curve of a product and measuring the price elasticity of demand (PED) at a single point. Step 1: Collect data (after sorting out the various problems associated with doing so).

One additional problem: how do you know that this data is suitable for estimating a demand curve? The data needs to come from a period during which consumer income, the prices of related goods, and consumer tastes and preferences all remained constant.

Solution: adjust the data and/or discard some of it. This is known as the Identification Problem.

Step 2: Identify that PED ≡ d(ln Q)/d(ln P). Step 3: Convert the numbers in the dataset to natural-log form; that is, change P = 2 to ln P = ln 2, etc.

Step 4: Propose the regression model ln Q = α + β ln P + ε, and recognize that β = d(ln Q)/d(ln P) = PED.

Step 5: Impose the assumptions of the Classical Linear Regression Model, perform a linear regression of ln Q on ln P, and estimate β.
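
A minimal sketch of Steps 3-5 in Python, assuming pandas and statsmodels are available; the file name and the column names Q and P are hypothetical:

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("demand_data.csv")   # hypothetical dataset with quantity Q and price P
lnQ = np.log(df["Q"])                 # Step 3: convert to natural logs
lnP = np.log(df["P"])
X = sm.add_constant(lnP)              # model: lnQ = alpha + beta*lnP + error
fit = sm.OLS(lnQ, X).fit()            # Step 5: OLS under the CLRM assumptions
print(fit.params)                     # the coefficient on lnP is the estimated PED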

Problem 2: Do we suffer from money illusion? Testing whether a demand function is homogeneous of degree 0. Theory: demand stays unchanged if all prices and consumer income change by the same proportion; the rational consumer does not suffer from money illusion.

That is, the demand function is homogeneous of degree 0. Procedure of the test. Step 1: estimate ln Q = α + β ln P + β1 ln p1 + β2 ln p2 + … + βn ln pn + γ ln Y. Then test the hypothesis that the price and income elasticities sum to zero: β + β1 + β2 + … + βn + γ = 0.
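
A hedged sketch of this test in Python, assuming a hypothetical data file whose columns already hold the logged variables lnQ, lnP, lnp1, lnp2 and lnY; statsmodels' f_test handles the linear restriction:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("demand_system.csv")   # hypothetical file with columns lnQ, lnP, lnp1, lnp2, lnY
fit = smf.ols("lnQ ~ lnP + lnp1 + lnp2 + lnY", data=df).fit()
# H0: the price and income elasticities sum to zero (homogeneity of degree 0)
print(fit.f_test("lnP + lnp1 + lnp2 + lnY = 0"))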

Problem 3: Does a production function exhibit constant, increasing or decreasing returns to scale? Consider a Cobb-Douglas production function Q = A L^a K^b with A, a, b > 0: CRS if a + b = 1, IRS if a + b > 1, and DRS otherwise (a + b < 1).

Step 1: Rewrite the production function as ln Q = ln A + a ln L + b ln K. Step 2: Collect data on Q, L and K. Step 3: Transform each number to its natural-log form.

Step 4: Run a linear regression of ln Q on ln L and ln K and estimate the coefficients a and b. Step 5: Test the hypotheses H0: a + b = 1 versus H1: a + b > 1, and/or H0: a + b = 1 versus H1: a + b < 1.
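
A hedged sketch of Steps 1-5 in Python, assuming a hypothetical data file with columns Q, L and K:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("production_data.csv")            # hypothetical file with columns Q, L, K
for col in ("Q", "L", "K"):                        # Steps 2-3: data in natural-log form
    df["ln" + col] = np.log(df[col])
fit = smf.ols("lnQ ~ lnL + lnK", data=df).fit()    # Step 4: lnQ = lnA + a*lnL + b*lnK + error
print(fit.params)                                  # estimates of lnA, a and b
print(fit.t_test("lnL + lnK = 1"))                 # Step 5: H0: a + b = 1; the sign of t points to IRS or DRS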

Diagnostic tests. An airline suspects that the demand relationship after September 11 is not the same as it was before. How does it verify this?

Chow test: use the residual sum of squares (RSS), i.e. the sum of squared errors, to evaluate how well an estimated regression line fits the data. A high RSS means a poor fit, and vice versa.

Idea: if the old model is no longer applicable, then fitting it to the newly acquired data will produce a larger RSS than the original value. So reject the null hypothesis that demand is unchanged if the new RSS is too large compared with the old value.

A statistic called the F-statistic, which follows an F-distribution, is used to quantify the notion of 'too large'.
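
One way to implement this is a Chow test. Below is a hedged sketch in Python; the pre- and post-break files and the column names lnQ and lnP are hypothetical:

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df_pre = pd.read_csv("demand_pre.csv")     # hypothetical pre-September-11 data (lnQ, lnP)
df_post = pd.read_csv("demand_post.csv")   # hypothetical post-September-11 data

def rss(data):
    return smf.ols("lnQ ~ lnP", data=data).fit().ssr   # residual sum of squares

k = 2                                       # parameters per regression (intercept and slope)
n = len(df_pre) + len(df_post)
rss_pooled = rss(pd.concat([df_pre, df_post]))          # restricted: one demand curve throughout
rss_split = rss(df_pre) + rss(df_post)                  # unrestricted: separate pre/post curves

F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
print(F, stats.f.sf(F, k, n - 2 * k))       # reject "demand unchanged" if the p-value is small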

Revision of Probability Theory (Topic 2)

Suppose that an experiment is scheduled to be undertaken. What is the chance of a success? What is the chance of a failure?

Success and Failure are called outcomes of the experiment, or events. Events may be made up of elementary events. Experiment: tossing a die. An elementary event: the number 3 shows up. An event: a number less than 3 shows up.

Question: what is the probability of getting a 3? Answer: 1/6 (assuming all six outcomes are equally likely). This approach is known as classical probability. If we could not assume that all the outcomes were equally likely, we might proceed as follows.

If the experiment were done a large number of times, say 100, and the number 3 came up 21 times, then P(getting a 3) = 21/100. This is the relative frequency approach to assessing probability. Assigning probability values according to one's own beliefs is the subjective probability method.

We shall follow the relative frequency approach; that is, use past data to assess probability.

Probability theory: the probability of an event e is the number P(e) = f/N, where f is the frequency with which the event occurs, N is the total frequency (number of trials), and N is 'large'.
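
A small simulation (plain NumPy) illustrating the relative frequency definition: as N grows, f/N for the event "a 3 shows up" settles near 1/6:

import numpy as np

rng = np.random.default_rng(0)
for N in (100, 10_000, 1_000_000):
    throws = rng.integers(1, 7, size=N)     # N throws of a fair die
    print(N, (throws == 3).mean())          # relative frequency f/N of getting a 3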

The sample space S is the grey area, with total probability 1. Event A: the red oval. Event B: the blue triangle. A′: everything that is still grey (not A).

Red, white and blue areas together: A or B (A ∪ B). White area: A and B (A ∩ B). Here A and B are not mutually exclusive.

The sample space S is the grey area, with total probability 1. Event A: the red oval. Event B: the blue triangle. Here A and B are mutually exclusive.

Axiomatic probability is a branch of probability theory based on the following axioms. (1) 0 ≤ P(e) ≤ 1. (2) If e and f are a pair of mutually exclusive events, then the probability that both e and f occur is zero: P(e and f) = 0.

(3) P(e) = 1 − P(e′), where e′ is the event "not e". (4) P(S) = 1; that is, the sample space S contains all events that can possibly occur. (5) P(∅) = 0, where ∅ is the non-event; that is, an event not contained within S will not occur.

The addition rule for two events A and B: P(A or B) = P(A) + P(B) − P(A and B). In the Venn diagram, P(A) = the oval area and P(B) = the triangle area, with P(A or B) = r + w + b and P(A and B) = w (r, w, b being the red, white and blue areas). So P(A) + P(B) = r + w + b + w = P(A or B) + P(A and B).

Probability Distributions

An outcome may be deterministic or random; a random outcome may be non-numeric or numeric.

Throwing a die and noting the number on the side facing up has a numeric outcome. Let Y be "the result of throwing a die". Y is a random variable because Y can take any of the values 1, 2, 3, 4, 5 and 6.

The probability distribution of Y (assuming a fair die) is given by the table below:
y    P(Y = y)
1    1/6
2    1/6
3    1/6
4    1/6
5    1/6
6    1/6

Formally, we denote by x_i (for i = 1, 2, …, n) the possible values taken by the random variable X. If p(x_i) is the probability assigned to x_i, then Σ p(x_i) = 1.  (1)

The expected value of a random variable Y, written E(Y), is the value the variable takes on average. The expected value is therefore also called the average or mean of the probability distribution.

The standard deviation of a random variable Y (σ_Y) measures its dispersion around the expected value. The variance (σ²_Y) is the square of the standard deviation.

Expectation: the expected value of X, written E(X), is the weighted average of the values X can take. In notation, E(X) ≡ Σ p(x_i) x_i.  (2)

Calculations: for the probability distribution discussed (the fair die),
E(Y) = (1/6)(1) + (1/6)(2) + (1/6)(3) + (1/6)(4) + (1/6)(5) + (1/6)(6) = 3.5
σ²_Y = (1/6)(1 − 3.5)² + (1/6)(2 − 3.5)² + (1/6)(3 − 3.5)² + (1/6)(4 − 3.5)² + (1/6)(5 − 3.5)² + (1/6)(6 − 3.5)² = 2.917
σ_Y = √2.917 = 1.708
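
These calculations can be checked with a few lines of NumPy:

import numpy as np

y = np.arange(1, 7)                  # possible values 1..6
p = np.full(6, 1 / 6)                # fair-die probabilities
mean = (p * y).sum()                 # E(Y) = 3.5
var = (p * (y - mean) ** 2).sum()    # variance = 2.917 (to 3 d.p.)
print(mean, var, np.sqrt(var))       # 3.5, 2.9166..., 1.7078...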

The expected value of a constant k is k itself: E(k) = k. The idea is that if I am always going to get the same number, say 5, then the expected value is 5.

The expected value of a function g(X) is given by E(g(X)) ≡ Σ p(x_i) g(x_i).  (3) Example: suppose that I stand to win the money value of the square of the number that shows up on the die.

If I throw a 2 I win 4, and if I throw a 6 I get 36, etc. E(X²) = Σ p(x_i) x_i² = (1/6)(1²) + (1/6)(2²) + (1/6)(3²) + (1/6)(4²) + (1/6)(5²) + (1/6)(6²) = 15.167

E(kX) = kE(X), where k is a constant. The variance of X (σ²_X) is the spread around the mean value of X, μ_X. So σ²_X ≡ E(X − μ_X)², where μ_X is the mean of X, i.e. E(X).

(X − μ_X)² = X² − 2Xμ_X + μ_X², so
E(X − μ_X)² = E(X²) − 2E(Xμ_X) + E(μ_X²) = E(X²) − 2μ_X E(X) + μ_X² = E(X²) − 2μ_X μ_X + μ_X².
Hence σ²_X = E(X²) − μ_X².

The standard deviation of X is σ_X = √(σ²_X). In the die-throwing example, μ_X = 3.5 and E(X²) = 15.167, so σ²_X = 15.167 − (3.5)² = 2.917.
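
A quick NumPy check of the shortcut formula σ²_X = E(X²) − μ_X² for the die example:

import numpy as np

x = np.arange(1, 7)
p = np.full(6, 1 / 6)
mu = (p * x).sum()                   # 3.5
e_x2 = (p * x ** 2).sum()            # E(X^2) = 15.1666...
print(e_x2 - mu ** 2)                # 2.9166..., the same as E(X - mu)^2 computed directly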

Theory: if Y ≡ aX + b, where a and b are constants, then μ_Y = aμ_X + b and σ²_Y = a²σ²_X, and so σ_Y = aσ_X (for a > 0). Example: the risk and the return of holding 10 shares are ten times those of holding one share of the same company.

Proof: E(Y) = E(aX + b) = E(aX) + E(b) = aE(X) + b = aμ_X + b.

σ²_Y = E(Y − μ_Y)² = E(aX + b − aμ_X − b)² = E(aX − aμ_X)²

= E((aX)²) − 2E(aX · aμ_X) + E((aμ_X)²) = a²E(X²) − 2a²μ_X E(X) + a²μ_X² = a²E(X²) − 2a²μ_X μ_X + a²μ_X² = a²E(X²) − a²μ_X² = a²(E(X²) − μ_X²) = a²σ²_X.

Continuous random variables: each possible value of the random variable X has zero probability but a positive probability density.

The probability density function (pdf) is denoted by f(x); f(x) assigns a probability density to each possible value x that the random variable X may take.

XX The f(.) function assigns a vertical distance to each value of x x Probability Density f(x)

The integral of the pdf over an interval is the probability that the random variable takes a value within that interval: P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

The total probability must be 1: P(−∞ < X < ∞) = ∫_−∞^+∞ f(x) dx = 1.

Example: a pdf is given by f(x) = 3x² for 0 ≤ x ≤ 1, and f(x) = 0 otherwise. Proof that the function is indeed a pdf: ∫_0^1 f(x) dx = ∫_0^1 3x² dx = [x³]_0^1 = 1³ − 0³ = 1.

E(X) = ∫ x f(x) dx = ∫_0^1 x · 3x² dx = ∫_0^1 3x³ dx = [(3/4)x⁴]_0^1 = 0.75

The mode is the value of X that has the maximum density, so it is 1. The median m solves ∫_0^m 3x² dx = 0.5, i.e. [x³]_0^m = m³ = 0.5, so that m = 0.5^(1/3) ≈ 0.794.

σ²_X = ∫_0^1 (x − 0.75)² · 3x² dx = ∫_0^1 3x⁴ dx − ∫_0^1 4.5x³ dx + ∫_0^1 1.6875x² dx = 0.6 − 1.125 + 0.5625 = 0.0375.
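
The whole continuous example can be verified symbolically; a short sketch assuming SymPy is available:

import sympy as sp

x = sp.symbols("x")
f = 3 * x ** 2                                      # the pdf on [0, 1]
print(sp.integrate(f, (x, 0, 1)))                   # 1, so f is a valid pdf
mean = sp.integrate(x * f, (x, 0, 1))               # E(X) = 3/4
var = sp.integrate((x - mean) ** 2 * f, (x, 0, 1))  # variance = 3/80 = 0.0375
median = sp.Rational(1, 2) ** sp.Rational(1, 3)     # solves m**3 = 1/2
print(mean, var, sp.sqrt(var), sp.N(median))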