1 Statistical Tests of Returns to Scale Using DEA
Rajiv D. Banker, Hsihui Chang, Shih-Chi Chang
2 Introduction
Simar and Wilson (2002) conduct simulations to evaluate non-parametric tests of returns to scale in DEA:
– Bootstrap-based tests
– Binomial tests
– DEA-based tests
They claim that when the production technology exhibits CRS:
– their own binomial tests perform quite poorly in almost every instance
– DEA-based tests perform poorly
– Bootstrap-based tests consistently perform best compared to the other tests
3 Objectives of Our Study
Use experimentally designed simulations to evaluate the relative performance of Bootstrap-based and DEA-based test statistics for returns to scale, based on the occurrence of both Type I and Type II errors
A Type I error occurs when one rejects the null hypothesis when it is true; a Type II error occurs when one fails to reject the null hypothesis when the alternative hypothesis is true
4 Preview of Simulation Results
DEA-based tests perform much better than Bootstrap-based tests in terms of the occurrence of Type I errors and are comparable for Type II errors
Simulations reveal that Simar and Wilson (2002) (1) misreport the performance of DEA-based tests, and (2) inflate the performance of their Bootstrap-based tests
The performance of Bootstrap-based tests is very sensitive to the different decision rules employed by Simar and Wilson to evaluate the null hypothesis, contrary to their claim that the rules are equivalent
DEA-based tests have the advantage of using only a small fraction of the CPU time required by Bootstrap-based tests
5 BCC Model of DEA
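The model itself appears only as an image in the original slides; a standard output-oriented BCC (VRS) formulation for evaluating DMU o, written here as a reconstruction in LaTeX, is:

\max_{\varphi,\lambda}\ \varphi
\quad\text{s.t.}\quad \sum_{j=1}^{N} \lambda_j y_{rj} \ge \varphi\, y_{ro},\ r = 1,\dots,s;
\qquad \sum_{j=1}^{N} \lambda_j x_{ij} \le x_{io},\ i = 1,\dots,m;
\qquad \sum_{j=1}^{N} \lambda_j = 1,\ \lambda_j \ge 0.

The optimal value φ* ≥ 1 is the BCC (output) inefficiency score; dropping the convexity constraint Σ λ_j = 1 gives the CCR (CRS) model.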
6 Additive Test Statistics
Simar and Wilson use this statistic for comparison
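The statistics themselves are shown only as images in the deck; under Banker's (1996) framework with additive inefficiency they are typically of the form below (our reconstruction, not a transcription, with θ̂^C and θ̂^V denoting the CCR and BCC output inefficiency estimates):

T^{\mathrm{exp}} = \frac{\sum_{j}(\hat\theta^{C}_j - 1)}{\sum_{j}(\hat\theta^{V}_j - 1)} \sim F_{2N,\,2N},
\qquad
T^{\mathrm{hn}} = \frac{\sum_{j}(\hat\theta^{C}_j - 1)^2}{\sum_{j}(\hat\theta^{V}_j - 1)^2} \sim F_{N,\,N},

the first assuming exponentially distributed inefficiency and the second half-normal inefficiency, with large values of T taken as evidence against constant returns to scale.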
7 Multiplicative Test Statistics
This statistic is suggested when the efficiency is multiplicative, as in y = θ·f(x)
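Again reconstructing the likely form rather than transcribing the slide: with multiplicative inefficiency, the (θ̂ − 1) terms in the additive statistics are replaced by ln θ̂:

T^{\mathrm{exp}} = \frac{\sum_{j}\ln\hat\theta^{C}_j}{\sum_{j}\ln\hat\theta^{V}_j} \sim F_{2N,\,2N},
\qquad
T^{\mathrm{hn}} = \frac{\sum_{j}(\ln\hat\theta^{C}_j)^2}{\sum_{j}(\ln\hat\theta^{V}_j)^2} \sim F_{N,\,N}.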
8 Bootstrap-based Test Statistics
Observe that these statistics are very similar to the multiplicative DEA-based test statistics, except for the bootstrap
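The exact definitions are again shown only as images; paraphrasing Simar and Wilson (2002) rather than the slide, the statistics being bootstrapped are ratios of CRS to VRS efficiency estimates along the lines of

\hat T_1 = \frac{\sum_j \hat\theta^{C}_j}{\sum_j \hat\theta^{V}_j},
\qquad
\hat T_2 = \frac{1}{N}\sum_j \frac{\hat\theta^{C}_j}{\hat\theta^{V}_j},

where the θ̂ are Farrell efficiency estimates (≤ 1), so both statistics lie in (0, 1] and values well below 1 are evidence against CRS; their null distributions are approximated by bootstrapping rather than by reference to an F distribution.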
9 Experimental Design
Design elements:
– production technology
– sample size
– range of input values
– efficiency distribution
Composed production function, which includes a mix of a shifted Cobb-Douglas production function that exhibits variable returns to scale and a linear production function that characterizes constant returns to scale
10 Production Technology
Mixing parameter values: 0, 0.25, 0.50, 0.75, and 1
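The functional form is shown only as an image; as a labeled assumption (δ and the other symbols are our notation, not the slide's), the composed technology can be written as a weighted mix of a linear CRS frontier and a shifted Cobb-Douglas VRS frontier:

y = \delta\,\beta x + (1-\delta)\left(a + b\,x^{\gamma}\right), \qquad \delta \in \{0,\ 0.25,\ 0.5,\ 0.75,\ 1\},

so that δ = 1 gives a pure CRS technology and δ = 0 a pure VRS technology.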
12 Sample Size
n = 40 and n = 60
Input Range
Three different ranges for input values, generated uniformly over the intervals:
– 5 to 15 (both increasing and decreasing RTS)
– 5 to 10 (only increasing RTS)
– 10 to 15 (only decreasing RTS)
13 Efficiency Distribution
v is generated for each observation j ∈ {1, …, N} from a standard normal distribution N(0,1)
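A minimal sketch of generating one simulated dataset, assuming the composed frontier written above (with hypothetical parameter values) and a multiplicative half-normal inefficiency y = f(x)·exp(−|v|); the frontier parameters and the way v enters the data are assumptions, not transcriptions of the slides:

import numpy as np

rng = np.random.default_rng(0)

def frontier(x, delta=0.5, beta=1.0, a=2.0, b=1.0, gamma=0.5):
    # hypothetical composed frontier: delta * linear (CRS) + (1 - delta) * shifted Cobb-Douglas (VRS)
    return delta * beta * x + (1 - delta) * (a + b * x ** gamma)

N = 60                                  # sample size (n = 40 or 60 in the experiments)
x = rng.uniform(5.0, 15.0, size=N)      # inputs drawn uniformly over [5, 15]
v = rng.standard_normal(N)              # v ~ N(0, 1), as on the slide
y = frontier(x) * np.exp(-np.abs(v))    # assumed multiplicative half-normal inefficiency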
14 Simulated Observations
We use the Frontier Efficiency Analysis with R (FEAR) package (Wilson, 2008) to compute output-oriented DEA inefficiency scores for both the CCR and BCC models.
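FEAR's own interface is not reproduced in the deck; as an illustrative stand-in (not the FEAR API), an output-oriented DEA inefficiency score can be computed with a generic LP solver such as scipy:

import numpy as np
from scipy.optimize import linprog

def dea_output_inefficiency(X, Y, vrs=True):
    # Output-oriented DEA inefficiency scores (phi >= 1), one per DMU.
    # X: (N, m) inputs, Y: (N, s) outputs; vrs=True -> BCC model, vrs=False -> CCR model.
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    N, m = X.shape
    s = Y.shape[1]
    scores = np.empty(N)
    for o in range(N):
        c = np.r_[-1.0, np.zeros(N)]                       # decision vars [phi, lambda]; maximize phi
        A_in = np.hstack([np.zeros((m, 1)), X.T])          # sum_j lambda_j x_ij <= x_io
        A_out = np.hstack([Y[o].reshape(-1, 1), -Y.T])     # phi*y_ro - sum_j lambda_j y_rj <= 0
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[X[o], np.zeros(s)]
        A_eq = np.r_[0.0, np.ones(N)].reshape(1, -1) if vrs else None   # convexity: sum lambda = 1
        b_eq = np.array([1.0]) if vrs else None
        bounds = [(None, None)] + [(0.0, None)] * N        # phi free, lambda_j >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        scores[o] = res.x[0]                               # optimal phi for DMU o
    return scores

# e.g. theta_v = dea_output_inefficiency(x.reshape(-1, 1), y.reshape(-1, 1), vrs=True)   # BCC
#      theta_c = dea_output_inefficiency(x.reshape(-1, 1), y.reshape(-1, 1), vrs=False)  # CCR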
15 Bootstrap-based Test Procedures
(1) Generate a random sample of size N from the DEA inefficiency index
(2) Compute adjusted output values
(3) Re-estimate the CCR and BCC models using the adjusted output and the original input to obtain the bootstrap DEA inefficiency estimates
(4) Compute the test statistics T1 and T2
16 Bootstrap-based Test Procedures (cont.)
(5) Repeat steps (1)–(4) B = 2000 times to provide a set of estimates Tib, i = 1, 2 and b = 1, 2, …, B
(6) Construct the empirical distributions of the bootstrap estimates Tib, i = 1, 2
(7) Use the empirical distributions of Tib to compute the bias of the bootstrap estimates and construct the bias-corrected distributions of Tib
(8) Select the nominal size (i.e., α = 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3)
A code sketch of steps (1)–(8) follows.
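A minimal sketch of steps (1)–(8), assuming the dea_output_inefficiency helper sketched above (any routine with the same interface works), a ratio-of-sums form for the test statistic, and a naive resampling step in place of Simar and Wilson's smoothed bootstrap; the bias correction of step (7) is omitted:

import numpy as np

def bootstrap_crs_test(x, y, dea_fn, B=2000, alpha=0.05, rng=None):
    # Naive sketch of steps (1)-(8): bootstrap test of H0: CRS against the VRS alternative.
    # dea_fn(X, Y, vrs) must return output-oriented inefficiency scores (>= 1).
    rng = np.random.default_rng() if rng is None else rng
    X, Y = x.reshape(-1, 1), y.reshape(-1, 1)
    eff_c = 1.0 / dea_fn(X, Y, vrs=False)       # Farrell efficiencies under CRS (<= 1)
    eff_v = 1.0 / dea_fn(X, Y, vrs=True)        # Farrell efficiencies under VRS (<= 1)
    T_obs = eff_c.sum() / eff_v.sum()           # assumed ratio-of-sums statistic, in (0, 1]
    y_frontier = y / eff_c                      # outputs projected onto the estimated CRS frontier
    T_boot = np.empty(B)
    for b in range(B):
        draw = rng.choice(eff_c, size=len(y), replace=True)      # (1) naive draw; S&W use a smoothed draw
        y_star = y_frontier * draw                               # (2) adjusted outputs under H0
        tc = 1.0 / dea_fn(X, y_star.reshape(-1, 1), vrs=False)   # (3) re-estimate CCR ...
        tv = 1.0 / dea_fn(X, y_star.reshape(-1, 1), vrs=True)    #     ... and BCC on adjusted outputs
        T_boot[b] = tc.sum() / tv.sum()                          # (4) recompute the statistic
    p_value = np.mean(T_boot < T_obs)           # (5)-(6) empirical distribution, (5.11)-type p-value
    return T_obs, p_value, p_value < alpha      # (8) reject H0 (CRS) if p < alpha

# e.g. T, p, reject = bootstrap_crs_test(x, y, dea_output_inefficiency, B=2000, alpha=0.05)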
17 The (5.6) Test Procedure
– Use the bootstrap estimates Tib and the original estimates Ti to determine the critical value Cα for each selected nominal size, following the procedures outlined in Eq. (5.1)–(5.6) of Simar and Wilson (2002, pp. 121-122)
– Use the decision rule in Eq. (5.6) to evaluate the null hypothesis of constant returns to scale, i.e., reject H0 if the observed Ti is less than (1 − Cα)
18 The (5.11) Test Procedure
– Estimate the probability value p that the bootstrapped Tib is less than the observed Ti, based on the procedures outlined in Eq. (5.7)–(5.11) of Simar and Wilson (2002, p. 122)
– Compare this probability value with the selected nominal size to evaluate the null hypothesis of constant returns to scale, i.e., reject H0 if p < α
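In symbols (paraphrasing the slide), with B bootstrap replicates:

\hat p_i = \frac{1}{B}\sum_{b=1}^{B} \mathbf{1}\{\,T_i^{b} < \hat T_i\,\}, \qquad \text{reject } H_0 \text{ at nominal size } \alpha \text{ if } \hat p_i < \alpha.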
19 Number of Trials and Bootstraps
1,000 experiments (trials), each with 2,000 bootstraps for the Bootstrap-based test procedures
20 CPU Time
For every 1,000 experiments, each Bootstrap-based test procedure using the FEAR program took on average 8,048 seconds of CPU time
For every 1,000 experiments, each DEA-based test procedure used on average only 3 seconds of CPU time
CPU times are for a Lenovo desktop PC equipped with an Intel Core E8400 CPU @ 3.00 GHz and 2.00 GB of RAM
21 Simulation Results
The performance of DEA-based statistics is comparable to that of the Bootstrap-based statistics in terms of the occurrence of Type II errors
DEA-based statistics outperform Bootstrap-based statistics in terms of the occurrence of Type I errors when the null is true
The performance of Bootstrap-based statistics is much worse when the decision rule (5.6) is employed to evaluate the null hypothesis than when (5.11) is used
22 Conclusion
The performance of the DEA-based statistics proposed by Banker (1996) is comparable to that of the Bootstrap-based statistics suggested by Simar and Wilson (2002) for the occurrence of Type II errors and is, in fact, superior for Type I errors
DEA-based statistics have the advantage of using much less CPU time than the Bootstrap test procedures
23 Implications
There is no need to use the Bootstrap-based test procedures, since they yield results comparable to the DEA-based procedures
There is a need to focus more research on direct DEA-based statistical tests