1
Empirical steps towards a research design in multi-attribute non-market valuation
CAMP RESOURCE XVII, June 24-25, 2010
2
Plan of the talk
- Focus on SP data and survey development
- Experimental designs evolved from orthogonality
- Choice of elicitation method (incentive compatibility)
- Gumbel heteroskedasticity vs utility heteroskedasticity
- Generalised logit
- Decision heuristics as systematic components of heterogeneity: ATTRIBUTE ATTENDANCE
- Maybe also:
  - Order effects in repeated choices
  - Context effects
  - Subjective scenario conjectures
3
Basic generic question
- What should I think about when starting a new SP survey/study for multi-attribute non-market valuation? What do recent research results suggest?
- The first issue is whether the new SP data are needed to enrich existing RP data, or whether they are to serve stand-alone.
- If data enrichment, the typical concern is to supplement the existing RP data and break away from multicollinearity (this needs special experimental designs "pivoted" on the existing data).
4
Assume the study is only SP: typical steps?
- Define the research question
- Draft the survey; decide on:
  - provision rules and policy deliverables
  - preference elicitation mode and incentive compatibility
  - administration mode (face-to-face, CAPI, web-based, phone-supported, paper and pencil, etc.)
  - information to deliver, and feedback on the respondent's understanding of this information
  - information on attribute processing, scenario conjectures, etc.
- Run focus groups
- Get starting designs (orthogonal on the differences)
5
Typical steps (cont'd)
- Simulate data; estimate the specification of interest and welfare estimates for scenarios of interest (here you find out whether the data you collect give you back the specification you need, e.g. an animal welfare study)
- Run pilot(s)
- Amend the draft survey
- Obtain priors to optimize sample-size use:
  - priors on parameter estimates (beta-hats)
  - priors on the specific functional form (choice probabilities)
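The simulate-and-estimate step can be sketched in a few lines: generate choices from a multinomial logit with assumed "true" coefficients, then check that maximum likelihood gives them back. The sample size, number of alternatives, and coefficient values below are all illustrative assumptions, not taken from any study in the talk.

```python
# Sketch: simulate MNL choice data with known coefficients and verify
# that maximum likelihood recovers them. All values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
N, J, K = 1000, 3, 2                     # choice tasks, alternatives, attributes
beta_true = np.array([1.0, -0.5])

X = rng.normal(size=(N, J, K))           # simulated attribute levels
V = X @ beta_true                        # deterministic utilities (N x J)
P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=p) for p in P])   # simulated choices

def neg_loglik(b):
    U = X @ b
    return -(U[np.arange(N), y] - np.log(np.exp(U).sum(axis=1))).sum()

beta_hat = minimize(neg_loglik, np.zeros(K), method="BFGS").x
print(beta_hat)      # should be close to beta_true
```

If the recovered coefficients differ systematically from the assumed ones, the design cannot support the specification you need, which is exactly what this step is meant to reveal before fieldwork.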
6
ExpDes: one shot vs sequential
One shot:
- Use priors and select a design criterion (or a combination of design criteria with respective weights): D-efficiency, S-efficiency, C-efficiency, minimum entropy, minimum complexity, etc.
Sequential:
- Determine the size of the sampling waves
- Decide on the Bayesian rule to adopt to embed sequential learning
- Each previous phase "informs" the design of all following stages
- The same sample size can give one-third more accuracy
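A minimal sketch of the determinant-based criterion behind D-efficiency: the D-error of a candidate MNL design evaluated at prior coefficient values (lower is better). The design array and priors here are illustrative assumptions, not the output of real design software.

```python
# Sketch: D-error of a candidate design under prior coefficients.
# D-error = det(I)^(-1/K), where I is the MNL Fisher information
# summed over choice sets. Design and priors are illustrative only.
import numpy as np

def d_error(X, beta):
    """X: (S, J, K) array of S choice sets, J alternatives, K attributes."""
    K = len(beta)
    info = np.zeros((K, K))
    for Xs in X:
        p = np.exp(Xs @ beta)
        p /= p.sum()
        # MNL Fisher information contribution of this choice set
        info += Xs.T @ (np.diag(p) - np.outer(p, p)) @ Xs
    return np.linalg.det(info) ** (-1.0 / K)

rng = np.random.default_rng(0)
design = rng.choice([-1.0, 0.0, 1.0], size=(8, 3, 2))   # 8 sets, 3 alts, 2 attrs
err = d_error(design, beta=np.array([0.5, -0.5]))
print(err)
```

A one-shot design search would score many candidate designs this way and keep the one with the lowest D-error; a Bayesian variant averages the criterion over draws from the prior rather than a single point.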
7
Elicitation methods
- Pair-wise
- Pair-wise + status quo
- Full ranking
- Rating
- Best-worst
8
Inter-agency: joint decision-making and group interactions
- Dyadic (e.g. couples, Beharry et al.) or triadic (couples + child, Marcucci et al.)
- Consensus-seeking with interaction (e.g. connected business solutions, location decisions, etc.)
9
Types of heteroskedastic effects
- Gumbel error heteroskedasticity
  - Common form: sigma = exp(z'theta), so that sigma > 0
  - z = vector of choice-task-related effects (e.g. measures of choice complexity; Swait and Adamowicz 2001, DeShazo and Fermo 2002)
- Utility heteroskedasticity
  - Var(U_1,2) ≠ Var(U_sq) (Scarpa et al. 2005, Hess and Rose 2009)
  - Common form: an additional error component
- Both forms are likely to co-exist, and status-quo choice tasks often induce the latter
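The Gumbel form above can be sketched directly: a task-specific scale exp(z'theta) multiplies the utilities in the logit, and since the Gumbel error variance is inversely related to this scale, tasks with a larger scale produce sharper choice probabilities. The utilities, covariate z, and theta values below are illustrative assumptions.

```python
# Sketch: Gumbel (scale) heteroskedasticity across choice tasks.
# lambda_t = exp(z_t' theta) > 0 multiplies utilities; larger scale
# means smaller error variance and sharper choices. Values are assumed.
import numpy as np

def het_logit_probs(V, z, theta):
    lam = np.exp(z @ theta)          # positive scale, as on the slide
    eV = np.exp(lam * V)
    return eV / eV.sum()

V = np.array([1.0, 0.5, 0.0])        # deterministic utilities
z = np.array([1.0])                  # e.g. a choice-complexity measure
p_sharp = het_logit_probs(V, z, theta=np.array([1.0]))    # low-noise task
p_noisy = het_logit_probs(V, z, theta=np.array([-1.0]))   # high-noise task
print(p_sharp, p_noisy)
```

In the noisy task the probabilities move toward 1/J even though the utilities are unchanged, which is why ignoring scale differences across tasks confounds taste and variance.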
10
Scale and utility effects in logit
- Taste heterogeneity: MXL (mixed logit)
- Scale heterogeneity: S-MNL (scaled MNL)
- Generalised logit: G-MNL
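How G-MNL nests the other two can be shown by drawing individual coefficients. A sketch of the draw under the G-MNL specification of Fiebig et al. (2010), beta_n = sigma_n*beta + [gamma + (1−gamma)*sigma_n]*eta_n with sigma_n = exp(tau*eps_n); setting tau = 0 collapses to MXL (taste heterogeneity only) and eta_n = 0 collapses to S-MNL (scale heterogeneity only). All parameter values are illustrative assumptions.

```python
# Sketch: individual coefficient draws under G-MNL (Fiebig et al. 2010):
#   beta_n = sigma_n*beta + [gamma + (1-gamma)*sigma_n]*eta_n
# with scale sigma_n = exp(tau*eps_n). All values below are assumed.
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([1.0, -0.5])           # mean tastes (assumed)
tau, gamma = 0.5, 0.3                  # scale spread and mixing weight (assumed)
Sigma = np.diag([0.2, 0.1])            # taste-heterogeneity covariance (assumed)

def draw_gmnl_betas(n):
    sigma_n = np.exp(tau * rng.standard_normal((n, 1)))     # scale draws
    eta_n = rng.multivariate_normal(np.zeros(len(beta)), Sigma, size=n)
    return sigma_n * beta + (gamma + (1 - gamma) * sigma_n) * eta_n

draws = draw_gmnl_betas(1000)
print(draws.mean(axis=0))
```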
12
G-MNL to WTP space
- "Utility space" vs "WTP space"
- Set gamma = 0 and phi = 1
13
In utility space, marginal WTP is recovered as the ratio −beta_n/phi_n; in WTP space, WTP_n is estimated directly.
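The utility-space ratio is simple arithmetic: the marginal WTP for an attribute is its coefficient divided by the (negative) cost coefficient, whereas in WTP space those ratios are themselves the estimated parameters. The coefficient values below are illustrative assumptions (the attribute names borrow the odor/color example from the order-effects slides).

```python
# Sketch: utility-space WTP as a coefficient ratio, WTP = -beta/phi.
# Coefficient values are illustrative assumptions.
beta_odor, beta_color = 0.8, 0.4      # attribute coefficients (assumed)
phi = -2.0                            # cost coefficient (assumed, negative)

wtp_odor = -beta_odor / phi
wtp_color = -beta_color / phi
print(wtp_odor, wtp_color)
```

Estimating in WTP space avoids the ratio-of-random-coefficients problem that can give utility-space WTP distributions implausibly fat tails.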
14
Attribute processing: non-attendance
- Either ask people which attributes they attended to:
  - yes/no to attendance to each attribute, or
  - a Likert scale
- Or infer it from the observed sequence of choices:
  - zero-constrained latent classes
  - variable selection model (spike model in CV)
- Recent evidence: attendance may not be the same across the whole sequence of choices (choice-task non-attendance)
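The zero-constrained latent-class idea can be sketched for one choice set: one class uses the full coefficient vector, a second class sets the coefficient of the unattended attribute to zero, and the likelihood mixes the two with a class-share weight. All coefficient values, the class share, and the choice set below are illustrative assumptions.

```python
# Sketch: zero-constrained latent classes for attribute non-attendance.
# Class 1 attends both attributes; class 2 ignores attribute 2 (its
# coefficient is constrained to zero). All values are illustrative.
import numpy as np

def mnl_choice_prob(Xs, beta, chosen):
    p = np.exp(Xs @ beta)
    return (p / p.sum())[chosen]

beta_full = np.array([1.0, -0.5])     # full-attendance class (assumed)
beta_na = np.array([1.0, 0.0])        # non-attendance to attribute 2
pi_attend = 0.7                        # class share (assumed)

Xs = np.array([[1.0, 1.0], [0.0, 0.0], [0.5, 2.0]])   # one choice set
chosen = 0
lik = (pi_attend * mnl_choice_prob(Xs, beta_full, chosen)
       + (1 - pi_attend) * mnl_choice_prob(Xs, beta_na, chosen))
print(lik)
```

In estimation the class share and both coefficient vectors are fitted jointly, and the mixture is taken over each respondent's whole sequence of choices rather than a single task.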
16
From a WTP estimate of €790/year down to €20/year for the preservation of the mountain landscape!
17
Variable selection (spike model equivalent)
18
Conclusions
- Multi-attribute research design is becoming increasingly complex
- Many issues need to be addressed simultaneously before one can retrieve "unconfounded" utility structures
- Respondent interaction and feedback are increasingly recognized as validating and informative
19
Order effects
- WTP estimates depend on where in the sequence of choices they are estimated
- Learning effects? Strategic response effects? Heterogeneity?
20
Order effects in WTPs
[Figure: marginal WTP for odor and for color across the choice sequence, with confidence intervals]
21
Context effects in choice tasks with 3 alternatives
From Rooderkerk, van Heerde and Bijmolt, 2009
22
Scenario adjustments
- Proposed scenarios may be misconstrued or subjectively adjusted (e.g. risk latency in micro-risk studies; Cameron and DeShazo)
- Subjective perception of status-quo attribute levels versus objectively measured ones (Marsh et al.)
23
Conclusions
- Multi-attribute research design is becoming increasingly complex
- Many issues need to be addressed simultaneously before one can retrieve "unconfounded" utility structures
- Respondent interaction and feedback are increasingly recognized as validating and informative