A Bayesian Model for the Study of Accuracy, Reciprocity and Congruence in Interpersonal Perception
Paramjit Gill, Okanagan University College, Kelowna, BC, Canada


Abstract

A fully Bayesian approach is proposed for the analysis of accuracy and mutuality in interpersonal perceptions. The Bayesian analysis is based on the social relations model (SRM) formulation. Inference is straightforward using Markov chain Monte Carlo (MCMC) methods as implemented in the software package WinBUGS. An example is provided to highlight the use of Bayesian analysis of interpersonal attraction data.

1. Introduction

Accuracy in interpersonal perception is a fundamental and one of the oldest topics in social and personality psychology. Are people's perceptions of others valid? This is the most obvious question in the field of interpersonal perception, yet, surprisingly, the most difficult to study (Kenny 1994, Chapter 7). In the late 1940s and early 1950s, the study of individual differences in the accuracy of social perception became a dominant area of research, but Cronbach (1955) and others argued that a comprehensive understanding of accuracy requires more sophisticated statistical and computational procedures than were available at that time. The "second wave of accuracy research" promised to provide a satisfactory solution. Kenny & Albright (1987) argued that accuracy research must be nomothetic, interpersonal, and componential, and they proposed the social relations model (SRM) as an appropriate tool for doing so. Accuracy is thus defined by the links between various components of the SRM. Although the SRM provides a methodological framework for the analysis, its practical use has been hampered by a lack of available computational machinery. The purpose of this communication is to present a computationally tractable, fully Bayesian approach to the analysis of accuracy in interpersonal perceptions. The vehicle for doing this is modern Bayesian computation made accessible in the software package WinBUGS (Spiegelhalter et al., 2000). The Bayesian approach is based on the SRM formulation, which partitions a response into various components; accuracy is measured by the interrelationships among these components.

2. Social Relations Model

We follow Kenny (1994, Chapter 7), where the social relations model is proposed for the study of accuracy in interpersonal perceptions. We assume that the design used in the study is round robin, or reciprocal: each subject serves as both judge and target, and each subject interacts with all other subjects. For each dyad (pair) of subjects i and j, we have four measurements of the level of a trait: $y_{ij}$, $y_{ji}$, $x_{ij}$ and $x_{ji}$. Here $y_{ij}$ represents the response (impression) of subject i as an actor (judge) toward subject j as a partner (target), and $x_{ji}$ represents a postdiction (perception) by partner j of the impression $y_{ij}$. In $y_{ji}$ and $x_{ij}$ the roles are reversed.
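As a concrete illustration of the round-robin layout, the short Python sketch below (not part of the original poster) enumerates the measurements generated by a single group of n subjects; the choice n = 8 simply anticipates the Curry & Emerson groups analysed in Section 3.

```python
# Minimal sketch (illustrative only): measurements collected in one
# round-robin group of n subjects. Each ordered pair (i, j), i != j, yields
# an impression y[i, j] and a perception x[i, j] (i's postdiction of y[j, i]),
# so every unordered dyad {i, j} contributes the four values
# y_ij, y_ji, x_ij and x_ji discussed in the text.
from itertools import combinations

n = 8  # group size, matching the Curry & Emerson example used later

dyads = list(combinations(range(n), 2))                       # unordered pairs {i, j}
directed_pairs = [(i, j) for i in range(n) for j in range(n) if i != j]

print(len(dyads))            # n*(n-1)/2 = 28 dyads
print(len(directed_pairs))   # n*(n-1) = 56 directed observations per variable

# The four measurements contributed by the first dyad {i, j}:
i, j = dyads[0]
measurements = [("y", i, j), ("y", j, i), ("x", i, j), ("x", j, i)]
print(measurements)
```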
The SRM partitions the responses additively into population-specific, actor-specific, partner-specific and dyadic components:

$y_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij}$
$x_{ji} = \mu' + \alpha'_j + \beta'_i + \gamma'_{ji}$
$y_{ji} = \mu + \alpha_j + \beta_i + \gamma_{ji}$
$x_{ij} = \mu' + \alpha'_i + \beta'_j + \gamma'_{ij}$

The model parameters fall into three groups.

Population-specific:
$\mu$ = population average impression
$\mu'$ = population average perception
$\mu' - \mu$ = perception bias that subjects have in estimating their own trait level

Subject-specific:
$\alpha_i$ = average impression that subject i has of others
$\alpha'_i$ = average perception that subject i has about what others think of him
$\beta_i$ = average impression that others have of subject i
$\beta'_i$ = average perception that others have of subject i's impressions of others

Dyad-specific:
$\gamma_{ij}$ = dyadic interaction between subjects i and j as reported by i as a judge; it is the special relative impression of i toward j after subtracting out the actor and partner effects
$\gamma'_{ji}$ = perception by subject j of the dyadic interaction $\gamma_{ij}$

2.1 Statistical Assumptions

The SRM is a random-effects model: the subjects in the study are assumed to be a random sample from a population, and interest lies in generalizing beyond the particular persons involved. The subject-specific and dyad-specific effects are assumed to be Normal random variables with

$E(\alpha_i) = E(\alpha'_i) = E(\beta_i) = E(\beta'_i) = E(\gamma_{ij}) = E(\gamma'_{ij}) = 0$

and

$\mathrm{var}(\alpha_i) = \sigma^2_{\alpha}$, $\mathrm{var}(\alpha'_i) = \sigma^2_{\alpha'}$, $\mathrm{var}(\beta_i) = \sigma^2_{\beta}$, $\mathrm{var}(\beta'_i) = \sigma^2_{\beta'}$, $\mathrm{var}(\gamma_{ij}) = \sigma^2_{\gamma}$, $\mathrm{var}(\gamma'_{ij}) = \sigma^2_{\gamma'}$.

Correlations among the subject-specific effects represent various kinds of accuracy and reciprocity, as follows (a small simulation sketch after the two lists below illustrates this covariance structure).

- $\rho_{\alpha\beta} = \mathrm{corr}(\alpha_i, \beta_i)$ measures individual-level reciprocity of impression. A positive value means that people who are seen by others as possessing a given trait also see others as possessing the same trait.
- $\rho_{\alpha\alpha'} = \mathrm{corr}(\alpha_i, \alpha'_i)$ measures assumed (or perceived) individual reciprocity. A positive value indicates that people who think of others as possessing a given trait also perceive that others think similarly about them. We would expect this correlation to be higher than $\rho_{\alpha\beta}$, which measures actual reciprocity.
- $\rho_{\alpha\beta'} = \mathrm{corr}(\alpha_i, \beta'_i)$ measures perceiver accuracy. A positive value means that perceivers' average perceptions correspond well to the average impressions actually held by their interaction partners.
- $\rho_{\alpha'\beta} = \mathrm{corr}(\alpha'_i, \beta_i)$ measures individual-level accuracy. A positive value means that people have a reasonable understanding of how they are generally viewed by others as a whole.
- $\rho_{\beta\beta'} = \mathrm{corr}(\beta_i, \beta'_i)$ measures assumed individual-level accuracy: when people see a subject A as possessing a trait (say, friendly), they assume that A knows that others see him as friendly. This correlation is typically higher than $\rho_{\alpha'\beta}$, which measures actual accuracy.

Correlations among the dyad-specific effects measure dyadic mutuality, congruence and accuracy, as follows (see Figure 1).

- $\rho_{\mathrm{rec}} = \mathrm{corr}(\gamma_{ij}, \gamma_{ji})$ indicates mutuality or dyadic reciprocity: if subject A treats subject B in an especially friendly manner, does B treat A in an especially friendly manner in return?
- $\rho_{\mathrm{con}} = \mathrm{corr}(\gamma_{ij}, \gamma'_{ij})$ measures dyadic congruence or assumed dyadic reciprocity: subject A likes subject B because A thinks that B likes A.
- $\rho_{\mathrm{acc}} = \mathrm{corr}(\gamma'_{ij}, \gamma_{ji})$ measures dyadic accuracy, the ability of a perceiver to predict his partner's behavior toward him: if subject A perceives subject B as especially friendly toward A, is B in fact especially friendly toward A?
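To make this covariance structure concrete, here is a small simulation sketch (not part of the original poster; the group size, means, variances and correlation values are arbitrary illustrative assumptions) that draws correlated subject and dyad effects and assembles the four measurements for every dyad in one round-robin group.

```python
# Minimal simulation sketch of the SRM decomposition (illustrative values only).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 8  # subjects in one round-robin group

# Subject-level effects, ordered (alpha_i, beta_i, alpha'_i, beta'_i).
# Off-diagonal entries are hypothetical correlations: Sigma_s[0, 1] plays the
# role of rho_alpha_beta (individual-level reciprocity), Sigma_s[0, 2] of
# rho_alpha_alpha' (assumed reciprocity) and Sigma_s[1, 2] of rho_alpha'_beta
# (individual-level accuracy). Unit variances are used for simplicity.
Sigma_s = np.array([
    [1.0, 0.1, 0.4, 0.3],
    [0.1, 1.0, 0.2, 0.2],
    [0.4, 0.2, 1.0, 0.3],
    [0.3, 0.2, 0.3, 1.0],
])
subj = rng.multivariate_normal(np.zeros(4), Sigma_s, size=n)
alpha, beta, alpha_p, beta_p = subj.T

# Dyad-level effects, ordered (gamma_ij, gamma_ji, gamma'_ij, gamma'_ji), one
# draw per unordered dyad. Sigma_d[0, 1] plays the role of rho_rec
# (reciprocity), Sigma_d[0, 2] of rho_con (congruence) and Sigma_d[0, 3] of
# rho_acc (accuracy).
Sigma_d = np.array([
    [1.0, 0.3, 0.5, 0.1],
    [0.3, 1.0, 0.1, 0.5],
    [0.5, 0.1, 1.0, 0.1],
    [0.1, 0.5, 0.1, 1.0],
])

mu, mu_p = 60.0, 62.0  # hypothetical population means (impression, perception)
y = np.full((n, n), np.nan)
x = np.full((n, n), np.nan)
for i, j in combinations(range(n), 2):
    g_ij, g_ji, gp_ij, gp_ji = rng.multivariate_normal(np.zeros(4), Sigma_d)
    y[i, j] = mu + alpha[i] + beta[j] + g_ij          # i's impression of j
    y[j, i] = mu + alpha[j] + beta[i] + g_ji          # j's impression of i
    x[j, i] = mu_p + alpha_p[j] + beta_p[i] + gp_ji   # j's guess of y[i, j]
    x[i, j] = mu_p + alpha_p[i] + beta_p[j] + gp_ij   # i's guess of y[j, i]

print(np.nanmean(y), np.nanmean(x))  # close to mu and mu_p for large samples
```

Simulating many such groups and correlating, say, the realized alpha_p and beta values would recover the individual-level accuracy correlation encoded in Sigma_s, which is exactly the kind of parameter the Bayesian analysis below estimates from data.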
2.2 Bayesian Formulation

The variance-covariance parameters (the six variances $\sigma^2_{\alpha}, \sigma^2_{\beta}, \sigma^2_{\alpha'}, \sigma^2_{\beta'}, \sigma^2_{\gamma}, \sigma^2_{\gamma'}$ together with the correlations defined in Section 2.1) are of primary interest. They are elements of the matrices $\Sigma_s = \mathrm{Cov}(\alpha_i, \beta_i, \alpha'_i, \beta'_i)$ and $\Sigma_d = \mathrm{Cov}(\gamma_{ij}, \gamma_{ji}, \gamma'_{ij}, \gamma'_{ji})$. If one does not hold strong prior opinion, diffuse prior distributions can be used for the parameters and hyper-parameters. Following conventional Bayesian protocol, we assign $N[0, 10000]$ priors to the population means $\mu$ and $\mu'$, and $IG[0.0001, 0.0001]$ (inverse gamma) priors to the error variances of the replicated impression and perception responses. We use independent inverse-Wishart priors for $\Sigma_s$ and $\Sigma_d$, taking $\Sigma_s^{-1} \sim \mathrm{Wishart}_4(R_s, 4)$ and $\Sigma_d^{-1} \sim \mathrm{Wishart}_4(R_d, 4)$ with prior scale matrices $R_s$, $R_d$ and degrees of freedom equal to the dimension (4), so that the priors remain weakly informative.

Having specified the Bayesian model, the model assumptions and the data induce a posterior distribution in accordance with the Bayesian paradigm. The posterior distribution is the distribution of the parameters conditional on the data and is the final product from which inference proceeds. Typically, however, one is interested in the average value and variation of some of the parameters. If we repeatedly generate values of a parameter from the posterior distribution, average those values and compute their standard deviation, we obtain estimates of the posterior mean and posterior standard deviation. Markov chain Monte Carlo (MCMC) methods provide exactly this kind of iterative variate generation from the posterior distribution. The Gibbs sampling algorithm, as implemented in WinBUGS, is used to simulate from the marginal posterior distributions of the parameters of interest.
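The Gibbs sampling itself was done in WinBUGS. Purely to illustrate the post-processing step just described, the sketch below (an assumed stand-in, not the poster's code) generates fake "posterior draws" of the subject-level covariance matrix $\Sigma_s$, converts each draw to the derived correlation $\rho_{\alpha'\beta}$, and reports the posterior mean, standard deviation and 2.5%/97.5% quantiles, i.e. the quantities of the kind listed in Table 1.

```python
# Sketch of post-processing MCMC output. The draws here are fake stand-ins;
# in the real analysis they would be read from the WinBUGS/Gibbs output.
import numpy as np

rng = np.random.default_rng(1)

# Fake draws of the 4x4 subject-level covariance matrix Sigma_s,
# ordered as (alpha_i, beta_i, alpha'_i, beta'_i).
def fake_sigma_draw():
    a = rng.normal(size=(10, 4))   # pseudo-Wishart draw with 10 d.o.f.
    return a.T @ a / 10.0

draws = [fake_sigma_draw() for _ in range(5000)]

# Derived quantity: rho_alpha'_beta = corr(alpha'_i, beta_i), the
# individual-level accuracy correlation, computed from each covariance draw.
rho = np.array([S[2, 1] / np.sqrt(S[2, 2] * S[1, 1]) for S in draws])

# Posterior summaries of the kind reported in Table 1.
mean, sd = rho.mean(), rho.std(ddof=1)
lo, hi = np.quantile(rho, [0.025, 0.975])
print(f"rho_alpha'_beta: mean={mean:.2f}, sd={sd:.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

Because correlations are deterministic functions of the sampled covariance matrices, their posterior summaries come for free once the matrices have been monitored in the sampler.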
3. Curry & Emerson Data Example

Curry & Emerson (1970) conducted a study of previously unacquainted students who lived together in a residence hall at the University of Washington. Six 8-person round-robin groups of students reported their attraction toward their group members on a 100-point scale at weeks 1, 2, 4, 6 and 8. The subjects also reported their perceptions of the attraction ratings directed toward them by the other group members. For simplicity, we treat the five time points as replicates; a more realistic analysis would consider longitudinal profiling of the variance-covariance components.

Table 1 shows the means, standard deviations, and 2.5% and 97.5% quantiles of the marginal posterior distributions of some key parameters for the attraction data. We see that the perception bias $\mu' - \mu$ is positive but small: subjects, on average, have a fairly good idea of the level of attraction they command from others.

Individual-level reciprocity (mean $\rho_{\alpha\beta}$ = 0.12) and its perceived counterpart (mean $\rho_{\alpha\alpha'}$ = 0.85) make an interesting comparison. The low reciprocity means that people who are seen by others as attractive do not necessarily see others as attractive, yet people who see others as attractive assume that others feel similarly about them. Individual-level accuracy (mean $\rho_{\alpha'\beta}$ = 0.34) is much lower than the assumed individual-level accuracy (mean $\rho_{\beta\beta'}$ = 0.85): people have a poor understanding of how they are generally viewed by others, while assuming that others have an almost perfect notion of how they themselves are seen. Both dyadic reciprocity (mean $\rho_{\mathrm{rec}}$ = 0.39) and dyadic accuracy (mean $\rho_{\mathrm{acc}}$ = 0.33) are rather low; these values may increase with time, which could be confirmed by a detailed longitudinal analysis. Not surprisingly, dyadic congruence (mean $\rho_{\mathrm{con}}$ = 0.65) is high, telling us that subjects tend to like specific others because they think those specific others like them. Compared with the mean $\rho_{\mathrm{rec}}$ of 0.39, this means that subjects believed their unique impressions of specific partners were reciprocated more than they really were.

Among the variance components, actor and partner variation in attraction levels are very similar (mean $\sigma^2_{\alpha}$ = 66, mean $\sigma^2_{\beta}$ = 69). It is, however, interesting that there is substantial actor variation in perception (mean $\sigma^2_{\alpha'}$ = 82): some people believe they are widely seen as attractive while others believe they are not. Partner variation in perception is much lower (mean $\sigma^2_{\beta'}$ = 21); that is, there is only a slight tendency for some people to be seen as harsh judges and others as lenient ones. The dyadic variance in the reported attraction level (mean $\sigma^2_{\gamma}$ = 152) is almost double the dyadic variance in the perceived attraction (mean $\sigma^2_{\gamma'}$ = 76), telling us that subjects were not able to fully appreciate the extent of variability in their dyadic interactions.

Figure 1. Mutuality, congruence, and accuracy triangle (Kenny & Albright, 1987).

Table 1. Some summary results from the Bayesian analysis of the attraction data (columns: Parameter, Mean, SD, 2.5% quantile, 97.5% quantile).

4. Future Research

The relationship between persons develops over time, so the model should accommodate the longitudinal nature of the data; this means allowing the model parameters to be occasion-specific. It would also be of interest to include covariates (such as sex) in the model. For example, in the Curry-Emerson study the students lived in the residences as room-mate pairs, and work on measuring the effect of physical proximity on the degree of accuracy, reciprocity and congruence is in progress.

5. References

1. Cronbach, L. J. (1955). Psychological Bulletin, 52.
2. Curry, T. J. & Emerson, R. M. (1970). Sociometry, 33.
3. Kenny, D. A. (1994). Interpersonal Perceptions. Guilford Press: New York.
4. Kenny, D. A. & Albright, L. (1987). Psychological Bulletin, 102.
5. Spiegelhalter, D., Thomas, A. & Best, N. (2000). WinBUGS User Manual. MRC Biostatistics Unit: Cambridge.

6. Acknowledgements

This research is supported by a grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada and is part of ongoing joint work with Professor C. F. Bond Jr. of Texas Christian University, Fort Worth.

Joint Statistical Meetings, New York City, August 11-15, 2002.