Step-by-step techniques in SPSS Whitney I. Mattson 09/15/2010.

 How to look at proportions of a behavior
 How to look at proportions of co-occurrence
 How to look at simple patterns of transition
 Using a rate-per-minute measure
 SPSS syntax for the functions described

 The example file contains
  Repeated rows for each subject
  Each row corresponds to the same unit of time
  Multiple variables coded on a 1-to-5 scale
 ▪ Missing values represent no occurrence
 These methods are
  Most applicable to files in a similar format
  Tools shown here can be adapted to other cases

 The more traditional way:
  Split your file by a break variable, here id:

SORT CASES BY id.
SPLIT FILE LAYERED BY id.

  Run Frequencies:

FREQUENCIES VARIABLES=AU1
 /ORDER=ANALYSIS.

  This works well
 ▪ But it is limited in what it can tell us

 An aggregation approach:
  In Data > Aggregate…
 ▪ Set your break variable (the same as in the split file)
 ▪ Create two summaries of each variable: Weighted N and Weighted Number of Missing Values
 ▪ Create a new dataset containing only the aggregated variables

 The new file contains
  A row for each subject
  The numerator and denominator for our proportion
 The proportion can be calculated with a COMPUTE statement
  More time consuming
  Needed for more complex proportion scores
  The resulting proportions can be analyzed

DATASET DECLARE Agg.
AGGREGATE
 /OUTFILE='Agg'
 /BREAK=id
 /AU1_n=N(AU1)
 /AU1_nmiss=NMISS(AU1).
COMPUTE AU1_prop=AU1_n / (AU1_n + AU1_nmiss).
EXECUTE.

 Back to the base file
  Compute a value when variables co-occur
 ▪ Here, when there is one valid case of both variable AU1 and variable AU4
  Aggregate again
 ▪ Add summaries of the new variable: Weighted N and Weighted Number of Missing Values
  Compute the proportion of time the two variables co-occur

IF (NVALID(AU1)>0 & NVALID(AU4)>0) AU1_AU4=1.
EXECUTE.
DATASET DECLARE Agg.
AGGREGATE
 /OUTFILE='Agg'
 /BREAK=id
 /AU1_n=N(AU1)
 /AU1_nmiss=NMISS(AU1)
 /AU4_n=N(AU4)
 /AU4_nmiss=NMISS(AU4)
 /AU1_AU4_n=N(AU1_AU4)
 /AU1_AU4_nmiss=NMISS(AU1_AU4).
COMPUTE AU1_AU4_prop=AU1_AU4_n / (AU1_AU4_n + AU1_AU4_nmiss).
EXECUTE.

 We now have the proportion of the session during which AU1 and AU4 co-occur
 Using the same functions with different denominators yields other proportions
 For example
  If you instead compute AU1 and AU4 co-occurrence over AU4 cases
  You get the proportion of time during AU4 when AU1 co-occurred

COMPUTE AU1_AU4_during_AU4_prop=AU1_AU4_n / AU4_n.
EXECUTE.

 Proportions are helpful for looking at broad characteristics of behavior
 However, they miss how sequences and co-occurrences evolve over time
 Time-series or lag analysis can tell us how often certain behaviors transition to other behaviors

 Use the lag function to get values from previous rows
  LAG(variable name)
  Returns the previous row's value for the specified variable
  Can be used in COMPUTE statements to compare changes in variables
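As a minimal sketch of the idea (AU1 is the example variable from the earlier slides; AU1_prev and AU1_changed are hypothetical names introduced here):

```spss
* Copy the previous row's value of AU1 into a new variable.
* LAG is missing on the first row, since there is no previous row.
COMPUTE AU1_prev=LAG(AU1).
* Flag rows where the current value differs from the previous one.
* (The result is missing when either value is missing.)
COMPUTE AU1_changed=(AU1 NE LAG(AU1)).
EXECUTE.
```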

 Here we use the lag function to assess a transition
  When AU11 moves to AU11 & AU14
 ▪ This gives us the frequency with which AU14 begins while AU11 is already present

IF (NVALID(AU11)>0 & NVALID(lag(AU11))>0 & NVALID(AU14)>0 & NVALID(lag(AU14))<1) AU11_to_AU11_AU14=1.
EXECUTE.

 In addition to obtaining a straight frequency, you can also use this transition variable to
  Assess the proportion of a specific transition out of all transitions
  Summarize several of these variables into a composite variable of transitions
  Plug these variables into more complex equations
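A hypothetical sketch of the first idea, the proportion of one specific transition out of all transitions. It assumes a flag any_transition has been created with IF statements like the one above, one per transition type, each writing 1 to any_transition:

```spss
* Count the specific transition and all transitions per subject,
* then take their ratio. AggTrans is a hypothetical dataset name.
DATASET DECLARE AggTrans.
AGGREGATE
 /OUTFILE='AggTrans'
 /BREAK=id
 /specific_n=N(AU11_to_AU11_AU14)
 /all_trans_n=N(any_transition).
COMPUTE AU11_AU14_trans_prop=specific_n / all_trans_n.
EXECUTE.
```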

 A few other useful time-series variables you can create (all accessible through the Transform > Create Time Series… menu):
  Lead – returns the value of the variable in the next row
  Difference – returns the change in value from the previous row to the current row
 ▪ Useful for finding changes in levels within a variable
  In this menu you can easily change how many steps back or forward (the order) your function takes
 ▪ For example, the value two rows previous
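The same menu pastes CREATE syntax; a sketch of what it produces for lead and difference of order 1 (the new variable names here are hypothetical, and the exact pasted syntax may vary by SPSS version):

```spss
* Value of AU1 in the next row (lead of order 1).
CREATE AU1_lead=LEAD(AU1,1).
* Change in AU1 from the previous row to the current row (difference of order 1).
CREATE AU1_diff=DIFF(AU1,1).
```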

 Creating a rate-per-minute measure can
  Tell you how often a behavior occurs
 ▪ While controlling for variation in session duration
  Summarize changes during meaningful epochs of time
 ▪ For example: when Stimulus A is presented, do subjects increase their rate of onset of Behavior X?

 Calculating a rate per minute
  Create a transition (lag) variable for behavior onset
  Use aggregation to create:
 ▪ A frequency-of-onset variable
 ▪ A session-duration variable

IF (NVALID(AU1)>0 & NVALID(lag(AU1))<1) AU1_onset=1.
EXECUTE.
DATASET DECLARE Agg.
AGGREGATE
 /OUTFILE='Agg'
 /BREAK=id
 /AU1_onset_n=N(AU1_onset)
 /frame_n=N(frame).

 The new aggregated dataset allows
  Calculation of a rate-per-minute variable (30 frames per second × 60 seconds per minute = 1800 frames per minute)
  Comparison of rate per minute across subjects

COMPUTE AU1_RPM=AU1_onset_n / (frame_n / (30*60)).
EXECUTE.

 You can also use this same method for different epochs of time
  Just add more break variables
  For example, I create a variable Stim_1 that flags when a stimulus is presented
  I then aggregate by id and this new variable…

 Like so…
  We now have a rate per minute for each condition

IF (frame LE 599) Stim_1=1.
EXECUTE.
AGGREGATE
 /OUTFILE='Agg'
 /BREAK=id Stim_1
 /AU1_onset_n=N(AU1_onset)
 /frame_n=N(frame).

 Based on the aggregated datasets presented here, you can analyze group differences in
 ▪ Proportions of behavior
 ▪ Proportions of behavior co-occurrence
 ▪ Number of transitions
 ▪ Rate per minute across meaningful periods of time

 These variable-creation techniques can be combined to produce variables that address more complex questions
 For example:
 ▪ Is the proportion of Variable A during Variable B higher after Event X?
 ▪ Is the rate of transition per minute from Variable A to Variable B higher when Variable C co-occurs?
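A hypothetical sketch of the first question, combining the co-occurrence flag from earlier with an extra break variable. It assumes a 0/1 variable post_X marking rows after Event X, built the same way as Stim_1 above:

```spss
* Flag co-occurrence of AU1 and AU4, then aggregate by subject and
* by the hypothetical post_X flag to compare before vs. after Event X.
IF (NVALID(AU1)>0 & NVALID(AU4)>0) AU1_AU4=1.
EXECUTE.
DATASET DECLARE AggX.
AGGREGATE
 /OUTFILE='AggX'
 /BREAK=id post_X
 /AU1_AU4_n=N(AU1_AU4)
 /AU4_n=N(AU4).
* Proportion of AU4 time during which AU1 co-occurs, per subject and epoch.
COMPUTE AU1_AU4_during_AU4_prop=AU1_AU4_n / AU4_n.
EXECUTE.
```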

 As with any set of analyses, ensure that the particular variable you are calculating is a meaningful construct
 Thank you for your interest!