
Questionnaire-Part 2

Translating a questionnaire
The quality of the obtained data increases if the questionnaire is presented in the respondents' own mother tongue. The translated version of a questionnaire is usually the only version the respondents see (you do not need to provide them with the English version). For that reason, the quality of the translation is very important.

The translation can be done by the researchers themselves. However, if the translated questionnaire is administered without further study, problems may emerge in some items or even in the whole scale. In that case, the researchers can do nothing but exclude the affected items from the questionnaire.

So, while translating, there are two main things to keep in mind:
(a) the need to produce a close translation of the original text, so that we can claim the two versions are equivalent;
(b) the need to produce natural-sounding text in the target language, similar to the words people would actually say.

How can we do this?
(a) After the initial translation is completed, consult external reviewers.
(b) Recruit an independent translator to back-translate the target-language version into the source language. Back-translation involves an independent translator turning the target-language (L2) version of the questionnaire back into the source language and then comparing the two texts: if the back-translated version corresponds with the source-language version, this is an indication that both instruments are asking the same questions, which attests to the accuracy of the translation.

Piloting the Questionnaire
In questionnaires, so much depends on the actual wording of the items that even minor differences can change the response pattern. Thus "field testing," that is, piloting the questionnaire at various stages of its development on a sample of people similar to the target sample, is important. Piloting helps the researcher collect feedback, make alterations, and fine-tune the final version of the questionnaire.

Item Analysis
Item analysis can be conducted at two different points in the survey process:
After the final piloting stage: the results are used to fine-tune and finalize the questionnaire.
After the administration of the final questionnaire: such a "post hoc" analysis is useful for screening out any items that have not worked properly.

The procedures in both cases are similar and usually involve checking three aspects of the response pattern:
1. Missing responses and possible signs that the instructions were not understood correctly. If several respondents leave out the same items, that should serve as an indication that something is not right: perhaps the item is too difficult, too ambiguous, or too sensitive; or perhaps its location in the questionnaire is such that it is easily overlooked.
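As a small sketch of this first check, the per-item missing-response rate can be computed directly. The data layout, item names, and the 20% cut-off below are all hypothetical illustrations, not part of the original text:

```python
# Hypothetical example: flag questionnaire items with a high rate of
# missing responses. Responses are stored as one dict per respondent;
# None marks an unanswered item.
responses = [
    {"q1": 4, "q2": 5,    "q3": 3},
    {"q1": 3, "q2": None, "q3": 4},
    {"q1": 5, "q2": None, "q3": 2},
    {"q1": 2, "q2": 4,    "q3": None},
]

def missing_rates(data):
    """Return the proportion of missing answers for each item."""
    items = data[0].keys()
    n = len(data)
    return {item: sum(1 for row in data if row[item] is None) / n
            for item in items}

rates = missing_rates(responses)
# Items skipped by, say, more than 20% of respondents deserve a closer look.
suspect = [item for item, rate in rates.items() if rate > 0.20]
```

An item flagged this way is then inspected for the causes listed above (difficulty, ambiguity, sensitivity, or placement) before any decision is made.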

There are two main ways of dealing with missing data:
Listwise deletion: a single missing value removes the whole case from all analyses, even if some of the available data could be used for certain calculations.
Pairwise deletion: a case is temporarily excluded from the analysis only when a specific statistic is computed that would involve the particular missing value.
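The difference between the two strategies can be sketched in a few lines of plain Python (the toy data is invented for illustration; in practice, libraries such as pandas provide the same behavior):

```python
# Each tuple is one respondent's answers to three items; None = missing.
rows = [
    (4, 5, 3),
    (3, None, 4),
    (5, None, 2),
    (2, 4, None),
]

# Listwise deletion: drop any respondent with at least one missing value,
# so every statistic is computed on the same (smaller) sample.
listwise = [r for r in rows if None not in r]

# Pairwise deletion: for a statistic involving items 1 and 3, drop a case
# only if item 1 or item 3 is missing; gaps in item 2 cost us no data here.
pairwise_13 = [(r[0], r[2]) for r in rows
               if r[0] is not None and r[2] is not None]
```

Note the trade-off visible even in this toy example: listwise deletion leaves only one complete case, while pairwise deletion retains three cases for the item 1/item 3 statistic.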

2. The range of the responses elicited by each item. We should avoid items that are endorsed by almost everyone or by almost no one, because they are difficult, if not impossible, to process statistically (statistical procedures require a certain amount of variation in the scores). Although the lack of variation may well be the true state of affairs in the group, it is often useful to increase item variation by adding further response categories or rewording the question.

3. The internal consistency of multi-item scales. Questionnaires should contain multi-item scales, rather than single items, for any particular content domain. Multi-item scales are effective if the items within a scale work together in a homogeneous manner, that is, if they measure the same target area. In psychometric terms, this means that each item on a scale should correlate with the other items and with the total scale score; this property is referred to as internal consistency. Following this principle, a simple way of selecting items for a scale is to compute a correlation coefficient for each potential item with the total scale score and to retain the items with the highest correlations.
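The item-selection principle just described can be sketched as corrected item-total correlations: each item is correlated with the sum of the remaining items (subtracting the item from the total avoids inflating its own correlation). The four-item data set below is invented for illustration:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Rows are respondents, columns are the items of one multi-item scale.
scale = [
    [4, 5, 4, 1],
    [2, 1, 2, 5],
    [5, 4, 5, 2],
    [3, 3, 3, 4],
]

corrs = {}
for i in range(len(scale[0])):
    item = [row[i] for row in scale]
    rest = [sum(row) - row[i] for row in scale]  # total minus the item itself
    corrs[f"q{i + 1}"] = pearson(item, rest)
# A low or negative coefficient flags an item that does not measure the
# same target area as the rest of the scale (here, the fourth item).
```

Items with the highest coefficients are retained; a low or negative one is a candidate for revision, subject to the content-coverage caution on the next slide.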

A word of caution: before we discard an item on the basis of the item analysis, we should first consider how that particular item fits in with the overall content area of the whole scale. Automatic exclusion of an item suggested by the computer may narrow the scope of the content area too much. If a problem item represents an important dimension of the targeted domain, we should try to alter its wording or replace it with an alternative item rather than simply delete it.