
1 Managing large amounts of data and the analysis process

2 Analysis: some challenges of the welfare conditionality project
- Manage the vast amounts of qualitative data generated
- Ensure the analysis offers both consistency and flexibility
- Enable comparisons on the ethics and efficacy of conditionality across three data sets: key informants (KI), practitioners and welfare service users (WSU)
- Within the comparative QL panel study, establish a baseline
- Within the comparative QL panel study, be able to track change (or the lack of it) over time
- Within the QL element, retain the capacity to look across the sample (breadth) and to drill down to individual cases (depth)

3 Meeting the challenge
- Some necessary structure, e.g. the development of 9 complementary but different question guides: "Studies with a particular emphasis on comparison will usually also require more structure, since it will be necessary to cover broadly the same issues with each of the comparison groups. It will also need to be more structured where field work is carried out by a team of researchers to ensure consistency in approaches and issues covered" (Arthur and Nazroo, 2003: 111)
- For the QL comparative panel study, use an appropriate tool to organise and manage the data and so facilitate rigorous, systematic analysis across all of it: framework matrix analysis in NVivo 10, i.e. construct thematic grids that summarise interviews by common core themes and link directly back to the data
- Adopt a twin-track approach for the discrete elements: enable enough structure while providing the freedom to meet the differing perspectives and requirements of the team


5 Within the QL component we need to be able to deliver
- Cross-sectional analysis, i.e. looking at individual cases across the sample at each particular wave of data collection
- Repeat cross-sectional analysis, looking for behaviour change in individual cases between particular points in time
- Longitudinal case narratives, to explore how, and for whom, behaviour change occurred (or not) within the period studied
- Between-case comparison to identify replication and diversity in behaviour change
- Between-group comparisons to explore commonality and difference between groups will also be undertaken, to facilitate an understanding of the interaction of personal circumstances, the broader social context, and conditional welfare mechanisms in triggering, sustaining or inhibiting behaviour change
- International evidence indicates that benefit sanctions substantially raise short-term exits from benefits and may also increase short-term job entry, but the longer-term outcomes for earnings, job quality and employment retention appear unfavourable
- Little evidence is available on the impact of welfare conditionality in other spheres, e.g. social housing
- There is some qualitative evidence to suggest that, with appropriate support, interventions including elements of conditionality or enforcement may be effective, i.e. deter some people from anti-social behaviour (ASB) and street-based lifestyles
- There are concerns that welfare conditionality leads to unintended effects, including: distancing people from support; causing hardship and destitution; displacing rather than resolving issues such as street homelessness and anti-social behaviour (Watts et al. 2014)

6 General CAQDAS
- CAQDAS: Computer Aided Qualitative Data Analysis Software
- Examples: NVivo, MAXQDA, ATLAS.ti, Qualrus
- Design: most rely upon SQL/relational databases with a 'user-friendly' front end (a minimal sketch of this kind of layout follows below)
- Initial fear factor: ugly, dated interfaces; however, once past that they are not as intimidating as they might first seem
- Increasing importance of CAQDAS: the original hostility towards the use of CAQDAS has greatly diminished, and it is now central to handling the data in large-scale qualitative projects
- Team working: hosting the database on a dedicated server enables multi-user simultaneous access to the same project file
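NVivo's actual internal storage format is proprietary, so the following is only a minimal hypothetical sketch of the relational design idea mentioned above, using Python's built-in sqlite3 module; every table and column name is invented for illustration.

```python
# Hypothetical minimal relational layout for CAQDAS-style data (not NVivo's
# real schema): sources, a node hierarchy, and coding references linking spans
# of source text to nodes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sources (
    source_id INTEGER PRIMARY KEY,
    name      TEXT,      -- e.g. 'WSU_wave1_interview_07'
    folder    TEXT,      -- e.g. 'Sources/WSU'
    full_text TEXT
);
CREATE TABLE nodes (
    node_id   INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES nodes(node_id),   -- hierarchy, e.g. '2a' under '2'
    label     TEXT
);
CREATE TABLE coding_refs (                          -- a coded span of a source
    source_id  INTEGER REFERENCES sources(source_id),
    node_id    INTEGER REFERENCES nodes(node_id),
    start_char INTEGER,
    end_char   INTEGER
);
""")

# Retrieval is then a join: everything coded anywhere, with its source and node.
rows = conn.execute("""
    SELECT s.name, n.label, c.start_char, c.end_char
    FROM coding_refs c
    JOIN sources s ON s.source_id = c.source_id
    JOIN nodes   n ON n.node_id   = c.node_id
""").fetchall()
print(rows)   # empty here, but this is the shape of a coding query
```

Hosting a database of this kind on a shared server is what makes simultaneous multi-user access to one project file possible.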

7 Setting up the project file
- Three waves: the structure of case nodes needs to allow both across-case and within-case analysis
- Folders: as in Windows Explorer, folders can be used across the majority of sections to separate items and provide structure, for example separate KI, FG and WSU folders in both the 'Sources' and 'Nodes' sections
- Auto-coding: a quick way to separate out what the interviewer and the participants said; it can also be used to quickly code more structured data such as tweets scraped from Twitter, and there are interesting developments in pattern-based auto-coding (a rough sketch of the speaker-splitting idea follows below)
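Auto-coding by speaker in NVivo works from the formatting of the transcript itself; the stand-alone sketch below reproduces only the general idea, splitting a plain-text transcript into turns and grouping them by speaker. The "NAME:" turn format and the example lines are assumptions, not project data.

```python
# Rough sketch of speaker-based auto-coding: split a plain-text transcript into
# turns and group them by speaker label. The "NAME:" layout is an assumption.
import re
from collections import defaultdict

transcript = """\
INTERVIEWER: Can you tell me about your last appointment?
PARTICIPANT: It was stressful, I was worried about being sanctioned.
INTERVIEWER: What happened afterwards?
PARTICIPANT: I had to borrow money from my sister to get by.
"""

# Each turn starts with an upper-case speaker label followed by a colon.
turn_pattern = re.compile(r"^([A-Z][A-Z ]+):\s*(.*)$", re.MULTILINE)

coded_by_speaker = defaultdict(list)
for speaker, utterance in turn_pattern.findall(transcript):
    coded_by_speaker[speaker.strip()].append(utterance.strip())

# Everything each speaker said is now gathered in one place, so the
# participant's words can be analysed separately from the questions.
for speaker, turns in coded_by_speaker.items():
    print(speaker, "->", turns)
```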

8 Classifications
- Attributes: ways of 'tagging' sources and nodes with information that applies to the whole node
- Classifications: groupings of attributes that apply to particular groups, for example one classification for attributes relating to policy makers and another for attributes relating to service users (see the sketch below)
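As a concrete, entirely invented illustration of the attribute/classification idea, the sketch below treats a classification as a named set of attributes and gives each case values for them; the classification name, attribute names and values are all hypothetical.

```python
# Minimal sketch of classifications and attributes: a classification names a
# set of attributes, and each case carries values for them. All names and
# values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Classification:
    name: str
    attributes: list[str]          # attribute names shared by this group of cases

@dataclass
class Case:
    case_id: str
    classification: Classification
    values: dict[str, str] = field(default_factory=dict)   # attribute -> value

wsu = Classification("Welfare service user", ["Age group", "Benefit type", "City"])

cases = [
    Case("WSU-01", wsu, {"Age group": "25-34", "Benefit type": "JSA", "City": "Leeds"}),
    Case("WSU-02", wsu, {"Age group": "45-54", "Benefit type": "ESA", "City": "Bath"}),
]

# The attribute values are what later drives re-organisation of the data,
# e.g. pulling out every case in one benefit group.
jsa = [c.case_id for c in cases if c.values.get("Benefit type") == "JSA"]
print(jsa)   # ['WSU-01']
```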

9 Coding for Framework
- Commonly agreed coding schema: a meeting of the team members interested in using framework matrices to agree a shared schema
- Benefits of having a shared schema: coding is done at the second-level nodes (2a, 2b, etc.) but is also aggregated up to the top-level nodes (2, 3, etc.)
- Creation of the matrix: because the top-level nodes contain everything coded below them, they can be used to create the columns in a framework matrix (a minimal sketch of this roll-up follows below)
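The roll-up from second-level to top-level nodes can be pictured as below; the node labels and excerpts are invented, and only the naming convention ('2a', '2b' under '2') is taken from the slide.

```python
# Minimal sketch of hierarchical coding with aggregation: excerpts are coded at
# second-level nodes ('2a', '2b', ...) and the top-level node ('2') is treated
# as the union of everything coded at its children. Labels are invented.
coding = {
    "2a Sanctions - experience":        ["excerpt 1", "excerpt 4"],
    "2b Sanctions - impact on family":  ["excerpt 2"],
    "3a Support - employment advisers": ["excerpt 3"],
}

def aggregate_to_top_level(coding):
    """Roll child nodes such as '2a' and '2b' up to their parent '2'."""
    top_level = {}
    for node, excerpts in coding.items():
        parent = node.split()[0].rstrip("abcdefghijklmnopqrstuvwxyz")   # '2a' -> '2'
        top_level.setdefault(parent, []).extend(excerpts)
    return top_level

print(aggregate_to_top_level(coding))
# {'2': ['excerpt 1', 'excerpt 4', 'excerpt 2'], '3': ['excerpt 3']}
```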

10 Framework matrices
- Summarisation: possible for each participant across the main themes; links to the original data make it possible to move quickly from the summaries to the interview transcripts (a sketch of the matrix idea follows below)
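A framework matrix can be thought of as the grid sketched below: one row per participant, one column per top-level theme, and each cell holding a short summary together with a pointer back to the coded passage. The cases, themes, summaries and character ranges are all invented.

```python
# Minimal sketch of a framework matrix: rows are cases, columns are top-level
# themes, and each cell stores a summary plus a link back to the source data.
from collections import namedtuple

Cell = namedtuple("Cell", ["summary", "source", "chars"])   # link back to the transcript

themes = ["2 Sanctions", "3 Support", "4 Behaviour change"]

matrix = {
    "WSU-01 (wave 1)": {
        "2 Sanctions": Cell("Sanctioned twice; relied on a food bank", "WSU-01_w1.docx", (1040, 1310)),
        "3 Support":   Cell("Adviser seen as helpful but overstretched", "WSU-01_w1.docx", (2205, 2400)),
    },
    "WSU-02 (wave 1)": {
        "2 Sanctions": Cell("Not sanctioned, but in constant fear of it", "WSU-02_w1.docx", (880, 1002)),
    },
}

# Print the grid; empty cells stay blank, and the stored source/character range
# is what allows the jump from a summary back to the transcript itself.
for case, row in matrix.items():
    print(case)
    for theme in themes:
        cell = row.get(theme)
        print(f"  {theme:<20}", cell.summary if cell else "")
```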

11 Search folders and queries
- Re-organisation: search folders can be created based on coding and attribute data, allowing particular groups to be separated out quickly
- Interrogation: queries can combine coding, attribute data, search folders etc. to return particular sections of the interview transcripts (a sketch follows after the summary below)
- Time saving: collections, search folders and queries open up a diverse range of options for exploring the data that would be intensely time-consuming without CAQDAS

Summary
- A hierarchical coding structure with aggregation facilitates coding that is both 'broad-brush' and 'line-by-line' while only having to code through each interview once
- Framework matrices allow the summarisation of cases by 'top-level' themes and, through the associated view, quick navigation from a summary to the particular sections of text that inform it
- Proper management of how the data is prepared, coded and assigned attributes maximises the options for how it can be reorganised and interrogated
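The kind of retrieval such a query performs can be sketched as follows: return the excerpts coded at a given node, restricted to cases whose attribute values match some criteria. The records, node names and attribute values below are all invented.

```python
# Minimal sketch of a query combining coding and attribute data: filter coded
# excerpts by node and by case attributes. Every record here is invented.
excerpts = [
    {"case": "WSU-01", "node": "2 Sanctions", "text": "They stopped my money for four weeks...",
     "attrs": {"Benefit type": "JSA", "City": "Leeds"}},
    {"case": "WSU-02", "node": "2 Sanctions", "text": "I was terrified of being sanctioned...",
     "attrs": {"Benefit type": "ESA", "City": "Bath"}},
    {"case": "WSU-01", "node": "3 Support", "text": "My adviser found me a CV course...",
     "attrs": {"Benefit type": "JSA", "City": "Leeds"}},
]

def query(records, node, **attr_filters):
    """Return excerpts coded at `node` whose case attributes match the filters
    (underscores in keyword names stand in for spaces, e.g. Benefit_type)."""
    hits = []
    for r in records:
        if r["node"] != node:
            continue
        if all(r["attrs"].get(k.replace("_", " ")) == v for k, v in attr_filters.items()):
            hits.append(r)
    return hits

# Everything coded at '2 Sanctions' for JSA claimants:
for r in query(excerpts, "2 Sanctions", Benefit_type="JSA"):
    print(r["case"], "-", r["text"])
```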

