MER Analysis using Fact View Analytic Datasets


Course Logistics Part 1 Notes: This section of the document captures information about the profile of the trainer who should deliver this course and logistics details for its delivery.

Logistics Needs for this Course
Room setup/configuration (theatre style, semi-circle, classroom, etc.): Classroom seating or tables for small groups would work best.
Computer needs for learners (certain programs): Each learner needs a computer with Excel.
Clicker? Presenter's preference.
White board/flip chart/markers? N/A.
Specific materials for interactive exercises (i.e., post-its, pens, markers, tape, etc.): Printouts of the beginner's or advanced exercises are helpful so that learners don't have to view them on their computers while trying to work in Excel.
Notes: This slide is intended to capture logistics information for the delivery of this course.

Software/Program/Technical Needs for this Course
Software (what programs should learners have downloaded on their computers for this course, e.g., GIS software, Excel): Excel. Some older versions of Excel may not support the 'slicer' feature referenced in the exercises; if this is the case, you can use 'filters' in Pivot Tables instead of 'slicers'.
Access (what logins to PEPFAR-supported systems, datasets, etc., should learners have for this course): Sample training datasets are provided and will be available on the DATIM support / PALS training pages.
Additional technical requirements for learners to participate: Basic familiarity with Excel.
Notes: This slide is intended to capture software/program/technical information for the delivery of this course.

Trainer Profile
Level of familiarity with subject matter (describe what types of tasks or areas of knowledge the trainer should be comfortable with in order to deliver this course): Trainer should have strong Excel skills (especially with Pivot Tables) and a clear understanding of MER indicators, including the data element structure of MER indicators.
Recommended years of experience with the PEPFAR program: 1+
Recommended certifications (what certifications should the trainer have to deliver this course?): Not applicable.
Specific skills required (does the trainer require a certain level of proficiency with PEPFAR systems or other applications? If so, describe the types of skills/tasks the trainer should be able to perform in the system): Trainer should have experience using Fact View datasets to analyze data and should be an advanced Excel user.
Access needs to PEPFAR systems (to teach this course, does the trainer require a certain type of account or level of access to a PEPFAR information system?): No, though in order to download regular Fact View datasets (rather than the sample training dataset provided for this course), the trainer would need access to pepfar.net or Panorama.
Notes: This slide is intended to capture information about the skillset a person would need to deliver this training in the future.

Why Should I Care About Fact View Datasets? They have features that make visualizations and analysis easier in Excel. Starting in Q4, Genie should be able to export pre-approved Site x IM data in the same format as the Fact View Datasets (note: FY17 result and target periods).

Am I in the Right Session? This session is best suited for those who have familiarity with MER 2.0 indicators and data elements. If you aren't familiar with Excel Pivot Tables but already feel like a pro with all things DATIM, it's okay to stay in this session: there is a beginner walkthrough of how to use Pivot Tables in Excel for anyone unfamiliar with them.

Agenda: 1. Understanding Fact View Datasets: Overview, Structure, Special Features, Accessing Datasets – 30 minutes. 2. Hands-On Exercises Using Fact View Datasets – 2 hours. Note: 30 minutes is a very quick overview that would require you to skip over a number of things (or talk about them very briefly). In week 2 of PALS we decided to cover the introduction portion very quickly (allowing people to review this slide deck later on their own) so that they could have more time for hands-on practice with the exercises. Consider how you want to spend time with your participants and adjust these times accordingly.

Session Learning Objectives: At the end of today's session participants will be able to: know where to download Fact View Datasets and supporting documentation; understand the structure and unique features of Fact View Datasets; and correctly create pivot tables in Excel using Fact View Datasets.

Which group is a good fit for me? Determine what best describes your user level and comfort. Beginner: I need to learn how to create a well-structured pivot table using the Fact View datasets. Advanced: I can create pivot tables but would like to practice answering analytic questions using the Fact View datasets. In week 2 of the PALS Training we split the group into beginner and advanced breakout sessions depending on their comfort level with Excel.

Downloadable Files. You'll need to download files for the hands-on section of this session. Everyone: FactView_PSNU_IM_Training_Dataset (Excel file). Beginner: Beginner's Guide to Using FV Datasets; FactView_Workbook_PSNU_IM_Beginner (Excel file). Advanced: Advanced Scenario Questions; FactView_Workbook_PSNU_IM_Advanced (Excel file). Everyone should download the FactView_PSNU_IM_Training_Dataset. This is a training dataset that includes data from a small number of PSNUs in a single country; to protect sensitive data, IM names and IM numbers have been masked. Then download additional files based on the user's comfort level with Excel and Pivot Tables. The two Workbook_PSNU_IM files contain the answers/completed pivot tables from the exercises. If you want to prevent learners from peeking at the answers before trying to solve the scenario questions themselves, you could either add a password to the workbook or only provide the answer workbooks to participants at the end of the session.

Part I: Background About Fact View Datasets

Overview

Accessing Data: Primary Options
Panorama – Web-based platform for quarterly data reviews. Benefits: standardized views; calculated indicators available; a GREAT first place to look at MER data. Potential limitations: limited ability to do customized analyses; available 1 week after the period closes.
DATIM (pivot table, visualizer) – Tool for exporting detailed, site-level datasets for further manipulation. Benefits: access to pre-approved data (starting Q4); data manipulation possible; can create favorites and share data views. Potential limitations: requires knowledge of how to combine and manipulate data; calculated indicators not included.
ICPI Fact View Datasets – Data text files that can be imported into statistical software or Excel. Benefits: fully customizable analyses; calculated indicators and features to improve analytics; data in a standardized format that is easy to use in Excel pivot tables.
There are a number of different ways to access your data. We've listed three of the primary ones here (though we did not include Genie).

Accessing Data: Primary Options. There are 5 downloadable, pre-structured datasets containing MER data: Implementing Mechanism (IM); Priority Sub-National Units (PSNU); PSNU x IM (one per OU); Site x IM (one per OU); and NAT & SUBNAT. The IM dataset and the PSNU dataset are global datasets, so they contain data from all countries. We do produce a global PSNU x IM dataset that is primarily used by ICPI analysts to create Excel tools for specific program areas. At the PSNU x IM and Site x IM levels we produce individual datasets for each OU.

When are Fact View Datasets Released? Twice per quarter (aligned with Panorama refreshes): after initial data entry for a quarter closes, and after data cleaning/deduplication closes. FV datasets are released about one week after a data entry/dedup period closes. We plan to release Fact View datasets twice per quarter, in line with the PEPFAR reporting calendar: once after the initial data entry for a quarter closes and again after the cleaning/deduplication period closes. It takes about a week to pull the data and check for consistency across the Fact View datasets, Panorama, and final.datim, so Panorama and the Fact View Datasets are generally both released a week after the data entry or cleaning/deduplication period closes. Keep that in mind when you're scheduling meetings to talk about new data.

FY2017 PEPFAR Data Calendar (version: 15 Dec 2016), by quarter in FY17. As mentioned, we are able to release datasets about a week after a data entry period closes. The initial Q4 data submission period closes on November 15th, which means you can expect Fact View Datasets to be available on the 22nd. Please note that it may take ICPI one extra day to get all of the Site x IM datasets posted online. Detailed view of Quarter 4 in FY17.

What’s in a Fact View Dataset?

Overview of Structure Depending on your audience you may want to go very quickly through the “Overview of Structure” and “Data Structure” sections. Consider what, if anything you want to explicitly cover in your talk. If someone is very new to Fact Views you may want to spend some time walking them through the structure and parts of the Fact View Datasets. For other audiences, it may make sense to go quickly through a few slides in this section and focus more on the Special Features of Fact View Datasets (calculated indicators, standardized disaggregates & MCAD)

What Does Each Dataset Contain?
Where? orgUnitUID Region RegionUID OperatingUnit OperatingUnitUID CountryName SNU1 SNU1uid PSNU PSNUuid FY16SNUPrioritization FY17SNUPrioritization typeMilitary CommunityUID Community FY16CommunityPrioritization FY17CommunityPrioritization TypeCommunity FacilityUID Facility FY16FacilityPrioritization FY17FacilityPrioritization TypeFacility
Who? MechanismUID PrimePartner FundingAgency MechanismID ImplementingMechanismName
What? dataElementUID Indicator numeratorDenom indicatorType disaggregate standardizedDisaggregate categoryOptionComboUID categoryOptionComboName Age Sex resultStatus otherDisaggregate coarseDisaggregate modality tieredSiteCounts typeTieredSupport isMCAD
When? FY2015Q2 FY2015Q3 FY2015Q4 FY2015APR FY2016_TARGETS FY2016Q1 FY2016Q2 FY2016Q3 FY2016Q4 FY2016APR FY2017_TARGETS FY2017Q1 FY2017Q2 FY2017Q3
The four MER datasets – IM, PSNU, PSNU x IM, and Site x IM – each contain different information. We've grouped each variable according to which dataset(s) contain it: Green – all datasets; Blue – PSNU, PSNU x IM, and Site x IM (NOT the IM dataset); Gray – IM, PSNU x IM, and Site x IM (NOT the PSNU dataset); Pink – Site x IM dataset only.

Where? orgUnitUID Region RegionUID OperatingUnit OperatingUnitUID CountryName SNU1 SNU1uid PSNU PSNUuid FY16SNUPrioritization FY17SNUPrioritization typeMilitary CommunityUID Community FY16CommunityPrioritization FY17CommunityPrioritization TypeCommunity FacilityUID Facility FY16FacilityPrioritization FY17FacilityPrioritization TypeFacility Green – All datasets Blue – PSNU, PSNU x IM, and Site x IM (NOT IM dataset) Gray – IM, PSNU x IM, and Site x IM (NOT PSNU dataset) Pink – Site x IM dataset only

What? dataElementUID Indicator numeratorDenom indicatorType disaggregate standardizedDisaggregate categoryOptionComboUID categoryOptionComboName Age Sex resultStatus otherDisaggregate coarseDisaggregate modality tieredSiteCounts typeTieredSupport isMCAD Green – All datasets Blue – PSNU, PSNU x IM, and Site x IM (NOT IM dataset) Gray – IM, PSNU x IM, and Site x IM (NOT PSNU dataset) Pink – Site x IM dataset only

Who? MechanismUID PrimePartner FundingAgency MechanismID ImplementingMechanismName Green – All datasets Blue – PSNU, PSNU x IM, and Site x IM (NOT IM dataset) Gray – IM, PSNU x IM, and Site x IM (NOT PSNU dataset) Pink – Site x IM dataset only

Understanding the Fact View Structure Before doing analysis, familiarize yourself with each data column and how the dataset is structured The User’s Guide and Data Dictionary provides descriptions & cautions about each variable We strongly suggest that you download and take a look at the User’s Guide and Data Dictionary which is posted along with the datasets at each release. The data dictionary portion does describe each variable in detail and it provides some special notes and comments that will be useful – especially for people who are new to using the Fact View Datasets.

Data Structure Depending on your audience you may want to go very quickly through the “Overview of Structure” and “Data Structure” sections. Consider what, if anything you want to explicitly cover in your talk. If someone is very new to Fact Views you may want to spend some time walking them through the structure and parts of the Fact View Datasets. For other audiences, it may make sense to go quickly through a few slides in this section and focus more on the Special Features of Fact View Datasets (calculated indicators, standardized disaggregates & MCAD)

Deeper Dive 1: Org hierarchy

Understanding the Fact View Structure Structured organizational hierarchy PSNU datasets have Region > OU > Country > SNU1 > PSNU so that you can drill down geographically Site x IM datasets also include unique identifiers (UIDs) for both Community & Facility level

Understanding the Fact View Structure Structured organizational hierarchy OU x IM datasets include Prime Partner / Funding Agency / IM as well as geographic info down to country-level PSNU x IM dataset includes both sets of hierarchies

Deeper Dive 2: The Data Element Approach

MER 2.0: Guidance to Data Entry. This slide shows how the MER 2.0 Indicator Reference sheets map/correspond to the data entry screens in DATIM (panels: MER 2.0 Indicator Reference; DATIM Data Results Entry).

MER 2.0: Data Entry to Fact View. This slide shows how the DATIM data entry screens map to the way that the Fact View Datasets are laid out (panels: DATIM Data Results Entry; Fact View Dataset).

Data Elements in DATIM vs Fact View. Example DATIM data element name: HTS_TST (N, DSD, Index/Aggregated Age/Sex/Result) TARGET: HTS received results. Its parts map to the Fact View columns: indicator, indicator type, result/target, numeratorDenom, and disaggregate label. Here you can see exactly how the DATIM data element names map to how the ICPI Fact View Datasets are structured.

ICPI Fact View Datasets: Data Element Structure – Indicator. (Example element: HTS_TST (N, DSD, Index/Aggregated Age/Sex/Result) TARGET: HTS received results.) The indicator column in Fact View datasets includes the standard indicators from DATIM plus additional calculated indicators (e.g., HTS_TST_NEG, HTS_TST_POS, PMTCT_STAT_POS).

ICPI Fact View Datasets: Data Element Structure – numeratorDenom. The numeratorDenom column specifies Numerator/Denominator (N/D) plus lab-specific values: X – Perform Test; P – Perform Test & Participate in PT; S – Perform Test, Participate & Pass PT. Always filter by numeratorDenom to avoid errors. If you're working with an indicator that has both numerators and denominators, always specify whether N or D should be used (by including the numeratorDenom variable as a filter or as a row/column in your pivot table). As a best practice, get into the habit of ALWAYS including numeratorDenom in your pivot table (as a filter or as a row/column); if you include it in EVERY pivot table, you will never accidentally forget it when analyzing an indicator that has both a numerator and a denominator.

ICPI Fact View Datasets: Data Element Structure – indicatorType. Indicator Type specifies Direct Service Delivery (DSD) vs. Technical Assistance (TA). You can analyze or filter by DSD/TA. If you do not include Indicator Type in your pivot table or chart, your data will automatically aggregate/sum to include both DSD and TA.

ICPI Fact View Datasets: Data Element Structure – disaggregate and standardizedDisaggregate. The disaggregate column matches what is listed in DATIM. The StandardizedDisaggregate column is unique to the Fact View Datasets and is used to make analysis of HTS_TST easier. *The Special Features section explains the StandardizedDisaggregate in detail. As a reminder, it is a special column that ICPI created to help facilitate analysis of MER 2.0 HTS_TST across service delivery modalities. The Disaggregate and the Standardized Disaggregate are identical for all indicators except MER 2.0 HTS_TST, HTS_TST_POS, and HTS_TST_NEG. If you use the normal disaggregate names like VCT/AgeAboveTen/Sex/Result and MobileMod/AgeAboveTen/Sex/Result, it can be very difficult to either a) compare across modalities or b) analyze testing data regardless of modality. With the Standardized Disaggregate, you can just pull in Modality/AgeAboveTen/Sex/Result to look at all modalities of tests done for people over age 10. Note that you can still use the "Modality" column to add service delivery modality as a row/column to your tables or graphs.

Disaggregate Structure. Separate columns for each part of the Category Option Combo Name (age, sex, result status, other disaggregate, and modality) can make the data easier to analyze. For HTS, testing modality (e.g., Index, Inpatient) is listed under the "OtherDisaggregate" column for MER 1.0 (i.e., FY15–16 results as well as FY17 targets); for MER 2.0 (FY17 results), testing modality is listed under the "Modality" column.

Disaggregate Structure. Users MUST filter by either Disaggregate or Standardized Disaggregate to avoid double counting. For example, if the disaggregate is not specified, you would triple count data from the Total Numerator, Age/Sex, and MostCompleteAgeDisagg disaggregates. Whenever you're working with Fact View Datasets you must ALWAYS include either the Disaggregate or the Standardized Disaggregate (you don't need to use both at the same time). If you do not include one of these, you will get the wrong numbers when analyzing data.

Additional Flags: CoarseDisaggregate = TRUE or null (MCAD does not trigger a TRUE in this column); isMCAD = Y or N. If you're doing analysis on HTS_TST, TX_CURR, or TX_NEW it can be helpful to include isMCAD as a filter variable. If you set up filters that 1) set CoarseDisaggregate = TRUE and 2) set isMCAD = N, then you will be left with the fine age/sex disaggregates for HTS_TST, TX_CURR, and TX_NEW.

Special Features: Calculated Indicators, MCAD, Standardized Disaggregate

Calculated Indicators & Values

Calculated Indicators. Calculated indicators are created to make analysis of complicated PEPFAR indicators simpler. They automatically aggregate data by a specific grouping (e.g., positive or negative) within a disaggregate. Each calculated indicator is listed under the "Indicator" column as if it were a regular MER indicator.

HTS & TB calculated indicators: HTS_TST_POS, HTS_TST_NEG, TB_STAT_POS, TB_STAT_POS_NEWLYIDENTIFIED_POSITIVE, TB_STAT_POS_KNOWNATENTRY_POSITIVE, TB_STAT_NEG_NEWLYIDENTIFIED_NEGATIVE. Now we'll go through the full list of calculated indicators included in the Fact View Datasets. Users who are interested in analyzing data using calculated indicators can use the rows with the names listed above in the indicator column. You'll notice that for HTS_TST we have created two calculated indicators – one using the Positive result status and one using the Negative result status. You can use these to calculate testing yield with the formula HTS_TST_POS / (HTS_TST_NEG + HTS_TST_POS), as sketched below. Each calculated indicator still has its own disaggregates, so you can look at TB_STAT_POS_NEWLYIDENTIFIED_POSITIVE by age and sex bands without having to set up extremely complicated tables.
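As a minimal sketch (not the official ICPI code) of the yield formula above applied outside Excel: it assumes a Fact View txt file with the column names listed in the data dictionary (Indicator, numeratorDenom, standardizedDisaggregate, PSNU, FY2017Q3) and the "Total Numerator" disaggregate value; adjust names to match your release.

```python
import pandas as pd

# Load the tab-delimited training dataset (file and column names assumed).
fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", low_memory=False)

# Keep only the two calculated HTS indicators, numerator rows, Total Numerator disaggregate.
hts = fv[
    fv["Indicator"].isin(["HTS_TST_POS", "HTS_TST_NEG"])
    & (fv["numeratorDenom"] == "N")
    & (fv["standardizedDisaggregate"] == "Total Numerator")
]

# Sum Q3 results by PSNU and indicator, then apply yield = POS / (POS + NEG).
q3 = hts.pivot_table(index="PSNU", columns="Indicator", values="FY2017Q3", aggfunc="sum")
q3["yield"] = q3["HTS_TST_POS"] / (q3["HTS_TST_POS"] + q3["HTS_TST_NEG"])
print(q3.sort_values("yield", ascending=False).head())
```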

PMTCT PMTCT_ART PMTCT_EID NUMERATOR PMTCT_EID_POS PMTCT_EID_LESS_EQUAL_TWO_MONTHS PMTCT_EID_TWO_TWELVE_MONTHS PMTCT_STAT_POS PMTCT_STAT_KNOWNATENTRY_POSITIVE PMTCT_STAT_NEWLYIDENTIFIED_POSITIVE PMTCT_STAT_NEWLYIDENTIFIED_NEGATIVE PMTCT_STAT DENOMINATOR

OVC & KP OVC_HIVSTAT NUMERATOR OVC_HIVSTAT_POS OVC_HIVSTAT_NEG OVC_SERV NUMERATOR OVC_SERV_OVER_18 OVC_SERV_UNDER_18 KP_PREV NUMERATOR

APR Calculated Values. APR totals are calculated for all indicators according to MER guidance. Summed annual APR totals (e.g., HTS_TST): APR = Q1 + Q2 + Q3 + Q4. Snapshot annual APR totals (e.g., TX_CURR): APR = Q4. Another great feature of the Fact View Datasets is that we have calculated APR year-end totals for every indicator according to the 2017 MER Guidance. Where an indicator's APR is calculated by adding together the data from each quarter, we have done that. Where the APR value is a snapshot at the end of the year, like for Treatment Current, we have set APR = the Q4 value.

Most Complete Age-Sex Disaggregate (MCAD)

What’s MCAD? MCAD is a calculated disaggregate used for HTS_TST, TX_NEW, and TX_CURR These are the indicators w/ both Fine & Coarse age-sex disaggregates MCAD selects the most complete disaggregate (either Fine or Coarse) that was entered from each Site-IM DSD/TA level & combines them into a single, new calculated disaggregate <15/15+ and Male/Female/Unknown Sex Disaggregate column: contains “MostCompleteAgeDisagg” Ex. VCT/MostCompleteAgeDisagg The Most Complete Age-Sex Disaggregate (MCAD) is a calculated disaggregate used for HTS_TST, TX_NEW, and TX_CURR (all indicators that have both Fine and Coarse age-sex disaggregates). Future iterations of the Fact View Datasets may incorporate the MCAD for OVC_SERV as well. The MCAD selects the most complete disaggregate (either Fine or Coarse) that was entered from each Site-IM DSD/TA level and then combines them into a single, new calculated disaggregate (displayed as <15/15+, Male/Female, as well as Positive/Negative for HTS_TST). The MCAD algorithm is run against results and targets reported in each quarter since FY15 Q2. Users who are interested in analyzing data using the MCAD can utilize the rows with “MostCompleteAgeDisag” listed in the disaggregate column (for HTS_TST some MCADs begin with the name of the testing modality, e.g., “VCT/MostCompleteAgeDisagg”). MCAD data rows should be excluded if the values are not being used in order to avoid data duplication – this can be done by utilizing the “isMCAD” column. TIP – Panorama now uses the MCAD though it isn’t labeled as such. In Panorama if you filter on “COARSE” what you’re actually getting is the MCAD. So while this may sound confusing, you’ve probably already been using the MCAD without even knowing it!

Why MCAD? MCAD is useful when: neither the Fine nor the Coarse disaggregate is complete (some site/IMs entered Coarse and others entered Fine); or you want to analyze data across time but the most complete disaggregate (Fine/Coarse) for an indicator changed across reporting periods (e.g., IMs reported Coarse in Q1 then switched to Fine in Q2). Of coarse I do!! You complete me! It is particularly useful when: 1. Neither the Fine nor Coarse disaggregate is complete because some sites/IMs entered Coarse data while others entered Fine data, or 2. The most complete disaggregate (Fine or Coarse) for an indicator has changed across reporting periods (e.g., Coarse was used in Q1 and then IMs began reporting Fine age/sex disaggregates in Q2). Users who are interested in analyzing data using the MCAD can use the rows with "MostCompleteAgeDisagg" listed in the disaggregate column (for HTS_TST some MCADs begin with the name of the testing modality, e.g., "VCT/MostCompleteAgeDisagg"). MCAD data rows should be excluded when their values are not being used in order to avoid data duplication.

MCAD Logic and Algorithm (Fine vs. Coarse). 1. When there is a total numerator, MCAD picks either Fine OR Coarse depending on which is closer to the Total Numerator value, and privileges Fine if Fine = Coarse. [For reference: if abs(N – F) <= abs(N – C) then Fine; if abs(N – F) > abs(N – C) then Coarse.] 2. If there is no total numerator, MCAD picks either Fine OR Coarse depending on which is the higher value, and privileges Fine if Fine = Coarse. [For reference: if abs(F) >= abs(C) then Fine; if abs(F) < abs(C) then Coarse.] In the case of dedups, the absolute value is taken for comparison: abs(Fine) vs abs(Coarse), compared against abs(Total Numerator). For further details, read the User's Guide and Data Dictionary. A minimal sketch of this rule appears below.
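The sketch below illustrates the selection rule as quoted on this slide for a single site-IM-DSD/TA combination; it is not ICPI's production algorithm. Here fine_total, coarse_total, and total_numerator stand for the summed Fine, Coarse, and Total Numerator values.

```python
def pick_mcad(fine_total, coarse_total, total_numerator=None):
    """Return which disaggregate ("Fine" or "Coarse") MCAD would keep."""
    # Per the dedup note above, absolute values are used for the comparison.
    F, C = abs(fine_total), abs(coarse_total)
    if total_numerator is not None:
        # Rule 1: pick whichever is closer to the Total Numerator; ties go to Fine.
        N = abs(total_numerator)
        return "Fine" if abs(N - F) <= abs(N - C) else "Coarse"
    # Rule 2: no Total Numerator reported; pick the higher value, ties go to Fine.
    return "Fine" if F >= C else "Coarse"

# Example: Fine sums to 95, Coarse to 100, Total Numerator is 100 -> Coarse is kept.
print(pick_mcad(95, 100, 100))
```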

Standardized Disaggregate for HTS_TST

Standardized Disaggregate. The StandardizedDisaggregate column facilitates analysis of MER 2.0 HTS_TST. It is identical to the Disaggregate column for all indicators except MER 2.0 (i.e., FY17 results) HTS_TST, HTS_TST_POS, and HTS_TST_NEG. The Standardized Disaggregate is a special column that ICPI created to help facilitate analysis of MER 2.0 HTS_TST across service delivery modalities. If you use the normal disaggregate names like VCT/AgeAboveTen/Sex/Result and MobileMod/AgeAboveTen/Sex/Result, it can be very difficult to either a) compare across modalities or b) analyze testing data regardless of modality. With the Standardized Disaggregate, you can just pull in Modality/AgeAboveTen/Sex/Result to look at tests done for people over age 10 across all modalities. Note that you can still use the "Modality" column to add service delivery modality as a row/column to your tables or graphs, as in the sketch below.
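As a hedged illustration of that pattern (not taken from the official guidance), the sketch below filters on the standardized disaggregate named above and then breaks Q3 results out by the modality and resultStatus columns; file, column, and value names are assumptions based on the data dictionary.

```python
import pandas as pd

fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", low_memory=False)

# One standardized disaggregate covers every service delivery modality for ages 10+.
hts = fv[
    (fv["Indicator"] == "HTS_TST")
    & (fv["numeratorDenom"] == "N")
    & (fv["standardizedDisaggregate"] == "Modality/AgeAboveTen/Sex/Result")
]

# Break the same rows out by modality and result status.
print(hts.pivot_table(index="modality", columns="resultStatus",
                      values="FY2017Q3", aggfunc="sum"))
```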

Standardized Disaggregate This table can be found in the User’s Guide and Data Dictionary. Of note, Data from PMTCT ANC and VMMC Age/Result disaggregates were distributed across the Standardized Disaggregates according to the appropriate age bands. *Note that both the Malnutrition and Pediatric testing modalities are considered Coarse Age Bands. This means that if you want to analyze malnutrition or pediatric testing data then you should use either “Modality/AggregatedAge/Sex/Result” or “Modality/MostCompleteAgeDisagg”. Malnutrition and Pediatric modalities are NOT included in the “Modality/AgeLessThanTen/Result” standardized disaggregate.

Special Considerations for Site x IM Datasets IMPORTANT NOTE – These Special Considerations for the ICPI Site x IM Fact View Datasets ONLY apply to the ICPI Fact View Datasets. These do NOT apply to the Site x IM exports from DATIM Genie.

Site x IM Dataset Particularities The Site x IM dataset contains the most granular data of all the ICPI Fact View datasets Due to the level of detail, security measures have been implemented to protect sensitive data It is essential to understand these measures and the limitations of the dataset BEFORE proceeding with analysis Note: These apply only to the ICPI Fact View Datasets, they do NOT apply to the Site x IM exports from DATIM Genie.

Site Names. PROBLEM: Some site names reveal KP or military information (e.g., Tank Battalion #36 Clinic, Sex Worker Clinic). SOLUTION 1: Sensitive site names are masked in the dataset. SOLUTION 2: KP_PREV, KP_MAT, and KP disaggregate data are removed from the dataset. RATIONALE: Prevents datasets containing sensitive information from being inadvertently shared. The KP disaggregates for HTS_TST & TX_NEW are not included in the ICPI Site x IM Fact View Dataset. Note: This applies only to the ICPI Fact View Datasets; it does NOT apply to the Site x IM exports from DATIM Genie.

DOD Data. PEPFAR reporting guidance on DOD data: DOD-funded military sites should be reported at the _MIL SNU; DOD-funded civilian sites should be reported at the site level. PROBLEM: Contrary to guidance, some DOD military sites were still reported at the site level in DATIM. These cases cause security concerns.

DOD Data. TEMPORARY SOLUTION: All DOD-funded military sites were rolled up into the _MIL SNU. RATIONALE: Prevents datasets containing sensitive information from being inadvertently shared. We are working on a long-term solution through DATIM & DOD POCs. In order to protect the names and locations of sensitive military sites, the Site x IM Fact View Datasets combine (roll up) all DOD-funded military sites into the PSNU value _military (operating unit name). This is in accordance with how DOD-funded military data should be entered into DATIM by country teams. However, it is not in accordance with how data is displayed and aggregated in DATIM, DATIM Genie, Final.Datim, Panorama, or the other Fact View Datasets. DOD-funded civilian sites and military sites funded by other agencies were not rolled up to the _MIL level. Please contact your DOD POC for questions on site-level data for military programs.

IMPLICATION: Users should not aggregate the site-level data in the ICPI Site x IM dataset up to the associated PSNUs. If summed, some PSNU totals may not match what's listed in other data sources (Panorama, other Fact View Datasets, DATIM, or DATIM Genie).

Magnitude of Impact. There are some instances where the number of OUs impacted is higher than the number of PSNUs impacted. This can occur when an indicator that is reported at the OU level is impacted.

Reminder About GENIE Exports. The DATIM team is developing a new GENIE export structured like the Site x IM Fact View Datasets. Available starting FY17 Q4. Contains only results and targets from FY17. Does not include APR calculations. Will include calculated indicators, MCAD & standardized disaggregates. There is a ~24-hour delay between when data are entered and when they are available. The special Site x IM considerations (re: military & KP data) will not apply to Genie exports; data will appear as reflected in DATIM. The DATIM team has developed a new GENIE export option that will allow you to export datasets containing unapproved data – these new exports will have the exact same structure as our Site x IM Fact View Datasets. They have fixed previous size limitations on Genie pulls so that you can export an entire dataset at once (rather than large countries having to do 40 different Genie exports and then stitch them together). One key difference is that, unlike our Fact View Datasets, the Genie exports will only have target and result data from FY17 (so no targets or results from FY16 or earlier). But the exports will include all of our calculated indicators, the MCAD, and the standardized disaggregate. We should point out that the exports won't be 100% real-time; there will be a ~24-hour delay between when a partner enters data and when it is available in the Genie export. That's because the data isn't coming directly out of DATIM; it is being pulled from the PEPFAR Data Hub so that calculations and manipulations can be done to create the calculated indicators, MCAD, and standardized disaggregate.

Data Sharing Guidance These Data Sharing Guidance and Supporting Documentation sections are good resources for participants to be able to see, but could be skipped over or covered quickly to allow for more hands-on exercise time

Data Sharing Policy Fact View datasets are for use within the PEPFAR USG community If sharing with implementing partners, only their own site level data should be shared with the IP Sensitive key population or military data should not be shared outside of the PEPFAR USG country teams If you need to share data with external partners, it is recommended that you use data available through https://data.pepfar.net/ Sharing policy The ICPI FACT View datasets are for use within the PEPFAR USG community. For data sharing with external partners and host country governments, we recommend using the data available through https://data.pepfar.net/. For sharing with implementing partners, only their own site level data should be shared with the IP. For better understanding of programmatic areas and context outside of an individual implementing partner, PSNU data should be used. Please be aware that datasets can include sensitive key population or military data that should not be shared outside of the PEPFAR USG country teams.  Specifically, the site level data does not include site names to ensure that no key population or military data site names are inadvertently released. Site names can be found in DATIM, by using the site unique identifying number. For additional information, please contact your SI advisor. 

Access & Supporting Documentation

What’s Available? Supporting Documentation

Documentation Included with Each Release User’s Guide & Data Dictionary How to access & import into Excel properly Known nuances (general reasons why Fact Views & Panorama don’t always match DATIM exactly) Column by column explanation of data Minimal changes made each release Release Notes New passwords for datasets are created each release Written summary of validation/consistency checks for current release (actual differences between Fact View, Panorama & final.DATIM) Consistency Check “Cheat Sheet” ICPI Fact View Analytic Datasets, supporting documentation, and trainings can be downloaded on PEPFAR.net or Panorama

Validation & Consistency Checks Comparison of current Fact View Dataset to: Previous Fact View Dataset (useful to see what changed during the cleaning period) Final.DATIM (to assess system consistency) Panorama (to assess system consistency) Results found in Consistency Check “Cheat Sheet” Additional information about how to interpret the Consistency Check “Cheat Sheet” can be found in the forthcoming training: ‘Consistency Checks and Validation Process for ICPI’s Fact View Datasets’

How to Access the Datasets There are two ways to access ICPI Fact View Datasets – 1) on pepfar.net or 2) in Panorama. FV datasets are posted to pepfar.net first by ICPI. Panorama developers then download them and add them to Panorama.

Accessing via PEPFAR.net Log into PEPFAR.net Navigate to the ICPI Fact View Datasets: Home > HQ > Interagency Collaborative for Program Improvement (ICPI) > Shared Documents > ICPI Data Store > MER > “ICPI Fact View – September 22 2017” (or most recent date) Using the dropdown arrow next to the dataset you’re interested in (e.g., “ICPI_Fact_View_PSNU_IM_20170922_v1_1_[OU name]”), select download a copy. Download & open the “ICPI_Fact_View_Release_Notes_20170922.” This document contains a password for the zip file In order to unzip the file users must use the password that is located in the Release Notes that are published each release.

Accessing via PEPFAR.net – cont’d

Accessing via Panorama Login to Panorama, navigate to the home page, and click “Download Files/Links” in the bottom left of the page.

Accessing via Panorama – cont’d Click on the “Download Data Files” link Select “Analytic Data Sets and Guidance” Open the “ICPI Fact View Release Notes” (contains the passwords for each zip file) Download the dataset(s) of interest

Opening the Password Protected Files Once your Fact View Dataset is downloaded, navigate to the zipped folder on your computer. Right click the zipped folder, select “Extract All” Select a destination filepath for the dataset and click “Extract” Enter the password found in the Release Notes document and press “ok” Now that your file is unzipped, you can import the txt file into Excel (or another stats package) If you forget to extract (or unzip) the zip file then you will not be able to import the data into Excel.

How to Import the Datasets into Excel

Importing Data into Excel. The PSNU (global), some PSNU x IM, and some Site x IM datasets are too large to import into Excel; you must use other statistical software (R, SAS, STATA, SPSS). See the Word document "Code for Manipulating ICPI Fact View Datasets in Statistical Packages", which provides code for R, SAS, and STATA to create smaller files (by OU or indicator) that can be imported into Excel. Pepfar.net: Home > HQ > Interagency Collaborative for Program Improvement (ICPI) > Shared Documents > ICPI Data Store > MER. The NAT/SUBNAT, OU x IM, and most PSNU x IM and Site x IM files for each OU can be imported directly into Excel. The global PSNU file, some PSNU x IM files, and some OUs' Site x IM files are too large to import directly into Excel. ICPI has posted the Word document "Code for Manipulating ICPI Fact View Datasets in Statistical Packages" on pepfar.net, which provides instructions on how to import and trim the file size of Fact View Datasets in various statistical packages (R, SAS, and STATA); a Python sketch of the same idea is shown below. Once the size of the files is reduced (by eliminating extraneous OUs or indicators), the datasets can be exported and then opened in Excel. If you do not have access to stats software in country, contact your SI Advisor or your agency SI POCs for assistance in reducing file size – if you provide information about which indicators, disaggregates, etc. you want, someone at HQ should be able to help get you a file to use in Excel.
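The official document covers R, SAS, and STATA; the sketch below shows the same idea in Python/pandas under assumed file, column, and filter names: read the tab-delimited file in chunks and keep one OU and a handful of indicators so the result is small enough to open in Excel.

```python
import pandas as pd

KEEP_OU = "Tanzania"                                   # assumed example OU
KEEP_INDICATORS = {"HTS_TST_POS", "HTS_TST_NEG", "TX_NEW", "TX_CURR"}

chunks = []
# Read the large tab-delimited txt file in pieces so it never has to fit in memory at once.
for chunk in pd.read_csv("ICPI_FactView_Site_IM.txt", sep="\t",
                         dtype=str, chunksize=500_000):
    mask = (chunk["OperatingUnit"] == KEEP_OU) & chunk["Indicator"].isin(KEEP_INDICATORS)
    chunks.append(chunk[mask])

# Write a much smaller tab-delimited file that Excel can import directly.
pd.concat(chunks).to_csv("FactView_subset_for_Excel.txt", sep="\t", index=False)
```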

Data Detective… What do you notice about this data?

HELP! Why Does My Age Column Have Dates? If you open a Fact View Dataset directly in Excel from a txt file, Excel will think that some age ranges are dates: 10-14 becomes Oct 14. A quick way to avoid this outside Excel is sketched below.
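If you work outside Excel, the same pitfall is easy to avoid by forcing every column to be read as text; this is a hedged aside, not part of the official import guidance, and it assumes the file and column names used elsewhere in this deck.

```python
import pandas as pd

# dtype=str keeps age bands such as "10-14" as text instead of letting them be coerced.
fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", dtype=str)
print(fv["Age"].dropna().unique()[:10])   # bands print exactly as stored, e.g. "10-14"
```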

STRETCH

Part II: Hands-On Practice. In order to prevent participants from looking at the answers before attempting the advanced scenario questions on their own, you may want to provide a separate Excel file that contains only the raw data; later you can provide the entire workbook so that users can check their answers. In week 2 of PALS Training we chose to break this section into two different groups/rooms: 1) beginners who were not very comfortable using Pivot Tables, and 2) advanced Excel users who were comfortable with Pivot Tables. Participants in the beginner group opened the dataset and the "Beginner's Guide to Using FV Datasets" document. They went through the exercises in the document and facilitators were in the room to answer questions, monitor progress, and assist participants. After finishing, participants were given the advanced group's exercise and encouraged to continue practicing with it later on. The presenter also presented the two slides in the "Pivot Tables" section. The advanced group skipped the beginner's guide. Instead, they opened the dataset and did the "Pivot Tables" and "Guided Walkthrough Scenario – Advanced Group" sections together. They then went through the rest of the "Advanced Scenario Questions" on their own (with assistance from facilitators as needed). Both the beginner and advanced groups stopped their exercises 5–10 minutes before the end of the session. During the last few minutes, the presenter covered information from the "Analytic Dataset Best Practices" and "Data Validation" sections of this PowerPoint (see below).

Pivot Tables

Why Pivot Tables Quick to create - you can build a pivot table in about one minute A pivot table can automatically sort, count, total or average the data stored in one table or spreadsheet, displaying the results in a second table showing the summarized data Drill down to see (or extract) the data in any portion of your table Can easily be refreshed by updating the source data Skip this slide if you are working with an advanced group

Creating a Pivot Table with Fact View Datasets. Two fields should always be included in any pivot table (as either a filter, slicer, row, or column) or chart: Indicator, and Disaggregate or StandardizedDisaggregate. In almost all cases, you should also include NumeratorDenom. The other variables you include will depend on your analytic questions. **This slide should be presented to both the beginner and advanced groups.** This slide is critically important for all Fact View users. The only two times you would not need to include the NumeratorDenom variable in your pivot table or chart are when: you have already filtered down to only "total numerator" or "total denominator" as the disaggregate values you're looking at (in which case the NumeratorDenom field becomes redundant), or you are analyzing an indicator that only has a numerator (i.e., there is no denominator for the indicator, as for VMMC_CIRC). Even in these two instances, it doesn't hurt to include NumeratorDenom as a filter, so we recommend including it just to be safe. Make including NumeratorDenom in your pivot table a habit – that way you will never accidentally make a mistake! *In old releases of the Fact View datasets you also had to include isMCAD as a filter; in the current Fact View Datasets this is no longer necessary (though it does not hurt to include isMCAD as a filter variable). A pandas analogue of this rule is sketched below.
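The sketch below is a hedged pandas analogue of this rule, assuming the column names from the data dictionary and example filter values: every summary pins down Indicator, a disaggregate column, and numeratorDenom before any values are summed.

```python
import pandas as pd

fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", low_memory=False)

# Pin down indicator, numerator/denominator, and a single disaggregate first...
tx_new = fv[
    (fv["Indicator"] == "TX_NEW")
    & (fv["numeratorDenom"] == "N")
    & (fv["standardizedDisaggregate"] == "Total Numerator")
]

# ...then summarize; only now is it safe to sum the value columns.
pivot = tx_new.pivot_table(index="PSNU",
                           values=["FY2017Q1", "FY2017Q2", "FY2017Q3"],
                           aggfunc="sum")
print(pivot.head())
```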

Guided Walkthrough Scenario – Advanced Group. This section is a walkthrough that you can do with an advanced group before they start independently working on their advanced scenario/exercise questions. If you are working with a beginner group, just have them work through the Beginner's Guide to Using FV Datasets instead.

Walkthrough Scenarios. When recording each answer, please note three additional pieces of information: which dataset you used; which field names you included in your pivot (columns, rows, filters); and what calculations/formulas were needed.

Walkthrough Scenarios. Scenario 1: Congratulations! You have just completed a fellowship at PEPFAR Tanzania and have gotten a new job as a medical officer in the C&T branch. On your first day, the PEPFAR coordinator asks you which prioritization SNU has performed "best" at initiating new patients on ART as of Q3, because the team would like to set up a conference call with the C&T branch chief to better understand what programmatic changes were implemented to accomplish these results. To help determine this, please use the Fact View analytic dataset to list the sub-national unit that had the "best results" for TX_NEW at Q3 using these criteria:

Question 1 Percent achievement of TX_NEW target (FY17 Q1+Q2+Q3 TX_NEW_NUM result / FY17 TX_NEW_NUM target)

Question 1

Question 1 Configure your filters and/or slicers

Insert Calculated Fields

Question 1 - Answer
District | fy2017 targets | fy2017 q1 | fy2017 q2 | fy2017 q3 | Percent Achievement
Kakonko DC | 34 | 129 | 74 | 67 | 794.1%
Korogwe DC | 421 | 2322 | 221 | 187 | 648.5%
Biharamulo DC | 287 | 432 | 361 | 379 | 408.4%

Question 2: Largest absolute increase in volume for Q3 (TX_NEW_NUM Q3 – TX_NEW_NUM Q2)

Insert Calculated Fields Question 2

Question 2 - Answer
District | fy2017 targets | fy2017 q1 | fy2017 q2 | fy2017 q3 | Percent Achievement | Absolute Volume
Temeke MC | 24838 | 2827 | 3775 | 4151 | 43.29% | 376
Geita DC | 8895 | 846 | 822 | 1117 | 31.31% | 295
Mbongwe DC | 332 | 451 | 254 | 504 | 364.16% | 250

Question 3 Relative increase in TX_NEW compared to FY16 (FY17 TX_NEW_NUM Q1+Q2+Q3/ TX_NEW_NUM FY16 APR)

Insert Calculated Fields

Question 3 - Answer
District | fy2017 targets | fy2016 APR | fy2017 q1 | fy2017 q2 | fy2017 q3 | Percent Achievement | Absolute Volume | Relative Increase
Korogwe DC | 421 | 725 | 2322 | 221 | 187 | 648.46% | -34 | 376.55%
Kasulu DC | 64 | 69 | 75 | 79 | 51 | 320.31% | -28 | 297.10%
Wete | 16 | 10 | 12 | 9 | 7 | 175.00% | -2 | 280.00%
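For reference, the sketch below reproduces the three walkthrough calculations (percent achievement, absolute Q2-to-Q3 increase, and relative increase over the FY16 APR) as pandas formulas instead of Excel calculated fields. File and column names are assumptions based on the data dictionary; the Excel workbook answers above remain the authoritative versions.

```python
import pandas as pd

fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", low_memory=False)

tx_new = fv[
    (fv["Indicator"] == "TX_NEW")
    & (fv["numeratorDenom"] == "N")
    & (fv["standardizedDisaggregate"] == "Total Numerator")
]

p = tx_new.pivot_table(index="PSNU", aggfunc="sum",
                       values=["FY2017_TARGETS", "FY2016APR",
                               "FY2017Q1", "FY2017Q2", "FY2017Q3"])

ytd = p["FY2017Q1"] + p["FY2017Q2"] + p["FY2017Q3"]
p["percent_achievement"] = ytd / p["FY2017_TARGETS"]      # Question 1
p["absolute_increase"] = p["FY2017Q3"] - p["FY2017Q2"]    # Question 2
p["relative_increase"] = ytd / p["FY2016APR"]             # Question 3
print(p.sort_values("percent_achievement", ascending=False).head())
```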

Advanced Breakout Exercises

Scenario 2 Now that you have survived your first week on the job, you have to learn more about the HTS program. Please use the Fact View analytic dataset to identify the PSNUs that had the “best results” for testing at Q3 using these criteria:

Scenario 2 - Questions: 1. Largest absolute increase in volume of positives (HTS_TST_POS Q3 – HTS_TST_POS Q2). 2. In this PSNU, what was the yield for MobileMod in Q1, Q2, and Q3 (Yield = HTS_TST_POS / (HTS_TST_POS + HTS_TST_NEG))? (Hint: Use the StandardizedDisaggregate and Modality fields (MobileMod) to simplify the pivot table.) 3. In this PSNU, what was the volume for MobileMod in Q1, Q2, and Q3 (HTS_TST_POS)? Did this represent an increase or decrease over time? (Hint: Use the Modality field.) 4. In this PSNU, which facility-based service delivery modality (SDM) and which community-based service delivery modality saw the largest increase in absolute volume of positives from Q2 to Q3 (HTS_TST_POS)? (Hint: Community modalities end in "Mod".)

Scenario 3 You are asked to investigate the changes in TX_CURR by Implementing Mechanism (IM) and differences between HTS_TST_POS and TX_NEW for each IM to have better context for your meetings with implementing partners. For this PSNU please use the Fact View analytic dataset to evaluate these questions:

Scenario 3 – Questions: 1. Which IM had the largest increase in the number of patients on ART between Q2 and Q3 (i.e., TX_NET_NEW FY17 = TX_CURR_NUM Q3 FY17 – TX_CURR_NUM Q2 FY17)? Record the IM name and absolute volume in your answer. 2. For this IM, what was TX_NEW for Q2 and Q3? 3. For this IM, how many new positives (HTS_TST_POS) were identified in Q2 and Q3, and what is the difference between TX_NEW and HTS_TST_POS in each quarter? 4. Which IM had the largest absolute increase in volume of positives for Q3 (HTS_TST_POS Q3 – HTS_TST_POS Q2)? Record the number of positives for both quarters and the difference. 5. For that IM, what was TX_NEW for Q2 and Q3, and what is the difference between TX_NEW and HTS_TST_POS in each quarter? 6. What was the increase in the number of patients on ART between Q2 and Q3 (i.e., TX_NET_NEW FY17 = TX_CURR_NUM Q3 FY17 – TX_CURR_NUM Q2 FY17)?

Analytic Dataset Best Practices Use the last 5-10 minutes of your session to go over the next two sections with participants. Answer any questions, take comments, and discuss how people plan to use the Fact View Datasets in their work in the future.

Which Dataset is Right for Me? What level of granularity is needed? Don't automatically go down to the Site x IM dataset! What software will you use? The PSNU, PSNU x IM, and some Site x IM datasets are too large to open in Excel. Do you need to look at funding agency or IM? These are not included in the PSNU dataset. Do you need to look at multiple OUs? The Site x IM dataset is split into separate files for each OU, which doesn't facilitate analysis across multiple OUs. Five datasets are available – OU x IM, PSNU, PSNU x IM, Site x IM, and IMPATT/NAT/SUBNAT. Which one is right for your analytic questions?

Data Validation

Pre-Analysis Sensibility Checks Before doing analysis, check to see if your data makes sense! Filters and/or pivot tables can help you review data Ask questions like: Are there missing/null values where I’d expect there to be data? Are there odd/strange values where I wouldn’t expect them? Are values significantly higher or lower than I’d expect? How complete is the disaggregate I was going to analyze?
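A minimal sketch of these checks for one indicator, assuming the file and column names from the data dictionary; the thresholds shown are illustrative only.

```python
import pandas as pd

fv = pd.read_csv("FactView_PSNU_IM_Training_Dataset.txt", sep="\t", low_memory=False)
Q3 = "FY2017Q3"

hts = fv[(fv["Indicator"] == "HTS_TST") & (fv["numeratorDenom"] == "N")]

# How complete is each disaggregate this quarter?
print(hts.groupby("standardizedDisaggregate")[Q3].agg(["count", "sum"]))

# Negative values (expected only for dedup mechanisms) or implausibly large results.
print(hts.loc[hts[Q3] < 0, ["PSNU", "MechanismID", Q3]].head())
print(hts.loc[hts[Q3] > 100_000, ["PSNU", "MechanismID", Q3]].head())
```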

Conclusion Analysis at the site level must be done carefully Before making programmatic decisions, need to have an understanding of potential data quality issues Site X IM dataset is most appropriate for site-level analyses only When interested in PSNU analyses, use… PSNU x IM Fact View dataset PSNU Fact View dataset Panorama Final.datim DATIM Genie Always start with a properly framed analytic question.

Validate Your Work Best practice – when you create analyses using Fact View Datasets, it is good to double check that you have correctly structured your tables/filters/formulas Use Panorama (or DATIM) as a tool to QC your work – especially when new to using Fact View Datasets

Tools and References. For questions about the Fact View Datasets: see the User's Guide and Data Dictionary, Release Notes, and training materials found in the ICPI Data Store folder on PEPFAR.net; contact your SI Advisor; or contact ICPI (ICPI@State.gov).

How will you use the Fact View datasets in your work?

QUESTIONS?

Reference Slides These reference slides could be used to customize the presentation to your group, depending on needs/interests