Structural Vulnerability Assessments
An example of theory-informed, data-driven research to support the formulation of actionable policy options
March 2014
Approach
- Specify the goal (e.g. to minimize violent conflict) as measured by a third-party measure (Heidelberg's Conflict Barometer) – the target
- Select a broad range of comparable structural indicators suggested to escalate the level of conflict – the drivers
- Compile a structural-indicator data set with country profiles for the world, continent and region over at least a decade – the profiles
- Train and test the model on the historical data, refining and iterating until performance is optimized – the model
- Forecast levels of conflict going into the future, identifying common and country-specific drivers for diagnosis – the "input"
- Formulate options to promote the prevention or de-escalation of conflict (i.e. to promote peace) – the recommended "options"
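The train-and-forecast loop above can be sketched in miniature. The slides do not name a specific statistical method, so the 1-nearest-neighbour rule, the `predict` helper and the toy numbers below are purely illustrative: each country "profile" is a vector of structural indicators, and the forecast is the conflict level of the most similar historical profile.

```python
import math

def predict(train, profile):
    """Assign the conflict level of the most similar historical profile.

    train: list of (indicator_vector, conflict_level) pairs.
    profile: indicator vector for the country-year to forecast.
    """
    nearest = min(train, key=lambda row: math.dist(row[0], profile))
    return nearest[1]

# Historical (profile, conflict level) pairs -- illustrative numbers only.
history = [((0.2, 0.9), "low"), ((0.8, 0.1), "high"), ((0.7, 0.2), "high")]
print(predict(history, (0.75, 0.15)))  # prints "high"
```

In practice the deck calls for a method tolerant of collinearity, noise and missing values, with performance tracked on held-out historical data; this sketch only shows the shape of the profiles-to-forecast step.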
Assumptions & Requirements
- The choice of target measure is guided by the institution's mandate
- History, continuity and learning:
– The model can be trained on the association of past profiles and levels of conflict, with measurable performance, but
– Beware of novelties and discontinuities
- The statistical method must be tolerant of indicator collinearity, data noise and missing values
- Tools must be customizable, interactive and able to track performance metrics:
– Recall = TP / (TP + FN) – most important for early intervention
– Precision = TP / (TP + FP)
– F score = harmonic mean of recall & precision – best overall measure
(TP = true positives, FP = false positives, TN = true negatives, FN = false negatives)
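The three performance metrics named above are standard and can be computed directly from the confusion-matrix counts; the function names below are my own, but the formulas are the ones stated on the slide.

```python
def recall(tp, fn):
    # Recall = TP / (TP + FN): share of actual conflict cases the model caught.
    # Most important for early intervention (a missed conflict is costly).
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp, fp):
    # Precision = TP / (TP + FP): share of predicted conflicts that were real.
    return tp / (tp + fp) if (tp + fp) else 0.0

def f_score(tp, fp, fn):
    # F score: harmonic mean of recall and precision (best overall measure).
    r, p = recall(tp, fn), precision(tp, fp)
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Illustrative counts: 8 conflicts caught, 2 missed, 4 false alarms.
print(recall(8, 2), precision(8, 4), f_score(8, 4, 2))
```

With these counts, recall is 0.8, precision is 8/12 ≈ 0.667, and the F score is 8/11 ≈ 0.727, sitting between the two as a harmonic mean should.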
Data Criteria (in descending order of importance)
- Mandated focus & scope (e.g. gender)
- Available: at least 50% non-missing data
- Actionable (e.g. terrain versus education)
- Theory-informed (linkage to the target variable)
- Open source, with regular updates
- Comparable: national, continental & global
- Replicable, with full documentation
- Historical: a decade plus for training
Data Sources: Preferred, Usable and Others
- Member states – preferred
- International governmental organizations – AU, RECs, UN, World Bank, AfDB, ILO, etc. (especially African)
- Nongovernmental organizations:
– Academic centers (e.g. Uppsala, Heidelberg, George Mason)
– Research units (e.g. CRED, IISS, SIPRI)
– Advocacy (World Economic Forum, Freedom House)
- Ad hoc and internally generated, particularly intra-regional, for example:
– Informal cross-border trade, internally generated
– Country-specific measures limited to a region
Data Challenges: Missing Data and Alternate Measures
- 5,814 indicators across five databases (Africa, Development, Education, Gender & HPN), but only 4,485 unique indicators
- 1,940 indicators with more than 50% non-missing data (only 43% usable for global comparisons)
- With numerous variations of each:
– 219 individual measures with a GDP component
– 98 individual measures on school enrollment
– 21 measures on education (mostly spending)
- Based on June 2012 World Bank data
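The 50%-non-missing screen described above is easy to sketch. The helper names and the toy indicator values below are hypothetical; only the threshold comes from the slides.

```python
def non_missing_share(values):
    """Fraction of entries that are present (None marks a missing value)."""
    return sum(v is not None for v in values) / len(values)

def usable(indicators, threshold=0.5):
    """Keep indicators whose non-missing share meets the threshold."""
    return [name for name, vals in indicators.items()
            if non_missing_share(vals) >= threshold]

# Illustrative mini-dataset: one well-covered and one sparse indicator.
data = {
    "gdp_per_capita": [1.0, 2.0, None, 3.0],   # 75% non-missing -> kept
    "gini":           [None, None, None, 0.4], # 25% non-missing -> dropped
}
print(usable(data))  # prints ['gdp_per_capita']
```

Applied across a full country-by-year panel, this is the step that reduced 4,485 unique indicators to the 1,940 usable ones reported on the slide.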
Data Challenges: Two More Examples
- Unemployment and employment:
– 56 indicators on unemployment, by age, education, gender & sector, but none with more than 46% non-missing data
– 145 indicators on employment, 15 of which have more than 78% non-missing data
- Distribution of income (HDI as an alternative):
– 22 GINI-related measures, none with more than 14% non-missing data
– 12 income-share-related measures, none with more than 14% non-missing data
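The fallback logic implied here, using HDI when every GINI variant falls below the coverage threshold, can be sketched as a small selection rule. The function name and the coverage numbers are illustrative, not the actual World Bank figures.

```python
def pick_measure(variants, fallback, threshold=0.5):
    """Choose the best-covered variant of a measure, or the fallback if
    none meets the non-missing coverage threshold."""
    name, share = max(variants.items(), key=lambda kv: kv[1])
    return name if share >= threshold else fallback

# All GINI variants are below 50% coverage (the slide reports <= 14%),
# so the rule falls back to HDI.
gini_variants = {"gini_v1": 0.14, "gini_v2": 0.12}
print(pick_measure(gini_variants, "hdi"))  # prints "hdi"
```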
Analysis Process: A Transparent & Participatory Approach
- Engage stakeholders in discussions of the target, indicators & data sources, with several iterations of each:
– Test and re-test both DV and IV operationalizations
– Solicit regional experts' ideas on the indicators
– Explore national sources first, then IGOs and others
- Communicate progress on intermediate results, especially on their communication & ultimate use:
– Ongoing collaboration, review & validation (national and IGO officials, regional experts and academics) is required
– Focus on drivers, both common and country-specific
– Calculate discrimination value & distributions
– Emphasize interpretation & formulation of options
Interpretation: Guidelines
Applies to both common & country-specific drivers. Consider:
- Performance metrics
- Distribution
- Triangulation
- Missing drivers (if an indicator is missing, it cannot discriminate)
- Indicator utility in specific contexts, including:
– Indicator relevance, and
– A "smell test" of intuition and grounded expertise
Formulation of Options: Guidelines
- Within the institutional mandate
- Actionable
- Prevention and/or mitigation
- Consider intermediate requirements
- Define progress milestones
- Specify linkage and expected outcomes