Big Data, Education, and Society

Big Data, Education, and Society April 18, 2018

Assignment 3 Any questions about assignment 3?

Assignment 4 Any questions about assignment 4?

Discrimination and the Perpetuation of Bias

Algorithmic Bias (Garcia, 2016) Nothing to do with the bias-variance trade-off

Algorithmic Bias (Garcia, 2016) When a prediction model discovers bias in existing decision-making and then perpetuates/systematizes/extends it

Example (Garcia, 2016)

Example (Corbett-Davies et al., 2017) For example, when ProPublica examined computer-generated risk scores in Broward County, Fla., in 2016, it found that black defendants were substantially more likely than whites to be rated a high risk of committing a violent crime if released. Even among defendants who ultimately were not re-arrested, blacks were more likely than whites to be deemed risky. These results elicited a visceral sense of injustice and prompted a chorus of warnings about the dangers of artificial intelligence.
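
To make concrete what "more likely to be deemed risky among those not re-arrested" measures, here is a minimal sketch of that disparity check (toy data and hypothetical column names; this is not ProPublica's actual analysis code):

```python
# Minimal sketch (toy data, hypothetical column names; not ProPublica's
# actual analysis): compare how often defendants who were NOT re-arrested
# were nonetheless rated high risk, per racial group.
import pandas as pd

def false_positive_rates(df: pd.DataFrame) -> pd.Series:
    """P(rated high risk | not re-arrested), computed per group."""
    not_rearrested = df[df["rearrested"] == 0]
    return not_rearrested.groupby("race")["high_risk"].mean()

# Invented rows that mimic the shape of the pattern ProPublica reported.
df = pd.DataFrame({
    "race":       ["black"] * 4 + ["white"] * 4,
    "rearrested": [0, 0, 0, 1, 0, 0, 0, 1],
    "high_risk":  [1, 1, 0, 1, 0, 0, 1, 1],
})
print(false_positive_rates(df))
# A persistent gap between these group rates, at scale, is the disparity
# the quote above describes.
```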

Comments? Questions?

Corbett-Davies et al. (2017) argument Yet those results don’t prove the algorithm itself is biased against black defendants — a point we’ve made previously, including in peer-reviewed research. The Broward County classifications are based on recognized risk factors, like a documented history of violence. The classifications do not explicitly consider a defendant’s race. Because of complex social and economic causes, black defendants in Broward County are in reality more likely than whites to be arrested in connection with a violent crime after release, and so classifications designed to predict such outcomes necessarily identify more black defendants as risky. This would be true regardless of whether the judgments were made by a computer or by a human decision maker. It is not biased algorithms but broader societal inequalities that drive the troubling racial differences we see in Broward County and throughout the country. It is misleading and counterproductive to blame the algorithm for uncovering real statistical patterns. Ignoring these patterns would not resolve the underlying disparities.
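
The statistical claim here is that a score can treat identical risk levels identically and still flag the higher-base-rate group more often, producing more false positives for that group. A small simulation with invented numbers (not the Broward County data) illustrates this:

```python
# Sketch with invented numbers (not the Broward County data): a score that
# is calibrated by construction, applied at one threshold to two groups
# with different base rates of the predicted outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(group_mean_risk: float):
    """Draw true risks around a group mean; the score IS the true risk,
    so identical risk levels are always treated identically."""
    risk = np.clip(rng.normal(group_mean_risk, 0.15, n), 0.01, 0.99)
    reoffends = rng.random(n) < risk      # realized outcomes
    flagged = risk > 0.5                  # same threshold for everyone
    fpr = flagged[~reoffends].mean()      # P(flagged | did not re-offend)
    return flagged.mean(), fpr

for label, mean_risk in [("higher base-rate group", 0.45),
                         ("lower base-rate group", 0.30)]:
    flag_rate, fpr = simulate(mean_risk)
    print(f"{label}: flagged {flag_rate:.1%}, false positive rate {fpr:.1%}")
# Same score, same threshold, no group variable anywhere in the model,
# yet unequal flagging rates and unequal false positive rates.
```

This is the sense in which the error-rate disparities ProPublica measured can coexist with a classifier that, as Corbett-Davies et al. argue, is not itself treating equally risky defendants differently.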

What are your thoughts on this argument?

What could be solutions to preventing algorithmic bias?

What could be solutions to preventing algorithmic bias?
Avoid using demographic variables as predictors (GDPR) - see the sketch below
Diversity in the people developing models (Garcia, 2016)
Predict outcomes that are less biased – the example given is violent crime (less biased) rather than all crime (more biased) (Corbett-Davies et al., 2017)
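
A minimal sketch of the first idea, assuming hypothetical field names (and noting, per the Corbett-Davies et al. excerpt above, that removing explicit variables does not remove correlated patterns):

```python
# Minimal sketch of the first idea (hypothetical field names): drop
# demographic columns before model fitting. Note this removes only the
# explicit variables; correlated proxies may remain in other features.
import pandas as pd

PROTECTED = ["race", "gender", "age"]  # assumed demographic fields

def strip_protected(features: pd.DataFrame) -> pd.DataFrame:
    """Return the feature table without explicit protected attributes."""
    return features.drop(columns=[c for c in PROTECTED if c in features.columns])
```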

What could be solutions to preventing algorithmic bias?
“Right to explanation” (GDPR)
Inspect model outcomes for bias
Do visibly biased outcomes result? (Garcia, 2016)
Is the model more biased than its training data? (see the audit sketch below)
Openness as to model internals and predictions (Garcia, 2016)
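
One of these checks, whether the model is more biased than its training data, can be sketched roughly as below (hypothetical column names; this is not drawn from any of the cited papers):

```python
# Rough sketch of the "is the model more biased than its training data?"
# check (hypothetical column names; a real audit would add confidence
# intervals and more careful fairness metrics).
import pandas as pd

def disparity(rates_by_group: pd.Series) -> float:
    """Gap between the highest and lowest group rates."""
    return float(rates_by_group.max() - rates_by_group.min())

def audit(train: pd.DataFrame, preds: pd.Series) -> None:
    """Compare label disparity in the training data with the disparity
    in the model's predictions on that same data."""
    label_gap = disparity(train.groupby("group")["label"].mean())
    pred_gap = disparity(train.assign(pred=preds)
                              .groupby("group")["pred"].mean())
    print(f"label disparity in training data:  {label_gap:.3f}")
    print(f"prediction disparity of the model: {pred_gap:.3f}")
    if pred_gap > label_gap:
        print("The model amplifies the disparity in its training data.")
```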

Which of these solutions are practically feasible in learning analytics? And which aren’t? Why/why not?

Open University UK situation

Open University UK situation How should they have addressed the problem?

Assemble into your *assignment* groups

Assemble into your *assignment* groups
How could your proposed innovation become problematic or damaged due to algorithmic bias, either actual or perceived?
What, if anything, could you do to mitigate that risk?

Be ready to present (briefly) to the class
What is your innovation?
What is the risk of algorithmic bias?
What steps could you take to address it?

Now present to the class
Do you think this is the biggest risk of algorithmic bias?
Do you have any concerns that their solution will not work?
After we complete critique in terms of these…
Do you have other potential solutions?

Upcoming office hours 4/25, 9:30am-10:30am